Research Article

Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System

Published: 01 January 2022

Abstract

The security of face recognition systems against attacks is an emerging issue in the cloud era, especially because such attacks are closely tied to the privacy of the users registered in the system. One such attack is the model inversion attack (MIA), which aims to reveal the identity of a targeted user by generating the input datapoint that maximizes the corresponding confidence score at the system output. Data generated in this way for a registered user can be maliciously exploited, constituting a serious invasion of that user's privacy. In the literature, MIA processes are categorized into white-box and black-box scenarios, which respectively do and do not assume knowledge of the system structure and parameters, and partial knowledge of the users. This work assumes a semi-white-box scenario, in which the system model structure and parameters are available but no user data is, and verifies that MIA remains a severe threat even for a deep-learning-based face recognition system, despite its complex structure and the diversity of registered user data. The threat is heightened by Deep MIA, the integration of deep generative models into MIA, and an α-GAN-integrated MIA initialized by a face-based seed (α-GAN-MIA-FS) is proposed. As a novel MIA search strategy, a pre-trained deep generative model capable of generating a face image from a random feature vector is used to narrow the search from image space to the much lower-dimensional feature-vector space. This allows the MIA process to efficiently search for a low-dimensional feature vector whose corresponding face image maximizes the confidence score. We experimentally evaluate the proposed method using two objective criteria and three subjective criteria, in comparison with α-GAN-integrated MIA initialized with a random seed (α-GAN-MIA-RS), DCGAN-integrated MIA (DCGAN-MIA), and the conventional MIA. The evaluation results confirm the efficiency and superiority of the proposed technique in generating natural-looking face clones that are highly recognizable as the targeted users.
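The latent-space search summarized above can be illustrated with a minimal sketch (not the authors' implementation), assuming PyTorch. Here generator and encoder are hypothetical stand-ins for a pre-trained α-GAN generator and encoder, face_recognizer for the attacked semi-white-box face recognition model, and target_id for the class index of the targeted user.

import torch

def latent_space_mia(generator, face_recognizer, target_id,
                     seed_face=None, encoder=None,
                     latent_dim=128, steps=500, lr=0.01):
    # Face-based seed (as in alpha-GAN-MIA-FS): start from the encoding of a real
    # face image; otherwise fall back to a random seed (as in alpha-GAN-MIA-RS).
    if seed_face is not None and encoder is not None:
        z = encoder(seed_face).detach().clone()
    else:
        z = torch.randn(1, latent_dim)
    z.requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        face = generator(z)                 # low-dimensional feature vector -> face image
        logits = face_recognizer(face)      # semi-white-box: gradients are available
        confidence = torch.log_softmax(logits, dim=1)[0, target_id]
        (-confidence).backward()            # gradient ascent on the target's confidence score
        optimizer.step()

    return generator(z).detach()            # reconstructed face of the targeted user

Because the search runs over a low-dimensional latent vector rather than over raw pixels, each step stays inside the generator's manifold of natural-looking faces, which is the key advantage claimed over the conventional pixel-space MIA.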


Published In

IEEE Transactions on Information Forensics and Security, Volume 17, 2022 (1497 pages)

Publisher

IEEE Press
