DOI: 10.1145/3604078.3604157
Research article

Efficient and High-Quality Black-Box Face Reconstruction Attack

Published: 26 October 2023

Abstract

Recent studies have shown that machine learning as a service is susceptible to reconstruction attacks that expose users' privacy. For face recognition systems, privacy leakage can have unpredictable consequences for confidential or financial matters. This paper focuses on black-box reconstruction attacks on face recognition systems, covering both face verification and face identification applications. In a real-world face recognition system, the adversary has access only to the similarity score of a query against a given ID. We build our method on a pretrained StyleGAN face generator and optimize its latent code to obtain a face image that maximizes the similarity score. Given the similarity score alone, we adopt zeroth-order optimization to estimate the gradients faithfully. Our method attacks 1:1 verification systems efficiently and with high reconstruction quality. To validate the effectiveness of the proposed method, we perform extensive experiments on face recognition systems and reveal their risks of privacy leakage.
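The optimization loop the abstract describes, estimating gradients from score queries alone, can be sketched as follows. This is an illustrative reconstruction of the zeroth-order idea, not the authors' code: `score_fn` stands in for the black-box similarity API, `z0` for the generator's latent code, and the hyperparameter names and values (`mu`, `lr`, `n_samples`) are assumptions for a toy setting.

```python
import numpy as np

def zeroth_order_ascent(score_fn, z0, steps=300, mu=0.1, lr=0.05,
                        n_samples=10, seed=0):
    """Maximize a black-box score via two-point zeroth-order gradient estimates.

    score_fn  : black-box similarity score -- the only feedback available.
    z0        : initial latent code (would be a StyleGAN latent in the paper's setting).
    mu        : smoothing radius for the finite-difference probe.
    lr        : step size for gradient ascent.
    n_samples : random probe directions averaged per step.
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(steps):
        grad = np.zeros_like(z)
        for _ in range(n_samples):
            u = rng.standard_normal(z.shape)
            # Two-point estimate: (f(z + mu*u) - f(z - mu*u)) / (2*mu) * u
            delta = score_fn(z + mu * u) - score_fn(z - mu * u)
            grad += (delta / (2.0 * mu)) * u
        grad /= n_samples
        z += lr * grad  # ascend the estimated gradient of the similarity score
    return z
```

On a toy score such as the negative squared distance to a hidden target vector, the loop recovers an approximation of the target using only score queries, which is the core mechanism behind reconstructing a face from similarity feedback.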




Published In

ICDIP '23: Proceedings of the 15th International Conference on Digital Image Processing
May 2023
711 pages
ISBN:9798400708237
DOI:10.1145/3604078

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. reconstruction attack
  2. black-box
  3. face recognition
  4. zeroth-order optimization

Qualifiers

  • Research-article
  • Research
  • Refereed limited
