DOI: 10.1145/3534678.3539242

DICE: Domain-attack Invariant Causal Learning for Improved Data Privacy Protection and Adversarial Robustness

Published: 14 August 2022

Abstract

Adversarial attacks expose the vulnerability of deep models by inducing a domain shift at test time, while delusive attacks address privacy concerns about personal data by injecting malicious noise into the training domain to make the data unexploitable. Despite their successful applications, however, both attacks can be largely defended against by adversarial training (AT). AT is no panacea either: it suffers from poor generalization of robustness. Regarding these limitations of attack and defense, we argue that, to fit the data well, DNNs may learn spurious relations between inputs and outputs; these spurious relations are in turn exploited by attack and defense and degrade their effectiveness, while DNNs cannot easily capture causal relations, as humans do, to make robust decisions under attack. In this paper, to better understand and improve attack and defense, we first take a bottom-up perspective to describe the correlations between latent factors and observed data, then analyze the effect of the attack-induced domain shift on DNNs, and finally develop our causal graph, the Domain-attack Invariant Causal Model (DICM). Based on DICM, we propose a coherent causal invariant principle that guides our algorithm design to infer human-like causal relations. We call our algorithm Domain-attack Invariant Causal Learning (DICE), and experimental results on two attack tasks and one defense task verify its effectiveness.
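To make the ingredients mentioned in the abstract concrete, the following PyTorch sketch illustrates (i) a PGD-style L-infinity perturbation that shifts the test domain, (ii) the same inner optimization with the gradient sign flipped, in the spirit of delusive / error-minimizing training noise, and (iii) one adversarial-training (AT) update step. This is a minimal, generic sketch of the standard attack/defense machinery the paper builds on, not the paper's DICE algorithm or DICM causal model; the function names, model, optimizer, and hyper-parameters (pgd_perturb, adversarial_training_step, eps, alpha, steps) are illustrative placeholders.

import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10, minimize=False):
    # Projected-gradient perturbation within an L-infinity ball of radius eps.
    # minimize=False: error-maximizing noise (test-time adversarial attack).
    # minimize=True : error-minimizing noise, in the spirit of delusive /
    #                 "unlearnable" perturbations injected into training data.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        step = -alpha * grad.sign() if minimize else alpha * grad.sign()
        delta = (delta + step).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    # One AT step: the model is trained on worst-case perturbed inputs
    # (inner maximization) instead of clean ones (outer minimization).
    model.eval()                      # craft the perturbation with frozen BN statistics
    x_adv = pgd_perturb(model, x, y)  # inner maximization
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

The sketch only reproduces the conventional attack/defense pipeline that AT-based defenses rely on; DICE, as described in the abstract, instead aims to learn representations that stay invariant across the domains induced by such attacks.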




Published In

KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
August 2022, 5033 pages
ISBN: 9781450393850
DOI: 10.1145/3534678

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. attack transferability
      2. causal inference
      3. data privacy
      4. robustness

      Qualifiers

      • Research-article

Conference

KDD '22

Acceptance Rates

Overall acceptance rate: 1,133 of 8,635 submissions, 13%


Cited By

• (2024) Unbiased Feature Learning with Causal Intervention for Visible-Infrared Person Re-Identification. ACM Transactions on Multimedia Computing, Communications, and Applications 20(10), 1-20. DOI: 10.1145/3674737
• (2024) CausalPC: Improving the Robustness of Point Cloud Classification by Causal Effect Identification. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 19779-19789. DOI: 10.1109/CVPR52733.2024.01870
• (2024) Inductive link prediction on temporal networks through causal inference. Information Sciences 681:C. DOI: 10.1016/j.ins.2024.121202
• (2023) How re-sampling helps for long-tail learning? Proceedings of the 37th International Conference on Neural Information Processing Systems, 75669-75687. DOI: 10.5555/3666122.3669429
• (2023) Measuring the Effect of Causal Disentanglement on the Adversarial Robustness of Neural Network Models. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 1907-1916. DOI: 10.1145/3583780.3614960
