Adversarial Attack on Graph Structured Data

Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1115-1124, 2018.

Abstract

Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the extensive research on adversarial attack and defense for images and text. In this paper, we focus on adversarial attacks that fool deep learning models by modifying the combinatorial structure of the data. We first propose a reinforcement learning based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent for scenarios where prediction confidence or gradients are additionally available. Using both synthetic and real-world data, we show that a family of Graph Neural Network models is vulnerable to these attacks in both graph-level and node-level classification tasks. We also show that such attacks can be used to diagnose the learned classifiers.
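To make the structural-attack setting concrete, the toy sketch below mounts a greedy single-edge-flip attack against a hand-rolled one-layer linear GNN surrogate: it scores every possible edge addition/deletion by how much it shrinks the classifier's margin and applies the most damaging flip. The graph, features, and "trained" weights are all invented for illustration; this is a minimal white-box sketch, not the paper's RL-S2V, genetic, or gradient-based algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes on a cycle, random node features, and fixed weights
# standing in for a trained one-layer GNN classifier (2 classes).
n, d, c = 6, 4, 2
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, c))
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:
    A[i, j] = A[j, i] = 1.0

def margin(A):
    """Class-0-over-class-1 margin of a mean-pooled one-layer GNN:
    logits = mean_nodes( normalize(A + I) @ X @ W )."""
    A_hat = A + np.eye(n)
    deg = A_hat.sum(axis=1, keepdims=True)
    logits = ((A_hat / deg) @ X @ W).mean(axis=0)
    return logits[0] - logits[1]

# Greedy white-box attack: try flipping each node pair (adding the edge if
# absent, deleting it if present) and keep the flip that reduces the margin
# the most.
base = margin(A)
best_flip, best_drop = None, float("-inf")
for i in range(n):
    for j in range(i + 1, n):
        A2 = A.copy()
        A2[i, j] = A2[j, i] = 1.0 - A2[i, j]
        drop = base - margin(A2)
        if drop > best_drop:
            best_flip, best_drop = (i, j), drop

print("most damaging flip:", best_flip, "margin drop:", round(best_drop, 4))
```

This brute-force scoring over single flips is the simplest instance of the combinatorial search the paper tackles; the paper's methods replace it with learned policies, evolutionary search, or gradient information so the attack scales beyond toy graphs.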

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-dai18b,
  title     = {Adversarial Attack on Graph Structured Data},
  author    = {Dai, Hanjun and Li, Hui and Tian, Tian and Huang, Xin and Wang, Lin and Zhu, Jun and Song, Le},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1115--1124},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/dai18b/dai18b.pdf},
  url       = {https://proceedings.mlr.press/v80/dai18b.html},
  abstract  = {Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool deep learning models by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent in the scenario where additional prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.}
}
Endnote
%0 Conference Paper
%T Adversarial Attack on Graph Structured Data
%A Hanjun Dai
%A Hui Li
%A Tian Tian
%A Xin Huang
%A Lin Wang
%A Jun Zhu
%A Le Song
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-dai18b
%I PMLR
%P 1115--1124
%U https://proceedings.mlr.press/v80/dai18b.html
%V 80
%X Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool deep learning models by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent in the scenario where additional prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.
APA
Dai, H., Li, H., Tian, T., Huang, X., Wang, L., Zhu, J. & Song, L. (2018). Adversarial Attack on Graph Structured Data. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1115-1124. Available from https://proceedings.mlr.press/v80/dai18b.html.
