DOI: 10.1145/3589334.3645411
Research Article

Dynamic Graph Information Bottleneck

Published: 13 May 2024

Abstract

Dynamic graphs are ubiquitous in the real world and carry complicated spatial and temporal feature patterns, which makes their representation learning challenging. Dynamic Graph Neural Networks (DGNNs) have shown impressive predictive abilities by exploiting the intrinsic dynamics. However, DGNNs exhibit limited robustness and are prone to adversarial attacks. This paper presents the novel Dynamic Graph Information Bottleneck (DGIB) framework to learn robust and discriminative representations. Leveraging the Information Bottleneck (IB) principle, we first propose that the expected optimal representations should satisfy the Minimal-Sufficient-Consensual (MSC) Condition. To compress redundant information while conserving meritorious information in the latent representation, DGIB iteratively directs and refines the structural and feature information flow passing through the graph snapshots. To meet the MSC Condition, we decompose the overall IB objective into DGIB_MS and DGIB_C, in which the DGIB_MS channel aims to learn the minimal and sufficient representations, while the DGIB_C channel guarantees the predictive consensus. Extensive experiments on real-world and synthetic dynamic graph datasets demonstrate the superior robustness of DGIB against adversarial attacks compared with state-of-the-art baselines on the link prediction task. To the best of our knowledge, DGIB is the first work to learn robust representations of dynamic graphs grounded in the information-theoretic IB principle.
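As a reading aid, the display below sketches the general shape of an information-bottleneck objective split into the two channels the abstract names: a compression-versus-prediction trade-off (minimal and sufficient) plus a consensus regularizer. The notation is illustrative only; the snapshot sequence G^{1:t}, the future prediction target Y^{t+1}, the trade-off weights β_1 and β_2, and the consensus penalty Ω are assumptions made for exposition and need not match the paper's actual formulation.

% Illustrative notation only: G^{1:t}, Y^{t+1}, beta_1, beta_2, and Omega are
% exposition-level placeholders, not the notation or objective used in the paper.
\[
\min_{q(\mathbf{Z}^{t}\,\mid\,\mathcal{G}^{1:t})}\;
\underbrace{-\,I\!\left(\mathbf{Z}^{t};\,\mathbf{Y}^{t+1}\right)
\;+\; \beta_{1}\, I\!\left(\mathbf{Z}^{t};\,\mathcal{G}^{1:t}\right)}_{\text{minimal and sufficient channel (DGIB}_{MS}\text{)}}
\;+\;
\underbrace{\beta_{2}\,\Omega\!\left(\mathbf{Z}^{1},\dots,\mathbf{Z}^{t}\right)}_{\text{predictive consensus channel (DGIB}_{C}\text{)}}
\]

The first bracketed part is the classical IB trade-off: keep the representation informative about the prediction target while compressing its dependence on the input snapshots. Ω stands in for whatever term enforces agreement among the snapshot-wise representations; the paper's concrete objective, variational bounds, and estimators may differ in detail.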

Supplemental Material

MP4 File: presentation video
MP4 File: supplemental video




Published In

WWW '24: Proceedings of the ACM Web Conference 2024
May 2024
4826 pages
ISBN:9798400701719
DOI:10.1145/3589334
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 13 May 2024


Author Tags

  1. dynamic graph neural networks
  2. information bottleneck
  3. robust representation learning

Qualifiers

  • Research-article

Conference

WWW '24: The ACM Web Conference 2024
May 13-17, 2024
Singapore, Singapore

Acceptance Rates

Overall acceptance rate: 1,899 of 8,196 submissions (23%)


Article Metrics

  • Downloads (last 12 months): 275
  • Downloads (last 6 weeks): 32

Reflects downloads up to 22 Dec 2024
