Research article · Open access
DOI: 10.1145/3620665.3640414

TGLite: A Lightweight Programming Framework for Continuous-Time Temporal Graph Neural Networks

Published: 27 April 2024

Abstract

In recent years, Temporal Graph Neural Networks (TGNNs) have achieved great success in learning tasks for graphs that change over time. These dynamic/temporal graphs represent topology changes either as discrete static graph snapshots (called DTDGs) or as a continuous stream of timestamped edges (called CTDGs). Because continuous-time graphs carry richer temporal information, abstractions for programming CTDG-based models are crucial for letting practitioners easily explore new designs and optimizations in this space. A few recent frameworks have been proposed for programming and accelerating TGNN models, but they either do not support continuous-time graphs, lack easy composability, or do not facilitate CTDG-specific optimizations.
In this paper, we propose a lightweight framework called TGLite to fill this gap. It provides abstractions that serve as composable building blocks for implementing TGNN models for CTDGs. It introduces a novel TBlock representation that captures message-flow dependencies between nodes, with explicit support for temporal attributes, and is well-suited to common TGNN computation patterns. TBlocks serve as a central representation on which many different operators can be defined, such as temporal neighborhood sampling and scatter/segmented computations, as well as optimizations tailored to CTDGs. We use TGLite to implement four existing TGNN models. Compared to the TGL framework, TGLite accelerates training (1.06 -- 3.43×) and inference (1.09 -- 4.65×) of these models on V100 and A100 GPUs across different experimental settings. Notably, when scaling to larger datasets, TGL runs out of memory in some cases on the V100 while TGLite runs successfully.
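To make these computation patterns concrete, below is a minimal sketch in plain PyTorch of two operations the abstract names: time-respecting neighborhood sampling over a stream of timestamped edges, and a scatter/segmented aggregation of per-edge messages into per-node embeddings. The function names and the message layout here are illustrative assumptions, not the TGLite or TBlock API.

```python
# Illustrative sketch (plain PyTorch) of two CTDG patterns named in the
# abstract; function names and message layout are hypothetical, NOT the
# TGLite/TBlock API.
import torch

def recent_neighbors(dst: torch.Tensor, t: torch.Tensor,
                     node: int, t_query: float, k: int) -> torch.Tensor:
    """Temporal neighborhood sampling: indices of the k most recent edges
    into `node` that occurred strictly before `t_query`."""
    idx = ((dst == node) & (t < t_query)).nonzero(as_tuple=True)[0]
    order = t[idx].argsort(descending=True)
    return idx[order[:k]]

def segmented_mean(messages: torch.Tensor, dst: torch.Tensor,
                   num_nodes: int) -> torch.Tensor:
    """Scatter/segmented computation: mean-aggregate per-edge messages
    into per-destination-node embeddings."""
    out = torch.zeros(num_nodes, messages.size(1))
    out.index_add_(0, dst, messages)  # scatter-sum grouped by destination
    counts = torch.zeros(num_nodes).index_add_(0, dst, torch.ones(len(dst)))
    return out / counts.clamp(min=1).unsqueeze(1)

# A CTDG batch: a stream of timestamped edges (src, dst, t).
src = torch.tensor([0, 1, 2, 0])
dst = torch.tensor([1, 2, 1, 2])
t   = torch.tensor([1.0, 2.0, 3.0, 4.0])
feats = torch.randn(3, 4)  # 3 nodes, 4-dim features

# Per-edge message: source feature concatenated with a (trivial) time code.
msg = torch.cat([feats[src], t.unsqueeze(1)], dim=1)
print(recent_neighbors(dst, t, node=1, t_query=4.0, k=2))  # tensor([2, 0])
print(segmented_mean(msg, dst, num_nodes=3).shape)         # torch.Size([3, 5])
```

In TGLite itself, the abstract indicates these patterns are provided as operators defined over TBlocks rather than written by hand, which is what enables the CTDG-specific optimizations it describes.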


Published In

ASPLOS '24: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2
April 2024, 1299 pages
ISBN: 9798400703850
DOI: 10.1145/3620665
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. temporal graph neural networks
  2. dynamic graphs
  3. programming framework
  4. data abstractions


Conference

ASPLOS '24

Acceptance Rates

Overall acceptance rate: 487 of 2,426 submissions (20%)

