
Graph Domain Adaptation: A Generative View

Published: 12 January 2024

Abstract

Recent years have witnessed tremendous interest in deep learning on graph-structured data. Because collecting labeled graph-structured data is expensive, domain adaptation is important for supervised graph learning tasks with limited samples. However, current graph domain adaptation methods are generally borrowed from traditional domain adaptation tasks, and the properties of graph-structured data are not well exploited. For example, the observed social networks on different platforms are shaped not only by different crowds or communities but also by domain-specific policies and background noise. Motivated by these properties of graph-structured data, we first assume that the generation process of graph-structured data is controlled by three independent types of latent variables: the semantic latent variables, the domain latent variables, and the random latent variables. Based on this assumption, we propose a disentanglement-based unsupervised domain adaptation method for graph-structured data, which applies variational graph auto-encoders to recover these latent variables and disentangles them via three supervised learning modules. Extensive experiments on two real-world graph classification datasets show that our method not only significantly outperforms traditional domain adaptation methods and disentanglement-based domain adaptation methods but also outperforms state-of-the-art graph domain adaptation algorithms. The code is available at https://github.com/rynewu224/GraphDA.
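To make the generative assumption concrete, below is a minimal, illustrative PyTorch sketch of the kind of model the abstract describes: a VGAE-style encoder that recovers three independent groups of latent variables (semantic, domain, and random) and supervises them with a label classifier, a domain classifier, and an edge-reconstruction decoder. All module names, dimensions, and design choices here are assumptions for illustration only, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch of the disentanglement assumption: a VGAE-style encoder
# with three latent groups (semantic / domain / random) and three supervision
# modules. Names, sizes, and architecture are hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """One graph convolution over a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        adj_hat = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        deg_inv_sqrt = adj_hat.sum(dim=1).clamp(min=1.0).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * adj_hat * deg_inv_sqrt.unsqueeze(0)
        return F.relu(self.lin(norm_adj @ x))


class DisentangledVGAE(nn.Module):
    """Encoder with three latent heads plus three supervision modules."""
    def __init__(self, feat_dim, hid_dim=64, lat_dim=16, n_classes=2, n_domains=2):
        super().__init__()
        self.backbone = DenseGCNLayer(feat_dim, hid_dim)
        # One (mu, logvar) head per latent group: semantic / domain / random.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hid_dim, 2 * lat_dim)
            for name in ("semantic", "domain", "random")
        })
        self.label_clf = nn.Linear(lat_dim, n_classes)    # supervises semantic latents
        self.domain_clf = nn.Linear(lat_dim, n_domains)   # supervises domain latents

    def encode(self, x, adj):
        h = self.backbone(x, adj)
        latents = {}
        for name, head in self.heads.items():
            mu, logvar = head(h).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            latents[name] = (z, mu, logvar)
        return latents

    def forward(self, x, adj):
        latents = self.encode(x, adj)
        z_all = torch.cat([latents[n][0] for n in ("semantic", "domain", "random")], dim=-1)
        adj_logits = z_all @ z_all.t()                    # inner-product edge decoder
        # Graph-level mean pooling before the two classification heads.
        y_logits = self.label_clf(latents["semantic"][0].mean(dim=0, keepdim=True))
        d_logits = self.domain_clf(latents["domain"][0].mean(dim=0, keepdim=True))
        return adj_logits, y_logits, d_logits, latents


if __name__ == "__main__":
    x = torch.randn(10, 8)                   # toy graph: 10 nodes, 8 features
    adj = (torch.rand(10, 10) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()      # make the toy adjacency symmetric
    model = DisentangledVGAE(feat_dim=8)
    adj_logits, y_logits, d_logits, _ = model(x, adj)
    print(adj_logits.shape, y_logits.shape, d_logits.shape)
```

The intuition behind the three supervision modules is that the semantic latents feed the downstream graph classifier, the domain latents absorb platform-specific effects, and the reconstruction term keeps all three groups faithful to the observed graph.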


Published In

ACM Transactions on Knowledge Discovery from Data, Volume 18, Issue 3
April 2024, 663 pages
EISSN: 1556-472X
DOI: 10.1145/3613567

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 January 2024
Online AM: 14 November 2023
Accepted: 22 October 2023
Revised: 19 July 2023
Received: 21 September 2022
Published in TKDD Volume 18, Issue 3

Author Tags

  1. Graph Neural Network
  2. Graph Generative Models
  3. Domain Adaptation

Qualifiers

  • Research-article

Funding Sources

  • National Key R&D Program of China
  • National Science Fund for Excellent Young Scholars
  • Natural Science Foundation of China
  • Science and Technology Planning Project of Guangzhou
  • Guangdong Provincial Science and Technology Innovation Strategy Fund
