
Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation

Published: 04 March 2022

Abstract

Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain. Traditional domain adaptation algorithms assume that sufficient labeled data, treated as prior knowledge, are available in the source domain. However, these algorithms become infeasible when only a few labeled samples exist in the source domain, and their performance degrades significantly. To address this challenge, we propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples. First, DGL introduces the Nyström method to construct a plastic graph that shares similar geometric properties with the target domain. Then, DGL employs the Nyström approximation error to measure the divergence between the plastic graph and the source graph, formalizing the distribution mismatch from a geometric perspective. By minimizing the approximation error, DGL learns a domain-invariant geometric graph that bridges the source and target domains. Finally, we integrate the learned domain-invariant graph with semi-supervised learning and propose an adaptive semi-supervised model to handle cross-domain problems. Extensive experiments on popular datasets verify the superiority of DGL, especially when only a few labeled source samples are available.
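The abstract builds on the Nyström method, which approximates a full kernel (graph affinity) matrix from a small set of landmark samples; the paper uses the resulting approximation error as a divergence measure between graphs. The following is a minimal illustrative sketch of the Nyström approximation and its error, not the authors' implementation — the RBF kernel choice, the landmark count `m`, and `gamma` are all assumptions for demonstration:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel: exp(-gamma * ||x - y||^2) between rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def nystrom_approx(X, m, gamma=1.0, seed=0):
    # Sample m landmark points and form the Nyström approximation
    # K ~= C W^+ C^T of the full n x n kernel matrix.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf_kernel(X, X[idx], gamma)   # n x m cross-kernel with landmarks
    W = C[idx, :]                      # m x m kernel among the landmarks
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).standard_normal((200, 5))
K = rbf_kernel(X, X)                   # exact kernel matrix
K_approx = nystrom_approx(X, m=50)     # low-rank Nyström approximation
err = np.linalg.norm(K - K_approx, "fro")  # Nyström approximation error
```

In the paper's setting, an error of this form is minimized so that the plastic graph built from landmarks stays geometrically consistent with the source graph; here it simply quantifies how well the low-rank approximation recovers the full affinity matrix.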



Published In

ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 18, Issue 3
August 2022
478 pages
ISSN: 1551-6857
EISSN: 1551-6865
DOI: 10.1145/3505208

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 04 March 2022
Accepted: 01 September 2021
Revised: 01 August 2021
Received: 01 December 2020
Published in TOMM Volume 18, Issue 3


Author Tags

  1. Domain adaptation
  2. domain-invariant graph
  3. Nyström method
  4. few labeled source samples

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • National Natural Science Foundation of China
  • Major Scientific and Technological Projects of CNPC
  • Open Project Program of the National Laboratory of Pattern Recognition (NLPR)
