DOI: 10.1145/3430984.3431017

Deep Domain Adaptation under Label Scarcity

Published: 02 January 2021

Abstract

The goal of Domain Adaptation (DA) is to leverage labeled examples from a source domain to infer an accurate model for a target domain in which labels are unavailable or, at best, scarce. Recently, there has been a surge of adversarial-learning-based deep-net approaches to the DA problem, a prominent example being the DANN approach [9]. These methods require a large number of labeled source examples to infer a good model for the target domain, and their performance degrades as the number of labels shrinks. In this paper, we study the behavior of such approaches (especially DANN) under scarce-label scenarios. Further, we propose an architecture, TRAVERS, that amalgamates TRAnsductive learning principles with adVERSarial learning so as to cushion the performance of these approaches under label scarcity. Experimental results on both text and images show a significant boost in the performance of TRAVERS over approaches such as DANN under scarce-label scenarios.
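To make the adversarial-training setup concrete, below is a minimal sketch of a DANN-style model [9] in PyTorch: a shared feature extractor feeds a label predictor (trained on the labeled source examples) and a domain discriminator attached through a gradient reversal layer, which pushes the shared features toward domain invariance. The module names, layer sizes, and toy batches here are illustrative assumptions for exposition only; this sketch is not the TRAVERS architecture, which additionally brings in transductive learning over the unlabeled target examples.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)

class DANNSketch(nn.Module):
    def __init__(self, in_dim=300, hid_dim=128, n_classes=2):
        super().__init__()
        # Shared feature extractor (illustrative; real models use CNNs/LSTMs for text and images)
        self.features = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # Label predictor, trained only on labeled source examples
        self.classifier = nn.Linear(hid_dim, n_classes)
        # Domain discriminator, trained adversarially through the gradient reversal layer
        self.domain_disc = nn.Linear(hid_dim, 2)

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        return self.classifier(f), self.domain_disc(grad_reverse(f, lambda_))

# One illustrative training step with a labeled source batch and an unlabeled target batch.
model = DANNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x_src, y_src = torch.randn(32, 300), torch.randint(0, 2, (32,))  # toy labeled source batch
x_tgt = torch.randn(32, 300)                                     # toy unlabeled target batch

cls_src, dom_src = model(x_src)
_, dom_tgt = model(x_tgt)
dom_labels = torch.cat([torch.zeros(32, dtype=torch.long),       # 0 = source
                        torch.ones(32, dtype=torch.long)])       # 1 = target
loss = ce(cls_src, y_src) + ce(torch.cat([dom_src, dom_tgt]), dom_labels)
opt.zero_grad()
loss.backward()
opt.step()

Only the classification term in the loss ever sees labels, which is why such models need many labeled source examples and degrade when labels become scarce; that is the regime the paper studies and that TRAVERS is designed to cushion.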

References

[1]
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning 79, 1 (2010), 151–175.
[2]
Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2006. Analysis of Representations for Domain Adaptation. In Proceedings of the 19th International Conference on Neural Information Processing Systems (NIPS). 137–144.
[3]
Shai Ben-David, Tyler Lu, Teresa Luu, and David Pal. 2010. Impossibility Theorems for Domain Adaptation. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS). 129–136.
[4]
John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. 2007. Learning Bounds for Domain Adaptation. In Proceedings of the 20th International Conference on Neural Information Processing Systems (NIPS'07). 129–136.
[5]
John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boomboxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'07). 187–205.
[6]
Xilun Chen and Claire Cardie. 2018. Multinomial Adversarial Networks for Multi-Domain Text Classification. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL'18). 1226–1240.
[7]
Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12 (2011), 2493–2537.
[8]
Yong Dai, Jian Liu, Xiancong Ren, and Zenglin Xu. 2020. Adversarial Training Based Multi-Source Unsupervised Domain Adaptation for Sentiment Analysis. arXiv:2006.05602 [cs.CL]
[9]
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. 2016. Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research 17 (2016), 1–35.
[10]
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS'14). 2672–2680.
[11]
Thorsten Joachims. 1999. Transductive Inference for Text Classification Using Support Vector Machines. In Proceedings of the 16th International Conference on Machine Learning (ICML'99). 200–209.
[12]
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'14).
[13]
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 11 (1998), 2278–2324.
[14]
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130.
[15]
Pengfei Liu, Xipeng Qiu, Jifan Chen, and Xuanjing Huang. 2016. Deep fusion LSTMs for text semantic matching. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'16).
[16]
Pengfei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, and Xuanjing Huang. 2015. Multi-timescale long short-term memory neural network for modelling sentences and documents. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'15).
[17]
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial Multi-task Learning for Text Classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL'17). 1–10.
[18]
Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain Adaptation: Learning Bounds and Algorithms. In Proceedings of the 22nd Conference on Learning Theory (COLT'09).
[19]
Saeid Motiian, Quinn Jones, Seyed Mehdi Iranmanesh, and Gianfranco Doretto. 2017. Few-Shot Adversarial Domain Adaptation. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17). Curran Associates Inc., Red Hook, NY, USA, 6673–6683.
[20]
Vishal M. Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. 2015. Visual Domain Adaptation: A survey of recent advances. IEEE Signal Processing Magazine 32, 3 (2015), 53–69.
[21]
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'14). 1532–1543.
[22]
Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric Tri-training for Unsupervised Domain Adaptation. In Proceedings of the 34th International Conference on Machine Learning (ICML'17). 2988–2997.
[23]
Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference 90, 2 (2000), 227–244.
[24]
Rui Shu, Hung H. Bui, Hirokazu Narui, and Stefano Ermon. 2018. A DIRT-T Approach to Unsupervised Domain Adaptation. In Proceedings of the 6th International Conference on Learning Representations (ICLR'18).
[25]
Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'13).
[26]
Baochen Sun and Kate Saenko. 2014. From Virtual to Reality: Fast Adaptation of Virtual Object Detectors to Real Domains. In British Machine Vision Conference (BMVC'14).
[27]
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the Conference on Neural Information Processing Systems (NIPS'14). 3104–3112.
[28]
E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell. 2017. Adversarial discriminative domain adaptation. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR'17).
[29]
David Vazquez, Antonio M. Lopez, Javier Marin, Daniel Ponsa, and David Geronimo. 2014. Virtual and real world adaptation for pedestrian detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 4 (2014), 797–809.
[30]
Mei Wang and Weihong Deng. 2018. Deep visual domain adaptation: A survey. Neurocomputing 312 (2018), 135–153.
[31]
F. Wu and Y. Huang. 2015. Collaborative Multi-domain Sentiment Classification. In IEEE International Conference on Data Mining (ICDM'15). 459–468.


Published In

CODS-COMAD '21: Proceedings of the 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CODS & 26th COMAD)
January 2021
453 pages

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adversarial learning
  2. cross domain representation
  3. domain adaptation

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CODS COMAD 2021
CODS COMAD 2021: 8th ACM IKDD CODS and 26th COMAD
January 2–4, 2021
Bangalore, India

Acceptance Rates

Overall Acceptance Rate 197 of 680 submissions, 29%
