Research article

Deep visual domain adaptation: A survey

Published: 27 October 2018, in Neurocomputing, Volume 312, Issue C

Abstract

Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys of shallow domain adaptation, but few timely reviews of the emerging deep-learning-based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of deep domain adaptation scenarios according to the properties of the data that define how the two domains diverge. Second, we group deep domain adaptation approaches into several categories based on their training loss, and briefly analyze and compare the state-of-the-art methods within each category. Third, we overview computer vision applications that go beyond image classification, such as face recognition, semantic segmentation, and object detection. Fourth, we highlight potential deficiencies of current methods and several promising future directions.
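To make the "training loss" taxonomy concrete, the sketch below illustrates one representative of the discrepancy-based family: a shared feature extractor trained with a source classification loss plus a maximum mean discrepancy (MMD) penalty that pulls source and target feature distributions together, in the spirit of deep domain confusion and deep adaptation networks. This is a minimal sketch under assumed settings, not code from the survey; the two-layer network, the linear-kernel MMD estimator, the dummy data, and the trade-off weight `lam` are all illustrative choices.

```python
# Minimal sketch of discrepancy-based deep domain adaptation (assumed setup):
# a shared feature extractor is trained with a classification loss on labeled
# source data plus an MMD penalty between source and target features.
import torch
import torch.nn as nn

def mmd_linear(f_src, f_tgt):
    """Linear-kernel MMD^2 between two batches of features."""
    delta = f_src.mean(dim=0) - f_tgt.mean(dim=0)
    return delta.dot(delta)

# Shared feature extractor and task classifier (toy sizes for illustration).
feature_extractor = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                  nn.Linear(128, 64), nn.ReLU())
classifier = nn.Linear(64, 10)

params = list(feature_extractor.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=1e-2)
ce = nn.CrossEntropyLoss()
lam = 0.25  # trade-off between task loss and domain discrepancy (assumed)

# Dummy batches standing in for labeled source and unlabeled target data.
x_src, y_src = torch.randn(32, 256), torch.randint(0, 10, (32,))
x_tgt = torch.randn(32, 256)

for step in range(100):
    f_src = feature_extractor(x_src)
    f_tgt = feature_extractor(x_tgt)
    # Task loss on the source domain + discrepancy penalty across domains.
    loss = ce(classifier(f_src), y_src) + lam * mmd_linear(f_src, f_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Other categories discussed in the survey swap the discrepancy term for a different training signal: adversarial-based methods replace the MMD penalty with a domain discriminator (often trained through a gradient reversal layer), and reconstruction-based methods add a decoder that must reconstruct target inputs from the shared features.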


Keywords

1. Deep domain adaptation
2. Deep networks
3. Transfer learning
4. Computer vision applications
