Deep cycle autoencoder for unsupervised domain adaptation with generative adversarial networks

Published: 25 October 2019

Abstract

Deep learning is a powerful tool for domain adaptation because it can learn robust, high‐level domain‐invariant representations. Recently, adversarial domain adaptation models have been applied to learn representations through adversarial training in the feature space. However, existing models often ignore the generation process for domain adaptation. To tackle this problem, the deep cycle autoencoder (DCA) is proposed, which integrates a generation procedure into adversarial adaptation methods. The proposed DCA consists of four parts: a shared encoder, two separate decoders, a discriminator, and a linear classifier. With labelled source images and unlabelled target images as inputs, the encoder extracts high‐level representations for both the source and target domains, and the two decoders separately reconstruct the inputs from the latent representations. The shared encoder is pitted against the discriminator: the encoder tries to confuse the discriminator, while the discriminator aims at distinguishing which domain the latent representations come from. DCA adopts both an adversarial loss and a maximum mean discrepancy (MMD) loss in the latent space for distribution alignment. The classifier is trained on the representations of both the original and the reconstructed source images. Extensive experimental results demonstrate the effectiveness and reliability of the proposed method.
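The four-part architecture described above can be sketched as follows. This is a minimal illustrative implementation assuming fully connected layers and a linear-kernel MMD estimator; the layer widths, module names, and kernel choice are assumptions for illustration, not the paper's actual configuration (which would use convolutional backbones for images).

```python
import torch
import torch.nn as nn


class DCA(nn.Module):
    """Sketch of the DCA's four parts: shared encoder, two separate
    decoders, a domain discriminator, and a linear classifier."""

    def __init__(self, in_dim=784, latent_dim=64, num_classes=10):
        super().__init__()
        # Shared encoder: maps images from either domain to latent codes.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim))
        # Two separate decoders reconstruct the source / target inputs
        # from the latent representations.
        self.decoder_src = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim))
        self.decoder_tgt = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim))
        # Discriminator: predicts which domain a latent code came from;
        # the encoder is trained adversarially to fool it.
        self.discriminator = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))
        # Linear classifier, trained on source latent codes (for both
        # the original and reconstructed images).
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x_src, x_tgt):
        z_src, z_tgt = self.encoder(x_src), self.encoder(x_tgt)
        return z_src, z_tgt, self.decoder_src(z_src), self.decoder_tgt(z_tgt)


def mmd_linear(z_src, z_tgt):
    """Linear-kernel MMD between two batches of latent codes: the squared
    distance between their batch means. One simple MMD estimator; the
    abstract does not specify which kernel the paper uses."""
    return (z_src.mean(dim=0) - z_tgt.mean(dim=0)).pow(2).sum()
```

In training, the reconstruction losses, the classification loss on source representations, the adversarial loss against the discriminator, and the MMD term would be combined into a single objective; the abstract does not state the relative weighting of these terms.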

7 References

[1]
Venkateswara, H., Eusebio, J., Chakraborty, S., et al: ‘Deep hashing network for unsupervised domain adaptation’. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Hawaii, USA, July 2017, pp. 5385–5394
[2]
Pan, S.J., Yang, Q.: ‘A survey on transfer learning’, IEEE Trans. Knowl. Data Eng., 2010, 22, (10), pp. 1345–1359
[3]
Gong, B., Grauman, K., Sha, F.: ‘Connecting the dots with landmarks: discriminatively learning domain‐invariant features for unsupervised domain adaptation’. Proc. Int. Conf. on Machine Learning, Atlanta, USA, June 2013, pp. 222–230
[4]
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ‘ImageNet classification with deep convolutional neural networks’. Proc. Advances in Neural Information Processing Systems, Lake Tahoe, USA, December 2012, pp. 1097–1105
[5]
Pan, S.J., Tsang, I.W., Kwok, J.T., et al: ‘Domain adaptation via transfer component analysis’, IEEE Trans. Neural Netw., 2011, 22, (2), pp. 199–210
[6]
Cao, X., Wipf, D.P., Wen, F., et al: ‘A practical transfer learning algorithm for face verification’. Proc. Int. Conf. on Computer Vision, Sydney, Australia, December 2013, pp. 3208–3215
[7]
Gong, B., Shi, Y., Sha, F., et al: ‘Geodesic flow kernel for unsupervised domain adaptation’. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Rhode Island, USA, June 2012, pp. 2066–2073
[8]
Donahue, J., Jia, Y., Vinyals, O., et al: ‘DeCAF: a deep convolutional activation feature for generic visual recognition’. Proc. Int. Conf. on Machine Learning, Beijing, China, June 2014, pp. 647–655
[9]
Pan, S.J., Kwok, J.T., Yang, Q., et al: ‘Transfer learning via dimensionality reduction’. Proc. AAAI Conf. on Artificial Intelligence, Illinois, USA, July 2008, pp. 677–682
[10]
Yosinski, J., Clune, J., Bengio, Y., et al: ‘How transferable are features in deep neural networks?’. Proc. Advances in Neural Information Processing Systems, Montreal, Canada, December 2014, pp. 3320–3328
[11]
Long, M., Wang, J., Ding, G., et al: ‘Transfer feature learning with joint distribution adaptation’. Proc. Int. Conf. on Computer Vision, Sydney, Australia, December 2013, pp. 2200–2207
[12]
Tzeng, E., Hoffman, J., Darrell, T., et al: ‘Simultaneous deep transfer across domains and tasks’. Proc. IEEE Int. Conf. on Computer Vision, Santiago, Chile, December 2015, pp. 4068–4076
[13]
Tzeng, E., Hoffman, J., Zhang, N., et al: ‘Deep domain confusion: maximizing for domain invariance’, 2014, arXiv preprint arXiv:1412.3474
[14]
Tzeng, E., Hoffman, J., Saenko, K., et al: ‘Adversarial discriminative domain adaptation’. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Hawaii, USA, July 2017, pp. 2962–2971
[15]
Ghifary, M., Kleijn, W.B., Zhang, M., et al: ‘Deep reconstruction‐classification networks for unsupervised domain adaptation’. Proc. European Conf. on Computer Vision, Amsterdam, the Netherlands, October 2016, pp. 597–613
[16]
Hou, C.A., Tsai, Y.H., Yeh, Y.R., et al: ‘Unsupervised domain adaptation with label and structural consistency’, IEEE Trans. Image Process., 2016, 25, (12), pp. 5552–5562
[17]
Ganin, Y., Ustinova, E., Ajakan, H., et al: ‘Domain‐adversarial training of neural networks’, J. Mach. Learn. Res., 2016, 17, (1), pp. 1–35
[18]
Bousmalis, K., Trigeorgis, G., Silberman, N., et al: ‘Domain separation networks’. Proc. Advances in Neural Information Processing Systems, Barcelona, Spain, December 2016, pp. 343–351
[19]
Glorot, X., Bordes, A., Bengio, Y.: ‘Domain adaptation for large‐scale sentiment classification: a deep learning approach’. Proc. Int. Conf. on Machine Learning, Washington, USA, June 2011, pp. 513–520
[20]
Zhao, Z., Chen, Y., Liu, J., et al: ‘Cross‐people mobile‐phone based activity recognition’. Proc. Int. Joint Conf. on Artificial Intelligence, Barcelona, Spain, July 2011, pp. 1–6
[21]
Vincent, P., Larochelle, H., Lajoie, I., et al: ‘Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion’, J. Mach. Learn. Res., 2010, 11, (38), pp. 3371–3408
[22]
Vincent, P., Larochelle, H., Bengio, Y., et al: ‘Extracting and composing robust features with denoising autoencoders’. Proc. Int. Conf. on Machine Learning, Helsinki, Finland, July 2008, pp. 1096–1103
[23]
Zhuang, F., Cheng, X., Luo, P., et al: ‘Supervised representation learning: transfer learning with deep autoencoders’. Proc. Int. Joint Conf. on Artificial Intelligence, Buenos Aires, Argentina, July 2015, pp. 1–6
[24]
He, K., Zhang, X., Ren, S., et al: ‘Deep residual learning for image recognition’. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, USA, June 2016, pp. 770–778
[25]
Cheng, G., Zhou, P., Han, J.: ‘Learning rotation‐invariant convolutional neural networks for object detection in VHR optical remote sensing images’, IEEE Trans. Geosci. Remote Sens., 2016, 54, (12), pp. 7405–7415
[26]
Cheng, G., Han, J., Zhou, P., et al: ‘Learning rotation‐invariant and fisher discriminative convolutional neural networks for object detection’, IEEE Trans. Image Process., 2019, 28, (1), pp. 265–278
[27]
Cheng, G., Zhou, P., Han, J.: ‘Duplex metric learning for image set classification’, IEEE Trans. Image Process., 2018, 27, (1), pp. 281–292
[28]
Long, M., Cao, Y., Wang, J., et al: ‘Learning transferable features with deep adaptation networks’. Proc. Int. Conf. on Machine Learning, Lille, France, July 2015, pp. 97–105
[29]
Goodfellow, I., Pouget‐Abadie, J., Mirza, M., et al: ‘Generative adversarial nets’. Proc. Advances in Neural Information Processing Systems, Montreal, Canada, December 2014, pp. 2672–2680
[30]
Zhu, J.Y., Park, T., Isola, P., et al: ‘Unpaired image‐to‐image translation using cycle‐consistent adversarial networks’. Proc. IEEE Int. Conf. on Computer Vision, Venice, Italy, October 2017, pp. 2242–2251
[31]
Hoffman, J., Tzeng, E., Park, T., et al: ‘CyCADA: cycle‐consistent adversarial domain adaptation’. Proc. Int. Conf. on Machine Learning, Stockholm, Sweden, July 2018, pp. 1994–2003
[32]
Hu, L., Kan, M., Shan, S., et al: ‘Duplex generative adversarial network for unsupervised domain adaptation’. Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Utah, USA, June 2018, pp. 1498–1507
[33]
Cao, Z., Long, M., Wang, J., et al: ‘Partial transfer learning with selective adversarial networks’. Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Utah, USA, June 2018, pp. 2724–2732
[34]
Pei, Z., Cao, Z., Long, M., et al: ‘Multi‐adversarial domain adaptation’. Proc. AAAI Conf. on Artificial Intelligence, Louisiana, USA, February 2018, pp. 1–6
[35]
Saenko, K., Kulis, B., Fritz, M., et al: ‘Adapting visual category models to new domains’. Proc. European Conf. on Computer Vision, Crete, Greece, September 2010, pp. 213–226
[36]
Paszke, A., Gross, S., Chintala, S., et al: ‘Automatic differentiation in PyTorch’. NIPS Autodiff Workshop, Long Beach, USA, December 2017
[37]
Russakovsky, O., Deng, J., Su, H., et al: ‘ImageNet large scale visual recognition challenge’, Int. J. Comput. Vis., 2015, 115, (3), pp. 211–252
