Domain Generalization via Multidomain Discriminant Analysis

Shoubo Hu, Kun Zhang, Zhitang Chen, Laiwan Chan
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:292-302, 2020.

Abstract

Domain generalization (DG) aims to incorporate knowledge from multiple source domains into a single model that can generalize well on unseen target domains. This problem is ubiquitous in practice, since the distributions of the target data are rarely identical to those of the source data. In this paper, we propose Multidomain Discriminant Analysis (MDA) to address DG of classification tasks in general situations. MDA learns a domain-invariant feature transformation that aims to achieve appealing properties, including a minimal divergence among domains within each class, a maximal separability among classes, and overall maximal compactness of all classes. Furthermore, we provide bounds on the excess risk and generalization error through learning-theoretic analysis. Comprehensive experiments on synthetic and real benchmark datasets demonstrate the effectiveness of MDA.
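The abstract describes a transformation that trades off three scatter terms: divergence among domains within each class (minimized), separability among classes (maximized), and compactness of each class (maximized). As a rough illustration only, the sketch below casts such a trade-off as a linear generalized-eigenvalue problem over scatter matrices; the paper's actual MDA operates on kernel mean embeddings in an RKHS, and every name and formula here is an assumption for illustration, not the authors' method:

```python
import numpy as np

def mda_sketch(X, y, d, n_components=2, reg=1e-3):
    """Toy linear sketch of an MDA-style objective (illustrative only;
    the paper's MDA uses kernel mean embeddings, not raw scatter matrices).

    Builds three scatter matrices and trades them off:
      - S_b: separability among class means (to maximize),
      - S_w: within-class scatter, i.e. class compactness (to minimize),
      - S_d: divergence among domain means within each class (to minimize).
    X: (n, p) features, y: (n,) class labels, d: (n,) domain labels.
    """
    X, y, d = np.asarray(X, dtype=float), np.asarray(y), np.asarray(d)
    n, p = X.shape
    mu = X.mean(axis=0)
    S_b, S_w, S_d = np.zeros((p, p)), np.zeros((p, p)), np.zeros((p, p))
    for c in np.unique(y):
        Xc, dc = X[y == c], d[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        S_b += len(Xc) * (diff @ diff.T)          # class mean vs. global mean
        S_w += (Xc - mu_c).T @ (Xc - mu_c)        # spread within the class
        for s in np.unique(dc):
            Xcs = Xc[dc == s]
            dd = (Xcs.mean(axis=0) - mu_c)[:, None]
            S_d += len(Xcs) * (dd @ dd.T)         # domain mean vs. class mean
    # Rayleigh-quotient trade-off: maximize class separation relative to
    # within-class compactness plus cross-domain divergence.
    B = S_w + S_d + reg * np.eye(p)               # ridge keeps B invertible
    vals, vecs = np.linalg.eig(np.linalg.solve(B, S_b))
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_components]].real
```

On synthetic data where classes differ along one axis and domains shift along another, the leading direction recovered by this sketch favors the class-discriminative axis over the domain-shift axis, which is the qualitative behavior the abstract attributes to MDA.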

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-hu20a,
  title     = {Domain Generalization via Multidomain Discriminant Analysis},
  author    = {Hu, Shoubo and Zhang, Kun and Chen, Zhitang and Chan, Laiwan},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {292--302},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/hu20a/hu20a.pdf},
  url       = {https://proceedings.mlr.press/v115/hu20a.html},
  abstract  = {Domain generalization (DG) aims to incorporate knowledge from multiple source domains into a single model that could generalize well on unseen target domains. This problem is ubiquitous in practice since the distributions of the target data may rarely be identical to those of the source data. In this paper, we propose Multidomain Discriminant Analysis (MDA) to address DG of classification tasks in general situations. MDA learns a domain-invariant feature transformation that aims to achieve appealing properties, including a minimal divergence among domains within each class, a maximal separability among classes, and overall maximal compactness of all classes. Furthermore, we provide the bounds on excess risk and generalization error by learning theory analysis. Comprehensive experiments on synthetic and real benchmark datasets demonstrate the effectiveness of MDA.}
}
Endnote
%0 Conference Paper
%T Domain Generalization via Multidomain Discriminant Analysis
%A Shoubo Hu
%A Kun Zhang
%A Zhitang Chen
%A Laiwan Chan
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-hu20a
%I PMLR
%P 292--302
%U https://proceedings.mlr.press/v115/hu20a.html
%V 115
%X Domain generalization (DG) aims to incorporate knowledge from multiple source domains into a single model that could generalize well on unseen target domains. This problem is ubiquitous in practice since the distributions of the target data may rarely be identical to those of the source data. In this paper, we propose Multidomain Discriminant Analysis (MDA) to address DG of classification tasks in general situations. MDA learns a domain-invariant feature transformation that aims to achieve appealing properties, including a minimal divergence among domains within each class, a maximal separability among classes, and overall maximal compactness of all classes. Furthermore, we provide the bounds on excess risk and generalization error by learning theory analysis. Comprehensive experiments on synthetic and real benchmark datasets demonstrate the effectiveness of MDA.
APA
Hu, S., Zhang, K., Chen, Z. & Chan, L. (2020). Domain Generalization via Multidomain Discriminant Analysis. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:292-302. Available from https://proceedings.mlr.press/v115/hu20a.html.