Abstract

Domain shift refers to the well-known problem that a model trained in one source domain performs poorly when applied to a target domain with different statistics. Domain Generalization (DG) techniques attempt to alleviate this issue by producing models which by design generalize well to novel testing domains. We propose a novel meta-learning method for domain generalization. Rather than designing a specific model that is robust to domain shift as in most previous DG work, we propose a model-agnostic training procedure for DG. Our algorithm simulates train/test domain shift during training by synthesizing virtual testing domains within each mini-batch. The meta-optimization objective requires that steps to improve training domain performance should also improve testing domain performance. This meta-learning procedure trains models with good generalization ability to novel domains. We evaluate our method and achieve state-of-the-art results on a recent cross-domain image classification benchmark, as well as demonstrating its potential on two classic reinforcement learning tasks.
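The meta-optimization idea from the abstract can be illustrated with a toy first-order sketch: take a gradient step on a virtual meta-train domain, then require the updated parameters to also reduce loss on a virtual meta-test domain. This is not the authors' implementation; the domains, losses, and hyperparameter names (`alpha`, `beta`, `lr`) are illustrative stand-ins using simple least-squares regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(shift):
    # Synthetic linear data y = X @ w_true + shift, standing in for one domain;
    # the additive shift plays the role of domain-specific statistics.
    X = rng.normal(size=(64, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + shift
    return X, y

def loss_and_grad(w, X, y):
    # Mean squared error and its gradient with respect to w.
    r = X @ w - y
    return float(np.mean(r ** 2)), 2.0 * X.T @ r / len(y)

meta_train = make_domain(shift=0.0)  # virtual training domain
meta_test = make_domain(shift=0.5)   # virtual (held-out) testing domain

w = np.zeros(3)
alpha, beta, lr = 0.05, 1.0, 0.05    # inner step, meta-test weight, outer step

for _ in range(200):
    # Inner step: gradient on the meta-train domain.
    _, g_train = loss_and_grad(w, *meta_train)
    w_inner = w - alpha * g_train
    # Meta-test gradient evaluated at the inner-updated parameters,
    # so the update is rewarded only if it also helps the unseen domain.
    _, g_test = loss_and_grad(w_inner, *meta_test)
    # First-order combined update: improve both objectives jointly.
    w = w - lr * (g_train + beta * g_test)

final_train, _ = loss_and_grad(w, *meta_train)
final_test, _ = loss_and_grad(w, *meta_test)
```

After training, the single parameter vector compromises between the two domains rather than overfitting the meta-train shift, which is the behavior the meta-objective is designed to encourage.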

Keywords

Computer science, Generalization, Domain (mathematical analysis), Artificial intelligence, Benchmark (surveying), Machine learning, Reinforcement learning, Meta learning (computer science), Mathematics, Task (project management)

Publication Info

Year: 2018
Type: article
Volume: 32
Issue: 1
Citations: 1146
Access: Closed

Citation Metrics

1146 (OpenAlex)

Cite This

Da Li, Yongxin Yang, Yi-Zhe Song et al. (2018). Learning to Generalize: Meta-Learning for Domain Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11596

Identifiers

DOI
10.1609/aaai.v32i1.11596