Abstract

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture that introduces an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection, determining both the dimension of the adaptation layer and its best position in the CNN architecture. Our proposed adaptation method offers empirical performance that exceeds previously published results on a standard benchmark visual domain adaptation task.
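The domain confusion loss described in the abstract is, in the published method, a Maximum Mean Discrepancy (MMD) term computed between source and target activations at the adaptation layer, added to the classification loss. Below is a minimal NumPy sketch of a linear-kernel MMD; the function name, feature dimensions, and batch sizes are illustrative choices, not taken from the paper:

```python
import numpy as np

def mmd_linear(source, target):
    """Linear-kernel Maximum Mean Discrepancy: the squared distance
    between the mean feature embeddings of two batches."""
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(256, 64))  # source-domain features
tgt = rng.normal(0.5, 1.0, size=(256, 64))  # mean-shifted target features

print(mmd_linear(src, src.copy()))  # identical batches: exactly 0
print(mmd_linear(src, tgt))         # shifted batches: clearly positive
```

In the full objective this term is weighted by a hyperparameter and minimized jointly with the supervised loss, which drives the adaptation layer toward features the two domains cannot be distinguished by.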

Keywords

Computer science, Domain adaptation, Deep learning, Machine learning, Pattern recognition, Artificial intelligence, Benchmark, Mathematics

Publication Info

Year: 2014
Type: Preprint
Citations: 2347
Access: Closed


Citation Metrics

2347 citations (OpenAlex)

Cite This

Eric Tzeng, Judy Hoffman, Ning Zhang et al. (2014). Deep Domain Confusion: Maximizing for Domain Invariance. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1412.3474

Identifiers

DOI
10.48550/arxiv.1412.3474