Abstract

Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaptation. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.
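The bound in this paper involves a divergence term measuring how distinguishable the source and target distributions are under a given representation. A common empirical proxy for that term in the domain-adaptation literature is the "proxy A-distance": train a classifier to separate source from target examples and convert its held-out error into a distance. The sketch below is illustrative only, not the paper's construction; the function name and the choice of logistic regression as the domain classifier are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def proxy_a_distance(X_source, X_target, random_state=0):
    """Estimate a proxy A-distance between two samples.

    A domain classifier is trained to tell source from target examples;
    its held-out error err gives d_A = 2 * (1 - 2 * err). Easily
    distinguishable domains give a distance near 2; indistinguishable
    domains give chance-level error and a distance near 0.
    """
    X = np.vstack([X_source, X_target])
    y = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=random_state, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    err = 1.0 - clf.score(X_te, y_te)
    return 2.0 * (1.0 - 2.0 * err)

# Identical distributions -> distance near 0; well-separated ones -> near 2.
rng = np.random.default_rng(0)
same = proxy_a_distance(rng.normal(0, 1, (500, 5)), rng.normal(0, 1, (500, 5)))
shifted = proxy_a_distance(rng.normal(0, 1, (500, 5)), rng.normal(3, 1, (500, 5)))
```

A representation that drives this quantity toward zero while preserving a large training margin is exactly the tradeoff the abstract describes.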

Keywords

Adaptation (eye), Domain adaptation, Psychology, Domain (mathematical analysis), Computer science, Cognitive science, Mathematics, Artificial intelligence, Neuroscience, Mathematical analysis


Publication Info

Year: 2007
Type: Book chapter
Pages: 137-144
Citations: 1963
Access: Closed



Cite This

Shai Ben-David, John Blitzer, Koby Crammer et al. (2007). Analysis of Representations for Domain Adaptation. The MIT Press eBooks, 137-144. https://doi.org/10.7551/mitpress/7503.003.0022

Identifiers

DOI
10.7551/mitpress/7503.003.0022