Abstract

Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibration channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We investigate the classic hypothesis that a powerful representation is one that models view-invariant factors. We study this hypothesis under the framework of multiview contrastive learning, where we learn a representation that aims to maximize mutual information between different views of the same scene but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics. Our approach achieves state-of-the-art results on image and video unsupervised learning benchmarks. Code is released at: this http URL.
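To make the objective described in the abstract concrete, below is a minimal NumPy sketch of a two-view, InfoNCE-style contrastive loss: embeddings of the same scene from two views are pulled together while embeddings of other scenes in the batch act as negatives. The function names, temperature value, and symmetrized formulation are illustrative assumptions of this sketch, not the authors' released implementation (see the code link above).

```python
# Minimal sketch of a two-view contrastive (InfoNCE-style) objective.
# Row i of z1 and row i of z2 are two views of the same scene (positive pair);
# all other rows in the batch serve as negatives. Names and defaults are
# illustrative, not the paper's released code.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Scale each embedding to unit length."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def log_softmax(logits, axis=-1):
    """Numerically stable log-softmax."""
    shifted = logits - logits.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

def two_view_contrastive_loss(z1, z2, temperature=0.07):
    """InfoNCE-style loss for a batch of paired view embeddings (batch, dim)."""
    z1, z2 = l2_normalize(z1), l2_normalize(z2)
    logits = (z1 @ z2.T) / temperature          # (batch, batch) similarity scores
    idx = np.arange(logits.shape[0])
    # Cross-entropy toward the diagonal (matching pairs), symmetrized over
    # both directions: anchor in view 1 and anchor in view 2.
    loss_12 = -log_softmax(logits, axis=1)[idx, idx].mean()
    loss_21 = -log_softmax(logits.T, axis=1)[idx, idx].mean()
    return 0.5 * (loss_12 + loss_21)

# Toy usage with random "embeddings"; in practice each view (e.g., a color
# channel or modality of the same scene) would pass through its own encoder.
rng = np.random.default_rng(0)
z_view1 = rng.normal(size=(8, 128))
z_view2 = rng.normal(size=(8, 128))
print(two_view_contrastive_loss(z_view1, z_view2))
```

Scaling beyond two views, as the abstract claims, can be read as averaging such pairwise losses over view pairs; the exact multiview formulation is given in the paper itself.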

Keywords

Computer science, Coding (social sciences), Artificial intelligence, Computer vision, Mathematics

Related Publications

Recognizing indoor scenes

We propose a scheme for indoor place identification based on the recognition of global scene views. Scene views are encoded using a holistic representation that provides low-res...

2009 · 2009 IEEE Conference on Computer Visi... · 1464 citations

Publication Info

Year
2020
Type
book-chapter
Pages
776-794
Citations
1682
Access
Closed

Citation Metrics

OpenAlex
1682
Influential
282

Cite This

Yonglong Tian, Dilip Krishnan, Phillip Isola (2020). Contrastive Multiview Coding. Lecture Notes in Computer Science, 776-794. https://doi.org/10.1007/978-3-030-58621-8_45

Identifiers

DOI
10.1007/978-3-030-58621-8_45
PMID
41079152
PMCID
PMC12513904
arXiv
1906.05849

Data Quality

Data completeness: 79%