Abstract

We propose GeoNet, a jointly unsupervised learning framework for monocular depth, optical flow and egomotion estimation from videos. The three components are coupled by the nature of 3D scene geometry and are jointly learned by our framework in an end-to-end manner. Specifically, geometric relationships are extracted from the predictions of the individual modules and combined into an image reconstruction loss that reasons about static and dynamic scene parts separately. Furthermore, we propose an adaptive geometric consistency loss to increase robustness towards outliers and non-Lambertian regions, which resolves occlusions and texture ambiguities effectively. Experiments on the KITTI driving dataset show that our scheme achieves state-of-the-art results in all three tasks, outperforming previous unsupervised methods and performing comparably with supervised ones.
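To make the geometric coupling described above concrete, the following is a minimal PyTorch sketch of the rigid part of such a pipeline: predicted depth and egomotion together induce a "rigid flow" field, which warps a source frame into the target view so the two can be compared as a reconstruction loss. This is an illustration under our own assumptions, not the authors' implementation; names such as `rigid_flow` and `photometric_loss` are hypothetical, and the adaptive consistency weighting for occlusions and non-Lambertian regions is omitted.

```python
import torch
import torch.nn.functional as F

def rigid_flow(depth, pose, K):
    """Flow induced purely by camera motion over a static scene (a sketch,
    not the paper's code): back-project each pixel with its predicted depth,
    transform by the predicted egomotion, and re-project.

    depth: (B, 1, H, W) predicted depth of the target frame
    pose:  (B, 4, 4)    target-to-source camera transform
    K:     (B, 3, 3)    camera intrinsics
    """
    B, _, H, W = depth.shape
    dev, dt = depth.device, depth.dtype
    ys, xs = torch.meshgrid(
        torch.arange(H, device=dev, dtype=dt),
        torch.arange(W, device=dev, dtype=dt),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    # Homogeneous pixel grid, shape (B, 3, H*W).
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)
    # Back-project to 3D camera coordinates and apply the egomotion.
    cam = (torch.linalg.inv(K) @ pix) * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=dev, dtype=dt)], dim=1)
    cam_src = (pose @ cam_h)[:, :3]
    # Re-project into the source frame (clamp guards division near zero;
    # points behind the camera are not handled in this sketch).
    pix_src = K @ cam_src
    pix_src = pix_src[:, :2] / pix_src[:, 2:].clamp(min=1e-6)
    # Rigid flow = source pixel coordinates minus target pixel coordinates.
    return (pix_src - pix[:, :2]).view(B, 2, H, W)

def photometric_loss(target, source, flow):
    """Warp `source` toward `target` along `flow` and compare photometrically.
    The full method would down-weight occluded/inconsistent pixels via the
    adaptive geometric consistency loss, omitted here."""
    B, _, H, W = target.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=flow.device, dtype=flow.dtype),
        torch.arange(W, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    # Sampling grid normalized to [-1, 1] as expected by grid_sample.
    gx = (xs + flow[:, 0]) / (W - 1) * 2 - 1
    gy = (ys + flow[:, 1]) / (H - 1) * 2 - 1
    warped = F.grid_sample(source, torch.stack([gx, gy], dim=-1),
                           align_corners=True)
    return (warped - target).abs().mean()
```

This sketch covers only the static-scene term; per the abstract, the full framework additionally reasons about dynamic scene parts separately, which a residual (non-rigid) flow component on top of the rigid flow would handle.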

Keywords

Artificial intelligence, Robustness (evolution), Computer science, Computer vision, Outlier, Optical flow, Unsupervised learning, Monocular, View synthesis, Consistency (knowledge bases), Pattern recognition (psychology), Image (mathematics)

Publication Info

Year
2018
Type
article
Citations
1234 (OpenAlex)
Access
Closed

Cite This

Zhichao Yin, Jianping Shi (2018). GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2018.00212

Identifiers

DOI
10.1109/cvpr.2018.00212