Abstract

In this paper, we describe a method that estimates the motion of a calibrated camera mounted on an experimental vehicle, together with the three-dimensional geometry of the environment, using video as the only input. Interest points are tracked and matched between frames at video rate, robust estimates of the camera motion are computed in real time, and key-frames are selected from which the 3D positions of the features are reconstructed. The algorithm is particularly well suited to the reconstruction of long image sequences thanks to the introduction of a fast, local bundle adjustment method that ensures both good accuracy and consistency of the estimated camera poses along the sequence, while greatly reducing computational complexity compared to a global bundle adjustment. Experiments on real data were carried out to evaluate the speed and robustness of the method on a sequence about one kilometer long, and the results are compared to ground truth measured with a differential GPS.
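
Below is a minimal sketch, not the authors' implementation, of the local bundle adjustment idea described in the abstract: only the poses of the most recent key-frames and the 3D points are refined, while earlier poses stay fixed, so each optimisation has a bounded size regardless of sequence length. The pinhole model, parameter names, and the SciPy-based solver are illustrative assumptions.

import numpy as np
from scipy.optimize import least_squares

def project(points_3d, pose, f=700.0, cx=320.0, cy=240.0):
    # Pinhole projection; pose = (rx, ry, rz, tx, ty, tz) is an
    # axis-angle rotation followed by a translation into the camera frame.
    rvec, t = pose[:3], pose[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    cam = points_3d @ R.T + t
    return np.column_stack([f * cam[:, 0] / cam[:, 2] + cx,
                            f * cam[:, 1] / cam[:, 2] + cy])

def local_bundle_adjustment(poses, points, observations, window=3):
    # Refine the last `window` key-frame poses and the 3D points,
    # minimising 2D reprojection error; older key-frame poses stay fixed,
    # which is what keeps the problem size constant along the sequence.
    # observations: list of (keyframe_index, point_index, observed_uv) tuples.
    n_cams = len(poses)
    free = list(range(max(0, n_cams - window), n_cams))
    x0 = np.concatenate([np.concatenate([poses[i] for i in free]), points.ravel()])

    def residuals(x):
        cams = {i: np.asarray(poses[i]) for i in range(n_cams)}
        for j, i in enumerate(free):
            cams[i] = x[6 * j: 6 * j + 6]
        pts = x[6 * len(free):].reshape(-1, 3)
        res = []
        for cam_idx, pt_idx, uv in observations:
            res.append(project(pts[pt_idx][None, :], cams[cam_idx])[0] - uv)
        return np.concatenate(res)

    sol = least_squares(residuals, x0)
    for j, i in enumerate(free):
        poses[i] = sol.x[6 * j: 6 * j + 6]
    return poses, sol.x[6 * len(free):].reshape(-1, 3)

In the setting described by the abstract, such a window would slide forward each time a new key-frame is selected, so every adjustment touches only a few poses and the points they observe; this is where the claimed reduction in complexity relative to a global bundle adjustment comes from.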

Keywords

Bundle adjustment, Computer vision, Robustness, Artificial intelligence, Computer science, Ground truth, Global Positioning System, Motion estimation, Structure from motion, Single camera, 3D reconstruction, Photogrammetry

Publication Info

Year
2006
Type
Conference paper
Pages
363-370
Citations
372
Access
Closed

Citation Metrics

OpenAlex
372
Influential
33
CrossRef
244

Cite This

E. Mouragnon, M. Lhuillier, M. Dhome et al. (2006). Real Time Localization and 3D Reconstruction. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 363-370. https://doi.org/10.1109/cvpr.2006.236

Identifiers

DOI
10.1109/cvpr.2006.236
