Abstract

In this paper, we propose a dense visual SLAM method for RGB-D cameras that minimizes both the photometric and the depth error over all pixels. In contrast to sparse, feature-based methods, this allows us to better exploit the available information in the image data which leads to higher pose accuracy. Furthermore, we propose an entropy-based similarity measure for keyframe selection and loop closure detection. From all successful matches, we build up a graph that we optimize using the g2o framework. We evaluated our approach extensively on publicly available benchmark datasets, and found that it performs well in scenes with low texture as well as low structure. In direct comparison to several state-of-the-art methods, our approach yields a significantly lower trajectory error. We release our software as open-source.
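The core idea summarized above, minimizing a combined photometric and depth error over all pixels rather than sparse features, can be illustrated with a minimal sketch. The function names, the nearest-neighbor image lookup, and the scalar depth weight `w_depth` are illustrative assumptions, not the paper's actual implementation (which uses robust weighting and coarse-to-fine optimization):

```python
import numpy as np

def warp_point(p, depth, K, K_inv, T):
    """Back-project pixel p = (u, v) with its depth, apply the 4x4 rigid-body
    pose T, and project into the second camera with intrinsics K.
    Returns the warped pixel coordinates and the predicted depth."""
    u, v = p
    X = depth * (K_inv @ np.array([u, v, 1.0]))   # 3D point in frame 1
    Xh = T @ np.append(X, 1.0)                    # transformed into frame 2
    x = K @ Xh[:3]                                # projected onto image 2
    return x[:2] / x[2], Xh[2]

def joint_residual(p, d1, I1, I2, Z2, K, K_inv, T, w_depth=1.0):
    """Squared photometric + weighted depth residual for one pixel, using
    nearest-neighbor lookup in the second intensity image I2 and depth map Z2.
    w_depth (assumed here as a fixed scalar) balances the two error terms."""
    (u2, v2), z_pred = warp_point(p, d1, K, K_inv, T)
    u2i, v2i = int(round(u2)), int(round(v2))
    r_photo = float(I2[v2i, u2i]) - float(I1[p[1], p[0]])  # intensity error
    r_depth = float(Z2[v2i, u2i]) - z_pred                 # depth error
    return r_photo**2 + w_depth * r_depth**2
```

A dense method sums this residual over every valid pixel and minimizes it with respect to the pose T; with the identity pose and identical frames the residual is zero, as expected.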

Keywords

Computer science, Artificial intelligence, Computer vision, Benchmark (surveying), RGB color model, Simultaneous localization and mapping, Entropy (arrow of time), Exploit, Pixel, Feature extraction, Pattern recognition (psychology), Robot, Mobile robot

Publication Info

Year
2013
Type
article
Pages
2100-2106
Citations
895
Access
Closed

Cite This

Christian Kerl, Jürgen Sturm, Daniel Cremers (2013). Dense visual SLAM for RGB-D cameras. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2100-2106. https://doi.org/10.1109/iros.2013.6696650

Identifiers

DOI
10.1109/iros.2013.6696650