Abstract

A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty.
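The abstract mentions tracking each landmark with a Kalman filter so the map records positional uncertainty alongside position. As a minimal sketch of that idea (not the paper's actual implementation), the following assumes a static 3D landmark observed directly by the stereo system, i.e. an identity measurement model; all names and noise values are illustrative:

```python
import numpy as np

def kalman_update(x, P, z, R):
    """One Kalman measurement update for a static 3D landmark.

    x: (3,) current landmark position estimate
    P: (3,3) positional covariance
    z: (3,) new stereo measurement of the landmark
    R: (3,3) measurement noise covariance
    Assumes an identity measurement model (H = I).
    """
    S = P + R                      # innovation covariance
    K = P @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - x)        # blend estimate and measurement
    P_new = (np.eye(3) - K) @ P    # uncertainty shrinks after the update
    return x_new, P_new

# Example: fuse a second noisy observation of the same landmark.
x, P = np.array([1.0, 2.0, 5.0]), np.eye(3) * 0.5
z, R = np.array([1.2, 1.9, 5.1]), np.eye(3) * 0.5
x, P = kalman_update(x, P, z, R)
# With equal covariances the update averages the two estimates
# and halves the covariance.
```

Repeating this update as a landmark is re-observed from new robot poses is what lets the map carry a per-landmark uncertainty estimate rather than a single fixed position.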

Keywords

Landmark, Computer vision, Artificial intelligence, Mobile robot, Computer science, Scale-invariant feature transform, Robot, Invariant (physics), Simultaneous localization and mapping, Feature extraction, Mathematics

Publication Info

Year: 2002
Type: Article
Volume: 21
Issue: 8
Pages: 735-758
Citations: 312
Access: Closed

Cite This

Stephen Se, David Lowe, Jim Little (2002). Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks. The International Journal of Robotics Research, 21(8), 735-758. https://doi.org/10.1177/027836402128964611

Identifiers

DOI
10.1177/027836402128964611