Visual odometry

{{short description|Determining the position and orientation of a robot by analyzing associated camera images}}

[[Image:Optical flow example v2.png|thumb|The optical flow vector of a moving object in a video sequence]]

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.{{cite journal

| author = Maimone, M. |author2=Cheng, Y. |author3=Matthies, L.

| year = 2007

| title = Two years of Visual Odometry on the Mars Exploration Rovers

| journal = Journal of Field Robotics

| volume = 24

| issue = 3

| pages = 169–186

| url = http://www-robotics.jpl.nasa.gov/publications/Mark_Maimone/rob-06-0081.R4.pdf

| access-date = 2008-07-10

| doi = 10.1002/rob.20184

|citeseerx=10.1.1.104.3110 |s2cid=17544166 }}

Overview

In navigation, odometry is the use of data from the movement of actuators to estimate change in position over time, with devices such as rotary encoders measuring wheel rotations. While useful for many wheeled or tracked vehicles, traditional odometry techniques cannot be applied to mobile robots with non-standard locomotion methods, such as legged robots. In addition, odometry universally suffers from precision problems, since wheels tend to slip and slide on the floor, so the distance traveled does not correspond uniformly to the wheel rotations. The error is compounded when the vehicle operates on non-smooth surfaces. Odometry readings become increasingly unreliable as these errors accumulate and compound over time.
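To make the drift concrete, the following is a minimal dead-reckoning sketch for a differential-drive robot (the wheel radius, tick count, and wheel base are hypothetical values, not taken from any particular platform); any slip between wheel and floor corrupts every subsequent pose estimate:

<syntaxhighlight lang="python">
import math

# Hypothetical parameters for a differential-drive robot.
WHEEL_RADIUS_M = 0.05  # wheel radius in meters
TICKS_PER_REV = 1024   # encoder ticks per wheel revolution
WHEEL_BASE_M = 0.30    # distance between the two wheels in meters

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Dead-reckon a new planar pose from the encoder ticks of each wheel."""
    dl = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0                # distance traveled by the robot center
    dtheta = (dr - dl) / WHEEL_BASE_M  # change in heading
    # Wheel slip makes dl and dr overestimate the true motion, and because
    # each pose is built on the previous one, the error only accumulates.
    return (x + d * math.cos(theta + dtheta / 2.0),
            y + d * math.sin(theta + dtheta / 2.0),
            theta + dtheta)
</syntaxhighlight>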

Visual odometry is the process of determining equivalent odometry information using sequential camera images to estimate the distance traveled. Visual odometry allows for enhanced navigational accuracy in robots or vehicles using any type of locomotion on any{{Citation needed|date=January 2021|reason=Water included? Maybe "solid"?}} surface.

Types

There are various types of visual odometry (VO).

=Monocular and stereo=

Depending on the camera setup, VO can be categorized as monocular VO (a single camera) or stereo VO (two cameras in a stereo setup).

[[File:VIO sensor in various commercial quadcopters .jpg|thumb|VIO sensors in various commercial quadcopters]]

=Feature-based and direct method=

In traditional VO, visual information is obtained by the feature-based method, which extracts image feature points and tracks them in the image sequence. Recent developments in VO research provided an alternative, called the direct method, which uses the pixel intensities in the image sequence directly as visual input. There are also hybrid methods.

=Visual inertial odometry=

If an inertial measurement unit (IMU) is used within the VO system, it is commonly referred to as Visual Inertial Odometry (VIO).

Algorithm

Most existing approaches to visual odometry are based on the following stages; a minimal code sketch of the pipeline follows the list.

  1. Acquire input images: using either single cameras,{{cite conference

|author = Chhaniyara, Savan

|author2 = KASPAR ALTHOEFER

|author3 = LAKMAL D. SENEVIRATNE

|year = 2008

|title = Visual Odometry Technique Using Circular Marker Identification For Motion Parameter Estimation

|conference = The Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines

|book-title = Advances in Mobile Robotics: Proceedings of the Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Coimbra, Portugal

|volume = 11

|publisher = World Scientific, 2008

|url = http://eproceedings.worldscinet.com/9789812835772/9789812835772_0128.html

|conference-url = https://books.google.com/books?id=8L7izBmmCuQC&q=savan+chhaniyara&pg=PA1069

|access-date = 2010-01-22

|archive-date = 2012-02-24

|archive-url = https://web.archive.org/web/20120224015522/http://eproceedings.worldscinet.com/9789812835772/9789812835772_0128.html

|url-status = dead

}} stereo cameras,{{cite conference

|author1=Nister, D |author2=Naroditsky, O. |author3=Bergen, J | conference = Computer Vision and Pattern Recognition, 2004. CVPR 2004.

| title = Visual Odometry

| pages = I–652 – I–659 Vol.1

| volume = 1

|date=Jan 2004

| doi = 10.1109/CVPR.2004.1315094

}} or omnidirectional cameras.{{cite journal

| author = Scaramuzza, D.

|author2=Siegwart, R.

|s2cid=13894940

|date=October 2008

| title = Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles

| journal = IEEE Transactions on Robotics

|volume=24

|issue=5

| pages = 1015–1026

|doi=10.1109/TRO.2008.2004490

|hdl=20.500.11850/14362

|hdl-access=free

}}{{cite conference

| author = Corke, P. |author2=Strelow, D. |author3=Singh, S.

| title = Omnidirectional visual odometry for a planetary rover

| book-title = Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on

| volume = 4

|doi=10.1109/IROS.2004.1390041 }}

  2. Image correction: apply image processing techniques for lens distortion removal, etc.
  3. Feature detection: define interest operators, match features across frames, and construct the optical flow field.
     * Feature extraction and correlation.
     * Use correlation, not long-term feature tracking, to establish correspondence between two images.
     * Construct the optical flow field (Lucas–Kanade method).
  4. Check flow field vectors for potential tracking errors and remove outliers.{{cite conference

| author = Campbell, J. |author2=Sukthankar, R. |author3=Nourbakhsh, I. |author4=Pittsburgh, I.R.

| title = Techniques for evaluating optical flow for visual odometry in extreme terrain

| book-title = Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on

| volume = 4

|doi=10.1109/IROS.2004.1389991 }}

  5. Estimation of the camera motion from the optical flow.{{cite book

|author = Sunderhauf, N.

|author2 = Konolige, K.

|author3 = Lacroix, S.

|author4 = Protzel, P.

|year = 2005

|chapter = Visual odometry using sparse bundle adjustment on an autonomous outdoor vehicle

|editor1 = Levi |editor2=Schanz |editor3=Lafrenz |editor4=Avrutin

|title = Tagungsband Autonome Mobile Systeme 2005

|series = Reihe Informatik aktuell

|publisher = Springer Verlag

|pages = 157–163

|url = http://www.tu-chemnitz.de/etit/proaut/index.download.df493a7bc2c27263f7d8ff467ea84879.pdf

|access-date = 2008-07-10

|archive-url = https://web.archive.org/web/20090211031719/http://www.tu-chemnitz.de/etit/proaut/index.download.df493a7bc2c27263f7d8ff467ea84879.pdf

|archive-date = 2009-02-11

|url-status = dead

}}{{cite book

| author = Konolige, K.

|author2=Agrawal, M. |author3=Bolles, R.C. |author4=Cowan, C. |author5=Fischler, M. |author6= Gerkey, B.P.

|title=Experimental Robotics |chapter=Outdoor Mapping and Navigation Using Stereo Vision |volume=39 |pages=179–190 |doi=10.1007/978-3-540-77457-0_17 |series=Springer Tracts in Advanced Robotics |date=2008 |isbn=978-3-540-77456-3 }}{{cite journal

| author = Olson, C.F. |author2=Matthies, L. |author3=Schoppers, M. |author4=Maimone, M.W.

| year = 2002

| title = Rover navigation using stereo ego-motion

| journal = Robotics and Autonomous Systems

| volume = 43

|issue=4 | pages = 215–229

| url = http://faculty.washington.edu/cfolson/papers/pdf/ras03.pdf

| access-date = 2010-06-06

| doi=10.1016/s0921-8890(03)00004-6}}{{cite journal

| author = Cheng, Y. |author2=Maimone, M.W. |author3=Matthies, L.

|s2cid=15149330 | year = 2006

| title = Visual Odometry on the Mars Exploration Rovers

| journal = IEEE Robotics and Automation Magazine

| volume = 13

| issue = 2

| pages = 54–62

| doi = 10.1109/MRA.2006.1638016

|citeseerx=10.1.1.297.4693 }}

     * Choice 1: Kalman filter for state estimate distribution maintenance.
     * Choice 2: find the geometric and 3D properties of the features that minimize a cost function based on the re-projection error between two adjacent images. This can be done by mathematical minimization or random sampling.
  6. Periodic repopulation of trackpoints to maintain coverage across the image.
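The sketch below illustrates stages 2 through 5 for the monocular case using OpenCV; cv2.goodFeaturesToTrack, cv2.calcOpticalFlowPyrLK, cv2.findEssentialMat, and cv2.recoverPose are standard OpenCV functions, while the intrinsic matrix K and the surrounding program structure are assumptions for illustration, not any particular published system:

<syntaxhighlight lang="python">
import cv2
import numpy as np

# Assumed pinhole intrinsics for illustration; in practice K comes
# from camera calibration (stage 2 would also undistort the images,
# e.g. with cv2.undistort).
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

def vo_step(prev_gray, cur_gray):
    """One monocular VO step: detect, track, reject outliers, estimate motion."""
    # Stage 3: detect interest points in the previous frame ...
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1000,
                                       qualityLevel=0.01, minDistance=8)
    # ... and track them into the current frame with Lucas-Kanade optical flow.
    pts_cur, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                     pts_prev, None)
    ok = status.ravel() == 1
    p0, p1 = pts_prev[ok], pts_cur[ok]
    # Stages 4-5: RANSAC inside findEssentialMat discards outlier flow
    # vectors, then the essential matrix is decomposed into a rotation R and
    # a unit-length translation t (monocular VO cannot observe absolute scale).
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t
</syntaxhighlight>

Stage 6 corresponds to re-running the detector whenever the set of successfully tracked points thins out, so that features keep covering the whole image.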

An alternative to feature-based methods is the "direct" or appearance-based visual odometry technique, which minimizes an error directly in sensor space and thereby avoids feature matching and extraction.{{cite journal

| author = Comport, A.I. |author2=Malis, E. |author3=Rives, P.

|s2cid=15139693 | title = Real-time Quadrifocal Visual Odometry

| journal= International Journal of Robotics Research

| number = 2–3

| pages = 245–266

| volume = 29

| year = 2010

| doi = 10.1177/0278364909356601

| editor = F. Chaumette |editor2=P. Corke |editor3=P. Newman

|citeseerx=10.1.1.720.3113 }}{{Cite conference| last1 = Engel | first1 = Jakob | last2 = Schöps | first2 = Thomas | last3 = Cremers | first3 = Daniel |title= LSD-SLAM: Large-Scale Direct Monocular SLAM |book-title=Computer Vision | year = 2014 |editor1=Fleet D. |editor2=Pajdla T. |editor3=Schiele B. |editor4=Tuytelaars T. | conference= European Conference on Computer Vision 2014 | url = https://vision.in.tum.de/_media/spezial/bib/engel14eccv.pdf |doi=10.1007/978-3-319-10605-2_54 |volume=8690 |series=Lecture Notes in Computer Science}}{{Cite conference | last1 = Engel | first1 = Jakob | last2 = Sturm | first2 = Jürgen | last3 = Cremers | first3 = Daniel | title= Semi-Dense Visual Odometry for a Monocular Camera |date=2013 | book-title = IEEE International Conference on Computer Vision (ICCV) | url = https://vision.in.tum.de/_media/spezial/bib/engel2013iccv.pdf |doi=10.1109/ICCV.2013.183 |citeseerx=10.1.1.402.6918}}
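As a rough illustration of the idea (not a reproduction of any particular published system), a direct method scores a candidate camera motion by the photometric error it induces; the warp function below is a hypothetical stand-in for the reprojection implied by the motion estimate and the scene depth:

<syntaxhighlight lang="python">
import numpy as np

def photometric_error(img_ref, img_cur, warp):
    """Sum of squared intensity differences after warping reference pixels
    into the current frame; a direct method minimizes this quantity over
    the camera motion. `warp` maps pixel coordinates (u, v) to coordinates
    in img_cur and stands in for the projection induced by depth and the
    candidate motion."""
    err = 0.0
    h, w = img_ref.shape
    for v in range(h):
        for u in range(w):
            u2, v2 = warp(u, v)
            if 0 <= u2 < w and 0 <= v2 < h:
                # The residual lives directly in sensor (intensity) space;
                # no features are extracted or matched.
                diff = float(img_cur[int(v2), int(u2)]) - float(img_ref[v, u])
                err += diff * diff
    return err
</syntaxhighlight>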

Another method, coined 'visiodometry', estimates the planar roto-translations between images using phase correlation instead of extracting features.{{cite conference

| author = Zaman, M.

| year = 2007

| title = High Precision Relative Localization Using a Single Camera

| book-title = Robotics and Automation, 2007.(ICRA 2007). Proceedings. 2007 IEEE International Conference on

| doi = 10.1109/ROBOT.2007.364078

}}{{cite journal

| author = Zaman, M.

| year = 2007

| title = High resolution relative localisation using two cameras

| journal = Journal of Robotics and Autonomous Systems

| volume = 55

| issue = 9

| pages = 685–692

| doi = 10.1016/j.robot.2007.05.008

}}
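The translational part of this idea can be sketched with OpenCV's cv2.phaseCorrelate (a real OpenCV function); recovering the rotation as well, as visiodometry does, would need an additional step such as correlating log-polar resampled images:

<syntaxhighlight lang="python">
import cv2
import numpy as np

def translation_between(frame_a, frame_b):
    """Estimate the planar shift between two grayscale frames by phase
    correlation; no features are extracted or matched."""
    a = np.float32(frame_a)
    b = np.float32(frame_b)
    # A Hanning window suppresses edge effects in the underlying FFT.
    win = cv2.createHanningWindow(a.shape[::-1], cv2.CV_32F)
    (dx, dy), response = cv2.phaseCorrelate(a, b, win)
    return dx, dy, response  # shift in pixels plus a confidence response
</syntaxhighlight>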

Egomotion

[[File:Egomotion-odometry.gif|thumb]]

Egomotion is defined as the 3D motion of a camera within an environment.{{cite journal

| author = Irani, M. |author2=Rousso, B. |author3=Peleg, S.

| title = Recovery of Ego-Motion Using Image Stabilization

| url = http://www.vision.huji.ac.il/papers/ego-mtn-cvpr94.pdf

| journal = IEEE Computer Society Conference on Computer Vision and Pattern Recognition

| pages = 21–23

|date=June 1994

| access-date = 7 June 2010

}} In the field of computer vision, egomotion refers to estimating a camera's motion relative to a rigid scene.{{cite journal

| author = Burger, W.

|author2=Bhanu, B.

|s2cid=206418830

| title = Estimating 3D egomotion from perspective image sequence

| journal = IEEE Transactions on Pattern Analysis and Machine Intelligence

| volume = 12

| issue = 11

| pages = 1040–1058

|date=Nov 1990

| doi=10.1109/34.61704

}} An example of egomotion estimation would be estimating a car's moving position relative to lines on the road or street signs being observed from the car itself. The estimation of egomotion is important in autonomous robot navigation applications.{{cite journal

| author = Shakernia, O. |author2=Vidal, R. |author3=Shankar, S.

|s2cid=5494756 | title = Omnidirectional Egomotion Estimation From Back-projection Flow

| url = http://cis.jhu.edu/~rvidal/publications/OMNIVIS03-backflow.pdf

| journal = Conference on Computer Vision and Pattern Recognition Workshop

| volume = 7

| pages = 82

| year = 2003

| access-date = 7 June 2010

|doi=10.1109/CVPRW.2003.10074 |citeseerx=10.1.1.5.8127 }}

=Overview=

The goal of estimating the egomotion of a camera is to determine the 3D motion of that camera within the environment using a sequence of images taken by the camera.{{cite journal|author=Tian, T. |author2=Tomasi, C. |author3=Heeger, D. |title=Comparison of Approaches to Egomotion Computation |url=http://www.cs.duke.edu/~tomasi/papers/tian/tianCvpr96.pdf |journal=IEEE Computer Society Conference on Computer Vision and Pattern Recognition |pages=315 |year=1996 |access-date=7 June 2010 |url-status=dead |archive-url=https://web.archive.org/web/20080808123021/http://www.cs.duke.edu/%7Etomasi/papers/tian/tianCvpr96.pdf |archive-date=August 8, 2008 }} This is typically done with visual odometry techniques: feature detection is used to construct an optical flow from two image frames in a sequence generated from either single cameras or stereo cameras.{{cite journal

| author = Milella, A.

| author2 = Siegwart, R.

| title = Stereo-Based Ego-Motion Estimation Using Pixel Tracking and Iterative Closest Point

| url = http://asl.epfl.ch/aslInternalWeb/ASL/publications/uploadedFiles/21_amilella_EgoMotion_rev_publication.pdf

| journal = IEEE International Conference on Computer Vision Systems

| pages = 21

| date = January 2006

| access-date = 7 June 2010

| archive-url = https://web.archive.org/web/20100917151342/http://asl.epfl.ch/aslInternalWeb/ASL/publications/uploadedFiles/21_amilella_EgoMotion_rev_publication.pdf

| archive-date = 17 September 2010

| url-status = dead

}} Using stereo image pairs for each frame helps reduce error and provides additional depth and scale information.{{cite journal

| author = Olson, C. F.

|author2=Matthies, L. |author3=Schoppers, M. |author4=Maimone, M.W.

|date=June 2003

| title = Rover navigation using stereo ego-motion

| journal = Robotics and Autonomous Systems

| volume = 43

| issue = 4

| pages = 215–229

| url = http://faculty.washington.edu/cfolson/papers/pdf/ras03.pdf

| access-date = 7 June 2010

| doi=10.1016/s0921-8890(03)00004-6

}}{{cite conference |author=Sudin Dinesh |author2=Koteswara Rao, K. |author3=Unnikrishnan, M. |author4=Brinda, V. |author5=Lalithambika, V.R. |author6=Dhekane, M.V. |title=Improvements in Visual Odometry Algorithm for Planetary Exploration Rovers |url=https://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6749359 |conference=IEEE International Conference on Emerging Trends in Communication, Control, Signal Processing & Computing Applications (C2SPCA) |year=2013 }}
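As a sketch of where the metric scale comes from: in a rectified stereo pair the depth of a pixel follows from its disparity as Z = fB/d, with focal length f and baseline B. The snippet below uses OpenCV's StereoSGBM matcher (a real OpenCV class), with the focal length and baseline assumed known from calibration:

<syntaxhighlight lang="python">
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px, baseline_m):
    """Depth map from a rectified stereo pair via Z = f * B / disparity.
    The metric baseline B is what gives stereo VO absolute scale, which
    monocular VO cannot observe."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=9)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan  # invalid or occluded pixels
    return focal_px * baseline_m / disp
</syntaxhighlight>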

Features are detected in the first frame, and then matched in the second frame. This information is then used to make the optical flow field for the detected features in those two images. The optical flow field illustrates how features diverge from a single point, the focus of expansion. The focus of expansion can be detected from the optical flow field, indicating the direction of the motion of the camera, and thus providing an estimate of the camera motion.
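Under pure forward translation, each flow vector lies on the line joining the focus of expansion to its pixel, so the FOE can be recovered by least squares. The following is a minimal illustrative sketch of that construction, not a published algorithm:

<syntaxhighlight lang="python">
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion from a sparse flow field, assuming
    pure forward translation: each flow vector (u, v) at pixel (x, y) must
    be parallel to (x - ex, y - ey), giving the linear constraint
    v*ex - u*ey = v*x - u*y on the FOE (ex, ey)."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (ex, ey) in pixel coordinates
</syntaxhighlight>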

There are other methods of extracting egomotion information from images as well, including a method that avoids feature detection and optical flow fields and directly uses the image intensities.


References