SciELO - Scientific Electronic Library Online
Computación y Sistemas

Online version ISSN 2007-9737 · Print version ISSN 1405-5546

Abstract

MARTINEZ, Geovanni. Experimental Results of Testing a Direct Monocular Visual Odometry Algorithm Outdoors on Flat Terrain under Severe Global Illumination Changes for Planetary Exploration Rovers. Comp. y Sist. [online]. 2018, vol.22, n.4, pp.1581-1593. Epub Feb 10, 2021. ISSN 2007-9737. https://doi.org/10.13053/cys-22-4-2839.

We present the experimental results obtained by testing a direct monocular visual odometry algorithm on a real robotic platform outdoors, on flat terrain, and under severe changes of global illumination. The algorithm was proposed as an alternative to the long-established feature-based stereo visual odometry algorithms. The rover's 3D position is computed by integrating the rover's frame-to-frame 3D motion over time. The frames are taken by a single video camera rigidly attached to the rover, looking to one side and tilted downwards toward the planet's surface. The rover's frame-to-frame 3D motion is estimated directly by maximizing the likelihood function of the intensity differences measured at key observation points, without establishing correspondences between features or solving the optical flow as an intermediate step. The key observation points are image points with high linear intensity gradients. Comparing the results with the corresponding ground-truth data, obtained with a robotic theodolite equipped with a laser range sensor, we concluded that the algorithm delivers the rover's position on average 0.06 seconds after an image has been captured, with an average absolute position error of 0.9% of the distance traveled. These results are quite similar to those reported in the scientific literature for traditional feature-based stereo visual odometry algorithms, which have been used successfully on real rovers both on Earth and on Mars. We believe they represent an important step towards the validation of the algorithm, and they suggest it may be an excellent tool for any autonomous robotic platform, since it could be very helpful in situations where traditional feature-based visual odometry algorithms fail. It may also be an excellent candidate to be fused with other positioning algorithms and/or sensors.
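Two of the steps named in the abstract can be sketched in code: selecting key observation points as pixels with high linear intensity gradients, and integrating frame-to-frame 3D motion into a world pose. This is a minimal illustrative sketch, not the paper's implementation; the gradient criterion (top-fraction threshold) and the 4×4 homogeneous-matrix pose representation are assumptions.

```python
import numpy as np

def key_observation_points(image, top_fraction=0.05):
    """Select pixels with the highest intensity-gradient magnitude.

    Hypothetical sketch: the paper selects points with high linear
    intensity gradients, but the exact threshold/criterion is an
    assumption here.
    """
    gy, gx = np.gradient(image.astype(float))   # row- and column-wise gradients
    mag = np.hypot(gx, gy)                      # gradient magnitude per pixel
    n = max(1, int(top_fraction * mag.size))    # keep the top fraction
    flat_idx = np.argpartition(mag.ravel(), -n)[-n:]
    rows, cols = np.unravel_index(flat_idx, mag.shape)
    return np.column_stack([rows, cols])        # (n, 2) array of (row, col)

def integrate_pose(T_world, T_frame_to_frame):
    """Accumulate a frame-to-frame rigid motion into the rover's pose.

    Both arguments are 4x4 homogeneous transforms; composing them over
    time yields the rover's 3D position, as described in the abstract.
    """
    return T_world @ T_frame_to_frame
```

For example, on an image containing a single vertical step edge, the selected points cluster on the edge columns, and composing two unit forward motions yields a pose translated by two units.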

Keywords: Visual-based Autonomous Navigation; Planetary Rover Localization; Ego-Motion Estimation; Visual Odometry; Experimental Validation; Planetary Robots.
