ORBSLAM2 and Point Cloud Processing towards Autonomous Underwater Robot Navigation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations


Autonomous navigation for underwater robots is often approached through the integration of sophisticated imagery and dead-reckoning sensors to achieve a practical implementation. Image processing has become popular in robotics applications for autonomous navigation, ranging from object recognition and visual odometry to visual Simultaneous Localization and Mapping (vSLAM). In this paper, a minimal instrumentation setup is proposed towards autonomous navigation for underwater robots, based on monocular ORBSLAM2 and point cloud processing in structured environments. ORBSLAM2 is a vSLAM algorithm that generates a point cloud map of Oriented FAST and Rotated BRIEF (ORB) features from video images and localizes the video source within it. We evaluate the feasibility of point cloud processing in the implementation of a basic navigation method. The cloud is processed to abstract the surroundings into 3D planes, over which a trajectory (i.e., transects) is easily generated, and a way-point robot controller drives the robot using the current location as feedback. The basis of the method is tested in a simulated environment.
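The way-point controller described in the abstract could be sketched as a simple proportional law: given the robot's current pose (as localized by ORBSLAM2) and the next way-point on a transect, it outputs forward and turning commands until the way-point is reached. This is a minimal illustrative sketch, not the paper's implementation; the function name, gains `k_lin`/`k_ang`, and tolerance `tol` are all hypothetical.

```python
import math

def waypoint_command(pose, waypoint, k_lin=0.5, k_ang=1.0, tol=0.1):
    """Proportional way-point controller sketch (hypothetical gains).

    pose: (x, y, heading_rad) of the robot, e.g. the vSLAM localization.
    waypoint: (x, y) target on the transect.
    Returns (forward_speed, turn_rate); (0, 0) once within `tol` metres.
    """
    x, y, th = pose
    wx, wy = waypoint
    dx, dy = wx - x, wy - y
    dist = math.hypot(dx, dy)
    if dist < tol:
        return 0.0, 0.0  # way-point reached; advance to the next one
    heading_err = math.atan2(dy, dx) - th
    # wrap the heading error to [-pi, pi] so the robot turns the short way
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    return k_lin * dist, k_ang * heading_err

# Robot at the origin facing +x, way-point 1 m ahead: drive straight.
print(waypoint_command((0.0, 0.0, 0.0), (1.0, 0.0)))  # (0.5, 0.0)
```

In practice the outputs would be saturated to the vehicle's thrust limits, and a supervisor would pop the next transect way-point once the tolerance is met.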

Original language: English
Title of host publication: 2020 Global Oceans 2020
Subtitle of host publication: Singapore - U.S. Gulf Coast
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728154466
State: Published - 5 Oct 2020
Event: 2020 Global Oceans: Singapore - U.S. Gulf Coast, OCEANS 2020 - Biloxi, United States
Duration: 5 Oct 2020 - 30 Oct 2020

Publication series

Name: 2020 Global Oceans 2020: Singapore - U.S. Gulf Coast


Conference: 2020 Global Oceans: Singapore - U.S. Gulf Coast, OCEANS 2020
Country/Territory: United States


Keywords
  • Underwater robots
  • point cloud
  • underwater navigation

