Uncertainty-aware visually-attentive navigation using deep neural networks

Huan Nguyen*, Rasmus Andersen, Evangelos Boukas, Kostas Alexis

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Autonomous navigation and information gathering in challenging environments are demanding: the robot’s sensors may be subject to non-negligible noise, its localization and mapping may suffer significant uncertainty and drift, and performing collision-checking or evaluating utility functions on a map often incurs high computational cost. We propose a learning-based method that tackles this problem efficiently without relying on a map of the environment or the robot’s position. Our method utilizes a Collision Prediction Network (CPN) to predict the collision scores of a set of action sequences, and an Information gain Prediction Network (IPN) to estimate their associated information gain. Both networks assume access to a) the depth image (CPN), or the depth image and the detection mask from any visual method (IPN), b) the robot’s partial state (its linear velocities, z-axis angular velocity, and roll/pitch angles), and c) a library of action sequences. Specifically, the CPN accounts for the estimation uncertainty of the robot’s partial state and the neural network’s epistemic uncertainty by using the Unscented Transform and an ensemble of neural networks. The outputs of the networks are combined with a goal vector to identify the next-best-action sequence. Simulation studies demonstrate the method’s robustness against noisy robot velocity estimates and depth images, alongside its advantages over state-of-the-art methods and baselines in (visually-attentive) navigation tasks. Lastly, multiple real-world experiments are presented, including safe flights at 2.5 m/s in a cluttered corridor and missions inside a dense forest, alongside visually-attentive navigation in industrial and university buildings.
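To make the uncertainty-handling idea concrete, the sketch below illustrates how state-estimate uncertainty (via Unscented Transform sigma points) and epistemic uncertainty (via an ensemble of predictors) could be combined when scoring a library of action sequences. This is a minimal illustrative example, not the paper's implementation: the `model(depth_image, state, actions)` callable, the function names, and the simple weighted averaging are all assumptions made for clarity.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard Unscented Transform sigma points and weights for an n-dim state."""
    n = mean.shape[0]
    sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean]
    for i in range(n):
        pts.append(mean + sqrt_cov[:, i])
        pts.append(mean - sqrt_cov[:, i])
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = np.array([w0] + [wi] * (2 * n))  # weights sum to 1
    return np.stack(pts), weights

def ensemble_collision_scores(ensemble, depth_image, state_mean, state_cov, actions):
    """Score each action sequence, averaging over UT sigma points
    (state-estimate uncertainty) and ensemble members (epistemic uncertainty).
    Each hypothetical `model` maps (depth_image, state, actions) -> per-action scores."""
    pts, weights = sigma_points(state_mean, state_cov)
    scores = np.zeros(len(actions))
    for model in ensemble:                 # epistemic uncertainty
        for w, s in zip(weights, pts):     # state-estimate uncertainty
            scores += w * np.asarray(model(depth_image, s, actions))
    return scores / len(ensemble)
```

In a full pipeline, these averaged collision scores would then be fused with the information-gain estimates and a goal vector to select the next-best-action sequence.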

Original language: English
Journal: International Journal of Robotics Research
Number of pages: 33
ISSN: 0278-3649
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Aerial robots
  • Autonomous navigation
  • Deep neural networks
  • Uncertainty-aware navigation
  • Visually-attentive navigation

