Abstract
The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments built around this sensor. The system is very robust: we can typically give it a goal position several hundred meters away and expect it to reach it. In this paper we describe the main components of the system, including stereo processing, obstacle and free-space interpretation, long-range perception, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of three years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain.
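For orientation only, the sketch below illustrates two of the stages the abstract enumerates, turning stereo-derived measurements into a traversability grid and planning a route over it. Every name and interface here (`classify_traversability`, `plan_path`, the toy disparity threshold) is a hypothetical placeholder for illustration, not the authors' implementation, which is described in the paper itself.

```python
# Minimal, self-contained sketch: toy traversability classification plus
# grid planning. All functions and thresholds are illustrative assumptions,
# not the LAGR system's actual code.
import heapq
from typing import List, Tuple

Cell = Tuple[int, int]

def classify_traversability(disparity_row: List[float], threshold: float = 2.0) -> List[int]:
    """Toy stand-in for obstacle/free-space interpretation: mark a cell as an
    obstacle (1) when large disparity suggests a nearby surface, else free (0)."""
    return [1 if d > threshold else 0 for d in disparity_row]

def plan_path(grid: List[List[int]], start: Cell, goal: Cell) -> List[Cell]:
    """Uniform-cost search over a 4-connected traversability grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1, (nr, nc), path + [(nr, nc)]))
    return []  # no traversable route found

if __name__ == "__main__":
    # Fake disparity rows stand in for stereo output; a real system would also
    # fuse visual odometry and register local maps over time.
    disparity = [[0.5, 0.5, 3.0, 0.5],
                 [0.5, 0.5, 3.0, 0.5],
                 [0.5, 0.5, 0.5, 0.5],
                 [0.5, 3.0, 3.0, 0.5]]
    grid = [classify_traversability(row) for row in disparity]
    print(plan_path(grid, (0, 0), (3, 3)))
```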
| Original language | English |
| --- | --- |
| Journal | Journal of Field Robotics |
| Volume | 26 |
| Issue number | 1 |
| Pages (from-to) | 88-113 |
| ISSN | 1556-4959 |
| DOIs | |
| Publication status | Published - 2009 |