Open access
Date
2020-04
Type
Journal Article
Abstract
Navigation in natural outdoor environments requires a robust and reliable traversability classification method to handle the plethora of situations a robot can encounter. Binary classification algorithms perform well in their native domain but tend to provide overconfident predictions when presented with out-of-distribution samples, which can lead to catastrophic failure when navigating unknown environments. We propose to overcome this issue by using anomaly detection on multi-modal images for traversability classification, which is easily scalable by training in a self-supervised fashion from robot experience. In this work, we evaluate multiple anomaly detection methods with a combination of uni- and multi-modal images in their performance on data from different environmental conditions. Our results show that an approach using a feature extractor and normalizing flow with an input of RGB, depth and surface normals performs best. It achieves over 95% area under the ROC curve and is robust to out-of-distribution samples.
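The abstract's core idea — scoring traversability by how likely a multi-modal feature vector is under a density model fit to in-distribution robot experience, and flagging low-likelihood inputs as anomalies — can be sketched minimally. The sketch below uses a Gaussian density over synthetic feature vectors as a simple stand-in for the paper's feature extractor and normalizing flow; all data, dimensions, and thresholds here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-distribution features; in the paper these would come
# from a feature extractor applied to RGB, depth, and surface-normal images.
train_feats = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

# Fit a Gaussian density as a simple stand-in for the normalizing flow.
mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(8)
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Squared Mahalanobis distance: higher means more anomalous."""
    d = x - mu
    return float(d @ cov_inv @ d)

# Threshold set from the training scores themselves (illustrative choice).
train_scores = np.array([anomaly_score(f) for f in train_feats])
threshold = np.percentile(train_scores, 95)

in_dist = rng.normal(0.0, 1.0, size=8)    # resembles training data
out_dist = rng.normal(6.0, 1.0, size=8)   # out-of-distribution sample

print(anomaly_score(out_dist) > threshold)  # → True (flagged as anomaly)
```

Because the score is calibrated only on in-distribution experience, an out-of-distribution sample is flagged rather than receiving an overconfident traversable/untraversable label — the failure mode of binary classifiers that the paper targets.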
Permanent link
https://doi.org/10.3929/ethz-b-000392927
Publication status
published
External links
Journal / series
IEEE Robotics and Automation Letters
Volume
Pages / Article No.
Publisher
IEEE
Subject
Visual-Based Navigation; Visual Learning; RGB-D Perception; AI-Based Methods
Organisational unit
09570 - Hutter, Marco / Hutter, Marco
Funding
780883 - subTerranean Haptic INvestiGator (EC)
Related publications and datasets
Is supplemented by: https://doi.org/10.3929/ethz-b-000389950