Open access
Date
2019-04
Type
Journal Article
Abstract
Place recognition is an essential capability for robotic autonomy. While ground robots observe the world from generally similar viewpoints over repeated visits, other robots, such as small aircraft, experience far more varied viewpoints, requiring place recognition for images captured from very wide baselines. While traditional feature-based methods fail dramatically under extreme viewpoint changes, deep learning approaches demand heavy runtime processing. Driven by the need for cheaper alternatives able to run on computationally restricted platforms, such as small aircraft, this work proposes a novel real-time pipeline employing depth completion on the sparse feature maps that are computed during robot localization and mapping anyway, enabling place recognition under extreme viewpoint changes. The proposed approach demonstrates unprecedented precision-recall rates on challenging benchmarking datasets as well as our own synthetic and real datasets with up to 45° difference in viewpoint. In particular, our synthetic datasets are, to the best of our knowledge, the first to isolate the challenge of viewpoint changes for place recognition, addressing a crucial gap in the literature. All of the new datasets are publicly available to aid benchmarking.
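To make the depth-completion idea concrete, the following minimal sketch (our illustration, not the authors' pipeline) densifies the sparse depths available at SLAM feature locations into a full depth map by interpolation; the function name and the choice of linear-plus-nearest interpolation are assumptions for illustration only.

    import numpy as np
    from scipy.interpolate import griddata

    def complete_sparse_depth(keypoints_uv, depths, image_shape):
        """Densify sparse per-feature depths into a full depth map.

        keypoints_uv: (N, 2) array of pixel coordinates (u, v) of SLAM features.
        depths:       (N,) array of estimated depths at those features.
        image_shape:  (height, width) of the target depth map.
        """
        h, w = image_shape
        grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
        # Linear interpolation inside the convex hull of the features;
        # nearest-neighbour fill outside it so the map has no holes.
        dense = griddata(keypoints_uv, depths, (grid_u, grid_v), method="linear")
        holes = np.isnan(dense)
        dense[holes] = griddata(keypoints_uv, depths,
                                (grid_u[holes], grid_v[holes]), method="nearest")
        return dense

    # Example: 200 random features in a 480x640 image (synthetic data for the demo)
    rng = np.random.default_rng(0)
    uv = rng.uniform([0, 0], [640, 480], size=(200, 2))
    z = rng.uniform(2.0, 10.0, size=200)
    depth_map = complete_sparse_depth(uv, z, (480, 640))

A dense depth map recovered this way allows an image from one viewpoint to be warped toward another, which is one plausible reason depth completion helps place recognition across wide baselines.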
Permanent link
https://doi.org/10.3929/ethz-b-000321767
Publication status
published
Journal / series
IEEE Robotics and Automation Letters
Volume
Pages / Article No.
Publisher
IEEE
Subject
Aerial Systems: Perception and Autonomy; Visual-Based Navigation; SLAM; Localization; Place Recognition
Organisational unit
09559 - Chli, Margarita (former)
Funding
157585 - Collaborative vision-based perception for teams of (aerial) robots (SNF)