Abstract
Learning complex locomotion and manipulation tasks presents significant challenges, often requiring extensive engineering of, e.g., reward functions or curricula to provide meaningful feedback to the Reinforcement Learning (RL) algorithm. This paper proposes an intrinsically motivated RL approach to reduce task-specific engineering. The desired task is encoded in a single sparse reward, i.e., a reward of "+1" is given if the task is achieved. Intrinsic motivation enables learning by guiding exploration toward the sparse reward signal. Specifically, we adapt the idea of Random Network Distillation (RND) to the robotics domain to learn holistic motion control policies involving simultaneous locomotion and manipulation. We investigate opening doors as an exemplary task for robotic applications. A second task involving package manipulation from a table to a bin highlights the generalization capabilities of the presented approach. Finally, the resulting RL policies are executed in real-world experiments on a wheeled-legged robot in biped mode. We experienced no failure in our experiments, which consisted of opening push doors (over 15 times in a row) and manipulating packages (over 5 times in a row).
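The core RND idea referenced in the abstract can be illustrated with a minimal sketch: a predictor network is trained to imitate a fixed, randomly initialized target network, and the prediction error serves as an intrinsic reward that is high for novel observations and decays as they become familiar. All network sizes and learning rates below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(in_dim, hidden, out_dim):
    # Parameters of a small two-layer network with random weights.
    return {
        "W1": rng.normal(0, 1.0 / np.sqrt(in_dim), (in_dim, hidden)),
        "W2": rng.normal(0, 1.0 / np.sqrt(hidden), (hidden, out_dim)),
    }

def forward(net, x):
    return np.tanh(x @ net["W1"]) @ net["W2"]

obs_dim, hidden, emb_dim = 8, 32, 16          # illustrative sizes
target = make_net(obs_dim, hidden, emb_dim)    # frozen random target
predictor = make_net(obs_dim, hidden, emb_dim) # trained to imitate target

def intrinsic_reward(obs):
    # Squared prediction error against the frozen target:
    # large for novel observations, shrinks as the predictor learns.
    err = forward(predictor, obs) - forward(target, obs)
    return float(np.mean(err ** 2))

def train_predictor(obs, lr=1e-2):
    # One SGD step on the squared prediction error (backprop by hand).
    h = np.tanh(obs @ predictor["W1"])
    pred = h @ predictor["W2"]
    tgt = forward(target, obs)
    d = 2.0 * (pred - tgt) / pred.size         # dL/dpred
    dW2 = np.outer(h, d)
    dh = (predictor["W2"] @ d) * (1.0 - h ** 2)
    dW1 = np.outer(obs, dh)
    predictor["W2"] -= lr * dW2
    predictor["W1"] -= lr * dW1

obs = rng.normal(size=obs_dim)
r_before = intrinsic_reward(obs)
for _ in range(200):
    train_predictor(obs)
r_after = intrinsic_reward(obs)  # reward has decayed for the familiar observation
```

In an RL loop, this intrinsic reward would be added to the sparse "+1" task reward, so exploration is driven toward states the predictor has not yet learned to model.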
Permanent link: https://doi.org/10.3929/ethz-b-000650515
Publication status: published
Book title: Proceedings of The 7th Conference on Robot Learning
Journal / series: Proceedings of Machine Learning Research
Publisher: PMLR
Subject: Curiosity; Reinforcement learning; Wheeled-legged robots
Organisational unit: 09570 - Hutter, Marco / Hutter, Marco
Funding:
852044 - Learning Mobility for Real Legged Robots (EC)
166232 - Data-driven control approaches for advanced legged locomotion (SNF)
ETH Bibliography: yes