Simple Sensor Intentions for Exploration

Abstract

Modern reinforcement learning algorithms can learn solutions to increasingly difficult control problems while at the same time reducing the amount of prior knowledge needed for their application. One of the remaining challenges is the definition of reward schemes that appropriately facilitate exploration without biasing the solution in undesirable ways, and that can be implemented on real robotic systems without expensive instrumentation. In this paper we focus on a setting in which goal tasks are defined via simple sparse rewards, and exploration is facilitated via agent-internal auxiliary tasks.

We introduce the idea of simple sensor intentions (SSIs) as a generic way to define auxiliary tasks. SSIs reduce the amount of prior knowledge that is required to define suitable rewards. They can further be computed directly from raw sensor streams and thus do not require expensive and possibly brittle state estimation on real systems.

We demonstrate that a learning system based on simple rewards computed from statistics of raw images and basic sensors (such as touch) can solve complex robotic tasks in simulation and in real-world settings. In particular, we show that a real robotic arm can learn to grasp objects and solve a Ball-in-a-Cup task from scratch, when only raw sensor signals are used for both controller input and in the auxiliary reward definition.


Authors' Notes
By simple color-masking, high-level image statistics can be derived. Rewarding an agent for deliberately changing these statistics leads to diverse exploration and interesting behavior, such as grasping or lifting objects.
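
As a rough sketch of this idea (plain NumPy; the color range, the particular statistics, and the reward shape below are illustrative assumptions, not the paper's exact formulation), an auxiliary reward can score how much the statistics of a color mask change between consecutive frames:

```python
import numpy as np

def color_mask(image, lower, upper):
    """Boolean mask of pixels whose RGB values fall inside [lower, upper]."""
    return np.all((image >= np.asarray(lower)) & (image <= np.asarray(upper)), axis=-1)

def mask_statistics(mask):
    """High-level statistics of a color mask: centroid (x, y) and mask size,
    each normalized to [0, 1] so the components are on comparable scales."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(3)  # masked object not visible in this frame
    h, w = mask.shape
    return np.array([xs.mean() / w, ys.mean() / h, xs.size / mask.size])

def ssi_reward(prev_stats, stats):
    """Auxiliary reward for deliberately changing the mask statistics."""
    return float(np.linalg.norm(stats - prev_stats))

# Minimal per-transition usage with random stand-in frames (HxWx3 RGB):
rng = np.random.default_rng(0)
prev_frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
lower, upper = (150, 0, 0), (255, 90, 90)  # hypothetical "red object" range
prev_stats = mask_statistics(color_mask(prev_frame, lower, upper))
stats = mask_statistics(color_mask(frame, lower, upper))
print(ssi_reward(prev_stats, stats))
```

In a full agent, an auxiliary reward of this kind would be optimized alongside the sparse external task reward; because moving the masked object is the easiest way to change its centroid or apparent size, the agent is pushed toward physical interaction such as pushing, grasping, or lifting.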

Example Skills

The skills are learned from scratch, from pixels and proprioception only. The external rewards are sparse, and simple sensor intentions (SSIs) are used as the only auxiliary tasks.


Video

Simple Sensor Intentions for Exploration (2 mins)
