AndroidEnv: The Android Learning Environment

Abstract

We introduce AndroidEnv, an open-source platform for Reinforcement Learning (RL) research built on top of the Android ecosystem. AndroidEnv allows RL agents to interact with a wide variety of apps and services commonly used by humans through a universal touchscreen interface. Since agents train on a realistic simulation of an Android device, they have the potential to be deployed on real devices. In this report, we give an overview of the environment, highlighting the significant features it provides for research, and we present an empirical evaluation of some popular reinforcement learning agents on a set of tasks built on this platform.


Authors' Notes

In recent years, the reinforcement learning (RL) research community has made significant progress in the pursuit of general-purpose learning algorithms. The increasing complexity of environments has driven the development of novel algorithms and agents such as DQN (Atari), AlphaGo (Go), PPO (MuJoCo), and AlphaStar (StarCraft II). To advance the state of the art even further, researchers seek new and more stimulating environments to tackle.

We're excited to introduce AndroidEnv, a platform that allows agents to interact with an Android device and solve custom tasks built on top of the Android OS. In AndroidEnv, an agent makes decisions based on the images displayed on the screen, and navigates the interface through touchscreen actions and gestures, just as a human would.
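To make this concrete, AndroidEnv exposes the standard dm_env interface, so a training loop looks like any other dm_env loop. The sketch below runs a random agent; the loader arguments are trimmed and illustrative (the repository documents the full set), and the dict-valued action spec, an action type plus a touch position in [0, 1] x [0, 1], follows the interface described in our technical report.

    import numpy as np
    from android_env import loader

    # Illustrative loader call: real setups also point at an emulator, SDK
    # and ADB installation; see the repository for the full signature.
    env = loader.load(
        task_path='/path/to/task.textproto',  # task definition file
        avd_name='my_avd',                    # Android Virtual Device to run
    )

    action_spec = env.action_spec()  # dict: 'action_type', 'touch_position'
    timestep = env.reset()

    while not timestep.last():
        # A raw action: touch/lift plus a screen coordinate in [0, 1]^2.
        action = {
            'action_type': np.int32(
                np.random.randint(action_spec['action_type'].num_values)),
            'touch_position': np.random.uniform(size=2).astype(np.float32),
        }
        timestep = env.step(action)  # observation includes the screen 'pixels'

    env.close()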

With access to the entire Android OS, the set of possible services and applications with which the agent could interact is virtually unlimited. For example, an agent might browse the internet, open the YouTube app, set an alarm or play a game. The possibility for RL agents to operate on a real-world platform used by billions of people on a daily basis opens up novel research opportunities.

Apart from its flexibility and real-world relevance, AndroidEnv is a particularly appealing domain for RL research thanks to the diversity of its features. Learning to solve tasks in AndroidEnv requires an agent to overcome several types of challenges that have long interested researchers:

  • Transfer and generalization: The observation and action spaces are the same across all applications, creating many opportunities to transfer knowledge across tasks of a very different nature.
  • Temporal abstraction: Handling the immense native action space requires an agent to learn gestures and flexible ways of composing raw actions over time (see the sketch after this list).
  • Real-time dynamics: Services and applications run in real time: the environment does not pause while the agent selects its next action, making the dynamics similar to those of robotics control tasks.
  • Scale: The sheer size of the observation and action spaces poses an interesting scaling problem for RL agents.
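To illustrate the temporal-abstraction point, here is a small sketch of how one high-level gesture, a horizontal swipe, can be decomposed into the raw touch events the environment consumes. The swipe helper and the TOUCH/LIFT constants are our own illustration rather than identifiers imported from the library.

    import numpy as np

    # Action-type convention from the report: press (TOUCH) or release (LIFT).
    TOUCH, LIFT = 0, 1

    def swipe(start, end, num_steps=10):
        """Yields the raw actions tracing a straight-line swipe gesture.

        `start` and `end` are (x, y) screen coordinates in [0, 1]^2. A swipe
        is just a sequence of TOUCH actions along the path, ended by a LIFT:
        a temporally abstract action built from native ones.
        """
        start, end = np.asarray(start), np.asarray(end)
        for t in np.linspace(0.0, 1.0, num_steps):
            position = (1.0 - t) * start + t * end
            yield {'action_type': np.int32(TOUCH),
                   'touch_position': position.astype(np.float32)}
        yield {'action_type': np.int32(LIFT),
               'touch_position': end.astype(np.float32)}

    # Example: swipe left-to-right across the middle of the screen.
    # for action in swipe((0.2, 0.5), (0.8, 0.5)):
    #     timestep = env.step(action)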

Android’s large ecosystem opens up the possibility of defining varied tasks, enabling agents to learn to achieve different types of objectives on the same platform. For example, one might set the goal of finding directions to the park, booking a flight, or maximizing the score in a game. AndroidEnv provides a straightforward mechanism for flexibly creating such custom tasks based on any Android application. In addition to clear instructions for doing so, we're releasing a set of example tasks demonstrating the range of possibilities in AndroidEnv. These include tasks defined on common Android utilities such as the Clock app, as well as well-known games such as 2048, Solitaire, or Chess.
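As a sketch of how little changes between tasks: switching applications amounts to pointing the loader at a different task definition file. The paths below are hypothetical stand-ins for the released examples; each file bundles the app to launch, how episodes reset, and how rewards are extracted.

    from android_env import loader

    # Hypothetical file names standing in for the released example tasks.
    TASKS = [
        'tasks/clock_set_timer.textproto',
        'tasks/game_2048.textproto',
        'tasks/solitaire.textproto',
    ]

    for task_path in TASKS:
        env = loader.load(task_path=task_path, avd_name='my_avd')
        timestep = env.reset()
        # ...run any agent here: the observation/action interface is the
        # same regardless of which application the task targets.
        env.close()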

We are also excited to have started a collaboration with Midjiwan, creators of The Battle of Polytopia, to integrate their game as an AndroidEnv task*. We find this game a particularly interesting challenge thanks to features such as the need for long-term planning, imperfect information, diverse UI elements, and non-determinism.

We're releasing AndroidEnv to the community at large, in the hope that its unique features will make it a useful complement to the set of existing RL environments and help push the boundaries of RL research further.

For a more detailed description of the platform, see our technical report on arXiv, or take a look at our GitHub repository.

*only available internally at DeepMind at the moment.
