We're excited to introduce AndroidEnv, a platform that allows agents to interact with an Android device and solve custom tasks built on top of the Android OS. In AndroidEnv, an agent makes decisions based on images displayed on the screen, and navigates the interface through touchscreen actions and gestures just like humans.
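To make the interface concrete, here is a minimal, self-contained sketch of the kind of action an agent emits at each step: a touchscreen action type (touch, lift, repeat) paired with a normalised (x, y) screen coordinate. This is an illustrative mock, not the real AndroidEnv API; the `ActionType` enum and `random_touch_action` helper are assumptions for this example.

```python
from enum import IntEnum

import numpy as np


class ActionType(IntEnum):
    """Illustrative stand-in for touchscreen action types."""
    TOUCH = 0   # press the screen at touch_position
    LIFT = 1    # lift the finger off the screen
    REPEAT = 2  # repeat the previous action


def random_touch_action(rng: np.random.Generator) -> dict:
    """Sample an action as a dict: an action type plus a
    normalised (x, y) screen position in [0, 1]."""
    return {
        "action_type": np.int32(rng.integers(len(ActionType))),
        "touch_position": rng.random(2, dtype=np.float32),
    }


rng = np.random.default_rng(0)
action = random_touch_action(rng)
print(action["action_type"], action["touch_position"])
```

A gesture such as a swipe is then just a sequence of such actions: several TOUCH steps at successive positions, followed by a LIFT.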

We are releasing AndroidEnv to the community at large in the hope that its unique features will make it a useful complement to the existing set of RL environments, helping to push the boundaries of RL research further.

For a more detailed description of the platform, see our GitHub repository.


28 May 2021