Observe and Look Further: Achieving Consistent Performance on Atari
Abstract
Despite significant advances in the field of deep Reinforcement Learning (RL),
today’s algorithms still fail to learn human-level policies consistently over a set of
diverse tasks such as Atari 2600 games. We identify three key challenges that any
algorithm needs to master in order to perform well on all games: processing diverse
reward distributions, reasoning over long time horizons, and exploring efficiently.
In this paper, we propose an algorithm that addresses each of these challenges and
is able to learn human-level policies on nearly all Atari games. A new transformed
Bellman operator allows our algorithm to process rewards of varying densities
and scales; an auxiliary temporal consistency loss allows us to train stably using a
discount factor of γ = 0.999 (instead of γ = 0.99), extending the effective planning
horizon by an order of magnitude; and we ease the exploration problem by using
human demonstrations that guide the agent towards rewarding states. When tested
on a set of 42 Atari games, our algorithm exceeds the performance of an average
human on 40 of them using a single, common set of hyperparameters. Furthermore, it is
the first deep RL algorithm to solve the first level of MONTEZUMA’S REVENGE.
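The stated order-of-magnitude gain in planning horizon can be read off from the standard geometric-series estimate of the effective horizon, roughly 1/(1 − γ); the short calculation below is an illustrative sketch using only the two discount factors quoted in the abstract:
\[
  H_{\text{eff}}(\gamma) \;\approx\; \frac{1}{1-\gamma},
  \qquad H_{\text{eff}}(0.99) \approx 100 \text{ steps},
  \qquad H_{\text{eff}}(0.999) \approx 1000 \text{ steps},
\]
so raising γ from 0.99 to 0.999 lengthens the effective horizon from about 100 to about 1000 steps, i.e. by a factor of ten.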