RL Unplugged - An Offline RL Benchmark Suite

RL Unplugged (RLU) is an offline-RL benchmark suite based on the DeepMind Control Suite, Locomotion, and Atari environments. Our datasets are generated by recording the transitions produced by a trained online agent. We introduce this new collection of datasets to provide a challenge for offline RL methods for years to come. RLU's collection of datasets illustrates, and provides data to measure progress on, two difficult problems: 1) offline reinforcement learning (i.e., learning a policy from logged data), and 2) offline model selection (i.e., ranking a set of policies given only access to recorded data). These two problems represent key hurdles to applying RL in many domains. We hope that the scale and diversity of RL Unplugged offer unparalleled opportunities to researchers in the ML community working on offline methods.
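To make the first problem concrete, here is a minimal sketch of offline policy learning from logged transitions, assuming a synthetic dataset of (observation, action) pairs standing in for RLU data; the behavior-cloning-by-nearest-centroid approach is illustrative only and is not RLU's API or any particular baseline from the suite.

```python
import numpy as np

# Hypothetical logged dataset: transitions recorded from an online agent.
# Shapes and the data-generating rule below are illustrative, not from RLU.
rng = np.random.default_rng(0)
n_transitions, obs_dim = 1000, 4

observations = rng.normal(size=(n_transitions, obs_dim))
# Synthetic "behavior policy": the logged action depends on one feature.
actions = (observations[:, 0] > 0).astype(int)  # actions in {0, 1}

# Behavior cloning, the simplest offline RL baseline: fit a classifier that
# maps observations to the logged actions. A per-action mean-observation
# prototype (nearest centroid) stands in for a neural policy here.
centroids = np.stack([observations[actions == a].mean(axis=0)
                      for a in np.unique(actions)])

def policy(obs):
    """Pick the action whose centroid is closest to the observation."""
    dists = np.linalg.norm(centroids - obs, axis=1)
    return int(np.argmin(dists))

# A cloned policy is evaluated only on logged data: measure how often it
# agrees with the behavior policy's recorded actions.
agreement = np.mean([policy(o) == a for o, a in zip(observations, actions)])
```

The second problem, offline model selection, asks the harder question of ranking several such candidate policies using only the recorded data, without ever running them in the environment.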
Open Source

15 Jul 2020