On Catastrophic Interference in Atari 2600 Games

02/28/2020
by William Fedus, et al.

Model-free deep reinforcement learning algorithms suffer from poor sample efficiency: learning reliable policies generally requires a vast amount of interaction with the environment. One hypothesis is that catastrophic interference between different segments of an environment impedes learning. In this paper, we perform a large-scale empirical study of catastrophic interference in the Arcade Learning Environment and find that learning particular game segments frequently degrades performance on previously learned segments. In what we term the Memento observation, we show that an identically parameterized agent, spawned from the state where the original agent plateaued, reliably makes further progress. This phenomenon is general: we find consistent performance boosts across architectures, learning algorithms, and environments. Our results indicate that eliminating catastrophic interference can improve the performance and data efficiency of deep reinforcement learning algorithms.
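To make the setup concrete, here is a minimal Python sketch of the Memento-style experiment the abstract describes. All names and numbers (train_until_plateau, the scores) are hypothetical stubs standing in for a real training loop; this is not the authors' implementation.

# Hypothetical stub standing in for a full training loop: trains an agent
# whose episodes begin at start_state until its score stops improving,
# and returns (best_score, environment_state_at_best_score).
def train_until_plateau(agent_name, start_state):
    best_score = 2500.0 if start_state == "initial" else 1200.0  # illustrative numbers only
    return best_score, f"state-where-{agent_name}-plateaued"

# Phase 1: the original agent trains from the game's initial state until
# its score plateaus.
score_a, plateau_state = train_until_plateau("original", "initial")

# Phase 2 (the Memento observation): a second agent with the same
# architecture and hyperparameters, but its own weights and replay buffer,
# starts every episode from the original agent's plateau state. Because it
# never trains on earlier segments, its updates cannot interfere with them.
score_b, _ = train_until_plateau("memento", plateau_state)

print(f"original agent plateau score: {score_a}")
print(f"further progress by the Memento agent: {score_b}")

The key design point in this sketch is that the second agent shares only the parameterization, not the learned weights or experience, and its episodes begin where the first agent's progress stopped, so gradient updates on the new segment cannot overwrite what was learned on earlier ones.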
