Muesli: Combining Improvements in Policy Optimization

04/13/2021
by Matteo Hessel, et al.

We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.
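
For intuition, here is a minimal sketch (in JAX) of the general recipe the abstract describes: a policy-gradient update regularized toward a prior policy, combined with an auxiliary loss for learning a model of the environment. This is not the paper's exact objective; the tiny network `apply_fn`, the coefficients `beta_kl` and `aux_weight`, and the reward-only prediction head are illustrative assumptions, not Muesli's actual architecture or loss.

```python
import jax
import jax.numpy as jnp

def apply_fn(params, obs):
    # Illustrative stand-in network: a shared linear layer feeding a
    # policy head (logits) and a scalar reward-prediction head.
    h = jnp.tanh(obs @ params["w_h"])
    logits = h @ params["w_pi"]
    reward_pred = (h @ params["w_r"])[:, 0]
    return logits, reward_pred

def loss_fn(params, obs, actions, advantages, prior_logits, reward_targets):
    logits, reward_pred = apply_fn(params, obs)
    log_probs = jax.nn.log_softmax(logits)
    chosen_logp = jnp.take_along_axis(log_probs, actions[:, None], axis=-1)[:, 0]

    # Policy-gradient term weighted by advantage estimates.
    pg_loss = -jnp.mean(chosen_logp * advantages)

    # KL(prior || new) regularizer keeps the update close to a prior policy.
    prior_probs = jax.nn.softmax(prior_logits)
    kl = jnp.sum(prior_probs * (jax.nn.log_softmax(prior_logits) - log_probs), axis=-1)

    # Auxiliary model loss: predict observed rewards as a simple stand-in
    # for the model-learning auxiliary loss mentioned in the abstract.
    model_loss = jnp.mean((reward_pred - reward_targets) ** 2)

    beta_kl, aux_weight = 1.0, 1.0  # illustrative coefficients
    return pg_loss + beta_kl * jnp.mean(kl) + aux_weight * model_loss

grad_fn = jax.grad(loss_fn)  # gradients w.r.t. params, for any optimizer step
```

Note that, as the abstract emphasizes, acting uses only the policy head: the model is trained as an auxiliary target rather than being unrolled in a deep search at decision time.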
