Sample-efficient Cross-Entropy Method for Real-time Planning

08/14/2020
by Cristina Pinneri, et al.

Trajectory optimizers for model-based reinforcement learning, such as the Cross-Entropy Method (CEM), can yield compelling results even in high-dimensional control tasks and sparse-reward environments. However, their sampling inefficiency prevents them from being used for real-time planning and control. We propose an improved version of the CEM algorithm for fast planning, with novel additions including temporally-correlated actions and memory, requiring 2.7-22x fewer samples and yielding a 1.2-10x performance increase in high-dimensional control problems.

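The abstract only names the two additions at a high level. As a rough illustration, here is a minimal sketch, assuming a standard CEM planning loop, of how temporally-correlated (colored-noise) action sampling and a simple elite memory could be combined. All names (`colored_noise`, `icem_plan`, the cost-function interface) and the FFT-based noise generator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def colored_noise(beta, size, horizon, rng):
    """Sample temporally-correlated noise with power spectral density ~ 1/f^beta.

    Returns an array of shape (size, horizon); beta = 0 recovers white noise.
    """
    freqs = np.fft.rfftfreq(horizon)
    freqs[0] = freqs[1]  # avoid division by zero at the DC component
    amplitude = freqs ** (-beta / 2.0)
    # Random complex spectrum shaped by the desired amplitude envelope
    phases = rng.standard_normal((size, freqs.size)) + 1j * rng.standard_normal((size, freqs.size))
    noise = np.fft.irfft(amplitude * phases, n=horizon, axis=-1)
    # Normalize each sample path to roughly unit variance
    return noise / (noise.std(axis=-1, keepdims=True) + 1e-8)


def icem_plan(cost_fn, act_dim, horizon, iterations=3, pop_size=64,
              elite_frac=0.1, beta=2.0, init_std=0.5, rng=None):
    """CEM-style planner sketch with colored-noise sampling and elite memory."""
    rng = rng or np.random.default_rng()
    mean = np.zeros((horizon, act_dim))
    std = np.full((horizon, act_dim), init_std)
    n_elite = max(1, int(pop_size * elite_frac))
    elites = None
    for _ in range(iterations):
        # Temporally-correlated perturbations, one noise trace per action dimension
        noise = np.stack(
            [colored_noise(beta, pop_size, horizon, rng) for _ in range(act_dim)], axis=-1
        )
        samples = mean + std * noise  # shape (pop_size, horizon, act_dim)
        if elites is not None:
            # Simplified "memory": reintroduce previous elites into the sampling pool
            samples[:n_elite] = elites
        costs = np.array([cost_fn(s) for s in samples])
        elites = samples[np.argsort(costs)[:n_elite]]
        mean, std = elites.mean(axis=0), elites.std(axis=0)
    return mean  # execute the first action; the rest can warm-start the next step


# Toy usage: steer a 1-D integrator toward position 1.0 over the horizon.
if __name__ == "__main__":
    target = 1.0
    cost = lambda plan: (plan.cumsum(axis=0)[-1, 0] - target) ** 2 + 1e-3 * (plan ** 2).sum()
    plan = icem_plan(cost, act_dim=1, horizon=20)
    print("first action:", plan[0])
```

In this sketch, larger values of the assumed exponent `beta` produce smoother action sequences, which is the intuition behind temporally-correlated sampling: fewer rollouts are wasted on jittery, self-cancelling action plans.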