Diversity Actor-Critic: Sample-Aware Entropy Regularization for Sample-Efficient Exploration

06/02/2020
by Seungyul Han, et al.

Policy entropy regularization is commonly used to improve exploration in deep reinforcement learning (RL). However, it is sample-inefficient in off-policy learning because it ignores the distribution of the samples already stored in the replay buffer. To exploit this previous sample distribution for sample-efficient exploration, we propose sample-aware entropy regularization, which maximizes the entropy of the weighted sum of the policy action distribution and the sample action distribution from the replay buffer. We formulate the problem of sample-aware entropy-regularized policy iteration, prove its convergence, and provide a practical algorithm named diversity actor-critic (DAC), which is a generalization of soft actor-critic (SAC). Numerical results show that DAC outperforms SAC and other state-of-the-art RL algorithms.
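For intuition only, here is a minimal sketch (not taken from the paper) of the mixture-entropy idea for a discrete action space: the regularizer rewards the entropy of a weighted combination of the policy distribution and an empirical action distribution from the replay buffer. The mixture weight `alpha` and the way the buffer distribution `q` is estimated are illustrative assumptions; the paper's actual weighting and continuous-action treatment may differ.

```python
import numpy as np

def sample_aware_entropy(pi, q, alpha=0.5):
    """Entropy of the mixture alpha * pi + (1 - alpha) * q over discrete actions.

    pi    : policy action distribution (probabilities summing to 1)
    q     : empirical action distribution estimated from the replay buffer
    alpha : mixture weight (hypothetical parameter for illustration)
    """
    mix = alpha * pi + (1.0 - alpha) * q
    # Clip to avoid log(0) for actions the mixture never selects.
    mix = np.clip(mix, 1e-12, 1.0)
    return -np.sum(mix * np.log(mix))

# Example: a peaked policy still receives a large entropy bonus when the
# buffer distribution concentrates on *different* actions, since the mixture
# is then spread out -- this is what encourages visiting under-sampled actions.
pi = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.1, 0.8])
print(sample_aware_entropy(pi, q, alpha=0.5))
```

In contrast, plain policy entropy regularization (as in SAC) would score only the entropy of `pi` itself, regardless of what the replay buffer already contains.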
