A short variational proof of equivalence between policy gradients and soft Q learning

12/22/2017
by Pierre H. Richemond, et al.

Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven equivalent when Q-learning is relaxed with a softmax and the policy-gradient objective is regularized with an entropy term. We relate this result to the well-known convex duality between Shannon entropy and the softmax (log-sum-exp) function, a duality also known as the Donsker-Varadhan formula. This yields a short proof of the equivalence. We then interpret the duality further and use ideas from convex analysis to prove a new policy inequality relative to soft Q-learning.
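For concreteness, the duality at the heart of the proof can be sketched as follows (notation ours; the paper's exact symbols and temperature convention may differ). For a finite action set \(\mathcal{A}\) with values \(Q(a)\), Shannon entropy \(H(\pi) = -\sum_a \pi(a) \log \pi(a)\), and \(\Delta\) the probability simplex over \(\mathcal{A}\):

\[
\log \sum_{a \in \mathcal{A}} e^{Q(a)} \;=\; \sup_{\pi \in \Delta} \Big( \mathbb{E}_{a \sim \pi}[Q(a)] + H(\pi) \Big),
\]

with the supremum attained at the Boltzmann policy \(\pi^*(a) \propto e^{Q(a)}\). The left-hand side is the soft maximum (log-sum-exp) used in soft Q-learning; the right-hand side is an entropy-regularized policy objective, which is where the two algorithm families meet.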
