On the Convergence of Discounted Policy Gradient Methods

12/28/2022
by Chris Nota, et al.

Many popular policy gradient methods for reinforcement learning follow a biased approximation of the policy gradient known as the discounted approximation. While it has been shown that the discounted approximation of the policy gradient is not the gradient of any objective function, little else is known about its convergence behavior or properties. In this paper, we show that if the discount factor is increased slowly, at a rate coupled to a decreasing learning rate, then following the discounted approximation recovers the standard convergence guarantees of gradient ascent on the undiscounted objective.
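
To make the scheme concrete, here is a minimal sketch of a discounted policy gradient update in which the discount factor is annealed toward 1 as the step size decays. The particular schedules `alpha(t)` and `gamma(t)`, and the `collect_episode` helper, are illustrative assumptions for this sketch, not the schedules or algorithm analyzed in the paper.

```python
import numpy as np

# Hypothetical coupled schedules: the learning rate alpha_t decays while the
# discount factor gamma_t is annealed toward 1. These exact forms are
# assumptions made for illustration only.
def alpha(t):
    return 1.0 / (1.0 + t) ** 0.6       # decaying step size

def gamma(t):
    return 1.0 - alpha(t)               # discount approaches 1 as alpha shrinks

def discounted_policy_gradient(theta, trajectory, gamma_t):
    """Biased 'discounted approximation' of the policy gradient for one episode.

    trajectory: list of (grad_log_pi, reward) pairs generated by the current policy.
    """
    grad = np.zeros_like(theta)
    for t, (grad_log_pi, _) in enumerate(trajectory):
        # Discounted return from time t onward.
        G_t = sum(gamma_t ** (k - t) * r
                  for k, (_, r) in enumerate(trajectory) if k >= t)
        grad += gamma_t ** t * grad_log_pi * G_t
    return grad

# Sketch of the update loop: follow the discounted approximation while the
# discount factor increases toward 1 at a rate tied to the decaying step size.
# theta = np.zeros(d)
# for t in range(num_iterations):
#     trajectory = collect_episode(theta)   # assumed environment-interaction helper
#     theta += alpha(t) * discounted_policy_gradient(theta, trajectory, gamma(t))
```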
