Nearly Optimal Policy Optimization with Stable at Any Time Guarantee

12/21/2021
by Tianhao Wu, et al.

Policy optimization methods are among the most widely used classes of Reinforcement Learning (RL) algorithms. However, the theoretical understanding of these methods remains insufficient. Even in the episodic (time-inhomogeneous) tabular setting, the state-of-the-art regret bound for policy-based methods in <cit.> is only Õ(√(S^2AH^4K)), where S is the number of states, A is the number of actions, H is the horizon, and K is the number of episodes, leaving a √(SH) gap to the information-theoretic lower bound Ω̃(√(SAH^3K)). To bridge this gap, we propose a novel algorithm, Reference-based Policy Optimization with Stable at Any Time guarantee (RPO-SAT), which features the property "Stable at Any Time". We prove that our algorithm achieves Õ(√(SAH^3K) + √(AH^4K)) regret. When S > H, our algorithm is minimax optimal up to logarithmic factors. To the best of our knowledge, RPO-SAT is the first computationally efficient, nearly minimax optimal policy-based algorithm for tabular RL.
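A short worked comparison, sketched here from the bounds quoted above rather than taken from the paper, shows why the regret matches the lower bound once S > H: the second term of the regret is absorbed by the first, as the LaTeX derivation below spells out.

% Sketch: when S >= H, the second regret term is dominated by the first.
\[
  \sqrt{A H^4 K} \;=\; \sqrt{H}\cdot\sqrt{A H^3 K}
  \;\le\; \sqrt{S}\cdot\sqrt{A H^3 K}
  \;=\; \sqrt{S A H^3 K},
\]
% so the total regret simplifies to
\[
  \tilde{O}\!\big(\sqrt{S A H^3 K} + \sqrt{A H^4 K}\big)
  \;=\; \tilde{O}\!\big(\sqrt{S A H^3 K}\big),
\]
% which matches the lower bound \tilde{\Omega}(\sqrt{S A H^3 K}) up to logarithmic factors.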
