Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning

10/15/2022
by Zihan Zhang, et al.

In this paper, we study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with a constraint on the number of batches. In the multi-batch reinforcement learning framework, the agent must commit in advance to a schedule of policy-update times; this setting is particularly suitable for scenarios in which adaptively changing the policy is costly. Given a finite-horizon MDP with S states, A actions and planning horizon H, we design a computationally efficient algorithm that achieves near-optimal regret of Õ(√(SAH^3K ln(1/δ))) [Õ(·) hides logarithmic factors in (S, A, H, K)] over K episodes using O(H + log_2 log_2(K)) batches, where δ is the confidence parameter. To the best of our knowledge, this is the first Õ(√(SAH^3K)) regret bound achieved with O(H + log_2 log_2(K)) batch complexity. Meanwhile, we show that any algorithm achieving Õ(poly(S,A,H)√K) regret requires at least Ω(H/log_A(K) + log_2 log_2(K)) batches, which matches our upper bound up to logarithmic terms. Our technical contributions are two-fold: 1) a near-optimal design scheme to explore the unlearned states; 2) a computationally efficient algorithm to explore certain directions given an approximate transition model.
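The abstract does not spell out how the batch schedule is constructed, but an O(H + log_2 log_2(K)) batch count is characteristic of the classical doubling-exponent grid t_i = K^(1 - 2^(-i)) from the batched-bandit literature, which reaches K/2 after only ⌈log_2 log_2 K⌉ steps. The Python sketch below is a minimal illustration under that assumption; the function name batch_schedule, the H-batch warm-up phase, and the √K warm-up prefix are hypothetical choices for illustration, not the authors' actual construction.

```python
import math

def batch_schedule(K: int, H: int) -> list[int]:
    """Illustrative pre-committed batch schedule with O(H + log2 log2 K) batches.

    Hypothetical sketch: H short warm-up batches over the first ~sqrt(K)
    episodes, followed by the classical doubling-exponent grid
    t_i = K^(1 - 2^-i), which reaches K/2 within ceil(log2 log2 K) steps.
    This is NOT the paper's exact construction.
    """
    # Warm-up phase: H short batches covering the first ~sqrt(K) episodes.
    warmup_end = max(H, int(math.sqrt(K)))
    endpoints = [max(1, warmup_end * j // H) for j in range(1, H + 1)]

    # Doubling-exponent grid: after M = ceil(log2 log2 K) steps we have
    # K^(1 - 2^-M) >= K/2, so a single final batch ending at K suffices.
    M = max(1, math.ceil(math.log2(math.log2(K))))
    endpoints += [int(K ** (1 - 2.0 ** (-i))) for i in range(1, M)]
    endpoints.append(K)

    # Deduplicate and sort so the episode endpoints are strictly increasing.
    return sorted(set(endpoints))

# Example: roughly H + log2 log2 K batch endpoints for one million episodes.
print(batch_schedule(K=10**6, H=5))
# [200, 400, 600, 800, 1000, 31622, 177827, 421696, 1000000]
```

The key property this grid illustrates is that batch lengths grow fast enough that the final batch contributes at most √K-order regret, while only doubly-logarithmically many update points are needed; the extra H batches accommodate the horizon-dependent exploration the paper describes.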
