Variational Regret Bounds for Reinforcement Learning

05/14/2019
by Pratik Gajane, et al.

We consider undiscounted reinforcement learning in Markov decision processes (MDPs) where both the reward functions and the state-transition probabilities may vary (gradually or abruptly) over time. For this problem setting, we propose an algorithm and provide performance guarantees for the regret evaluated against the optimal non-stationary policy. The upper bound on the regret is given in terms of the total variation in the MDP. This is the first variational regret bound for the general reinforcement learning setting.
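To make the abstract's statement concrete, the following is a minimal sketch of the two standard quantities involved, dynamic regret and total variation, written in assumed notation for illustration; it is not necessarily the paper's exact formulation.

% Sketch of the standard quantities, under assumed notation
% (illustrative; not necessarily the paper's exact definitions).
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Let $\rho^*_t$ denote the optimal average reward of the MDP active at
step $t$, and let $r_t$ be the reward the learner actually collects.
The regret against the optimal non-stationary policy over a horizon
$T$ is
\[
  R(T) \;=\; \sum_{t=1}^{T} \rho^*_t \;-\; \sum_{t=1}^{T} r_t .
\]

A natural measure of the total variation of the MDP sums the per-step
changes in the reward functions and transition probabilities:
\[
  \Delta \;=\; \sum_{t=1}^{T-1} \Big(
    \max_{s,a} \bigl| r_{t+1}(s,a) - r_t(s,a) \bigr|
    \;+\;
    \max_{s,a} \bigl\| p_{t+1}(\cdot \mid s,a) - p_t(\cdot \mid s,a) \bigr\|_1
  \Big).
\]

A variational regret bound controls $R(T)$ by a function of $\Delta$
and $T$, so the guarantee degrades gracefully with the amount of
non-stationarity rather than with a presumed number of change points.
\end{document}

Under this reading, a bound in terms of $\Delta$ recovers the stationary guarantee when $\Delta = 0$ and remains meaningful whether the MDP drifts gradually or changes abruptly.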
