Online Reinforcement Learning in Markov Decision Process Using Linear Programming

03/31/2023
by Vincent Leon, et al.

We consider online reinforcement learning in an episodic Markov decision process (MDP) with an unknown transition matrix and stochastic rewards drawn from a fixed but unknown distribution. The learner aims to learn the optimal policy and minimize its regret over a finite time horizon by interacting with the environment. We devise a simple and efficient model-based algorithm that achieves Õ(LX√(TA)) regret with high probability, where L is the episode length, T is the number of episodes, and X and A are the cardinalities of the state space and the action space, respectively. The proposed algorithm, based on the principle of "optimism in the face of uncertainty", maintains confidence sets for the transition and reward functions and uses occupancy measures to connect the online MDP with linear programming. It achieves a tighter regret bound than existing works that use a similar confidence-set framework, and requires less computation than those that attain a slightly tighter regret bound through a different framework.
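To illustrate the occupancy-measure connection the abstract mentions, below is a minimal sketch of the underlying planning subproblem: solving a finite-horizon MDP as a linear program over occupancy measures q(l, x, a), assuming the transition kernel P and rewards r are known. This is not the authors' online algorithm (which optimizes over confidence sets of P and r); the use of `scipy.optimize.linprog` and all variable names here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def solve_mdp_lp(P, r, mu0, L):
    """Solve a finite-horizon MDP via its occupancy-measure LP (sketch).

    P[x, a, x2] : transition probabilities (assumed known here)
    r[x, a]     : expected rewards (assumed known here)
    mu0[x]      : initial state distribution
    L           : episode length
    Returns (optimal expected return, occupancy measure q[l, x, a]).
    """
    X, A = r.shape
    n = L * X * A                           # one variable q(l, x, a) per triple
    idx = lambda l, x, a: (l * X + x) * A + a

    # Objective: linprog minimizes, so negate the rewards.
    c = np.zeros(n)
    for l in range(L):
        for x in range(X):
            for a in range(A):
                c[idx(l, x, a)] = -r[x, a]

    # Flow constraints: layer 0 matches mu0; each later layer equals the
    # pushforward of the previous layer's occupancy through P.
    A_eq, b_eq = [], []
    for x in range(X):
        row = np.zeros(n)
        for a in range(A):
            row[idx(0, x, a)] = 1.0
        A_eq.append(row); b_eq.append(mu0[x])
    for l in range(1, L):
        for x2 in range(X):
            row = np.zeros(n)
            for a in range(A):
                row[idx(l, x2, a)] = 1.0
            for x in range(X):
                for a in range(A):
                    row[idx(l - 1, x, a)] -= P[x, a, x2]
            A_eq.append(row); b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n, method="highs")
    q = res.x.reshape(L, X, A)
    return -res.fun, q

# Toy example: 2 states, 2 actions, L = 3. Action 0 stays, action 1
# switches state; reward 1 is earned only in state 1. Starting in state 0,
# the optimal policy switches once and then stays, collecting reward 2.
P = np.zeros((2, 2, 2))
P[:, 0, :] = np.eye(2)         # action 0: stay
P[:, 1, :] = np.eye(2)[::-1]   # action 1: switch state
r = np.array([[0.0, 0.0], [1.0, 1.0]])
value, q = solve_mdp_lp(P, r, np.array([1.0, 0.0]), L=3)
```

The optimal policy is read off from the occupancy measure as pi(a | l, x) ∝ q(l, x, a); the online algorithm described above performs this kind of optimization optimistically over its confidence sets.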
