Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs

03/17/2022
by Andrea Tirinzoni, et al.

In probably approximately correct (PAC) reinforcement learning (RL), an agent is required to identify an ϵ-optimal policy with probability 1-δ. While minimax-optimal algorithms exist for this problem, its instance-dependent complexity remains elusive in episodic Markov decision processes (MDPs). In this paper, we propose the first (nearly) matching upper and lower bounds on the sample complexity of PAC RL in deterministic episodic MDPs with finite state and action spaces. In particular, our bounds feature a new notion of sub-optimality gap for state-action pairs that we call the deterministic return gap. While our instance-dependent lower bound is stated as a linear program, our algorithms are very simple and do not require solving such an optimization problem during learning. Their design and analyses employ novel ideas, including graph-theoretical concepts such as minimum flows and maximum cuts, which we believe shed new light on this problem.
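To make the key quantity concrete, here is a sketch of what a deterministic return gap for a state-action pair can look like, written in standard episodic-MDP notation (horizon H, optimal return V_1^*, return V_1^π of a policy π); this is an illustration under those assumptions, and the precise definition used in the paper may differ:

    \Delta_h(s, a) \;=\; V_1^{\star} \;-\; \max_{\pi \,:\, s_h^{\pi} = s,\ \pi_h(s) = a} V_1^{\pi}

That is, the return lost by the best policy constrained to visit state s at step h and play action a there. In a deterministic MDP the maximum ranges over finitely many trajectories, so such a gap is well defined for every reachable (h, s, a).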
