No-Regret Exploration in Goal-Oriented Reinforcement Learning

12/07/2019
by Jean Tarbouriech, et al.

Many popular reinforcement learning problems (e.g., navigation in a maze, some Atari games, mountain car) are instances of the so-called episodic setting or stochastic shortest path (SSP) problem, where an agent has to reach a predefined goal state (e.g., the top of the hill) while maximizing the cumulative reward or minimizing the cumulative cost. Despite its popularity, most of the literature studying the exploration-exploitation dilemma has either focused on different settings (i.e., fixed-horizon and infinite-horizon) or relied on the restrictive loop-free assumption (which implies that no state can be visited twice during an episode). In this paper, we study the general SSP setting and introduce the algorithm UC-SSP, whose regret scales as O(c_max^{3/2} c_min^{-1/2} D S √(A D K)) after K episodes for any unknown SSP with S non-terminal states, A actions, an SSP-diameter of D and positive costs in [c_min, c_max]. UC-SSP is thus the first learning algorithm with vanishing regret in the theoretically challenging setting of episodic RL.
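To make the SSP-diameter D appearing in the bound more concrete, below is a minimal, self-contained Python sketch (not code from the paper) that computes it for a hypothetical toy chain environment, assuming the usual definition of D as the largest, over starting states, minimal expected number of steps needed to reach the goal. The environment, its transition probabilities, and the helper name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy SSP used only for illustration (not an environment from the paper):
# a chain of S non-terminal states plus an absorbing goal state. Action 0
# ("forward") advances toward the goal with probability 0.8 and stays put
# otherwise; action 1 ("stay") never makes progress.
S, A = 5, 2
goal = S  # index of the goal state

# Transition kernel P[s, a, s'] over the S non-terminal states plus the goal.
P = np.zeros((S, A, S + 1))
for s in range(S):
    P[s, 0, min(s + 1, goal)] += 0.8
    P[s, 0, s] += 0.2
    P[s, 1, s] += 1.0

def min_expected_hitting_times(P, tol=1e-10, max_iter=100_000):
    """Value iteration for h(s) = min over policies of E[# steps to reach the goal from s]."""
    n = P.shape[0]
    h = np.zeros(n + 1)  # h[goal] = 0 by convention
    for _ in range(max_iter):
        # Bellman update: one step of unit cost plus the best expected continuation.
        h_new = np.concatenate([1.0 + (P @ h).min(axis=1), [0.0]])
        if np.max(np.abs(h_new - h)) < tol:
            return h_new[:-1]
        h = h_new
    return h[:-1]

h = min_expected_hitting_times(P)
D = h.max()  # SSP-diameter: the worst starting state for the fastest goal-reaching policy
print("expected hitting times:", np.round(h, 2))
print("SSP-diameter D:", round(float(D), 2))  # 6.25 for this toy chain
```

Under this definition, the regret bound above grows with D because, in the worst case, even an optimal policy needs on the order of D steps per episode to terminate, so estimation errors accumulate over horizons of that length.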
