Model-Based Reinforcement Learning with Value-Targeted Regression

06/01/2020
by Alex Ayoub, et al.

This paper studies model-based reinforcement learning (RL) for regret minimization. We focus on finite-horizon episodic RL where the transition model P belongs to a known family of models 𝒫, a special case of which is when models in 𝒫 take the form of linear mixtures: P_θ = ∑_{i=1}^d θ_i P_i. We propose a model-based RL algorithm based on the optimism principle: in each episode, the algorithm constructs the set of models that are 'consistent' with the data collected so far. The criterion of consistency is the total squared error that a model incurs on the task of predicting the values, as determined by the last value estimate, along the observed transitions. The next value function is then obtained by solving the optimistic planning problem over the constructed set of models. We derive a regret bound which, in the special case of linear mixtures, takes the form Õ(d√(H³T)), where H, T and d are the horizon, the total number of steps and the dimension of θ, respectively. In particular, this regret bound is independent of the total number of states or actions, and is close to a lower bound of Ω(√(HdT)). For a general model family 𝒫, the regret bound is derived using the notion of Eluder dimension proposed by Russo and Van Roy (2014).
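To make the value-targeted regression step concrete in the linear mixture special case, the following is a minimal sketch of how one episode's data could update a ridge-regression estimate of θ and its associated confidence ellipsoid. It is illustrative only: the basis models, sizes, and parameter names (d, S, A, lam, beta) are assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of value-targeted regression for a linear mixture model
# P_theta = sum_i theta_i * P_i, with tabular basis models P_i stored as
# arrays of shape (S, A, S). All names and sizes here are illustrative.

def vtr_update(A_mat, b_vec, basis, s, a, s_next, V):
    """One regression update: regress the realized next-state value V(s')
    onto the features x_i = E_{s' ~ P_i(.|s,a)}[V(s')]."""
    x = np.array([P_i[s, a] @ V for P_i in basis])   # predicted values under each basis model
    y = V[s_next]                                     # value target observed along the transition
    A_mat += np.outer(x, x)                           # Gram matrix accumulation
    b_vec += y * x
    return A_mat, b_vec

def estimate_theta(A_mat, b_vec):
    """Least-squares estimate theta_hat; the consistent model set would be an
    ellipsoid {theta : ||theta - theta_hat||_A <= sqrt(beta)} used for optimism."""
    return np.linalg.solve(A_mat, b_vec)

# Usage with random basis models (purely illustrative sizes)
d, S, A = 3, 5, 2
rng = np.random.default_rng(0)
basis = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(d)]
lam = 1.0
A_mat, b_vec = lam * np.eye(d), np.zeros(d)
V = rng.random(S)                                     # last value estimate
A_mat, b_vec = vtr_update(A_mat, b_vec, basis, s=0, a=1, s_next=3, V=V)
theta_hat = estimate_theta(A_mat, b_vec)
```

In this sketch, optimistic planning would then maximize the value over all θ in the confidence ellipsoid, which is the part the paper's regret analysis bounds via the Eluder dimension in the general case.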
