Sample-Optimal Parametric Q-Learning with Linear Transition Models

02/13/2019
by Lin F. Yang, et al.

Consider a Markov decision process (MDP) that admits a set of state-action features which can linearly express the process's probabilistic transition model. We propose a parametric Q-learning algorithm that finds an approximately optimal policy using a sample size proportional to the feature dimension K and independent of the size of the state space. To further improve its sample efficiency, we exploit the monotonicity property and intrinsic noise structure of the Bellman operator, provided that there exist anchor state-action pairs that imply implicit non-negativity in the feature space. We augment the algorithm with techniques of variance reduction, monotonicity preservation, and confidence bounds. It is proved to find a policy that is ϵ-optimal from any initial state with high probability using O(K/(ϵ²(1-γ)³)) sample transitions for arbitrarily large-scale MDPs with a discount factor γ∈(0,1). A matching information-theoretic lower bound is proved, confirming the sample optimality of the proposed method with respect to all parameters (up to polylog factors).
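To make the linear-transition setting concrete, below is a minimal sketch of parametric Q-iteration under the assumption that the transition model factors as P(s'|s,a) = φ(s,a)ᵀψ(s') with known K-dimensional features φ. The weight vector w compactly parameterizes Q via Q(s,a) = r(s,a) + γ·φ(s,a)ᵀw and is re-estimated from sampled transitions. All names here (phi, rewards, sample_next_state) are illustrative assumptions; this simplified loop omits the paper's variance reduction, monotonicity preservation, and confidence-bound steps.

```python
import numpy as np

def parametric_q_iteration(phi, rewards, sample_next_state, gamma,
                           num_iters=100, samples_per_iter=2000, rng=None):
    """Sketch of parametric Q-iteration with a linear transition model.

    phi:     array of shape (S, A, K), state-action features
    rewards: array of shape (S, A), immediate rewards r(s, a)
    sample_next_state(s, a) -> integer index of a sampled next state s'
    """
    rng = np.random.default_rng() if rng is None else rng
    num_states, num_actions, K = phi.shape
    w = np.zeros(K)  # K-dimensional parameter, independent of |S|

    for _ in range(num_iters):
        # Q is represented through w: Q(s, a) = r(s, a) + gamma * phi(s, a)^T w
        Q = rewards + gamma * phi @ w          # shape (S, A)
        V = Q.max(axis=1)                      # greedy value estimate, shape (S,)

        # Re-estimate w by regressing sampled one-step values V(s') onto phi(s, a);
        # the paper instead uses anchor state-actions plus variance reduction.
        flat = rng.integers(0, num_states * num_actions, size=samples_per_iter)
        s_idx, a_idx = np.unravel_index(flat, (num_states, num_actions))
        targets = np.array([V[sample_next_state(s, a)]
                            for s, a in zip(s_idx, a_idx)])
        X = phi[s_idx, a_idx]                  # (n, K) sampled feature matrix
        w, *_ = np.linalg.lstsq(X, targets, rcond=None)

    policy = (rewards + gamma * phi @ w).argmax(axis=1)
    return w, policy
```

Note that the per-iteration cost and the number of learned parameters scale with K rather than with the number of states, which is the sense in which the sample complexity can be made independent of the state-space size.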
