Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning

01/29/2020
by Ming Yin, et al.

We consider the problem of off-policy evaluation for reinforcement learning, where the goal is to estimate the expected reward of a target policy π using offline data collected by running a logging policy μ. Standard importance-sampling-based approaches to this problem suffer from a variance that scales exponentially with the time horizon H, which has motivated a surge of recent interest in alternatives that break the "Curse of Horizon" (Liu et al. 2018, Xie et al. 2019). In particular, it was shown that a marginalized importance sampling (MIS) approach can achieve an estimation error of order O(H^3/n) in mean square error (MSE) under an episodic Markov Decision Process model with finite states and potentially infinite actions. This MSE bound, however, is still a factor of H away from a Cramér-Rao lower bound of order Ω(H^2/n). In this paper, we prove that with a simple modification to the MIS estimator, we can asymptotically attain the Cramér-Rao lower bound, provided that the action space is finite. We also provide a general method for constructing MIS estimators with high-probability error bounds.

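To make the MIS idea in the abstract concrete, below is a minimal, illustrative sketch of state-marginalized importance sampling for a tabular episodic MDP. It is not the paper's exact estimator: the function name `mis_estimate`, the trajectory data layout, and the assumptions of a known logging policy μ and known target policy π are all choices made here for illustration only.

```python
# A minimal, illustrative sketch of state-marginalized importance sampling (MIS)
# for tabular off-policy evaluation. Assumptions (not from the paper's text):
# finite state space of size S, horizon H, known logging policy mu, known
# target policy pi, and n logged trajectories of the form
# [(s_0, a_0, r_0), ..., (s_{H-1}, a_{H-1}, r_{H-1})].
import numpy as np

def mis_estimate(trajectories, pi, mu, S, H):
    """Estimate the value of pi from trajectories logged under mu.

    trajectories: list of n lists of (s, a, r) tuples, one tuple per step.
    pi, mu: arrays of shape (H, S, A) with action probabilities.
    """
    n = len(trajectories)

    # Empirical marginal state distribution under mu at each step t.
    d_mu = np.zeros((H, S))
    # Estimated marginal state distribution under pi, built recursively.
    d_pi = np.zeros((H, S))
    # Importance-weighted transition counts used to propagate d_pi forward:
    # weighted_counts[t, s, s'] approximates d_mu_t(s) * P^pi_t(s' | s).
    weighted_counts = np.zeros((max(H - 1, 0), S, S))
    # Importance-weighted average reward at (t, s), scaled by d_mu_t(s).
    weighted_reward = np.zeros((H, S))

    for traj in trajectories:
        for t, (s, a, r) in enumerate(traj):
            rho = pi[t, s, a] / mu[t, s, a]      # per-step importance ratio
            d_mu[t, s] += 1.0 / n
            weighted_reward[t, s] += rho * r / n
            if t + 1 < H:
                s_next = traj[t + 1][0]
                weighted_counts[t, s, s_next] += rho / n

    # Propagate the estimated state marginals of pi through the horizon.
    d_pi[0] = d_mu[0]                            # same initial distribution
    for t in range(H - 1):
        # Normalize weighted counts into an estimated transition kernel
        # under pi, then push d_pi one step forward. Unvisited states keep
        # an all-zero row, which is acceptable for this sketch.
        visited = d_mu[t] > 0
        P_pi = np.zeros((S, S))
        P_pi[visited] = weighted_counts[t, visited] / d_mu[t, visited][:, None]
        d_pi[t + 1] = d_pi[t] @ P_pi

    # Reweight rewards by the marginal ratio d_pi_t(s) / d_mu_t(s) and sum.
    value = 0.0
    for t in range(H):
        visited = d_mu[t] > 0
        value += np.sum(d_pi[t, visited] * weighted_reward[t, visited]
                        / d_mu[t, visited])
    return value
```

The sketch first forms the empirical state marginals under μ, propagates an estimate of the state marginals under π forward through importance-weighted transition counts, and finally reweights observed rewards by the per-state marginal ratio rather than by a product of per-step ratios along the whole trajectory, which is what lets MIS-style estimators avoid the variance that grows exponentially in H.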