From Importance Sampling to Doubly Robust Policy Gradient

10/20/2019
by Jiawei Huang, et al.

We show that policy gradient (PG) and its variance reduction variants can be derived by taking finite differences of function evaluations supplied by estimators from the importance sampling (IS) family for off-policy evaluation (OPE). Starting from the doubly robust (DR) estimator [Jiang and Li, 2016], we provide a simple derivation of a very general and flexible form of PG, which subsumes the state-of-the-art variance reduction technique [Cheng et al., 2019] as its special case and immediately hints at further variance reduction opportunities overlooked by existing literature.
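To make the abstract's central claim concrete, here is a minimal sketch (not taken from the page; the horizon H, discount factor \gamma, and a single on-policy trajectory collected under \pi_\theta are assumed) of how differentiating an IS-family OPE estimator with respect to the evaluated policy recovers a policy gradient.

% Step-wise IS estimate of a perturbed policy \pi_{\theta'}, built from one
% trajectory (s_0, a_0, r_0, ..., s_{H-1}, a_{H-1}, r_{H-1}) generated by \pi_\theta:
\[
\hat{v}_{\mathrm{IS}}(\pi_{\theta'})
  = \sum_{t=0}^{H-1} \gamma^{t}
    \Bigg( \prod_{t'=0}^{t}
      \frac{\pi_{\theta'}(a_{t'} \mid s_{t'})}{\pi_{\theta}(a_{t'} \mid s_{t'})}
    \Bigg) r_{t}.
\]
% Taking the finite difference in \theta' and letting the perturbation shrink to
% zero amounts to differentiating at \theta' = \theta, where every importance
% ratio equals 1:
\[
\nabla_{\theta'} \hat{v}_{\mathrm{IS}}(\pi_{\theta'}) \Big|_{\theta' = \theta}
  = \sum_{t=0}^{H-1} \gamma^{t} r_{t}
    \sum_{t'=0}^{t} \nabla_{\theta} \log \pi_{\theta}(a_{t'} \mid s_{t'})
  = \sum_{t'=0}^{H-1}
    \nabla_{\theta} \log \pi_{\theta}(a_{t'} \mid s_{t'})
    \sum_{t=t'}^{H-1} \gamma^{t} r_{t},
\]
% i.e. the classical REINFORCE-style policy gradient estimator with the
% discounted reward-to-go as the per-step weight.

Applying the same differentiation to the recursive form of the DR estimator, \(\hat{v}^{\mathrm{DR}}_{t} = \hat{V}(s_{t}) + \rho_{t}\,(r_{t} + \gamma \hat{v}^{\mathrm{DR}}_{t+1} - \hat{Q}(s_{t}, a_{t}))\) with \(\rho_{t} = \pi_{\theta'}(a_{t} \mid s_{t}) / \pi_{\theta}(a_{t} \mid s_{t})\), is, in outline, what introduces the value-function control-variate terms of the variance-reduced PG the abstract refers to; the exact form and its relation to [Cheng et al., 2019] are developed in the paper itself.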
