Variance-Reduced Conservative Policy Iteration

12/12/2022
by Naman Agarwal, et al.

We study the sample complexity of reducing reinforcement learning to a sequence of empirical risk minimization problems over the policy space. Such reduction-based algorithms exhibit local convergence in the function space, as opposed to the parameter space for policy gradient algorithms, and thus are unaffected by the possibly non-linear or discontinuous parameterization of the policy class. We propose a variance-reduced variant of Conservative Policy Iteration that improves the sample complexity of producing an ε-functional local optimum from O(ε^-4) to O(ε^-3). Under state-coverage and policy-completeness assumptions, the algorithm enjoys ε-global optimality after sampling O(ε^-2) times, improving upon the previously established O(ε^-3) sample requirement.
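For intuition, the sketch below illustrates the conservative mixture update at the heart of Conservative Policy Iteration: each round calls an empirical risk minimization oracle to obtain an approximately greedy policy and then mixes it into the current policy with a small step size. The `erm_greedy_policy` oracle, the random advantage estimates, and the fixed step size `alpha` are illustrative assumptions only, not the paper's variance-reduced procedure or its analysis.

```python
import numpy as np

def erm_greedy_policy(advantage_estimates):
    """Hypothetical ERM oracle: return a policy that is approximately greedy
    with respect to the estimated advantages (one action index per state)."""
    return np.argmax(advantage_estimates, axis=1)  # shape: (num_states,)

def conservative_update(pi, greedy_actions, alpha, num_actions):
    """CPI-style mixture update: pi_new = (1 - alpha) * pi + alpha * pi_greedy.
    The convex combination keeps each row a valid probability distribution."""
    greedy_dist = np.eye(num_actions)[greedy_actions]  # one-hot greedy policy
    return (1.0 - alpha) * pi + alpha * greedy_dist

# Toy usage: 4 states, 3 actions, uniform initial policy, placeholder advantages.
num_states, num_actions = 4, 3
pi = np.full((num_states, num_actions), 1.0 / num_actions)
rng = np.random.default_rng(0)
for t in range(10):
    adv = rng.normal(size=(num_states, num_actions))  # stand-in advantage estimates
    pi = conservative_update(pi, erm_greedy_policy(adv), alpha=0.1,
                             num_actions=num_actions)
```

In the variance-reduced variant studied in the paper, the advantage (or gradient) estimates fed to the ERM oracle are constructed so as to reuse information across iterations, which is what drives the improved sample complexity; the placeholder estimates above do not capture that mechanism.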
