Concave Utility Reinforcement Learning with Zero-Constraint Violations

09/12/2021
by Mridul Agarwal, et al.

We consider the problem of tabular infinite-horizon concave utility reinforcement learning (CURL) with convex constraints. Many constrained learning applications, such as robotics, do not permit policies that violate the constraints even while learning. To this end, we propose a model-based learning algorithm that achieves zero constraint violations. To obtain this result, we assume that the concave objective and the convex constraints admit a solution in the interior of the set of feasible occupation measures. We then solve a tightened optimization problem to ensure that the constraints are never violated despite imprecise model knowledge and model stochasticity. We also propose a novel Bellman-error-based analysis for tabular infinite-horizon setups, which allows us to analyze stochastic policies. Combining the Bellman-error-based analysis and the tightened optimization problem, for T interactions with the environment, we obtain a regret guarantee for the objective that grows as O(1/√T), excluding other factors.
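Since the abstract describes the central algorithmic step only at a high level, a minimal sketch of a tightened optimization over occupation measures may help. The sketch below is an assumption-laden illustration, not the paper's algorithm: it uses a known transition kernel (the paper estimates one from interactions), an illustrative concave utility (log of expected reward), a single linear cost in place of general convex constraints, and hypothetical names (P, r, c, C_max, eps). It demonstrates only the idea stated above: shrinking the constraint set by a margin eps so that the resulting policy keeps a safety buffer against model error.

```python
# Minimal sketch of a tightened occupation-measure program (assumed form,
# not the paper's exact formulation). Requires: pip install cvxpy numpy
import numpy as np
import cvxpy as cp

S, A = 4, 2                                  # small tabular MDP: 4 states, 2 actions
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s'] transition kernel (assumed known here)
r = rng.uniform(size=(S, A))                 # reward vector (illustrative)
c = rng.uniform(size=(S, A))                 # cost vector for the constraint (illustrative)
C_max = 0.6                                  # constraint budget (hypothetical)
eps = 0.05                                   # tightening margin (hypothetical)

lam = cp.Variable((S, A), nonneg=True)       # occupation measure lambda(s, a)

# Stationarity: inflow equals outflow for every state (average-reward MDP).
flow = [cp.sum(lam[s_next, :]) ==
        cp.sum(cp.multiply(P[:, :, s_next], lam)) for s_next in range(S)]

constraints = flow + [
    cp.sum(lam) == 1,                            # lambda is a distribution over (s, a)
    cp.sum(cp.multiply(c, lam)) <= C_max - eps,  # tightened constraint: margin eps below the budget
]

# Concave utility of the expected reward (log chosen only for illustration).
objective = cp.Maximize(cp.log(cp.sum(cp.multiply(r, lam))))
prob = cp.Problem(objective, constraints)
prob.solve()

# Recover a stochastic policy pi(a|s) from the optimal occupation measure.
lam_opt = np.maximum(lam.value, 1e-12)
pi = lam_opt / lam_opt.sum(axis=1, keepdims=True)
print("optimal utility:", prob.value)
print("policy:\n", pi)
```

Recovering the policy as pi(a|s) proportional to lambda(s, a) is the standard occupation-measure-to-policy mapping for average-reward MDPs. In the paper's setting, the margin eps would be tied to the interior-solution assumption and the model's confidence radius, so that solutions of the tightened problem remain feasible for the true model.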
