Safe Reinforcement Learning via Online Shielding

05/25/2019
by Osbert Bastani et al.

Reinforcement learning is a promising approach to learning control policies for complex robotics tasks. A key challenge is ensuring safety of the learned control policy---e.g., that a walking robot does not fall over, or a quadcopter does not run into a wall. We focus on the setting where the dynamics are known, and the goal is to prove that a policy learned in simulation satisfies a given safety constraint. Existing approaches for ensuring safety suffer from a number of limitations---e.g., they do not scale to high-dimensional state spaces, or they only ensure safety for a fixed environment. We propose an approach based on shielding, which uses a backup controller to override the learned controller as necessary to ensure that safety holds. Rather than compute ahead of time when to use the backup controller, we perform this computation online. By doing so, we ensure that our approach is computationally efficient and, furthermore, can be used to ensure safety even in novel environments. We empirically demonstrate that our approach ensures safety in experiments on cart-pole and on a bicycle with random obstacles.
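The core shielding idea in the abstract---override the learned controller with a backup controller whenever the next state could not be kept safe---can be sketched in a few lines. The following is a minimal illustration, not the paper's method: it assumes a toy double-integrator system with a "stay within position bounds" safety constraint, a hypothetical braking backup controller, and a finite-horizon online recoverability check. All names, dynamics, and parameters are invented for illustration.

```python
# Minimal sketch of online shielding on a toy 1-D double integrator.
# Safety constraint: |position| <= LIMIT. The shield accepts the learned
# action only if the backup (braking) controller can keep the resulting
# state safe; otherwise it falls back to the backup action.
# All dynamics and controllers here are illustrative assumptions.

LIMIT = 1.0    # safety bound on position
DT = 0.1       # discretization step
HORIZON = 20   # lookahead for the online recoverability check

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def step(state, action):
    """Double-integrator dynamics: state = (position, velocity)."""
    pos, vel = state
    return (pos + DT * vel, vel + DT * action)

def backup_controller(state):
    """Hypothetical backup: brake the velocity toward zero."""
    _, vel = state
    return -clip(vel / DT, -1.0, 1.0)

def is_safe(state):
    return abs(state[0]) <= LIMIT

def recoverable(state):
    """Online check: can the backup controller keep `state` safe?"""
    s = state
    for _ in range(HORIZON):
        if not is_safe(s):
            return False
        s = step(s, backup_controller(s))
    return is_safe(s)

def shielded_action(state, learned_action):
    """Override the learned action whenever it leads outside the
    set of states recoverable by the backup controller."""
    if recoverable(step(state, learned_action)):
        return learned_action
    return backup_controller(state)
```

For example, a learned policy that always accelerates (`learned_action = 1.0`) would eventually leave the safe region, but the shielded trajectory stays within bounds: once the braking distance approaches the limit, the recoverability check fails and the backup action is applied instead. The key point the abstract makes is that `recoverable` is evaluated online, per step, rather than precomputed over the whole state space.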

