Actor-Critic Reinforcement Learning for Control with Stability Guarantee
Deep Reinforcement Learning (DRL) has achieved impressive performance in various robotic control tasks, ranging from motion planning and navigation to end-to-end visual manipulation. However, stability is not guaranteed in DRL. From a control-theoretic perspective, stability is the most important property of any control system, since it is closely related to the safety, robustness, and reliability of robotic systems. In this paper, we propose a DRL framework with a stability guarantee by exploiting Lyapunov's method from control theory. A sampling-based stability theorem is proposed for stochastic nonlinear systems modeled as Markov decision processes. We then show that the stability condition can be exploited as a critic in the actor-critic RL framework and propose an efficient DRL algorithm to learn a controller/policy with a stability guarantee. In simulated experiments, our approach is evaluated on several well-known examples, including classic CartPole balancing, 3-dimensional robot control, and control of synthetic biology gene regulatory networks. As a qualitative evaluation of stability, we show that the learned policies enable the systems to recover to the equilibrium or tracking target, to a certain extent, when perturbed by uncertainties such as unseen disturbances and parametric variations of the system.
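To make the stability idea concrete, the sketch below illustrates a sampling-based check of a Lyapunov-style mean-decrease condition over transitions of a closed-loop system. This is a hypothetical, minimal illustration in NumPy, not the paper's algorithm: the function names, the quadratic Lyapunov candidate, and the decrease margin `alpha` are all assumptions introduced here for exposition.

```python
import numpy as np

def lyapunov_candidate(state):
    """Quadratic Lyapunov candidate L(s) = ||s||^2 (an illustrative choice)."""
    return np.sum(state ** 2, axis=-1)

def decrease_condition_holds(states, next_states, alpha=0.1):
    """Check a sampled surrogate of the mean decrease condition
    E[L(s')] - E[L(s)] <= -alpha * E[L(s)] over sampled transitions."""
    l_now = lyapunov_candidate(states)
    l_next = lyapunov_candidate(next_states)
    return np.mean(l_next - l_now) <= -alpha * np.mean(l_now)

# A contracting linear system s' = 0.9 s satisfies the sampled condition,
# while an expanding one s' = 1.1 s violates it.
rng = np.random.default_rng(0)
s = rng.normal(size=(1000, 2))
print(bool(decrease_condition_holds(s, 0.9 * s)))  # True
print(bool(decrease_condition_holds(s, 1.1 * s)))  # False
```

In an actor-critic framework of the kind the abstract describes, such a condition would be enforced on the learned critic during policy optimization rather than checked post hoc as done here.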