Chance-Constrained Stochastic Optimal Control via Path Integral and Finite Difference Methods
This paper addresses a continuous-time, continuous-space chance-constrained stochastic optimal control (SOC) problem via a Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE). Through Lagrangian relaxation, we convert the chance-constrained (risk-constrained) SOC problem into a risk-minimizing SOC problem whose cost function possesses the time-additive Bellman structure. We show that the risk-minimizing control synthesis is equivalent to solving an HJB PDE whose boundary condition can be tuned appropriately to achieve a desired level of safety. Furthermore, we show that the proposed risk-minimizing control problem can be viewed as a generalization of the problem of estimating the risk associated with a given control policy. Two numerical techniques are explored, namely the path integral method and the finite difference method (FDM), to solve a class of risk-minimizing SOC problems whose associated HJB equation is linearizable via the Cole-Hopf transformation. Using a 2D robot navigation example, we validate the proposed control synthesis framework and compare the solutions obtained by the path integral method and the FDM.
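The paper's exact problem formulation is in the full text; as a general illustration only, the standard Cole-Hopf (logarithmic) transformation linearizes an HJB equation of this class as sketched below, assuming control-affine dynamics dx = f(x) dt + G(x)(u dt + dw), running cost V(x) + (1/2) u'Ru, noise covariance Sigma, and the usual noise-control coupling condition lambda R^{-1} = Sigma. All symbols here are generic conventions from the path-integral control literature, not notation taken from the paper.

```latex
\begin{align*}
% HJB after minimizing over u, with u^{*} = -R^{-1} G^{\top} \partial_x J:
-\partial_t J &= V + f^{\top} \partial_x J
  - \tfrac{1}{2} (\partial_x J)^{\top} G R^{-1} G^{\top} \partial_x J
  + \tfrac{1}{2} \operatorname{tr}\!\big(G \Sigma G^{\top} \partial_x^2 J\big) \\
% Substituting J = -\lambda \log \xi under \lambda R^{-1} = \Sigma
% cancels the quadratic term and yields a linear backward PDE:
\partial_t \xi &= \tfrac{V}{\lambda}\,\xi - f^{\top} \partial_x \xi
  - \tfrac{1}{2} \operatorname{tr}\!\big(G \Sigma G^{\top} \partial_x^2 \xi\big)
\end{align*}
```

By the Feynman-Kac formula, the linear PDE admits the representation xi(x, t) = E[ xi(x_T, T) exp(-(1/lambda) \int_t^T V(x_s) ds) ] under the uncontrolled dynamics, which is exactly what a path integral Monte Carlo method estimates, while an FDM discretizes the linear PDE directly. As a hedged sketch of the former, the snippet below estimates xi for a 2D single-integrator with a circular obstacle; every name, parameter, and boundary value is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def running_cost(x):
    # Illustrative state cost V(x): squared distance to an assumed goal at (1, 1).
    return np.sum((x - np.array([1.0, 1.0]))**2, axis=-1)

def in_failure_set(x):
    # Illustrative failure set: disk of radius 0.2 centered at (0.5, 0.5).
    return np.linalg.norm(x - np.array([0.5, 0.5]), axis=-1) < 0.2

def xi_estimate(x0, T=1.0, dt=0.01, sigma=0.5, lam=1.0,
                n_paths=10_000, psi_fail=0.0):
    """Monte Carlo (Feynman-Kac) estimate of xi(x0, 0) under the
    *uncontrolled* dynamics dx = sigma dw, absorbing paths on failure.
    Terminal cost is assumed zero, so surviving paths get terminal value 1."""
    n_steps = int(T / dt)
    x = np.tile(np.asarray(x0, float), (n_paths, 1))
    cost = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x[alive] += sigma * dw[alive]          # uncontrolled diffusion step
        cost[alive] += running_cost(x[alive]) * dt
        # Paths entering the failure set are absorbed; the boundary value
        # psi_fail plays the role of the tunable boundary condition that
        # trades off performance against safety (psi_fail = 0 is an
        # infinite penalty on failure).
        alive &= ~in_failure_set(x)
    weights = np.exp(-cost / lam)
    weights[~alive] = psi_fail
    return weights.mean()

print(xi_estimate([0.0, 0.0]))
```

The optimal control would then follow from the gradient of xi (u* = lambda R^{-1} G' (grad xi)/xi under the assumptions above), estimated either from such Monte Carlo rollouts or from the FDM solution of the linear PDE.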