Continuous-Time Mean-Variance Portfolio Optimization via Reinforcement Learning

04/25/2019
by Haoran Wang, et al.

We consider the continuous-time mean-variance (MV) portfolio optimization problem in the reinforcement learning (RL) setting. The problem falls into the entropy-regularized relaxed stochastic control framework recently introduced in Wang et al. (2019). We derive the optimal feedback exploration policy, which turns out to be Gaussian with a time-decaying variance. We also discuss close connections between the entropy-regularized MV problem and the classical MV problem, including the equivalence of their solvability and the convergence of their solutions as the exploration weight decays. Finally, we prove a policy improvement theorem (PIT) for the continuous-time MV problem under both entropy regularization and control relaxation. The PIT leads to an implementable RL algorithm for the continuous-time MV problem. In nearly all simulations, our algorithm outperforms, by a large margin, both an adaptive-control-based method that estimates the underlying parameters in real time and a state-of-the-art RL method that uses deep neural networks for continuous control.
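The structural result highlighted in the abstract is a Gaussian feedback exploration policy whose variance decays over time. The following is a minimal simulation sketch of that idea: actions are sampled from a Gaussian whose mean is linear in wealth and whose variance shrinks as the terminal time approaches. The market parameters, the linear feedback mean, and the exponential variance-decay form below are illustrative assumptions, not the paper's exact LQ solution.

```python
import numpy as np

# Illustrative sketch (not the paper's exact coefficients): sample risky-asset
# allocations from a Gaussian feedback policy with time-decaying variance and
# simulate the resulting (discounted) wealth paths.

rng = np.random.default_rng(0)

T, dt = 1.0, 1 / 250              # horizon (years) and time step, assumed
mu, r, sigma = 0.08, 0.02, 0.2    # hypothetical market parameters
rho = (mu - r) / sigma            # Sharpe ratio of the risky asset
lam = 0.1                         # exploration (temperature) weight, assumed
w = 1.5                           # target-related level (Lagrange multiplier), assumed
n_paths = 10_000

def policy_mean(t, x):
    # Linear feedback in wealth, reflecting the LQ structure of the MV problem.
    return -(rho / sigma) * (x - w)

def policy_var(t):
    # Time-decaying exploration variance: largest early on, shrinking toward
    # lam / (2 sigma^2) as t -> T (one plausible parameterization).
    return lam / (2 * sigma**2) * np.exp(rho**2 * (T - t))

x = np.ones(n_paths)              # initial (discounted) wealth
for k in range(int(T / dt)):
    t = k * dt
    u = rng.normal(policy_mean(t, x), np.sqrt(policy_var(t)))   # exploratory action
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    # Discounted wealth dynamics: dx = u (mu - r) dt + u sigma dW
    x = x + u * (mu - r) * dt + u * sigma * dW

print(f"terminal wealth: mean={x.mean():.4f}, var={x.var():.4f}")
```

The terminal mean and variance printed at the end are the two quantities the MV criterion trades off; shrinking lam reduces the injected exploration noise and moves the simulated policy toward a deterministic classical MV strategy.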
