Adaptive Policy Learning to Additional Tasks

05/24/2023
by   Wenjian Hao, et al.

This paper develops a policy learning method for tuning a pre-trained policy to adapt to additional tasks without degrading performance on the original task. The proposed method, named Adaptive Policy Gradient (APG), combines Bellman's principle of optimality with the policy gradient approach to improve the convergence rate. The paper provides theoretical analysis guaranteeing a convergence rate of 𝒪(1/T) and a sample complexity of 𝒪(1/ϵ), where T denotes the number of iterations and ϵ denotes the accuracy of the resulting stationary policy. Furthermore, several challenging numerical simulations, including cartpole, lunar lander, and robot arm, show that APG achieves performance comparable to existing deterministic policy gradient methods while utilizing much less data and converging faster.
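The abstract does not spell out the APG algorithm itself, but the deterministic policy gradient baseline it is compared against can be illustrated with a minimal sketch: repeatedly roll out a deterministic policy, estimate the gradient of the total cost with respect to the policy parameter, and descend. The toy dynamics, cost, learning rate, and horizon below are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rollout_cost(k, x0=1.0, horizon=20):
    """Total quadratic cost of the linear policy u = -k * x
    on the toy system x' = x + u (illustrative only)."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        cost += x**2 + u**2
        x = x + u
    return cost

def finite_diff_grad(k, eps=1e-5):
    """Central finite-difference estimate of dJ/dk."""
    return (rollout_cost(k + eps) - rollout_cost(k - eps)) / (2 * eps)

# Plain deterministic policy-gradient descent on the scalar gain k.
k, lr = 0.0, 0.002
costs = [rollout_cost(k)]
for _ in range(200):
    k -= lr * finite_diff_grad(k)
    costs.append(rollout_cost(k))

print(f"gain k = {k:.3f}, cost {costs[0]:.3f} -> {costs[-1]:.3f}")
```

For this linear-quadratic toy problem the cost drops sharply within a few updates; APG's reported contribution is reaching a stationary policy of accuracy ϵ with 𝒪(1/ϵ) samples, i.e. with far fewer rollouts than such plain gradient descent typically needs.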
