Faded-Experience Trust Region Policy Optimization for Model-Free Power Allocation in Interference Channel

08/04/2020
by   Mohammad G. Khoshkholgh, et al.

Policy gradient reinforcement learning techniques enable an agent to learn an optimal action policy directly through interactions with the environment. Despite this advantage, they often suffer from slow convergence. Inspired by human decision making, we work toward enhancing convergence speed by augmenting the agent to memorize and reuse recently learned policies. We apply our method to trust-region policy optimization (TRPO), originally developed for locomotion tasks, and propose faded-experience (FE) TRPO. To substantiate its effectiveness, we apply it to learn continuous power control in an interference channel when only noisy location information of devices is available. Results indicate that FE-TRPO can almost double the learning speed compared to TRPO. Importantly, our method neither increases the learning complexity nor incurs a performance loss.
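The abstract does not spell out the exact mechanism behind "faded experience," so the following is only an illustrative sketch under one plausible reading: the agent keeps a short history of recently learned policy parameters and blends them with geometrically fading weights, so older policies contribute less. The function name, the `decay` knob, and the parameter-vector representation are all hypothetical, not taken from the paper.

```python
import numpy as np

def faded_mixture(policy_params, decay=0.5):
    """Blend a history of policy parameter vectors with fading weights.

    policy_params: list of parameter vectors, oldest first.
    decay: hypothetical fading factor in (0, 1); each step back in time
    multiplies a policy's weight by this factor, so the newest policy
    dominates the mixture.
    """
    n = len(policy_params)
    # Weight w_k = decay**(n-1-k): newest policy (k = n-1) gets weight 1.
    weights = np.array([decay ** (n - 1 - k) for k in range(n)])
    weights /= weights.sum()          # normalize to a convex combination
    stacked = np.stack(policy_params)
    return weights @ stacked          # fading-weighted average of parameters

# Example: three successive policy parameter vectors from training.
history = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]
mixed = faded_mixture(history, decay=0.5)
```

With `decay=0.5` the normalized weights are (1/7, 2/7, 4/7), so the blended parameters sit closest to the newest policy. In an FE-style update, such a mixture could serve as the behavior policy while the newest parameters continue to be optimized, which would explain how reuse of recent policies speeds learning without adding optimization complexity.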
