Diff-DAC: Distributed Actor-Critic for Average Multitask Deep Reinforcement Learning

10/28/2017
by Sergio Valcarcel Macua, et al.

We propose a fully distributed actor-critic algorithm, approximated by deep neural networks, named Diff-DAC, with application to single-task and to average multitask reinforcement learning (MRL). Each agent has access to data from its local task only, but aims to learn a policy that performs well on average across the whole set of tasks. During the learning process, agents communicate their value-policy parameters to their neighbors, diffusing the information across the network so that they converge to a common policy, with no need for a central node. The method is scalable, since the computational and communication costs per agent grow with its number of neighbors rather than with the total number of agents. We derive Diff-DAC from duality theory and provide novel insights into the standard actor-critic framework, showing that it is actually an instance of the dual-ascent method for approximating the solution of a linear program. Experiments suggest that Diff-DAC can outperform the only previous distributed MRL approach (Dist-MTLPS) and even the centralized architecture.
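As a rough illustration of the diffusion mechanism described above (a generic adapt-then-combine sketch, not the paper's exact algorithm), each agent can be thought of as first taking a gradient step on its local task's data and then averaging its parameters with those of its neighbors. The function name diffusion_step, the combination matrix, and the ring network below are illustrative assumptions.

import numpy as np

def diffusion_step(params, grads, combine_weights, step_size):
    """Adapt-then-combine update for a network of agents.

    params: (n_agents, dim) array of each agent's value-policy parameters.
    grads: (n_agents, dim) array of local actor-critic gradient estimates.
    combine_weights: (n_agents, n_agents) matrix whose rows sum to 1;
        entry (i, j) is nonzero only if j is a neighbor of i.
    """
    # Adapt: each agent takes a gradient step using only its local task data.
    intermediate = params + step_size * grads
    # Combine: each agent averages the intermediate parameters of its neighbors.
    return combine_weights @ intermediate

# Example with 4 agents on a ring network (hypothetical setup).
n_agents, dim = 4, 8
rng = np.random.default_rng(0)
params = rng.normal(size=(n_agents, dim))
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25
grads = rng.normal(size=(n_agents, dim))  # placeholder for local gradients
params = diffusion_step(params, grads, W, step_size=0.01)

With a connected network and a suitable (e.g., doubly stochastic) combination matrix such as the ring example above, repeated combine steps pull the agents' parameters toward a common value, which is the consensus behavior the abstract refers to.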
