A DPDK-Based Acceleration Method for Experience Sampling of Distributed Reinforcement Learning

10/26/2021
by Masaki Furukawa, et al.

A computing cluster that interconnects multiple compute nodes is used to accelerate distributed reinforcement learning based on DQN (Deep Q-Network). In distributed reinforcement learning, Actor nodes acquire experiences by interacting with a given environment, and a Learner node optimizes their DQN model. Since the amount of data transferred between Actor and Learner nodes increases with the number of Actor nodes and their experience size, the communication overhead between them is one of the major performance bottlenecks. In this paper, their communication is accelerated by DPDK-based network optimizations, and a DPDK-based low-latency experience replay memory server is deployed between Actor and Learner nodes interconnected with a 40GbE (40Gbit Ethernet) network. Evaluation results show that, as a network optimization technique, kernel bypassing by DPDK reduces network access latencies to a shared memory server by 32.7%. In addition, the DPDK-based experience replay memory server deployed between Actor and Learner nodes reduces access latencies to the experience replay memory by 11.7% and communication latencies for prioritized experience sampling by 21.9%.
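To make the Actor/Learner interaction concrete, the sketch below shows a minimal prioritized experience replay memory in Python: Actor nodes push experiences, and the Learner samples them with probability proportional to priority and feeds updated priorities back. This is only an illustration of the sampling pattern whose communication the paper accelerates; the class and parameter names (PrioritizedReplayMemory, alpha) are assumptions, not the authors' DPDK-based server implementation.

```python
# Minimal sketch of prioritized experience sampling for DQN (illustrative only;
# not the paper's DPDK-based replay memory server).
import random
from collections import namedtuple

Experience = namedtuple("Experience", ["state", "action", "reward", "next_state", "done"])

class PrioritizedReplayMemory:
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity      # maximum number of stored experiences
        self.alpha = alpha            # how strongly priorities skew sampling
        self.buffer = []
        self.priorities = []
        self.pos = 0

    def push(self, experience, priority=1.0):
        """Actor side: store an experience with an initial priority."""
        if len(self.buffer) < self.capacity:
            self.buffer.append(experience)
            self.priorities.append(priority)
        else:
            self.buffer[self.pos] = experience
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        """Learner side: sample a batch with probability proportional to priority^alpha."""
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        indices = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        batch = [self.buffer[i] for i in indices]
        return indices, batch

    def update_priorities(self, indices, td_errors):
        """Learner side: refresh priorities from the latest TD errors."""
        for i, err in zip(indices, td_errors):
            self.priorities[i] = abs(err) + 1e-6  # small constant avoids zero priority

if __name__ == "__main__":
    memory = PrioritizedReplayMemory(capacity=1000)
    # Actor: push dummy experiences
    for t in range(100):
        memory.push(Experience(state=t, action=0, reward=1.0, next_state=t + 1, done=False))
    # Learner: sample a batch, then feed back updated priorities
    idx, batch = memory.sample(batch_size=8)
    memory.update_priorities(idx, td_errors=[0.5] * len(idx))
```

In the distributed setting studied in the paper, these push, sample, and priority-update operations become network requests between Actor and Learner nodes and the replay memory server, which is why kernel-bypass networking with DPDK reduces their latency.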
