PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning

09/10/2018
by Guillaume Sartoretti et al.

Multi-agent path finding (MAPF) is an essential component of many large-scale, real-world robot deployments, from aerial swarms to warehouse automation to collaborative search-and-rescue. However, despite the community's continued efforts, most state-of-the-art MAPF algorithms still rely on centralized planning and scale poorly past a few hundred agents. Such planning approaches are ill-suited to real-world deployment, where noise and uncertainty often require paths to be recomputed or adapted online, which is impossible when planning times range from seconds to minutes. In this work, we present PRIMAL, a novel framework for MAPF that combines reinforcement and imitation learning to teach fully decentralized policies, where agents reactively plan paths online in a partially observable world while exhibiting implicit coordination. Our framework extends our previous work on distributed learning of collaborative policies by introducing demonstrations from an expert MAPF algorithm during training, as well as careful reward shaping and environment sampling. Once learned, the resulting policy can be copied onto any number of agents and naturally scales to different team sizes and world dimensions. We present results on randomized worlds with up to 1024 agents and compare success rates against state-of-the-art MAPF algorithms. Finally, we show experimental results of the learned policies in a hybrid simulation of a factory mock-up involving both real-world and simulated robots.
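The abstract's central architectural claim is that a single policy, trained with a mix of reinforcement and imitation learning, can be copied onto any number of agents and executed on purely local observations. The sketch below illustrates that execution-and-imitation pattern in PyTorch under stated assumptions: the observation encoding (a small local occupancy window plus a goal-direction vector), the network shape, and all names (SharedPolicy, act, imitation_step) are hypothetical choices for illustration, not the paper's actual architecture, and the behavior-cloning step merely stands in for the expert demonstrations the abstract mentions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed observation encoding: a small multi-channel occupancy window
# centered on the agent, plus a 2-D unit vector pointing toward its goal.
OBS_CHANNELS, FOV = 4, 11   # hypothetical: 4 feature maps, 11x11 field of view
N_ACTIONS = 5               # stay + 4 cardinal moves

class SharedPolicy(nn.Module):
    """One network shared by all agents; decentralization comes from each
    agent evaluating it on its own partial observation at execution time."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(OBS_CHANNELS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * FOV * FOV + 2, N_ACTIONS)  # +2: goal vector

    def forward(self, local_maps, goal_vecs):
        h = self.encoder(local_maps)
        return self.head(torch.cat([h, goal_vecs], dim=-1))   # action logits

policy = SharedPolicy()

def act(local_maps, goal_vecs):
    """Each agent samples its own action from its own observation;
    no inter-agent communication is needed once the policy is trained."""
    with torch.no_grad():
        logits = policy(local_maps, goal_vecs)
        return torch.distributions.Categorical(logits=logits).sample()

def imitation_step(optimizer, local_maps, goal_vecs, expert_actions):
    """Behavior-cloning update on actions chosen by an expert MAPF planner,
    standing in for the expert demonstrations described in the abstract."""
    loss = F.cross_entropy(policy(local_maps, goal_vecs), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # The same weights serve any team size: one observation row per agent.
    n_agents = 8
    obs = torch.randn(n_agents, OBS_CHANNELS, FOV, FOV)
    goals = torch.randn(n_agents, 2)
    print(act(obs, goals))  # one independently chosen action per agent
```

Because act is evaluated independently per agent, adding agents only adds rows to the observation batch rather than triggering a joint re-plan, which is the property that lets the same weights scale across team sizes and world dimensions.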
