Multi Robot Collision Avoidance by Learning Whom to Communicate
Agents in decentralized multi-agent navigation lack the world knowledge to reliably make safe and (near-)optimal plans. They base their decisions on their neighbors' observable states, which hide the neighbors' navigation intent. We propose augmenting decentralized navigation with inter-agent communication to improve performance and aid agents in making sound navigation decisions. To this end, we present a novel reinforcement learning method for multi-agent collision avoidance using selective inter-agent communication. Our network learns to decide 'when' and with 'whom' to communicate to request additional information in an end-to-end fashion. We pose communication selection as a link prediction problem, where the network predicts whether communication is necessary given the observable information. The communicated information augments the observed neighbor information to select a suitable navigation plan. Because the number of neighbors of a robot varies, we use a multi-head self-attention mechanism to encode neighbor information into a fixed-length observation vector. We validate that our proposed approach achieves safe and efficient navigation among multiple robots in challenging simulation benchmarks. Aided by learned communication, our network performs significantly better than existing decentralized methods across various metrics such as time-to-goal and collision frequency. Moreover, we show that the network effectively learns to communicate when necessary in highly complex situations.
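The fixed-length encoding of a variable number of neighbors can be sketched as follows. This is a minimal illustration of multi-head self-attention with mean pooling, not the paper's actual architecture: the feature dimensions, head count, and random projection weights (standing in for learned parameters) are all assumptions.

```python
import numpy as np

def encode_neighbors(neighbors, num_heads=2, d_model=8, seed=0):
    """Encode a variable-size set of neighbor feature vectors into a
    fixed-length observation vector: multi-head self-attention over the
    neighbor set, then mean pooling across neighbors.

    neighbors: (n, d_in) array, one row per observed neighbor.
    Returns a (d_model,) vector regardless of n.
    """
    rng = np.random.default_rng(seed)
    n, d_in = neighbors.shape
    d_head = d_model // num_heads
    # Random projections stand in for learned Q/K/V weight matrices.
    Wq = rng.standard_normal((num_heads, d_in, d_head))
    Wk = rng.standard_normal((num_heads, d_in, d_head))
    Wv = rng.standard_normal((num_heads, d_in, d_head))

    heads = []
    for h in range(num_heads):
        Q = neighbors @ Wq[h]                      # (n, d_head)
        K = neighbors @ Wk[h]
        V = neighbors @ Wv[h]
        scores = Q @ K.T / np.sqrt(d_head)         # scaled dot-product
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
        heads.append(w @ V)                        # (n, d_head)

    attended = np.concatenate(heads, axis=1)       # (n, d_model)
    return attended.mean(axis=0)                   # pooled, fixed length

# Three neighbors and five neighbors yield the same output dimension,
# so downstream policy layers can consume a fixed-size input.
enc3 = encode_neighbors(np.ones((3, 4)))
enc5 = encode_neighbors(np.ones((5, 4)))
assert enc3.shape == enc5.shape == (8,)
```

The key property is that the pooling step makes the encoder permutation-invariant and independent of the neighbor count, which is what allows a fixed-size policy input.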