Event-Triggered Multi-agent Reinforcement Learning with Communication under Limited-bandwidth Constraint
Communicating with each other in a distributed manner and behaving as a group are essential in multi-agent reinforcement learning. However, real-world multi-agent systems operate under limited-bandwidth communication constraints. If the bandwidth is fully occupied, some agents cannot send messages to others promptly, causing decision delays and impairing cooperation. Recent work has begun to address this problem but still falls short of maximally reducing the consumption of communication resources. In this paper, we propose the Event-Triggered Communication Network (ETCNet) to improve communication efficiency in multi-agent systems by sending messages only when necessary. Drawing on information theory, the limited bandwidth is translated into the penalty threshold of an event-triggered strategy, which determines at each step whether an agent sends a message. The design of the event-triggered strategy is then formulated as a constrained Markov decision problem, and reinforcement learning finds the communication protocol that best satisfies the bandwidth constraint. Experiments on typical multi-agent tasks demonstrate that ETCNet outperforms other methods in reducing bandwidth occupancy while largely preserving the cooperative performance of the multi-agent system.
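To illustrate the general idea behind event-triggered communication (not the paper's learned gating policy), the following minimal sketch gates transmission on how far an agent's current message has drifted from the last one it actually sent; here `threshold` is a hypothetical stand-in for the penalty threshold derived from the bandwidth constraint:

```python
import numpy as np

def event_triggered_send(last_sent, new_msg, threshold):
    """Generic event-triggered gating rule (illustrative, not ETCNet itself).

    Transmit only when the new message deviates enough from the last
    message actually sent; otherwise stay silent and let peers reuse
    the previously received message.
    """
    if np.linalg.norm(new_msg - last_sent) > threshold:
        return True, new_msg    # event fired: transmit, update last-sent state
    return False, last_sent     # no event: skip transmission, save bandwidth

# Toy usage: a slowly drifting local message triggers few transmissions.
rng = np.random.default_rng(0)
last_sent = np.zeros(4)
msg = np.zeros(4)
sent_count = 0
for _ in range(100):
    msg = msg + rng.normal(scale=0.01, size=4)  # slowly drifting local state
    fired, last_sent = event_triggered_send(last_sent, msg, threshold=0.5)
    sent_count += fired
```

Under this rule the agent stays silent for most steps, which is the bandwidth-saving behavior the event-triggered strategy is trained to achieve subject to the constraint.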