Investigating the Impact of Direct Punishment on the Emergence of Cooperation in Multi-Agent Reinforcement Learning Systems
The problem of cooperation is of fundamental importance for human societies, with examples ranging from navigating road junctions to negotiating climate treaties. As the use of AI becomes more pervasive within society, the need for socially intelligent agents capable of navigating these complex dilemmas is becoming increasingly evident. Direct punishment is a ubiquitous social mechanism that has been shown to foster the emergence of cooperation in the natural world; however, no prior work has investigated its impact on populations of learning agents. Moreover, although the use of all forms of punishment in the natural world is strongly coupled with partner selection and reputation, no existing work has provided a holistic analysis of their combination within multi-agent systems. In this paper, we present a comprehensive analysis of the behaviors and learning dynamics associated with direct punishment in multi-agent reinforcement learning systems, comparing it to third-party punishment when each form of punishment is combined with other social mechanisms such as partner selection and reputation. We provide an extensive and systematic evaluation of the impact of these key mechanisms on the emergence of cooperation. Finally, we discuss the implications of these mechanisms for the design of cooperative AI systems.