Mean-Field Approximation of Cooperative Constrained Multi-Agent Reinforcement Learning (CMARL)

09/15/2022 Ā· by Washim Uddin Mondal, et al.

Mean-Field Control (MFC) has recently been shown to be a scalable tool for approximately solving large-scale multi-agent reinforcement learning (MARL) problems. However, these studies are typically limited to the unconstrained cumulative-reward maximization framework. In this paper, we show that the MFC approach can be used to approximate the MARL problem even in the presence of constraints. Specifically, we prove that an N-agent constrained MARL problem, whose individual agents have state and action spaces of sizes |š’³| and |š’°| respectively, can be approximated by an associated constrained MFC problem with an error e ā‰œ š’Ŗ([āˆš|š’³| + āˆš|š’°|]/āˆšN). In the special case where the reward, cost, and state transition functions are independent of the action distribution of the population, we prove that the error improves to e = š’Ŗ(āˆš|š’³|/āˆšN). We also provide a Natural Policy Gradient based algorithm and prove that it solves the constrained MARL problem within an error of š’Ŗ(e) with a sample complexity of š’Ŗ(e^āˆ’6).
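To make the primal-dual, Natural-Policy-Gradient flavor of constrained control concrete, below is a minimal self-contained sketch on a toy 2-state, 2-action constrained MDP standing in for the mean-field limit. Everything here is hypothetical: the transition kernel P, reward r, cost c, budget, and step sizes are made-up numbers, and exact tabular computations replace the sampled estimates a practical algorithm would use. This is not the authors' algorithm; it only illustrates the standard simplification that, for tabular softmax policies, an NPG step reduces to adding the advantage function of the Lagrangian payoff, paired with dual ascent on the multiplier.

```python
import numpy as np

# Hypothetical 2-state, 2-action constrained MDP (all numbers made up).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.4, 0.6]]])   # P[s, a, s']
r = np.array([[1.0, 0.0], [0.5, 0.8]])     # reward r[s, a]
c = np.array([[0.8, 0.1], [0.6, 0.2]])     # constraint cost c[s, a]
budget, gamma = 2.0, 0.9                   # constraint budget, discount
rho = np.array([0.5, 0.5])                 # initial state distribution

def softmax_policy(theta):
    # Row-wise softmax; subtracting the max keeps exponents stable.
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def q_values(pi, payoff):
    # Solve (I - gamma * P_pi) V = r_pi exactly, then back out Q.
    r_pi = (pi * payoff).sum(axis=1)
    P_pi = np.einsum('sa,sat->st', pi, P)
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
    return payoff + gamma * P @ V, V       # Q[s, a], V[s]

theta, lam = np.zeros((2, 2)), 0.0
for _ in range(500):
    pi = softmax_policy(theta)
    # NPG on the Lagrangian payoff r - lam * c: for tabular softmax
    # policies the natural-gradient step adds the advantage function.
    Q, V = q_values(pi, r - lam * c)
    theta += 0.1 * (Q - V[:, None])
    # Dual ascent on lambda using the exact discounted constraint value,
    # projected back onto lambda >= 0.
    _, Vc = q_values(pi, c)
    lam = max(0.0, lam + 0.01 * (rho @ Vc - budget))

print("policy:\n", softmax_policy(theta), "\nlambda:", lam)
```

The dual variable lam rises while the discounted constraint cost exceeds the budget, which shifts the primal updates toward cheaper actions; in the sampled, mean-field setting analyzed in the paper, the exact Q-values above would be replaced by estimates, which is where the stated š’Ŗ(e^āˆ’6) sample complexity enters.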
