Learning in Multi-Player Stochastic Games

10/25/2022
by William Brown, et al.

We consider the problem of simultaneous learning in stochastic games with many players in the finite-horizon setting. While the typical target solution for a stochastic game is a Nash equilibrium, this is intractable with many players. We instead focus on variants of correlated equilibria, such as those studied for extensive-form games. We begin with a hardness result for the adversarial MDP problem: even for a horizon of 3, obtaining sublinear regret against the best non-stationary policy is NP-hard when both rewards and transitions are adversarial. This implies that convergence to even the weakest natural solution concept – normal-form coarse correlated equilibrium – is not possible via black-box reduction to a no-regret algorithm, even in stochastic games with constant horizon (unless NP ⊆ BPP). Instead, we turn to a different target: algorithms which generate an equilibrium when they are used by all players. Our main result is an algorithm which generates an extensive-form correlated equilibrium, whose runtime is exponential in the horizon but polynomial in all other parameters. We give a similar algorithm which is polynomial in all parameters for "fast-mixing" stochastic games. We also show a method for efficiently reaching normal-form coarse correlated equilibria in "single-controller" stochastic games, which follows the traditional no-regret approach. When shared randomness is available, the two generative algorithms can be extended to give simultaneous regret bounds and converge in the traditional sense.
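For intuition about the "traditional no-regret approach" referenced above: it is a folklore fact that when every player of a finite normal-form game runs a no-regret algorithm such as Hedge (multiplicative weights), the empirical distribution of joint play converges to a normal-form coarse correlated equilibrium. The paper's hardness result shows this black-box route breaks down for adversarial MDPs with many players. Below is a minimal Python sketch of that classical reduction only; the game, step size `eta`, and horizon `T` are illustrative assumptions, and this is not the paper's algorithm.

```python
import numpy as np

def hedge_no_regret(payoffs, T=5000, eta=0.05, seed=0):
    """Every player runs Hedge in a finite normal-form game.

    payoffs: list of n arrays, all of shape (A_1, ..., A_n); payoffs[i]
             gives player i's payoff for each joint action profile.
    Returns the empirical distribution over joint profiles, which
    approaches a normal-form coarse correlated equilibrium as T grows.
    """
    rng = np.random.default_rng(seed)
    n = len(payoffs)
    shape = payoffs[0].shape                      # (A_1, ..., A_n)
    weights = [np.ones(shape[i]) for i in range(n)]
    counts = np.zeros(shape)                      # empirical joint play

    for _ in range(T):
        # Each player samples an action from its current Hedge distribution.
        probs = [w / w.sum() for w in weights]
        actions = tuple(int(rng.choice(shape[i], p=probs[i])) for i in range(n))
        counts[actions] += 1

        # Full-information update: player i sees the counterfactual payoff
        # of each of its actions against the others' realized actions.
        for i in range(n):
            idx = list(actions)
            payoff_i = np.array([
                payoffs[i][tuple(idx[:i] + [a] + idx[i + 1:])]
                for a in range(shape[i])
            ])
            weights[i] *= np.exp(eta * payoff_i)
            weights[i] /= weights[i].sum()        # renormalize for stability

    return counts / T

# Example (illustrative): matching pennies. The empirical joint
# distribution drifts toward uniform, a coarse correlated equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(hedge_no_regret([A, -A]))
```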
