Offline Learning in Markov Games with General Function Approximation

02/06/2023
by Yuheng Zhang, et al.

We study offline multi-agent reinforcement learning (RL) in Markov games, where the goal is to learn an approximate equilibrium – such as a Nash equilibrium or a (coarse) correlated equilibrium – from an offline dataset pre-collected from the game. Existing works consider relatively restricted tabular or linear models and handle each equilibrium concept separately. In this work, we provide the first framework for sample-efficient offline learning in Markov games under general function approximation, handling all three notions of equilibrium in a unified manner. Using Bellman-consistent pessimism, we obtain interval estimates of policies' returns, and we use both the upper and the lower bounds to obtain a relaxation of the gap of a candidate policy, which becomes our optimization objective. Our results generalize prior works and provide several additional insights. Importantly, we require a data coverage condition that improves over the recently proposed "unilateral concentrability". Our condition allows selective coverage of deviation policies that optimally trade off their greediness (as approximate best responses) against their coverage, and we show scenarios where this leads to significantly better guarantees. As a new connection, we also show how our algorithmic framework can subsume seemingly different solution concepts designed for the special case of two-player zero-sum games.
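To make the objective described above concrete, here is a schematic rendering in our own notation (the paper's exact construction may differ). Suppose Bellman-consistent pessimism yields, for each player i and each joint policy \pi, a data-driven interval containing the true return,

\[
\underline{V}_i(\pi) \;\le\; V_i(\pi) \;\le\; \overline{V}_i(\pi).
\]

Then the equilibrium gap of \pi admits the relaxation

\[
\mathrm{Gap}(\pi) \;=\; \max_{i}\Bigl(\sup_{\pi_i'} V_i(\pi_i', \pi_{-i}) \;-\; V_i(\pi)\Bigr)
\;\le\; \max_{i}\Bigl(\sup_{\pi_i'} \overline{V}_i(\pi_i', \pi_{-i}) \;-\; \underline{V}_i(\pi)\Bigr),
\]

and the learner can output a candidate policy minimizing the right-hand side, which is computable from the offline data alone whenever the confidence bounds are valid.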
