Regret Pruning for Learning Equilibria in Simulation-Based Games
In recent years, empirical game-theoretic analysis (EGTA) has emerged as a powerful tool for analyzing games in which an exact specification of the utilities is unavailable. Instead, EGTA assumes access to an oracle, i.e., a simulator, which can generate unbiased noisy samples of the players' unknown utilities for any given strategy profile; utilities can thus be estimated empirically by querying the simulator repeatedly. Recently, various progressive sampling (PS) algorithms have been proposed that produce PAC-style learning guarantees (e.g., approximate Nash equilibria with high probability) using as few simulator queries as possible. One recent work introduces a technique called regret pruning, which further reduces the number of simulator queries issued by PS algorithms that aim to learn pure Nash equilibria. In this paper, we address a serious limitation of the original regret pruning approach: it can only guarantee that exact pure Nash equilibria of the empirical game are approximate equilibria of the true game, and it provides no strong guarantees for approximate pure Nash equilibria of the empirical game. This is a significant limitation, since in many games pure Nash equilibria are computationally intractable to find, or do not exist at all. We introduce three novel regret pruning variations. The first two generalize the original approach to yield guarantees for approximate pure Nash equilibria of the empirical game; the third goes further, yielding strong guarantees for all approximate mixed Nash equilibria of the empirical game. We use these variations to design two novel progressive sampling algorithms, PS-REG+ and PS-REG-M, which experimentally outperform the previous state-of-the-art algorithms for learning pure and mixed equilibria, respectively, of simulation-based games.
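To make the EGTA setting concrete, the sketch below illustrates the basic pipeline the abstract describes: utilities of a simulation-based game are estimated by averaging noisy simulator samples, and candidate profiles are checked for approximate pure Nash equilibrium via their empirical regret. This is only an illustration of the setting, not the paper's regret pruning method or the PS-REG+ / PS-REG-M algorithms; the game sizes, noise model, and all function names are illustrative assumptions.

```python
# Minimal EGTA sketch (illustrative assumptions throughout; not the paper's algorithm).
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_players, n_strategies = 2, 3
# Hidden "true" utilities; in real EGTA these are unknown and only accessible via sampling.
true_utils = rng.uniform(0, 1, size=(n_strategies,) * n_players + (n_players,))

def simulate(profile, noise=0.1):
    """Oracle: one unbiased, noisy utility sample for a pure-strategy profile."""
    return true_utils[profile] + rng.normal(0, noise, size=n_players)

def estimate_empirical_game(num_samples=200):
    """Estimate every profile's utility vector by averaging repeated simulator queries."""
    emp = {}
    for profile in itertools.product(range(n_strategies), repeat=n_players):
        emp[profile] = np.mean([simulate(profile) for _ in range(num_samples)], axis=0)
    return emp

def regret(emp, profile, player):
    """Empirical gain the player could obtain by deviating unilaterally from the profile."""
    best_dev = max(
        emp[profile[:player] + (s,) + profile[player + 1:]][player]
        for s in range(n_strategies)
    )
    return best_dev - emp[profile][player]

emp = estimate_empirical_game()
eps = 0.05
for profile in itertools.product(range(n_strategies), repeat=n_players):
    if all(regret(emp, profile, p) <= eps for p in range(n_players)):
        print(f"{profile} is an {eps}-pure Nash equilibrium of the empirical game")
```

A PS algorithm differs from this naive sketch by adapting the number of samples per profile to statistical confidence bounds, and regret pruning further skips queries for profiles whose regret bounds already rule them out.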