Learning with minimal information in continuous games

06/29/2018
by Sebastian Bervoets et al.

We introduce a stochastic learning process called the dampened gradient approximation process. While learning models have almost exclusively focused on finite games, in this paper we design a learning process for games with continuous action sets. It is payoff-based and thus requires no sophistication from players and no knowledge of the game. We show that, despite such limited information, players converge to Nash equilibrium in large classes of games. In particular, convergence to a stable Nash equilibrium is guaranteed in all games with strategic complements as well as in concave games; in all locally ordinal potential games, convergence to Nash often occurs; and in all games with isolated equilibria, convergence to a stable Nash equilibrium occurs with positive probability.
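The paper's exact process is not spelled out in the abstract, but the idea of payoff-based learning with dampened steps can be illustrated with a generic simultaneous-perturbation scheme on a hypothetical two-player quadratic game with strategic complements. Everything below (the payoff function, the probe size, the step-size schedule) is an assumption for illustration, not the authors' construction: each player observes only her own payoff at probed actions, forms a noisy gradient estimate from payoff differences, and takes steps that are dampened over time.

```python
import numpy as np

# Illustrative sketch only: a generic payoff-based learning dynamic with
# dampened (decreasing) step sizes, NOT the paper's exact process.
# Hypothetical game: u_i(x) = x_i - x_i^2 + 0.3 * x_i * x_j, which has
# strategic complements and a unique symmetric Nash at x* = 1/1.7.

def payoff(i, x):
    """Player i's payoff in the hypothetical quadratic game."""
    j = 1 - i
    return x[i] - x[i] ** 2 + 0.3 * x[i] * x[j]

def learn(steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([0.0, 0.0])   # arbitrary starting actions
    delta = 0.05               # size of the payoff probes
    for n in range(steps):
        # Each player privately draws a probe direction (+1 or -1).
        z = rng.choice([-1.0, 1.0], size=2)
        # Players only observe their own payoffs at the probed profiles.
        up = np.array([payoff(i, x + delta * z) for i in range(2)])
        dn = np.array([payoff(i, x - delta * z) for i in range(2)])
        # Payoff-based gradient estimate (unbiased here, noisy because
        # the opponent's probe perturbs one's own payoff).
        g = (up - dn) / (2 * delta * z)
        # Dampened step size a_n = 1/(n+1) averages out the noise.
        x = x + g / (n + 1)
    return x

x_star = 1 / 1.7   # symmetric Nash: x solves x = (1 + 0.3 * x) / 2
```

Running `learn()` with the settings above drives both actions close to `x_star`, even though no player ever sees the payoff function, the opponent's action, or a gradient; the dampening of the step sizes is what lets the noisy payoff-based estimates average into a convergent dynamic.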

Related research:
- Stability of Gradient Learning Dynamics in Continuous Games: Scalar Action Spaces (11/07/2020)
- Strategic Teaching and Learning in Games (04/23/2015)
- Games on Endogenous Networks (02/02/2021)
- Game-theoretical control with continuous action sets (12/01/2014)
- Locally-Aware Constrained Games on Networks (11/19/2020)
- Stochastic Stability of Reinforcement Learning in Positive-Utility Games (09/18/2017)
- Learning in Games with Quantized Payoff Observations (09/11/2022)
