A distributed (preconditioned) projected-reflected-gradient algorithm for stochastic generalized Nash equilibrium problems
We consider the stochastic generalized Nash equilibrium problem (SGNEP) with joint feasibility constraints and expected-value cost functions. We propose a distributed stochastic preconditioned projected reflected gradient algorithm and prove its almost sure convergence when the pseudogradient mapping is cocoercive. The algorithm builds on monotone operator splitting methods for SGNEPs, with the expected-value pseudogradient approximated at each iteration via an increasing number of samples of the random variable, an approach known as sample average approximation. Finally, we show that a non-preconditioned variant of our proposed algorithm converges under less restrictive assumptions than state-of-the-art (preconditioned) forward-backward algorithms.
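To illustrate the core iteration, the following is a minimal, centralized, non-preconditioned sketch of a projected reflected gradient step combined with sample average approximation; it omits the distributed structure, the preconditioning, and the shared constraints of the paper's actual algorithm. The names `proj`, `sample_grad`, `gamma`, and `batch_schedule` are placeholders introduced here, not the paper's notation.

```python
import numpy as np

def projected_reflected_gradient_saa(proj, sample_grad, x0, gamma=0.1,
                                     num_iters=100,
                                     batch_schedule=lambda k: k + 1):
    """Sketch of a stochastic projected reflected gradient iteration
    with sample average approximation (SAA).

    Deterministic template:
        x_{k+1} = proj( x_k - gamma * F(2 x_k - x_{k-1}) ),
    where the expected-value pseudogradient F is replaced by an SAA
    estimate averaged over an increasing number of samples N_k.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(num_iters):
        y = 2.0 * x - x_prev                     # reflected point
        n_k = batch_schedule(k)                  # increasing sample size
        # SAA estimate of the pseudogradient at the reflected point
        F_hat = np.mean([sample_grad(y) for _ in range(n_k)], axis=0)
        x_prev, x = x, proj(x - gamma * F_hat)   # projected step
    return x

# Hypothetical toy usage: noisy linear pseudogradient, projection onto a box.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.array([[2.0, 0.5], [0.5, 2.0]])      # monotone linear map
    sample_grad = lambda y: A @ y + rng.normal(scale=0.1, size=2)
    proj = lambda z: np.clip(z, -1.0, 1.0)
    print(projected_reflected_gradient_saa(proj, sample_grad,
                                           x0=np.ones(2)))
```

One appeal of the reflected-gradient template over extragradient-type schemes is that it requires only a single (estimated) pseudogradient evaluation per iteration, with the reflection term 2x_k - x_{k-1} reusing the previous iterate.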