Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization

08/11/2022
by Lesi Chen, et al.

We study the problem of finding a near-stationary point in smooth minimax optimization. The recently proposed extra anchored gradient (EAG) methods achieve the optimal convergence rate for convex-concave minimax problems in the deterministic setting. However, the direct extension of EAG to stochastic optimization is not efficient. In this paper, we design a novel stochastic algorithm called Recursive Anchored IteratioN (RAIN). We show that RAIN achieves near-optimal stochastic first-order oracle complexity for stochastic minimax optimization in both the convex-concave and strongly-convex-strongly-concave cases.
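For context, the deterministic EAG scheme the abstract refers to is an extragradient-style method in which every step is also pulled back toward the initial point (the "anchor") with a decaying coefficient. The sketch below is a minimal illustration on the toy bilinear problem f(x, y) = x·y, not the paper's RAIN algorithm; the step size alpha and the anchor coefficient beta_k = 1/(k+2) follow the published constant-step EAG analysis, and all function and variable names here are ours.

```python
import numpy as np

def F(z):
    """Saddle-point operator of f(x, y) = x * y.

    F(z) = (df/dx, -df/dy) = (y, -x); its unique zero is the saddle (0, 0).
    """
    x, y = z
    return np.array([y, -x])

def eag(z0, n_iters=200, alpha=0.1):
    """Extra anchored gradient (EAG) with a constant step size.

    Each iteration first pulls the iterate toward the anchor z0 with
    coefficient beta_k = 1/(k + 2), then takes an extragradient-style
    step: the operator is re-evaluated at the half iterate.
    """
    z = np.asarray(z0, dtype=float)
    anchor = z.copy()
    for k in range(n_iters):
        beta = 1.0 / (k + 2)
        # Extrapolation step, anchored toward the initial point.
        z_half = z + beta * (anchor - z) - alpha * F(z)
        # Update step: same anchoring term, but F evaluated at z_half.
        z = z + beta * (anchor - z) - alpha * F(z_half)
    return z

if __name__ == "__main__":
    z_final = eag(np.array([1.0, 1.0]))
    print("final iterate:", z_final)
    print("gradient norm:", np.linalg.norm(F(z_final)))
```

On this toy instance the operator norm ||F(z_k)|| decays at the accelerated O(1/k) rate that makes EAG optimal in the deterministic convex-concave setting; the abstract's point is that naively replacing F with stochastic estimates loses efficiency, which is what RAIN is designed to avoid.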
