Adaptive and Universal Single-gradient Algorithms for Variational Inequalities
Variational inequalities with monotone operators capture many problems of interest, notably convex optimization and convex-concave saddle point problems. Classical methods based on the MirrorProx algorithm require two operator evaluations per iteration, which is the dominant factor in the running time in many settings. Additionally, these methods typically require careful setting of the step sizes based on problem parameters such as smoothness. In this work, we develop new algorithms that address both of these shortcomings simultaneously. Our algorithms use a single operator evaluation per iteration and automatically adapt to problem parameters such as smoothness. We show that our algorithms are universal, simultaneously achieving the optimal convergence rates in the non-smooth, smooth, and stochastic settings.
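To illustrate the single-call idea, the sketch below runs a generic optimistic (past extra-gradient) update with an AdaGrad-style step size on an unconstrained bilinear saddle-point problem: each iteration evaluates the operator once and reuses the stored value from the previous iteration as the prediction. This is a minimal sketch of the general technique under our own assumptions, not necessarily the paper's algorithm; the names `adaptive_single_call`, `D`, and `eps`, and all parameter choices, are hypothetical.

```python
import numpy as np

def F(z, A, n):
    # Monotone operator of the bilinear saddle point min_x max_y x^T A y:
    # F(x, y) = (A y, -A^T x).
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

def adaptive_single_call(A, n, T=5000, D=1.0, eps=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(2 * n)   # random start (z = 0 is already the solution here)
    g_prev = np.zeros(2 * n)         # stored operator value from the previous iteration
    accum = eps                      # running sum of squared prediction errors
    z_sum = np.zeros(2 * n)
    for _ in range(T):
        g = F(z, A, n)                      # the single operator evaluation per iteration
        accum += np.linalg.norm(g - g_prev) ** 2
        eta = D / np.sqrt(accum)            # AdaGrad-style step size: it shrinks only when
                                            # prediction errors are large, so it adapts to
                                            # smoothness without knowing it in advance
        z = z - eta * (2.0 * g - g_prev)    # optimistic update reusing the stored g_prev
        g_prev = g
        z_sum += z
    return z_sum / T                        # averaged iterate (standard for monotone VIs)

A = np.random.default_rng(1).standard_normal((5, 5))
z_bar = adaptive_single_call(A, n=5)
print("residual ||F(z_bar)|| =", np.linalg.norm(F(z_bar, A, 5)))  # should be small for large T
```

The contrast with MirrorProx is visible in the loop body: MirrorProx would evaluate F a second time at an extrapolated point, whereas here the stored `g_prev` plays that role, and the step size is set from observed quantities rather than from a known smoothness constant.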