Improved Algorithms for Convex-Concave Minimax Optimization

06/11/2020
by Yuanhao Wang, et al.

This paper studies minimax optimization problems min_x max_y f(x,y), where f(x,y) is m_x-strongly convex with respect to x, m_y-strongly concave with respect to y, and (L_x,L_xy,L_y)-smooth. <cit.> established the following lower bound on the gradient complexity of any first-order method: Ω(√(L_x/m_x+L_xy^2/(m_x m_y)+L_y/m_y)ln(1/ϵ)). This paper proposes a new algorithm with gradient complexity upper bound Õ(√(L_x/m_x+L·L_xy/(m_x m_y)+L_y/m_y)ln(1/ϵ)), where L=max{L_x,L_xy,L_y}. This improves over the best known upper bound, Õ(√(L^2/(m_x m_y))ln^3(1/ϵ)), due to <cit.>. Our bound achieves a linear convergence rate and a tighter dependency on the condition numbers, especially when L_xy≪L (i.e., when the interaction between x and y is weak). Via reduction, our new bound also implies improved bounds for strongly-convex-concave and convex-concave minimax optimization problems. When f is quadratic, we can further improve the upper bound, which then matches the lower bound up to a sub-polynomial factor.
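The problem class above can be illustrated on a simple quadratic instance. The sketch below runs plain simultaneous gradient descent-ascent on f(x,y) = (m_x/2)x² + L_xy·xy − (m_y/2)y², whose unique saddle point is (0,0). This baseline and its constants are assumptions for illustration only; it is not the accelerated algorithm the paper proposes:

```python
# Minimal sketch: simultaneous gradient descent-ascent (GDA) on a
# quadratic strongly-convex-strongly-concave objective. Illustrates the
# problem class only; NOT the paper's algorithm, and all constants here
# (m_x, m_y, l_xy, step size) are arbitrary assumptions.

def gda_saddle(m_x=1.0, m_y=1.0, l_xy=0.5, step=0.1, iters=200):
    """Run GDA on f(x, y) = (m_x/2) x^2 + l_xy * x * y - (m_y/2) y^2.

    f is m_x-strongly convex in x and m_y-strongly concave in y, with
    coupling strength l_xy; the unique saddle point is (0, 0).
    """
    x, y = 1.0, 1.0  # arbitrary starting point
    for _ in range(iters):
        grad_x = m_x * x + l_xy * y   # df/dx
        grad_y = l_xy * x - m_y * y   # df/dy
        # descend in x, ascend in y (simultaneous updates)
        x, y = x - step * grad_x, y + step * grad_y
    return x, y

x, y = gda_saddle()
print(abs(x), abs(y))  # both shrink toward 0: GDA reaches the saddle point
```

For this instance the GDA iteration is a linear map with spectral radius below one, so the iterates contract linearly toward the saddle point; the paper's contribution is a method whose linear rate depends more tightly on the constants L_x/m_x, L_xy, and L_y/m_y than such generic baselines.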
