Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee

10/23/2022
by Tianyi Lin, et al.

We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of a convex-concave unconstrained min-max optimization problem. Compared to first-order methods, second-order methods for min-max optimization have received relatively little attention, since obtaining global convergence rates from second-order information is considerably more involved. In this paper, we highlight how second-order information can speed up the dynamics of dual extrapolation methods even under inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and that the averaged iterates converge to an ϵ-saddle point, measured by a gap function, within O(ϵ^{-2/3}) iterations. Our algorithms match the theoretically established lower bound in this context, and our analysis provides a simple and intuitive convergence argument for second-order methods without requiring any compactness assumptions. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
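To make the setup concrete, here is a minimal runnable sketch of the general idea on a quadratic toy problem. Everything below is an illustrative assumption rather than the paper's algorithm: the bilinear test function, the fixed regularization parameter lam, and the helper names (F, JF, gap, regularized_newton_step) are invented for this sketch, and the Levenberg-Marquardt-style step stands in for the paper's exact regularized Newton subproblem.

```python
import numpy as np

# Toy convex-concave problem (illustrative assumption, not the paper's
# test problem):
#   f(x, y) = (mu/2)||x||^2 + x^T A y - (mu/2)||y||^2
# The saddle-point operator F(z) = (grad_x f, -grad_y f) is monotone.
rng = np.random.default_rng(0)
n, mu = 5, 0.1
A = rng.standard_normal((n, n))

def F(z):
    x, y = z[:n], z[n:]
    return np.concatenate([mu * x + A @ y,        # grad_x f
                           -(A.T @ x - mu * y)])  # -grad_y f

def JF(z):
    # Jacobian of F; constant here because the toy f is quadratic.
    return np.block([[mu * np.eye(n), A],
                     [-A.T, mu * np.eye(n)]])

def gap(z):
    # Gap(x, y) = max_{y'} f(x, y') - min_{x'} f(x', y); both inner
    # problems have closed forms because f is mu-strongly convex in x
    # and mu-strongly concave in y.
    x, y = z[:n], z[n:]
    max_y = 0.5 * mu * x @ x + (A.T @ x) @ (A.T @ x) / (2 * mu)
    min_x = -(A @ y) @ (A @ y) / (2 * mu) - 0.5 * mu * y @ y
    return max_y - min_x

def regularized_newton_step(z, lam=0.1):
    # Solve (JF(z) + lam*I) d = -F(z): a Levenberg-Marquardt-style
    # regularized Newton step, a simplified stand-in for the paper's
    # exact second-order subproblem.
    return z + np.linalg.solve(JF(z) + lam * np.eye(2 * n), -F(z))

z, z_sum = np.ones(2 * n), np.zeros(2 * n)
for k in range(1, 51):
    z = regularized_newton_step(z)
    z_sum += z
z_bar = z_sum / 50  # averaged iterate, as in the paper's guarantee
print(f"gap at averaged iterate: {gap(z_bar):.2e}")
```

The gap is evaluated at the averaged iterate z_bar rather than the last iterate, since the abstract states the O(ϵ^{-2/3}) guarantee for the averaged iterates.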
