On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems

06/02/2019
by   Tianyi Lin, et al.

We consider nonconvex-concave minimax problems, min_x max_{y∈Y} f(x, y), where f is nonconvex in x but concave in y. The standard algorithm for solving this problem is the celebrated gradient descent ascent (GDA) algorithm, which has been widely used in machine learning, control theory and economics. However, despite the solid theory for the convex-concave setting, GDA can converge to limit cycles or even diverge in a general setting. In this paper, we present a nonasymptotic analysis of GDA for solving nonconvex-concave minimax problems, showing that GDA can find a stationary point of the function Φ(·) := max_{y∈Y} f(·, y) efficiently. To the best of our knowledge, this is the first theoretical guarantee for GDA in this setting, shedding light on its practical performance in many real applications.
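To make the setup concrete, the following is a minimal sketch of the GDA iteration on a toy nonconvex-concave problem. The objective f(x, y) = sin(x) + x·y − y²/2 over Y = [−1, 1], the step sizes, and the iteration count are all illustrative choices, not taken from the paper; the projection onto Y enforces the constraint on the maximizing variable.

```python
import numpy as np

def gda(grad_x, grad_y, project_y, x0, y0, eta_x, eta_y, iters):
    """Simultaneous gradient descent ascent: descend in x, ascend in y,
    projecting y back onto the constraint set Y after each step."""
    x, y = x0, y0
    for _ in range(iters):
        gx = grad_x(x, y)
        gy = grad_y(x, y)
        x = x - eta_x * gx            # descent step on the nonconvex variable
        y = project_y(y + eta_y * gy)  # projected ascent step on the concave variable
    return x, y

# Toy example (illustrative, not from the paper):
# f(x, y) = sin(x) + x*y - y**2/2, nonconvex in x, strongly concave in y,
# with Y = [-1, 1].
grad_x = lambda x, y: np.cos(x) + y
grad_y = lambda x, y: x - y
project_y = lambda y: np.clip(y, -1.0, 1.0)

x_final, y_final = gda(grad_x, grad_y, project_y,
                       x0=1.0, y0=0.0, eta_x=0.05, eta_y=0.05, iters=2000)
```

Note that the analysis in this line of work typically requires a two-timescale choice, with the descent step size eta_x much smaller than the ascent step size eta_y; the equal step sizes above are only for illustration.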
