Global convergence rates of augmented Lagrangian methods for constrained convex programming

11/15/2017
by Yangyang Xu, et al.

The augmented Lagrangian method (ALM) has been widely used for solving constrained optimization problems. Its convergence and local convergence speed have been extensively studied. However, its global convergence rate remains open for problems with nonlinear inequality constraints. In this paper, we work on general constrained convex programs. For these problems, we establish the global convergence rate of the ALM and its inexact variants. We first assume that each subproblem in the ALM framework is solved exactly and establish an O(1/k) ergodic convergence result, where k is the number of iterations. We then analyze an inexact ALM that solves the subproblems approximately. Assuming summable errors, we prove that the inexact ALM also enjoys O(1/k) convergence if smaller stepsizes are used in the multiplier updates. Furthermore, we apply the inexact ALM to a constrained composite convex problem, solving each subproblem by Nesterov's optimal first-order method. We show that O(ε^{-3/2-δ}) gradient evaluations suffice to guarantee an ε-optimal solution in terms of both the primal objective and the feasibility violation, where δ is an arbitrary positive number. Finally, for constrained smooth problems, we modify the inexact ALM by adding a proximal term to each subproblem and improve the iteration complexity to O(ε^{-1}|log ε|).
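To make the method concrete, below is a minimal numerical sketch of the inexact ALM loop described in the abstract, applied to a toy convex program min f(x) subject to g(x) ≤ 0. The problem instance, the use of scipy's generic solver for the subproblems (the paper instead analyzes Nesterov's optimal first-order method), and the parameter values beta, rho, and iters are illustrative assumptions, not the paper's exact setup. Note that the multiplier stepsize rho is taken smaller than the penalty parameter beta, mirroring the smaller-stepsize condition in the inexact analysis.

```python
# Illustrative sketch of an inexact ALM iteration; the toy problem and
# parameters are assumptions for demonstration, not the paper's setup.
import numpy as np
from scipy.optimize import minimize

def f(x):                        # objective: convex quadratic
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def g(x):                        # inequality constraint g(x) <= 0
    return x[0] + x[1] - 1.0

def aug_lag(x, lam, beta):
    # Classic augmented Lagrangian for an inequality constraint:
    # L_beta(x, lam) = f(x) + (1/(2*beta)) * (max(0, lam + beta*g(x))^2 - lam^2)
    return f(x) + (max(0.0, lam + beta * g(x))**2 - lam**2) / (2.0 * beta)

def inexact_alm(x0, beta=10.0, rho=5.0, iters=30, subtol=1e-8):
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        # Approximately solve the ALM subproblem (generic solver here;
        # the paper solves it with Nesterov's optimal first-order method).
        res = minimize(aug_lag, x, args=(lam, beta), tol=subtol)
        x = res.x
        # Multiplier update with stepsize rho, projected onto lam >= 0.
        lam = max(0.0, lam + rho * g(x))
    return x, lam

x_star, lam_star = inexact_alm([0.0, 0.0])
print(x_star, lam_star)  # approaches x* = (0, 1), lam* = 2, with g(x*) = 0
```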
