Second-order Guarantees of Distributed Gradient Algorithms

09/23/2018
by Amir Daneshmand et al.

We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned Distributed Gradient Descent (DGD) algorithm likely converges to a neighborhood of a Second-order Stationary (SoS) solution; and (ii) the more recent class of distributed algorithms based on gradient tracking (implementable also over digraphs) likely converges to exact SoS solutions, thus avoiding (strict) saddle points. Furthermore, a convergence rate is provided for the latter class of algorithms.
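
The abstract contrasts two families of update rules. As a concrete illustration, the following minimal NumPy sketch shows the textbook forms of the DGD and gradient-tracking iterations on a toy three-agent network; the names (W, local_grads, alpha), the quadratic objectives, and all numerical choices are illustrative assumptions, not the paper's code.

import numpy as np

def dgd_step(X, W, local_grads, alpha):
    # One DGD iteration: each agent i averages its neighbors' iterates and
    # takes a step along its own local gradient,
    #     x_i <- sum_j W[i, j] * x_j - alpha * grad f_i(x_i).
    # X: (n_agents, d) stacked local iterates; W: doubly stochastic mixing matrix.
    G = np.stack([g(x) for g, x in zip(local_grads, X)])
    return W @ X - alpha * G

def gradient_tracking_step(X, Y, W, local_grads, alpha, prev_G):
    # One gradient-tracking iteration: each agent descends along a tracked
    # estimate y_i of the network-average gradient,
    #     x_i <- sum_j W[i, j] * x_j - alpha * y_i,
    #     y_i <- sum_j W[i, j] * y_j + grad f_i(x_i_new) - grad f_i(x_i_old).
    X_new = W @ X - alpha * Y
    G_new = np.stack([g(x) for g, x in zip(local_grads, X_new)])
    Y_new = W @ Y + G_new - prev_G
    return X_new, Y_new, G_new

# Toy example: 3 agents minimizing the average of f_i(x) = 0.5 * (x - c_i)^2,
# whose global minimizer is mean(c) = 3.0.
rng = np.random.default_rng(0)
c = np.array([[1.0], [2.0], [6.0]])
local_grads = [lambda x, ci=ci: x - ci for ci in c]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])  # doubly stochastic mixing matrix (connected graph)

# Gradient tracking: all agents converge to the exact minimizer.
X = rng.standard_normal((3, 1))
Y = np.stack([g(x) for g, x in zip(local_grads, X)])  # y_i^0 = grad f_i(x_i^0)
G = Y.copy()
for _ in range(200):
    X, Y, G = gradient_tracking_step(X, Y, W, local_grads, alpha=0.1, prev_G=G)
print(X.ravel())  # all entries approach 3.0

# DGD with the same fixed step size converges only to an O(alpha) neighborhood
# of the minimizer, matching the neighborhood guarantee in point (i).
X_dgd = rng.standard_normal((3, 1))
for _ in range(200):
    X_dgd = dgd_step(X_dgd, W, local_grads, alpha=0.1)
print(X_dgd.ravel())  # close to, but biased away from, 3.0

The contrast in the printed output mirrors the abstract's distinction: with a fixed step size, DGD settles near (but not at) the solution, while gradient tracking drives all agents to the exact consensual minimizer.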
