Optimal algorithms for smooth and strongly convex distributed optimization in networks

02/28/2017
by Kevin Scaman et al.

In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision ε > 0 in time O(√(κ_g)(1+Δτ)ln(1/ε)), where κ_g is the condition number of the (global) function to optimize, Δ is the diameter of the network, and τ (resp. 1) is the time needed to communicate values between two neighbors (resp. perform local computations). For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, that achieves a precision ε > 0 in time O(√(κ_l)(1+τ/√(γ))ln(1/ε)), where κ_l is the condition number of the local functions and γ is the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression.
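To make the centralized setting concrete, the following is a minimal sketch (not the paper's code or the MSDA method) of distributing Nesterov's accelerated gradient descent in a simulated master/slave configuration: each worker holds a local least-squares objective and returns its local gradient on request, and the master averages these gradients and applies the accelerated update. All names (n_workers, local_grad, etc.), the synthetic data, and the step-size/momentum choices are illustrative assumptions, not taken from the paper.

```python
# Sketch of Nesterov's accelerated gradient descent driven by gradients
# aggregated from workers (centralized / master-slave setting).
# Hypothetical setup: worker i holds f_i(x) = 1/(2 m_i) * ||A_i x - b_i||^2.
import numpy as np

rng = np.random.default_rng(0)
n_workers, m_i, d = 5, 20, 10
A = [rng.standard_normal((m_i, d)) for _ in range(n_workers)]
b = [rng.standard_normal(m_i) for _ in range(n_workers)]

def local_grad(i, x):
    """Gradient of the local least-squares loss held by worker i."""
    return A[i].T @ (A[i] @ x - b[i]) / m_i

def global_grad(x):
    """Master step: average the local gradients returned by all workers."""
    return sum(local_grad(i, x) for i in range(n_workers)) / n_workers

# Smoothness L and strong convexity mu of the global function, computed here
# from the averaged Hessian (possible because each f_i is quadratic).
H = sum(A[i].T @ A[i] / m_i for i in range(n_workers)) / n_workers
eigs = np.linalg.eigvalsh(H)          # ascending eigenvalues
L, mu = eigs[-1], eigs[0]
kappa_g = L / mu                      # global condition number kappa_g
q = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))  # momentum weight

# Accelerated gradient descent on the aggregated gradient.
x = y = np.zeros(d)
for _ in range(200):
    x_next = y - global_grad(y) / L   # gradient step at the extrapolated point
    y = x_next + q * (x_next - x)     # Nesterov extrapolation
    x = x_next

print("condition number kappa_g ≈", kappa_g)
print("gradient norm after 200 iterations:", np.linalg.norm(global_grad(x)))
```

In the decentralized setting of the abstract, the gradient averaging above would instead be carried out by gossip steps with a gossip matrix whose normalized eigengap is γ; MSDA additionally accelerates those communication steps, which this sketch does not attempt to reproduce.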
