Parallel Restarted SPIDER – Communication Efficient Distributed Nonconvex Optimization with Optimal Computation Complexity

12/12/2019
by Pranay Sharma, et al.

In this paper, we propose a distributed algorithm for stochastic, smooth, non-convex optimization. We assume a worker-server architecture in which N nodes, each holding n (potentially infinitely many) samples, collaborate with the help of a central server to perform the optimization task. The global objective is to minimize the average of the local cost functions available at the individual nodes. The proposed approach is a non-trivial extension of the popular parallel-restarted SGD algorithm, into which we incorporate the variance-reduction-based SPIDER gradient estimator. We prove convergence of our algorithm to a first-order stationary solution. The proposed approach achieves the best known communication complexity O(ϵ^-1) along with optimal computation complexity. For finite-sum problems (finite n), we achieve the optimal incremental first-order oracle (IFO) complexity O(√(Nn) ϵ^-1); for online problems (n unknown or infinite), we achieve the optimal IFO complexity O(ϵ^-3/2). In both cases, we maintain the linear speedup achieved by existing methods. This is a substantial improvement over the O(ϵ^-2) IFO complexity of existing approaches. Additionally, our algorithm is general enough to allow non-identical data distributions across workers, as in the recently proposed federated learning paradigm.
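As a rough illustration of the two ingredients the abstract combines, below is a minimal single-machine NumPy sketch of a SPIDER-style gradient estimator run in parallel across workers with periodic averaging of the local iterates (the "restart"). The quadratic objective, step size eta, SPIDER epoch length q, averaging period I, and minibatch size are illustrative assumptions, not the paper's actual algorithmic parameters or the settings under which its guarantees hold.

```python
# Minimal sketch: parallel-restarted SPIDER-style loop (illustrative only;
# the least-squares objective, eta, q, I, and minibatch size are assumptions,
# not the authors' exact choices).
import numpy as np

rng = np.random.default_rng(0)

N, n, d = 4, 64, 10                               # workers, samples per worker, dimension
A = [rng.normal(size=(n, d)) for _ in range(N)]   # synthetic local data
b = [rng.normal(size=n) for _ in range(N)]

def grad(i, x, idx):
    """Stochastic gradient of worker i's local least-squares loss on sample indices idx."""
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

eta, q, I, T = 0.01, 8, 8, 200          # step size, SPIDER epoch, averaging period, iterations
x = [np.zeros(d) for _ in range(N)]     # local iterates
v = [np.zeros(d) for _ in range(N)]     # local SPIDER gradient estimates
x_prev = [xi.copy() for xi in x]

for t in range(T):
    for i in range(N):
        if t % q == 0:
            # Start of a SPIDER epoch: reset the estimator with a full local gradient.
            v[i] = grad(i, x[i], np.arange(n))
        else:
            # SPIDER recursion: correct the previous estimate with a minibatch difference.
            idx = rng.choice(n, size=4, replace=False)
            v[i] = grad(i, x[i], idx) - grad(i, x_prev[i], idx) + v[i]
        x_prev[i] = x[i].copy()
        x[i] = x[i] - eta * v[i]        # local descent step
    if (t + 1) % I == 0:
        # Periodic communication: the server averages the local iterates (the "restart").
        avg = sum(x) / N
        x = [avg.copy() for _ in range(N)]
```

Choosing the averaging period equal to the SPIDER epoch length (I = q above) makes each communication round coincide with a fresh full-gradient restart, which is the intuition behind keeping the communication cost at one round per epoch while the variance-reduced estimator drives down the per-sample gradient cost.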
