Decentralized Stochastic First-Order Methods for Large-scale Machine Learning

07/23/2019
by Ran Xin, et al.

Decentralized consensus-based optimization is a general computational framework in which a network of nodes cooperatively minimizes a sum of locally available cost functions using only local computation and communication. In this article, we survey recent advances in this topic, focusing in particular on decentralized, consensus-based, first-order gradient methods for large-scale stochastic optimization. This class of consensus-based stochastic optimization algorithms is communication-efficient, able to exploit data parallelism, robust in random and adversarial environments, and simple to implement, thus providing scalable solutions to a wide range of large-scale machine learning problems. We review state-of-the-art decentralized stochastic optimization formulations and several variants of consensus-based procedures, and demonstrate how to obtain decentralized counterparts of centralized stochastic first-order methods. We provide several intuitive illustrations of the main technical ideas as well as applications of the algorithms in the context of decentralized training of machine learning models.
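To make the consensus-based template concrete, below is a minimal NumPy sketch of decentralized stochastic gradient descent (DSGD) on a toy least-squares problem, where each node mixes its iterate with its neighbors' iterates through a doubly stochastic matrix W and then takes a local stochastic gradient step. The ring topology, mixing weights, step size, and data in this sketch are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of decentralized stochastic gradient descent (DSGD).
# Each node i holds local data (A_i, b_i) and helps minimize the global
# least-squares cost (1/n) * sum_i ||A_i x - b_i||^2 using only local
# stochastic gradients and averaging with its neighbors via a doubly
# stochastic mixing matrix W. Problem sizes, the ring topology, and the
# step size below are placeholder assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, samples_per_node = 8, 5, 50

# Local data: node i observes (A_i, b_i) generated from a common model.
x_true = rng.normal(size=dim)
A = [rng.normal(size=(samples_per_node, dim)) for _ in range(n_nodes)]
b = [A_i @ x_true + 0.1 * rng.normal(size=samples_per_node) for A_i in A]

# Doubly stochastic mixing matrix for a ring graph.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

def local_stochastic_grad(i, x, batch=5):
    """Stochastic gradient of node i's local cost on a random mini-batch."""
    idx = rng.choice(samples_per_node, size=batch, replace=False)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ x - bi) / batch

# DSGD update: x_i <- sum_j W_ij x_j - alpha * g_i(x_i), run in parallel
# at every node; row i of X is node i's current iterate.
alpha = 0.02
X = np.zeros((n_nodes, dim))
for k in range(500):
    G = np.stack([local_stochastic_grad(i, X[i]) for i in range(n_nodes)])
    X = W @ X - alpha * G

print("consensus error :", np.linalg.norm(X - X.mean(axis=0)))
print("optimality error:", np.linalg.norm(X.mean(axis=0) - x_true))
```

In this sketch, the mixing step drives the nodes' iterates toward agreement while the local gradient steps drive the network average toward a minimizer of the sum of local costs; the surveyed methods refine this basic recipe in various ways.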

