Don't Use Large Mini-Batches, Use Local SGD

08/22/2018
by Tao Lin, et al.

Mini-batch stochastic gradient methods are the current state of the art for large-scale distributed training of neural networks and other machine learning models. However, they fail to adapt to a changing communication vs. computation trade-off in a system, such as when scaling to a large number of workers or devices. Moreover, the fixed communication bandwidth required for gradient exchange severely limits scalability to multi-node training, e.g. in datacenters, and even more so for training on decentralized networks such as mobile devices. We argue that variants of local SGD, which perform several update steps on a local model before communicating with other nodes, offer significantly improved overall performance and communication efficiency, as well as adaptivity to the underlying system resources. Furthermore, we present a new hierarchical extension of local SGD, and demonstrate that it can efficiently adapt to several levels of computation costs in a heterogeneous distributed system.
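The core pattern the abstract describes is simple: each worker takes several purely local optimizer steps, then the workers synchronize by averaging their model parameters, so communication happens once per round rather than once per gradient step. The sketch below is a minimal illustration of that pattern, not the paper's implementation; the local step count H, the `local_sgd_round` helper, and the use of PyTorch's `torch.distributed` all-reduce are assumptions for the example, and a distributed process group is assumed to be already initialized.

```python
import torch
import torch.distributed as dist

def local_sgd_round(model, optimizer, loss_fn, local_batches, H=8):
    """Run H local SGD steps on this worker, then average parameters across workers."""
    # Local phase: plain SGD updates on the worker's own data, no communication.
    for _, (x, y) in zip(range(H), local_batches):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

    # Communication phase: one parameter averaging per round instead of one
    # gradient exchange per mini-batch step.
    world_size = dist.get_world_size()
    with torch.no_grad():
        for p in model.parameters():
            dist.all_reduce(p.data, op=dist.ReduceOp.SUM)
            p.data.div_(world_size)
```

With H = 1 this reduces to ordinary synchronous mini-batch SGD; larger H trades communication for extra local computation, which is the adaptivity to system resources the abstract argues for.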
