A Distributed SGD Algorithm with Global Sketching for Deep Learning Training Acceleration

08/13/2021
by   LingFei Dai, et al.

Distributed training is an effective way to accelerate the training of large-scale deep learning models. However, the parameter exchange and synchronization in distributed stochastic gradient descent (SGD) introduce substantial communication overhead. Gradient compression is an effective way to reduce this overhead, and many Top-k sparsification-based compression methods have been proposed for synchronous SGD. However, centralized methods based on parameter servers suffer from a single point of failure and limited scalability, while decentralized methods with global parameter exchange may reduce the convergence rate of training. In contrast with Top-k based methods, we propose a gradient compression method with global gradient vector sketching, named global-sketching SGD (gs-SGD), which uses a Count-Sketch structure to store the gradients and thereby reduce the loss of accuracy during training. gs-SGD achieves better convergence efficiency on deep learning models and has a communication complexity of O(log d * log P), where d is the number of model parameters and P is the number of workers. We conducted experiments on GPU clusters to verify that our method has better convergence efficiency than global Top-k and sketching-based methods. In addition, gs-SGD achieves 1.3-3.1x higher throughput than gTop-k, and 1.1-1.2x higher throughput than the original Sketched-SGD.
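For context, the Count-Sketch the abstract refers to is a mergeable randomized data structure: each worker hashes its gradient coordinates into a small table of signed counters, the tables from all workers can be summed elementwise, and each coordinate of the aggregated gradient is then estimated by a median over rows. The snippet below is a minimal illustrative sketch of that structure in NumPy, not the authors' gs-SGD implementation; the class name, table sizes, hash seeds, and the final top-k recovery step are assumptions chosen for clarity.

```python
# Minimal Count-Sketch for gradient vectors (illustrative, not the paper's code).
import numpy as np


class CountSketch:
    def __init__(self, rows: int, cols: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Precomputed hash buckets and random signs for coordinates 0..dim-1.
        self.buckets = rng.integers(0, cols, size=(rows, dim))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))
        self.table = np.zeros((rows, cols))

    def accumulate(self, grad: np.ndarray) -> None:
        # Add a dense gradient into the sketch; memory stays O(rows * cols)
        # regardless of how many gradients are accumulated.
        for r in range(self.table.shape[0]):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * grad)

    def merge(self, other: "CountSketch") -> None:
        # Sketches built with identical hashes merge by addition, which is
        # what enables compressed aggregation across workers.
        self.table += other.table

    def estimate(self) -> np.ndarray:
        # Estimate each coordinate as the median over rows of
        # sign * counter in that coordinate's bucket.
        rows = self.table.shape[0]
        est = np.stack([self.signs[r] * self.table[r][self.buckets[r]]
                        for r in range(rows)])
        return np.median(est, axis=0)


# Usage sketch: each worker sketches its local gradient, the sketches are
# summed, and the largest coordinates of the estimate drive the update.
dim, workers = 10_000, 4
true_sum = np.zeros(dim)
merged = None
for w in range(workers):
    g = np.zeros(dim)
    g[np.random.default_rng(w).choice(dim, 50, replace=False)] = 1.0
    true_sum += g
    cs = CountSketch(rows=5, cols=2048, dim=dim, seed=42)  # shared seed => shared hashes
    cs.accumulate(g)
    merged = cs if merged is None else (merged.merge(cs) or merged)

approx = merged.estimate()
top = np.argsort(-np.abs(approx))[:200]  # recover the heavy coordinates
print(np.abs(approx[top] - true_sum[top]).mean())
```

Because only the (rows x cols) table is exchanged rather than the full d-dimensional gradient, the per-worker communication depends on the sketch size rather than on d, which is the property the abstract's O(log d * log P) complexity claim rests on.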
