Low-rank Gradient Approximation For Memory-Efficient On-device Training of Deep Neural Network

01/24/2020
by Mary Gooneratne, et al.

Training machine learning models on mobile devices has the potential to improve both the privacy and the accuracy of those models. However, one of the major obstacles to achieving this goal is the memory limitation of mobile devices. Reducing training memory enables models with high-dimensional weight matrices, such as automatic speech recognition (ASR) models, to be trained on-device. In this paper, we propose approximating the gradient matrices of deep neural networks using a low-rank parameterization as an avenue to save training memory. The low-rank gradient approximation enables more advanced, memory-intensive optimization techniques to be run on device. Our experimental results show that we can reduce the training memory by about 33.0% for Adam optimization. It uses comparable memory to momentum optimization and achieves a 4.5% relative lower word error rate on an ASR personalization task.

