Momentum-Based Variance Reduction in Non-Convex SGD

05/24/2019
by Ashok Cutkosky, et al.

Variance reduction has emerged in recent years as a strong competitor to stochastic gradient descent in non-convex problems, providing the first algorithms to improve upon the convergence rate of stochastic gradient descent for finding first-order critical points. However, variance reduction techniques typically require carefully tuned learning rates and the use of excessively large "mega-batches" in order to achieve their improved results. We present a new variance reduction algorithm, STORM, that does not require any batches and makes use of adaptive learning rates, enabling simpler implementation and less tuning of hyperparameters. Our technique for removing the batches uses a variant of momentum to achieve variance reduction in non-convex optimization. On smooth losses F, STORM finds a point x with E[‖∇ F(x)‖] ≤ O(1/√T + σ^(1/3)/T^(1/3)) in T iterations when the stochastic gradients have variance σ^2, matching the optimal rate but without requiring knowledge of σ.
