Faster Perturbed Stochastic Gradient Methods for Finding Local Minima
Escaping from saddle points and finding local minima is a central problem in nonconvex optimization. Perturbed gradient methods are perhaps the simplest approach to this problem. However, to find (ϵ, √(ϵ))-approximate local minima, the best existing stochastic gradient complexity for this type of algorithm is Õ(ϵ^-3.5), which is not optimal. In this paper, we propose Pullback, a faster perturbed stochastic gradient framework for finding local minima. We show that Pullback with stochastic gradient estimators such as SARAH/SPIDER and STORM can find (ϵ, ϵ_H)-approximate local minima within Õ(ϵ^-3 + ϵ_H^-6) stochastic gradient evaluations (or Õ(ϵ^-3) when ϵ_H = √(ϵ)). The core idea of our framework is a step-size “pullback” scheme that controls the average movement of the iterates, which leads to faster convergence to local minima. Experiments on matrix factorization problems corroborate our theory.
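The sketch below is only an illustrative reading of the abstract, not the paper's algorithm: it shows a perturbed stochastic gradient loop in which the step is "pulled back" whenever it would move the iterate farther than a fixed radius, together with occasional isotropic noise injections to help escape saddle points. The estimator is plain minibatch SGD rather than SARAH/SPIDER or STORM, and all names and parameters (eta, radius, sigma, perturb_every) are hypothetical choices.

```python
# Hedged sketch: one plausible interpretation of a step-size "pullback" scheme
# for perturbed stochastic gradient descent. Not the paper's Pullback algorithm.
import numpy as np

def pullback_sgd(grad_fn, x0, eta=0.1, radius=0.05, sigma=1e-3,
                 perturb_every=100, n_iters=1000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for t in range(n_iters):
        v = grad_fn(x, rng)                  # stochastic gradient estimate
        step = eta * v
        norm = np.linalg.norm(step)
        if norm > radius:                    # "pull back" the step so the
            step *= radius / norm            # per-iteration movement is bounded
        x -= step
        if t % perturb_every == 0:           # occasional isotropic perturbation,
            x += sigma * rng.standard_normal(x.shape)  # as in perturbed GD/SGD
    return x

# Toy usage: f(x) = x0^4 - x0^2 + x1^2 has a saddle point at the origin.
f_grad = lambda x, rng: np.array([4 * x[0] ** 3 - 2 * x[0], 2 * x[1]]) \
                        + 0.01 * rng.standard_normal(2)
print(pullback_sgd(f_grad, [0.0, 1.0]))
```

Capping the per-step movement in this way is one simple mechanism for keeping the average iterate movement small, which is the property the abstract attributes to the pullback scheme.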