High Probability Convergence of Stochastic Gradient Methods

02/28/2023
by Zijian Liu, et al.

In this work, we describe a generic approach to show convergence with high probability for both stochastic convex and non-convex optimization with sub-Gaussian noise. In previous works on convex optimization, the convergence is either shown only in expectation or the bound depends on the diameter of the domain. Instead, we show high probability convergence with bounds depending on the initial distance to the optimal solution. The algorithms use step sizes analogous to the standard settings and apply universally to Lipschitz functions, smooth functions, and their linear combinations. The same approach also applies to the non-convex case. For SGD, we demonstrate an O((1 + σ²log(1/δ))/T + σ/√T) convergence rate when the number of iterations T is known and an O((1 + σ²log(T/δ))/√T) convergence rate when T is unknown, where 1 − δ is the desired success probability. These bounds improve over existing bounds in the literature. Additionally, we demonstrate that our techniques can be used to obtain a high probability bound for AdaGrad-Norm (Ward et al., 2019) that removes the bounded-gradients assumption made in previous works. Furthermore, our technique for AdaGrad-Norm extends to the standard per-coordinate AdaGrad algorithm (Duchi et al., 2011), providing the first noise-adapted high probability convergence guarantee for AdaGrad.
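For readers unfamiliar with AdaGrad-Norm, the sketch below shows the update rule analyzed in Ward et al. (2019): a single scalar step size that shrinks with the accumulated squared gradient norms. It is only a minimal illustration of the algorithm being analyzed, not the paper's analysis; the function names, hyperparameters (eta, b0), and the toy noisy quadratic objective are assumptions for demonstration.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, T=1000, rng=None):
    """AdaGrad-Norm sketch (Ward et al., 2019).

    grad_fn(x, rng) returns a stochastic gradient estimate at x.
    eta and b0 are illustrative defaults, not values from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2
    for _ in range(T):
        g = grad_fn(x, rng)
        b_sq += np.dot(g, g)                # accumulate squared gradient norms
        x = x - (eta / np.sqrt(b_sq)) * g   # scalar adaptive step size
    return x

# Toy usage: minimize f(x) = 0.5 * ||x||^2 with additive Gaussian gradient
# noise, matching the sub-Gaussian noise setting of the abstract.
if __name__ == "__main__":
    noisy_grad = lambda x, rng: x + 0.1 * rng.standard_normal(x.shape)
    x_final = adagrad_norm(noisy_grad, x0=np.ones(10), T=5000)
    print("final distance to optimum:", np.linalg.norm(x_final))
```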
