Simple and optimal high-probability bounds for strongly-convex stochastic gradient descent

09/02/2019
by   Nicholas J. A. Harvey, et al.

We consider stochastic gradient descent algorithms for minimizing a non-smooth, strongly-convex function. Several forms of this algorithm, including suffix averaging, are known to achieve the optimal O(1/T) convergence rate in expectation. We study a simple, non-uniform averaging strategy of Lacoste-Julien et al. (2011) and prove that it achieves the optimal O(1/T) convergence rate with high probability. Our proof uses a recently developed generalization of Freedman's inequality. Finally, we compare several of these algorithms experimentally and show that this non-uniform averaging strategy outperforms many standard techniques, and does so with smaller variance.
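To make the averaging strategy concrete, here is a minimal sketch of SGD with step size 2/(mu*(t+1)) and iterate weights proportional to t, the recipe commonly attributed to Lacoste-Julien et al. The objective, data, and the names `weighted_avg_sgd` and `subgrad` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_avg_sgd(subgrad, x0, mu, T, rng):
    """SGD with step size 2/(mu*(t+1)) and a running average that
    weights iterate x_t proportionally to t (non-uniform averaging sketch).

    `subgrad(x, rng)` must return a stochastic subgradient at x.
    """
    x = x0.astype(float)
    x_bar = np.zeros_like(x)
    weight_sum = 0.0
    for t in range(1, T + 1):
        g = subgrad(x, rng)
        x = x - (2.0 / (mu * (t + 1))) * g
        # incremental weighted mean: iterate x_t receives weight t
        weight_sum += t
        x_bar += (t / weight_sum) * (x - x_bar)
    return x_bar

# Illustrative strongly-convex, non-smooth problem (an assumption):
# minimize f(x) = (mu/2)*||x||^2 + E|a^T x - b| using noisy subgradients.
rng = np.random.default_rng(0)
d, mu = 5, 0.5
A = rng.normal(size=(1000, d))
b = A @ np.ones(d) + 0.1 * rng.normal(size=1000)

def subgrad(x, rng):
    i = rng.integers(len(b))
    r = A[i] @ x - b[i]
    return mu * x + np.sign(r) * A[i]

x_hat = weighted_avg_sgd(subgrad, np.zeros(d), mu, T=10000, rng=rng)
print(x_hat)
```

The running weighted average avoids storing all iterates: after step t, `x_bar` equals the t-weighted mean of x_1, ..., x_t, which matches the non-uniform averaging discussed in the abstract up to indexing conventions.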

