On the different regimes of Stochastic Gradient Descent

09/19/2023
by Antonio Sclocchi, et al.

Modern deep networks are trained with stochastic gradient descent (SGD), whose key parameters are the number of data points considered at each step, or batch size B, and the step size, or learning rate η. For small B and large η, SGD corresponds to a stochastic evolution of the parameters, whose noise amplitude is governed by the 'temperature' T ≡ η/B. Yet this description is observed to break down for sufficiently large batches B ≥ B^*, or simplifies to gradient descent (GD) when the temperature is sufficiently small. Understanding where these cross-overs take place remains a central challenge. Here we resolve these questions for a teacher-student perceptron classification model, and show empirically that our key predictions still apply to deep networks. Specifically, we obtain a phase diagram in the B-η plane that separates three dynamical phases: (i) a noise-dominated SGD governed by temperature, (ii) a large-first-step-dominated SGD and (iii) GD. These different phases also correspond to different regimes of generalization error. Remarkably, our analysis reveals that the batch size B^* separating regimes (i) and (ii) scales with the size P of the training set, with an exponent that characterizes the hardness of the classification problem.
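To make the setup concrete, below is a minimal sketch (not the authors' code) of the teacher-student perceptron trained with minibatch SGD, illustrating the roles of batch size B and learning rate η, whose ratio defines the temperature T ≡ η/B. All dimensions, hyperparameters, and the choice of hinge loss are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 100          # input dimension (assumed value, for illustration)
P = 10_000       # training-set size
B = 32           # batch size
eta = 0.1        # learning rate
T = eta / B      # SGD "temperature" governing the noise amplitude

# Teacher: a fixed random unit vector defining the labels.
w_teacher = rng.standard_normal(d)
w_teacher /= np.linalg.norm(w_teacher)

# Training data: Gaussian inputs, labels given by the teacher's sign.
X = rng.standard_normal((P, d))
y = np.sign(X @ w_teacher)

# Student: a perceptron trained with minibatch SGD on the hinge loss.
w = rng.standard_normal(d) * 0.01

for step in range(5_000):
    idx = rng.choice(P, size=B, replace=False)   # sample a minibatch of size B
    Xb, yb = X[idx], y[idx]
    margins = yb * (Xb @ w)
    active = margins < 1.0                       # examples where the hinge loss is active
    grad = -(yb[active, None] * Xb[active]).sum(axis=0) / B
    w -= eta * grad                              # SGD step of size eta

# Generalization error for a perceptron with Gaussian inputs:
# the fraction of disagreement equals the teacher-student angle over pi.
overlap = (w @ w_teacher) / (np.linalg.norm(w) * np.linalg.norm(w_teacher))
test_error = np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi
print(f"T = eta/B = {T:.4f}, estimated test error = {test_error:.3f}")
```

Sweeping B and η in such a sketch is one way to probe the noise-dominated regime (where results depend on B and η only through T) against the large-batch and GD-like regimes described above.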
