Gradient Descent Finds Global Minima for Generalizable Deep Neural Networks of Practical Sizes

08/05/2019
by Kenji Kawaguchi, et al.

In this paper, we theoretically prove that gradient descent can find a global minimum for nonlinear deep neural networks of sizes commonly encountered in practice. The theory developed in this paper requires only the number of trainable parameters to increase linearly as the number of training samples increases. This allows the size of the deep neural networks to be several orders of magnitude smaller than that required by previous theories. Moreover, we prove that this linear rate of growth in network size is optimal and cannot be improved except by a logarithmic factor. Furthermore, deep neural networks with this trainability guarantee are shown to generalize well to unseen test samples on a natural dataset but not on a random dataset.
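The abstract's key quantitative claim is that trainability requires only a number of parameters growing linearly with the number of training samples. The sketch below is a minimal, informal illustration of that setting, not the paper's construction or proof: it trains a one-hidden-layer ReLU network with plain gradient descent, where the hidden width (and hence the parameter count) is chosen to scale roughly linearly with the sample count n. The width rule, target function, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the paper's construction):
# gradient descent on a one-hidden-layer ReLU network whose parameter count
# grows roughly linearly with the number of training samples n.
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 10                        # training samples and input dimension (assumed)
width = max(1, n // d)                # assumed width rule: ~n parameters in the first layer
X = rng.standard_normal((n, d))
y = np.sin(X @ rng.standard_normal(d))  # smooth nonlinear target as a stand-in for "natural" data

W1 = rng.standard_normal((d, width)) / np.sqrt(d)
w2 = rng.standard_normal(width) / np.sqrt(width)

lr = 1e-2
for step in range(2000):
    H = np.maximum(X @ W1, 0.0)       # ReLU hidden features
    pred = H @ w2
    err = pred - y                    # residual for the squared loss
    loss = 0.5 * np.mean(err ** 2)

    # Backpropagate the squared-loss gradient by hand.
    grad_w2 = H.T @ err / n
    grad_H = np.outer(err, w2) * (H > 0)
    grad_W1 = X.T @ grad_H / n

    W1 -= lr * grad_W1
    w2 -= lr * grad_w2

print(f"final training loss: {loss:.4f}")
```

In this toy setup the training loss typically decreases steadily; the paper's contribution is the theoretical guarantee that, under its assumptions, such gradient descent dynamics reach a global minimum when the parameter count scales only linearly with n.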
