The critical locus of overparameterized neural networks
Many aspects of the geometry of loss functions in deep learning remain mysterious. In this paper, we work toward a better understanding of the geometry of the loss function L of overparameterized feedforward neural networks. In this setting, we identify several components of the critical locus of L and study their geometric properties. For networks of depth ℓ ≥ 4, we identify a locus of critical points we call the star locus S. Within S we identify a positive-dimensional sublocus C with the property that every p ∈ C is a degenerate critical point, and no existing theoretical result guarantees that gradient descent will not converge to p. For very wide networks, we build on earlier work and show that all critical points of L are degenerate, and we give lower bounds on the number of zero eigenvalues of the Hessian at each critical point. For networks that are both deep and very wide, we compare the growth rates of the zero eigenspaces of the Hessian at all the different families of critical points that we identify. The results in this paper provide a starting point for a more quantitative understanding of the properties of various components of the critical locus of L.
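To make the notion of a degenerate critical point concrete, the sketch below checks a simple example: for a small feedforward tanh network with no biases and squared loss, the all-zeros weight vector is a critical point, and the Hessian there has many zero eigenvalues. The architecture, data, and the choice of the zero-weight point are illustrative assumptions for this sketch only; they are not the paper's star locus S or its construction.

```python
# Minimal sketch: a degenerate critical point of an overparameterized network's loss.
# Assumptions (not from the paper): depth-3 tanh network, no biases, one data point,
# squared loss, and the all-zeros parameter vector as the candidate critical point.
import jax
import jax.numpy as jnp

def forward(params, x):
    """Feedforward network with tanh activations and no biases."""
    h = x
    for W in params[:-1]:
        h = jnp.tanh(W @ h)
    return params[-1] @ h

def loss(flat_params, shapes, x, y):
    """Squared loss as a function of a single flat parameter vector."""
    params, idx = [], 0
    for (m, n) in shapes:
        params.append(flat_params[idx:idx + m * n].reshape(m, n))
        idx += m * n
    return 0.5 * jnp.sum((forward(params, x) - y) ** 2)

# Tiny illustrative problem: input dim 2, two hidden layers of width 3, output dim 1.
shapes = [(3, 2), (3, 3), (1, 3)]
n_params = sum(m * n for m, n in shapes)
x = jnp.array([1.0, -0.5])
y = jnp.array([2.0])

p0 = jnp.zeros(n_params)                      # candidate critical point: all weights zero
grad = jax.grad(loss)(p0, shapes, x, y)       # gradient of the loss at p0
hess = jax.hessian(loss)(p0, shapes, x, y)    # Hessian of the loss at p0
eigs = jnp.linalg.eigvalsh(hess)

print("gradient norm:", float(jnp.linalg.norm(grad)))           # ~0, so p0 is critical
print("near-zero Hessian eigenvalues:",
      int(jnp.sum(jnp.abs(eigs) < 1e-8)), "of", n_params)       # many: degenerate point
```

At this particular point every eigenvalue of the Hessian vanishes, so it is an extreme example of degeneracy; the paper's families of critical points are more structured, but the same criterion (zero eigenvalues of the Hessian) is what "degenerate" refers to throughout.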