Information Geometry of Orthogonal Initializations and Training

10/09/2018
by Piotr A. Sokol, et al.

Recently, mean field theory has been used successfully to analyze the properties of wide, random neural networks. It has given rise to a prescriptive theory for initializing neural networks, which ensures that the ℓ_2 norm of the backpropagated gradients is bounded and that training is orders of magnitude faster. Despite the strong empirical performance of this class of initializations, the mechanisms by which they confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness), as measured by the Fisher information matrix, and the maximum singular value of the input-output Jacobian. Our theory partially explains why more isometric neural networks train much faster. Furthermore, we experimentally investigate the benefits of maintaining orthogonality throughout training, from which we conclude that manifold-constrained optimization of the weights performs better regardless of the smoothness of the gradients. Finally, we show that critical orthogonal initializations do not trivially give rise to a mean field limit of the pre-activations for each layer.
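The quantities the abstract relates can be illustrated with a minimal sketch: orthogonally initializing a deep network and inspecting the singular values of its input-output Jacobian, whose maximum the paper connects to the curvature measured by the Fisher information matrix. This PyTorch snippet is not the authors' code; the depth, width, nonlinearity, and use of a plain fully connected tanh network are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

depth, width = 10, 256

# Deep tanh network whose weight matrices are orthogonally initialized,
# i.e. each linear map is an isometry at initialization.
layers = []
for _ in range(depth):
    lin = nn.Linear(width, width, bias=False)
    nn.init.orthogonal_(lin.weight)
    layers += [lin, nn.Tanh()]
net = nn.Sequential(*layers)

# Input-output Jacobian at a random input and its singular value spectrum.
x = torch.randn(width)
J = torch.autograd.functional.jacobian(net, x)  # shape: (width, width)
svals = torch.linalg.svdvals(J)
print(f"max singular value: {svals.max():.3f}, min: {svals.min():.3f}")
```

Replacing `nn.init.orthogonal_` with a standard Gaussian initialization in this sketch typically spreads the singular values over a much wider range as depth grows, which is the kind of loss of isometry the paper argues slows training.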
