Entropy Maximization with Depth: A Variational Principle for Random Neural Networks

05/25/2022
by   Amir Joudaki, et al.

To understand the essential role of depth in neural networks, we investigate a variational principle for depth: Does increasing depth implicitly optimize the representations computed by a neural network? We prove that random neural networks equipped with batch normalization maximize the differential entropy of their representations with depth, up to constant factors, assuming that the representations are contractive. Thus, at initialization, in the absence of any information about the learning task, representations inherently obey the principle of maximum entropy. Our variational formulation characterizes the interplay between representation entropy and architectural components, including depth, width, and non-linear activations, and may thereby inform the design of neural architectures.
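The following minimal sketch (not the authors' code) illustrates the trend the abstract describes: Gaussian inputs are pushed through a random MLP with batch normalization, and at each layer a Gaussian proxy for the differential entropy of the representations, 0.5*logdet(2*pi*e*Cov), is printed. The proxy, the width, depth, and tanh non-linearity are all illustrative assumptions, not quantities fixed by the paper.

```python
# Sketch: entropy-like proxy of batch-normalized representations vs. depth.
# The Gaussian log-det bound is only a stand-in for true differential entropy.
import numpy as np

rng = np.random.default_rng(0)
n, d, depth = 4096, 64, 20          # batch size, width, number of layers (assumed)

def batch_norm(h, eps=1e-5):
    # Normalize each coordinate to zero mean and unit variance over the batch.
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

h = rng.standard_normal((n, d))      # Gaussian inputs
for layer in range(1, depth + 1):
    W = rng.standard_normal((d, d)) / np.sqrt(d)   # random Gaussian weights
    h = np.tanh(batch_norm(h @ W))                 # linear -> batch norm -> non-linearity
    cov = np.cov(h, rowvar=False)
    _, logdet = np.linalg.slogdet(cov + 1e-8 * np.eye(d))
    proxy = 0.5 * (logdet + d * np.log(2 * np.pi * np.e))
    print(f"layer {layer:2d}: Gaussian entropy proxy = {proxy:.2f}")
```

Running the sketch prints the proxy per layer; under the assumptions above it tends to rise and stabilize with depth, which is the qualitative behaviour the variational principle predicts for batch-normalized random networks.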
