Mean-field theory of input dimensionality reduction in unsupervised deep neural networks
Deep neural networks are powerful tools used widely across many domains. However, the nature of the computation performed in each layer of a deep network remains poorly understood, making it important to improve the interpretability of deep neural networks. Here, we construct a mean-field framework to understand how compact representations develop across layers, not only in deterministic random deep networks but also in generative deep networks whose parameters are learned from input data. Our theory shows that the deep computation implements dimensionality reduction while maintaining a finite level of weak correlations between neurons for possible feature extraction. This work paves the way toward understanding how a sensory hierarchy works in general.
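As a rough numerical illustration of the phenomenon described above, and not the paper's actual mean-field formalism, the sketch below propagates correlated Gaussian inputs through a deterministic random tanh network and tracks an effective dimensionality per layer. The participation ratio of the covariance eigenvalues used as the dimensionality measure, the tanh nonlinearity, the gain value, and all sizes are illustrative assumptions.

```python
# A minimal sketch, NOT the paper's method: it only illustrates the
# qualitative claim that effective dimensionality can shrink across the
# layers of a deterministic random deep network. All parameters below
# (width, depth, sample count, gain, nonlinearity) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N, L, P, K = 500, 8, 2000, 20   # width, depth, samples, input latent factors
g = 0.9                          # gain below the order-to-chaos transition

def participation_ratio(X):
    """Effective dimensionality D = (sum lam)^2 / sum(lam^2), where lam are
    the eigenvalues of the neuron-neuron covariance of X (samples x neurons)."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Correlated inputs: K shared latent factors mixed into N channels plus noise.
factors = rng.standard_normal((P, K))
mixing = rng.standard_normal((K, N)) / np.sqrt(K)
X = factors @ mixing + 0.5 * rng.standard_normal((P, N))

print(f"layer 0: D = {participation_ratio(X):6.1f}")
for layer in range(1, L + 1):
    W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # i.i.d. random weights
    X = np.tanh(X @ W)                                # deterministic layer
    print(f"layer {layer}: D = {participation_ratio(X):6.1f}")
```

With the contractive gain chosen here (g < 1), the printed dimensionality typically decreases monotonically with depth; in the chaotic regime (g > 1) the behavior can differ, and characterizing such regimes precisely is what a mean-field treatment is for.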