Is Deep Learning an RG Flow?

06/12/2019
by   Ellen de Mello Koch, et al.

Although there has been rapid development of practical applications, theoretical explanations of deep learning are in their infancy. A possible starting point suggests that deep learning performs a sophisticated coarse graining. Coarse graining is the foundation of the renormalization group (RG), which provides a systematic construction of the theory of large scales starting from an underlying microscopic theory. In this way RG can be interpreted as providing a mechanism to explain the emergence of large scale structure, which is directly relevant to deep learning. We pursue the possibility that RG may provide a useful framework for a theoretical explanation of deep learning. A statistical mechanics model for a magnet, the Ising model, is used to train an unsupervised RBM. The patterns generated by the trained RBM are compared to the configurations generated through an RG treatment of the Ising model. We argue that correlation functions between hidden and visible neurons are capable of diagnosing RG-like coarse graining. Numerical experiments show the presence of RG-like patterns in correlators computed using the trained RBMs. The observables we consider are also able to exhibit important differences between RG and deep learning.
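The pipeline described above — sampling Ising configurations, training an unsupervised RBM on them, then computing visible–hidden correlators — can be sketched as follows. This is a minimal illustration, not the authors' code: the lattice size, temperature, hidden-layer width, and CD-1 training are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ising(L=8, T=2.27, n_sweeps=200, n_samples=50):
    """Metropolis sampling of the 2D Ising model on an L x L lattice."""
    s = rng.choice([-1, 1], size=(L, L))
    samples = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # Sum of nearest-neighbour spins (periodic boundaries).
            nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
            dE = 2 * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        if sweep >= n_sweeps - n_samples:
            samples.append(s.copy())
    return np.array(samples)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(v_data, n_hidden=16, epochs=30, lr=0.05):
    """Train a binary RBM with one-step contrastive divergence (CD-1)."""
    n_vis = v_data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    a, b = np.zeros(n_vis), np.zeros(n_hidden)
    n = len(v_data)
    for _ in range(epochs):
        ph = sigmoid(v_data @ W + b)                      # hidden probabilities
        h = (rng.random(ph.shape) < ph).astype(float)     # sampled hidden states
        pv = sigmoid(h @ W.T + a)                         # reconstruction probs
        v1 = (rng.random(pv.shape) < pv).astype(float)    # sampled reconstruction
        ph1 = sigmoid(v1 @ W + b)
        W += lr * (v_data.T @ ph - v1.T @ ph1) / n        # CD-1 weight update
        a += lr * (v_data - v1).mean(axis=0)
        b += lr * (ph - ph1).mean(axis=0)
    return W, a, b

# Train on flattened spin configurations mapped from {-1,+1} to {0,1}.
spins = sample_ising()
v = (spins.reshape(len(spins), -1) + 1) / 2
W, a, b = train_rbm(v)

# Connected visible-hidden correlator: <v_i h_a> - <v_i><h_a>.
ph = sigmoid(v @ W + b)
corr = v.T @ ph / len(v) - np.outer(v.mean(axis=0), ph.mean(axis=0))
print(corr.shape)  # one row per visible spin, one column per hidden neuron
```

In an RG-like coarse graining one would expect each hidden neuron's correlator to be localized on a block of neighbouring visible spins; diagnosing that spatial structure is the kind of comparison the paper performs.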
