Why Unsupervised Deep Networks Generalize

12/07/2020
by Anita de Mello Koch, et al.

Promising resolutions of the generalization puzzle observe that the actual number of parameters in a deep network is much smaller than naive estimates suggest. The renormalization group is a compelling example of a problem with very few effective parameters, even though naive estimates suggest otherwise. Our central hypothesis is that the mechanisms behind the renormalization group are also at work in deep learning, and that this leads to a resolution of the generalization puzzle. We present detailed quantitative evidence supporting this hypothesis for a restricted Boltzmann machine (RBM), demonstrating that the trained RBM discards high-momentum modes. Focusing mainly on autoencoders, we give an algorithm that determines the network's parameters directly from the training data set. The resulting autoencoder performs almost as well as one trained by deep learning, and it provides an excellent initial condition for training, reducing training times by a factor between 4 and 100 in the experiments we considered. Finally, we suggest a simple criterion to decide whether a given problem can or cannot be solved using a deep network.
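
The abstract does not spell out the data-driven construction, so the sketch below only illustrates the general idea: fixing a linear autoencoder's weights directly from the training data by keeping the top singular directions and discarding the rest, in the spirit of the RG analogy of keeping low-momentum modes. The truncated-SVD construction and all names here are assumptions for illustration, not the paper's actual algorithm.

# Minimal sketch, assuming the data-driven initialization resembles a
# truncated-SVD (PCA-like) construction; names are hypothetical.
import numpy as np

def svd_autoencoder_init(X, n_hidden):
    """Build linear autoencoder weights directly from the training data
    X (n_samples x n_features), keeping the top n_hidden singular
    directions and discarding the rest."""
    mu = X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W_enc = Vt[:n_hidden]                    # encoder: project onto top modes
    W_dec = W_enc.T                          # decoder: transpose reconstructs
    return W_enc, W_dec, mu

def reconstruct(X, W_enc, W_dec, mu):
    """Encode then decode: X_hat = (X - mu) W_enc^T W_dec^T + mu."""
    return (X - mu) @ W_enc.T @ W_dec.T + mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64))
    W_enc, W_dec, mu = svd_autoencoder_init(X, n_hidden=8)
    err = np.mean((X - reconstruct(X, W_enc, W_dec, mu)) ** 2)
    print(f"reconstruction MSE with data-derived weights: {err:.4f}")

Weights obtained this way can either be used directly or, as the abstract describes, serve as an initial condition for gradient-based training in place of a random draw.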
