Towards Unifying Neural Architecture Space Exploration and Generalization

10/02/2019
by Kartikeya Bhardwaj, et al.

In this paper, we address a fundamental research question of significant practical interest: can certain theoretical characteristics of CNN architectures indicate a priori (i.e., without training) which models with highly different numbers of parameters and layers achieve similar generalization performance? To answer this question, we model CNNs from a network science perspective and introduce a new, theoretically grounded, architecture-level metric called NN-Mass. We also integrate, for the first time, the PAC-Bayes theory of generalization with small-world networks to discover new synergies among our proposed NN-Mass metric, architecture characteristics, and model generalization. With experiments on real datasets such as CIFAR-10/100, we provide extensive empirical evidence for our theoretical findings. Finally, we exploit these new insights for model compression and achieve up to 3x fewer parameters and FLOPS, while losing minimal accuracy (e.g., 96.82% accuracy on the CIFAR-10 dataset).
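The abstract's central object is NN-Mass, a training-free, architecture-level metric obtained by viewing a CNN as a graph and measuring how densely its layers are linked by skip connections. As a rough illustration only, the sketch below computes a hypothetical width-weighted skip-connection density over cells; the `Cell` structure, the `possible_skips` counting rule, the weighting, and all names are our assumptions for illustration, and the precise NN-Mass definition is given in the full paper.

```python
# Hypothetical sketch of a training-free, architecture-level metric in the
# spirit of NN-Mass. The exact definition in the paper differs in details;
# here we assume the metric aggregates, per cell, the density of skip
# connections weighted by the cell's width and depth.

from dataclasses import dataclass

@dataclass
class Cell:
    depth: int        # number of layers in the cell
    width: int        # channels per layer
    skip_links: int   # number of skip connections actually present

def possible_skips(depth: int) -> int:
    # Layer pairs (i, j) with j > i + 1 could host a (long-range) skip link;
    # adjacent pairs are excluded since they are ordinary forward connections.
    return max(depth * (depth - 1) // 2 - (depth - 1), 0)

def cell_density(cell: Cell) -> float:
    # Fraction of possible skip connections that are actually wired.
    total = possible_skips(cell.depth)
    return cell.skip_links / total if total > 0 else 0.0

def nn_mass_like(cells: list[Cell]) -> float:
    # Width- and depth-weighted sum of per-cell skip densities: a purely
    # structural quantity, computable before any training.
    return sum(cell_density(c) * c.width * c.depth for c in cells)

if __name__ == "__main__":
    # Two architectures with different parameter counts and depths can land
    # on comparable values of this metric, which is the kind of comparison
    # the paper relates to generalization.
    arch_a = [Cell(depth=10, width=16, skip_links=20),
              Cell(depth=10, width=32, skip_links=20)]
    arch_b = [Cell(depth=20, width=8, skip_links=85),
              Cell(depth=20, width=16, skip_links=85)]
    print(nn_mass_like(arch_a), nn_mass_like(arch_b))
```

The usefulness of such a metric, per the abstract, is that networks with very different depths and parameter counts but similar metric values can be expected a priori to reach similar generalization performance, which is what motivates its use for model compression.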
