Implicit Regularization in Deep Learning: A View from Function Space

08/03/2020
by Aristide Baratin et al.

We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a possible regularization effect induced by a dynamical alignment of the neural tangent features, introduced by Jacot et al. (2018), along a small number of task-relevant directions. By extrapolating a new analysis of Rademacher complexity bounds in linear models, we propose and study a new heuristic complexity measure for neural networks which captures this phenomenon, in terms of sequences of tangent kernel classes along the learning trajectories.
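The alignment effect described in the abstract can be made concrete with a small numerical sketch. The following is a minimal, hypothetical JAX example, not the paper's actual experimental setup: it computes the tangent features ∇_θ f(x) of a toy MLP, forms the empirical tangent kernel K = ΦΦᵀ, and measures the alignment of K with the rank-one label kernel yyᵀ. The network architecture, data, and function names here are all illustrative assumptions; the quantity to watch, per the paper's claim, is how this alignment evolves along the training trajectory.

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 16, 1)):
    # Small random MLP; sizes are layer widths (hypothetical toy setup).
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze()  # scalar output for a single input

def tangent_features(params, X):
    # Per-example gradient of the output w.r.t. the parameters:
    # row i is the tangent feature vector of input X[i].
    grads = jax.vmap(lambda x: jax.grad(mlp)(params, x))(X)
    leaves = jax.tree_util.tree_leaves(grads)
    return jnp.concatenate([g.reshape(X.shape[0], -1) for g in leaves], axis=1)

def alignment(K, y):
    # Alignment between K and the rank-one target kernel y y^T:
    # <K, y y^T>_F / (||K||_F * ||y y^T||_F).
    return (y @ K @ y) / (jnp.linalg.norm(K) * (y @ y))

key = jax.random.PRNGKey(0)
params = init_params(key)
X = jax.random.normal(key, (32, 2))
y = jnp.sign(X[:, 0] * X[:, 1])      # hypothetical XOR-like labels
Phi = tangent_features(params, X)    # tangent features at current params
K = Phi @ Phi.T                      # empirical neural tangent kernel
print(alignment(K, y))               # track this along the learning trajectory
```

Recomputing the alignment at checkpoints during training would show, under the paper's hypothesis, the tangent kernel's top directions rotating toward the task-relevant direction yyᵀ.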

