Statistical Optimality of Deep Wide Neural Networks

05/04/2023
by Yicheng Li, et al.

In this paper, we consider the generalization ability of deep wide feedforward ReLU neural networks defined on a bounded domain 𝒳 ⊂ ℝ^d. We first demonstrate that the generalization ability of the neural network can be fully characterized by that of the corresponding deep neural tangent kernel (NTK) regression. We then investigate the spectral properties of the deep NTK and show that the deep NTK is positive definite on 𝒳 and that its eigenvalue decay rate (EDR) is (d+1)/d. Thanks to the well-established theory of kernel regression, we then conclude that multilayer wide neural networks trained by gradient descent with proper early stopping achieve the minimax rate, provided that the regression function lies in the reproducing kernel Hilbert space (RKHS) associated with the corresponding NTK. Finally, we illustrate that overfitted multilayer wide neural networks cannot generalize well on 𝕊^d.
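As a concrete illustration of the deep NTK referenced above, the following is a minimal sketch of the standard fully-connected ReLU NTK recursion (in the style of Jacot et al., 2018, using the arc-cosine closed forms for ReLU); the function name relu_ntk and the normalization are illustrative assumptions, not code or notation from the paper.

import numpy as np

def relu_ntk(x, y, depth):
    # Deep NTK between two inputs for a fully-connected ReLU network,
    # under the standard recursion (a sketch; names are my own):
    #   Sigma^(0)(x, y) = x . y
    #   Sigma^(h) = norm * kappa1(u),  Sigma_dot^(h) = kappa0(u)
    #   Theta^(h) = Theta^(h-1) * Sigma_dot^(h) + Sigma^(h)
    # with kappa0(u) = (pi - arccos u)/pi and
    #      kappa1(u) = (u (pi - arccos u) + sqrt(1 - u^2))/pi.
    sxx, syy, sxy = x @ x, y @ y, x @ y   # layer-0 covariances
    txy = sxy                             # Theta^(0)
    for _ in range(depth):
        norm = np.sqrt(sxx * syy)
        u = np.clip(sxy / norm, -1.0, 1.0)
        k0 = (np.pi - np.arccos(u)) / np.pi
        k1 = (u * (np.pi - np.arccos(u)) + np.sqrt(max(1.0 - u * u, 0.0))) / np.pi
        sxy = norm * k1                   # Sigma^(h)(x, y)
        # kappa1(1) = 1, so the diagonal covariances sxx, syy are unchanged.
        txy = txy * k0 + sxy              # Theta^(h)
    return txy

x = np.array([0.6, 0.8])
y = np.array([1.0, 0.0])
print(relu_ntk(x, y, depth=3))

For inputs restricted to the unit sphere, the kernel reduces to a function of the inner product x . y alone, which is what makes its spectral analysis tractable.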
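To make the link between the EDR and the minimax rate concrete, here is a short worked computation under the standard convention that an EDR of β means λ_k ≍ k^{-β}; the displayed rate follows from classical results on kernel regression with polynomial eigendecay and is offered as an illustration of the abstract's claim, not a statement taken from the paper.

\[
  \lambda_k \asymp k^{-\beta}, \quad \beta = \frac{d+1}{d}
  \quad\Longrightarrow\quad
  \inf_{\hat f}\; \sup_{\|f^*\|_{\mathcal H} \le 1}
  \mathbb{E}\,\bigl\|\hat f - f^*\bigr\|_{L^2}^2
  \;\asymp\; n^{-\frac{\beta}{\beta+1}}
  \;=\; n^{-\frac{d+1}{2d+1}}.
\]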
