The Representation Power of Neural Networks: Breaking the Curse of Dimensionality

12/10/2020
by Moise Blanchard, et al.

In this paper, we analyze the number of neurons and training parameters that a neural network needs to approximate multivariate functions of bounded second mixed derivatives – Korobov functions. We prove upper bounds on these quantities for shallow and deep neural networks, breaking the curse of dimensionality. Our bounds hold for general activation functions, including ReLU. We further prove that these bounds nearly match the minimal number of parameters any continuous function approximator needs to approximate Korobov functions, showing that neural networks are near-optimal function approximators.
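For reference, a common way to define the Korobov space of functions with bounded second mixed derivatives on the unit cube is as follows (this is a standard convention; the paper's exact definition, e.g. regarding boundary conditions or the choice of norm, may differ):

X^{2,\infty}([0,1]^d) = \{ f \in L^\infty([0,1]^d) : \partial^{k_1}_{x_1} \cdots \partial^{k_d}_{x_d} f \in L^\infty([0,1]^d) \text{ for all } k \text{ with } k_i \le 2 \}

In words, every mixed partial derivative of order at most two in each coordinate must remain bounded, which is the regularity assumption under which the stated approximation bounds avoid an exponential dependence on the dimension d.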
