The necessity of depth for artificial neural networks to approximate certain classes of smooth and bounded functions without the curse of dimensionality

01/19/2023
by Lukas Gonon, et al.

In this article we study the high-dimensional approximation capacities of shallow and deep artificial neural networks (ANNs) with the rectified linear unit (ReLU) activation. In particular, a key contribution of this work is to reveal that for all a, b ∈ ℝ with b - a ≥ 7 the functions [a,b]^d ∋ x = (x_1, …, x_d) ↦ ∏_{i=1}^d x_i ∈ ℝ for d ∈ ℕ, as well as the functions [a,b]^d ∋ x = (x_1, …, x_d) ↦ sin(∏_{i=1}^d x_i) ∈ ℝ for d ∈ ℕ, can neither be approximated without the curse of dimensionality by shallow ANNs nor by insufficiently deep ANNs with ReLU activation, but can be approximated without the curse of dimensionality by sufficiently deep ANNs with ReLU activation. We show that the product functions and the sine of the product functions are polynomially tractable approximation problems within the approximating class of deep ReLU ANNs whose number of hidden layers is allowed to grow in the dimension d ∈ ℕ. We establish the statements outlined above not only for the product functions and the sine of the product functions but also for other classes of target functions, in particular for classes of uniformly globally bounded C^∞-functions with compact support on any [a,b]^d with a ∈ ℝ, b ∈ (a, ∞). Roughly speaking, this work reveals that simple approximation problems such as approximating the sine or cosine of products cannot be solved in standard implementation frameworks by shallow or insufficiently deep ANNs with ReLU activation in polynomial time, but can be approximated by sufficiently deep ReLU ANNs with a number of parameters growing at most polynomially.
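To make the setting concrete, the following is a minimal sketch (not the paper's construction or proof) of the two families of target functions and of a generic fully connected ReLU network whose depth grows with the input dimension d. The class name DeepReLUNet and the particular width and depth choices are illustrative assumptions; the abstract only asserts the existence of sufficiently deep ReLU ANNs whose parameter count grows at most polynomially in d.

```python
import numpy as np

def product_target(x):
    """f_d(x) = prod_{i=1}^d x_i for x in [a, b]^d."""
    return np.prod(x, axis=-1)

def sine_of_product_target(x):
    """g_d(x) = sin(prod_{i=1}^d x_i)."""
    return np.sin(np.prod(x, axis=-1))

def relu(z):
    return np.maximum(z, 0.0)

class DeepReLUNet:
    """Fully connected ReLU network with d-dependent depth (illustrative only).

    The widths and depth below are assumptions for this sketch; the paper does
    not prescribe this architecture, it only requires that depth may grow with d
    while the total number of parameters stays polynomial in d.
    """

    def __init__(self, d, width=None, depth=None, rng=None):
        rng = rng or np.random.default_rng(0)
        width = width or 4 * d            # assumed polynomial width
        depth = depth or d                # number of hidden layers grows with d
        sizes = [d] + [width] * depth + [1]
        self.weights = [rng.standard_normal((m, n)) / np.sqrt(n)
                        for n, m in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(m) for m in sizes[1:]]

    def __call__(self, x):
        h = x
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = relu(h @ W.T + b)
        W, b = self.weights[-1], self.biases[-1]
        return (h @ W.T + b).squeeze(-1)

    def num_parameters(self):
        return sum(W.size + b.size for W, b in zip(self.weights, self.biases))

if __name__ == "__main__":
    for d in (2, 4, 8, 16):
        net = DeepReLUNet(d)
        x = np.random.default_rng(1).uniform(-1.0, 1.0, size=(5, d))
        _ = net(x)  # forward pass works for any dimension d
        # With width ~ 4d and depth ~ d, the parameter count is O(d^3),
        # i.e. polynomial in d (in contrast to the exponential growth that
        # the curse of dimensionality would entail).
        print(d, net.num_parameters())
```

The parameter count printed above grows only polynomially in d, which is the quantitative notion of tractability referred to in the abstract; the theorem itself concerns the existence of parameter choices achieving a prescribed approximation accuracy, not the random initialization used here.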
