ReLU nets adapt to intrinsic dimensionality beyond the target domain
We study the approximation of two-layer compositions f(x) = g(ϕ(x)) by deep ReLU networks, where ϕ is a nonlinear, geometrically intuitive, and dimensionality-reducing feature map. We focus on two complementary choices of ϕ that are intuitive and appear frequently in the statistical literature. The resulting approximation rates are near optimal and adapt to intrinsic notions of complexity, significantly extending a series of recent works on approximating targets over low-dimensional manifolds. Specifically, we show that ReLU nets can express functions that are invariant to the input up to an orthogonal projection onto a low-dimensional manifold with the same efficiency as if the target domain were the manifold itself. This implies that approximation by ReLU nets is faithful to an intrinsic dimensionality governed by the target f itself, rather than by the dimensionality of the approximation domain. As an application of our approximation bounds, we study empirical risk minimization over a space of sparsity-constrained ReLU nets under the assumption that the conditional expectation satisfies one of the proposed models. We show near-optimal estimation guarantees for regression and classification problems for which, to the best of our knowledge, no efficient estimator had been developed so far.
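The sketch below is an illustrative toy setup, not the paper's construction: it generates data from a composition f(x) = g(ϕ(x)) where ϕ is an orthogonal projection onto a low-dimensional subspace (a special case of the projection-invariance model described above), and fits a plain ReLU network by empirical risk minimization. The dimensions, the choice of g, the network width, and the training loop are all assumptions made for illustration; in particular, no sparsity constraint of the kind analyzed in the paper is enforced.

```python
# Minimal sketch (assumed, not from the paper): data from f(x) = g(phi(x))
# with phi an orthogonal projection onto a d-dimensional subspace of R^D,
# followed by ERM with a small ReLU network.
import numpy as np
import torch
import torch.nn as nn

D, d, n = 50, 3, 2000                 # ambient dim, intrinsic dim, sample size (illustrative)
rng = np.random.default_rng(0)

# phi: orthogonal projection onto a random d-dimensional subspace
Q, _ = np.linalg.qr(rng.standard_normal((D, d)))   # columns form an orthonormal basis
phi = lambda x: x @ Q                              # maps R^D -> R^d

# g: a smooth link function on the low-dimensional features (illustrative choice)
g = lambda z: np.sin(z).sum(axis=1)

X = rng.standard_normal((n, D))
y = g(phi(X)) + 0.1 * rng.standard_normal(n)       # noisy regression targets

# ReLU net trained by (unconstrained) empirical risk minimization
net = nn.Sequential(nn.Linear(D, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
Xt = torch.tensor(X, dtype=torch.float32)
yt = torch.tensor(y, dtype=torch.float32)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(Xt).squeeze(-1), yt)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```

Because y depends on x only through the d-dimensional projection ϕ(x), the effective complexity of the target is governed by d rather than D, which is the regime in which the paper's rates apply.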