Collocation approximation by deep neural ReLU networks for parametric elliptic PDEs with lognormal inputs

11/10/2021
by Dinh Dũng, et al.

We obtained convergence rates for the collocation approximation by deep ReLU neural networks of the solution u to elliptic PDEs with lognormal inputs, parametrized by y from the non-compact set ℝ^∞. The approximation error is measured in the norm of the Bochner space L_2(ℝ^∞, V, γ), where γ is the infinite tensor-product standard Gaussian probability measure on ℝ^∞ and V is the energy space. Under a certain ℓ_q-summability assumption on the lognormal inputs (0 < q < 2), we proved that for any sufficiently small δ > 0 and every integer n > 1, one can construct a compactly supported deep ReLU neural network ϕ_n := (ϕ_j)_{j=1}^m of size at most n on ℝ^m with m = 𝒪(n^{1-δ}), and a sequence of points (y^j)_{j=1}^m ⊂ ℝ^m (both independent of u), so that the collocation approximation of u by Φ_n u := ∑_{j=1}^m u(y^j) Φ_j, which is based on the m solvers (u(y^j))_{j=1}^m and the deep ReLU network ϕ_n, satisfies the twofold error bound ‖u − Φ_n u‖_{L_2(ℝ^∞, V, γ)} = 𝒪(m^{-(1/q - 1/2)}) = 𝒪(n^{-(1-δ)(1/q - 1/2)}), where the Φ_j are the extensions of the ϕ_j to the whole of ℝ^∞. We also obtained similar results for the case when the lognormal inputs are parametrized on ℝ^M with very large dimension M, and the approximation error is measured in the √(g_M)-weighted uniform norm of the Bochner space L_∞^{√(g_M)}(ℝ^M, V), where g_M is the density function of the standard Gaussian probability measure on ℝ^M.
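The construction has a simple computational shape: m solver calls at fixed, u-independent points y^1, …, y^m, combined with a network whose j-th output reweights the j-th solver snapshot. The sketch below illustrates this shape only. The two-layer network phi and the solver name solve_pde are stand-ins of our own; the paper's network ϕ_n is a specific compactly supported construction of size at most n, not a trained toy model.

    # Minimal sketch of a collocation approximation of the form
    #     Phi_n u(y) = sum_{j=1}^m u(y^j) * phi_j(y)
    # NOTE: `phi` is a toy two-layer ReLU map standing in for the paper's
    # constructed network (phi_j)_{j=1}^m, and `solve_pde` is a hypothetical
    # PDE solver returning a discretized solution in R^K.
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def phi(y, W1, b1, W2, b2):
        # Toy ReLU network R^m -> R^m; its j-th output plays the role of phi_j(y).
        return W2 @ relu(W1 @ y + b1) + b2

    def collocation_approx(y, u_samples, params):
        # u_samples: (m, K) array whose j-th row is the solver output u(y^j),
        # precomputed once from the u-independent points y^1, ..., y^m.
        weights = phi(y, *params)       # (phi_1(y), ..., phi_m(y))
        return weights @ u_samples      # approximate solution u(y) in R^K

    # Offline stage (m solver calls; `solve_pde` is hypothetical):
    #     u_samples = np.stack([solve_pde(yj) for yj in y_points])
    # Online stage: collocation_approx(y, u_samples, params) for any new y.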
