Wasserstein convergence in Bayesian deconvolution models

11/12/2021
by Judith Rousseau, et al.

We study the well-known deconvolution problem of recovering a distribution function from independent replicates (signal) additively contaminated with random errors (noise), whose distribution is known. We investigate whether a Bayesian nonparametric approach to modelling the latent distribution of the signal can yield inferences with asymptotic frequentist validity under the L^1-Wasserstein metric. When the error density is ordinary smooth, we develop two inversion inequalities relating either the L^1 or the L^1-Wasserstein distance between two mixture densities (of the observations) to the L^1-Wasserstein distance between the corresponding distributions of the signal. These smoothing inequalities improve on those in the literature. We apply this general result to a Bayesian approach based on a Dirichlet process mixture of normal distributions as a prior on the mixing distribution (or distribution of the signal), with Laplace or Linnik noise. In particular, we construct an adaptive approximation of the density of the observations by convolving a Laplace (or Linnik) density with a well-chosen mixture of normal densities, and we show that the posterior concentrates at the minimax rate up to a logarithmic factor. The same prior law is shown to also adapt to the Sobolev regularity level of the mixing density, thus leading to a new Bayesian estimation method, relative to the Wasserstein distance, for distributions with smooth densities.
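As a concrete illustration of the data-generating model described in the abstract, the following Python sketch simulates observations Y = X + ε, with the latent signal X drawn from a mixture of normals (the class of mixing distributions targeted by the Dirichlet process mixture prior) and ordinary-smooth Laplace noise ε, and computes an empirical L^1-Wasserstein distance with SciPy. The mixture weights, means, and scales are illustrative assumptions, not values from the paper.

```python
# Minimal simulation of the deconvolution setup: latent signal X from a
# (hypothetical) two-component normal mixture, known Laplace noise eps,
# observed Y = X + eps. All parameters are illustrative.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 5000

# Latent signal: mixture of normals (illustrative weights/means/scales).
weights = np.array([0.4, 0.6])
means = np.array([-1.0, 2.0])
sds = np.array([0.5, 1.0])
comp = rng.choice(2, size=n, p=weights)
x = rng.normal(means[comp], sds[comp])

# Known ordinary-smooth noise: Laplace with unit scale.
eps = rng.laplace(loc=0.0, scale=1.0, size=n)
y = x + eps  # observed, additively contaminated replicates

# Empirical L^1-Wasserstein distance between the laws of the latent
# signal and the observations, showing the smoothing effect of the
# convolution with the noise density.
print("W1(empirical X, empirical Y) =", wasserstein_distance(x, y))
```

In this setup the deconvolution task is the reverse direction: given only the contaminated sample `y` and the known noise density, infer the distribution of `x`, which the paper addresses with a Dirichlet process mixture-of-normals prior on the mixing distribution.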
