Scalable optimization-based sampling on function space

03/03/2019
by Zheng Wang et al.

Optimization-based samplers provide an efficient and parallelizable approach to solving large-scale Bayesian inverse problems. These methods solve randomly perturbed optimization problems to draw samples from an approximate posterior distribution. "Correcting" these samples, either by Metropolization or importance sampling, enables characterization of the original posterior distribution. This paper presents a new geometric interpretation of the randomize-then-optimize (RTO) method [1] and a unified transport-map interpretation of RTO and other optimization-based samplers, namely implicit sampling [19] and randomized maximum likelihood [20]. We then introduce a new subspace acceleration strategy that makes the computational complexity of RTO scale linearly with the parameter dimension. This subspace perspective suggests a natural extension of RTO to a function-space setting. We thus formalize a function-space version of RTO and establish sufficient conditions for it to produce a valid Metropolis-Hastings proposal, yielding dimension-independent sampling performance. Numerical examples corroborate the dimension-independence of RTO and demonstrate sampling performance that is also robust to small observational noise.
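To make the perturb-optimize-correct workflow concrete, the following is a minimal sketch of the basic RTO sampler of [1] for a small nonlinear problem with a whitened Gaussian prior: each sample solves a randomly perturbed least-squares problem, and the samples are then Metropolized with the RTO importance weight. The toy forward model `F`, the problem dimensions, and the finite-difference Jacobian are illustrative assumptions, not from the paper, and the subspace acceleration and function-space constructions introduced in the paper are not shown.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# --- Hypothetical toy problem: infer v from y = F(v) + noise ---
n, m, sigma = 2, 3, 0.1                      # parameter dim, data dim, noise std
A = rng.standard_normal((m, n))

def F(v):                                    # mildly nonlinear toy forward model
    return A @ v + 0.1 * np.sin(v).sum()

def H(v):                                    # stacked map: whitened prior + scaled data misfit
    return np.concatenate([v, (F(v) - y) / sigma])

def J(v):                                    # Jacobian of H via finite differences (illustrative)
    eps = 1e-7
    return np.column_stack([(H(v + eps * e) - H(v)) / eps for e in np.eye(n)])

v_true = rng.standard_normal(n)
y = F(v_true) + sigma * rng.standard_normal(m)

# --- RTO setup: linearize at the MAP point, fix Q by a thin QR of the Jacobian ---
v_map = least_squares(H, np.zeros(n)).x
Q, _ = np.linalg.qr(J(v_map))                # (n+m) x n, orthonormal columns

def rto_sample():
    xi = rng.standard_normal(n + m)          # random perturbation of the optimization problem
    v = least_squares(lambda u: Q.T @ (H(u) - xi), v_map).x
    # log importance weight log pi(v) - log q(v), up to a constant:
    # -0.5||H(v)||^2 + 0.5||Q^T H(v)||^2 - log|det(Q^T J(v))|
    Hv, QtJ = H(v), Q.T @ J(v)
    logw = -0.5 * Hv @ Hv + 0.5 * (Q.T @ Hv) @ (Q.T @ Hv) \
           - np.linalg.slogdet(QtJ)[1]
    return v, logw

# --- "Correct" the proposal: independence Metropolis-Hastings with the RTO weight ---
v, logw = rto_sample()
chain = []
for _ in range(2000):
    v_new, logw_new = rto_sample()
    if np.log(rng.uniform()) < logw_new - logw:
        v, logw = v_new, logw_new
    chain.append(v)
print("posterior mean estimate:", np.mean(chain, axis=0))
```

Since every perturbed optimization problem is independent, the `rto_sample` calls can run in parallel, and the same log-weight could be used for self-normalized importance sampling instead of Metropolization; the validity conditions on the map (e.g., invertibility of Q^T J(v)) are the subject of the paper's analysis.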
