Mixed Laplace approximation for marginal posterior and Bayesian inference in error-in-operator model
The Laplace approximation is a widely used tool in Bayesian inference; it asserts that the posterior is nearly Gaussian. <cit.> established rather accurate finite-sample results on the quality of the Laplace approximation in terms of the so-called effective dimension p under the critical dimension constraint p^3≪ n. However, this condition can be too restrictive for many applications such as the error-in-operator problem or deep neural networks. This paper addresses the question of whether the dimensionality condition can be relaxed and the accuracy of the approximation improved when the target of estimation is low dimensional while the nuisance parameter is high or infinite dimensional. Under mild conditions, the marginal posterior can be approximated by a Gaussian mixture, and the accuracy of this approximation depends only on the target dimension. Under the condition p^2≪ n, or in special situations such as semi-orthogonality, the Gaussian mixture can be replaced by a single Gaussian distribution, recovering a classical Laplace-type result. The second result benefits greatly from recent advances in Gaussian comparison in <cit.>. The results are illustrated and specified for the error-in-operator model.
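To make the basic construction concrete, the following is a minimal one-dimensional sketch of the Laplace approximation discussed above: the posterior is approximated by a Gaussian centered at the posterior mode with variance given by the inverse negative curvature of the log-posterior at that mode. The log-posterior `log_post` is an illustrative toy choice (a Gaussian term with a small perturbation), not a model from the paper, and the helper name `laplace_approx` is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 1-D log-posterior: a Gaussian term with a mild
# non-Gaussian perturbation (illustrative only, not from the paper).
def log_post(theta):
    return -0.5 * (theta - 1.0) ** 2 - 0.1 * np.cos(theta)

def laplace_approx(log_p, bounds=(-10.0, 10.0)):
    """Return (mode, variance) of the Gaussian Laplace approximation."""
    # Posterior mode: maximize log_p (minimize its negative).
    res = minimize_scalar(lambda t: -log_p(t), bounds=bounds, method="bounded")
    mode = res.x
    # Numerical second derivative of log_p at the mode (central differences).
    h = 1e-4
    d2 = (log_p(mode + h) - 2.0 * log_p(mode) + log_p(mode - h)) / h**2
    # Laplace approximation: posterior ≈ N(mode, -1/d2).
    return mode, -1.0 / d2

mode, var = laplace_approx(log_post)
print(mode, var)
```

In higher dimensions the scalar curvature is replaced by the Hessian of the log-posterior, and the paper's point is precisely that the quality of this Gaussian approximation degrades with the (effective) dimension, motivating the mixture-based approximation of the low-dimensional marginal.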