Bayesian inference using synthetic likelihood: asymptotics and adjustments

02/13/2019
by David J. Nott et al.

Implementing Bayesian inference is often computationally challenging in applications involving complex models, and sometimes calculating the likelihood itself is difficult. Synthetic likelihood is one approach to carrying out inference when the likelihood is intractable but simulating from the model is straightforward. The method constructs an approximate likelihood by treating a vector summary statistic as multivariate normal, with the unknown mean and covariance matrix estimated by simulation at any given parameter value. Our article examines the asymptotic behaviour of Bayesian inference based on a synthetic likelihood. If the summary statistic satisfies a central limit theorem, then under general conditions the synthetic likelihood posterior is asymptotically normal, with a distribution concentrating around a pseudo-true parameter value that coincides with the true value when the model is correct. We compare the asymptotic behaviour of the synthetic likelihood posterior with that obtained from approximate Bayesian computation (ABC), and show that the two methods behave similarly under assumptions which allow correct uncertainty quantification. We also compare the computational efficiency of importance sampling ABC and synthetic likelihood algorithms, and give a general argument for why synthetic likelihood is more efficient. Adjusted inference methods based on the asymptotic results are also suggested for use when a possibly misspecified form, such as a diagonal or factor model, is assumed for the synthetic likelihood covariance matrix; this is attractive because it allows the covariance matrix to be estimated from fewer model simulations when model simulation is expensive. The methods are illustrated in simulated and real examples.
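To make the construction above concrete, here is a minimal Python sketch (not from the paper) of a single synthetic log-likelihood evaluation. The `simulate` function is a hypothetical user-supplied model simulator returning one summary-statistic vector, and the `n_sims` and `diag` parameters are illustrative; `diag=True` corresponds to the possibly misspecified diagonal covariance form that the paper's adjusted inference methods are designed to accommodate.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_log_likelihood(theta, s_obs, simulate, n_sims=200,
                             diag=False, rng=None):
    """Estimate the synthetic log-likelihood at parameter value `theta`.

    `simulate(theta, rng)` is a hypothetical user-supplied function
    returning one vector of summary statistics drawn from the model;
    `s_obs` is the observed summary-statistic vector.
    """
    rng = np.random.default_rng(rng)
    # Draw n_sims summary-statistic vectors at this parameter value.
    sims = np.array([simulate(theta, rng) for _ in range(n_sims)])
    # Plug-in estimates of the summary mean and covariance.
    mu_hat = sims.mean(axis=0)
    if diag:
        # Cheaper, possibly misspecified diagonal covariance estimate,
        # usable with fewer simulations when simulation is expensive.
        sigma_hat = np.diag(sims.var(axis=0, ddof=1))
    else:
        sigma_hat = np.cov(sims, rowvar=False)
    # Gaussian approximation to the summary-statistic likelihood.
    return multivariate_normal.logpdf(s_obs, mean=mu_hat, cov=sigma_hat)
```

In practice this estimate would be recomputed at each parameter value proposed by an MCMC or importance sampling scheme, and the resulting noisy synthetic likelihood posterior is the object whose asymptotic behaviour the paper studies.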
