Embarrassingly parallel inference for Gaussian processes

02/27/2017
by Michael M. Zhang, et al.

Training Gaussian process (GP)-based models typically involves an O(N^3) computational bottleneck due to inverting the covariance matrix. Popular methods for overcoming this bottleneck include sparse approximations of the covariance matrix through inducing variables, or dimensionality reduction via "local experts". However, these types of models cannot account for both long- and short-range correlations in the GP functions, and are often hard to implement in a distributed setting. We present an embarrassingly parallel method that takes advantage of the computational ease of inverting block-diagonal matrices, while maintaining much of the expressivity of a full covariance matrix. By using importance sampling to average over different realizations of low-rank approximations of the GP model, we ensure our algorithm is both asymptotically unbiased and embarrassingly parallel. We show comparable or improved performance over competing methods on a range of synthetic and real datasets.
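To make the idea in the abstract concrete, here is a minimal, hedged sketch of the general strategy: each random partition of the training data induces a block-diagonal covariance whose blocks can be inverted cheaply (and in parallel), and several such realizations are combined with self-normalized weights proportional to each realization's marginal likelihood. The RBF kernel, the nearest-block assignment of test points, and the exact weighting scheme are simplifying assumptions made for this illustration, not the authors' published estimator.

    import numpy as np

    def rbf(X1, X2, ls=1.0, var=1.0):
        # Squared-exponential (RBF) kernel -- an assumed choice for this sketch.
        d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
        return var * np.exp(-0.5 * d2 / ls**2)

    def realization(X, y, Xs, n_blocks, noise=1e-2, rng=None):
        """One block-diagonal realization: random partition, exact GP per block."""
        rng = np.random.default_rng() if rng is None else rng
        blocks = np.array_split(rng.permutation(len(X)), n_blocks)
        mean = np.zeros(len(Xs))
        logml = 0.0
        # Assign each test point to the block holding its nearest training point.
        nearest = np.argmin(((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1), axis=1)
        for idx in blocks:
            Kb = rbf(X[idx], X[idx]) + noise * np.eye(len(idx))
            L = np.linalg.cholesky(Kb)                  # O(m^3) per block, m << N
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y[idx]))
            # Block log marginal likelihood, up to an additive constant.
            logml += -0.5 * y[idx] @ alpha - np.log(np.diag(L)).sum()
            mask = np.isin(nearest, idx)
            if mask.any():
                mean[mask] = rbf(Xs[mask], X[idx]) @ alpha
        return mean, logml

    def parallel_gp(X, y, Xs, n_blocks=8, n_realizations=16, seed=0):
        """Average realizations with weights proportional to their marginal likelihoods."""
        rng = np.random.default_rng(seed)
        means, logmls = zip(*(realization(X, y, Xs, n_blocks, rng=rng)
                              for _ in range(n_realizations)))
        w = np.exp(np.array(logmls) - max(logmls))
        w /= w.sum()
        return (w[:, None] * np.array(means)).sum(axis=0)

    # Toy 1-D usage example.
    X = np.linspace(0, 10, 400)[:, None]
    y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).standard_normal(400)
    Xs = np.linspace(0, 10, 50)[:, None]
    print(parallel_gp(X, y, Xs)[:5])

Because each realization, and each block within it, depends only on its own subset of the data, the per-block Cholesky factorizations can be farmed out to separate workers, which is the "embarrassingly parallel" aspect the abstract refers to.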
