Sparse Gaussian processes for solving nonlinear PDEs
In this article, we propose a numerical method based on sparse Gaussian processes (SGPs) to solve nonlinear partial differential equations (PDEs). The SGP algorithm builds on a Gaussian process (GP) method, which approximates the solution of a PDE with the maximum a posteriori probability estimator of a GP conditioned on the PDE evaluated at a finite number of sample points. The main bottleneck of the GP method lies in the inversion of a covariance matrix, whose cost grows cubically with the number of sample points. To improve the scalability of the GP method while retaining desirable accuracy, we draw inspiration from SGP approximations, where inducing points are introduced to summarize the information carried by the sample points. More precisely, our SGP method uses a Gaussian prior associated with a low-rank kernel generated by inducing points randomly selected from the sample points. In the SGP method, the size of the matrix to be inverted is proportional to the number of inducing points, which is much smaller than the number of sample points. Numerical experiments show that the SGP method, using fewer than half of the uniform sample points as inducing points, achieves accuracy comparable to that of the GP method using the same number of uniform sample points, while significantly reducing the computational cost. We prove the existence of the approximate solution of a PDE and provide a rigorous error analysis.
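The cost reduction comes from the low-rank structure of the kernel: with m inducing points, conditioning only requires factorizing an m-by-m matrix rather than an N-by-N covariance matrix. The sketch below illustrates this scaling argument with a Nyström-type low-rank kernel and the Woodbury identity in plain NumPy; it is only a minimal illustration under assumed choices (the squared-exponential kernel, the names rbf_kernel, lengthscale, sigma2, and the 1D sample points and right-hand side are all hypothetical), not the paper's actual PDE solver, which conditions the GP on PDE evaluations at the sample points.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.2):
    """Squared-exponential kernel matrix between point sets X and Y (assumed kernel choice)."""
    sq_dists = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

rng = np.random.default_rng(0)
N, m = 400, 50                                   # N sample points, m << N inducing points
X = rng.uniform(0.0, 1.0, size=(N, 1))           # sample (collocation) points in 1D
Z = X[rng.choice(N, size=m, replace=False)]      # inducing points drawn randomly from the samples

# Nystrom-type low-rank kernel: K_approx = K_xz K_zz^{-1} K_zx
K_zz = rbf_kernel(Z, Z) + 1e-8 * np.eye(m)       # m x m block, cheap to factorize
K_xz = rbf_kernel(X, Z)                          # N x m cross-covariance

# With the Woodbury identity, applying (K_approx + sigma^2 I)^{-1} to a vector
# only requires solving an m x m system instead of an N x N one.
sigma2 = 1e-4                                    # assumed nugget/regularization
y = np.sin(2 * np.pi * X[:, 0])                  # placeholder right-hand side, not PDE data

A = K_zz + K_xz.T @ K_xz / sigma2                # m x m Woodbury inner matrix
alpha = (y - K_xz @ np.linalg.solve(A, K_xz.T @ y) / sigma2) / sigma2
# alpha approximates (K_approx + sigma2 * I)^{-1} y at O(N m^2) cost instead of O(N^3).
```

In this sketch the only dense factorization involves the m-by-m matrix A, which is the source of the cost saving described in the abstract when m is much smaller than N.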