Contraction of a quasi-Bayesian model with shrinkage priors in precision matrix estimation
Several Bayesian approaches are currently available for estimating large sparse precision matrices, including the Bayesian graphical Lasso (Wang, 2012), Bayesian structure learning (Banerjee and Ghosal, 2015), and the graphical horseshoe (Li et al., 2019). Although these methods have exhibited good empirical performance, they are in general computationally expensive. Moreover, little is known about their theoretical properties, e.g., the posterior contraction rates of the Bayesian graphical Lasso and the graphical horseshoe. In this paper, we propose a new method that integrates several commonly used continuous shrinkage priors into a quasi-Bayesian framework built on a pseudo-likelihood. Under mild conditions, we establish an optimal posterior contraction rate for the proposed method. Compared to existing approaches, our method has two main advantages: first, it is computationally more efficient while achieving a similar error rate; second, the framework is more amenable to theoretical analysis. Extensive simulation experiments and the analysis of a real data set support our theoretical results.
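To give a concrete sense of the pseudo-likelihood idea the abstract refers to, the sketch below estimates a sparse precision matrix by node-wise penalized regressions (Meinshausen–Bühlmann style): each variable is regressed on the others, and the scaled negative coefficients fill in that row of the precision matrix. This is an illustrative frequentist stand-in, not the paper's quasi-Bayesian sampler or its shrinkage priors; the function name, the `alpha` penalty level, and the symmetrization rule are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso


def nodewise_precision(X, alpha=0.1):
    """Pseudo-likelihood-style precision estimate via node-wise Lasso.

    Regress each column of X on the remaining columns; nonzero
    coefficients flag conditional dependencies. Illustrative sketch
    only -- not the paper's quasi-Bayesian method.
    """
    n, p = X.shape
    Omega = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=alpha).fit(X[:, others], X[:, j])
        resid = X[:, j] - fit.predict(X[:, others])
        sigma2 = resid.var() + 1e-12  # residual variance of node j
        # Standard identity: Omega[j, j] = 1 / sigma2_j,
        # Omega[j, k] = -beta_{jk} / sigma2_j for k != j.
        Omega[j, j] = 1.0 / sigma2
        Omega[j, others] = -fit.coef_ / sigma2
    # The node-wise estimates need not be symmetric; average them.
    return 0.5 * (Omega + Omega.T)
```

Because each of the p regressions is independent, this kind of pseudo-likelihood decomposition is embarrassingly parallel, which is one reason such frameworks tend to be cheaper than full-likelihood Bayesian samplers.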