Bayesian l_0 Regularized Least Squares

05/31/2017
by Nicholas G. Polson, et al.

Bayesian l_0-regularized least squares provides a variable selection technique for high-dimensional predictors. The challenge in l_0 regularization is optimizing a non-convex objective function via a search over the model space consisting of all possible predictor combinations, an NP-hard task. Spike-and-slab (a.k.a. Bernoulli-Gaussian, BG) priors are the gold standard for Bayesian variable selection, but they come at a cost in computational speed and scalability. We show that the Single Best Replacement (SBR) algorithm is a fast, scalable alternative. Although SBR calculates a sparse posterior mode, we show that it possesses a number of the equivalences and optimality properties of a posterior mean. To illustrate our methodology, we provide simulation evidence and a real-data example comparing the statistical properties and computational efficiency of SBR with direct posterior sampling under spike-and-slab priors. Finally, we conclude with directions for future research.
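To make the search the abstract describes concrete, the sketch below implements a greedy single-best-replacement pass for the l_0-penalized least-squares objective J(S) = (1/2)||y - X_S beta_S||^2 + lambda*|S|: at each step it evaluates every single insertion or removal of a predictor and accepts the one flip that most reduces J, stopping at a local minimum. This is a minimal illustration based on the standard SBR description, not the authors' implementation; the function name `sbr`, the penalty parameter `lam`, and the stopping rule are illustrative assumptions.

```python
import numpy as np

def sbr(X, y, lam, max_iter=100):
    """Greedy Single Best Replacement for l_0-penalized least squares.

    Repeatedly flips the single predictor (into or out of the active
    set) that most reduces J(S) = 0.5*||y - X_S beta_S||^2 + lam*|S|,
    stopping when no single flip improves the objective.
    """
    n, p = X.shape
    active = np.zeros(p, dtype=bool)

    def objective(mask):
        # Least-squares fit restricted to the active set, plus l_0 penalty.
        if not mask.any():
            return 0.5 * (y @ y)
        beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        r = y - X[:, mask] @ beta
        return 0.5 * (r @ r) + lam * mask.sum()

    best = objective(active)
    for _ in range(max_iter):
        # Evaluate every single-coordinate flip (insertion or removal).
        trials = []
        for j in range(p):
            cand = active.copy()
            cand[j] = not cand[j]
            trials.append((objective(cand), j))
        new_best, j_star = min(trials)
        if new_best >= best:  # no flip improves J: local minimum reached
            break
        active[j_star] = not active[j_star]
        best = new_best

    # Refit coefficients on the final active set.
    beta_hat = np.zeros(p)
    if active.any():
        beta_hat[active], *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
    return beta_hat, active

# Example (synthetic): recover a 3-sparse signal from noisy observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
beta_true = np.zeros(50)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat, support = sbr(X, y, lam=1.0)
```

Because each iteration changes the active set by exactly one predictor, the per-iteration cost is one least-squares solve per candidate flip, which is what makes the method scale to high-dimensional p where enumerating all 2^p subsets is infeasible.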
