When is best subset selection the "best"?

07/03/2020
by Jianqing Fan, et al.

Best subset selection (BSS) is fundamental in statistics and machine learning. Despite intensive study, the question of when BSS is truly the "best", namely when it yields the oracle estimator, remains only partially answered. In this paper, we address this issue by giving a weak sufficient condition and a strong necessary condition for BSS to exactly recover the true model. We also give a weak sufficient condition for BSS to achieve the sure screening property. On the optimization side, we find that the exact combinatorial minimizer of BSS is unnecessary: all the established statistical properties of the best subset carry over to any sparse model whose residual sum of squares is close enough to that of the best subset. In particular, we show that an iterative hard thresholding (IHT) algorithm can find a sparse subset with the sure screening property within logarithmically many steps; another round of BSS within this set then recovers the true model. Simulation studies and real data examples show that IHT yields lower false discovery rates and higher true positive rates than competing approaches including LASSO, SCAD and SIS.
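The sketch below illustrates the two-stage procedure the abstract describes: iterative hard thresholding to screen a small candidate set, followed by best subset selection restricted to that set. It is not the authors' code; the step size `eta`, the screening size `k`, the iteration cap, and the simulated data are illustrative assumptions.

```python
# Minimal sketch (assumed parameters, not values from the paper) of
# IHT screening followed by restricted best subset selection.
import itertools
import numpy as np


def hard_threshold(beta, k):
    """Keep the k largest-magnitude coordinates of beta, zero out the rest."""
    out = np.zeros_like(beta)
    idx = np.argsort(np.abs(beta))[-k:]
    out[idx] = beta[idx]
    return out


def iht_screen(X, y, k, eta=None, max_iter=100):
    """Run IHT for sparse linear regression and return the selected support."""
    n, p = X.shape
    if eta is None:
        # Conservative step size based on the largest singular value of X.
        eta = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(max_iter):
        grad = X.T @ (y - X @ beta)          # gradient step on least squares
        beta = hard_threshold(beta + eta * grad, k)  # project onto k-sparse vectors
    return np.flatnonzero(beta)


def best_subset_within(X, y, support, s):
    """Exhaustive best subset of size s restricted to the screened support."""
    best_rss, best_set = np.inf, None
    for subset in itertools.combinations(support, s):
        cols = list(subset)
        coef, _, _, _ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = np.sum((y - X[:, cols] @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_set = rss, cols
    return best_set, best_rss


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p, s_true = 200, 50, 3
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:s_true] = [3.0, -2.0, 1.5]
    y = X @ beta_true + rng.standard_normal(n)

    screened = iht_screen(X, y, k=10)                           # stage 1: screening
    model, rss = best_subset_within(X, y, screened, s=s_true)   # stage 2: BSS
    print("screened set:", screened, "selected model:", model)
```

Restricting the exhaustive search to the screened set keeps the combinatorial step small, which reflects the abstract's point that an exact global BSS minimizer over all p variables is not required.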
