High-Dimensional L_2Boosting: Rate of Convergence

02/29/2016
by Ye Luo, et al.

Boosting is one of the most significant developments in machine learning. This paper studies the rate of convergence of L_2Boosting, which is tailored for regression, in a high-dimensional setting. Moreover, we introduce so-called "post-Boosting". This is a post-selection estimator which applies ordinary least squares to the variables selected in the first stage by L_2Boosting. Another variant is "Orthogonal Boosting", where an orthogonal projection is conducted after each step. We show that both post-L_2Boosting and orthogonal boosting achieve the same rate of convergence as LASSO in a sparse, high-dimensional setting. We show that the rate of convergence of classical L_2Boosting depends on the design matrix through a sparse eigenvalue constant. To establish the latter results, we derive new approximation results for the pure greedy algorithm, based on analyzing the revisiting behavior of L_2Boosting. We also introduce feasible rules for early stopping, which can be easily implemented and used in applied work. Our results also allow a direct comparison between LASSO and boosting, which has been missing from the literature. Finally, we present simulation studies and applications to illustrate the relevance of our theoretical results and to provide insights into the practical aspects of boosting. In these simulation studies, post-L_2Boosting clearly outperforms LASSO.
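To make the three estimators in the abstract concrete, the sketch below implements the standard componentwise form of L_2Boosting (the pure greedy algorithm), the post-L_2Boosting OLS refit, and orthogonal boosting. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and the fixed step count n_steps simply stands in for a stopping time, where the paper's feasible early-stopping rules would apply.

```python
import numpy as np

def l2_boost(X, y, n_steps=50):
    """Componentwise L_2Boosting (pure greedy algorithm): at each step,
    regress the current residual on the single covariate that gives the
    largest reduction in the residual sum of squares."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    selected = set()
    col_norms = np.sum(X ** 2, axis=0)              # ||x_j||^2 per column
    for _ in range(n_steps):
        coefs = X.T @ residual / col_norms          # univariate LS coefficients
        j = int(np.argmax(coefs ** 2 * col_norms))  # max RSS reduction
        beta[j] += coefs[j]                         # greedy coordinate update
        residual -= coefs[j] * X[:, j]
        selected.add(j)
    return beta, sorted(selected)

def post_l2_boost(X, y, n_steps=50):
    """Post-L_2Boosting: ordinary least squares refit on the variables
    selected in the boosting stage."""
    _, selected = l2_boost(X, y, n_steps)
    beta = np.zeros(X.shape[1])
    sol, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    beta[selected] = sol
    return beta

def orthogonal_boost(X, y, n_steps=50):
    """Orthogonal Boosting (orthogonal greedy algorithm): after each
    selection, project y onto the span of all variables chosen so far."""
    residual = y.astype(float).copy()
    selected = []
    col_norms = np.sqrt(np.sum(X ** 2, axis=0))
    for _ in range(n_steps):
        j = int(np.argmax(np.abs(X.T @ residual) / col_norms))
        if j not in selected:
            selected.append(j)
        sol, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ sol         # projection residual
    beta = np.zeros(X.shape[1])
    beta[selected] = sol
    return beta, sorted(selected)
```

Replacing the full univariate update with a shrunken step, beta[j] += nu * coefs[j] for some nu in (0, 1], gives the usual shrinkage variant of boosting; the post-selection refit is what distinguishes post-L_2Boosting from running the greedy algorithm alone.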
