Finite-sample and asymptotic analysis of generalization ability with an application to penalized regression

09/12/2016
by Ning Xu, et al.

In this paper, we study the performance of extremum estimators from the perspective of generalization ability (GA): the ability of a model to predict outcomes in new samples from the same population. By adapting classical concentration inequalities, we derive upper bounds on the empirical out-of-sample prediction error as a function of the in-sample error, the in-sample data size, the heaviness of the tails of the error distribution, and model complexity. We show that these error bounds may be used to tune key estimation hyper-parameters, such as the number of folds K in cross-validation, and we show how K affects the bias-variance trade-off of cross-validation. We demonstrate that the L_2-norm difference between penalized regression estimates and the corresponding unpenalized estimates is directly explained by the GA of the estimates and the GA of the empirical moment conditions. Lastly, we prove that all penalized regression estimates are L_2-consistent in both the n ≥ p and the n < p cases. Simulations are used to demonstrate the key results.

Keywords: generalization ability, upper bound of generalization error, penalized regression, cross-validation, bias-variance trade-off, L_2 difference between penalized and unpenalized regression, lasso, high-dimensional data.
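A minimal simulation sketch of two of the quantities discussed in the abstract: the L_2-norm difference between penalized (lasso) and unpenalized (OLS) coefficient estimates, and how the choice of K in K-fold cross-validation affects the spread of the estimated prediction error. This is not the paper's code; the data dimensions, noise level, penalty strength, and use of scikit-learn are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): compare lasso vs. OLS estimates
# in L_2 norm, and look at K-fold CV error estimates for several K.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 10                        # illustrative sample size and dimension
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]           # sparse true coefficients
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# L_2-norm difference between penalized and unpenalized estimates
ols = LinearRegression().fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)    # alpha chosen arbitrarily for illustration
l2_diff = np.linalg.norm(lasso.coef_ - ols.coef_)
print(f"||beta_lasso - beta_ols||_2 = {l2_diff:.4f}")

# How K affects the cross-validated error estimate: smaller K means fewer,
# larger validation folds; larger K means many small folds.
for K in (2, 5, 10, 20):
    kf = KFold(n_splits=K, shuffle=True, random_state=0)
    scores = -cross_val_score(Lasso(alpha=0.1), X, y,
                              scoring="neg_mean_squared_error", cv=kf)
    print(f"K={K:2d}: CV MSE mean={scores.mean():.3f}, "
          f"std across folds={scores.std():.3f}")
```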
