Sparsity Regularization for classification of large dimensional data

12/06/2017
by   Nand Sharma, et al.

Feature selection has evolved into a very important step in several machine learning paradigms. In domains such as bioinformatics and text classification, which involve high-dimensional data, feature selection can drastically reduce the feature space. In cases where it is difficult or infeasible to obtain sufficient training examples, feature selection helps overcome the curse of dimensionality, which in turn improves the performance of the classification algorithm. The focus of our research is five embedded feature selection methods that use ridge regression, Lasso regression, or a combination of the two, with the goal of simultaneously performing variable selection and grouping correlated variables.
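
As a rough illustration of the embedded penalties the abstract refers to, the following minimal scikit-learn sketch contrasts ridge (L2), Lasso (L1), and elastic-net regularization in a high-dimensional classification setting. It is not the paper's method; the synthetic data, solver choices, and hyperparameters are illustrative assumptions.

```python
# Sketch: embedded feature selection via penalized logistic regression.
# Ridge shrinks coefficients but keeps them all nonzero; Lasso zeroes many
# out (variable selection); elastic net mixes the two, which also helps
# group correlated variables. All settings below are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Many features, few informative ones -- mimics the high-dimensional setting.
X, y = make_classification(n_samples=200, n_features=2000,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

penalties = {
    "ridge (L2)":  dict(penalty="l2", solver="lbfgs"),
    "lasso (L1)":  dict(penalty="l1", solver="saga"),
    "elastic net": dict(penalty="elasticnet", solver="saga", l1_ratio=0.5),
}

for name, kwargs in penalties.items():
    clf = LogisticRegression(C=0.1, max_iter=5000, **kwargs).fit(X_train, y_train)
    n_selected = np.count_nonzero(clf.coef_)   # nonzero weights = selected features
    print(f"{name}: accuracy={clf.score(X_test, y_test):.3f}, "
          f"selected features={n_selected}")
```

In a run like this, the L1 and elastic-net models typically retain only a small fraction of the 2000 features, which is the behavior embedded feature selection exploits.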
