Vote-boosting ensembles

06/30/2016
by Maryam Sabzevari, et al.

Vote-boosting is a sequential ensemble learning method in which individual classifiers are built on differently weighted versions of the training data. To build a new classifier, the weight of each training instance is determined as a function of the disagreement rate of the current ensemble's predictions for that instance. Experiments using the symmetric beta distribution as the emphasis function and different base learners illustrate the properties and analyze the performance of this type of ensemble. In classification problems with low or no class-label noise, when simple base learners are used, vote-boosting behaves as an interpolation between bagging and standard boosting (e.g. AdaBoost), depending on the value of the shape parameter of the beta distribution. In terms of predictive accuracy, the best results, which are comparable to or better than those of random forests, are obtained with vote-boosting ensembles of random trees.
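To make the mechanism concrete, the sketch below shows one way the abstract's description could be realized in Python with scikit-learn decision stumps as base learners. It is a minimal illustration under stated assumptions, not the authors' implementation: the exact form of the disagreement rate, the binary-label setup, the number of rounds, and the `vote_boost`/`vote_boost_predict` helpers are all assumptions made for this example.

```python
# Hedged sketch of a vote-boosting-style ensemble (assumed details, not the paper's code).
import numpy as np
from scipy.stats import beta
from sklearn.tree import DecisionTreeClassifier

def vote_boost(X, y, n_rounds=50, a=2.0):
    """Train a vote-boosting-style ensemble for binary labels in {0, 1}."""
    n = len(y)
    ensemble = []
    weights = np.full(n, 1.0 / n)      # start from uniform instance weights
    votes = np.zeros((n, 2))           # per-instance vote counts for each class so far
    for _ in range(n_rounds):
        clf = DecisionTreeClassifier(max_depth=1)   # simple base learner (stump)
        clf.fit(X, y, sample_weight=weights)
        ensemble.append(clf)
        pred = clf.predict(X).astype(int)
        votes[np.arange(n), pred] += 1
        # Disagreement rate: fraction of ensemble votes for the minority class
        # on each instance (0 = unanimous, 0.5 = maximally split).
        disagreement = votes.min(axis=1) / votes.sum(axis=1)
        # Symmetric beta emphasis function: weight each instance by the
        # beta(a, a) density at its disagreement rate. a = 1 gives uniform
        # weights (bagging-like); larger a concentrates emphasis on instances
        # the ensemble is uncertain about (boosting-like behavior).
        weights = beta.pdf(np.clip(disagreement, 1e-6, 1 - 1e-6), a, a)
        weights /= weights.sum()
    return ensemble

def vote_boost_predict(ensemble, X):
    """Predict by majority vote over the ensemble members."""
    preds = np.stack([clf.predict(X) for clf in ensemble])
    return (preds.mean(axis=0) >= 0.5).astype(int)
```

The shape parameter `a` plays the role the abstract attributes to the beta distribution's shape parameter: it controls how strongly instances with disagreeing ensemble votes are emphasized, sliding the behavior between bagging-like and boosting-like regimes.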
