Learning to Diversify via Weighted Kernels for Classifier Ensemble

06/04/2014
by Xu-Cheng Yin, et al.

A classifier ensemble should generally combine diverse component classifiers. However, it is difficult to establish a definitive connection between diversity measures and ensemble accuracy. Given a list of available component classifiers, adaptively and diversely combining them remains a significant open challenge. In this paper, we argue that diversity, not direct diversity on samples but adaptive diversity learned from the data, is highly correlated with ensemble accuracy, and we propose a novel technique for classifier ensembles, learning to diversify, which learns to adaptively combine classifiers by considering both accuracy and diversity. Specifically, our approach, Learning TO Diversify via Weighted Kernels (L2DWK), performs classifier combination by optimizing a direct but simple criterion: maximizing ensemble accuracy and adaptive diversity simultaneously by minimizing a convex loss function. Given a measure formulation, the diversity is calculated with weighted kernels (i.e., the diversity is measured on the component classifiers' outputs, which are kernelized and weighted), and the kernel weights are learned automatically. We minimize this loss function by estimating the kernel weights in conjunction with the classifier weights, and we propose a self-training algorithm that carries out this convex optimization iteratively. Extensive experiments on 32 UCI classification benchmark datasets show that the proposed approach consistently outperforms state-of-the-art ensembles such as Bagging, AdaBoost, Random Forests, GASEN, Regularized Selective Ensemble, and Ensemble Pruning via Semi-Definite Programming.
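To make the alternating optimization concrete, the Python sketch below illustrates the general idea: classifier weights and kernel weights are updated in turn on a probability simplex, trading off a squared-error accuracy term against a weighted-kernel diversity term. The specific loss form, the RBF kernels over classifier output vectors, the projected-gradient updates, and all names and hyperparameters (`l2dwk_sketch`, `lam`, `gammas`) are illustrative assumptions, not the paper's exact L2DWK formulation.

```python
# Illustrative sketch only: alternating simplex-projected gradient descent on
# loss(w, beta) = ||Hw - y||^2 - lam * sum_ij w_i w_j (1 - K_beta[i, j]),
# where K_beta is a beta-weighted combination of kernels over classifier outputs.
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def rbf_kernels(H, gammas):
    """One RBF kernel per gamma over the classifiers' output vectors (columns of H)."""
    D = ((H.T[:, None, :] - H.T[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return np.stack([np.exp(-g * D) for g in gammas])       # shape (M, T, T)

def l2dwk_sketch(H, y, gammas=(0.01, 0.1, 1.0), lam=0.5, steps=200, lr=0.05):
    """Alternately update classifier weights w and kernel weights beta,
    minimizing squared ensemble error minus a weighted-kernel diversity term."""
    T = H.shape[1]
    Ks = rbf_kernels(H, gammas)             # candidate kernels on classifier outputs
    w = np.full(T, 1.0 / T)
    beta = np.full(len(Ks), 1.0 / len(Ks))
    for _ in range(steps):
        K = np.tensordot(beta, Ks, axes=1)  # combined weighted kernel
        # w-step: gradient of the accuracy term minus the diversity term;
        # (1 - K) @ w equals sum(w) * ones - K @ w.
        grad_w = 2.0 * H.T @ (H @ w - y) - 2.0 * lam * ((1.0 - K) @ w)
        w = project_simplex(w - lr * grad_w)
        # beta-step: the diversity term is linear in beta, so its gradient is closed-form.
        grad_b = lam * np.array([w @ Km @ w for Km in Ks])
        beta = project_simplex(beta - lr * grad_b)
    return w, beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200) * 2.0 - 1.0                        # +/-1 labels
    H = np.clip(y[:, None] + rng.normal(0, 1.0, (200, 5)), -1, 1)  # noisy component outputs
    w, beta = l2dwk_sketch(H, y)
    print("classifier weights:", np.round(w, 3))
    print("kernel weights:", np.round(beta, 3))
    print("ensemble accuracy:", np.mean(np.sign(H @ w) == y))
```

The alternation mirrors the self-training iteration described in the abstract: with the kernel weights fixed, the objective is convex in the classifier weights, and with the classifier weights fixed, it is linear in the kernel weights, so each step is a simple simplex-constrained update.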
