Throwing Away Data Improves Worst-Class Error in Imbalanced Classification

05/23/2022
by Martin Arjovsky, et al.

Class imbalances pervade classification problems, yet their treatment differs in theory and practice. On the one hand, learning theory instructs us that more data is better, as sample size relates inversely to the average test error over the entire data distribution. On the other hand, practitioners have long developed a plethora of tricks to improve the performance of learning machines over imbalanced data. These include data reweighting and subsampling, synthetic construction of additional samples from minority classes, ensembling expensive one-versus-all architectures, and tweaking classification losses and thresholds. All of these are efforts to minimize the worst-class error, which is often associated with the minority group in the training data, and finds additional motivation in the robustness, fairness, and out-of-distribution literatures. Here we take on the challenge of developing learning theory able to describe the worst-class error of classifiers over linearly-separable data when fitted either on (i) the full training set, or (ii) a subset where the majority class is subsampled to match the minority class in size. We borrow tools from extreme value theory to show that, under distributions with certain tail properties, throwing away most data from the majority class leads to better worst-class error.
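To make the comparison in the abstract concrete, here is a minimal sketch (not the paper's experimental setup) of the two training regimes it contrasts: fitting a linear classifier on (i) the full imbalanced sample versus (ii) a subsample where the majority class is cut down to the size of the minority class, then comparing worst-class error. The heavy-tailed synthetic data generator and the choice of scikit-learn's LinearSVC are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def sample_class(n, mean, df=2.0):
    # Heavy-tailed (Student-t) features shifted by a class-dependent mean.
    return rng.standard_t(df, size=(n, 2)) + mean

def make_data(n_major, n_minor):
    X = np.vstack([sample_class(n_major, mean=+3.0),
                   sample_class(n_minor, mean=-3.0)])
    y = np.concatenate([np.zeros(n_major), np.ones(n_minor)])
    return X, y

def worst_class_error(clf, X, y):
    # Worst-class error: the largest per-class misclassification rate.
    errs = [np.mean(clf.predict(X[y == c]) != c) for c in np.unique(y)]
    return max(errs)

# Imbalanced training set; balanced test set for evaluation.
X_tr, y_tr = make_data(n_major=10_000, n_minor=100)
X_te, y_te = make_data(n_major=5_000, n_minor=5_000)

# (i) Fit on the full training set.
full = LinearSVC(C=1.0, max_iter=10_000).fit(X_tr, y_tr)

# (ii) Subsample the majority class down to the minority-class size.
maj_idx = np.flatnonzero(y_tr == 0)
min_idx = np.flatnonzero(y_tr == 1)
keep = np.concatenate([rng.choice(maj_idx, size=min_idx.size, replace=False),
                       min_idx])
sub = LinearSVC(C=1.0, max_iter=10_000).fit(X_tr[keep], y_tr[keep])

print("worst-class error, full data :", worst_class_error(full, X_te, y_te))
print("worst-class error, subsampled:", worst_class_error(sub, X_te, y_te))
```

Whether subsampling helps in this toy setting depends on the tail behavior of the feature distribution; the paper's theoretical claim concerns linearly-separable data with particular tail properties, which this synthetic example only loosely imitates.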
