Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks

03/26/2021
by Curtis G. Northcutt, et al.

We algorithmically identify label errors in the test sets of 10 of the most commonly-used computer vision, natural language, and audio datasets, and subsequently study the potential for these label errors to affect benchmark results. Errors in test sets are numerous and widespread: we estimate an average of 3.4% errors across the 10 datasets, where for example label errors comprise 6% of the ImageNet validation set. Putative label errors are found using confident learning and then human-validated via crowdsourcing (54% of the algorithmically-flagged candidates are indeed erroneously labeled). Surprisingly, we find that lower capacity models may be practically more useful than higher capacity models in real-world datasets with high proportions of erroneously labeled data. For example, on ImageNet with corrected labels: ResNet-18 outperforms ResNet-50 if the prevalence of originally mislabeled test examples increases by just 6%. On CIFAR-10 with corrected labels: VGG-11 outperforms VGG-19 if the prevalence of originally mislabeled test examples increases by 5%. Traditionally, machine learning practitioners choose which model to deploy based on test accuracy – our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets.
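To make the flagging step concrete, below is a minimal NumPy sketch of the thresholding idea behind confident learning: each class gets a threshold equal to the average self-confidence of examples carrying that label, and an example becomes a candidate label error when a different class confidently claims it. This is an illustrative simplification under assumed inputs (the function name and toy data are hypothetical), not the authors' released implementation.

```python
import numpy as np

def flag_candidate_label_errors(labels, pred_probs):
    """Flag likely label errors via per-class confidence thresholds.

    labels:     (N,) int array of the given (possibly noisy) labels
    pred_probs: (N, K) out-of-sample predicted class probabilities
    Returns a boolean mask marking candidate label errors.
    """
    n, k = pred_probs.shape
    # Per-class threshold: mean self-confidence of examples given that label.
    thresholds = np.array([
        pred_probs[labels == j, j].mean() if np.any(labels == j) else np.inf
        for j in range(k)
    ])
    # A class "confidently" claims an example if its predicted probability
    # meets that class's threshold; keep the most probable such class.
    above = pred_probs >= thresholds
    claimed = np.where(above, pred_probs, -np.inf).argmax(axis=1)
    # Candidate error: some class is confident and it is not the given label.
    return above.any(axis=1) & (claimed != labels)

# Toy usage with random illustrative data; in practice pred_probs should come
# from cross-validated, out-of-sample model predictions.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=8)
pred_probs = rng.dirichlet(np.ones(3), size=8)
print(flag_candidate_label_errors(labels, pred_probs))
```

In the paper's pipeline, candidates flagged this way are then human-validated via crowdsourcing before the test labels are corrected.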
