Data-Efficient Contrastive Self-supervised Learning: Easy Examples Contribute the Most

02/18/2023
by   Siddharth Joshi, et al.

Self-supervised learning (SSL) learns high-quality representations from large pools of unlabeled training data. As datasets grow larger, it becomes crucial to identify the examples that contribute the most to learning such representations. This enables efficient SSL by reducing the volume of data required to learn high-quality representations. Nevertheless, quantifying the value of examples for SSL has remained an open question. In this work, we address this question for the first time by proving that the examples that contribute the most to contrastive SSL are those whose augmentations are, in expectation, most similar to the augmentations of other examples. We provide rigorous guarantees for the generalization performance of SSL on such subsets. Empirically, we discover, perhaps surprisingly, that the subsets that contribute the most to SSL are those that contribute the least to supervised learning. Through extensive experiments, we show that our subsets outperform random subsets by more than 3% on CIFAR100, CIFAR10, and STL10. Interestingly, we also find that we can safely exclude 20% of examples without harming downstream task performance.
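The abstract's selection criterion can be illustrated with a short sketch: score each example by the expected similarity between its augmented views and the augmented views of other examples, then keep the highest-scoring fraction for contrastive training. This is only a hedged, minimal Monte-Carlo approximation of that idea, not the authors' actual algorithm or guarantees; the `encoder`, `augment`, `n_views`, and `keep_frac` names are placeholders assumed for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def expected_augmentation_similarity(images, encoder, augment, n_views=4):
    """Monte-Carlo estimate of each example's expected augmentation similarity
    to the augmentations of all other examples.

    `encoder` maps a batch of images to embeddings; `augment` is an SSL-style
    augmentation pipeline (both are assumed placeholders).
    """
    # Embed several augmented views of every example and L2-normalize them.
    views = torch.stack(
        [F.normalize(encoder(augment(images)), dim=1) for _ in range(n_views)]
    )  # (n_views, N, d)

    n = views.shape[1]
    sim_sum = torch.zeros(n, device=views.device)
    for a in range(n_views):
        for b in range(n_views):
            s = views[a] @ views[b].T   # (N, N) pairwise cosine similarities
            s.fill_diagonal_(0.0)       # exclude self-pairs
            sim_sum += s.mean(dim=1)    # average similarity to other examples
    return sim_sum / (n_views * n_views)

def select_subset(images, encoder, augment, keep_frac=0.8):
    """Keep the fraction of examples with the highest expected similarity scores."""
    scores = expected_augmentation_similarity(images, encoder, augment)
    k = int(keep_frac * len(images))
    return torch.topk(scores, k).indices
```

The returned indices would then be used to subsample the unlabeled pool before running the usual contrastive SSL training loop; the `keep_frac` value here is arbitrary and does not correspond to any figure reported in the paper.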
