Are all negatives created equal in contrastive instance discrimination?

10/13/2020
by Tiffany Cai, et al.

Self-supervised learning has recently begun to rival supervised learning on computer vision tasks. Many of the recent approaches have been based on contrastive instance discrimination (CID), in which the network is trained to recognize two augmented versions of the same instance (a query and positive) while discriminating against a pool of other instances (negatives). The learned representation is then used on downstream tasks such as image classification. Using methodology from MoCo v2 (Chen et al., 2020), we divided negatives by their difficulty for a given query and studied which difficulty ranges were most important for learning useful representations. We found that a minority of negatives – the hardest 5% – were necessary and sufficient for the downstream task to reach nearly full accuracy. Conversely, the easiest 95% of negatives were unnecessary and insufficient. Moreover, the very hardest 0.1% of negatives were unnecessary and sometimes detrimental. Finally, we studied the properties of negatives that affect their hardness, and found that hard negatives were more semantically similar to the query, and that some negatives were more consistently easy or hard than we would expect by chance. Together, our results indicate that negatives vary in importance and that CID may benefit from more intelligent negative treatment.
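To make the setup concrete, below is a minimal PyTorch-style sketch of how negatives might be partitioned by difficulty inside an InfoNCE loss. This is an illustration under stated assumptions, not the paper's actual code: the function name cid_loss_with_negative_filtering and the parameters hard_frac_lo and hard_frac_hi are hypothetical, and difficulty is proxied by cosine similarity to the query (harder = more similar), as described in the abstract.

import torch
import torch.nn.functional as F

def cid_loss_with_negative_filtering(query, positive, negatives,
                                     temperature=0.07,
                                     hard_frac_lo=0.0, hard_frac_hi=0.05):
    """InfoNCE-style contrastive loss that keeps only negatives in a
    chosen difficulty band (hypothetical sketch, not the paper's code).

    query:     (B, D) L2-normalized query embeddings
    positive:  (B, D) L2-normalized positive-key embeddings
    negatives: (K, D) L2-normalized negative-key embeddings (e.g. a MoCo queue)
    """
    # Similarity of each query to its positive key: shape (B, 1).
    l_pos = torch.einsum('bd,bd->b', query, positive).unsqueeze(1)

    # Similarity of each query to every negative key: shape (B, K).
    l_neg = query @ negatives.t()

    # Rank negatives per query by similarity (descending = hardest first)
    # and keep only those in the [hard_frac_lo, hard_frac_hi) band,
    # e.g. 0.0 to 0.05 keeps the hardest 5%.
    K = negatives.shape[0]
    sorted_neg, _ = l_neg.sort(dim=1, descending=True)
    lo = int(hard_frac_lo * K)
    hi = max(lo + 1, int(hard_frac_hi * K))  # keep at least one negative
    kept_neg = sorted_neg[:, lo:hi]

    # Standard InfoNCE: positive logit in column 0, negatives after it,
    # so the correct "class" for every query is index 0.
    logits = torch.cat([l_pos, kept_neg], dim=1) / temperature
    labels = torch.zeros(logits.shape[0], dtype=torch.long,
                         device=logits.device)
    return F.cross_entropy(logits, labels)

# Example usage with random embeddings: train against only the
# hardest 5% of queue negatives for each query.
q = F.normalize(torch.randn(8, 128), dim=1)
pos = F.normalize(torch.randn(8, 128), dim=1)
queue = F.normalize(torch.randn(4096, 128), dim=1)
loss = cid_loss_with_negative_filtering(q, pos, queue, hard_frac_hi=0.05)

One design note: ranking negatives per query (rather than globally) matches the abstract's framing that difficulty is defined "for a given query", since the same negative can be hard for one query and easy for another.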
