Low-shot learning with large-scale diffusion
This paper considers the problem of inferring image labels for classes for which only a few labelled examples are available at training time. This setup is often referred to as low-shot learning in the literature, where a standard approach is to re-train the last few layers of a convolutional neural network learned on separate classes. We consider a semi-supervised setting in which we exploit a large collection of unlabelled images to support label propagation. This is made possible by leveraging recent advances in large-scale similarity graph construction. We show that, despite its conceptual simplicity, scaling label propagation up to hundreds of millions of images leads to state-of-the-art accuracy in the low-shot learning regime.
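To make the approach concrete, the following is a minimal sketch of graph-based label propagation in the spirit the abstract describes: build a k-nearest-neighbour similarity graph over image features, then diffuse the few seed labels over it. The function name, the `-1` convention for unlabelled points, and the specific update rule (the normalized diffusion iterate F ← αSF + (1−α)Y) are illustrative assumptions, not the paper's exact algorithm, and a dense NumPy graph stands in for the large-scale sparse construction the paper relies on.

```python
import numpy as np

def label_propagation(features, labels, k=3, alpha=0.75, iters=50):
    """Diffuse a few seed labels over a kNN similarity graph.

    labels: int array, -1 marks unlabelled points (illustrative convention).
    Returns a predicted class index for every point.
    """
    n = len(features)
    # Cosine similarity between L2-normalized features.
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, 0.0)
    # Sparsify: keep only each point's k most similar neighbours.
    W = np.zeros_like(sim)
    for i in range(n):
        nn = np.argsort(sim[i])[-k:]
        W[i, nn] = sim[i, nn]
    W = np.maximum(W, W.T)  # symmetrize the graph
    # Symmetrically normalized adjacency: S = D^{-1/2} W D^{-1/2}
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    Dinv = np.diag(1.0 / np.sqrt(d))
    S = Dinv @ W @ Dinv
    # One-hot seed labels; unlabelled rows start at zero.
    classes = int(labels.max()) + 1
    Y = np.zeros((n, classes))
    for i, y in enumerate(labels):
        if y >= 0:
            Y[i, y] = 1.0
    # Iterate F <- alpha * S F + (1 - alpha) * Y
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)
```

At the paper's scale, the dense similarity matrix above would be replaced by an approximate-nearest-neighbour index producing a sparse graph, with the diffusion run as sparse matrix-vector products.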