Does language help generalization in vision models?

04/16/2021
by Benjamin Devillers, et al.

Vision models trained on multimodal datasets have recently proved very effective, benefiting both from the wide availability of large image-caption datasets and from the resulting models' ability to generalize to multiple downstream tasks (e.g., zero-shot learning). One might assume that these abilities derive, at least in part, from a "semantic grounding" of the visual feature space, which learns meaningful structure by mirroring the space of linguistic representations. Contrary to this intuition, we show that a visual model (BiT-M) trained on a very large supervised image dataset (ImageNet-21k) can generalize (few-shot learning, unsupervised clustering) as well as its multimodal counterpart (CLIP). When compared to other standard visual or language models, the latent representations of BiT-M were found to be just as "linguistic" as those of CLIP. Overall, these findings suggest that the main factor driving generalization improvements in current models is the size of the training dataset, not (solely) multimodal grounding.
