Cross and Learn: Cross-Modal Self-Supervision

11/09/2018
by   Nawid Sayed, et al.

In this paper, we present a self-supervised method for representation learning that utilizes two different modalities. Based on the observation that cross-modal information carries high semantic meaning, we propose a method to effectively exploit this signal. For our approach we utilize video data, since it is available at large scale and provides easily accessible modalities in the form of RGB and optical flow. We demonstrate state-of-the-art performance on highly contested action recognition datasets in the context of self-supervised learning. We show that our feature representation also transfers to other tasks, and we conduct extensive ablation studies to validate our core contributions.
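The abstract does not spell out the training objective, but the core idea it describes, exploiting the correspondence between RGB and optical flow of the same video, is commonly realized by pulling the two modalities' features for the same clip together while pushing features of different clips apart. The sketch below illustrates this under stated assumptions; the network architecture, loss form, and all names (ModalityEncoder, cross_modal_loss) are hypothetical and not the authors' released implementation.

```python
# Illustrative sketch only: architecture, loss form, and names are assumptions,
# not the method as published.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Small CNN mapping one modality (RGB frames or stacked flow) to a unit-norm feature."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return F.normalize(self.fc(h), dim=1)


def cross_modal_loss(f_rgb, f_flow):
    """Pull RGB/flow features of the SAME clip together (cross-modal term),
    push features of DIFFERENT clips apart (diversity term)."""
    # Cosine distance between corresponding RGB and flow features.
    pos = 1.0 - (f_rgb * f_flow).sum(dim=1)
    # Penalize similarity between features of different clips within a modality.
    off_diag = ~torch.eye(len(f_rgb), dtype=torch.bool, device=f_rgb.device)
    neg = (F.relu(f_rgb @ f_rgb.t())[off_diag].mean()
           + F.relu(f_flow @ f_flow.t())[off_diag].mean())
    return pos.mean() + neg


# Usage: a batch of clips as RGB frames (3 channels) and optical flow (2 channels).
rgb_net, flow_net = ModalityEncoder(3), ModalityEncoder(2)
rgb = torch.randn(8, 3, 112, 112)
flow = torch.randn(8, 2, 112, 112)
loss = cross_modal_loss(rgb_net(rgb), flow_net(flow))
loss.backward()
```

Optical flow is cheap to compute from video and, unlike a second camera or audio track, is available for any clip, which is why the pairing of RGB and flow is a natural choice for cross-modal self-supervision at scale.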
