Functional Regularization for Representation Learning: A Unified Theoretical Perspective

08/06/2020
by Siddhant Garg, et al.

Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks. While these approaches are widely used in practice and achieve impressive empirical gains, their theoretical understanding largely lags behind. Towards bridging this gap, we present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data. We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches. Our sample complexity bounds show that, with carefully chosen hypothesis classes to exploit the structure in the data, such functional regularization can prune the hypothesis space and help reduce the labeled data needed. We then provide two concrete examples of functional regularization, one using auto-encoders and the other using masked self-supervision, and apply the framework to quantify the reduction in the sample complexity bound. We also provide complementary empirical results for the examples to support our analysis.
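To make the idea of functional regularization concrete, the following sketch combines a supervised loss on labeled data with an auto-encoder reconstruction loss on unlabeled data, in the spirit of the paper's first example. All names (`W_enc`, `W_dec`, `lam`) and the linear-map setup are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 8, 3            # input dimension, representation dimension
n_lab, n_unlab = 5, 20  # few labeled points, many unlabeled points

X_lab = rng.normal(size=(n_lab, d))
y_lab = rng.normal(size=(n_lab,))
X_unlab = rng.normal(size=(n_unlab, d))

W_enc = rng.normal(size=(d, k)) * 0.1   # encoder: input -> representation
W_dec = rng.normal(size=(k, d)) * 0.1   # decoder: representation -> input
w_pred = rng.normal(size=(k,)) * 0.1    # linear predictor on the representation

def supervised_loss(X, y):
    # squared prediction error computed from the learned representation
    z = X @ W_enc
    return np.mean((z @ w_pred - y) ** 2)

def functional_regularizer(X):
    # auto-encoder reconstruction error on unlabeled data; this term
    # constrains the encoder using unlabeled examples alone
    z = X @ W_enc
    return np.mean((z @ W_dec - X) ** 2)

lam = 0.5  # trade-off between labeled fit and the functional regularizer
total_loss = supervised_loss(X_lab, y_lab) + lam * functional_regularizer(X_unlab)
print(total_loss)
```

Minimizing a combined objective of this form restricts the encoder to representations that also reconstruct the unlabeled data, which is the mechanism by which the paper's analysis prunes the hypothesis space and lowers the labeled-sample requirement.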
