Few-Shot Learning via Learning the Representation, Provably

02/21/2020
by Simon S. Du, et al.

This paper studies few-shot learning via representation learning, where one uses T source tasks with n_1 data per task to learn a representation in order to reduce the sample complexity of a target task for which there is only n_2 (≪ n_1) data. Specifically, we focus on the setting where there exists a good common representation between source and target tasks, and our goal is to understand how much of a sample size reduction is possible. First, we study the setting where this common representation is low-dimensional and provide a fast rate of O(C(Φ)/(n_1 T) + k/n_2); here, Φ is the representation function class, C(Φ) is its complexity measure, and k is the dimension of the representation. When specialized to linear representation functions, this rate becomes O(dk/(n_1 T) + k/n_2), where d (≫ k) is the ambient input dimension; this is a substantial improvement over the O(d/n_2) rate obtained without representation learning. Second, we consider the setting where the common representation may be high-dimensional but is capacity-constrained (say, in norm); here, we again demonstrate the advantage of representation learning in both high-dimensional linear regression and neural network learning. Our results demonstrate that representation learning can fully utilize all n_1 T samples from the source tasks.
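To make the linear case concrete, the sketch below contrasts a two-stage approach (pool the n_1 T source samples to learn a k-dimensional linear representation, then fit only a k-dimensional head on the n_2 target samples) with plain regression in the ambient d-dimensional space. This is a minimal, hypothetical sketch, not the paper's estimator: the per-task-least-squares-plus-SVD step is a simple surrogate for the ERM procedure the paper analyzes, and all dimensions, sample sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper): ambient dimension
# d >> representation dimension k, T source tasks with n1 samples each,
# and only n2 target samples (n2 << n1 * T, and n2 < d).
d, k, T = 50, 3, 30
n1, n2 = 100, 15
noise = 0.1

# Ground truth: a shared linear representation B (d x k, orthonormal columns)
# and task-specific heads w_t in R^k, so each task's vector is beta_t = B w_t.
B_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
W_true = rng.standard_normal((k, T))  # diverse heads so they span R^k

# Stage 1: estimate each source task's beta_t by least squares, then take the
# top-k left singular vectors of the stacked estimates as the learned
# representation (a surrogate for the paper's ERM-based estimator).
beta_hat = np.empty((d, T))
for t in range(T):
    X = rng.standard_normal((n1, d))
    y = X @ (B_true @ W_true[:, t]) + noise * rng.standard_normal(n1)
    beta_hat[:, t] = np.linalg.lstsq(X, y, rcond=None)[0]
U, _, _ = np.linalg.svd(beta_hat, full_matrices=False)
B_hat = U[:, :k]  # learned d x k representation

# Stage 2: on the target task, regress only the k-dimensional head.
w_target = rng.standard_normal(k)
beta_target = B_true @ w_target
X2 = rng.standard_normal((n2, d))
y2 = X2 @ beta_target + noise * rng.standard_normal(n2)
w_hat = np.linalg.lstsq(X2 @ B_hat, y2, rcond=None)[0]  # k-dim regression
beta_rep = B_hat @ w_hat

# Baseline: min-norm least squares in the ambient d-dim space (n2 < d).
beta_plain = np.linalg.lstsq(X2, y2, rcond=None)[0]

print("error with learned representation:", np.linalg.norm(beta_rep - beta_target))
print("error without representation:     ", np.linalg.norm(beta_plain - beta_target))
```

Under these assumed sizes the representation-based estimate is far more accurate, mirroring the rate comparison above: the target task pays roughly k/n_2 instead of d/n_2 once the shared subspace has been learned from the pooled source data.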
