Lessons from Chasing Few-Shot Learning Benchmarks: Rethinking the Evaluation of Meta-Learning Methods

02/23/2021
by Amrith Setlur, et al.

In this work we introduce a simple baseline for meta-learning. Our unconventional method, FIX-ML, reduces task diversity by keeping support sets fixed across tasks, and consistently improves the performance of meta-learning methods on popular few-shot learning benchmarks. However, in exploring the reason for this counter-intuitive phenomenon, we unearth a series of questions and concerns about meta-learning evaluation practices. We explore two possible goals of meta-learning: to develop methods that generalize (i) to the same task distribution that generates the training set (in-distribution), or (ii) to new, unseen task distributions (out-of-distribution). Through careful analyses, we show that for each of these two goals, current few-shot learning benchmarks have potential pitfalls in 1) performing model selection and hyperparameter tuning for a given meta-learning method and 2) comparing the performance of different meta-learning methods. Our results highlight that in order to reason about progress in this space, it is necessary to provide a clearer description of the goals of meta-learning, and to develop more appropriate corresponding evaluation strategies.
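To make the FIX-ML idea concrete, below is a minimal, hypothetical sketch of an episodic sampler in which the support examples for each class are drawn once and then reused across all training tasks, in contrast to standard episodic sampling where a fresh support set is drawn per task. This is an illustrative reconstruction based only on the description in the abstract, not the authors' implementation; the function name, parameters, and toy data are assumptions.

```python
import random
from collections import defaultdict


def make_episode_sampler(dataset, n_way=5, k_shot=1, q_queries=15,
                         fix_support=False, seed=0):
    """Yield (support, query) few-shot episodes from (example, label) pairs.

    With fix_support=False this is ordinary episodic sampling; with
    fix_support=True the k_shot support examples per class are chosen once
    and reused in every episode, loosely mirroring the fixed-support-set
    idea described in the abstract (illustrative sketch only).
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)

    # Pre-select one fixed support pool per class if requested.
    fixed_support = ({c: rng.sample(xs, k_shot) for c, xs in by_class.items()}
                     if fix_support else None)

    while True:
        classes = rng.sample(sorted(by_class), n_way)
        support, query = [], []
        for c in classes:
            sup = fixed_support[c] if fix_support else rng.sample(by_class[c], k_shot)
            # Query examples come from the remaining (non-support) pool.
            remaining = [x for x in by_class[c] if x not in sup]
            support += [(x, c) for x in sup]
            query += [(x, c)
                      for x in rng.sample(remaining, min(q_queries, len(remaining)))]
        yield support, query


# Toy usage on synthetic data (10 classes, 20 examples each).
toy = [(f"img_{c}_{i}", c) for c in range(10) for i in range(20)]
sampler = make_episode_sampler(toy, n_way=5, k_shot=1, fix_support=True)
support_set, query_set = next(sampler)
```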

