Conditional Mutual Information Bound for Meta Generalization Gap

10/21/2020
by Arezou Rezazadeh et al.

Meta-learning infers an inductive bias—typically in the form of the hyperparameters of a base-learning algorithm—by observing data from a finite number of related tasks. This paper presents an information-theoretic upper bound on the average meta-generalization gap that builds on the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020), which was originally developed for conventional learning. In the context of meta-learning, the CMI framework involves a training meta-supersample obtained by first sampling 2N independent tasks from the task environment, and then drawing 2M independent training samples for each sampled task. The meta-training data fed to the meta-learner is then obtained by randomly selecting N tasks from the available 2N tasks and M training samples per task from the available 2M training samples per task. The resulting bound is explicit in two CMI terms, which measure the information that the output of the meta-learner and the output of the base-learner provide, respectively, about which tasks and which per-task training samples are selected, given the entire meta-supersample.
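The meta-supersample construction described above can be illustrated with a short NumPy sketch. This is only an illustrative sketch under stated assumptions: `task_environment`, its `sample_task` method, and the per-task `sample` method are hypothetical interfaces introduced here, and the selection step is written in the pairwise Bernoulli convention of Steinke and Zakynthinou (2020), since the abstract only states that N of the 2N tasks and M of the 2M samples per task are chosen at random.

```python
import numpy as np

def sample_meta_supersample(task_environment, N, M, rng):
    """Draw the CMI meta-supersample: 2N i.i.d. tasks from the task
    environment, each with 2M i.i.d. training samples.

    `task_environment.sample_task` and `task.sample` are hypothetical
    interfaces used only for this illustration.
    """
    tasks = [task_environment.sample_task(rng) for _ in range(2 * N)]
    supersample = np.stack(
        [np.stack([task.sample(rng) for _ in range(2 * M)]) for task in tasks]
    )  # shape: (2N, 2M, ...)
    return supersample

def select_meta_training_set(supersample, rng):
    """Select N of the 2N tasks and M of the 2M samples per selected task.

    Tasks are grouped into N pairs and samples into M pairs; an independent
    Bernoulli(1/2) bit picks one element of each pair (the pairwise
    convention of Steinke and Zakynthinou, 2020).
    """
    two_N, two_M = supersample.shape[:2]
    N, M = two_N // 2, two_M // 2
    # Task-level selection bits: one bit per task pair.
    task_bits = rng.integers(0, 2, size=N)
    task_idx = 2 * np.arange(N) + task_bits
    # Sample-level selection bits: one bit per sample pair, per selected task.
    sample_bits = rng.integers(0, 2, size=(N, M))
    sample_idx = 2 * np.arange(M)[None, :] + sample_bits
    # Broadcasting yields the (N, M, ...) meta-training set.
    meta_training_set = supersample[task_idx[:, None], sample_idx]
    return meta_training_set, task_bits, sample_bits

# Example usage with a fixed random seed:
# rng = np.random.default_rng(0)
# supersample = sample_meta_supersample(task_environment, N=5, M=10, rng=rng)
# meta_train, task_bits, sample_bits = select_meta_training_set(supersample, rng)
```

The two CMI terms of the bound can be read off this construction: one measures how much the meta-learner output reveals about the selection variables (`task_bits`, `sample_bits`) given the full meta-supersample, and the other measures the analogous quantity for the base-learner output within a task.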
