We study online meta-learning with bandit feedback, with the goal of imp...
Fine-tuning large-scale pretrained models has led to tremendous progress...
Hyperparameter tuning is critical to the success of federated learning a...
When applying differential privacy to sensitive data, a common way of ge...
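The abstract is cut off before its construction is named, but as background on applying differential privacy to a numeric statistic, here is a minimal sketch of the standard Laplace mechanism; `laplace_mechanism` and its parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release a numeric statistic with epsilon-differential privacy by
    adding Laplace noise scaled to the statistic's L1 sensitivity."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release the mean of n values clipped to [0, 1];
# replacing one record changes such a mean by at most 1/n.
data = np.clip([0.2, 0.9, 0.4, 0.7], 0.0, 1.0)
private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / len(data), epsilon=0.5)
```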
In the literature on game-theoretic equilibrium finding, focus has mainl...
An important unresolved challenge in the theory of regularization is to ...
We study online learning with bandit feedback across multiple tasks, wit...
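For context on what "bandit feedback" means here, a minimal single-task sketch of the classic EXP3 algorithm, in which only the chosen action's reward is ever observed; this is standard background rather than the paper's multi-task method, and `pull` is a hypothetical reward oracle.

```python
import numpy as np

def exp3(pull, n_arms, horizon, rng=None):
    """EXP3 (Auer et al.): exponential weights with uniform exploration for
    adversarial bandits; only the chosen arm's reward is revealed."""
    rng = rng or np.random.default_rng()
    gamma = min(1.0, np.sqrt(n_arms * np.log(n_arms) / ((np.e - 1) * horizon)))
    log_w = np.zeros(n_arms)
    total_reward = 0.0
    for t in range(horizon):
        w = np.exp(log_w - log_w.max())            # stable exponential weights
        p = (1 - gamma) * w / w.sum() + gamma / n_arms
        arm = rng.choice(n_arms, p=p)
        r = pull(t, arm)                           # reward in [0, 1], chosen arm only
        log_w[arm] += (gamma / n_arms) * r / p[arm]  # importance-weighted estimate
        total_reward += r
    return total_reward
```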
When faced with data-starved or highly complex end-tasks, it is commonpl...
While neural architecture search (NAS) has enabled automated machine lea...
A burgeoning paradigm in algorithm design is the field of algorithms wit...
Most existing neural architecture search (NAS) benchmarks and algorithms...
We analyze the meta-learning of the initialization and step-size of lear...
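The abstract is truncated, but assuming the within-task learner is gradient descent, here is a schematic sketch of meta-learning its initialization across tasks; the Reptile-style averaging update and all names are illustrative rather than the paper's algorithm, and the step size could be meta-updated analogously.

```python
import numpy as np

def within_task_sgd(grad_fn, phi, eta, steps=10):
    """Run gradient descent on one task from initialization phi with step size eta."""
    w = phi.copy()
    for _ in range(steps):
        w -= eta * grad_fn(w)
    return w

def meta_learn_init(tasks, dim, meta_lr=0.1, eta=0.05):
    """After each task, move the shared initialization toward that task's
    final iterate (a Reptile-style update)."""
    phi = np.zeros(dim)
    for grad_fn in tasks:
        phi += meta_lr * (within_task_sgd(grad_fn, phi, eta) - phi)
    return phi

# Toy tasks: quadratics ||w - c||^2 whose optima c cluster around (1, 1),
# so the learned initialization ends up near the cluster's center.
rng = np.random.default_rng(0)
tasks = [lambda w, c=rng.normal(1.0, 0.1, size=2): 2 * (w - c) for _ in range(20)]
phi = meta_learn_init(tasks, dim=2)
```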
Tuning hyperparameters is a crucial but arduous part of the machine lear...
Factorized layers – operations parameterized by products of two or more matrices...
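Since the definitional clause survives the truncation, a minimal sketch of what such a factorized layer looks like; `FactorizedLinear` is an illustrative name, and a real implementation would live in a deep-learning framework rather than NumPy.

```python
import numpy as np

class FactorizedLinear:
    """A linear map whose weight is parameterized as a product W = U @ V,
    with U of shape (out, r) and V of shape (r, in); for r < min(in, out)
    this stores fewer parameters than a dense (out, in) weight."""

    def __init__(self, in_dim, out_dim, rank, rng=None):
        rng = rng or np.random.default_rng()
        scale = 1.0 / np.sqrt(in_dim)
        self.U = rng.normal(0.0, scale, size=(out_dim, rank))
        self.V = rng.normal(0.0, scale, size=(rank, in_dim))

    def __call__(self, x):
        # Apply V then U; the full (out, in) matrix is never materialized.
        return self.U @ (self.V @ x)

layer = FactorizedLinear(in_dim=512, out_dim=512, rank=32)
y = layer(np.ones(512))  # 2 * 512 * 32 parameters instead of 512 * 512
```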
An important goal of neural architecture search (NAS) is to automate-away...
Many recent state-of-the-art methods for neural architecture search (NAS...
One popular trend in meta-learning is to learn from many training tasks ...
Federated learning (FL) is a machine learning setting where many clients...
Parameter-transfer is a well-known and versatile approach for meta-learn...
We build a theoretical framework for understanding practical meta-learni...
We study the problem of meta-learning through the lens of online convex optimization...
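For readers unfamiliar with the lens being used, a generic sketch of the online convex optimization protocol via online gradient descent, whose regret against the best fixed comparator grows as O(sqrt(T)); this is textbook background, not the paper's meta-learning algorithm.

```python
import numpy as np

def online_gradient_descent(grad_fns, dim, radius=1.0):
    """OCO protocol: at each round play w_t, suffer a convex loss, observe its
    gradient, take a step, and project back onto the L2 ball of given radius."""
    w = np.zeros(dim)
    plays = []
    for t, grad_fn in enumerate(grad_fns, start=1):
        plays.append(w.copy())
        w = w - (radius / np.sqrt(t)) * grad_fn(w)  # step size decaying as 1/sqrt(t)
        norm = np.linalg.norm(w)
        if norm > radius:
            w *= radius / norm                      # Euclidean projection
    return plays
```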
Recent empirical works have successfully used unlabeled data to learn fe...
Motivations like domain adaptation, transfer learning, and feature learn...
This work presents an unsupervised approach for improving WordNet that b...
We introduce the Self-Annotated Reddit Corpus (SARC), a large corpus for...