Projected Subnetworks Scale Adaptation

01/27/2023
by Siddhartha Datta, et al.

Large models exhibit strong zero-shot and few-shot capabilities. However, updating these models on new tasks can break performance both on previously seen tasks and on zero/few-shot performance for unseen tasks. Our work explores how to update zero/few-shot learners so that they maintain performance on previously seen and unseen tasks while also learning new tasks. By manipulating the parameter updates of a gradient-based meta-learner as projected task-specific subnetworks, we show that large models can better retain seen-task and zero/few-shot task performance in online settings.
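For intuition, here is a minimal sketch of what a subnetwork-projected parameter update might look like in PyTorch. The mask construction (top-k gradient magnitude) and the helper names (`top_k_mask`, `projected_step`) are illustrative assumptions for this sketch, not the paper's actual method.

```python
import torch
import torch.nn as nn

def top_k_mask(tensor: torch.Tensor, keep: float = 0.1) -> torch.Tensor:
    """Binary mask keeping the `keep` fraction of largest-magnitude entries."""
    k = max(1, int(keep * tensor.numel()))
    threshold = tensor.abs().flatten().kthvalue(tensor.numel() - k + 1).values
    return (tensor.abs() >= threshold).float()

def projected_step(model: nn.Module, loss: torch.Tensor,
                   masks: dict, lr: float = 1e-2) -> None:
    """One gradient step restricted to the subnetwork selected by `masks`."""
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for (name, p), g in zip(model.named_parameters(), grads):
            p -= lr * g * masks[name]  # parameters outside the mask stay frozen

model = nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randn(8, 2)

# Derive a task-specific mask from gradient magnitude -- one heuristic
# among many; the paper's mask construction may differ.
loss = nn.functional.mse_loss(model(x), y)
grads = torch.autograd.grad(loss, list(model.parameters()))
masks = {name: top_k_mask(g)
         for (name, _), g in zip(model.named_parameters(), grads)}

# Update only the task-specific subnetwork, leaving the rest of the
# parameters untouched for previously seen tasks.
loss = nn.functional.mse_loss(model(x), y)
projected_step(model, loss, masks)
```

In this sketch, restricting each task's update to a small mask is what limits interference: parameters outside the subnetwork keep whatever values earlier tasks relied on.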
