Attribution-based Task-specific Pruning for Multi-task Language Models

05/09/2022
by Nakyeong Yang, et al.

Multi-task language models show outstanding performance on various natural language understanding tasks with only a single model. However, these models inevitably use an unnecessarily large number of parameters, even when deployed for only a specific task. In this paper, we propose a novel training-free task-specific pruning method for multi-task language models. Specifically, we use an attribution method to compute the importance of each neuron for performing a specific task, and then prune the neurons that are unimportant for that task based on the computed importance. Experimental results on six widely used datasets show that our proposed pruning method significantly outperforms baseline compression methods. We also extend our method to a low-resource setting, where the amount of labeled data is insufficient.
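The abstract describes a two-step procedure: score each neuron's importance for the target task with an attribution method, then prune the low-scoring neurons without any retraining. The sketch below illustrates that pattern using a simple gradient-times-activation attribution as a stand-in for the paper's attribution method; the tiny model, layer choice, and pruning ratio are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of attribution-guided, training-free neuron pruning.
# Assumption: gradient-x-activation attribution stands in for the paper's method.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, d_in=32, d_hidden=64, n_classes=3):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.act = nn.ReLU()
        self.fc2 = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

def neuron_attribution(model, inputs, labels):
    """Score each hidden neuron by |activation * d(loss)/d(activation)|,
    averaged over a batch of task-specific examples."""
    hidden = model.act(model.fc1(inputs))
    hidden.retain_grad()                       # keep gradients for a non-leaf tensor
    loss = nn.functional.cross_entropy(model.fc2(hidden), labels)
    loss.backward()
    return (hidden * hidden.grad).abs().mean(dim=0)  # one score per hidden neuron

def prune_neurons(model, scores, prune_ratio=0.5):
    """Zero out the lowest-scoring hidden neurons: their fc1 rows and
    the matching fc2 columns. No retraining is performed."""
    k = int(prune_ratio * scores.numel())
    drop = torch.argsort(scores)[:k]           # indices of least important neurons
    with torch.no_grad():
        model.fc1.weight[drop] = 0.0
        model.fc1.bias[drop] = 0.0
        model.fc2.weight[:, drop] = 0.0
    return drop

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyClassifier()
    x = torch.randn(16, 32)                    # stand-in for task inputs
    y = torch.randint(0, 3, (16,))             # stand-in for task labels
    scores = neuron_attribution(model, x, y)
    dropped = prune_neurons(model, scores, prune_ratio=0.5)
    print(f"pruned {dropped.numel()} of {scores.numel()} hidden neurons")
```

Because importance is computed from task-specific examples only, the same multi-task model could in principle yield a different pruning mask per task, which is the task-specific aspect the abstract emphasizes.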
