Hyperdecoders: Instance-specific decoders for multi-task NLP

03/15/2022
by Hamish Ivison, et al.

We investigate input-conditioned hypernetworks for multi-tasking in NLP, generating parameter-efficient adaptations for a decoder using a hypernetwork conditioned on the output of an encoder. This approach produces a unique decoder for every input instance, allowing the network a larger degree of flexibility than prior work that specializes the decoder for each task. We apply our method to sequence classification tasks, extractive QA, and summarisation and find that it often outperforms fully fine-tuning the underlying model and surpasses previous parameter-efficient fine-tuning methods. Gains are particularly large when evaluated out-of-domain on the MRQA benchmark. In addition, as the pretrained model is frozen, our method eliminates negative interference among unrelated tasks, a common failure mode in fully fine-tuned approaches. An analysis of the embeddings produced by our model suggests that a large benefit of our approach is that it gives the encoder more effective control over the decoder, enabling a mapping from hidden representations to a final text-based label without interference from other tasks' output formats or labels.
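To make the idea concrete, below is a minimal PyTorch sketch of an instance-specific adapter generated by a hypernetwork conditioned on the encoder output. The class name `HyperAdapter`, the mean-pooling of encoder states, the adapter placement, and all dimensions are illustrative assumptions for exposition, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    """Hypernetwork that maps a pooled encoder representation to the weights
    of a small bottleneck adapter applied to one decoder layer (sketch)."""
    def __init__(self, d_model=512, d_bottleneck=32, d_embed=128):
        super().__init__()
        self.d_model, self.d_bottleneck = d_model, d_bottleneck
        # Compress the (pooled) encoder output to a compact embedding,
        # then generate flattened adapter weights from that embedding.
        self.embed = nn.Sequential(nn.Linear(d_model, d_embed), nn.ReLU())
        self.to_down = nn.Linear(d_embed, d_model * d_bottleneck)
        self.to_up = nn.Linear(d_embed, d_bottleneck * d_model)

    def forward(self, encoder_hidden, decoder_hidden):
        # encoder_hidden: (batch, src_len, d_model)
        # decoder_hidden: (batch, tgt_len, d_model)
        pooled = encoder_hidden.mean(dim=1)                # (batch, d_model)
        z = self.embed(pooled)                             # (batch, d_embed)
        w_down = self.to_down(z).view(-1, self.d_model, self.d_bottleneck)
        w_up = self.to_up(z).view(-1, self.d_bottleneck, self.d_model)
        # Instance-specific bottleneck adapter with a residual connection.
        h = torch.relu(torch.bmm(decoder_hidden, w_down))  # (batch, tgt_len, d_bottleneck)
        return decoder_hidden + torch.bmm(h, w_up)         # (batch, tgt_len, d_model)

# Usage sketch: the pretrained encoder-decoder stays frozen; only the
# hypernetwork (and thus the generated adapters) is trained.
hyper = HyperAdapter()
enc = torch.randn(4, 20, 512)   # encoder outputs for a batch of 4 inputs
dec = torch.randn(4, 10, 512)   # decoder hidden states at some layer
adapted = hyper(enc, dec)       # (4, 10, 512)
```

Because the adapter weights are a function of each input's encoder output rather than a fixed per-task module, every instance effectively receives its own decoder adaptation, which is the source of the flexibility described above.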
