HUBERT Untangles BERT to Improve Transfer across NLP Tasks

10/25/2019
by Mehrad Moradshahi, et al.

We introduce HUBERT, which combines the structured-representational power of Tensor-Product Representations (TPRs) with BERT, a pre-trained bidirectional Transformer language model. We show that there is shared structure across different NLP datasets that HUBERT, but not BERT, is able to learn and leverage. We validate the effectiveness of our model on the GLUE benchmark and the HANS dataset. Our experimental results show that untangling data-specific semantics from general language structure is key to better transfer among NLP tasks.
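For readers unfamiliar with TPRs: the core idea is to represent each token as the binding (outer product) of a "filler" vector (content) and a "role" vector (structural position), so that content and structure can be disentangled. The sketch below shows one way a TPR-style binding layer could sit on top of BERT's hidden states; the class name, dimensions, soft-attention selection, and sum-pooling here are illustrative assumptions, not the authors' exact HUBERT architecture.

```python
# Minimal sketch (assumed design, not the paper's exact model):
# each token softly selects a filler and a role, which are bound by an
# outer product; the sentence vector is the sum of per-token bindings.
import torch
import torch.nn as nn
from transformers import BertModel

class TPRLayer(nn.Module):
    def __init__(self, hidden=768, n_fillers=32, n_roles=32,
                 filler_dim=32, role_dim=32):
        super().__init__()
        self.filler_emb = nn.Parameter(torch.randn(n_fillers, filler_dim))
        self.role_emb = nn.Parameter(torch.randn(n_roles, role_dim))
        self.filler_attn = nn.Linear(hidden, n_fillers)
        self.role_attn = nn.Linear(hidden, n_roles)

    def forward(self, hidden_states, attention_mask):
        # Soft selection of a filler and a role from each token's BERT state.
        f = torch.softmax(self.filler_attn(hidden_states), -1) @ self.filler_emb  # (B, T, Df)
        r = torch.softmax(self.role_attn(hidden_states), -1) @ self.role_emb      # (B, T, Dr)
        # Outer-product binding: one filler-role tensor per token.
        binding = f.unsqueeze(-1) * r.unsqueeze(-2)                               # (B, T, Df, Dr)
        binding = binding * attention_mask[..., None, None]                       # zero out padding
        return binding.sum(dim=1).flatten(1)                                      # (B, Df*Dr)

bert = BertModel.from_pretrained("bert-base-uncased")
tpr = TPRLayer()
classifier = nn.Linear(32 * 32, 2)  # e.g. a 2-way GLUE task head

def predict(input_ids, attention_mask):
    hidden = bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
    return classifier(tpr(hidden, attention_mask.float()))
```

In this sketch only the role/filler codebooks and the task head are task-specific, which is one way the "general structure vs. data-specific semantics" separation described in the abstract could be realized in code.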
