Text-to-Text Multi-view Learning for Passage Re-ranking

04/29/2021
by   Jia-Huei Ju, et al.

Recently, much progress in natural language processing has been driven by deep contextualized representations pretrained on large corpora. Typically, fine-tuning these pretrained models for a specific downstream task relies on single-view learning, which is inadequate because a sentence can be interpreted differently from different perspectives. In this work, we therefore propose a text-to-text multi-view learning framework that incorporates an additional view, the text generation view, into a typical single-view passage ranking model. Empirically, the proposed approach improves ranking performance over its single-view counterpart. Ablation studies are also reported in the paper.
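To make the idea concrete, below is a minimal, illustrative sketch of how a text-to-text model might be fine-tuned with two views: a relevance-classification view (the usual passage ranking objective, rendered as generating "true"/"false") and a text generation view (reconstructing the query from the passage). The prompt templates, the loss weight `alpha`, the `t5-base` checkpoint, and the per-example training loop are assumptions made for illustration, not the authors' exact setup.

```python
# Hedged sketch: multi-view (ranking + generation) fine-tuning of a
# text-to-text model for passage re-ranking. Templates and weights are
# illustrative assumptions, not the paper's configuration.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"  # assumed checkpoint
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
alpha = 0.5  # assumed weight balancing the two views

def seq2seq_loss(source_text, target_text):
    """Teacher-forced cross-entropy loss for one (source, target) pair."""
    inputs = tokenizer(source_text, return_tensors="pt", truncation=True)
    labels = tokenizer(target_text, return_tensors="pt", truncation=True).input_ids
    return model(**inputs, labels=labels).loss

def multiview_loss(query, passage, relevant):
    # View 1: relevance classification expressed as text generation.
    ranking_src = f"Query: {query} Document: {passage} Relevant:"
    ranking_tgt = "true" if relevant else "false"
    loss_rank = seq2seq_loss(ranking_src, ranking_tgt)

    # View 2: text generation view -- regenerate the query from the passage
    # (applied only to relevant pairs in this sketch).
    loss_gen = torch.tensor(0.0)
    if relevant:
        gen_src = f"Document: {passage} Generate a query:"
        loss_gen = seq2seq_loss(gen_src, query)

    return loss_rank + alpha * loss_gen

# One toy training step.
loss = multiview_loss(
    query="what is multi-view learning",
    passage="Multi-view learning trains a model on several complementary "
            "representations or tasks derived from the same data.",
    relevant=True,
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

At inference time, only the ranking view would be used: the model scores each passage by the probability it assigns to the "true" token, and passages are re-ranked by that score.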
