Intent-calibrated Self-training for Answer Selection in Open-domain Dialogues

07/13/2023
by Wentao Deng, et al.

Answer selection in open-domain dialogues aims to select an accurate answer from candidates. The recent success of answer selection models hinges on training with large amounts of labeled data. However, collecting large-scale labeled data is labor-intensive and time-consuming. In this paper, we introduce predicted intent labels to calibrate answer labels in a self-training paradigm. Specifically, we propose intent-calibrated self-training (ICAST) to improve the quality of pseudo answer labels through an intent-calibrated answer selection paradigm, in which we employ pseudo intent labels to help improve pseudo answer labels. We carry out extensive experiments on two benchmark datasets with open-domain dialogues. The experimental results show that ICAST outperforms baselines consistently with 1%, 5%, and 10% labeled data. Specifically, it improves by 2.06% over the strongest baseline with only 5% labeled data.
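To make the idea concrete, below is a minimal, hypothetical sketch of a self-training loop in which intent confidences calibrate answer confidences before pseudo-labeled examples are kept. This is not the authors' implementation: the model calls are stubbed with toy scorers, and all names (score_answers, score_intents, calibrate, alpha, threshold) are illustrative assumptions.

```python
"""Hypothetical sketch of intent-calibrated self-training (not the ICAST code).

A teacher answer-selection model and an intent classifier each score the
candidate answers of an unlabeled dialogue; the two confidences are combined,
and only confidently pseudo-labeled examples are kept for the next round.
"""

import random
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Dialogue:
    context: str
    candidates: List[str]  # candidate answers for this dialogue context


def score_answers(dialogue: Dialogue) -> List[float]:
    """Stub for a trained answer-selection model (teacher); one score per candidate."""
    return [random.random() for _ in dialogue.candidates]


def score_intents(dialogue: Dialogue) -> List[float]:
    """Stub for an intent classifier; one intent-consistency score per candidate."""
    return [random.random() for _ in dialogue.candidates]


def calibrate(answer_scores: List[float], intent_scores: List[float],
              alpha: float = 0.5) -> List[float]:
    """Combine answer and intent confidences (assumed linear interpolation)."""
    return [(1 - alpha) * a + alpha * i
            for a, i in zip(answer_scores, intent_scores)]


def self_training_round(unlabeled: List[Dialogue],
                        threshold: float = 0.8) -> List[Tuple[Dialogue, int]]:
    """Return (dialogue, pseudo answer index) pairs whose calibrated score is confident."""
    pseudo_labeled = []
    for dlg in unlabeled:
        scores = calibrate(score_answers(dlg), score_intents(dlg))
        best = max(range(len(scores)), key=scores.__getitem__)
        if scores[best] >= threshold:  # keep only high-confidence pseudo labels
            pseudo_labeled.append((dlg, best))
    return pseudo_labeled


if __name__ == "__main__":
    random.seed(0)
    pool = [Dialogue("How do I reset my router?",
                     ["Hold the reset button for 10 seconds.",
                      "I like pizza."])]
    print(self_training_round(pool, threshold=0.5))
```

In a full self-training setup, the pseudo-labeled pairs returned here would be merged with the labeled set to retrain the student model, and the loop would repeat for several rounds.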
