Alleviating Exposure Bias via Contrastive Learning for Abstractive Text Summarization

08/26/2021
by Shichao Sun, et al.

Encoder-decoder models have achieved remarkable success in abstractive text summarization, which aims to compress one or more documents into a shorter version without losing the essential content. Unfortunately, these models mostly suffer from a discrepancy between training and inference, i.e., the exposure bias problem. During training, with teacher forcing these models are optimized to maximize the likelihood of the gold summary given the gold summary tokens as input to the decoder, while at inference the given tokens are replaced by the generated tokens. Consequently, low-quality summaries are very likely to be generated. To remedy this problem, we propose to leverage contrastive learning to decrease the likelihood of these low-quality summaries and meanwhile increase the likelihood of the gold summary. Since our solution expands the states that the model perceives during training, we expect the exposure bias problem to be alleviated. We experimentally demonstrate that our method effectively improves the performance of the state-of-the-art model on different datasets.
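As a rough illustration of the idea described above, the sketch below shows one common way to contrast a gold summary against model-generated negatives: score each candidate with its (length-normalized) sequence log-likelihood and apply an InfoNCE-style loss that raises the gold score while lowering the negatives'. This is a minimal, hypothetical sketch assuming PyTorch; the function name `contrastive_summary_loss`, the temperature parameter, and the exact loss form are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_summary_loss(gold_logprob, neg_logprobs, temperature=1.0):
    """Hypothetical sequence-level contrastive loss (sketch).

    gold_logprob: scalar tensor, length-normalized log-likelihood of the
                  gold summary under the model.
    neg_logprobs: 1-D tensor, log-likelihoods of model-generated
                  (low-quality) summaries used as negatives.

    The gold summary is placed at index 0; cross-entropy against that index
    increases the gold likelihood relative to the generated negatives.
    """
    scores = torch.cat([gold_logprob.unsqueeze(0), neg_logprobs]) / temperature
    target = torch.zeros(1, dtype=torch.long)  # gold is the "positive" class
    return F.cross_entropy(scores.unsqueeze(0), target)


# Toy usage with made-up scores (higher = more likely under the model).
gold = torch.tensor(-1.2)
negs = torch.tensor([-1.5, -2.0, -1.8])
print(contrastive_summary_loss(gold, negs).item())
```

In practice such a term would be added to the standard teacher-forcing loss, so the model is still trained on gold targets while also being exposed to, and penalized for, its own low-quality generations.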
