Unsupervised Injection of Knowledge into Dialogue Generation via Language Models

04/30/2020
by Yi-Lin Tuan, et al.

Neural conversation models have shown the power to produce more meaningful and engaging responses when given external knowledge. Specifically, the knowledge we experiment with is in textual form, for example a personality description. Despite the success of training and testing with external knowledge, in reality we do not always have sufficient background knowledge about the discussed topic. It is therefore also crucial for the models to generate captivating responses without external knowledge. To achieve this, we propose a unified training method, Decoupling, which induces a knowledge-related sentence and couples it with the dialogue history to generate a response in an unsupervised fashion. Its effect is further analyzed by testing the models with no knowledge, partial knowledge, and the full text of the knowledge. Empirically, we observe that performance varies significantly with the amount of available knowledge, and that our method performs closer to the supervised method (the upper bound) than the baselines do.
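The abstract describes a two-stage idea: first induce a knowledge-like sentence from the dialogue context, then condition response generation on both that sentence and the dialogue history. The sketch below is only an illustration of that general idea, not the authors' implementation; the choice of a generic GPT-2 model from Hugging Face `transformers`, the prompts, and the decoding settings are all our own assumptions.

```python
# Illustrative sketch only, not the paper's code. Assumes a generic GPT-2
# language model from Hugging Face `transformers` as the underlying LM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def generate(prompt: str, max_new_tokens: int = 40) -> str:
    """Sample a continuation of `prompt` from the language model."""
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(
            input_ids,
            max_length=input_ids.shape[1] + max_new_tokens,
            do_sample=True,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Return only the newly generated tokens, without the prompt.
    return tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)

dialogue_history = "A: What do you do on weekends?\nB:"

# Stage 1 (assumed): induce a knowledge-like sentence from the dialogue
# history alone, standing in for external knowledge that is unavailable.
induced_knowledge = generate("Background about the speaker: " + dialogue_history)

# Stage 2 (assumed): couple the induced sentence with the dialogue history
# and generate the response conditioned on both.
response = generate(induced_knowledge + "\n" + dialogue_history)
print(response)
```

In this reading, the induced sentence plays the role that the external knowledge text (e.g., a personality description) would play at training time, so the same response generator can be used whether or not real knowledge is available.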