Global-Locally Self-Attentive Dialogue State Tracker

05/19/2018
by   Victor Zhong, et al.

Dialogue state tracking, which estimates user goals and requests given the dialogue context, is an essential part of task-oriented dialogue systems. In this paper, we propose the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules. Our model uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. We show that this significantly improves tracking of rare states and achieves state-of-the-art performance on the WoZ and DSTC2 state tracking tasks. GLAD obtains 88.1% joint goal accuracy and 97.5% request accuracy on the WoZ state tracking task, outperforming prior work by 3.7% and 5.5%. On DSTC2, it obtains 74.5% joint goal accuracy and 97.5% request accuracy, outperforming prior work by 1.1% and 1.0%.
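To make the global-local idea concrete, the following is a minimal PyTorch-style sketch of such an encoder: one global recurrent encoder whose parameters are shared across all slots, one small local encoder per slot, and a learned per-slot weight that mixes the two representations. The class name, dimensions, example slots, and mixing formulation are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class GlobalLocalEncoder(nn.Module):
    """Sketch of a global-local module: a global encoder shared by every
    slot's estimator plus a slot-specific local encoder, combined with a
    learned per-slot mixing weight (names and sizes are illustrative)."""

    def __init__(self, d_in, d_hid, slots):
        super().__init__()
        # Global encoder: parameters shared between all slots.
        self.global_rnn = nn.LSTM(d_in, d_hid, bidirectional=True, batch_first=True)
        # Local encoders: one per slot, learning slot-specific features.
        self.local_rnns = nn.ModuleDict({
            s: nn.LSTM(d_in, d_hid, bidirectional=True, batch_first=True) for s in slots
        })
        # Per-slot scalar controlling the global/local mixture.
        self.beta = nn.ParameterDict({s: nn.Parameter(torch.zeros(1)) for s in slots})

    def forward(self, x, slot):
        # x: (batch, seq_len, d_in) word embeddings of an utterance or system action.
        h_global, _ = self.global_rnn(x)
        h_local, _ = self.local_rnns[slot](x)
        beta = torch.sigmoid(self.beta[slot])
        # Convex combination of slot-specific and shared representations.
        return beta * h_local + (1 - beta) * h_global


# Tiny usage example with made-up dimensions and slots.
if __name__ == "__main__":
    enc = GlobalLocalEncoder(d_in=50, d_hid=64, slots=["food", "area", "price_range"])
    utterance = torch.randn(2, 12, 50)  # batch of 2 utterances, 12 tokens each
    out = enc(utterance, slot="food")
    print(out.shape)  # (2, 12, 128): bidirectional hidden states
```

Sharing the global encoder lets rarely observed slots borrow statistical strength from frequent ones, while the local encoders and mixing weights preserve slot-specific behavior.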
