Efficient Dialogue State Tracking by Masked Hierarchical Transformer

06/28/2021
by Min Mao, et al.

This paper describes our approach to DSTC 9 Track 2: Cross-lingual Multi-domain Dialog State Tracking. The goal of the task is to build a cross-lingual dialog state tracker that is trained on a rich-resource language and evaluated on a low-resource language. We formulate a method for jointly learning a slot operation classification task and a state tracking task. Furthermore, we design a novel mask mechanism for fusing contextual information about the dialogue. The proposed model achieves strong performance in DSTC 9 Track 2 on both the MultiWOZ (en→zh) and CrossWOZ (zh→en) datasets, including a joint accuracy of 62.37% on MultiWOZ (en→zh).
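Since the abstract gives no implementation details, the PyTorch sketch below only illustrates one plausible reading of the approach: a hierarchical encoder (token-level then turn-level) in which a causal attention mask over dialogue turns fuses context, with two output heads for the joint slot-operation classification and state tracking objectives. Every name, layer choice, and hyperparameter here is an assumption for illustration, not the authors' code.

```python
# Hypothetical sketch of a masked hierarchical encoder for dialogue state tracking.
# Assumed names/sizes throughout; not the paper's implementation.
import torch
import torch.nn as nn

class MaskedHierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size=30000, d_model=256, n_heads=4, n_ops=4, n_values=100):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Token-level encoder: encodes the tokens of each turn independently.
        self.token_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        # Turn-level encoder: fuses per-turn vectors under an attention mask.
        self.turn_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.op_head = nn.Linear(d_model, n_ops)        # slot operation classification
        self.value_head = nn.Linear(d_model, n_values)  # state (slot value) prediction

    def forward(self, turn_token_ids):
        # turn_token_ids: (batch, num_turns, turn_len)
        b, t, l = turn_token_ids.shape
        tok = self.token_emb(turn_token_ids).view(b * t, l, -1)
        tok = self.token_encoder(tok)
        turn_vecs = tok.mean(dim=1).view(b, t, -1)      # one vector per turn
        # Mask mechanism (assumed causal here): each turn attends only to itself
        # and earlier turns, so fused context never leaks future dialogue turns.
        causal_mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        ctx = self.turn_encoder(turn_vecs, mask=causal_mask)
        return self.op_head(ctx), self.value_head(ctx)

# Toy usage: 2 dialogues, 5 turns, 12 tokens per turn.
model = MaskedHierarchicalEncoder()
ids = torch.randint(0, 30000, (2, 5, 12))
op_logits, value_logits = model(ids)
print(op_logits.shape, value_logits.shape)  # torch.Size([2, 5, 4]) torch.Size([2, 5, 100])
```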
