Error Detection in Large-Scale Natural Language Understanding Systems Using Transformer Models

09/04/2021
by   Rakesh Chada, et al.

Large-scale conversational assistants like Alexa, Siri, Cortana and Google Assistant process every utterance using multiple models for domain, intent and named entity recognition. Given the decoupled nature of model development and the large traffic volumes, it is extremely difficult to identify utterances processed erroneously by such systems. We address the challenge of detecting domain classification errors using offline Transformer models. We combine utterance encodings from a RoBERTa model with the N-best hypotheses produced by the production system, then fine-tune end-to-end in a multitask setting using a small dataset of human-annotated utterances with domain classification errors. We tested our approach on detecting misclassifications from one domain that accounts for <0.5% of the system's traffic. Our approach achieves an F1 score of 30%, outperforming the baseline by 16.9%, and improves further by 2.2%.
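The fusion step the abstract describes — combining an utterance encoding with the production system's N-best hypotheses to score likely misclassifications — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the encoder placeholder, feature choices, dimensions, and function names are all assumptions.

```python
import numpy as np

EMB_DIM = 8   # stand-in for a RoBERTa [CLS] embedding size (assumption)
NBEST = 3     # hypotheses kept from the production system (assumption)

def encode_utterance(utterance: str) -> np.ndarray:
    """Placeholder for a RoBERTa encoder: returns a fixed-size vector.

    A deterministic toy encoding derived from byte values, standing in
    for the real contextual embedding.
    """
    h = np.zeros(EMB_DIM)
    for i, ch in enumerate(utterance.encode()):
        h[i % EMB_DIM] += ch / 255.0
    return h / max(len(utterance), 1)

def nbest_features(scores: list) -> np.ndarray:
    """Features from the N-best list: the scores plus the top-1/top-2
    margin, a common signal of classifier uncertainty."""
    s = np.asarray(scores[:NBEST], dtype=float)
    margin = s[0] - s[1] if len(s) > 1 else s[0]
    return np.concatenate([s, [margin]])

def error_probability(utterance, scores, W, b):
    """Linear head over the concatenated utterance + N-best features,
    squashed to a misclassification probability via a sigmoid."""
    x = np.concatenate([encode_utterance(utterance), nbest_features(scores)])
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Toy weights; in the setting described above these would be learned
# end-to-end together with the encoder on the annotated error data.
rng = np.random.default_rng(0)
W = rng.normal(size=EMB_DIM + NBEST + 1)
b = 0.0

p = error_probability("play my morning playlist", [0.9, 0.05, 0.05], W, b)
print(p)
```

In a real system the linear head would be a small classification layer fine-tuned jointly with the encoder, and the N-best features would come directly from the production domain classifier's hypothesis list.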
