SpeechBERT: Cross-Modal Pre-trained Language Model for End-to-end Spoken Question Answering

10/25/2019
by Yung-Sung Chuang, et al.

While end-to-end models for spoken language understanding tasks have been explored recently, there is still no end-to-end model for spoken question answering (SQA): existing SQA systems cascade an ASR module with a text question answering model, and are therefore catastrophically affected by speech recognition errors. Meanwhile, pre-trained language models such as BERT have been highly successful in text question answering. To bring this advantage of pre-trained language models into spoken question answering, we propose SpeechBERT, a cross-modal transformer-based pre-trained language model. As the first exploration of end-to-end SQA models, our results matched the performance of conventional approaches fed with ASR output text and fell only slightly behind those built on pre-trained language models, showing the potential of end-to-end SQA.
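To make the cross-modal idea concrete, below is a minimal sketch of how acoustic features of a spoken passage and the token embeddings of a text question might share a single transformer encoder with a span-prediction head. It is only an illustration of the general technique, not the authors' architecture: the 39-dimensional feature size, the linear audio projection, the encoder depth, and the span head are all assumptions made for the example.

```python
# Minimal sketch of a cross-modal (audio + text) QA model.
# All dimensions and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalQASketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, layers=4, heads=8):
        super().__init__()
        # Text side: ordinary token embeddings, as in BERT.
        self.token_emb = nn.Embedding(vocab_size, hidden)
        # Audio side: project pre-extracted acoustic features (assumed here
        # to be 39-dim vectors per frame/segment) into the same hidden space,
        # so both modalities can be processed by one shared encoder.
        self.audio_proj = nn.Linear(39, hidden)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        # QA head: score each audio position as answer-span start/end.
        self.span_head = nn.Linear(hidden, 2)

    def forward(self, question_ids, audio_feats):
        # question_ids: (batch, q_len) token ids of the text question
        # audio_feats:  (batch, a_len, 39) acoustic features of the passage
        q = self.token_emb(question_ids)
        a = self.audio_proj(audio_feats)
        x = torch.cat([q, a], dim=1)            # joint cross-modal sequence
        h = self.encoder(x)
        # Only the audio positions are candidate answer boundaries.
        logits = self.span_head(h[:, q.size(1):, :])
        start_logits, end_logits = logits.unbind(dim=-1)
        return start_logits, end_logits

# Usage with random tensors, just to show the expected shapes.
model = CrossModalQASketch()
question = torch.randint(0, 30522, (2, 12))     # 2 questions, 12 tokens each
audio = torch.randn(2, 200, 39)                 # 2 passages, 200 frames each
start, end = model(question, audio)
print(start.shape, end.shape)                   # torch.Size([2, 200]) twice
```

The point of the sketch is that once audio segments are mapped into the same embedding space as text tokens, a standard transformer and QA head can locate the answer span directly in the audio, with no intermediate ASR transcript.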
