Cross-Attention End-to-End ASR for Two-Party Conversations

07/24/2019
by Suyoun Kim, et al.

We present an end-to-end speech recognition model that learns the interaction between two speakers based on turn-changing information. Unlike conventional speech recognition models, our model exploits both speakers' history of conversational-context information spanning multiple turns within an end-to-end framework. Specifically, we propose a speaker-specific cross-attention mechanism that can look at the output of the other speaker as well as that of the current speaker, enabling better recognition of long conversations. We evaluate the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models.
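The abstract describes a cross-attention mechanism in which the current speaker's decoder attends over conversational context from both speakers. The paper's exact formulation is not given here, so the following is only a minimal sketch of that idea using scaled dot-product attention over the concatenated context of both sides; all function and variable names (`cross_attention`, `keys_self`, `keys_other`, etc.) are hypothetical, not from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys_self, keys_other, values_self, values_other):
    """Sketch of speaker-specific cross-attention (hypothetical names).

    The query (current decoder state) attends jointly over context
    vectors from the current speaker and from the other speaker.
    """
    # Pool both speakers' turn-history context into one memory.
    keys = np.concatenate([keys_self, keys_other], axis=0)      # (Ts+To, d)
    values = np.concatenate([values_self, values_other], axis=0)

    # Scaled dot-product attention scores over the joint memory.
    scores = query @ keys.T / np.sqrt(query.shape[-1])          # (1, Ts+To)
    weights = softmax(scores, axis=-1)

    # Weighted summary of both speakers' context.
    return weights @ values, weights

rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal((1, d))
ctx_self = rng.standard_normal((5, d))   # 5 context vectors, current speaker
ctx_other = rng.standard_normal((3, d))  # 3 context vectors, other speaker
summary, attn = cross_attention(query, ctx_self, ctx_other, ctx_self, ctx_other)
```

In a full model the summary vector would be fed back into the decoder at each turn, letting recognition of the current utterance condition on what either party said earlier in the conversation.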
