The JHU Multi-Microphone Multi-Speaker ASR System for the CHiME-6 Challenge

06/14/2020
by Ashish Arora, et al.

This paper summarizes the JHU team's efforts in tracks 1 and 2 of the CHiME-6 challenge for distant multi-microphone conversational speech diarization and recognition in everyday home environments. We explore multi-array processing techniques at each stage of the pipeline, such as multi-array guided source separation (GSS) for enhancement and acoustic model training data, posterior fusion for speech activity detection, PLDA score fusion for diarization, and lattice combination for automatic speech recognition (ASR). We also report results with different acoustic model architectures, and integrate other techniques such as online multi-channel weighted prediction error (WPE) dereverberation and variational Bayes-hidden Markov model (VB-HMM) based overlap assignment to deal with reverberation and overlapping speakers, respectively. As a result of these efforts, our ASR systems achieve a word error rate of 40.5% on the evaluation set. This is an improvement of 10.8% over the challenge baselines for the respective tracks.
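To illustrate one of the simpler multi-array fusion steps mentioned above, the sketch below averages per-frame speech activity posteriors from several arrays and thresholds the result into a single speech/non-speech decision. The function name `fuse_sad_posteriors`, the unweighted average, and the 0.5 threshold are illustrative assumptions for this sketch, not the exact recipe used in the paper.

```python
import numpy as np

def fuse_sad_posteriors(per_array_posteriors, threshold=0.5):
    """Fuse per-frame speech posteriors from multiple microphone arrays.

    per_array_posteriors: list of 1-D arrays, one per array, each holding
    P(speech | frame) for the same recording (equal length).
    Returns a boolean speech/non-speech decision per frame.
    """
    stacked = np.stack(per_array_posteriors, axis=0)  # (num_arrays, num_frames)
    fused = stacked.mean(axis=0)                      # simple unweighted average
    return fused >= threshold

# Toy example: three arrays, five frames.
posteriors = [
    np.array([0.9, 0.8, 0.2, 0.1, 0.7]),
    np.array([0.8, 0.6, 0.3, 0.2, 0.9]),
    np.array([0.7, 0.9, 0.1, 0.4, 0.6]),
]
print(fuse_sad_posteriors(posteriors))  # [ True  True False False  True]
```

The same averaging idea carries over to the diarization stage, where per-array PLDA similarity scores could be averaged before clustering; a weighted combination would be a natural refinement if some arrays are known to be more reliable.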
