Don't Stop Self-Supervision: Accent Adaptation of Speech Representations via Residual Adapters

07/02/2023
by   Anshu Bhatia, et al.

Speech representations learned in a self-supervised fashion from massive unlabeled speech corpora have been adapted successfully toward several downstream tasks. However, such representations may be skewed toward canonical data characteristics of such corpora and perform poorly on atypical, non-native accented speaker populations. With the state-of-the-art HuBERT model as a baseline, we propose and investigate self-supervised adaptation of speech representations to such populations in a parameter-efficient way via training accent-specific residual adapters. We experiment with 4 accents and choose automatic speech recognition (ASR) as the downstream task of interest. We obtain strong word error rate reductions (WERR) over HuBERT-large for all 4 accents, with a mean WERR of 22.7% with accent-specific adapters and a mean WERR of 25.1% when we additionally fine-tune the feature encoder. While we experiment with HuBERT and ASR as the downstream task, our proposed approach is both model- and task-agnostic.
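The abstract describes accent-specific residual adapters: small bottleneck modules inserted into a frozen backbone so that only a tiny fraction of parameters is trained per accent. The sketch below illustrates the general residual-adapter pattern (down-project, nonlinearity, up-project, residual add) in plain NumPy; the dimensions, initialization, and placement are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

class ResidualAdapter:
    """Minimal residual-adapter sketch (hypothetical shapes; the paper's
    exact architecture is not specified in this abstract). Only these
    small projection matrices would be trained per accent, while the
    backbone (e.g. HuBERT) stays frozen."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Down-projection to a small bottleneck; up-projection back.
        self.w_down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
        # Zero-initialized up-projection: the adapter starts as an
        # identity mapping, a common choice for stable adapter training.
        self.w_up = np.zeros((bottleneck_dim, hidden_dim))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # x: (time, hidden_dim) frame-level speech representations.
        h = np.maximum(x @ self.w_down, 0.0)  # down-project + ReLU
        return x + h @ self.w_up              # residual connection

# Usage: adapt 1024-dim frames (HuBERT-large's hidden size) through a
# 64-dim bottleneck -- both numbers are illustrative assumptions.
adapter = ResidualAdapter(hidden_dim=1024, bottleneck_dim=64)
frames = np.random.default_rng(1).normal(size=(50, 1024))
out = adapter(frames)
assert out.shape == frames.shape
# With the zero-initialized up-projection, the adapter is exactly the
# identity before any training step.
assert np.allclose(out, frames)
```

With a 1024-to-64 bottleneck, the adapter adds roughly 131k parameters per accent, orders of magnitude fewer than fine-tuning the full backbone, which is the parameter-efficiency argument the abstract makes.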
