Applying wav2vec2.0 to Speech Recognition in various low-resource languages

12/22/2020
by   Cheng Yi, et al.

Many domains have their own widely used feature extractors, such as ResNet, BERT, and GPT-x. These models are pre-trained on large amounts of unlabelled data by self-supervision and can be effectively applied to downstream tasks. In the speech domain, wav2vec2.0 has begun to show its powerful representation ability and the feasibility of ultra-low-resource speech recognition on the Librispeech corpus. However, the model has not been tested on real spoken scenarios or on languages other than English. To verify its universality across languages, we apply the released pre-trained models to low-resource speech recognition tasks in various spoken languages. We achieve relative improvements of more than 20% in six languages compared with previous work. Among these languages, English improves by up to 52.4%. Moreover, using coarse-grained modeling units, such as subwords and characters, achieves better results than letter-based units.
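To make the workflow concrete, the following is a minimal sketch of applying a released pre-trained wav2vec2.0 model to transcribe an utterance with greedy CTC decoding, using the Hugging Face transformers and torchaudio libraries. It is not the paper's training pipeline (the authors fine-tune released checkpoints on low-resource languages with their own modeling units); the checkpoint name and audio path below are illustrative placeholders.

    # Sketch: decode one utterance with a released wav2vec2.0 CTC checkpoint.
    import torch
    import torchaudio
    from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

    model_name = "facebook/wav2vec2-base-960h"  # example checkpoint fine-tuned on Librispeech
    processor = Wav2Vec2Processor.from_pretrained(model_name)
    model = Wav2Vec2ForCTC.from_pretrained(model_name)
    model.eval()

    # Load a mono waveform and resample to 16 kHz (path is a placeholder).
    waveform, sample_rate = torchaudio.load("utterance.wav")
    if sample_rate != 16_000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

    inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits  # (batch, time, vocab)

    # Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
    pred_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(pred_ids)[0])

Fine-tuning for a new language follows the same pattern, but replaces the output vocabulary with subword- or character-level units for that language before training with the CTC loss.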
