Injecting Text and Cross-lingual Supervision in Few-shot Learning from Self-Supervised Models

10/10/2021
by Matthew Wiesner, et al.

Self-supervised model pre-training has recently garnered significant interest, but relatively few efforts have explored using additional resources when fine-tuning these models. We demonstrate how universal-phoneset acoustic models can leverage cross-lingual supervision to improve the transfer of pretrained self-supervised representations to new languages. We also show how target-language text can be used to enable and improve fine-tuning with the lattice-free maximum mutual information (LF-MMI) objective. In three low-resource languages, these techniques greatly improved few-shot learning performance.
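At its core, the MMI criterion referenced above scores an utterance by the log-ratio of the reference transcript's likelihood to the total likelihood over all competing hypotheses; the "lattice-free" variant computes the competing-hypothesis sum over a phone-level denominator graph rather than word lattices. As a rough illustration only (not the paper's method), here is a toy sketch of the plain MMI objective over an explicit, hypothetical set of candidate sequences with made-up scores:

```python
import math

def mmi_objective(seq_loglik: dict, reference: str) -> float:
    """Toy MMI criterion for one utterance:
    log p(reference) - log sum over all candidate sequences of p(sequence).
    `seq_loglik` maps each candidate sequence to its joint log-likelihood.
    (Real LF-MMI sums over a denominator graph via forward-backward,
    not an enumerated hypothesis list.)
    """
    # log-sum-exp over all hypotheses (the "denominator" term),
    # shifted by the max for numerical stability
    m = max(seq_loglik.values())
    log_den = m + math.log(sum(math.exp(v - m) for v in seq_loglik.values()))
    return seq_loglik[reference] - log_den

# Hypothetical scores: the reference outscores two competing transcripts.
scores = {"a b c": -10.0, "a b d": -12.0, "x b c": -15.0}
obj = mmi_objective(scores, "a b c")
```

The objective is always at most zero and approaches zero as the reference absorbs all the probability mass, which is why maximizing it sharpens the model toward the correct transcript relative to its competitors.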
