Very Low Resource Sentence Alignment: Luhya and Swahili

10/31/2022
by Everlyn Asiko Chimoto, et al.

Language-agnostic sentence embeddings generated by pre-trained models such as LASER and LaBSE are attractive options for mining large datasets to produce parallel corpora for low-resource machine translation. We test LASER and LaBSE in extracting bitext for two related low-resource African languages: Luhya and Swahili. For this work, we created a new parallel set of nearly 8000 Luhya-English sentences, which allows a new zero-shot test of LASER and LaBSE. We find that LaBSE significantly outperforms LASER on both languages. Both LASER and LaBSE, however, perform poorly at zero-shot alignment on Luhya, achieving just 1.5% alignment accuracy. We fine-tune the embeddings on a small set of parallel Luhya sentences and show significant gains, improving the LaBSE alignment accuracy to 53.3%. Restricting the dataset to sentence embedding pairs with cosine similarity above 0.7 yielded alignments with over 85% accuracy.
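
The alignment approach described above pairs sentences by the cosine similarity of their embeddings and keeps only pairs above a threshold. Below is a minimal sketch of that idea (not the authors' code), assuming the sentence-transformers package and the public "sentence-transformers/LaBSE" checkpoint; the `align` function and its greedy nearest-neighbour matching are illustrative choices, with the 0.7 cut taken from the abstract.

```python
# Hypothetical sketch of threshold-based bitext alignment with LaBSE embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer


def align(src_sents, tgt_sents, threshold=0.7):
    """Pair each source sentence with its nearest target sentence by cosine
    similarity; discard pairs whose similarity falls below the threshold."""
    model = SentenceTransformer("sentence-transformers/LaBSE")
    # L2-normalised embeddings make the dot product equal to cosine similarity.
    src = model.encode(src_sents, normalize_embeddings=True)
    tgt = model.encode(tgt_sents, normalize_embeddings=True)
    sims = src @ tgt.T                 # (len(src), len(tgt)) cosine matrix
    best = sims.argmax(axis=1)         # closest target index per source sentence
    return [
        (src_sents[i], tgt_sents[j], float(sims[i, j]))
        for i, j in enumerate(best)
        if sims[i, j] >= threshold
    ]


if __name__ == "__main__":
    # Toy Swahili-English example; real mining would run over large monolingual sets.
    pairs = align(["Habari ya asubuhi."], ["Good morning.", "See you tomorrow."])
    print(pairs)
```

Raising the threshold trades recall for precision, which is the effect reported in the abstract: fewer retained pairs, but a much higher share of correct alignments.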
