Self-Induced Curriculum Learning in Neural Machine Translation

04/07/2020
by Dana Ruiter, et al.

Self-supervised neural machine translation (SS-NMT) learns how to extract and select suitable training data from comparable, rather than parallel, corpora and how to translate, such that the two tasks support each other in a virtuous circle. SS-NMT has been shown to be competitive with state-of-the-art unsupervised NMT. In this study we provide an in-depth analysis of the sampling choices the SS-NMT model makes during training. We show that, without having been told to do so, the model selects samples of increasing (i) complexity and (ii) task-relevance, in combination with (iii) a denoising curriculum. We observe that the dynamics of mutual supervision between the system's two internal representation types are vital for extraction and hence for translation performance. We show that, in terms of the Gunning-Fog readability index (GF), a measure of human readability, SS-NMT starts by extracting and learning from Wikipedia data suitable for high-school students (GF=10–11) and quickly moves towards content suitable for first-year undergraduates (GF=13).
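
For context, a minimal sketch of how the Gunning-Fog index cited in the abstract is computed. The tokenization and vowel-group syllable counter below are simplifying assumptions, not part of the paper's pipeline; standard GF also excludes proper nouns and familiar jargon from the complex-word count, which this sketch omits.

```python
import re

def count_syllables(word: str) -> int:
    # Crude approximation: count vowel groups, at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    # GF = 0.4 * (average sentence length + 100 * fraction of complex words)
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

# The score roughly maps to the US school grade needed to read the text:
# GF 10-11 ~ high school, GF 13 ~ first-year undergraduate.
print(round(gunning_fog("The model selects samples of increasing complexity."), 1))
```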
