Recurrent Stacking of Layers for Compact Neural Machine Translation Models

07/14/2018
by Raj Dabre, et al.

In Neural Machine Translation (NMT), the most common practice is to stack a number of recurrent or feed-forward layers in the encoder and the decoder. While the addition of each new layer tends to improve translation quality, it also leads to a significant increase in the number of parameters. In this paper, we propose to share parameters across all the layers, thereby yielding a recurrently stacked NMT model. We empirically show that the translation quality of a model that recurrently stacks a single layer 6 times is comparable to that of a model that stacks 6 separate layers. We also show that using back-translated parallel corpora as additional data leads to further significant improvements in translation quality.
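To illustrate the core idea, the following is a minimal sketch (not the authors' code) contrasting a conventional encoder with 6 independently parameterized layers against a recurrently stacked encoder that reuses a single shared layer 6 times. The layer type (`nn.TransformerEncoderLayer`), dimensions, and class names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of recurrent stacking via parameter sharing.
# Layer type and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class VanillaEncoder(nn.Module):
    """Six independently parameterized layers: parameter count grows with depth."""

    def __init__(self, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead) for _ in range(num_layers)
        )

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x


class RecurrentlyStackedEncoder(nn.Module):
    """One shared layer applied num_steps times: parameter count stays constant."""

    def __init__(self, d_model=512, nhead=8, num_steps=6):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(d_model, nhead)
        self.num_steps = num_steps

    def forward(self, x):
        for _ in range(self.num_steps):
            x = self.shared_layer(x)
        return x


if __name__ == "__main__":
    x = torch.randn(10, 2, 512)  # (sequence length, batch, d_model)
    count = lambda m: sum(p.numel() for p in m.parameters())
    print("vanilla params:   ", count(VanillaEncoder()))
    print("recurrent params: ", count(RecurrentlyStackedEncoder()))
```

Running the script prints the parameter counts of the two encoders; the recurrently stacked variant uses roughly one sixth of the layer parameters while still applying 6 layers' worth of computation at each forward pass.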

