Solving Math Word Problems with Double-Decoder Transformer

08/28/2019
by Yuanliang Meng, et al.

This paper proposes a Transformer-based model to generate equations for math word problems. It achieves much better results than RNN models when copy and align mechanisms are not used, and can outperform complex RNN models that do use copy and align mechanisms. We also show that jointly training a Transformer on a generation task with two decoders, one left-to-right and one right-to-left, is beneficial. Such a Transformer performs better than one with a single decoder, not only because of the ensemble effect, but also because joint training improves the encoder training procedure. We also experiment with adding reinforcement learning to our model, showing improved performance over maximum-likelihood (MLE) training.
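
The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the double-decoder idea under stated assumptions: a shared Transformer encoder feeds two separate decoders, one generating the equation left-to-right and one right-to-left, and the two cross-entropy losses are summed so the encoder receives gradients from both directions. All names (DoubleDecoderTransformer, decoder_l2r, decoder_r2l), layer sizes, and the toy data are illustrative, not the authors' code.

    import torch
    import torch.nn as nn

    class DoubleDecoderTransformer(nn.Module):
        def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=3, max_len=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Embedding(max_len, d_model)  # learned positional embeddings
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
            # Two separate decoders attend to the same encoder memory.
            self.decoder_l2r = nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
            self.decoder_r2l = nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
            self.out_l2r = nn.Linear(d_model, vocab_size)
            self.out_r2l = nn.Linear(d_model, vocab_size)

        def _embed(self, x):
            return self.embed(x) + self.pos(torch.arange(x.size(1), device=x.device))

        def forward(self, src, tgt_l2r, tgt_r2l):
            memory = self.encoder(self._embed(src))       # shared by both decoders
            t = tgt_l2r.size(1)
            causal = torch.triu(torch.full((t, t), float("-inf"),
                                           device=tgt_l2r.device), diagonal=1)
            h_l2r = self.decoder_l2r(self._embed(tgt_l2r), memory, tgt_mask=causal)
            h_r2l = self.decoder_r2l(self._embed(tgt_r2l), memory, tgt_mask=causal)
            return self.out_l2r(h_l2r), self.out_r2l(h_r2l)

    # One joint MLE training step on toy data (real inputs would carry BOS/EOS tokens).
    model = DoubleDecoderTransformer(vocab_size=100)
    src = torch.randint(0, 100, (4, 20))   # word-problem token ids
    tgt = torch.randint(0, 100, (4, 12))   # equation token ids
    rev = tgt.flip(1)                      # same equation, reversed for the R2L decoder
    logits_l2r, logits_r2l = model(src, tgt[:, :-1], rev[:, :-1])
    ce = nn.CrossEntropyLoss()
    loss = (ce(logits_l2r.reshape(-1, 100), tgt[:, 1:].reshape(-1))
            + ce(logits_r2l.reshape(-1, 100), rev[:, 1:].reshape(-1)))
    loss.backward()  # encoder parameters receive gradients from both decoders

In this setup, the ensemble effect would come from combining the two decoders' outputs at test time, while the encoder improvement comes from the summed loss above. For the reinforcement learning variant the abstract likewise gives no details; a common recipe would be to fine-tune with a REINFORCE-style objective that weights sequence log-likelihood by a task reward (e.g., whether the generated equation yields the correct answer) in place of the per-token cross-entropy.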
