Simultaneous Translation with Flexible Policy via Restricted Imitation Learning

06/04/2019
by   Baigong Zheng, et al.

Simultaneous translation is widely useful but remains one of the most difficult tasks in NLP. Previous work either uses fixed-latency policies or trains a complicated two-stage model using reinforcement learning. We propose a much simpler single model that adds a "delay" token to the target vocabulary, and design a restricted dynamic oracle to greatly simplify training. Experiments on Chinese-to-English and English-to-Chinese simultaneous translation show that our approach yields flexible policies that achieve better BLEU scores and lower latencies than both fixed and RL-learned policies.
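To illustrate the core idea, here is a minimal sketch of how a learned "delay" token can drive a flexible READ/WRITE policy at decoding time: whenever the model predicts the delay token, the decoder reads one more source token; any other prediction writes a target token. The `toy_policy` below is a hypothetical stand-in for the trained translation model (the real system's prediction function is not specified in this abstract), and the token names `<delay>` and `</s>` are assumptions for illustration.

```python
DELAY, EOS = "<delay>", "</s>"

def simultaneous_decode(source, predict_next):
    """Greedy simultaneous decoding driven by a delay token.

    predict_next(src_prefix, tgt_prefix, src_done) returns the next
    target token; predicting DELAY triggers a READ action instead of
    a WRITE action.
    """
    read, target, actions = 1, [], []  # start after reading one source token
    while True:
        tok = predict_next(source[:read], target, read == len(source))
        if tok == DELAY:
            read += 1            # READ: consume one more source token
            actions.append("R")
        elif tok == EOS:
            break                # translation finished
        else:
            target.append(tok)   # WRITE: emit a target token
            actions.append("W")
    return target, actions

def toy_policy(src_prefix, tgt_prefix, src_done):
    """Hypothetical stand-in for the trained model: waits until it has
    read two more source tokens than it has written (a wait-2-like
    behavior), then "translates" by uppercasing the source token."""
    if not src_done and len(src_prefix) - len(tgt_prefix) < 2:
        return DELAY
    if src_done and len(tgt_prefix) == len(src_prefix):
        return EOS
    return src_prefix[len(tgt_prefix)].upper()

target, actions = simultaneous_decode(["wo", "ai", "ni"], toy_policy)
# target  -> ["WO", "AI", "NI"]
# actions -> ["R", "W", "R", "W", "W"]
```

Because the delay token lives in the ordinary target vocabulary, the model learns when to read and when to write jointly with what to write, with no separate policy network or two-stage RL training.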
