Training with Exploration Improves a Greedy Stack-LSTM Parser

03/11/2016
by Miguel Ballesteros, et al.

We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of cross-entropy minimization against gold action sequences. This form of training, which accounts for model predictions at training time rather than assuming an error-free action history, improves parsing accuracy, yielding very strong results for both English and Chinese. We discuss some modifications needed to get training with exploration to work well for a probabilistic neural network parser.
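To make the idea concrete, the following is a minimal Python sketch of a training-with-exploration loop of the kind the abstract describes, not the authors' actual code. All interfaces here are hypothetical stand-ins: `model.initial_state`, `model.action_probabilities`, `model.update`, `state.apply`, and the `zero_cost_actions` hook (the dynamic oracle itself, which is transition-system specific and left abstract).

```python
# A minimal sketch of error-exploring training for a greedy transition-based
# parser. Every name below (model, state, zero_cost_actions, p_explore) is a
# hypothetical stand-in for illustration, not the paper's implementation.

import random

def train_epoch_with_exploration(model, treebank, zero_cost_actions, p_explore=0.9):
    """One epoch of training with a dynamic oracle.

    A static oracle would replay the gold action sequence; here the parser
    sometimes follows its own (possibly wrong) predictions, and the dynamic
    oracle supplies the zero-cost actions (those that keep the best reachable
    tree reachable) from whatever configuration results.
    """
    for sentence, gold_tree in treebank:
        state = model.initial_state(sentence)
        while not state.is_terminal():
            probs = model.action_probabilities(state)  # softmax over legal actions
            optimal = zero_cost_actions(state, gold_tree)  # dynamic-oracle hook
            # Gradient step toward the most probable zero-cost action,
            # i.e. minimize -log p(target | state).
            target = max(optimal, key=lambda a: probs[a])
            model.update(state, target)
            if random.random() < p_explore:
                # Explore: sample the next action from the model's own
                # distribution, so training visits non-gold configurations.
                actions = list(probs)
                weights = [probs[a] for a in actions]
                action = random.choices(actions, weights=weights)[0]
            else:
                action = target  # follow the oracle
            state = state.apply(action)
```

Sampling the next action from the model's own softmax (rather than always taking its argmax) is one plausible way to expose a probabilistic parser to the mistakes it is likely to make; that choice of exploration policy, and how to pick an update target when several actions have zero cost, are the kinds of modifications the abstract alludes to.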
