Stack-propagation: Improved Representation Learning for Syntax

03/21/2016
by   Yuan Zhang, et al.

Traditional syntax models typically leverage part-of-speech (POS) information by constructing features from hand-tuned templates. We demonstrate that a better approach is to utilize POS tags as a regularizer of learned representations. We propose a simple method for learning a stacked pipeline of models which we call "stack-propagation". We apply this to dependency parsing and tagging, where we use the hidden layer of the tagger network as a representation of the input tokens for the parser. At test time, our parser does not require predicted POS tags. On 19 languages from the Universal Dependencies, our method is 1.3% more accurate than a state-of-the-art graph-based approach and 2.7% more accurate than the most comparable greedy model.
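The core idea can be illustrated with a minimal numpy sketch (all names and dimensions below are illustrative assumptions, not the paper's actual architecture): the tagger's hidden layer, rather than its predicted POS tag, serves as the token representation consumed by the parser, so the POS loss regularizes the shared representation during training while no tag predictions are needed at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
VOCAB, EMB, HIDDEN, N_TAGS = 50, 16, 8, 5

# Tagger parameters: embedding -> hidden -> POS softmax.
W_emb = rng.normal(scale=0.1, size=(VOCAB, EMB))
W_hid = rng.normal(scale=0.1, size=(EMB, HIDDEN))
W_tag = rng.normal(scale=0.1, size=(HIDDEN, N_TAGS))

def tagger_hidden(token_ids):
    """Hidden layer of the tagger network; this activation, not the
    predicted POS tag, is the token representation the parser sees."""
    return np.tanh(W_emb[token_ids] @ W_hid)

def tag_logits(hidden):
    # During training, a POS loss on these logits backpropagates
    # through the shared layers, regularizing the representation.
    return hidden @ W_tag

# The parser consumes the tagger's hidden layer directly, so at test
# time no predicted POS tags are required.
W_parse = rng.normal(scale=0.1, size=(HIDDEN, 1))

def parse_scores(token_ids):
    return (tagger_hidden(token_ids) @ W_parse).ravel()

sentence = np.array([3, 17, 42])
print(tagger_hidden(sentence).shape)  # (3, 8): one vector per token
print(parse_scores(sentence).shape)   # (3,)
```

In a full implementation both losses would update the shared tagger layers jointly; the sketch only shows the data flow that lets the parser skip predicted tags at test time.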
