Syntactically Informed Text Compression with Recurrent Neural Networks

08/08/2016
by David Cox, et al.

We present a self-contained system for constructing natural language models for use in text compression. Our system improves on previous neural-network-based models by using a recent advance in syntactic parsing, Google's SyntaxNet, to augment character-level recurrent neural networks. RNNs have proven exceptional at modeling sequence data such as text, as their architecture allows them to capture long-term contextual information.
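To make the idea concrete, here is a minimal sketch (in PyTorch) of a syntactically informed character-level RNN language model. This is an illustration, not the paper's implementation: the layer sizes, the choice of an LSTM, and the specific scheme of concatenating a per-character syntactic feature (e.g., a tag id produced by a parser such as SyntaxNet) to the character embedding are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class SyntaxCharRNN(nn.Module):
    """Character-level RNN whose input is augmented with syntactic tags.

    Hypothetical sketch: feature dimensions and the tag-embedding scheme
    are illustrative assumptions, not the authors' architecture.
    """
    def __init__(self, n_chars, n_tags, char_dim=64, tag_dim=16, hidden=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.tag_emb = nn.Embedding(n_tags, tag_dim)  # syntactic side info
        self.rnn = nn.LSTM(char_dim + tag_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, chars, tags, state=None):
        # chars, tags: (batch, seq_len) integer ids; tags would come
        # from running a syntactic parser over the text.
        x = torch.cat([self.char_emb(chars), self.tag_emb(tags)], dim=-1)
        h, state = self.rnn(x, state)
        return self.out(h), state  # next-character logits per position

model = SyntaxCharRNN(n_chars=128, n_tags=20)
chars = torch.randint(0, 128, (1, 32))   # dummy character ids
tags = torch.randint(0, 20, (1, 32))     # dummy parser tag ids
logits, _ = model(chars, tags)
probs = torch.softmax(logits[0, -1], dim=-1)  # P(next char | context, syntax)
```

The predicted distribution over the next character is exactly what a compressor needs: feeding it into an entropy coder (e.g., an arithmetic coder) lets likely characters cost few bits and unlikely ones cost more, so a sharper model yields smaller compressed output.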
