Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations

01/18/2018
by Gino Brunner, et al.

We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as the representation space becomes less entangled. We explore the structure of the representation space by interpolating between sentences, which yields interesting pseudo-English sentences, many of which have recognizable syntactic structure. Lastly, we point out an interesting property of our models: the difference vector between two sentences can be added to a third sentence with similar features to change it in a meaningful way.
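As a minimal sketch of the two representation-space probes the abstract mentions, the snippet below shows linear interpolation between two sentence representations and difference-vector arithmetic on a third. The function names, the 512-dimensional random stand-in vectors, and the helper signatures are illustrative assumptions, not the authors' code; the paper's actual encoder and task-specific decoders are not reproduced here.

```python
import numpy as np

def interpolate(z_a: np.ndarray, z_b: np.ndarray, steps: int = 5) -> list:
    """Linearly interpolate between two sentence representations."""
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

def apply_difference(z_a: np.ndarray, z_b: np.ndarray, z_c: np.ndarray) -> np.ndarray:
    """Add the difference vector (z_b - z_a) to a third representation z_c."""
    return z_c + (z_b - z_a)

# Toy usage with random stand-in vectors (hypothetical dimensionality).
# In practice, each vector would be the encoder's hidden state for a real
# sentence, and each result would be decoded back into a sentence.
rng = np.random.default_rng(0)
z_a, z_b, z_c = (rng.standard_normal(512) for _ in range(3))
path = interpolate(z_a, z_b, steps=7)       # points along the interpolation
z_shifted = apply_difference(z_a, z_b, z_c) # A-to-B change applied to C
```

Decoding each vector in `path` would yield the pseudo-English intermediate sentences described above, and decoding `z_shifted` would yield the meaningfully changed third sentence.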
