Teaching Syntax by Adversarial Distraction

10/25/2018
by Juho Kim, et al.

Existing entailment datasets mainly pose problems that can be answered without attention to grammar or word order. Learning syntax requires comparing examples where different grammar and word order change the desired classification. We introduce several datasets based on synthetic transformations of natural entailment examples from SNLI or FEVER, designed to teach aspects of grammar and word order. We show that without retraining, popular entailment models are unaware that these syntactic differences change meaning. With retraining, some but not all popular entailment models can learn to compare the syntax properly.
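
The abstract does not spell out the concrete transformations, but the general recipe (perturbing word order so that the correct entailment label flips while the vocabulary stays the same) can be illustrated with a toy sketch. Everything below, including the swap_subject_object helper and the example pair, is a hypothetical illustration rather than code or data from the paper; the actual datasets would be built from parsed SNLI and FEVER sentences.

```python
# Hypothetical sketch of a word-order "adversarial distraction": swap two noun
# phrases in a hypothesis so the surface vocabulary is unchanged but the
# correct entailment label flips. Not the paper's transformation code; a real
# pipeline would use a syntactic parser rather than string replacement.

def swap_subject_object(hypothesis: str, subject: str, obj: str) -> str:
    """Exchange two phrases in the hypothesis, leaving every other word intact."""
    placeholder = "\x00"
    swapped = hypothesis.replace(subject, placeholder)
    swapped = swapped.replace(obj, subject)
    return swapped.replace(placeholder, obj)

premise = "The dog is chasing the cat across the yard."
hypothesis = "the dog chases the cat"        # gold label: entailment
distractor = swap_subject_object(hypothesis, "the dog", "the cat")

print(hypothesis)   # "the dog chases the cat" -> entailment
print(distractor)   # "the cat chases the dog" -> contradiction
```

A bag-of-words model sees identical token sets for the two hypotheses, so it can only separate them by attending to word order, which is exactly the failure mode the abstract describes for syntax-unaware entailment models.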
