A Surprisingly Robust Trick for Winograd Schema Challenge

05/15/2019
by Vid Kocijan, et al.

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 strongly improves when they are fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve an overall accuracy of 72.2% on WSC273, improving on previous state-of-the-art solutions by 8.5%. Our fine-tuned models are also consistently more robust on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
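To make the approach concrete, below is a minimal sketch of the masked-LM candidate-scoring idea behind fine-tuning BERT for WSC-style pronoun disambiguation: the pronoun slot is replaced by mask tokens and each candidate referent is scored by the probability BERT assigns to its word-pieces. It assumes the HuggingFace `transformers` library; the function `score_candidate` and the example sentence are illustrative, not taken from the paper's codebase.

```python
# Hedged sketch: score candidate referents for a Winograd-style pronoun slot
# with a masked language model. Assumes `pip install torch transformers`.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def score_candidate(sentence_with_blank: str, candidate: str) -> float:
    """Score a candidate referent by the masked-LM log-probability of its
    word-pieces when they fill the pronoun slot (marked with '_')."""
    cand_ids = tokenizer.encode(candidate, add_special_tokens=False)
    # Replace the pronoun slot with one [MASK] per candidate word-piece.
    masked = sentence_with_blank.replace(
        "_", " ".join([tokenizer.mask_token] * len(cand_ids))
    )
    inputs = tokenizer(masked, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    log_probs = torch.log_softmax(logits, dim=-1)
    # Sum the log-probabilities of the candidate's word-pieces at the masked positions.
    return sum(log_probs[p, t].item() for p, t in zip(mask_pos, cand_ids))

# Classic Winograd example: which candidate does the pronoun refer to?
s = "The trophy doesn't fit in the suitcase because _ is too big."
for cand in ["the trophy", "the suitcase"]:
    print(cand, score_candidate(s, cand))
```

Summing word-piece log-probabilities is one simple way to compare candidates of different token lengths; fine-tuning on WSCR or the generated WSC-like data would then adjust these scores rather than change the scoring scheme itself.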

