Does It Make Sense? And Why? A Pilot Study for Sense Making and Explanation

06/02/2019
by Cunxiang Wang, et al.

Introducing common sense to natural language understanding systems has received increasing research attention. How to evaluate whether a system has the capability of sense making, however, remains a fundamental question. Existing benchmarks measure commonsense knowledge indirectly and without explanation. In this paper, we release a benchmark that directly tests whether a system can differentiate natural language statements that make sense from those that do not. In addition, the system is asked to identify the most crucial reason why a statement does not make sense. We evaluate models trained on large-scale language modeling tasks as well as human performance, showing that there remain distinct challenges for system sense making.
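
To make the two subtasks concrete, below is a minimal sketch of how such a benchmark could be represented and scored. The data structure, field names, and example statements are illustrative assumptions only, not the benchmark's actual release format.

```python
# Minimal sketch of the two subtasks described in the abstract.
# All field names and example data are assumptions for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class SenseMakingExample:
    """One item: two statements, exactly one of which makes sense."""
    statements: List[str]          # e.g. ["He put a turkey into the fridge.",
                                   #       "He put an elephant into the fridge."]
    nonsensical_index: int         # index of the statement that does NOT make sense
    candidate_reasons: List[str]   # candidate explanations (assumed format)
    correct_reason_index: int      # index of the most crucial reason


def accuracy(predictions: List[int], gold: List[int]) -> float:
    """Fraction of items where the predicted index matches the gold index."""
    assert len(predictions) == len(gold)
    if not gold:
        return 0.0
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)


def evaluate(examples: List[SenseMakingExample],
             sense_preds: List[int],
             reason_preds: List[int]) -> dict:
    """Subtask A: pick the nonsensical statement.
    Subtask B: pick the most crucial reason why it does not make sense."""
    return {
        "sense_making_acc": accuracy(sense_preds,
                                     [e.nonsensical_index for e in examples]),
        "explanation_acc": accuracy(reason_preds,
                                    [e.correct_reason_index for e in examples]),
    }
```

A system would be compared against human performance on both accuracies, with the explanation subtask probing whether the model can justify, not merely detect, a violation of common sense.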
