More Than Reading Comprehension: A Survey on Datasets and Metrics of Textual Question Answering

09/25/2021
by   Yang Bai, et al.
Textual Question Answering (QA) aims to provide precise answers to users' questions in natural language using unstructured data. One of the most popular approaches to this goal is machine reading comprehension (MRC). In recent years, many novel datasets and evaluation metrics based on classical MRC tasks have been proposed for broader textual QA tasks. In this paper, we survey 47 recent textual QA benchmark datasets and propose a new taxonomy from an application point of view. In addition, we summarize 8 evaluation metrics for textual QA tasks. Finally, we discuss current trends in constructing textual QA benchmarks and suggest directions for future work.
