Contextualized Word Representations for Reading Comprehension

12/10/2017
by Shimi Salant, et al.

Reading a document and extracting an answer to a question about its content has attracted substantial attention recently, with most work focusing on the interaction between the question and the document. In this work we evaluate the importance of context when the question and the document are each read on their own. We take a standard neural architecture for the task of reading comprehension, and show that by providing rich contextualized word representations from a large language model, and allowing the model to choose between context-dependent and context-independent word representations, we can dramatically improve performance, reaching state of the art on the competitive SQuAD dataset.
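The key architectural idea in the abstract, letting the model weigh a context-dependent representation (from a language model) against a context-independent one (a static embedding), can be illustrated with a learned gate. The sketch below is a minimal, hypothetical PyTorch module, not the paper's exact architecture: the class name, dimensions, and the element-wise sigmoid gate are assumptions chosen to illustrate the general technique.

```python
import torch
import torch.nn as nn


class GatedWordRepresentation(nn.Module):
    """Illustrative sketch: interpolate between a static embedding
    (e.g. GloVe) and a contextualized one (e.g. LM hidden states)
    via a learned element-wise gate. Hypothetical, not the paper's
    exact formulation."""

    def __init__(self, static_dim: int, contextual_dim: int, hidden_dim: int):
        super().__init__()
        # Project both representations into a shared space.
        self.static_proj = nn.Linear(static_dim, hidden_dim)
        self.contextual_proj = nn.Linear(contextual_dim, hidden_dim)
        # The gate sees both inputs and decides, per dimension,
        # how much contextual signal to let through.
        self.gate = nn.Linear(static_dim + contextual_dim, hidden_dim)

    def forward(self, static_emb: torch.Tensor,
                contextual_emb: torch.Tensor) -> torch.Tensor:
        # static_emb:     (batch, seq_len, static_dim)
        # contextual_emb: (batch, seq_len, contextual_dim)
        s = self.static_proj(static_emb)
        c = self.contextual_proj(contextual_emb)
        g = torch.sigmoid(
            self.gate(torch.cat([static_emb, contextual_emb], dim=-1)))
        # g close to 1: rely on context; g close to 0: rely on the
        # context-independent embedding.
        return g * c + (1 - g) * s
```

Because the gate is element-wise, the model can mix the two sources per token and per dimension rather than committing globally to one representation, which is one plausible reading of "allowing the model to choose" in the abstract.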
