A Comparative Study of Word Embeddings for Reading Comprehension

03/02/2017
by Bhuwan Dhingra, et al.

Past machine learning research on Reading Comprehension tasks has focused primarily on the design of novel deep learning architectures. Here we show that seemingly minor choices regarding (1) the use of pre-trained word embeddings and (2) the representation of out-of-vocabulary (OOV) tokens at test time can have a larger impact on final performance than architectural choices. We systematically explore several options for each choice and provide recommendations to researchers working in this area.
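
As a concrete illustration of choice (2), the sketch below builds an embedding matrix from a GloVe-format text file and assigns random vectors, scaled to match the pre-trained ones, to OOV tokens. The file path, embedding dimension, and scaling heuristic are illustrative assumptions and not necessarily the setup the paper recommends; this is just one of several OOV strategies one could compare.

```python
import numpy as np

def load_embeddings(path, vocab, dim=100, seed=0):
    """Build an embedding matrix for `vocab` from a GloVe-style text file.

    Tokens found in the file get their pre-trained vector; out-of-vocabulary
    (OOV) tokens are assigned random vectors scaled to match the pre-trained
    ones (one possible OOV strategy among several).
    """
    rng = np.random.RandomState(seed)
    pretrained = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                pretrained[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

    # Scale random OOV vectors to the empirical std of the pre-trained ones.
    std = float(np.std(np.stack(list(pretrained.values())))) if pretrained else 0.1

    matrix = np.zeros((len(vocab), dim), dtype=np.float32)
    for i, token in enumerate(vocab):
        if token in pretrained:
            matrix[i] = pretrained[token]
        else:
            matrix[i] = rng.normal(0.0, std, size=dim)  # OOV: random init
    return matrix

# Usage (hypothetical file path):
# E = load_embeddings("glove.6B.100d.txt", ["the", "comprehension", "qwzx"])
```

An obvious alternative to random initialization is mapping all OOV tokens to a single zero or UNK vector; which option works best is exactly the kind of empirical question the study addresses.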
