In-context learning (ICL), the ability of large language models to perfo...
Answering complex questions that require making latent decisions is a ch...
The full power of human language-based communication cannot be realized...
Transfer learning (TL) in natural language processing (NLP) has seen a s...
Training a referring expression comprehension (ReC) model for a new visu...
Natural language processing models often exploit spurious correlations b...
A rapidly growing body of research has demonstrated the inability of NLP...
Pretrained Language Models (LMs) have demonstrated the ability to perform nu...
Retrieval-augmented generation models have shown state-of-the-art perfor...
While interest in models that generalize at test time to new composition...
Alongside huge volumes of research on deep learning models in NLP in the...
Making controlled perturbations is essential for various tasks (e.g., da...
The predominant challenge in weakly supervised semantic parsing is that ...
Readers of academic research papers often read with the goal of answerin...
As language models are trained on ever more text, researchers are turnin...
Compositional reasoning tasks like multi-hop question answering require...
When training most modern reading comprehension models, all the question...
Much recent work in NLP has documented dataset artifacts, bias, and spur...
Compositional, structured models are appealing because they explicitly d...
Question Answering (QA) tasks requiring information from multiple docume...
Typically, machine learning systems solve new tasks by training on thous...
Humans often have to read multiple documents to address their informatio...
Understanding the relationship between figures and text is key to scient...
Generalization of models to out-of-distribution (OOD) data has captured...
Posing reading comprehension as a generation problem provides a great de...
High-quality and large-scale data are key to success for AI systems. How...
Coreference resolution is an important task for discourse-level natural...
Answering questions that involve multi-step reasoning requires decomposi...
Neural module networks (NMNs) are a popular approach for modeling compos...
A critical part of reading is being able to understand the temporal rela...
Complex reasoning over text requires understanding and chaining together...
Standard test sets for supervised learning evaluate in-distribution gene...
Understanding natural language questions entails the ability to break do...
Reading comprehension is one of the crucial tasks for furthering researc...
Answering compositional questions that require multiple steps of reasoni...
Recent years have seen a dramatic expansion of tasks and datasets posed ...
Neural NLP models are increasingly accurate but are imperfect and opaque...
The ability to understand and work with numbers (numeracy) is critical f...
We introduce the first open-domain dataset, called QuaRTz, for reasoning...
State-of-the-art semantic parsers rely on auto-regressive decoding, emit...
Adversarial examples highlight model vulnerabilities and are useful for...
A key component of successfully reading a passage of text is the ability...
Machine comprehension of texts longer than a single sentence often requi...
Multi-hop reading comprehension (RC) questions are challenging because t...
The sequence-to-sequence paradigm employed by neural text-to-SQL models...
Research on parsing language to SQL has largely ignored the structure of...
Contextual word representations derived from large-scale neural language...
Reading comprehension has recently seen rapid progress, with systems mat...
Many natural language questions require recognizing and reasoning with q...