Understanding when two pieces of text convey the same information is a g...
Current natural language systems designed for multi-step claim validatio...
Modern language models have the capacity to store and use immense amount...
Standard decoding approaches for conditional text generation tasks typic...
A human decision-maker benefits the most from an AI assistant that corre...
Developers often dedicate significant time to maintaining and refactorin...
Past work has studied event prediction and event language modeling, some...
The rise of large language models (LLMs) has brought a critical need for...
Evidence retrieval is a core part of automatic fact-checking. Prior work...
Many data extraction tasks of practical relevance require not only synta...
Prior work has combined chain-of-thought prompting in large language mod...
Pre-trained language models (LMs) are used for knowledge intensive tasks...
There has been growing interest in automatically predicting missing type...
Event scenarios are often complex and involve multiple event sequences c...
Recent work has addressed textual reasoning tasks by prompting large lan...
Very large language models such as GPT-3 have shown impressive performan...
Large language models (LLMs) have exhibited remarkable capabilities in l...
A growing body of work studies how to answer a question or verify a clai...
While pretrained language models have exhibited impressive generalizatio...
Automatic discourse processing, which can help understand how sentences...
The recent success of zero- and few-shot prompting with models like GPT-...
We propose a new technique based on program synthesis for automatically...
The propensity of abstractive summarization systems to make factual erro...
Progress in summarizing long texts is inhibited by the lack of appropria...
Verifying complex political claims is a challenging task, especially whe...
How can prompting a large language model like GPT-3 with explanations im...
Language models (LMs) are typically trained once on a large-scale corpus...
In settings from fact-checking to question answering, we frequently want...
Neural text generation models like those used for summarization and tran...
While there has been substantial progress in text comprehension through...
Pre-trained language models (e.g. BART) have shown impressive results wh...
A reader interested in a particular topic might be interested in summari...
The growth of cross-lingual pre-trained models has enabled NLP tools to...
Document-level information extraction is a flexible framework compatible...
One often wants to take an existing, trained NLP model and use it on dat...
Most benchmark datasets targeting commonsense reasoning focus on everyda...
Despite the prominence of neural abstractive summarization models, we kn...
An interpretable system for complex, open-domain reasoning needs an inte...
To build robust question answering systems, we need the ability to verif...
In this paper, we propose a new technique based on program synthesis for...
Token-level attributions have been extensively studied to explain model...
Discourse signals are often implicit, leaving it up to the interpreter t...
Recent pre-trained abstractive summarization systems have started to ach...
While numerous methods have been proposed as defenses against adversaria...
Neural entity typing models typically represent entity types as vectors ...
Models encapsulating narrative schema knowledge have proven to be useful...
A principal barrier to training temporal relation extraction models in n...
Compressive summarization systems typically rely on a crafted set of syn...
An advantage of seq2seq abstractive summarization models is that they ge...
Despite significant progress in text generation models, a serious limita...