A popular approach for improving the correctness of output from large la...
Despite their unprecedented success, even the largest language models ma...
Many recent advances in natural language generation have been fueled by ...
Like people, LLMs do not always generate the best text for a given gener...
The waning of Moore's Law has shifted the focus of the tech industry tow...
Large language models (LLMs) have recently demonstrated an impressive ab...
We address the general task of structured commonsense reasoning: given a...
Reasoning is a key pillar of human cognition and intelligence. In the pa...
We present FLOWGEN, a graph-generation model inspired by the dual-proces...
Conditional set generation learns a mapping from an input sequence of to...
Large LMs such as GPT-3, while powerful, are not immune to mistakes, but...
How can an end-user provide feedback if a deployed structured prediction...
A class of explainable NLP models for reasoning tasks support their deci...
We introduce GEM, a living benchmark for natural language Generation (NL...
This paper presents the first study on using large-scale pre-trained lan...
This paper introduces a new task of politeness transfer which involves c...
We propose a method of curating high-quality comparable training data fo...
The problem of collecting reliable estimates of occurrence of entities o...