We present Vārta, a large-scale multilingual dataset for headline genera...
Coreference resolution models are often evaluated on multiple datasets. ...
With adversarial or otherwise normal prompts, existing large language mo...
State-of-the-art abstractive summarization systems frequently hallucinat...
Many state-of-the-art natural language understanding (NLU) models are ba...
In text summarization and simplification, system outputs must be evaluat...
Current language generation models suffer from issues such as repetition...
Transformer models pre-trained with a masked-language-modeling objective...
Understanding natural language requires common sense, one aspect of whic...
Idioms are unlike other phrases in two important ways. First, the words ...
Authorship attribution is the problem of identifying the most plausible ...
Reasoning in a temporal knowledge graph (TKG) is a critical task for inf...
Despite considerable advancements with deep neural language models (LMs)...
Due to the common belief that training deep transformers from scratch re...
Word embeddings are reliable feature representations of words used to ob...
The Winograd Schema Challenge (WSC) and variants inspired by it have bec...
Word embeddings are trained to predict word cooccurrence statistics, whi...
Neural abstractive summarization systems have achieved promising progres...
Inferring missing facts in temporal knowledge graphs (TKGs) is a fundame...
Pre-trained neural abstractive summarization systems have dominated extr...
Uncontextualized word embeddings are reliable feature representations of...
Modeling semantic plausibility requires commonsense knowledge about the ...
Variational Autoencoders (VAEs) hold great potential for modelling text,...
Sentence position is a strong feature for news summarization, since the ...
Referring Expression Generation (REG) is the task of generating contextu...
We present the first sentence simplification model that learns explicit ...
Existing controllable text generation systems rely on annotated attribut...
Coherence is an important aspect of text quality and is crucial for ensu...
The standard loss function used to train neural network classifiers, cat...
We present two architectures for multi-task learning with neural sequenc...
Recently, a large number of neural mechanisms and models have been propo...
The NLP and ML communities have long been interested in developing model...
We introduce a new benchmark task for coreference resolution, Hard-CoRe,...
We introduce an automatic system that achieves state-of-the-art results ...
In this work, we propose a novel method for training neural networks to ...
We introduce the task of predicting adverbial presupposition triggers su...
We present an approach to event coreference resolution by developing a g...
Commonsense knowledge bases such as ConceptNet represent knowledge in th...
Recent work in learning vector-space embeddings for multi-relational dat...