Much of the previous work towards digital agents for graphical user inte...
Large-scale multi-modal pre-training models such as CLIP and PaLI exhibi...
Creating labeled natural language training data is expensive and require...
We propose a simple and effective re-ranking method for improving passag...
We introduce CM3, a family of causally masked generative models trained ...
We introduce HTLM, a hyper-text language model trained on a large-scale ...
Short textual descriptions of entities provide summaries of their key at...
We point out that common evaluation practices for cross-document corefer...
Coreference resolution has been mostly investigated within a single docu...
Current models for Word Sense Disambiguation (WSD) struggle to disambigu...
Recent evaluation protocols for Cross-document (CD) coreference resoluti...
Decisions of complex language understanding models can be rationalized b...
We present a method to represent input texts by contextualizing them joi...
We apply BERT to coreference resolution, achieving strong improvements o...
Language model pretraining has led to significant performance gains but ...
We present SpanBERT, a pre-training method that is designed to better re...
Reasoning about implied relationships (e.g. paraphrastic, common sense, ...
We present TriviaQA, a challenging reading comprehension dataset contain...