Finetuning large language models (LLMs) on instructions leads to vast pe...
The explosive growth of language models and their applications has led ...
Pretraining is the preliminary and fundamental step in developing capabl...
As language models grow ever larger, the need for large-scale high-quali...
We study the design decisions of publicly available instruction tuning m...
Quantization, knowledge distillation, and magnitude pruning are among th...
Studies of active learning traditionally assume the target and source da...
Knowledge-dependent tasks typically use two sources of knowledge: parame...
Retrieval is a core component for open-domain NLP tasks. In open-domain ...
This paper describes the participation of the UvA.ILPS group at the TREC CAs...
Conversational passage retrieval relies on question rewriting to modify ...
Existing methods for open-retrieval question answering in lower resource...
The dependency between an adequate question formulation and correct answ...
We introduce a new dataset for Question Rewriting in Conversational Cont...
Task-agnostic forms of data augmentation have proven widely effective in...
Recent work (Feng et al., 2018) establishes the presence of short, unint...
Progress in cross-lingual modeling depends on challenging, realistic, an...
Conversational question answering (QA) requires answers conditioned on t...
To produce a domain-agnostic question answering model for the Machine Re...
LSTMs have become a basic building block for many deep NLP models. In re...