Tool-augmented large language models (LLMs) have achieved remarkable pro...
We uncover a systematic bias in the evaluation paradigm of adopting larg...
Weakly supervised learning is a popular approach for training machine le...
Continual learning (CL) aims to continually acquire new knowledge over time...
Joint entity and relation extraction (JERE) is one of the most important...
Figures of speech, such as metaphor and irony, are ubiquitous in literat...
For high-resource languages like English, text classification is a well-...
Training deep neural networks (DNNs) with weak supervision has been a ho...
Incorrect labels in training data occur when human annotators make mista...
For many new application domains for data-to-text generation, the main o...
Distant and weak supervision make it possible to obtain large amounts of labeled tr...
Multilingual transformer models like mBERT and XLM-RoBERTa have obtained...
The lack of labeled training data has limited the development of natural...
This paper describes our approach in DSTC 8 Track 4: Schema-Guided Dialo...
Altering the content of an image with photo editing tools is a tedious t...