With the dramatically increased number of parameters in language models,...
Prompt-based fine-tuning has boosted the performance of Pre-trained Language...
Pre-trained language models (PLMs) like BERT have made significant progress...
Structured pruning has been extensively studied on monolingual pre-trained...
Pre-trained Language Models (PLMs) have achieved remarkable performance ...
Pre-trained Language Models (PLMs) have achieved great success in various...
Recent pretrained language models extend from millions to billions of parameters...
Vision-language pre-training (VLP) on large-scale image-text pairs has recently...
Recent studies on learning multilingual representations have achieved...
Pre-trained self-supervised models such as BERT have achieved striking success...
Conventional Knowledge Graph Completion (KGC) assumes that all test entities...
In this paper, we focus on the task of generating a pun sentence given a...
Unsupervised text style transfer aims to alter text styles while preserving...
Unsupervised text style transfer aims to transfer the underlying style of...
Cross-lingual word embeddings aim to capture common linguistic regularities...
Word Sense Disambiguation (WSD) aims to identify the correct meaning of...