Meeting summarization has emerged as a promising technique for providing...
Traditional multitask learning methods can generally only exploit common...
Diffusion models developed on top of powerful text-to-image generation m...
Multilingual neural machine translation has witnessed remarkable progres...
Generative agents that simulate human society show tremendous potential ...
Multi-document scientific summarization can extract and organize importa...
Although large-scale video-language pre-training models, which usually b...
Electroencephalography-to-Text generation (EEG-to-Text), which aims to d...
Previous work on controllable text generation has explored the idea of c...
Multi-aspect controllable text generation is a more challenging and prac...
Although all-in-one-model multilingual neural machine translation (MNMT)...
The standard BERT adopts subword-based tokenization, which may break a w...
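The subword behaviour this entry refers to is easy to reproduce with the Hugging Face transformers library. The snippet below is a minimal illustrative sketch (the library choice and example word are assumptions, not taken from the abstract) showing how BERT's WordPiece tokenizer can break a single word into several pieces.

```python
from transformers import BertTokenizer

# Load the standard uncased BERT WordPiece tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A rare word is typically split into multiple subword pieces; continuation
# pieces carry a "##" prefix. The exact split depends on the vocabulary.
print(tokenizer.tokenize("unaffable"))
```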
With the development of dialogue systems and natural language generation...
Current dialogue summarization systems usually encode the text with a nu...
Recently, various neural encoder-decoder models pioneered by Seq2Seq fra...
Sequence-to-sequence methods have achieved promising results for textual...
Abstractive dialogue summarization is the task of capturing the highligh...
In this paper, we focus on a new practical task, document-scale text con...
We present CodeBERT, a bimodal pre-trained model for programming languag...
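CodeBERT's released checkpoint is available on the Hugging Face hub as microsoft/codebert-base. The sketch below only shows one way to load it and encode a (natural language, code) pair with the transformers library; the example inputs are illustrative assumptions, not from the paper.

```python
from transformers import AutoTokenizer, AutoModel

# Load the publicly released CodeBERT checkpoint.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

# CodeBERT is bimodal, so inputs are typically a natural-language/code pair.
inputs = tokenizer(
    "return the maximum value",
    "def max_val(xs): return max(xs)",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```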
Neural semantic parsing has achieved impressive results in recent years,...
Although Seq2Seq models for table-to-text generation have achieved remar...
Conversational semantic parsing over tables requires knowledge acquiring...
Machine reading comprehension (MRC) requires reasoning about both the kn...
We present a generative model to map natural language questions into SQL...
Target-dependent sentiment classification remains a challenge: modeling ...