While large language models (LLMs) have demonstrated remarkable capabili...
Action models, which take the form of precondition/effect axioms, facili...
Modeling discourse – the linguistic phenomena that go beyond individual...
Most existing text generation models follow the sequence-to-sequence par...
Recent advances in large language models have enabled them to reach a le...
ChatGPT and GPT-4 have attracted substantial interest from both academic...
Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive cha...
Relation extraction is a central task in natural language processing (NL...
Abstract Meaning Representation (AMR) parsing aims to predict an AMR gra...
Despite low latency, non-autoregressive machine translation (NAT) suffer...
Few-shot Named Entity Recognition (NER) is imperative for entity tagging...
In this technical report, we introduce Effidit (Efficient and Intelligen...
Although pre-trained sequence-to-sequence models have achieved great suc...
Aspect category sentiment analysis has attracted increasing research att...
Although pre-training models have achieved great success in dialogue gen...
Natural language inference (NLI) is a fundamental NLP task, investigatin...
Contextualized representations give significantly improved results for a...
The success of pre-trained contextualized language models such as BERT m...
Machine reading is a fundamental task for testing the capability of natu...
Non-task-oriented dialogue systems have achieved great success in recent...
Contextualized representations trained over large raw text data have giv...
How to build high-quality word embeddings is a fundamental research ques...
CRF has been used as a powerful model for statistical sequence labeling...