Dialogue-related Machine Reading Comprehension requires language models ...
In recent years, great advances in pre-trained language models (PLMs) ha...
This paper studies Chinese Spelling Correction (CSC), which aims to dete...
In recent years, the use of multi-modal pre-trained Transformers has led...
The optimizer is an essential component for the success of deep learning, wh...
BatGPT is a large-scale language model designed and trained jointly by W...
Machine reading comprehension (MRC) poses new challenges over logical re...
Universal Information Extraction (UIE) has been introduced as a unified ...
As the capabilities of large language models (LLMs) continue to advance,...
In this paper, we study Chinese Spelling Correction (CSC) as a joint dec...
With the widespread use of large language models (LLMs) in NLP tasks, re...
Multi-party dialogues are more difficult for models to understand than o...
General chat models, like ChatGPT, have attained impressive capability t...
Large Language Models (LLMs) serve as a powerful Reader in the Retrieve-then...
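The entry above names the Retrieve-then-Read pipeline; a minimal sketch of that control flow follows, with `retriever` and `reader` as assumed callables (hypothetical stand-ins, not a specific library API):

    def retrieve_then_read(question, retriever, reader, k=5):
        # Step 1: fetch the top-k passages for the question.
        passages = retriever(question, top_k=k)
        context = "\n\n".join(passages)
        # Step 2: the reader (e.g., an LLM) answers conditioned on the passages.
        prompt = "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:"
        return reader(prompt)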
Multilingual understanding models (i.e., encoder-based models), pre-trained via ma...
Dialogue response generation requires an agent to generate a response ac...
Based on the remarkable achievements of pre-trained language models in a...
Commonsense fact verification, as a challenging branch of commonsense qu...
Named Entity Recognition (NER) is a cornerstone NLP task while its robus...
Beyond the success story of adversarial training (AT) in the recent text...
Large language models (LLMs) have shown impressive performance on comple...
Training machines to understand natural language and interact with human...
Open-Domain Question Answering (ODQA) requires models to answer factoid ...
Discriminative pre-trained language models (PLMs) learn to predict origi...
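One common discriminative pre-training objective is ELECTRA-style replaced-token detection; the minimal sketch below is a generic illustration under that assumption, not the method of the truncated abstract above:

    def rtd_labels(original_ids, corrupted_ids):
        # 1 where a generator replaced the token, 0 where it is original;
        # a discriminator is trained on these with per-token binary cross-entropy.
        return [int(o != c) for o, c in zip(original_ids, corrupted_ids)]

    original = [5, 17, 42, 8]
    corrupted = [5, 99, 42, 8]   # the token at position 1 was swapped
    assert rtd_labels(original, corrupted) == [0, 1, 0, 0]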
Multiple pre-training objectives fill the gap in the understanding c...
Though offering amazing contextualized token-level representations, curr...
In the open-retrieval conversational machine reading (OR-CMR) task, machines...
Based on the tremendous success of pre-trained language models (PrLMs) f...
Commonsense reasoning is an appealing topic in natural language processi...
Masked Language Modeling (MLM) has been widely used as the denoising obj...
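As a reference for the MLM entry above, here is a minimal sketch of the standard BERT-style denoising corruption (the mask id, vocabulary size, and 80/10/10 split are conventional assumptions, not values from the paper):

    import random

    MASK_ID = 103        # assumed [MASK] token id
    VOCAB_SIZE = 30522   # assumed vocabulary size
    MASK_PROB = 0.15     # conventional masking rate

    def mlm_corrupt(token_ids):
        # Returns (corrupted_ids, labels); labels are -100 where no loss is computed.
        corrupted, labels = [], []
        for tok in token_ids:
            if random.random() < MASK_PROB:
                labels.append(tok)  # the model must reconstruct the original token
                r = random.random()
                if r < 0.8:
                    corrupted.append(MASK_ID)                       # 80%: [MASK]
                elif r < 0.9:
                    corrupted.append(random.randrange(VOCAB_SIZE))  # 10%: random token
                else:
                    corrupted.append(tok)                           # 10%: unchanged
            else:
                corrupted.append(tok)
                labels.append(-100)
        return corrupted, labels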
An ultimate language system aims at high generalization and robustne...
Multi-turn dialogue modeling as a challenging branch of natural language...
As a fundamental natural language processing task and one of the core knowle...
As a broad and major category in machine reading comprehension (MRC), th...
Recently, the problem of robustness of pre-trained language models (PrLM...
Privacy protection is an important and pressing topic in Federated Lea...
Without labeled question-answer pairs for necessary training, unsupervis...
The aspect-based sentiment analysis (ABSA) task consists of three typical su...
Unsupervised constituency parsing has been explored much but is still fa...
Tangled multi-party dialogue context leads to challenges for dialogue re...
Machine reading comprehension is a heavily studied research and test fie...
Training dense passage representations via contrastive learning (CL) has...
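For the contrastive-learning entry above, a minimal in-batch-negatives InfoNCE loss in PyTorch reads as follows (the embedding sizes and temperature are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def in_batch_contrastive_loss(q, p, temperature=0.05):
        # q, p: (batch, dim) L2-normalized query/passage embeddings; the passage
        # at the same batch index is the positive, all others act as negatives.
        scores = q @ p.T / temperature          # (batch, batch) similarities
        targets = torch.arange(q.size(0))       # positives lie on the diagonal
        return F.cross_entropy(scores, targets)

    q = F.normalize(torch.randn(8, 128), dim=-1)
    p = F.normalize(torch.randn(8, 128), dim=-1)
    loss = in_batch_contrastive_loss(q, p)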
Multi-party dialogue machine reading comprehension (MRC) raises an even ...
In this paper, we leverage pre-trained language models (PLMs) to precise...
Attention scorers have achieved success in parsing tasks like semantic a...
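A widely used attention scorer in dependency-style parsing is the biaffine scorer of Dozat and Manning; the sketch below is a simplified bilinear-plus-bias variant for illustration, not the scorer of the paper above:

    import torch
    import torch.nn as nn

    class BiaffineScorer(nn.Module):
        # Scores every (head, dependent) pair: s_ij = h_i^T W d_j + b^T d_j.
        def __init__(self, dim):
            super().__init__()
            self.W = nn.Parameter(torch.zeros(dim, dim))
            self.b = nn.Parameter(torch.zeros(dim))

        def forward(self, heads, deps):
            # heads, deps: (batch, seq_len, dim) token representations
            pair = heads @ self.W @ deps.transpose(1, 2)   # (batch, seq, seq)
            bias = (deps @ self.b).unsqueeze(1)            # (batch, 1, seq)
            return pair + bias

    scorer = BiaffineScorer(64)
    h = torch.randn(2, 10, 64)
    arc_scores = scorer(h, h)   # (2, 10, 10) arc scores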
Multi-party multi-turn dialogue comprehension brings unprecedented chall...
Multi-party dialogue machine reading comprehension (MRC) brings tremendo...
Open-domain Question Answering (ODQA) has achieved significant results i...
Pre-trained language models (PrLMs) have to carefully manage input units ...