Vision-language pre-training (VLP) methods have blossomed recently, and ...
Although large language models (LLMs) demonstrate impressive performance...
Teaching assistants have played essential roles in the long history of e...
We present Visual Knowledge oriented Programming platform (VisKoP), a kn...
Deep text understanding, which requires the connections between a given ...
The unprecedented performance of large language models (LLMs) necessitat...
Event extraction (EE) is a crucial task aiming at extracting events from...
Entity linking models have achieved significant success via utilizing pr...
Explainable question answering (XQA) aims to answer a given question and...
The robustness to distribution changes ensures that NLP models can be su...
While there is abundant research on evaluating ChatGPT on natural ...
Student modeling, the task of inferring a student's learning characteris...
Despite the recent emergence of video captioning models, how to generate...
Open Information Extraction models have shown promising results with suf...
Answering complex logical queries on incomplete knowledge graphs is a ch...
Transformer-based pre-trained language models have demonstrated superior...
The diverse relationships among real-world events, including coreference...
Pre-trained Language Models (PLMs) which are trained on large text corpu...
Web and artificial intelligence technologies, especially semantic web an...
Knowledge graphs, as the cornerstone of many AI applications, usually fa...
Document-level relation extraction with graph neural networks faces a fu...
The recent prevalence of pretrained language models (PLMs) has dramatica...
Recently, there has emerged a class of task-oriented dialogue (TOD) data...
Adaptive learning aims to stimulate and meet the needs of individual lea...
A challenge on Semi-Supervised and Reinforced Task-Oriented Dialog Syste...
Subject to the semantic gap lying between natural and formal language, n...
Recognizing facts is the most fundamental step in making judgments, henc...
Dependency parsing aims to extract syntactic dependency structure or sem...
Self-supervised entity alignment (EA) aims to link equivalent entities a...
Multi-hop knowledge graph (KG) reasoning has been widely studied in rece...
To enhance research on multimodal knowledge base and multimodal informat...
Prompt tuning (PT) is a promising parameter-efficient method to utilize ...
How can pre-trained language models (PLMs) learn universal representatio...
Semantic parsing in KBQA aims to parse natural language questions into l...
Existing technologies expand BERT from different perspectives, e.g. desi...
As an effective approach to tune pre-trained language models (PLMs) for ...
Tuning pre-trained language models (PLMs) with task-specific prompts has...
Few-shot Named Entity Recognition (NER) exploits only a handful of annot...
Wikipedia abstract generation aims to distill a Wikipedia abstract from ...
Entity Matching (EM) aims at recognizing entity records that denote the ...
Event extraction (EE) has considerably benefited from pre-trained langua...
Multi-hop Question Answering (QA) is a challenging task because it requi...
Multi-hop reasoning has been widely studied in recent years to obtain mo...
Pre-trained Language Models (PLMs) have proven to be beneficial for vari...
Entity alignment (EA) aims at building a unified Knowledge Graph (KG) of...
Multi-hop reasoning has been widely studied in recent years to seek an e...
Traditional neural networks represent everything as a vector, and are ab...
Complex question answering over knowledge base (Complex KBQA) is challen...
Knowledge graphs (KGs) contain an instance-level entity graph and an on...
Event detection (ED), which identifies event trigger words and classifie...