Learning In-context Learning for Named Entity Recognition

05/18/2023 Β· by Jiawei Chen, et al.
Named entity recognition (NER) in real-world applications suffers from the diversity of entity types, the emergence of new entity types, and the lack of high-quality annotations. To address these problems, this paper proposes an in-context learning-based NER approach, which can effectively inject in-context NER ability into PLMs and recognize entities of novel types on the fly using only a few demonstrative instances. Specifically, we model PLMs as a meta-function Ξ»_(instruction, demonstrations, text). β„³, so that a new entity extractor can be implicitly constructed by applying a new instruction and demonstrations to the PLM, i.e., (Ξ». β„³)(instruction, demonstrations) β†’ β„±, where β„± is a new entity extractor, i.e., β„±: text β†’ entities. To inject this in-context NER ability into PLMs, we propose a meta-function pre-training algorithm, which pre-trains PLMs by comparing the (instruction, demonstration)-initialized extractor with a surrogate golden extractor. Experimental results on four few-shot NER datasets show that our method effectively injects in-context NER ability into PLMs and significantly outperforms the PLM+fine-tuning counterparts.
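To make the meta-function view concrete, here is a minimal Python sketch (not the authors' released code) of how applying an instruction and a few demonstrations to a generic seq2seq PLM implicitly yields an entity extractor β„±: text β†’ entities. The model checkpoint, prompt format, and helper names (build_prompt, extract_entities) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of in-context NER as a meta-function application:
# (Ξ».β„³)(instruction, demonstrations) behaves as an extractor β„±(text).
# "t5-base" is a placeholder PLM; the paper's pre-trained model and
# prompt layout may differ.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def build_prompt(instruction: str,
                 demonstrations: list[tuple[str, str]],
                 text: str) -> str:
    """Concatenate instruction, demonstrations, and the target text,
    mirroring the application (Ξ».β„³)(instruction, demonstrations)(text)."""
    demo_block = "\n".join(f"Text: {t}\nEntities: {e}"
                           for t, e in demonstrations)
    return f"{instruction}\n{demo_block}\nText: {text}\nEntities:"

def extract_entities(instruction, demonstrations, text):
    # The (instruction, demonstrations)-conditioned PLM acts as β„±.
    prompt = build_prompt(instruction, demonstrations, text)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# A novel entity type, defined only through the demonstrations:
instruction = "Extract all 'disease' entities from the text."
demos = [("Patients with diabetes were excluded.", "diabetes"),
         ("He was diagnosed with influenza.", "influenza")]
print(extract_entities(instruction, demos,
                       "The trial enrolled asthma patients."))
```

Under this reading, the paper's meta-function pre-training would push the extractor implicitly built from (instruction, demonstrations) toward a surrogate golden extractor, i.e., one explicitly tuned on the same demonstrations; the sketch above only shows the inference-time application.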
