Bias amplification is a phenomenon in which models increase imbalances p...
What is the effect of releasing a preprint of a paper before it is submi...
Few-shot fine-tuning and in-context learning are two alternative strateg...
Piano fingering – knowing which finger to use to play each note in a mus...
While fine-tuned language models perform well on many tasks, they were a...
Recently, the community has achieved substantial progress on many common...
Large amounts of training data are one of the major reasons for the high...
Understanding the relations between entities denoted by NPs in text is a...
We explore Few-Shot Learning (FSL) for Relation Classification (RC). Foc...
The Winograd Schema (WS) has been proposed as a test for measuring commo...
Contrastive explanations clarify why an event occurred in contrast to an...
Multilingual pretrained language models have demonstrated remarkable zer...
Recent works have demonstrated that multilingual BERT (mBERT) learns ric...
Crowdsourcing has eased and scaled up the collection of linguistic annot...
Pretrained Language Models (LMs) have been shown to possess significant ...
Contextualized word representations, such as ELMo and BERT, were shown t...
A growing body of work makes use of probing in order to investigate the ...
The ability to control for the kinds of information encoded in neural re...
Standard test sets for supervised learning evaluate in-distribution gene...
Recent success of pre-trained language models (LMs) has spurred widespre...
Most current NLP systems have little knowledge about quantitative attrib...
We provide the first computational treatment of fused-heads construction...
Recent advances in Representation Learning and Adversarial Training seem...
Latent factor models for recommender systems represent users and items a...
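For reference, the last entry above concerns latent factor models. Below is a minimal sketch of the standard setup that phrase describes (users and items embedded as low-dimensional vectors, with affinity scored by a dot product); the sizes, names, and scoring choice here are illustrative assumptions, not the paper's implementation.

import numpy as np

# Minimal latent-factor sketch: users and items are embedded as
# low-dimensional vectors; affinity is scored with a dot product.
# Sizes and variable names are illustrative assumptions.
rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16

user_vecs = rng.normal(scale=0.1, size=(n_users, dim))
item_vecs = rng.normal(scale=0.1, size=(n_items, dim))

def predict(u, i):
    # Predicted affinity of user u for item i.
    return float(user_vecs[u] @ item_vecs[i])

# Rank all items for user 0 by predicted affinity.
scores = user_vecs[0] @ item_vecs.T
top_items = np.argsort(-scores)[:10]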