Selection Bias Induced Spurious Correlations in Large Language Models

07/18/2022

by Emily McMilin, et al.

In this work we show how large language models (LLMs) can learn statistical dependencies between otherwise unconditionally independent variables as a result of dataset selection bias. To demonstrate the effect, we developed a masked gender task that can be applied to BERT-family models and that reveals spurious correlations between predicted gender pronouns and a variety of seemingly gender-neutral variables, such as date and location, in pre-trained (unmodified) BERT-large and RoBERTa-large models. Finally, we provide an online demo, inviting readers to experiment further.
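A minimal sketch of what such a masked-pronoun probe could look like, assuming it amounts to masking a gendered pronoun and comparing its predicted probability while a nominally gender-neutral variable (here, the date) is varied. It uses the Hugging Face fill-mask pipeline; the template sentence and helper function below are illustrative assumptions, not the paper's exact task or code.

```python
# Hypothetical masked-pronoun probe: vary a seemingly gender-neutral variable
# (the year) and compare the model's probability for "he" vs "she".
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-large-uncased")

# Illustrative template; not taken from the paper.
TEMPLATE = "In {year}, {mask} worked as a doctor in the city hospital."

def pronoun_scores(year, pronouns=("he", "she")):
    """Return the fill-mask probability of each candidate pronoun for a given year."""
    text = TEMPLATE.format(year=year, mask=fill_mask.tokenizer.mask_token)
    results = fill_mask(text, targets=list(pronouns))
    return {r["token_str"].strip(): r["score"] for r in results}

# If date and pronoun were truly independent, these distributions should
# stay roughly constant as the year changes.
for year in (1921, 1971, 2021):
    print(year, pronoun_scores(year))
```

Swapping in `roberta-large` for the model name applies the same probe to RoBERTa, since the pipeline handles each tokenizer's mask token automatically.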
