Avoiding catastrophic forgetting in mitigating model biases in sentence-pair classification with elastic weight consolidation

04/29/2020
by James Thorne, et al.

The biases present in training datasets have been shown to affect models for a number of tasks such as natural language inference (NLI) and fact verification. While fine-tuning models on additional data has been used to mitigate such biases, a common issue is that of catastrophic forgetting of the original task. In this paper, we show that elastic weight consolidation (EWC) allows fine-tuning of models to mitigate biases for NLI and fact verification while being less susceptible to catastrophic forgetting. In our evaluation on fact verification systems, we show that fine-tuning with EWC Pareto dominates standard fine-tuning, yielding models with lower levels of forgetting on the original task for equivalent gains in accuracy on the fine-tuned task. Additionally, we show that systems trained on NLI can be fine-tuned to improve their accuracy on stress test challenge tasks with minimal loss in accuracy on the MultiNLI dataset despite greater domain shift.
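For context, EWC (Kirkpatrick et al., 2017) augments the fine-tuning loss with a quadratic penalty that anchors parameters to their original-task values, weighted by a diagonal Fisher information estimate: L(θ) = L_new(θ) + (λ/2) Σ_i F_i (θ_i − θ*_i)². The sketch below is a minimal, hedged illustration of that penalty in PyTorch; it assumes a classifier that accepts a single input tensor, and names such as EWCPenalty and lambda_ewc are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn.functional as F


class EWCPenalty:
    """Quadratic penalty anchoring parameters to their original-task values,
    weighted by a diagonal (empirical) Fisher information estimate."""

    def __init__(self, model, original_task_loader, device="cpu"):
        # Snapshot of the parameters learned on the original task (theta*).
        self.anchor = {n: p.detach().clone()
                       for n, p in model.named_parameters() if p.requires_grad}
        self.fisher = {n: torch.zeros_like(p) for n, p in self.anchor.items()}

        # Empirical Fisher approximation: average of squared gradients of the
        # original-task loss over batches from the original training data.
        model.eval()
        n_batches = 0
        for inputs, labels in original_task_loader:
            model.zero_grad()
            logits = model(inputs.to(device))
            loss = F.cross_entropy(logits, labels.to(device))
            loss.backward()
            for n, p in model.named_parameters():
                if p.grad is not None and n in self.fisher:
                    self.fisher[n] += p.grad.detach() ** 2
            n_batches += 1
        for n in self.fisher:
            self.fisher[n] /= max(n_batches, 1)

    def __call__(self, model):
        # (1/2) * sum_i F_i * (theta_i - theta*_i)^2
        penalty = 0.0
        for n, p in model.named_parameters():
            if n in self.fisher:
                penalty = penalty + 0.5 * (self.fisher[n] * (p - self.anchor[n]) ** 2).sum()
        return penalty


# During fine-tuning on the bias-mitigation data, the penalty is added to the
# new task loss with a strength hyperparameter lambda_ewc:
#   loss = task_loss + lambda_ewc * ewc_penalty(model)
```

Increasing lambda_ewc trades accuracy on the fine-tuning data for retention of the original task, which is the trade-off the paper evaluates as a Pareto frontier.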
