Learning Stable Classifiers by Transferring Unstable Features

06/15/2021
by Yujia Bao, et al.

We study transfer learning in the presence of spurious correlations. We experimentally demonstrate that directly transferring the stable feature extractor learned on the source task may not eliminate these biases for the target task. However, we hypothesize that the unstable features in the source task and those in the target task are directly related. By explicitly informing the target classifier of the source task's unstable features, we can regularize the biases in the target task. Specifically, we derive a representation that encodes the unstable features by contrasting different data environments in the source task. On the target task, we cluster the data based on this representation and achieve robustness by minimizing the worst-case risk across all clusters. We evaluate our method on both text and image classification tasks. Empirical results demonstrate that our algorithm is able to maintain robustness on the target task, outperforming the best baseline by 22.9% across transfer settings. Our code is available at https://github.com/YujiaBao/Tofu.
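
The target-task stage described in the abstract amounts to clustering examples by their (transferred) unstable-feature representation and then training a classifier against the worst-performing cluster. Below is a minimal sketch of that idea, not the authors' Tofu implementation: the names `unstable_encoder`, `n_clusters`, the toy data, and the training loop are all illustrative assumptions, and the real method's contrastive derivation of the unstable-feature representation on the source task is not shown.

```python
# Sketch: cluster target data in an assumed unstable-feature space, then
# minimize the worst-case (maximum) per-cluster risk, group-DRO style.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


def cluster_by_unstable_features(unstable_encoder, X, n_clusters=2):
    """Assign each target example to a cluster in the unstable-feature space."""
    with torch.no_grad():
        z = unstable_encoder(X)  # representation assumed to encode unstable features
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.numpy())
    return torch.as_tensor(labels)


def worst_case_loss(model, X, y, cluster_ids, criterion=nn.CrossEntropyLoss()):
    """Return the maximum empirical risk over the inferred clusters."""
    losses = []
    for c in cluster_ids.unique():
        mask = cluster_ids == c
        losses.append(criterion(model(X[mask]), y[mask]))
    return torch.stack(losses).max()


if __name__ == "__main__":
    # Toy target task: random features and binary labels.
    torch.manual_seed(0)
    X = torch.randn(256, 16)
    y = torch.randint(0, 2, (256,))

    # Stand-in for the representation transferred from the source task.
    unstable_encoder = nn.Linear(16, 4)
    clusters = cluster_by_unstable_features(unstable_encoder, X)

    # Target classifier trained on the worst-case cluster risk.
    model = nn.Linear(16, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for step in range(100):
        opt.zero_grad()
        loss = worst_case_loss(model, X, y, clusters)
        loss.backward()
        opt.step()
```

The key design point the sketch illustrates is that the clusters play the role of the missing environment labels on the target task: once examples are grouped by inferred unstable features, minimizing the maximum per-group loss discourages the classifier from relying on those features.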
