How Many and Which Training Points Would Need to be Removed to Flip this Prediction?

02/04/2023
by Jinghan Yang, et al.

We consider the problem of identifying a minimal subset of training data 𝒮_t such that, had the instances comprising 𝒮_t been removed prior to training, the prediction for a given test point x_t would have been different. Identifying such a set may be of interest for a few reasons. First, the cardinality of 𝒮_t provides a measure of robustness (if |𝒮_t| is small for x_t, we might be less confident in the corresponding prediction), which we show is correlated with but complementary to predicted probabilities. Second, interrogation of 𝒮_t may provide a novel mechanism for contesting a particular model prediction: if one can make the case that the points in 𝒮_t are wrongly labeled or irrelevant, this may argue for overturning the associated prediction. Identifying 𝒮_t by brute force is intractable. We propose comparatively fast approximation methods to find 𝒮_t based on influence functions, and find that, for simple convex text classification models, these approaches can often successfully identify relatively small sets of training examples which, if removed, would flip the prediction.
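
To make the influence-function idea concrete, here is a minimal sketch (not the authors' code; all function names and the greedy search strategy are illustrative assumptions) for L2-regularized logistic regression: estimate how removing each training point would shift the test margin via a one-Newton-step influence approximation, then greedily drop the points pushing hardest toward the decision boundary and retrain to verify the flip.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_model(X, y, C=1.0):
    # Convex base model: L2-regularized logistic regression without an
    # intercept, so the Hessian below matches the parameters exactly.
    return LogisticRegression(C=C, fit_intercept=False, solver="lbfgs").fit(X, y)

def margin_shift_if_removed(model, X, y, x_t, C=1.0):
    """Influence-style (one-Newton-step) estimate of how the test margin
    w^T x_t would change if each training point were removed."""
    w = model.coef_.ravel()
    p = 1.0 / (1.0 + np.exp(-X @ w))            # training-set probabilities
    S = p * (1.0 - p)
    H = (X * S[:, None]).T @ X + np.eye(X.shape[1]) / C  # loss Hessian
    grads = (p - y)[:, None] * X                # per-example loss gradients
    # Removing point i moves the optimum by roughly H^{-1} grad_i,
    # shifting the test margin by x_t^T H^{-1} grad_i.
    return grads @ np.linalg.solve(H, x_t)

def smallest_flip_set(X, y, x_t, max_k=50, C=1.0):
    """Greedy approximation to S_t: drop the points whose removal pushes
    the test margin hardest toward the boundary, retrain, and stop once
    the predicted label flips. Returns removed indices, or None."""
    model = fit_model(X, y, C)
    base = model.predict(x_t.reshape(1, -1))[0]
    shifts = margin_shift_if_removed(model, X, y, x_t, C)
    # If the current prediction is the positive class, prefer points whose
    # removal decreases the margin (most negative shift), and vice versa.
    order = np.argsort(shifts) if base == 1 else np.argsort(shifts)[::-1]
    for k in range(1, max_k + 1):
        keep = np.setdiff1d(np.arange(len(y)), order[:k])
        if fit_model(X[keep], y[keep], C).predict(x_t.reshape(1, -1))[0] != base:
            return order[:k]
    return None
```

The retraining step is what keeps the search honest: the influence scores only rank candidates, and the flip is confirmed on an actual refit, which is why the method is tractable where brute-force subset enumeration is not.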
