GRASP: A Goodness-of-Fit Test for Classification Learning

09/05/2022
by   Adel Javanmard, et al.

Classifier performance is often measured in terms of average accuracy on test data. Despite being a standard measure, average accuracy fails to characterize the fit of the model to the underlying conditional law of labels given the feature vector (Y|X), e.g., due to model misspecification, overfitting, or high dimensionality. In this paper, we consider the fundamental problem of assessing the goodness-of-fit of a general binary classifier. Our framework makes no parametric assumption on the conditional law Y|X and treats it as a black-box oracle model that can be accessed only through queries. We formulate the goodness-of-fit assessment problem as a tolerance hypothesis test of the form H_0: 𝔼[D_f(Bern(η(X)) ‖ Bern(η̂(X)))] ≤ τ, where D_f is an f-divergence and η(x), η̂(x) respectively denote the true and estimated likelihood that a feature vector x admits a positive label. We propose a novel test, called GRASP, for testing H_0 that works in finite-sample settings regardless of the feature distribution (distribution-free). We also propose model-X GRASP, designed for model-X settings where the joint distribution of the feature vector is known; model-X GRASP uses this distributional information to achieve better power. We evaluate the performance of our tests through extensive numerical experiments.
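To make the quantity inside H_0 concrete, the following sketch computes a Monte Carlo estimate of 𝔼[D_f(Bern(η(X)) ‖ Bern(η̂(X)))] using the KL divergence as the f-divergence. This is an illustration of the tested quantity only, not the GRASP test itself; the arrays `eta_true` and `eta_hat` are hypothetical stand-ins for η(X) and η̂(X) evaluated on a sample of feature vectors.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence D_KL(Bern(p) || Bern(q)), computed elementwise.

    Probabilities are clipped away from 0 and 1 for numerical stability.
    """
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Hypothetical true and estimated conditional probabilities on a sample of X.
# In practice eta_true is unknown; GRASP assesses H_0 without observing it.
rng = np.random.default_rng(0)
eta_true = rng.uniform(0.1, 0.9, size=1000)                # stand-in for eta(X)
eta_hat = np.clip(eta_true + rng.normal(0.0, 0.05, size=1000), 0.01, 0.99)

# Monte Carlo estimate of E[D_f(Bern(eta(X)) || Bern(eta_hat(X)))]
divergence_estimate = bernoulli_kl(eta_true, eta_hat).mean()

tau = 0.05  # tolerance level in H_0 (illustrative choice)
print(divergence_estimate, divergence_estimate <= tau)
```

Note that H_0 holds when this population-level divergence is below the tolerance τ; the point of the paper's tests is to decide this from data without access to η(X).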
