Testing whether a Learning Procedure is Calibrated

12/23/2020
by Jon Cockayne et al.

A learning procedure takes as input a dataset and performs inference for the parameters θ of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution, representing uncertainty about θ after seeing the dataset. Bayesian inference is a prime example of such a procedure but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to be considered calibrated, in the sense that the true data-generating parameters are plausible as samples from its distributional output. A learning procedure that is calibrated need not be statistically efficient and vice versa. A hypothesis-testing framework is developed in order to assess, using simulation, whether a learning procedure is calibrated. Finally, we exploit our framework to test the calibration of some learning procedures that are motivated as being approximations to Bayesian inference but are nevertheless widely used.
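To make the idea of simulation-based assessment concrete, below is a minimal sketch (not the authors' specific test) of a generic calibration check for a learning procedure with distributional output. It repeatedly draws a "true" parameter, simulates data from it, runs the learning procedure, and tests whether the true parameter looks like a plausible sample from the returned distribution via the uniformity of its probability integral transform. The conjugate Normal model, sample sizes, and function names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative setup: theta ~ N(0, 1) prior, y_i | theta ~ N(theta, sigma^2).
# The "learning procedure" here is exact conjugate Bayesian inference, so the
# calibration check should (up to Monte Carlo error) fail to reject.
rng = np.random.default_rng(0)
sigma, n, n_trials = 1.0, 20, 500

def learning_procedure(y, sigma=1.0):
    """Return the posterior mean and std. dev. for theta given data y
    (exact conjugate update under the N(0, 1) prior)."""
    m = len(y)
    post_var = 1.0 / (1.0 + m / sigma**2)
    post_mean = post_var * y.sum() / sigma**2
    return post_mean, np.sqrt(post_var)

# For each trial: draw a "true" theta, simulate data, run the learning
# procedure, and record where theta falls in its output distribution
# (the probability integral transform, PIT).
pit = np.empty(n_trials)
for t in range(n_trials):
    theta_true = rng.normal(0.0, 1.0)
    y = rng.normal(theta_true, sigma, size=n)
    mean, std = learning_procedure(y, sigma)
    pit[t] = stats.norm.cdf(theta_true, loc=mean, scale=std)

# If the procedure is calibrated, the PIT values should be approximately
# uniform on [0, 1]; a Kolmogorov-Smirnov test checks this.
ks_stat, p_value = stats.kstest(pit, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```

Swapping in an approximate learning procedure (for example, one that returns an overconfident, shrunken posterior standard deviation) would push the PIT values away from uniformity and drive the p-value toward zero, which is the kind of miscalibration the paper's framework is designed to detect.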
