Conditional Inference: Towards a Hierarchy of Statistical Evidence

04/09/2021
by Ying Jin, et al.

Statistical uncertainty has many sources. P-values and confidence intervals usually quantify the overall uncertainty, which may include variation due to sampling and uncertainty due to measurement error, among others. Practitioners might be interested in quantifying only one source of uncertainty. For example, one might be interested in the uncertainty of a regression coefficient for a fixed set of subjects, which corresponds to quantifying the uncertainty due to measurement error while ignoring the variation induced by sampling. In causal inference it is common to infer treatment effects for a certain set of subjects, accounting only for uncertainty due to random treatment assignment. Motivated by these examples, we consider conditional estimation and conditional inference for parameters in parametric and semi-parametric models, where we condition on observed characteristics of a population. We derive a theory of conditional inference, including methods to obtain conditionally valid p-values and confidence intervals. Conditional p-values can be used to construct a hierarchy of statistical evidence that may help clarify the generalizability of a statistical finding. We show that a naive method allows one to gauge the generalizability of a finding, with rigorous control of the family-wise error rate. In addition, the proposed approach allows one to conduct transfer learning of conditional parameters, with rigorous conditional guarantees. The performance of the proposed approach is evaluated on simulated and real-world data.
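To illustrate the distinction the abstract draws between overall and conditional uncertainty (not the paper's own estimator), the following minimal simulation sketch contrasts the spread of an OLS slope when both the subjects' covariates and the outcome noise are redrawn each time versus when one fixed set of subjects is held fixed and only the noise (e.g. measurement error) varies. All names and parameter values below are illustrative assumptions.

```python
# Illustrative sketch only: two notions of uncertainty for a regression slope.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma = 200, 1.5, 2.0  # assumed sample size, true slope, noise scale

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x (intercept handled by centering x)."""
    x_c = x - x.mean()
    return np.dot(x_c, y) / np.dot(x_c, x_c)

# Overall (unconditional) replications: new subjects and new noise each time.
unconditional = []
for _ in range(5000):
    x = rng.normal(size=n)
    y = beta * x + sigma * rng.normal(size=n)
    unconditional.append(ols_slope(x, y))

# Conditional replications: one fixed set of subjects, only the noise varies.
x_fixed = rng.normal(size=n)
conditional = []
for _ in range(5000):
    y = beta * x_fixed + sigma * rng.normal(size=n)
    conditional.append(ols_slope(x_fixed, y))

print("SD of slope, new subjects each draw:", np.std(unconditional))
print("SD of slope, fixed subjects:        ", np.std(conditional))
# The second spread reflects only noise/measurement error given the observed
# subjects, which is the kind of uncertainty the abstract conditions on; the
# first also includes variation induced by sampling the subjects themselves.
```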
