Optimal Estimation of Change in a Population of Parameters
Paired estimation of change in parameters of interest over a population plays a central role in several application domains, including the social sciences, epidemiology, medicine, and biology. In these domains, the size of the population under study is often very large; however, the number of observations available per individual is very small (sparse observations), which makes the problem challenging. Consider the setting with N independent individuals, each with unknown parameters (p_i, q_i) drawn from some unknown distribution on [0, 1]^2. We observe X_i ∼ Bin(t, p_i) before an event and Y_i ∼ Bin(t, q_i) after the event. Given these paired observations, {(X_i, Y_i)}_{i=1}^N, our goal is to accurately estimate the distribution of the change in parameters, δ_i := q_i - p_i, over the population, as well as properties of interest such as the ℓ_1-magnitude of the change, from sparse observations (t ≪ N). We provide information-theoretic lower bounds on the error in estimating the distribution of change and the ℓ_1-magnitude of change. Furthermore, we show that the following two-step procedure achieves the optimal error bounds: first, estimate the full joint distribution of the paired parameters using the maximum likelihood estimator (MLE), and then estimate the distribution of change and the ℓ_1-magnitude of change from the joint MLE. Notably, and perhaps surprisingly, these error bounds are of the same order as the minimax optimal error bounds for learning the full joint distribution itself (in Wasserstein-1 distance); in other words, estimating the magnitude of the change of parameters over the population is, in a minimax sense, as difficult as estimating the full joint distribution itself.
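To make the observation model concrete, the following is a minimal simulation sketch of the paired-binomial setting described above. The prior on (p_i, q_i) and all numerical values are illustrative assumptions, not taken from the paper, and the estimator shown is the naive plug-in (empirical frequencies Y_i/t - X_i/t), not the paper's MLE-based procedure; it illustrates why the sparse regime t ≪ N is hard, since sampling noise inflates the apparent ℓ_1-magnitude of change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of the model: many individuals, few draws each (t << N).
N, t = 100_000, 5
p = rng.uniform(0.2, 0.8, size=N)              # assumed pre-event parameters p_i
q = np.clip(p + rng.normal(0.0, 0.1, N), 0, 1) # assumed post-event parameters q_i

X = rng.binomial(t, p)  # observations before the event, X_i ~ Bin(t, p_i)
Y = rng.binomial(t, q)  # observations after the event,  Y_i ~ Bin(t, q_i)

# Naive plug-in estimate of the mean l1-magnitude of change, E|q_i - p_i|.
# This is NOT the MLE-based estimator from the paper: for small t, the
# binomial noise in Y/t - X/t biases |Y/t - X/t| upward relative to |q - p|.
naive = np.mean(np.abs(Y / t - X / t))
truth = np.mean(np.abs(q - p))
print(f"true mean |delta|: {truth:.3f}")
print(f"naive plug-in:     {naive:.3f}")
```

On this synthetic instance the naive plug-in substantially overestimates the true mean magnitude of change, which is one motivation for estimating the joint distribution of (p_i, q_i) first and reading the change distribution off that estimate.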