Empirical Bayes estimation: When does g-modeling beat f-modeling in theory (and in practice)?

11/23/2022
by Yandi Shen et al.

Empirical Bayes (EB) is a popular framework for large-scale inference that aims to find data-driven estimators to compete with the Bayesian oracle that knows the true prior. Two principled approaches to EB estimation have emerged over the years: f-modeling, which constructs an approximate Bayes rule by estimating the marginal distribution of the data, and g-modeling, which estimates the prior from data and then applies the learned Bayes rule. For the Poisson model, the prototypical examples are the celebrated Robbins estimator and the nonparametric maximum likelihood estimator (NPMLE), respectively. It has long been recognized in practice that the Robbins estimator, while conceptually appealing and computationally simple, lacks robustness and can be easily derailed by "outliers" (data points that were rarely observed before), unlike the NPMLE, which provides a more stable and interpretable fit thanks to its Bayes form. On the other hand, not only do the existing theories shed little light on this phenomenon, but they all point in the opposite direction, as both methods have recently been shown optimal in terms of the regret (excess over the Bayes risk) for compactly supported and subexponential priors, with exact logarithmic factors. In this paper we provide a theoretical justification for the superiority of the NPMLE over Robbins for heavy-tailed data by considering priors with a bounded pth moment, previously studied for the Gaussian model. For the Poisson model with sample size n, assuming p > 1 (for otherwise triviality arises), we show that the NPMLE with appropriate regularization and truncation achieves a total regret Θ̃(n^{3/(2p+1)}), which is minimax optimal within logarithmic factors. In contrast, the total regret of the Robbins estimator (with similar truncation) is Θ̃(n^{3/(p+2)}) and hence suboptimal by a polynomial factor.
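
To make the two approaches concrete: for Poisson(θ) data with marginal pmf f, the Bayes estimator of θ is θ̂(y) = (y + 1) f(y + 1) / f(y). The Robbins estimator (f-modeling) plugs empirical frequencies into this formula, while the NPMLE (g-modeling) fits a prior by maximum likelihood and applies its Bayes rule. The following is a minimal illustrative sketch, not code from the paper: the fixed-grid EM fit is a standard approximation to the NPMLE, and the paper's regularization and truncation steps are omitted for brevity.

    import numpy as np
    from scipy.stats import poisson

    def robbins(x):
        # f-modeling: plug empirical counts N(y) into the Poisson Bayes
        # rule theta(y) = (y + 1) * f(y + 1) / f(y).
        x = np.asarray(x)
        N = np.bincount(x, minlength=x.max() + 2)
        # N[x] >= 1 for every observed x, but N[x + 1] can be 0 or tiny,
        # which is exactly the instability at rarely observed values.
        return (x + 1) * N[x + 1] / N[x]

    def npmle_bayes(x, grid, n_iter=500):
        # g-modeling sketch: fit mixing weights over a fixed grid of
        # prior atoms by EM, then return the posterior mean of theta
        # under the fitted prior.
        x = np.asarray(x)
        L = poisson.pmf(x[:, None], grid[None, :])   # likelihood matrix
        w = np.full(len(grid), 1.0 / len(grid))      # uniform initial weights
        for _ in range(n_iter):
            post = L * w
            post /= post.sum(axis=1, keepdims=True)  # E-step: posteriors
            w = post.mean(axis=0)                    # M-step: reweight atoms
        post = L * w
        return (post @ grid) / post.sum(axis=1)      # Bayes rule: E[theta | x]

    # Hypothetical usage on heavy-tailed data:
    rng = np.random.default_rng(0)
    theta = rng.pareto(2.0, size=1000) + 1.0         # heavy-tailed prior draws
    x = rng.poisson(theta)
    grid = np.linspace(1e-3, x.max() + 1.0, 200)
    est_f, est_g = robbins(x), npmle_bayes(x, grid)

On draws like these, a single large observation x with no neighbor at x + 1 sends the Robbins estimate to 0, while the rule derived from the fitted prior stays smooth; this is the practical phenomenon that the paper's regret bounds quantify.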

