Finite-Sample Maximum Likelihood Estimation of Location

06/06/2022
by Shivam Gupta, et al.

We consider 1-dimensional location estimation, where we estimate a parameter λ from n samples λ + η_i, with each η_i drawn i.i.d. from a known distribution f. For fixed f the maximum-likelihood estimate (MLE) is well-known to be optimal in the limit as n → ∞: it is asymptotically normal with variance matching the Cramér-Rao lower bound of 1/(nℐ), where ℐ is the Fisher information of f. However, this bound does not hold for finite n, or when f varies with n. We show for arbitrary f and n that one can recover a similar theory based on the Fisher information of a smoothed version of f, where the smoothing radius decays with n.
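To illustrate the classical asymptotic claim the abstract builds on, here is a minimal Monte Carlo sketch (not from the paper, and using a hypothetical setup) with standard Gaussian noise, for which the Fisher information is ℐ = 1, the location MLE is the sample mean, and the Cramér-Rao bound 1/(nℐ) is attained exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known noise distribution f: standard normal, so Fisher information I = 1.
# For Gaussian noise, the MLE of the location parameter is the sample mean.
lam_true = 2.0    # hypothetical true location lambda
n = 50            # sample size
trials = 20000    # Monte Carlo repetitions

estimates = np.empty(trials)
for t in range(trials):
    samples = lam_true + rng.standard_normal(n)  # x_i = lambda + eta_i
    estimates[t] = samples.mean()                # Gaussian-noise MLE

empirical_var = estimates.var()
cramer_rao = 1.0 / (n * 1.0)  # 1 / (n * I) with I = 1
print(f"empirical MLE variance: {empirical_var:.5f}")
print(f"Cramér-Rao bound 1/(nI): {cramer_rao:.5f}")
```

The empirical variance of the MLE closely matches 1/(nℐ) here because the Gaussian case is exactly efficient at every n; the paper's contribution concerns general f, where this matching only holds asymptotically or after smoothing f.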
