Robust estimation of the mean with bounded relative standard deviation

08/15/2019
by Mark Huber, et al.

Many randomized approximation algorithms operate by giving a procedure for simulating a random variable X whose mean μ equals the target answer and whose relative standard deviation is bounded above by a known constant c. Examples of this type of algorithm include methods for approximating the number of satisfying assignments to 2-SAT or DNF formulas, the volume of a convex body, and the partition function of a Gibbs distribution. Because the answer is usually exponentially large in the problem input size, it is typical to require that an estimate μ̂ satisfy P(|μ̂/μ - 1| > ϵ) ≤ δ, where ϵ and δ are user-specified nonnegative parameters. The current best algorithm uses 2c^2 ϵ^-2 (1+ϵ)^2 ln(2/δ) samples to achieve such an estimate. By modifying the algorithm to balance the tails, it is possible to improve this result to 2(c^2 ϵ^-2 + 1)/(1 - ϵ^2) ln(2/δ) samples. Aside from the theoretical improvement, we also consider how best to implement this algorithm in practice. Numerical experiments show the behavior of the estimator on distributions where the relative standard deviation is unknown or infinite.
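To give a sense of the sample counts involved, the sketch below (Python, written for this summary) compares the two bounds quoted above and runs a plain sample-mean estimate with the improved sample count. The function names, the exponential test distribution, and the naive sample-mean estimator are illustrative assumptions; the paper's tail-balancing modification of the estimator is not reproduced here.

```python
import math
import numpy as np

def sample_count_new(c, eps, delta):
    """Improved bound from the abstract: 2(c^2 eps^-2 + 1)/(1 - eps^2) ln(2/delta)."""
    return math.ceil(2 * (c**2 / eps**2 + 1) / (1 - eps**2) * math.log(2 / delta))

def sample_count_old(c, eps, delta):
    """Previous best bound: 2 c^2 eps^-2 (1 + eps)^2 ln(2/delta)."""
    return math.ceil(2 * c**2 / eps**2 * (1 + eps)**2 * math.log(2 / delta))

def estimate_mean(draw, c, eps, delta, rng):
    """Naive illustration: draw n samples (n from the improved bound) and
    return their sample mean.  The paper's estimator refines this step to
    balance the two tails of the error."""
    n = sample_count_new(c, eps, delta)
    samples = np.array([draw(rng) for _ in range(n)])
    return samples.mean(), n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical test case: an exponential with mean 5 has relative
    # standard deviation exactly 1, so c = 1 is a valid upper bound.
    draw = lambda rng: rng.exponential(scale=5.0)
    c, eps, delta = 1.0, 0.1, 0.01
    print("old sample bound:", sample_count_old(c, eps, delta))
    print("new sample bound:", sample_count_new(c, eps, delta))
    mu_hat, n = estimate_mean(draw, c, eps, delta, rng)
    print(f"estimate from n = {n} samples: {mu_hat:.4f}")
```

With c = 1, ϵ = 0.1, and δ = 0.01, the improved bound calls for roughly (1 - ϵ^2)^-1 / (1+ϵ)^2 ≈ 0.83 times as many samples as the previous bound, which matches the relative sizes printed by the sketch.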
