Solvable Integration Problems and Optimal Sample Size Selection
We want to compute the integral of a function, or the expectation of a random variable, with minimal cost, using i.i.d. samples both for the algorithm and for the upper complexity bounds. Under certain assumptions it is possible to select a sample size on the basis of a variance estimate, or, more generally, of an estimate of a central absolute p-moment. In this way one can guarantee a small absolute error with high probability; the problem is then called solvable. The expected cost of the method depends on the p-moment of the random variable, which can be arbitrarily large. Our lower bounds apply not only to methods based on i.i.d. samples but also to general randomized algorithms. They show that, up to constants, the cost of the algorithm is optimal in terms of accuracy, confidence level, and the norm of the particular input random variable. Since the classes of random variables or integrands under consideration are very large, the worst-case cost would be infinite. Nevertheless, one can define adaptive stopping rules such that the expected cost is finite for each input. We contrast these positive results with examples of integration problems that are not solvable.
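The idea of selecting a sample size from a variance estimate can be sketched as a two-stage procedure: a pilot sample yields a variance estimate, and a Chebyshev-type bound then dictates how many samples suffice for error eps with confidence 1 - delta. The Python sketch below is illustrative only, under simplifying assumptions: the function name `adaptive_mean` and the plain Chebyshev bound are not from the paper, which treats general p-moments and rigorously controls the error of the variance estimate itself.

```python
import math
import random
import statistics

def adaptive_mean(sample, eps, delta, n_pilot=100):
    """Two-stage Monte Carlo estimate of E[X] (illustrative sketch).

    Stage 1: draw a pilot sample and estimate the variance.
    Stage 2: pick the total sample size n via Chebyshev's inequality,
    aiming for P(|estimate - mean| > eps) <= delta. (This ignores the
    error of the variance estimate, which a rigorous method must control.)

    Returns the estimate and the total number of samples drawn,
    so the data-dependent (finite expected) cost is visible.
    """
    pilot = [sample() for _ in range(n_pilot)]
    var_hat = statistics.variance(pilot)
    # Chebyshev: Var(mean of n samples) = sigma^2 / n, hence
    # n >= sigma^2 / (eps^2 * delta) suffices for the target confidence.
    n = max(n_pilot, math.ceil(var_hat / (eps ** 2 * delta)))
    extra = [sample() for _ in range(n - n_pilot)]
    return statistics.fmean(pilot + extra), n
```

Note that the final sample size, and hence the cost, depends on the estimated variance of the particular input: it can be arbitrarily large over the whole input class, yet is finite in expectation for each fixed input, matching the solvability statement above.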