Optimal Hypothesis Testing Based on Information Theory
The current theory of hypothesis testing has a major problem: there is no unified indicator for evaluating the goodness of different test methods, since the cost or utility function usually depends on the specific application scenario, and consequently no optimal hypothesis testing method exists. In this paper, the problem of optimal hypothesis testing is investigated from the standpoint of information theory. We propose an information-theoretic framework of hypothesis testing consisting of five parts: test information (TI) is proposed to evaluate hypothesis testing; it depends on the a posteriori probability distribution of the hypotheses and is independent of any specific test method. Accuracy, measured in bits, is proposed to evaluate the degree of validity of a specific test method. The sampling a posteriori (SAP) probability test is presented, which selects a hypothesis stochastically according to the a posteriori probability distribution of the hypotheses. The probability of test failure is defined to reflect the probability that a failed decision is made. A test theorem is proved showing that every accuracy lower than the TI is achievable: for every accuracy lower than TI, there exists a test method whose probability of test failure tends to zero; conversely, no test method achieves an accuracy greater than TI. Numerical simulations demonstrate that the SAP test is asymptotically optimal. In addition, the results show that the accuracies of the SAP test and of existing test methods, such as the maximum a posteriori probability, expected a posteriori probability, and median a posteriori probability tests, do not exceed TI.
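To make the decision rules mentioned above concrete, the following minimal Python sketch contrasts the maximum a posteriori (MAP) test with the SAP test on a hypothetical discrete posterior. The posterior values, the failure criterion (deciding on a hypothesis other than the true one), and the Monte Carlo setup are illustrative assumptions only; the paper's formal definitions of TI, accuracy, and the probability of test failure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete posterior over three hypotheses H0, H1, H2,
# e.g. obtained from a prior and likelihood via Bayes' rule.
posterior = np.array([0.5, 0.3, 0.2])

def map_test(posterior):
    """MAP test: deterministically pick the most probable hypothesis."""
    return int(np.argmax(posterior))

def sap_test(posterior, rng):
    """SAP test: draw a hypothesis at random according to the
    a posteriori probability distribution of the hypotheses."""
    return int(rng.choice(len(posterior), p=posterior))

# Illustrative Monte Carlo estimate of a failure rate, where "failure"
# is assumed to mean deciding on a hypothesis other than the true one,
# with the true hypothesis itself drawn from the posterior.
n_trials = 100_000
failures_map = failures_sap = 0
for _ in range(n_trials):
    true_h = rng.choice(len(posterior), p=posterior)
    failures_map += (map_test(posterior) != true_h)
    failures_sap += (sap_test(posterior, rng) != true_h)

print("MAP failure rate:", failures_map / n_trials)  # about 1 - max(posterior)
print("SAP failure rate:", failures_sap / n_trials)  # about 1 - sum(posterior**2)
```

In this toy setting the MAP rule minimizes the per-trial error probability, while the SAP rule randomizes in proportion to the posterior; the paper's claim concerns the information-theoretic accuracy of such tests relative to TI, not these raw error rates.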