Large-scale inference with block structure

06/28/2019
by Jiyao Kou, et al.

The detection of weak and rare effects in large amounts of data arises in a number of modern data analysis problems. Known results show that in this situation the potential of statistical inference is severely limited by the large-scale multiple testing that is inherent in these problems. Here we show that fundamentally more powerful statistical inference is possible when there is some structure in the signal that can be exploited, e.g., if the signal is clustered in many small blocks, as is the case in some relevant applications. We derive the detection boundary in such a setting, where we allow both the number of blocks and the block length to grow polynomially with the sample size. We derive these results for the univariate and the multivariate settings as well as for the problem of detecting clusters in a network. These results recover as special cases the heterogeneous mixture detection problem [1], where there is no structure in the signal, as well as the scan problem [2], where the signal comprises a single interval. We develop methodology that allows optimal adaptive detection in the general setting, thus exploiting the structure if it is present without incurring a relevant penalty in the case where there is no structure. The advantage of this methodology can be considerable: in the case of no structure the means need to increase at the rate √(log n) to ensure detection, while the presence of structure allows detection even if the means decrease at a polynomial rate.
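To illustrate why block structure helps, the following sketch (not the paper's adaptive procedure) simulates Gaussian noise with a single elevated block and compares a per-observation maximum statistic with a simple scan over standardized block sums. The block length L, the per-coordinate mean, and the fixed-window scan statistic are assumptions chosen purely for illustration: aggregating over a block of length L boosts the signal-to-noise ratio by a factor of √L, which is why a per-coordinate mean far below √(2 log n) can still be detected.

```python
import numpy as np

# Illustrative sketch only (assumed parameters, not the paper's adaptive method):
# with a signal spread over a block of length L, summing the block boosts the
# signal-to-noise ratio by sqrt(L), so a per-coordinate mean far below
# sqrt(2 log n) can still be detected by a block scan.
rng = np.random.default_rng(0)
n, L = 100_000, 200                 # sample size and hypothetical block length
mu = 6.0 / np.sqrt(L)               # ~0.42, well below sqrt(2 log n) ~ 4.8

def statistics(x, L):
    """Max of single observations and max of standardized length-L block sums."""
    csum = np.concatenate(([0.0], np.cumsum(x)))
    block_sums = (csum[L:] - csum[:-L]) / np.sqrt(L)
    return x.max(), block_sums.max()

x_null = rng.standard_normal(n)     # pure noise
x_alt = rng.standard_normal(n)      # noise plus one elevated block
start = rng.integers(0, n - L)
x_alt[start:start + L] += mu

print("null        (max, scan):", statistics(x_null, L))
print("alternative (max, scan):", statistics(x_alt, L))
# Typically the per-observation maximum barely moves, while the scan statistic
# rises to roughly mu * sqrt(L) = 6, well above its null level.
```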
