Non-Parametric Structure Learning on Hidden Tree-Shaped Distributions
We provide high-probability sample complexity guarantees for non-parametric structure learning of tree-shaped graphical models whose nodes are discrete random variables with a finite or countable alphabet, in both the noiseless and the noisy regime. First, we introduce a new, fundamental quantity called the (noisy) information threshold, which arises naturally from the error analysis of the Chow-Liu algorithm and characterizes not only the sample complexity, but also the inherent impact of the noise on the structure learning task, without explicit assumptions on the distribution of the model. This allows us to present the first non-parametric, high-probability finite sample complexity bounds on tree-structure learning from potentially noise-corrupted data. In particular, for p nodes, success rate 1-δ, and a fixed value of the information threshold, our sample complexity bounds for exact structure recovery are of the order O(log^{1+ζ}(p/δ)) for any ζ>0, in both the noiseless and noisy settings. Subsequently, we apply our results to two classes of hidden models, namely, the M-ary erasure channel and the generalized symmetric channel, illustrating the usefulness and importance of our framework. As a byproduct of our analysis, this paper resolves the open problem of tree structure learning in the presence of non-identically distributed observation noise, providing explicit conditions on the convergence of the Chow-Liu algorithm under this setting as well.
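The error analysis above is built around the classical Chow-Liu procedure: estimate the pairwise mutual information between every pair of observed nodes and return a maximum-weight spanning tree over those scores. The following Python sketch is only an illustration of that generic procedure, not the paper's implementation or its noise model; the function names and the toy "hidden chain plus symmetric observation noise" data are hypothetical assumptions chosen to mirror the setting the abstract describes.

```python
import numpy as np
from collections import Counter
from itertools import combinations


def empirical_mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        # (c/n) * log( (c/n) / ((px[a]/n) * (py[b]/n)) )
        mi += (c / n) * np.log(c * n / (px[a] * py[b]))
    return max(mi, 0.0)


def chow_liu_tree(samples):
    """samples: (n, p) array of discrete observations; returns a list of tree edges."""
    n, p = samples.shape
    # Score every pair of nodes by empirical mutual information.
    weights = {(i, j): empirical_mutual_information(samples[:, i], samples[:, j])
               for i, j in combinations(range(p), 2)}
    # Kruskal's algorithm on descending weights = maximum-weight spanning tree.
    parent = list(range(p))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    edges = []
    for (i, j), _ in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
        if len(edges) == p - 1:
            break
    return edges


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical toy example: a binary Markov chain 0 - 1 - 2, observed through
    # i.i.d. symmetric bit-flip noise (a stand-in for the abstract's noisy regime).
    x0 = rng.integers(0, 2, size=5000)
    x1 = np.where(rng.random(5000) < 0.9, x0, 1 - x0)
    x2 = np.where(rng.random(5000) < 0.9, x1, 1 - x1)
    clean = np.stack([x0, x1, x2], axis=1)
    noisy = np.where(rng.random(clean.shape) < 0.95, clean, 1 - clean)
    print(chow_liu_tree(noisy))  # typically recovers the chain edges (0, 1) and (1, 2)
```

In this sketch the quality of structure recovery is governed by how well-separated the pairwise mutual informations are, which is the intuition behind the information threshold quantity introduced in the paper.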