Limit Distribution for Smooth Total Variation and χ^2-Divergence in High Dimensions

02/03/2020
by Ziv Goldfeld, et al.

Statistical divergences are ubiquitous in machine learning as tools for measuring distances between probability distributions. As data science inherently relies on approximating distributions from samples, we consider empirical approximation under two central f-divergences: the total variation (TV) distance and the χ^2-divergence. To circumvent the sensitivity of these divergences to support mismatch, the framework of Gaussian smoothing is adopted. We study the limit distributions of √(n) δ_TV(P_n∗N, P∗N) and n χ^2(P_n∗N ‖ P∗N), where P_n is the empirical measure based on n independently and identically distributed (i.i.d.) samples from P, N := N(0, σ^2 I_d), and ∗ stands for convolution. In arbitrary dimension, the limit distributions are characterized in terms of a Gaussian process on R^d whose covariance operator depends on P and on the isotropic Gaussian density with parameter σ. This, in turn, implies optimality of the n^{-1/2} expected-value convergence rates recently derived for δ_TV(P_n∗N, P∗N) and χ^2(P_n∗N ‖ P∗N). These strong statistical guarantees promote empirical approximation under Gaussian smoothing as a powerful framework for learning and inference based on high-dimensional data.
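The quantities above can be made concrete with a small numerical sketch (not part of the paper): in dimension d = 1 with P = N(0, 1) and a chosen smoothing level σ, the smoothed population measure P∗N is simply N(0, 1 + σ^2), so both smoothed densities are available in closed form and the statistics √(n) δ_TV(P_n∗N, P∗N) and n χ^2(P_n∗N ‖ P∗N) can be evaluated by numerical integration on a grid. The choice of P, σ, and the grid are illustrative assumptions.

```python
# Illustrative sketch only: d = 1, P = N(0, 1), isotropic Gaussian smoothing N(0, sigma^2).
import numpy as np
from scipy.stats import norm

def smoothed_statistics(n, sigma=1.0, seed=0, half_width=10.0, grid_size=4001):
    """Return (sqrt(n)*TV, n*chi^2) for P_n * N_sigma vs. P * N_sigma with P = N(0, 1)."""
    rng = np.random.default_rng(seed)
    samples = rng.standard_normal(n)                # i.i.d. draws from P
    x = np.linspace(-half_width, half_width, grid_size)

    # Density of P_n * N_sigma: average of Gaussian kernels centered at the samples.
    q_n = norm.pdf(x[:, None], loc=samples[None, :], scale=sigma).mean(axis=1)
    # Density of P * N_sigma = N(0, 1 + sigma^2), available in closed form here.
    q = norm.pdf(x, scale=np.sqrt(1.0 + sigma ** 2))

    tv = 0.5 * np.trapz(np.abs(q_n - q), x)         # total variation distance
    chi2 = np.trapz((q_n - q) ** 2 / q, x)          # chi^2-divergence
    return np.sqrt(n) * tv, n * chi2

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        tv_stat, chi2_stat = smoothed_statistics(n, sigma=1.0, seed=0)
        print(f"n={n:6d}  sqrt(n)*TV={tv_stat:.3f}  n*chi2={chi2_stat:.3f}")
```

Under the paper's results, the two printed statistics should remain of constant order as n grows (rather than vanishing or diverging), which is what the limit distribution theory formalizes; this sketch only illustrates the scaling, not the limiting law itself.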

