Sparse Functional Principal Component Analysis in High Dimensions

11/02/2020
by Xiaoyu Hu, et al.

Functional principal component analysis (FPCA) is a fundamental tool that has attracted increasing attention in recent decades, yet existing methods are restricted to data with a small number of random functions. In this work, we focus on high-dimensional functional processes where the number of random functions p is comparable to, or even much larger than, the sample size n. Such data are ubiquitous in fields such as neuroimaging analysis and cannot be properly modeled by existing methods. We propose a new algorithm, called sparse FPCA, which models principal eigenfunctions effectively under sensible sparsity regimes. While sparsity assumptions are standard in multivariate statistics, they have not been investigated in this more complex setting, where not only is p large, but each variable is itself an intrinsically infinite-dimensional process. The sparsity structure motivates a thresholding rule that is easy to compute and requires no smoothing operations, obtained by exploiting the relationship between univariate orthonormal basis expansions and the multivariate Karhunen-Loève (K-L) representation. We investigate the theoretical properties of the resulting estimators under two sparsity regimes, and we provide simulated and real data examples as empirical support; the proposed estimators also perform well in subsequent analyses such as classification.
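
For intuition, the following is a minimal sketch of a thresholding-based sparse FPCA pipeline in Python. It is an illustration under assumed inputs, not the authors' exact procedure: it assumes curves observed on a common dense grid, projects each variable onto an orthonormal Fourier basis, hard-thresholds variables whose coefficient energy falls below a level tau, and then eigendecomposes the retained coefficient block as a multivariate K-L step. The function names, the Fourier basis choice, and the threshold tau are all illustrative assumptions.

```python
# Hypothetical sketch of a thresholding-based sparse FPCA pipeline;
# not the paper's exact algorithm. Inputs assumed: X of shape (n, p, T)
# observed on a common dense grid, K Fourier basis functions, threshold tau.
import numpy as np

def fourier_basis(grid, K):
    """Evaluate the first K orthonormal Fourier basis functions on [0, 1]."""
    B = np.ones((len(grid), K))
    for k in range(1, K):
        freq = (k + 1) // 2 if k % 2 == 1 else k // 2
        trig = np.cos if k % 2 == 1 else np.sin
        B[:, k] = np.sqrt(2) * trig(2 * np.pi * freq * grid)
    return B  # (T, K)

def sparse_fpca(X, grid, K=15, tau=0.1, n_components=3):
    """Screen weak variables by coefficient energy, then eigendecompose the rest."""
    n, p, T = X.shape
    B = fourier_basis(grid, K)
    dt = grid[1] - grid[0]
    # Basis-expansion coefficients: scores[i, j, k] = <X_ij, b_k> (Riemann sum)
    scores = np.einsum('ijt,tk->ijk', X, B) * dt           # (n, p, K)
    scores -= scores.mean(axis=0, keepdims=True)
    # Hard-threshold variables whose empirical L2 energy is below tau
    energy = (scores ** 2).sum(axis=(0, 2)) / n             # (p,)
    keep = np.where(energy > tau)[0]
    # Multivariate K-L step on the retained coefficient block
    Z = scores[:, keep, :].reshape(n, -1)                    # (n, |keep| * K)
    cov = Z.T @ Z / n
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return keep, eigvals[order], eigvecs[:, order]

# Toy usage: 3 "active" variables out of p = 50, n = 40 curves each.
rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 101)
n, p = 40, 50
X = 0.05 * rng.standard_normal((n, p, len(grid)))
for j in range(3):
    X[:, j, :] += rng.standard_normal((n, 1)) * np.sin((j + 1) * np.pi * grid)
keep, vals, vecs = sparse_fpca(X, grid)
print("retained variables:", keep)
```

In this sketch, the screening statistic is simply the total coefficient energy of each variable; any analogous thresholding rule computed from basis coefficients would slot into the same place without requiring presmoothing of the curves.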

