Nonparametric Estimation via Mixed Gradients

09/16/2022
by Xiaowu Dai, et al.

Traditional nonparametric estimation methods often converge slowly in high dimensions and require unrealistically large datasets to reach reliable conclusions. We develop an approach based on mixed gradients, either observed or estimated, that estimates the function effectively at near-parametric convergence rates. The novel approach and computational algorithm could lead to methods useful to practitioners in many areas of science and engineering. Our theoretical results reveal a behavior universal to this class of nonparametric estimation problems. We explore a general setting involving tensor product spaces and build upon the smoothing spline analysis of variance (SS-ANOVA) framework. For d-dimensional models under full interaction, the optimal rates with gradient information on p covariates are identical to those for the (d-p)-interaction models without gradients; the models are therefore immune to the "curse of interaction". For additive models, the optimal rates using gradient information are root-n, thus achieving the "parametric rate". We demonstrate aspects of the theoretical results through synthetic and real data applications.
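
For context, the SS-ANOVA framework the abstract builds on decomposes a d-variate function into a constant, main effects, and higher-order interactions; the notation below is the standard convention, while the precise function spaces and smoothness assumptions are specified in the paper itself:

```latex
% Standard SS-ANOVA decomposition of a d-variate function into a
% constant, main effects, and successively higher-order interactions.
f(x_1, \dots, x_d)
  = b
  + \sum_{j=1}^{d} f_j(x_j)
  + \sum_{1 \le j < k \le d} f_{jk}(x_j, x_k)
  + \cdots
  + f_{1 \cdots d}(x_1, \dots, x_d)

% An additive model retains only the constant and the main effects:
f(x_1, \dots, x_d) = b + \sum_{j=1}^{d} f_j(x_j)
```

In these terms, the abstract's claim is that gradient information on p covariates lets a full d-interaction model attain the optimal rate of a (d-p)-interaction model without gradients, and that an additive model with gradient information attains the parametric n^(-1/2) rate.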
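To make the general idea concrete, the sketch below shows how derivative observations can sharpen a nonparametric fit, using one-dimensional Gaussian process regression with gradient observations as a stand-in illustration. This is not the authors' SS-ANOVA estimator; the kernel, function names, and toy data are all illustrative assumptions.

```python
# Minimal sketch: nonparametric regression that uses gradient observations
# alongside function values, via a Gaussian process with a squared-
# exponential kernel. Illustrative only, NOT the paper's method.
import numpy as np

def k(x, y, ell=0.5):
    # Squared-exponential kernel, evaluated pairwise via broadcasting.
    d = x[:, None] - y[None, :]
    return np.exp(-d**2 / (2 * ell**2))

def k_dx(x, y, ell=0.5):
    # d/dx k(x, y): cross-covariance between f'(x) and f(y).
    d = x[:, None] - y[None, :]
    return -d / ell**2 * np.exp(-d**2 / (2 * ell**2))

def k_dxdy(x, y, ell=0.5):
    # d^2/(dx dy) k(x, y): covariance between f'(x) and f'(y).
    d = x[:, None] - y[None, :]
    return (1 / ell**2 - d**2 / ell**4) * np.exp(-d**2 / (2 * ell**2))

def posterior_mean(xf, f, xg, g, xs, ell=0.5, noise=1e-6):
    # Joint covariance of the observed vector [f(xf), f'(xg)].
    K = np.block([
        [k(xf, xf, ell),    k_dx(xg, xf, ell).T],
        [k_dx(xg, xf, ell), k_dxdy(xg, xg, ell)],
    ]) + noise * np.eye(len(xf) + len(xg))
    # Cross-covariance between f(xs) and the observations.
    Ks = np.hstack([k(xs, xf, ell), k_dx(xg, xs, ell).T])
    return Ks @ np.linalg.solve(K, np.concatenate([f, g]))

# Toy example: a few function values plus gradient values of f(x) = sin(2x).
rng = np.random.default_rng(0)
xf = rng.uniform(0, 3, 5)    # locations with function values
xg = rng.uniform(0, 3, 10)   # locations with gradient values
xs = np.linspace(0, 3, 200)  # prediction grid
fhat = posterior_mean(xf, np.sin(2 * xf), xg, 2 * np.cos(2 * xg), xs)
print(float(np.max(np.abs(fhat - np.sin(2 * xs)))))  # approximation error
```

The design choice to illustrate here is that the gradient data enter the fit through the cross-covariances between f' and f, so a handful of function values plus cheap gradient observations can pin down the function far better than the function values alone.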
