Error analysis of regularized trigonometric linear regression with unbounded sampling: a statistical learning viewpoint
The effectiveness of non-parametric, kernel-based methods for function estimation comes at the price of high computational complexity, which hinders their applicability in adaptive, model-based control. Motivated by approximation techniques based on sparse spectrum Gaussian processes, we focus on models given by regularized trigonometric linear regression. This paper analyzes the performance of such an estimation setup within the statistical learning framework. In particular, we derive a novel bound for the sample error in finite-dimensional spaces, accounting for noise with potentially unbounded support. We then study the approximation error and, by combining the two bounds, discuss the bias-variance trade-off as a function of the regularization parameter.
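To make the estimation setup concrete, the following is a minimal sketch of regularized trigonometric linear regression in the spirit of sparse spectrum Gaussian process approximations. All names, the choice of spectral density, and the data-generating process are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: the model is f(x) = w^T phi(x), where phi(x) stacks
# cosine and sine features at frequencies sampled from a spectral density,
# as in sparse spectrum Gaussian process approximations.

rng = np.random.default_rng(0)

def trig_features(x, freqs):
    """Map 1-D inputs to [cos(omega_j x), sin(omega_j x)] features."""
    proj = np.outer(x, freqs)                       # shape (n, m)
    return np.hstack([np.cos(proj), np.sin(proj)])  # shape (n, 2m)

# Synthetic 1-D data with Gaussian (hence unbounded-support) noise.
n, m, lam = 200, 25, 1e-2            # samples, frequencies, regularization
x = rng.uniform(-3.0, 3.0, size=n)
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(n)

freqs = rng.standard_normal(m) * 2.0  # frequencies drawn from an assumed spectral density
Phi = trig_features(x, freqs)

# Tikhonov-regularized least squares:
#   w = (Phi^T Phi + lam * I)^{-1} Phi^T y
# The regularization parameter lam governs the bias-variance trade-off.
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2 * m), Phi.T @ y)

pred = trig_features(np.array([0.5]), freqs) @ w
```

Since the feature map is finite-dimensional, training and prediction cost only linear-algebra operations in 2m dimensions, which is what makes such models attractive for adaptive, model-based control compared to full kernel methods.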