Sobolev Norm Learning Rates for Regularized Least-Squares Algorithm

02/23/2017
by Simon Fischer, et al.

Learning rates for regularized least-squares algorithms are in most cases expressed with respect to the excess risk, or equivalently, the L_2-norm. For some applications, however, guarantees with respect to stronger norms, such as the L_∞-norm, are desirable. We address this problem by establishing learning rates for a continuous scale of norms between the L_2-norm and the RKHS norm. As a byproduct we derive L_∞-norm learning rates, and in the case of Sobolev RKHSs we actually obtain Sobolev norm learning rates, which may in turn imply L_∞-norm rates for some derivatives. In all cases, we do not need to assume that the target function is contained in the RKHS used. Finally, we show that in many cases the derived rates are minimax optimal.
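The abstract does not spell out how the intermediate norms are defined. A standard way to obtain such a scale, sketched below in LaTeX, is via power (interpolation) spaces of the RKHS; the Mercer decomposition (μ_i, e_i) and the exponent γ are assumed notation for illustration, not taken from the text above.

% Assumed power-space scale between L_2 and the RKHS H:
% let the kernel k have the Mercer decomposition k(x,x') = \sum_i \mu_i e_i(x) e_i(x'),
% with eigenvalues \mu_1 \ge \mu_2 \ge \dots > 0 and (e_i) orthonormal in L_2.
\[
  \|f\|_{[H]^\gamma}^2 \;=\; \sum_{i} \mu_i^{-\gamma}\,
  \langle f, e_i\rangle_{L_2}^2, \qquad 0 \le \gamma \le 1,
\]
% so that \gamma = 0 recovers the L_2-norm and \gamma = 1 the RKHS norm,
% giving a continuous scale of norms between the two, as in the abstract.

Under this kind of construction, for Sobolev RKHSs the intermediate spaces are themselves Sobolev spaces of lower smoothness, which is consistent with the Sobolev norm rates described above.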
