On aggregation for heavy-tailed classes

02/25/2015
by   Shahar Mendelson, et al.

We introduce an alternative to the notion of "fast rate" in Learning Theory, one that coincides with the optimal error rate when the given class happens to be convex and regular in a suitable sense. While it is well known that such a rate cannot always be attained by a learning procedure (i.e., a procedure that selects a function in the given class), we introduce an aggregation procedure that attains this rate under rather minimal assumptions -- for example, that the L_q and L_2 norms are equivalent on the linear span of the class for some q > 2, and that the target random variable is square-integrable.
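The norm-equivalence assumption mentioned above can be stated more explicitly. A plausible formalization (the constant L and the span notation are our notational choices, not taken from the abstract) is:

```latex
% Norm equivalence on the linear span of the class F:
% there exist q > 2 and a constant L >= 1 such that
\exists\, q > 2,\ \exists\, L \ge 1:\quad
\|f\|_{L_q} \le L \,\|f\|_{L_2}
\quad \text{for every } f \in \mathrm{span}(F),
% together with square-integrability of the target Y:
\qquad \mathbb{E}\, Y^2 < \infty .
```

Conditions of this type allow heavy-tailed classes: they bound higher moments by second moments on the span, without requiring boundedness or sub-Gaussian tails.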

