Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low Degree Polynomial Functions

08/08/2017
by Paul Beame, et al.

We develop an extension of recently developed methods for obtaining time-space tradeoff lower bounds for problems of learning from random test samples, in order to handle the situation where the space of tests is significantly smaller than the space of inputs, a class of learning problems not covered by prior work. The extension is based on a measure of how matrices amplify the 2-norms of probability distributions, a measure that is more refined than the 2-norms of the matrices themselves. As an application of the new technique, we show that any algorithm that learns m-variate homogeneous polynomial functions of degree at most d over F_2 from evaluations on randomly chosen inputs requires either space Ω(mn) or time 2^Ω(m), where n = m^Θ(d) is the dimension of the space of such functions. These bounds are asymptotically optimal, since they match the tradeoffs achieved by natural learning algorithms for the problem.
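To make the parameters concrete, the following is a minimal Python sketch (not taken from the paper) of the natural high-space learner the abstract alludes to: it treats n = m^Θ(d) as the number of multilinear monomials of degree 1 through d (one possible reading of "homogeneous of degree at most d"), stores Θ(n) random evaluations at a cost of Θ(mn) bits, and recovers the coefficient vector by Gaussian elimination over F_2. The basis convention, the oversampling factor, and all function names below are illustrative assumptions, not the authors' algorithm.

# Sketch of the natural Theta(mn)-space learner (illustrative assumptions only):
# store ~n random evaluations of an unknown degree-<=d polynomial over F_2,
# expand each input into its vector of monomial values, and solve the
# resulting linear system by Gaussian elimination over F_2.

import itertools
import random

def monomials(m, d):
    # Multilinear monomials of degree 1..d, here assumed as the basis; its
    # size n grows as m^Theta(d) for constant d.
    return [S for i in range(1, d + 1) for S in itertools.combinations(range(m), i)]

def expand(x, basis):
    # Evaluate every basis monomial on input x in {0,1}^m.
    return [int(all(x[j] for j in S)) for S in basis]

def gauss_solve_gf2(rows, rhs):
    # Solve A c = b over F_2 by Gaussian elimination; returns one solution or None.
    n = len(rows[0])
    aug = [row[:] + [b] for row, b in zip(rows, rhs)]
    pivots, r = [], 0
    for col in range(n):
        piv = next((i for i in range(r, len(aug)) if aug[i][col]), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        for i in range(len(aug)):
            if i != r and aug[i][col]:
                aug[i] = [a ^ b for a, b in zip(aug[i], aug[r])]
        pivots.append(col)
        r += 1
    if any(row[-1] for row in aug[r:]):   # inconsistent system
        return None
    c = [0] * n
    for i, col in enumerate(pivots):
        c[col] = aug[i][-1]
    return c

def learn(m, d, oracle, oversample=4):
    # Draw ~oversample*n random evaluations (Theta(mn) bits of raw samples)
    # and recover the coefficient vector; oversample=4 is an arbitrary choice.
    basis = monomials(m, d)
    n = len(basis)
    xs = [[random.randint(0, 1) for _ in range(m)] for _ in range(oversample * n)]
    rows = [expand(x, basis) for x in xs]
    rhs = [oracle(x) for x in xs]
    return basis, gauss_solve_gf2(rows, rhs)

if __name__ == "__main__":
    m, d = 8, 2
    secret = {(0, 3), (1,), (2, 5)}       # hidden polynomial x0*x3 + x1 + x2*x5
    oracle = lambda x: sum(all(x[j] for j in S) for S in secret) % 2
    basis, coeffs = learn(m, d, oracle)
    recovered = {S for S, c in zip(basis, coeffs) if c}
    print("n =", len(basis), "recovered:", recovered)

The lower bound in the abstract says this kind of sample-storing algorithm is essentially the best possible in its space regime: any learner using substantially less than mn bits of memory must instead spend 2^Ω(m) time.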
