On the Sample Complexity of Learning from a Sequence of Experiments

02/12/2018
by Longyun Guo, et al.

We analyze the sample complexity of a new problem: learning from a sequence of experiments. In this problem, the learner must choose a hypothesis that performs well with respect to an infinite sequence of experiments and their associated data distributions. In practice, the learner can perform only m experiments, with a total of N samples drawn from those data distributions. Using a Rademacher complexity approach, we show that the gap between the training and generalization error is O(√(m/N)). We also provide examples for linear prediction, two-layer neural networks, and kernel methods.
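To make the Rademacher complexity quantity concrete, here is a minimal Monte Carlo sketch (not from the paper): it estimates the empirical Rademacher complexity of a finite hypothesis class from a loss matrix over N pooled samples. The function name `empirical_rademacher`, the finite-class setting, and the synthetic data are our illustrative assumptions; the paper's actual analysis may use richer hypothesis classes and a different decomposition across the m experiments.

```python
import numpy as np

def empirical_rademacher(losses, n_trials=1000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity.

    losses: (H, N) array; losses[h, i] is the loss of hypothesis h on
            sample i, where the N samples are pooled across the m
            experiments (an assumption of this sketch).
    Returns an estimate of E_sigma[ max_h (1/N) sum_i sigma_i * losses[h, i] ].
    """
    rng = np.random.default_rng() if rng is None else rng
    H, N = losses.shape
    estimates = []
    for _ in range(n_trials):
        # Draw i.i.d. Rademacher signs, one per sample.
        sigma = rng.choice([-1.0, 1.0], size=N)
        # Correlation of the best hypothesis's losses with the signs.
        estimates.append((losses @ sigma).max() / N)
    return float(np.mean(estimates))

if __name__ == "__main__":
    # Hypothetical demo: 50 hypotheses with bounded losses on N = 2000
    # pooled samples. For bounded losses the estimate shrinks roughly
    # like 1/sqrt(N), which is consistent in spirit with an O(sqrt(m/N))
    # generalization gap when the effective complexity grows with m.
    rng = np.random.default_rng(0)
    losses = rng.uniform(0.0, 1.0, size=(50, 2000))
    print(empirical_rademacher(losses, rng=rng))
```

Doubling N in the demo roughly halves the squared estimate, which is one quick way to sanity-check the 1/√N scaling behind bounds of this form.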
