Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

08/29/2019
by   Ilias Diakonikolas, et al.

We study the problem of properly learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning d-dimensional halfspaces on the unit ball within misclassification error α·OPT_γ + ϵ, where OPT_γ is the optimal γ-margin error rate and α ≥ 1 is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio α ≥ 1, that are nearly matching for a range of parameters. Specifically, for the natural setting where α is any constant greater than one, we provide an essentially tight complexity characterization. On the positive side, we give an α = 1.01-approximate proper learner that uses O(1/(ϵ^2γ^2)) samples (which is optimal) and runs in time poly(d/ϵ) · 2^Õ(1/γ^2). On the negative side, we show that any constant factor approximate proper learner has runtime poly(d/ϵ) · 2^((1/γ)^(2-o(1))), assuming the Exponential Time Hypothesis.
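To make the error notion concrete, here is a minimal NumPy sketch (an illustration of the definitions, not the paper's algorithm; the function name margin_error and the synthetic data are hypothetical) of the empirical γ-margin error: the fraction of labeled examples (x, y) on the unit ball that a unit-norm halfspace w fails to classify with margin at least γ, i.e. with y·⟨w, x⟩ < γ. OPT_γ is the minimum of this quantity over all unit vectors w, and an α-approximate proper learner must output a halfspace whose standard (zero-margin) misclassification error is at most α·OPT_γ + ϵ.

```python
import numpy as np

def margin_error(w, X, y, gamma):
    """Empirical gamma-margin error of the halfspace sign(<w, x>):
    fraction of examples whose signed margin y * <w, x> is below gamma."""
    w = w / np.linalg.norm(w)           # halfspaces are unit-norm vectors
    margins = y * (X @ w)               # signed margins y * <w, x>
    return float(np.mean(margins < gamma))

# Hypothetical synthetic instance: points on the unit sphere in R^d,
# labeled by a true halfspace with 5% of labels flipped (agnostic noise).
rng = np.random.default_rng(0)
d, n, gamma = 10, 10_000, 0.1
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project onto unit sphere
w_true = np.eye(d)[0]                           # target halfspace e_1
y = np.sign(X @ w_true)
y[rng.random(n) < 0.05] *= -1                   # flip 5% of the labels

print(margin_error(w_true, X, y, 0.0))    # ~0.05: zero-margin (standard) error
print(margin_error(w_true, X, y, gamma))  # larger: gamma-margin error
```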

