Learning a Single Neuron with Adversarial Label Noise via Gradient Descent
We study the fundamental problem of learning a single neuron, i.e., a function of the form 𝐱 ↦ σ(𝐰·𝐱) for monotone activations σ: ℝ → ℝ, with respect to the L_2^2-loss in the presence of adversarial label noise. Specifically, we are given labeled examples from a distribution D on (𝐱, y) ∈ ℝ^d × ℝ such that there exists 𝐰^⋆ ∈ ℝ^d achieving F(𝐰^⋆) = ϵ, where F(𝐰) = 𝔼_{(𝐱,y)∼D}[(σ(𝐰·𝐱) − y)^2]. The goal of the learner is to output a hypothesis vector 𝐰 such that F(𝐰) ≤ C ϵ with high probability, where C > 1 is a universal constant. As our main contribution, we give efficient constant-factor approximate learners for a broad class of distributions (including log-concave distributions) and activation functions. Concretely, for the class of isotropic log-concave distributions, we obtain the following important corollaries: For the logistic activation, we obtain the first polynomial-time constant-factor approximation (even under the Gaussian distribution). Our algorithm has sample complexity Õ(d/ϵ), which is tight within polylogarithmic factors. For the ReLU activation, we give an efficient algorithm with sample complexity Õ(d · polylog(1/ϵ)). Prior to our work, the best known constant-factor approximate learner had sample complexity Ω̃(d/ϵ). In both of these settings, our algorithms are simple, performing gradient descent on the (regularized) L_2^2-loss. The correctness of our algorithms relies on novel structural results that we establish, showing that (essentially all) stationary points of the underlying non-convex loss are approximately optimal.
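To make the algorithmic template concrete, the following is a minimal sketch of gradient descent on the empirical L_2^2-loss for a single neuron with a logistic activation. It is an illustration under simplifying assumptions, not the paper's exact (regularized) procedure: the step size, iteration count, initialization at the origin, and the toy corruption model used in the usage example are all choices made here for readability.

```python
# Minimal sketch: gradient descent on the empirical L_2^2 loss
# for a single neuron x -> sigma(w . x) with sigma = logistic.
# Hyperparameters and the corruption model below are illustrative assumptions.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def empirical_loss(w, X, y):
    """Empirical version of F(w) = E[(sigma(w . x) - y)^2]."""
    return np.mean((sigmoid(X @ w) - y) ** 2)


def gradient(w, X, y):
    """Gradient of the empirical L_2^2 loss for the logistic activation."""
    s = sigmoid(X @ w)
    # d/dw mean((s - y)^2) = mean(2 (s - y) s (1 - s) x)
    residual = 2.0 * (s - y) * s * (1.0 - s)
    return X.T @ residual / X.shape[0]


def gd_single_neuron(X, y, step=0.5, iters=2000):
    """Vanilla gradient descent from the origin (no regularization here)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= step * gradient(w, X, y)
    return w


if __name__ == "__main__":
    # Toy usage: isotropic Gaussian marginals with a small fraction
    # of labels flipped to mimic adversarial label noise.
    rng = np.random.default_rng(0)
    d, n = 10, 5000
    w_star = rng.normal(size=d)
    w_star /= np.linalg.norm(w_star)
    X = rng.normal(size=(n, d))
    y = sigmoid(X @ w_star)
    corrupt = rng.random(n) < 0.05
    y[corrupt] = 1.0 - y[corrupt]

    w_hat = gd_single_neuron(X, y)
    print("loss at w_hat: ", empirical_loss(w_hat, X, y))
    print("loss at w_star:", empirical_loss(w_star, X, y))
```

The point of the sketch is the structural message of the abstract: even though the L_2^2 loss is non-convex in 𝐰, plain first-order descent lands at a point whose loss is within a constant factor of the noise level ϵ attained by 𝐰^⋆.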