Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK

07/09/2020
by Yuanzhi Li, et al.

We consider the dynamics of gradient descent for learning a two-layer neural network. We assume the input x∈ℝ^d is drawn from a Gaussian distribution and the label of x satisfies f^⋆(x) = a^⊤|W^⋆x|, where a∈ℝ^d is a nonnegative vector and W^⋆∈ℝ^{d×d} is an orthonormal matrix. We show that an over-parametrized two-layer neural network with ReLU activation, trained by gradient descent from random initialization, can provably learn the ground truth network with population loss at most o(1/d), in polynomial time and with polynomially many samples. On the other hand, we prove that any kernel method, including the Neural Tangent Kernel, with a number of samples polynomial in d, has population loss at least Ω(1/d).
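To make the setup concrete, below is a minimal, self-contained sketch (not the authors' code) of the data-generating model and a plain gradient-descent learner: Gaussian inputs, labels f^⋆(x) = a^⊤|W^⋆x| with a nonnegative and W^⋆ orthonormal, and an over-parametrized two-layer ReLU network trained from random initialization. All concrete choices here (d, m, n, the step size, the number of steps, and training both layers jointly) are illustrative assumptions, not the paper's parameterization or its proof setting.

```python
# Illustrative sketch of the setting in the abstract; hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

d = 20        # input dimension (assumed)
m = 200       # hidden width, over-parametrized: m >> d (assumed)
n = 5000      # number of training samples (assumed)
lr = 0.005    # gradient-descent step size (assumed)
steps = 2000  # number of full-batch gradient steps (assumed)

# Ground truth f*(x) = a^T |W* x| with a >= 0 and W* orthonormal.
a_star = np.abs(rng.normal(size=d))                # nonnegative second layer
W_star, _ = np.linalg.qr(rng.normal(size=(d, d)))  # orthonormal matrix

def f_star(X):
    # X has shape (n, d); returns labels of shape (n,).
    return np.abs(X @ W_star.T) @ a_star

# Gaussian inputs and their labels.
X = rng.normal(size=(n, d))
y = f_star(X)

# Over-parametrized two-layer ReLU student, randomly initialized.
W = rng.normal(size=(m, d)) / np.sqrt(d)   # first layer
b = rng.normal(size=m) / np.sqrt(m)        # second layer

def forward(X, W, b):
    H = np.maximum(X @ W.T, 0.0)   # ReLU hidden activations, (n, m)
    return H, H @ b                # predictions, (n,)

for t in range(steps):
    H, pred = forward(X, W, b)
    err = pred - y
    loss = 0.5 * np.mean(err ** 2)

    # Gradients of the mean squared loss with respect to both layers.
    grad_b = H.T @ err / n                   # (m,)
    grad_H = np.outer(err, b) * (H > 0)      # (n, m)
    grad_W = grad_H.T @ X / n                # (m, d)

    W -= lr * grad_W
    b -= lr * grad_b

    if t % 500 == 0:
        print(f"step {t:4d}  train loss {loss:.4f}")

# Rough estimate of the population loss on fresh Gaussian samples.
X_test = rng.normal(size=(n, d))
_, pred_test = forward(X_test, W, b)
print("estimated population loss:", 0.5 * np.mean((pred_test - f_star(X_test)) ** 2))
```

The theoretical guarantee concerns the population loss after polynomially many samples and gradient steps; this toy run only reports an empirical estimate on held-out Gaussian samples and is meant to illustrate the problem setup, not to reproduce the o(1/d) versus Ω(1/d) separation.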
