Learning Two-Layer Residual Networks with Nonparametric Function Estimation by Convex Programming

08/17/2020
by Zhunxuan Wang, et al.

We focus on learning a two-layer residual neural network with preactivation by ReLU (preReLU-TLRN): suppose the input 𝐱 is drawn from a distribution whose support is ℝ^d and the ground-truth generative model is a preReLU-TLRN, given by 𝐲 = B^∗[(A^∗𝐱)^+ + 𝐱], where the ground-truth parameter A^∗ ∈ ℝ^{d×d} is a nonnegative full-rank matrix and B^∗ ∈ ℝ^{m×d} is full-rank with m ≥ d. We design layerwise objectives as functionals whose analytic minimizers sufficiently express the exact ground-truth network in terms of its parameters and nonlinearities. Following this objective landscape, learning a preReLU-TLRN from finite samples can be formulated as convex programming with nonparametric function estimation: for each layer, we first formulate the corresponding empirical risk minimization (ERM) as a convex quadratic program (QP); we then show that the solution space of the QP can be equivalently determined by a set of linear inequalities, which can in turn be solved efficiently by linear programming (LP). Experiments demonstrate the robustness and sample efficiency of our methods.
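As a concrete illustration of the generative model above, here is a minimal sketch of the preReLU-TLRN forward map 𝐲 = B^∗[(A^∗𝐱)^+ + 𝐱]. The helper name prerelu_tlrn_forward and the toy dimensions are assumptions made for illustration only; the learning algorithm itself (the layerwise QP/LP formulation) is not reproduced here.

```python
import numpy as np

def prerelu_tlrn_forward(x, A, B):
    """Hypothetical helper: preReLU-TLRN forward map y = B[(Ax)^+ + x]."""
    return B @ (np.maximum(A @ x, 0.0) + x)

# Toy instance matching the stated assumptions: A is a nonnegative d x d
# matrix (full rank with probability 1) and B is m x d with m >= d.
rng = np.random.default_rng(0)
d, m = 3, 5
A = np.abs(rng.standard_normal((d, d)))   # nonnegative square parameter matrix
B = rng.standard_normal((m, d))           # full column rank a.s. since m >= d
x = rng.standard_normal(d)                # a single input sample from R^d
y = prerelu_tlrn_forward(x, A, B)         # corresponding network output in R^m
```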
