Farkas layers: don't shift the data, fix the geometry

10/04/2019
by Aram-Alexandre Pooladian, et al.

Successfully training deep neural networks often requires either batch normalization or appropriate weight initialization, both of which come with their own challenges. We propose an alternative, geometrically motivated method for training. Using elementary results from linear programming, we introduce Farkas layers: a method that ensures at least one neuron is active at a given layer. Focusing on residual networks with ReLU activation, we empirically demonstrate a significant improvement in training capacity in the absence of batch normalization or careful initialization across a broad range of network sizes on benchmark datasets.
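As a minimal sketch of the "at least one active neuron" guarantee described in the abstract (an illustrative construction, not necessarily the exact layer proposed in the paper), the PyTorch module below appends one extra output unit whose pre-activation is the negative sum of the others. Since the pre-activations then sum to zero, they cannot all be strictly negative, so at least one ReLU unit is active for every input. The class name `FarkasLinear` and its interface are hypothetical.

```python
import torch
import torch.nn as nn


class FarkasLinear(nn.Module):
    """Hypothetical sketch of a layer whose pre-activations cannot all be negative.

    An extra output unit is appended whose pre-activation is the negative sum
    of the others, so the pre-activations sum to zero and at least one of them
    is non-negative, i.e. at least one ReLU unit stays active.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # out_features - 1 freely parameterized units; the last one is determined.
        self.linear = nn.Linear(in_features, out_features - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)                     # (batch, out_features - 1)
        z_last = -z.sum(dim=-1, keepdim=True)  # forces the pre-activations to sum to 0
        pre = torch.cat([z, z_last], dim=-1)   # (batch, out_features)
        return torch.relu(pre)                 # at least one coordinate is non-negative


if __name__ == "__main__":
    layer = FarkasLinear(16, 8)
    x = torch.randn(4, 16)
    y = layer(x)
    # Almost surely, at least one unit per sample is strictly positive.
    print((y > 0).any(dim=-1))
```

The design trades one degree of freedom per layer for the guarantee that no input can deactivate the entire layer, which is the geometric failure mode the abstract attributes to training without batch normalization or careful initialization.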
