Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting

07/13/2017
by   Anders Oland, et al.

In this work, we show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks. Moreover, we present results showing that the utility of the softmax does not stem from the normalization, as some have speculated. In fact, the normalization makes things worse. Rather, the advantage is in the exponentiation of error gradients. This exponential gradient boosting is shown to speed up convergence and improve generalization. To this end, we demonstrate faster convergence and better performance on diverse classification tasks: image classification using CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the latter case, using the state-of-the-art neural network architecture, the model converged 33% faster than with the standard softmax activation, and with slightly better performance to boot.
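
To make the idea concrete, the sketch below contrasts three error gradients at a network's output layer: the raw error of a linear output trained with squared error, a hypothetical exponentially boosted version of that error, and the standard softmax-cross-entropy gradient. The boosting form sign(e)·(exp(α|e|) − 1) and the parameter alpha are illustrative assumptions only; the abstract does not specify the exact boosting function used in the paper.

```python
import numpy as np

# Illustrative sketch (not the authors' exact formulation): with a linear
# output layer, the error on the logits is e = logits - targets. "Exponential
# gradient boosting" is modeled here as amplifying each error component
# exponentially while preserving its sign, instead of normalizing via softmax.

def linear_output_error(logits, targets):
    """Raw error gradient of a linear output layer under squared error."""
    return logits - targets  # d(0.5 * ||logits - targets||^2) / d(logits)

def exp_boosted_error(logits, targets, alpha=1.0):
    """Hypothetical exponential boosting of the error gradient:
    small errors stay small, large errors are amplified exponentially."""
    e = logits - targets
    return np.sign(e) * (np.exp(alpha * np.abs(e)) - 1.0)

def softmax_ce_error(logits, targets):
    """Standard softmax + cross-entropy gradient, for comparison."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p - targets

logits = np.array([[2.0, -1.0, 0.5]])
targets = np.array([[1.0, 0.0, 0.0]])  # one-hot label

print("linear error :", linear_output_error(logits, targets))
print("boosted error:", exp_boosted_error(logits, targets))
print("softmax error:", softmax_ce_error(logits, targets))
```

Note how the softmax gradient is bounded in magnitude (its components lie in [-1, 1]), whereas the boosted linear error grows without bound as the logits move away from the targets; the abstract attributes the benefit of softmax-like training to this exponential amplification rather than to the normalization itself.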
