Foothill: A Quasiconvex Regularization Function
Deep neural networks (DNNs) have demonstrated success on many supervised learning tasks, ranging from voice recognition and object detection to image classification. However, their increasing complexity can degrade generalization. Adding noise to the input data or using an explicit regularization function helps improve generalization. Here we introduce the foothill function, an infinitely differentiable quasiconvex function. This regularizer is flexible enough to deform towards the L_1 and L_2 penalties. Foothill can be used as a loss, as a regularizer, or as a binary quantizer.
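The abstract does not give the functional form of the penalty, so the following is a minimal sketch under an assumption: a smooth, even, quasiconvex function of the form alpha * x * tanh(beta * x / 2), chosen because it interpolates between the limiting behaviors described above (roughly quadratic, L_2-like, for small beta; approaching alpha*|x|, L_1-like, as beta grows). The function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

def foothill(x, alpha=1.0, beta=1.0):
    # Hypothetical smooth quasiconvex penalty (assumed form, see lead-in).
    # For small beta, tanh(beta*x/2) ~ beta*x/2, so the penalty behaves
    # like an L2 term (~ alpha*beta*x**2 / 2). As beta grows, tanh
    # saturates to sign(x) and the penalty approaches alpha*|x| (L1-like).
    return alpha * x * np.tanh(beta * x / 2.0)

def regularized_loss(data_loss, weights, lam=1e-3, alpha=1.0, beta=1.0):
    # Standard additive regularization: data-fitting loss plus the
    # penalty summed over all weights, scaled by lambda.
    return data_loss + lam * np.sum(foothill(weights, alpha, beta))

# Usage: the same penalty deforms between L2-like and L1-like shapes.
w = np.linspace(-3, 3, 7)
print(foothill(w, alpha=1.0, beta=0.1))   # near-quadratic profile
print(foothill(w, alpha=1.0, beta=50.0))  # close to |w|
```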