Monotonic Trends in Deep Neural Networks

09/24/2019
by Akhil Gupta et al.

Domain knowledge is invaluable for enhancing model performance and making reliable predictions in the real world. We focus on incorporating monotonic trends, where an increase in an input implies an increase or decrease in the output, and propose a novel gradient-based point-wise loss function for enforcing partial monotonicity in deep neural networks. While recent developments have focused on structural changes to the model, our approach enhances the learning process itself. The point-wise loss acts as a plug-in to the standard loss and penalizes non-monotonic gradients. We demonstrate that it produces comparable, and sometimes better, results on both AUC and a monotonicity metric than state-of-the-art deep lattice networks that enforce monotonicity. Moreover, it learns customized individual trends and produces smoother conditional curves, which is important for personalized decisions, while preserving the flexibility of deep networks.
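To make the idea concrete, below is a minimal sketch of a gradient-based point-wise monotonicity penalty added to a standard loss, written in PyTorch. It is an illustration of the general technique the abstract describes, not the authors' exact formulation; the names `monotonic_penalty`, `mono_idx`, `sign`, and `lam` are hypothetical.

```python
import torch
import torch.nn as nn

def monotonic_penalty(model, x, mono_idx, sign=1.0):
    """Penalize point-wise gradients that violate the desired trend.

    mono_idx: indices of input features with a monotonicity constraint.
    sign: +1.0 for an increasing trend, -1.0 for a decreasing one.
    """
    x = x.clone().requires_grad_(True)
    y = model(x)
    # d(output)/d(input) at each training point (hence "point-wise")
    grads = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    # Hinge on the wrong-signed part of the gradient: zero when the
    # gradient already has the desired sign, positive otherwise
    violation = torch.relu(-sign * grads[:, mono_idx])
    return violation.mean()

# Plug-in usage: total loss = standard loss + lambda * penalty
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
x, target = torch.randn(32, 4), torch.randn(32, 1)
lam = 1.0  # penalty weight (hypothetical hyperparameter)
loss = nn.functional.mse_loss(model(x), target) \
       + lam * monotonic_penalty(model, x, mono_idx=[0, 2], sign=1.0)
loss.backward()
```

Because the penalty is just an extra term in the loss, it leaves the network architecture untouched, which is how this approach preserves the flexibility of deep networks while lattice-based methods instead constrain the model structure.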
