Interpretable Neural Networks for Panel Data Analysis in Economics

10/11/2020
by Yucheng Yang, et al.

The lack of interpretability and transparency is preventing economists from using advanced tools like neural networks in their empirical work. In this paper, we propose a new class of interpretable neural network models that can achieve both high prediction accuracy and interpretability in regression problems with time-series cross-sectional data. Our model can essentially be written as a simple function of a limited number of interpretable features. In particular, we incorporate a class of interpretable functions named persistent change filters as part of the neural network. We apply this model to predicting individuals' monthly employment status using high-dimensional administrative data in China. We achieve an accuracy of 94.5%, which is comparable to the most accurate conventional machine learning methods. Furthermore, the interpretability of the model allows us to understand the mechanism behind its ability to predict employment status from administrative data: an individual's employment status is closely related to whether she pays different types of insurance. Our work is a useful step towards overcoming the "black box" problem of neural networks, and provides a promising new tool for economists to study administrative and proprietary big data.
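The abstract only sketches the architecture, and the precise definition of the persistent change filter is given in the paper itself. As a rough illustration of the general idea, the snippet below combines a hand-built temporal feature with a small linear head whose coefficients remain readable. The filter implementation (an exponentially decaying accumulation of month-over-month changes), the `decay` parameter, and the names `persistent_change_filter` and `InterpretablePanelNet` are assumptions for illustration only, not the authors' specification.

```python
# Illustrative sketch only: the filter form and network head are assumed,
# not taken from the paper.
import torch
import torch.nn as nn


def persistent_change_filter(x, decay=0.9):
    """Hypothetical persistent change filter.

    x: (batch, T) binary indicator over time (e.g. whether an insurance
    premium was paid each month). Returns a (batch, T) feature that
    accumulates past changes in the indicator with exponential decay,
    so persistent recent switches carry more weight.
    """
    diffs = (x[:, 1:] - x[:, :-1]).abs()           # month-over-month changes
    out = torch.zeros_like(x)
    for t in range(1, x.shape[1]):
        out[:, t] = decay * out[:, t - 1] + diffs[:, t - 1]
    return out


class InterpretablePanelNet(nn.Module):
    """Small head over a handful of interpretable features."""

    def __init__(self, n_features):
        super().__init__()
        # A linear layer keeps the mapping from features to prediction readable.
        self.head = nn.Linear(n_features, 1)

    def forward(self, feature_list):
        feats = torch.stack(feature_list, dim=1)   # (batch, n_features)
        return torch.sigmoid(self.head(feats)).squeeze(-1)


# Toy usage: predict employment probability from two interpretable features.
payments = torch.randint(0, 2, (8, 12)).float()    # 8 individuals, 12 months
pcf_last = persistent_change_filter(payments)[:, -1]
recent_mean = payments[:, -3:].mean(dim=1)
model = InterpretablePanelNet(n_features=2)
prob_employed = model([pcf_last, recent_mean])
print(prob_employed.shape)                         # torch.Size([8])
```

Because the final prediction is a sigmoid of a linear combination of a few named features, the fitted weights can be read directly as the contribution of each feature, which is the kind of interpretability the abstract emphasizes.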
