The Foes of Neural Network's Data Efficiency Among Unnecessary Input Dimensions

07/13/2021
by Vanessa D'Amario, et al.

Datasets often contain input dimensions that are unnecessary to predict the output label, e.g., the background in object recognition, which increase the number of trainable parameters. Deep Neural Networks (DNNs) are robust to increasing the number of parameters in the hidden layers, but it is unclear whether this also holds for the input layer. In this letter, we investigate the impact of unnecessary input dimensions on a central issue of DNNs: their data efficiency, i.e., the number of examples needed to achieve a given generalization performance. Our results show that unnecessary input dimensions that are task-unrelated substantially degrade data efficiency. This highlights the need for mechanisms that remove task-unrelated dimensions in order to enable gains in data efficiency.
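As an illustration of the kind of manipulation the abstract describes, the sketch below appends task-unrelated dimensions (pure noise) to a synthetic classification task and trains a small multilayer perceptron at several training-set sizes, comparing test accuracy with and without the extra dimensions. This is not the authors' experimental setup; the dataset, model, dimension counts, and training-set sizes are illustrative assumptions.

    # Illustrative sketch (not the paper's code): how appending task-unrelated
    # input dimensions can affect data efficiency, i.e. test accuracy as a
    # function of training-set size.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic task with 20 informative (task-related) input dimensions.
    X, y = make_classification(n_samples=20000, n_features=20,
                               n_informative=20, n_redundant=0, random_state=0)
    X_train_full, X_test, y_train_full, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    def pad_with_noise(X, n_extra, rng):
        """Append n_extra task-unrelated dimensions drawn from Gaussian noise."""
        noise = rng.standard_normal((X.shape[0], n_extra))
        return np.hstack([X, noise])

    for n_extra in [0, 100, 500]:          # number of unnecessary, task-unrelated dims
        for n_train in [250, 1000, 4000]:  # data-efficiency sweep over training-set size
            Xtr = pad_with_noise(X_train_full[:n_train], n_extra, rng)
            Xte = pad_with_noise(X_test, n_extra, rng)
            clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300,
                                random_state=0).fit(Xtr, y_train_full[:n_train])
            print(f"noise dims={n_extra:4d}  n_train={n_train:5d}  "
                  f"test acc={clf.score(Xte, y_test):.3f}")

Under these assumptions, the gap in test accuracy between the padded and unpadded inputs at small training-set sizes is one simple way to quantify the loss in data efficiency.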
