A Bayesian encourages dropout

12/22/2014
by Shin-ichi Maeda, et al.

Dropout is one of the key techniques for preventing learning from overfitting. It has been explained as a kind of modified L2 regularization. Here, we shed light on dropout from a Bayesian standpoint. The Bayesian interpretation enables us to optimize the dropout rate, which is beneficial both for learning the weight parameters and for prediction after learning. The experimental results also encourage optimization of the dropout rate.
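
For context, here is a minimal sketch of standard Bernoulli dropout, the baseline the paper builds on. The drop probability p below is a fixed hyperparameter, whereas the paper's Bayesian interpretation treats the dropout rate as a quantity to optimize; that optimization is not reproduced here, and the function name and setup are illustrative rather than taken from the paper.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=np.random.default_rng(0)):
    """Standard (Bernoulli) dropout with inverted scaling.

    p is the drop probability, fixed here by hand; the paper's
    Bayesian view instead optimizes the dropout rate.
    """
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)       # rescale so the expected output matches test time

# Toy usage: activations of a hidden layer.
h = np.ones((2, 4))
print(dropout(h, p=0.5))
```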
