Dropout Inference with Non-Uniform Weight Scaling

04/27/2022
by Zhaoyuan Yang, et al.

Dropout as regularization has been used extensively to prevent overfitting when training neural networks. During training, units and their connections are randomly dropped, which can be viewed as sampling many different submodels from the original model. At test time, weight scaling and Monte Carlo approximation are the two widely applied approaches for approximating the output. Both approaches work well in practice when all submodels are low-bias complex learners. However, in this work, we demonstrate scenarios where some submodels behave closer to high-bias models and a non-uniform weight scaling is a better approximation for inference.
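A minimal sketch (not the authors' implementation) contrasting the two standard dropout-inference approximations described above, plus an illustrative per-unit scaling variant; the layer sizes, dropout rate, and the scaling vector `s` are assumptions for demonstration only:

```python
import torch
import torch.nn as nn

p = 0.5  # dropout rate used during training (assumed)

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p),
    nn.Linear(64, 1),
)

x = torch.randn(8, 20)

# 1) Weight scaling: switch dropout off. PyTorch's nn.Dropout rescales
#    activations by 1/(1-p) during training, so eval mode is equivalent to
#    scaling weights uniformly by the keep probability (1 - p).
model.eval()
with torch.no_grad():
    y_weight_scaling = model(x)

# 2) Monte Carlo approximation: keep dropout active at test time and average
#    the predictions of many sampled submodels.
model.train()  # re-enable dropout sampling
with torch.no_grad():
    y_mc = torch.stack([model(x) for _ in range(100)]).mean(dim=0)

# 3) Non-uniform scaling (illustrative only): instead of one factor (1 - p),
#    scale each hidden unit by its own factor. The vector `s` below is a
#    hypothetical placeholder, not the scaling derived in the paper.
model.eval()
s = torch.full((64,), 1 - p)  # replace with per-unit factors
with torch.no_grad():
    h = torch.relu(model[0](x)) * s  # hidden layer with per-unit scaling
    y_nonuniform = model[3](h)
```

Under uniform `s`, variant (3) coincides with weight scaling; the paper's setting is one where choosing these factors non-uniformly gives a better approximation to the submodel ensemble.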
