Theoretical Insight into Batch Normalization: Data-Dependent Auto-Tuning of Regularization Rate

09/15/2022
by   Lakshmi Annamalai, et al.

Batch normalization is widely used in deep learning to normalize intermediate activations. Deep networks are notoriously difficult to train, requiring careful weight initialization, lower learning rates, and so on. Batch Normalization (BN) addresses these issues by normalizing the inputs to activations to zero mean and unit standard deviation. Making this normalization part of the training process dramatically accelerates the training of very deep networks. A growing body of research seeks a precise theoretical explanation for the success of BN. Most of these theoretical insights attribute the benefits of BN to its influence on optimization, weight-scale invariance, and regularization. Despite BN's undeniable success in accelerating generalization, an analytical characterization relating the effect of BN to the regularization parameter is still missing. This paper brings out the data-dependent auto-tuning of the regularization parameter by BN with analytical proofs. We pose BN as a constrained optimization imposed on non-BN weights, through which we demonstrate its data-statistics-dependent auto-tuning of the regularization parameter. We also give an analytical proof for BN's behavior under a noisy input scenario, which reveals the signal-versus-noise tuning of the regularization parameter. We substantiate our claims with empirical results from experiments on the MNIST dataset.
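As a rough illustration of the normalization step described in the abstract (not the authors' code; a minimal NumPy sketch with hypothetical names such as `batch_norm`, `gamma`, and `beta`), BN standardizes each feature across the mini-batch and then applies a learned scale and shift:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch x of shape (batch, features) to zero mean
    and unit variance per feature, then apply a learned scale and shift."""
    mu = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                     # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # standardized activations
    return gamma * x_hat + beta             # learnable scale (gamma) and shift (beta)

# Example usage with random activations
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 10))   # one mini-batch of activations
gamma, beta = np.ones(10), np.zeros(10)
y = batch_norm(x, gamma, beta)
print(y.mean(axis=0).round(3))   # approximately 0 per feature
print(y.std(axis=0).round(3))    # approximately 1 per feature
```

Because gamma and beta are trainable, the network can recover the original activation scale if that is optimal; the paper's analysis concerns how this normalization, viewed as a constraint on the non-BN weights, implicitly tunes an effective regularization parameter from the data statistics.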
