Differentiable Learning-to-Normalize via Switchable Normalization

06/28/2018
by Ping Luo, et al.

We address a learning-to-normalize problem by proposing Switchable Normalization (SN), which learns to select different operations for different normalization layers of a deep neural network (DNN). The statistics (means and variances) of SN are computed in three scopes: a channel, a layer, and a minibatch. SN selects among them by learning their importance weights in an end-to-end manner. It has several appealing benefits. First, SN adapts to various network architectures and tasks. Second, it is robust to a wide range of batch sizes, maintaining high performance even when a small minibatch is presented (e.g., 2 images/GPU). Third, the above benefits of SN are obtained by treating all channels as a single group, unlike group normalization, which searches for the number of groups as a hyper-parameter. Extensive evaluations demonstrate that SN outperforms its counterparts on various challenging problems, such as image classification on ImageNet, object detection and segmentation on COCO, artistic image stylization, and neural architecture search. The code of SN will be made available at https://github.com/switchablenorms/.
