Extending Label Smoothing Regularization with Self-Knowledge Distillation

09/11/2020
by Ji-Yue Wang, et al.

Inspired by the strong correlation between Label Smoothing Regularization (LSR) and Knowledge Distillation (KD), we propose an algorithm, LsrKD, that boosts training by extending LSR to the KD regime and applying a softer temperature. We then improve LsrKD with a Teacher Correction (TC) method, which manually assigns a larger constant proportion to the correct class in the otherwise uniform teacher distribution. To further improve LsrKD, we develop a self-distillation method named Memory-replay Knowledge Distillation (MrKD) that replaces the uniform-distribution teacher with a knowledgeable one: it penalizes the KD loss between the current model's output distributions and those of its earlier copies along the training trajectory. By preventing the model from drifting too far from its historical output distributions, MrKD stabilizes learning and finds a more robust minimum. Our experiments show that LsrKD improves on LSR consistently at no extra cost, especially on several deep neural networks where LSR is ineffectual. MrKD also significantly improves single-model training. The results further confirm that TC helps both LsrKD and MrKD boost training, especially on the networks where they otherwise fail. Overall, LsrKD, MrKD, and their TC variants are comparable to or outperform LSR, suggesting the broad applicability of these KD-based methods.
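
To make the two losses concrete, below is a minimal PyTorch sketch of how an LsrKD-style loss with optional Teacher Correction, and an MrKD-style memory-replay term, could be written. It is an illustration based only on the abstract: the function names, the epsilon mixing weight, the temperature-squared scaling, and the way snapshot outputs are averaged are assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def lsrkd_loss(student_logits, targets, num_classes, epsilon=0.1,
                   temperature=4.0, correct_mass=None):
        """LSR recast as KD: KL divergence between the student's
        temperature-softened output and a (near-)uniform teacher.

        If correct_mass is set (Teacher Correction), the true class gets
        that fixed probability and the remainder is spread uniformly
        over the other classes.
        """
        if correct_mass is None:
            # Plain LsrKD: uniform teacher distribution over all classes.
            teacher = torch.full_like(student_logits, 1.0 / num_classes)
        else:
            # Teacher Correction: larger constant proportion on the right class.
            off_mass = (1.0 - correct_mass) / (num_classes - 1)
            teacher = torch.full_like(student_logits, off_mass)
            teacher.scatter_(1, targets.unsqueeze(1), correct_mass)

        log_p_student = F.log_softmax(student_logits / temperature, dim=1)
        kd = F.kl_div(log_p_student, teacher, reduction="batchmean") * temperature ** 2
        ce = F.cross_entropy(student_logits, targets)
        # Assumed mixing of the hard-label and KD terms.
        return (1.0 - epsilon) * ce + epsilon * kd

    def mrkd_loss(student_logits, snapshot_logits_list, temperature=4.0):
        """Memory-replay KD: penalize the KL divergence between the current
        model's outputs and the outputs of its earlier training snapshots."""
        log_p_student = F.log_softmax(student_logits / temperature, dim=1)
        loss = 0.0
        for snap_logits in snapshot_logits_list:
            # Snapshot outputs act as fixed teachers, so gradients are detached.
            p_snap = F.softmax(snap_logits.detach() / temperature, dim=1)
            loss = loss + F.kl_div(log_p_student, p_snap, reduction="batchmean")
        return loss * temperature ** 2 / max(len(snapshot_logits_list), 1)

In this reading, LsrKD with correct_mass=None and temperature=1 reduces to ordinary label smoothing, while MrKD would be added to the training objective using logits produced by weight copies saved earlier on the same training run.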
