LGD: Label-guided Self-distillation for Object Detection

09/23/2021
by Peizhen Zhang, et al.

In this paper, we propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation). Previous studies rely on a strong pretrained teacher to provide instructive knowledge for distillation, which could be unavailable in real-world scenarios. Instead, we generate instructive knowledge by inter- and intra-object relation modeling, requiring only student representations and regular labels. In detail, our framework involves sparse label-appearance encoding, inter-object relation adaptation, and intra-object knowledge mapping to obtain the instructive knowledge. Modules in LGD are trained end-to-end with the student detector and are discarded in inference. Empirically, LGD obtains decent results on various detectors, datasets, and extended tasks like instance segmentation. For example, on the MS-COCO dataset, LGD improves RetinaNet with ResNet-50 under 2x single-scale training from 36.2% to 39.0% mAP (+2.8%). For much stronger detectors like FCOS with ResNeXt-101 DCN v2 under 2x multi-scale training (46.1%), LGD achieves 47.9% (+1.8%). On the CrowdHuman pedestrian-detection dataset, LGD boosts mMR by 2.3% for Faster R-CNN with ResNet-50. Compared with a classical teacher-based method, FGFI, LGD not only performs better without requiring a pretrained teacher but also does so with 51% lower training cost beyond inherent student learning.
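To make the pipeline concrete, below is a minimal PyTorch sketch of one label-guided self-distillation step. It follows the three stages named in the abstract (label-appearance encoding, inter-object relation adaptation, intra-object knowledge mapping), but the module names (ImplicitTeacher, distill_loss), the single feature level, and the pooled appearance descriptor are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of label-guided self-distillation (not the authors' code).
# Assumes one student feature level and normalized per-image box labels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitTeacher(nn.Module):
    """Builds instructive knowledge from labels plus student features.

    Loosely mirrors the abstract's three stages: label-appearance encoding
    -> inter-object relation adaptation -> intra-object knowledge mapping.
    Used only during training and discarded at inference.
    """

    def __init__(self, feat_dim: int = 256, num_classes: int = 80):
        super().__init__()
        # Label-appearance encoding: embed (class, box) labels and fuse them
        # with appearance information pooled from the student feature map.
        self.cls_embed = nn.Embedding(num_classes, feat_dim)
        self.box_embed = nn.Linear(4, feat_dim)
        # Inter-object relation adaptation: self-attention over objects.
        self.relation = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        # Intra-object knowledge mapping: pixels attend to object embeddings.
        self.mapping = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)

    def forward(self, student_feat, classes, boxes):
        # student_feat: (1, C, H, W); classes: (N,); boxes: (N, 4) in [0, 1].
        _, C, H, W = student_feat.shape
        # Crude appearance descriptor per object (global pooling stands in
        # for the paper's sparse label-appearance encoding).
        appearance = student_feat.mean(dim=(2, 3)).expand(boxes.size(0), C)
        objects = self.cls_embed(classes) + self.box_embed(boxes) + appearance
        objects = objects.unsqueeze(0)                    # (1, N, C)
        objects, _ = self.relation(objects, objects, objects)
        # Map object-level knowledge back onto the pixel grid.
        pixels = student_feat.flatten(2).transpose(1, 2)  # (1, H*W, C)
        instructive, _ = self.mapping(pixels, objects, objects)
        return instructive.transpose(1, 2).reshape(1, C, H, W)


def distill_loss(student_feat, teacher_feat):
    # Feature-imitation loss; no detach, so gradients reach both the student
    # and the implicit-teacher modules (trained end-to-end, per the abstract).
    return F.mse_loss(student_feat, teacher_feat)


if __name__ == "__main__":
    feat = torch.randn(1, 256, 32, 32)   # one student FPN level
    classes = torch.tensor([3, 17])      # two labeled objects
    boxes = torch.rand(2, 4)             # normalized boxes
    teacher = ImplicitTeacher()
    loss = distill_loss(feat, teacher(feat, classes, boxes))
    print(loss.item())
```

At inference only the student detector would run; the extra modules add training-time cost only, consistent with the abstract's claim that LGD's modules are discarded in inference.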
