D^3ETR: Decoder Distillation for Detection Transformer

11/17/2022
by Xiaokang Chen, et al.

While various knowledge distillation (KD) methods for CNN-based detectors have shown their effectiveness in improving small student models, the baselines and recipes for DETR-based detectors have yet to be built. In this paper, we focus on the transformer decoder of DETR-based detectors and explore KD methods for it. The outputs of the transformer decoder are unordered, so there is no direct correspondence between the predictions of the teacher and the student, which poses a challenge for knowledge distillation. To this end, we propose MixMatcher to align the decoder outputs of DETR-based teachers and students, which mixes two teacher-student matching strategies, i.e., Adaptive Matching and Fixed Matching. Specifically, Adaptive Matching applies bipartite matching to adaptively match the outputs of the teacher and the student in each decoder layer, while Fixed Matching fixes the correspondence between teacher and student outputs by sharing object queries: the teacher's fixed object queries are fed to the student's decoder as an auxiliary group. Based on MixMatcher, we build Decoder Distillation for DEtection TRansformer (D^3ETR), which distills knowledge in decoder predictions and attention maps from the teacher to the student. D^3ETR shows superior performance on various DETR-based detectors with different backbones. For example, D^3ETR improves Conditional DETR-R50-C5 by 7.8/2.4 mAP under the 12/50-epoch training settings with Conditional DETR-R101-C5 as the teacher.
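The two matching strategies in MixMatcher can be pictured with a short sketch. The following is a minimal, illustrative implementation assuming DETR-style per-layer outputs (classification logits and normalized boxes per query); the function names, cost terms, and weights are placeholders chosen for clarity, not the paper's released code.

```python
import torch
from scipy.optimize import linear_sum_assignment


def adaptive_match(teacher_logits, teacher_boxes, student_logits, student_boxes):
    """Adaptive Matching: bipartite matching between teacher and student
    predictions of one decoder layer.

    teacher_logits / student_logits: (num_queries, num_classes)
    teacher_boxes  / student_boxes : (num_queries, 4), normalized cxcywh
    Returns (teacher_idx, student_idx) arrays giving matched prediction pairs.
    """
    # Classification cost: how poorly each student query scores the class
    # predicted by each teacher query (illustrative choice of cost).
    teacher_cls = teacher_logits.softmax(-1).argmax(-1)           # (Nq_t,)
    cls_cost = -student_logits.softmax(-1)[:, teacher_cls].T      # (Nq_t, Nq_s)

    # Box cost: L1 distance between teacher and student box predictions.
    box_cost = torch.cdist(teacher_boxes, student_boxes, p=1)     # (Nq_t, Nq_s)

    cost = cls_cost + 5.0 * box_cost                              # weight is a placeholder
    t_idx, s_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    return t_idx, s_idx


def fixed_match(num_queries):
    """Fixed Matching: the student's auxiliary group is decoded from the
    teacher's fixed object queries, so teacher prediction i corresponds to
    auxiliary student prediction i by construction."""
    idx = torch.arange(num_queries)
    return idx, idx
```

The matched index pairs from either strategy would then be used to compute distillation losses between corresponding teacher and student predictions (and, per the paper, attention maps); the exact loss terms and weighting are not specified in this abstract.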
