Deep Multimodal Fusion for Generalizable Person Re-identification

11/02/2022
by Suncheng Xiang, et al.

Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance. Recently, supervised and semi-supervised learning paradigms, which benefit from large-scale datasets and strong computing power, have achieved competitive performance on specific target domains. However, when Re-ID models are directly deployed in a new domain without target samples, they usually suffer from considerable performance degradation and poor domain generalization. To address this challenge, in this paper we propose DMF, a Deep Multimodal Fusion network for generalizable person re-identification, in which rich semantic knowledge is introduced to assist feature representation learning during the pre-training stage. On top of this, a multimodal fusion strategy translates data of different modalities into the same feature space, which significantly boosts the generalization capability of the Re-ID model. In the fine-tuning stage, a realistic dataset is adopted to fine-tune the pre-trained model and align its distribution with the real world. Comprehensive experiments on benchmarks demonstrate that our proposed method significantly outperforms previous domain generalization and meta-learning methods. Our source code will be made publicly available at https://github.com/JeremyXSC/DMF.
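To make the fusion idea concrete, below is a minimal PyTorch sketch of a module that projects features from two modalities into a shared embedding space and fuses them. This is an illustrative assumption, not the paper's actual architecture: the module name MultimodalFusion, the feature dimensions, and the concatenation-based fusion head are all hypothetical choices for exposition.

```python
# Illustrative sketch only. All names and dimensions here are assumptions
# made for exposition; the paper's real architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalFusion(nn.Module):
    """Project image and text features into one shared embedding space,
    then fuse them into a single representation, so heterogeneous
    pre-training data can drive one Re-ID feature extractor."""

    def __init__(self, img_dim: int = 2048, txt_dim: int = 768, embed_dim: int = 512):
        super().__init__()
        # Modality-specific linear projections into the shared space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Simple fusion head over the concatenated embeddings.
        self.fuse = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        # L2-normalize each modality so both live on the same unit sphere
        # before fusion.
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return self.fuse(torch.cat([z_img, z_txt], dim=-1))


if __name__ == "__main__":
    fusion = MultimodalFusion()
    img = torch.randn(4, 2048)  # e.g. CNN backbone features for 4 images
    txt = torch.randn(4, 768)   # e.g. text-encoder features for 4 captions
    print(fusion(img, txt).shape)  # torch.Size([4, 512])
```

In this sketch, the shared 512-dimensional space plays the role of the common feature space described in the abstract; a fused embedding of this kind could then be fine-tuned on a realistic Re-ID dataset for distribution alignment.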
