HPILN: A feature learning framework for cross-modality person re-identification

06/07/2019
by   Jian-Wu Lin, et al.

Most video surveillance systems use both RGB and infrared cameras, making it vital to re-identify a person across the RGB and infrared modalities. This task is challenging because of both the cross-modality variations caused by the heterogeneous RGB and infrared images and the intra-modality variations caused by differing human poses, camera views, lighting conditions, etc. To meet these challenges, a novel feature learning framework, HPILN, is proposed. In this framework, existing single-modality re-identification models are modified to fit the cross-modality scenario, after which a specifically designed hard pentaplet loss and an identity loss are used to improve the performance of the modified cross-modality re-identification models. Extensive experiments on the SYSU-MM01 benchmark show that the proposed method outperforms all existing methods in terms of the Cumulative Match Characteristic (CMC) curve and mean Average Precision (mAP).
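The abstract does not spell out the pentaplet formulation, but the stated idea of handling cross-modality and intra-modality variations jointly suggests a loss built from two batch-hard triplet terms plus an identity classification term. The following PyTorch sketch illustrates that interpretation only; the mining scheme, the `margin` value, and the combination with a hypothetical identity-classifier head are all assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a hard-pentaplet-style loss (assumed formulation):
# for each anchor, mine the hardest cross-modality positive/negative and the
# hardest intra-modality positive/negative within a batch, and sum the two
# resulting batch-hard triplet terms. An identity loss is added separately.
import torch
import torch.nn.functional as F


def hard_pentaplet_loss(features, labels, modalities, margin=0.3):
    """features: (B, D) embeddings, labels: (B,) person IDs,
    modalities: (B,) 0 for RGB, 1 for infrared."""
    dist = torch.cdist(features, features)                 # (B, B) pairwise distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-identity mask
    same_mod = modalities.unsqueeze(0) == modalities.unsqueeze(1)
    big = dist.max().detach() + 1.0                        # sentinel for masked-out entries
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)

    def batch_hard(pos_mask, neg_mask):
        # hardest positive: farthest same-ID sample allowed by pos_mask
        hardest_pos = (dist * pos_mask).max(dim=1).values
        # hardest negative: closest different-ID sample allowed by neg_mask
        hardest_neg = (dist + big * (~neg_mask)).min(dim=1).values
        return F.relu(hardest_pos - hardest_neg + margin).mean()

    # cross-modality term: positives/negatives drawn from the other modality
    cross = batch_hard(same_id & ~same_mod, ~same_id & ~same_mod)
    # intra-modality term: positives/negatives drawn from the same modality
    intra = batch_hard(same_id & same_mod & ~eye, ~same_id & same_mod)
    return cross + intra


# Usage sketch: total objective = hard pentaplet loss + identity loss,
# where `logits` come from a hypothetical identity-classifier head.
# loss = hard_pentaplet_loss(feats, pids, mods) + F.cross_entropy(logits, pids)
```

The sketch assumes each batch is sampled so that every identity appears in both modalities (a PK-style sampler per modality); without that, some anchors would have no valid cross-modality positive to mine.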
