Continuous Adaptation of Multi-Camera Person Identification Models through Sparse Non-redundant Representative Selection
The problem of image-based person identification/recognition is to assign an identity to the image of an individual based on learned models that describe his/her appearance. Most traditional person identification systems rely on learning a static model from tediously labeled training data. Though manual labeling is an indispensable part of a supervised framework, for a large-scale identification system labeling a huge amount of data is a significant overhead. For large multi-sensor data as typically encountered in camera networks, labeling more samples does not always yield more information, as redundant images are labeled several times. In this work, we propose a convex-optimization-based iterative framework that progressively and judiciously chooses a sparse but informative set of samples for labeling, with minimal overlap with previously labeled images. We also use a structure-preserving, sparse-reconstruction-based classifier to reduce the training burden typically seen in discriminative classifiers. The two-stage approach leads to a novel framework for the online update of the classifiers, involving only the incorporation of new labeled data rather than an expensive retraining phase. We evaluate our approach on multi-camera person re-identification datasets, showing the feasibility of learning online classification models in multi-camera big-data applications. Using three benchmark datasets, we validate our approach and demonstrate that our framework achieves superior performance with significantly less manual labeling.
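To make the sparse representative-selection idea concrete, below is a minimal sketch (not the paper's exact formulation) of choosing a small, non-redundant set of samples via convex optimization, using a row-sparsity-regularized self-representation objective: the few samples whose coefficient rows are most active are the ones that best reconstruct the whole set. The function name, the regularization weight `lam`, and the budget `k` are hypothetical illustration parameters.

```python
# Sketch of convex sparse representative selection (assumed formulation,
# in the spirit of "select a few samples that reconstruct all others").
import numpy as np
import cvxpy as cp

def select_representatives(X, lam=1.0, k=10):
    """Pick k rows of X (samples) that sparsely reconstruct the whole set.

    Solves   min_C  ||X - C^T X||_F^2 + lam * sum_i ||c_i||_2
    where c_i is the i-th row of C. The group norm drives most rows of C
    to zero, so only a few samples are used to reconstruct the others;
    those samples are returned as the representatives to be labeled.
    """
    n = X.shape[0]
    C = cp.Variable((n, n))
    objective = cp.Minimize(
        cp.sum_squares(X - C.T @ X) + lam * cp.sum(cp.norm(C, 2, axis=1))
    )
    cp.Problem(objective).solve()
    row_norms = np.linalg.norm(C.value, axis=1)       # activity of each sample
    return np.argsort(row_norms)[::-1][:k]            # top-k representative indices

# Toy usage: 100 feature vectors, ask for 5 non-redundant representatives.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((100, 32))
    print(select_representatives(feats, lam=1.0, k=5))
```

In an iterative labeling loop, such a step would be re-run on the unlabeled pool with the already-labeled images excluded (or penalized), so newly selected samples overlap minimally with previous batches; the paper's actual constraints and classifier update are described in the full text.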