Joint Person Segmentation and Identification in Synchronized First- and Third-person Videos
As cameras become increasingly pervasive, scenes in public spaces are often captured from multiple perspectives by diverse types of cameras, including surveillance and wearable cameras. An important problem is how to organize these heterogeneous collections of videos by finding connections between them, such as establishing correspondences between the people appearing in the videos and the people wearing the cameras. In this paper, we consider scenarios in which multiple cameras of different types observe a scene involving multiple people, and we wish to solve two specific, related problems: (1) given two or more synchronized third-person videos of a scene, produce a pixel-level segmentation of each visible person and identify corresponding people across views (i.e., determine who in camera A corresponds to whom in camera B), and (2) given one or more synchronized third-person videos as well as a first-person video taken by a wearable camera, segment and identify the camera wearer in the third-person videos. Unlike previous work, which requires ground-truth bounding boxes to estimate correspondences, we perform person segmentation and identification jointly. Solving the two problems simultaneously is mutually beneficial: finer-grained segmentations enable better matching across views, and information from multiple views enables more accurate segmentation. We evaluate our approach on a challenging dataset of interacting people captured from multiple wearable cameras, and show that our proposed method performs significantly better than the state of the art on both person segmentation and identification.
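The abstract does not specify the matching formulation, but the coupling it describes, where mask-level appearance cues feed cross-view identification, can be illustrated with a minimal sketch. Assuming hypothetical per-view CNN feature maps and predicted person masks (the function names, cosine cost, and Hungarian assignment below are illustrative assumptions, not the paper's actual method), one could pool features inside each mask and solve a one-to-one assignment between camera views:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def mask_pooled_embedding(feature_map, mask):
    """Average CNN features over a person's segmentation mask.

    feature_map: (d, H, W) array of per-pixel features (assumed given).
    mask:        (H, W) boolean mask for one person.

    Pooling only inside the mask, rather than over a whole bounding box,
    is one way finer segmentations could sharpen the matching signal.
    """
    d = feature_map.shape[0]
    flat = feature_map.reshape(d, -1)
    m = mask.reshape(-1).astype(feature_map.dtype)
    return flat @ m / max(m.sum(), 1.0)


def match_people_across_views(emb_a, emb_b):
    """One-to-one cross-view identification via optimal assignment.

    emb_a: (n, d) per-person embeddings from camera A.
    emb_b: (m, d) per-person embeddings from camera B.
    Returns (i, j) pairs meaning person i in A matches person j in B.
    """
    # Cosine-distance cost between every pair of people across views.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```

In this sketch, the mutual benefit the abstract describes would arise from iterating: better masks yield cleaner pooled embeddings and thus better matches, while confident cross-view matches could in turn be used as additional evidence to refine the per-view masks.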