Co-segmentation Inspired Attention Module for Video-based Computer Vision Tasks

11/14/2021
by Arulkumar Subramaniam, et al.

Computer vision tasks can benefit from estimating the salient object regions and the interactions between those regions. Identifying object regions typically involves using pretrained models for object detection, object segmentation, and/or object pose estimation. However, this is often infeasible in practice, for the following reasons: 1) the object categories in a pretrained model's training dataset may not exhaustively cover all the categories needed for general computer vision tasks; 2) the domain gap between the pretrained model's training dataset and the target task's dataset may negatively impact performance; 3) the bias and variance present in pretrained models may leak into the target task, leading to an inadvertently biased target model. To overcome these downsides, we exploit the common rationale that a sequence of video frames captures a set of common objects and interactions between them; thus, a notion of co-segmentation among the video frame features may equip the model with the ability to automatically focus on salient regions and improve the underlying task's performance in an end-to-end manner. In this regard, we propose a generic module called the "Co-Segmentation Activation Module" (COSAM), which can be plugged into any CNN to promote co-segmentation-based attention among a sequence of video frame features. We show the application of COSAM in three video-based tasks, namely 1) video-based person re-ID, 2) video captioning, and 3) video action classification, and demonstrate that COSAM captures salient regions in the video frames, leading to notable performance improvements along with interpretable attention maps.
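
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of a co-segmentation-style attention module for video frame features. It is not the paper's exact COSAM formulation: the module name CoSegAttention, the 1x1 projection, the reduced dimension, and the "compare each frame against the mean descriptor of the other frames" rule are all illustrative assumptions. The sketch only conveys the general mechanism the abstract describes: spatial locations that resemble content shared across the sequence receive higher attention, and the frame features are re-weighted accordingly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoSegAttention(nn.Module):
    """Hypothetical co-segmentation-inspired attention over T frames."""

    def __init__(self, channels: int, reduced: int = 128):
        super().__init__()
        # Project features to a lower-dimensional space before comparison
        # (the reduced dimension is an arbitrary illustrative choice).
        self.project = nn.Conv2d(channels, reduced, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, T, C, H, W) -- CNN features for T frames per video.
        b, t, c, h, w = feats.shape
        x = self.project(feats.flatten(0, 1))      # (B*T, R, H, W)
        r = x.shape[1]
        x = x.view(b, t, r, h * w)                 # (B, T, R, HW)

        # Summary descriptor of each frame: spatially averaged features.
        desc = x.mean(dim=-1)                      # (B, T, R)

        # For each frame, the co-segmentation reference is the mean
        # descriptor of all *other* frames in the same sequence.
        total = desc.sum(dim=1, keepdim=True)      # (B, 1, R)
        ref = (total - desc) / max(t - 1, 1)       # (B, T, R)

        # Cosine-correlate each spatial location with the reference:
        # locations resembling content common across frames score high.
        x_norm = F.normalize(x, dim=2)
        ref_norm = F.normalize(ref, dim=2)
        attn = torch.einsum("btrs,btr->bts", x_norm, ref_norm)  # (B, T, HW)
        attn = torch.sigmoid(attn).view(b, t, 1, h, w)

        # Re-weight the original features with the spatial attention map.
        return feats * attn

# Usage: re-weight mid-level features for a clip of 8 frames.
feats = torch.randn(2, 8, 512, 14, 14)            # (B, T, C, H, W)
attended = CoSegAttention(channels=512)(feats)    # same shape as input
```

In practice, such a module could be inserted between intermediate stages of the backbone CNN, so that later layers and the task head operate on features already re-weighted toward regions shared across the clip, which matches the plug-in, end-to-end usage the abstract describes.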

