Semantic Video Moments Retrieval at Scale: A New Task and a Baseline

10/15/2022
by Na Li, et al.

Motivated by the growing need to save search effort by retrieving relevant video clips rather than whole videos, we propose a new task, Semantic Video Moments Retrieval at scale (SVMR), which aims to find relevant videos and re-localize the target clips within them. Our task is more challenging than a simple combination of video retrieval and video re-localization for several reasons. In the first stage, SVMR must account for the facts that: 1) a positive candidate long video can contain many irrelevant yet semantically meaningful clips; and 2) a long video can be positive for two entirely different query clips if it contains clips relevant to both. The second, re-localization stage also departs from existing video re-localization tasks, which assume the reference video must contain segments semantically matching the query clip; in our scenario, a retrieved long video can be a false positive due to first-stage inaccuracy. To address these challenges, we propose a two-stage baseline: candidate video retrieval followed by a novel attention-based query-reference semantic alignment framework that re-localizes target clips within the candidate videos. Furthermore, we build two benchmark datasets from the off-the-shelf ActivityNet-1.3 and HACS datasets for a thorough evaluation of SVMR models. Extensive experiments show that our solution outperforms several reference solutions.
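To make the two-stage pipeline concrete, the sketch below retrieves candidate videos by clip-level embedding similarity and then re-localizes the target segment with cross-attention between query and reference clip features. This is a minimal illustration under assumed names and dimensions (SVMRBaseline, feat_dim, the start/end scoring head); it is not the paper's actual architecture, whose alignment framework is only described at a high level in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVMRBaseline(nn.Module):
    """Hypothetical sketch of a two-stage SVMR pipeline: stage 1 ranks
    whole videos by clip-to-query similarity; stage 2 cross-attends
    reference clips to the query and scores each temporal position."""

    def __init__(self, feat_dim=512, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads,
                                                batch_first=True)
        self.scorer = nn.Linear(feat_dim, 2)  # per-position start/end logits

    @staticmethod
    def retrieve(query_feats, video_feats, top_k=5):
        # Stage 1: score each long video by its best-matching clip, so a
        # video is positive if ANY of its clips matches the query, even
        # when most of its clips are irrelevant but meaningful.
        q = F.normalize(query_feats.mean(dim=0, keepdim=True), dim=-1)  # (1, D)
        scores = []
        for v in video_feats:                      # v: (T_i, D) clip features
            sim = F.normalize(v, dim=-1) @ q.t()   # (T_i, 1) cosine similarity
            scores.append(sim.max())
        scores = torch.stack(scores)
        return scores.topk(min(top_k, len(video_feats))).indices

    def relocalize(self, query_feats, ref_feats):
        # Stage 2: each reference position attends to the query clip,
        # then is scored for being the segment start or end.
        q = query_feats.unsqueeze(0)              # (1, Tq, D)
        r = ref_feats.unsqueeze(0)                # (1, Tr, D)
        aligned, _ = self.cross_attn(r, q, q)     # reference attends to query
        return self.scorer(aligned).squeeze(0)    # (Tr, 2) start/end logits

# Toy usage with random features (D = 512).
model = SVMRBaseline()
query = torch.randn(16, 512)                      # 16 query-clip features
videos = [torch.randn(t, 512) for t in (40, 80)]  # two long videos
cand = model.retrieve(query, videos, top_k=1)
logits = model.relocalize(query, videos[cand[0]])
```

One design note implied by the task: because a retrieved video may be a false positive, a deployed system would additionally threshold the stage-2 scores and reject candidates whose best start/end logits fall below it, rather than always emitting a segment.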
