Weakly Supervised Action Localization by Sparse Temporal Pooling Network
We propose a weakly supervised temporal action localization algorithm for untrimmed videos using convolutional neural networks. Our algorithm predicts the temporal intervals of human actions from video-level class labels alone, without requiring any temporal annotations of the actions. To this end, we propose a novel deep neural network that recognizes actions and identifies a sparse set of key segments associated with them through adaptive temporal pooling of video segments. The loss function of the network comprises two terms: one for classification error and the other for sparsity of the selected segments. After recognizing actions with sparse attention weights over key segments, we extract temporal action proposals using temporal class activation maps to estimate the time intervals that localize the target actions. The proposed algorithm attains state-of-the-art accuracy on the THUMOS14 dataset and outstanding performance on ActivityNet1.3 despite the weak supervision.
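The two mechanisms the abstract describes can be illustrated with a minimal sketch in plain Python: attention-weighted temporal pooling with a classification-plus-sparsity loss, and thresholding of per-segment class activations into temporal proposals. All function names, toy scores, and the `beta` weight below are illustrative assumptions, not the paper's actual implementation, which operates on CNN features with learned parameters.

```python
import math

def sigmoid(x):
    """Logistic function squashing an attention logit into [0, 1]."""
    return 1.0 / (1.0 + math.exp(-x))

def pool_and_loss(segment_scores, attention_logits, label, beta=0.1):
    """Toy attention-weighted temporal pooling (hypothetical sketch).

    segment_scores: T x C per-segment class scores (stand-ins for CNN outputs).
    attention_logits: T per-segment attention logits.
    Returns video-level class scores and a loss combining classification
    error with an L1 sparsity term on the attention weights.
    """
    w = [sigmoid(a) for a in attention_logits]  # per-segment attention weights
    T, C = len(w), len(segment_scores[0])
    # Video-level class scores: attention-weighted average over segments.
    pooled = [
        sum(w[t] * segment_scores[t][c] for t in range(T)) / sum(w)
        for c in range(C)
    ]
    # Softmax cross-entropy against the video-level label.
    m = max(pooled)
    log_norm = math.log(sum(math.exp(s - m) for s in pooled))
    cls_loss = -((pooled[label] - m) - log_norm)
    # L1 penalty encouraging only a sparse set of key segments.
    sparsity_loss = sum(w) / T
    return pooled, cls_loss + beta * sparsity_loss

def proposals_from_tcam(tcam, threshold=0.5):
    """Group consecutive segments whose activation exceeds a threshold
    into candidate temporal intervals (start, end), inclusive."""
    intervals, start = [], None
    for t, v in enumerate(tcam):
        if v > threshold and start is None:
            start = t
        elif v <= threshold and start is not None:
            intervals.append((start, t - 1))
            start = None
    if start is not None:
        intervals.append((start, len(tcam) - 1))
    return intervals
```

For example, a one-dimensional activation sequence `[0.1, 0.8, 0.9, 0.2, 0.7]` with threshold 0.5 yields the intervals `[(1, 2), (4, 4)]`; in the actual method such intervals are derived from class-specific activations weighted by the attention.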