GPU-based Commonsense Paradigms Reasoning for Real-Time Query Answering and Multimodal Analysis

07/14/2018
by Nguyen Ha Tran, et al.

We utilize commonsense knowledge bases to address the problem of real-time multimodal analysis. In particular, we focus on multimodal sentiment analysis, which consists in the simultaneous analysis of different modalities, e.g., speech and video, for emotion and polarity detection. Our approach takes advantage of the massively parallel processing power of modern GPUs to enhance the performance of feature extraction from the different modalities. In addition, in order to extract important textual features from multimodal sources, we generate domain-specific graphs based on commonsense knowledge and apply GPU-based graph traversal for fast feature detection. Then, powerful ELM classifiers are applied to build the sentiment analysis model based on the extracted features. We conduct our experiments on the YouTube dataset and achieve an accuracy of 78%. In terms of processing speed, our method shows improvements of several orders of magnitude for feature extraction compared to CPU-based counterparts.
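The feature-detection step described above maps naturally to a frontier-based (level-synchronous) graph traversal, in which every node in the current frontier can be expanded independently; that independence is what a GPU implementation parallelizes. The sketch below is a minimal serial NumPy illustration of this idea over a concept graph stored in CSR form, not the paper's implementation; the names (`expand_concepts`, `row_ptr`, `col_idx`, `seed_nodes`) are illustrative placeholders.

```python
import numpy as np

def expand_concepts(row_ptr, col_idx, seed_nodes, max_hops=2):
    """Collect every concept reachable from seed_nodes within max_hops edges."""
    n_nodes = len(row_ptr) - 1
    visited = np.zeros(n_nodes, dtype=bool)
    visited[seed_nodes] = True
    frontier = np.asarray(seed_nodes, dtype=int)
    for _ in range(max_hops):
        if frontier.size == 0:
            break
        # Each frontier node is expanded independently; on a GPU this loop
        # becomes one thread (or warp) per node reading its CSR adjacency slice.
        neighbours = np.concatenate(
            [col_idx[row_ptr[u]:row_ptr[u + 1]] for u in frontier]
        )
        new_nodes = np.unique(neighbours[~visited[neighbours]])
        visited[new_nodes] = True
        frontier = new_nodes
    return np.flatnonzero(visited)

# Toy CSR graph: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
row_ptr = np.array([0, 2, 3, 4, 4])
col_idx = np.array([1, 2, 3, 3])
print(expand_concepts(row_ptr, col_idx, seed_nodes=[0], max_hops=2))  # [0 1 2 3]
```

For the classification stage, ELM (extreme learning machine) classifiers fix a randomly initialized hidden layer and solve only the output weights in closed form via a pseudoinverse, which is what makes them fast to train. The following is a generic, self-contained ELM sketch under those standard assumptions; the class name, hidden size, and toy data are our own and do not reflect the authors' code.

```python
import numpy as np

class ELMClassifier:
    """Single-hidden-layer ELM: random fixed input weights, closed-form output weights."""
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # hidden-layer activations
        T = np.eye(n_classes)[y]                # one-hot targets
        self.beta = np.linalg.pinv(H) @ T       # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

X = np.random.rand(200, 50)        # stand-in for fused multimodal feature vectors
y = np.random.randint(0, 2, 200)   # stand-in polarity labels
clf = ELMClassifier().fit(X, y)
print(clf.predict(X[:5]))
```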
