A Feature Embedding Strategy for High-level CNN representations from Multiple ConvNets

05/11/2017
by   Thangarajah Akilan, et al.

Following the rapidly growing use of digital images, automatic image categorization has become a preeminent research area. It has broadened over time and adopted many algorithms, whereby multi-feature (generally, hand-engineered features) based image characterization comes in handy to improve accuracy. Recently, in machine learning, it has been shown that features extracted from pre-trained deep convolutional neural networks (DCNNs, or ConvNets) can improve classification accuracy. Hence, in this paper, we further investigate a feature embedding strategy that exploits cues from multiple DCNNs. We derive a generalized feature space by embedding three different DCNN bottleneck features, weighted with respect to their Softmax cross-entropy loss. Test outcomes on six object classification data-sets and an action classification data-set show that, regardless of variation in image statistics and tasks, the proposed multi-DCNN bottleneck feature fusion is well suited to image classification and is an effective complement to a single DCNN. Comparisons to existing fusion-based image classification approaches show that the proposed method surpasses the state-of-the-art methods and produces results competitive with fully trained DCNNs as well.
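The sketch below illustrates the general idea of weighted multi-DCNN bottleneck-feature fusion described in the abstract. It assumes three ImageNet-pretrained backbones (VGG16, ResNet50, InceptionV3) as the feature extractors and a weighting rule that maps each network's validation Softmax cross-entropy loss to a fusion weight via a softmax over negated losses; the exact backbones and weighting formula used in the paper may differ, so treat this as an illustrative approximation rather than the authors' implementation.

```python
# Minimal sketch of loss-weighted fusion of bottleneck features from
# multiple pretrained ConvNets (backbones and weighting rule are assumptions).
import numpy as np
import tensorflow as tf

IMG_SIZE = (224, 224)

def bottleneck_extractor(model_fn, preprocess):
    """Wrap a pretrained backbone as a frozen, global-average-pooled feature extractor."""
    base = model_fn(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=IMG_SIZE + (3,))
    base.trainable = False
    return lambda x: base(preprocess(tf.cast(x, tf.float32)), training=False)

extractors = [
    bottleneck_extractor(tf.keras.applications.VGG16,
                         tf.keras.applications.vgg16.preprocess_input),
    bottleneck_extractor(tf.keras.applications.ResNet50,
                         tf.keras.applications.resnet50.preprocess_input),
    bottleneck_extractor(tf.keras.applications.InceptionV3,
                         tf.keras.applications.inception_v3.preprocess_input),
]

def fusion_weights(val_losses):
    """Map per-network Softmax cross-entropy losses (measured on a validation
    split) to fusion weights: lower loss -> larger weight (softmax of -loss).
    This particular mapping is an illustrative assumption."""
    neg = -np.asarray(val_losses, dtype=np.float64)
    e = np.exp(neg - neg.max())
    return e / e.sum()

def fused_features(images, weights):
    """Embed images with each backbone, scale by its weight, and concatenate
    into the generalized feature space fed to a downstream classifier."""
    feats = [w * ext(images).numpy() for w, ext in zip(weights, extractors)]
    return np.concatenate(feats, axis=1)

# Example usage with hypothetical validation losses and a random image batch.
w = fusion_weights([0.82, 0.61, 0.67])
batch = np.random.randint(0, 255, (4,) + IMG_SIZE + (3,)).astype(np.uint8)
X = fused_features(batch, w)   # shape: (4, 512 + 2048 + 2048)
print(X.shape, w)
```

The fused vectors can then be passed to any shallow classifier (e.g., a linear Softmax layer or SVM), which is the usual way bottleneck features are consumed; the paper's specific classifier is not stated in the abstract.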
