Deep Visual Attention Prediction

05/07/2017
by   Wenguan Wang, et al.

Deep Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction. However, there remains room for improvement, as existing deep learning based attention models do not explicitly address the scale-space feature learning problem. Our method learns to predict human eye fixations in free-viewing scenes with an end-to-end deep learning architecture. The attention model captures hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. We base our model on a skip-layer network structure that predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved by combining these global and local predictions. Our model is trained in a deeply supervised manner, where supervision is fed directly into multiple intermediate layers, instead of the previous practice of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on several challenging benchmark datasets demonstrates that our method achieves state-of-the-art performance with competitive inference time.
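
As a rough illustration of the skip-layer, deeply supervised design described in the abstract, the sketch below takes side outputs from several stages of a VGG-16 backbone (each stage having a different receptive field), upsamples each to a saliency map, fuses them, and supervises every side output as well as the fused prediction. The class name, the choice of VGG-16, the stage split points, and the loss function are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class SkipLayerSaliencyNet(nn.Module):
    """Sketch (not the paper's exact model): multi-level saliency prediction
    from several backbone stages, fused into a final map."""

    def __init__(self):
        super().__init__()
        features = vgg16(weights=None).features
        # Split the VGG-16 backbone into stages; split points are illustrative.
        self.stage1 = features[:16]    # up to conv3_3: fine, local features
        self.stage2 = features[16:23]  # up to conv4_3: mid-level features
        self.stage3 = features[23:30]  # up to conv5_3: coarse, global features
        # 1x1 convolutions map each stage's features to a 1-channel saliency map.
        self.side1 = nn.Conv2d(256, 1, kernel_size=1)
        self.side2 = nn.Conv2d(512, 1, kernel_size=1)
        self.side3 = nn.Conv2d(512, 1, kernel_size=1)
        # Learned fusion of the multi-level predictions.
        self.fuse = nn.Conv2d(3, 1, kernel_size=1)

    def forward(self, x):
        size = x.shape[2:]
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Upsample every side output back to the input resolution.
        s1 = F.interpolate(self.side1(f1), size=size, mode="bilinear", align_corners=False)
        s2 = F.interpolate(self.side2(f2), size=size, mode="bilinear", align_corners=False)
        s3 = F.interpolate(self.side3(f3), size=size, mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([s1, s2, s3], dim=1))
        return [s1, s2, s3, fused]


def deeply_supervised_loss(outputs, target):
    # Deep supervision: every side output and the fused map receive a loss term.
    return sum(F.binary_cross_entropy_with_logits(o, target) for o in outputs)


# Usage: one training step on a dummy batch of images and fixation maps.
model = SkipLayerSaliencyNet()
images = torch.randn(2, 3, 224, 224)
fixation_maps = torch.rand(2, 1, 224, 224)
loss = deeply_supervised_loss(model(images), fixation_maps)
loss.backward()
```

Because all side outputs live in one network, a single forward pass yields the multi-scale predictions that would otherwise require separate network streams with differently scaled inputs.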
