SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization

12/10/2019
by Xianzhi Du, et al.

Convolutional neural networks typically encode an input image into a series of intermediate features with decreasing resolutions. While this structure is suited to classification tasks, it does not perform well for tasks requiring simultaneous recognition and localization (e.g., object detection). Encoder-decoder architectures have been proposed to resolve this by applying a decoder network onto a backbone model designed for classification tasks. In this paper, we argue that the encoder-decoder architecture is ineffective in generating strong multi-scale features because of the scale-decreased backbone. We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search. SpineNet achieves state-of-the-art performance for a one-stage object detector on COCO with 60% less computation, and outperforms ResNet-FPN counterparts by 6% AP. The SpineNet architecture can also transfer to classification tasks, achieving a 6% top-1 accuracy improvement on the challenging iNaturalist fine-grained dataset.
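To make the core idea concrete, below is a minimal, illustrative PyTorch sketch of a "scale-permuted" backbone with cross-scale connections. The block ordering, channel widths, and parent connections here are hypothetical and chosen only for readability; in SpineNet the permutation and connections are found by Neural Architecture Search on object detection, not hand-designed as in this toy example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Resample(nn.Module):
    """Match an input feature map to a target scale and channel count."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x, target_hw):
        x = self.proj(x)
        # Upsample or downsample to the target spatial size (cross-scale fusion).
        return F.interpolate(x, size=target_hw, mode="bilinear", align_corners=False)


class Block(nn.Module):
    """A block that fuses two parent features at a given output scale."""
    def __init__(self, parent_chs, out_ch):
        super().__init__()
        self.resamples = nn.ModuleList(Resample(c, out_ch) for c in parent_chs)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, parents, target_hw):
        fused = sum(r(p, target_hw) for r, p in zip(self.resamples, parents))
        return self.conv(fused)


class ToyScalePermutedBackbone(nn.Module):
    """Block scales are permuted (8x, 32x, 16x, 64x downsampling) rather than
    monotonically decreasing, and every block reads from two earlier blocks."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, kernel_size=7, stride=4, padding=3)  # 4x down
        # (parent block indices, output stride, output channels) -- hypothetical wiring.
        self.spec = [((0, 0), 8, 64), ((0, 1), 32, 128), ((1, 2), 16, 96), ((2, 3), 64, 128)]
        self.blocks = nn.ModuleList()
        chs = [64]
        for parents, _, out_ch in self.spec:
            self.blocks.append(Block([chs[p] for p in parents], out_ch))
            chs.append(out_ch)

    def forward(self, x):
        feats = [self.stem(x)]
        h, w = x.shape[-2:]
        for (parents, stride, _), block in zip(self.spec, self.blocks):
            target_hw = (h // stride, w // stride)
            feats.append(block([feats[p] for p in parents], target_hw))
        return feats[1:]  # multi-scale outputs for a detection head


if __name__ == "__main__":
    outs = ToyScalePermutedBackbone()(torch.randn(1, 3, 256, 256))
    print([tuple(o.shape) for o in outs])
```

The contrast with a standard scale-decreased backbone is in the `spec` list: output strides do not grow monotonically, and each block's resampling connections let high- and low-resolution features feed each other directly instead of only through a separate decoder.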
