Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
This paper presents a new Vision Transformer (ViT) architecture, Multi-Scale Vision Longformer, which significantly enhances the ViT of <cit.> for encoding high-resolution images using two techniques. The first is a multi-scale model structure, which provides image encodings at multiple scales with manageable computational cost. The second is the attention mechanism of Vision Longformer, a variant of Longformer <cit.> originally developed for natural language processing, which achieves linear complexity with respect to the number of input tokens. A comprehensive empirical study shows that the new ViT significantly outperforms several strong baselines, including existing ViT models and their ResNet counterparts, as well as the Pyramid Vision Transformer from a concurrent work <cit.>, on a range of vision tasks including image classification, object detection, and segmentation. The models and source code used in this study will be released to the public soon.
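To make the Longformer-style attention pattern concrete, the sketch below builds the sparsity pattern in which a few global tokens attend to (and are attended by) all tokens, while local patch tokens attend only within a fixed window. This is a minimal illustration, not the paper's implementation: the function names (`longformer_style_mask`, `masked_attention`) and parameters are hypothetical, the mask is applied to dense attention for clarity (a linear-complexity version computes only the allowed entries via banded/chunked matmuls), and the real model uses a 2D window over the patch grid rather than the 1D window shown here.

```python
import torch


def longformer_style_mask(num_global: int, num_local: int, window: int) -> torch.Tensor:
    """Boolean (N, N) mask where True marks an allowed query-key pair.

    Global tokens attend to and are attended by every token; local tokens
    attend only to neighbours within `window` positions (plus the globals).
    """
    n = num_global + num_local
    mask = torch.zeros(n, n, dtype=torch.bool)
    # Global tokens: full attention in both directions.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    # Local tokens: sliding-window attention over nearby local tokens.
    for i in range(num_local):
        lo = max(0, i - window)
        hi = min(num_local, i + window + 1)
        mask[num_global + i, num_global + lo:num_global + hi] = True
    return mask


def masked_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """Scaled dot-product attention restricted by a boolean mask.

    Dense for readability; an efficient implementation would only compute
    the entries allowed by the mask to achieve linear complexity.
    """
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    scores = scores.masked_fill(~mask, float("-inf"))
    return scores.softmax(dim=-1) @ v


# Toy usage: 1 global (CLS-like) token + 16 local patch tokens, window of 2.
n_global, n_local, window, dim = 1, 16, 2, 8
x = torch.randn(n_global + n_local, dim)
attn_mask = longformer_style_mask(n_global, n_local, window)
out = masked_attention(x, x, x, attn_mask)
print(out.shape)  # torch.Size([17, 8])
```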