Object-wise Masked Autoencoders for Fast Pre-training

05/28/2022
by   Jiantao Wu, et al.

Self-supervised pre-training of images without labels has recently achieved promising results in image classification. The success of transformer-based methods such as ViT and MAE has drawn the community's attention to the design of backbone architectures and self-supervised tasks. In this work, we show that current masked image encoding models learn the underlying relationships among all objects in the whole scene rather than the representation of a single object. Consequently, these methods incur substantial compute time during self-supervised pre-training. To address this issue, we introduce a novel object selection and division strategy that drops non-object patches and learns object-wise representations by selective reconstruction with masks over regions of interest. We refer to this method as ObjMAE. Extensive experiments on four commonly used datasets demonstrate the effectiveness of our model in reducing compute cost by 72%. Furthermore, we investigate the inter-object and intra-object relationships and find that the latter is crucial for self-supervised pre-training.
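The core idea — keep only patches that overlap an object region, then apply MAE-style random masking to those — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, patch size, and mask ratio are assumptions for the example.

```python
import numpy as np

def objmae_patch_selection(object_mask, patch_size=4, mask_ratio=0.75, seed=0):
    """Hypothetical sketch of object-wise patch selection.

    Keeps only patches overlapping an object region (non-object patches
    are dropped entirely), then randomly masks a fraction of the kept
    patches for MAE-style reconstruction.

    object_mask: (H, W) boolean array marking object pixels.
    Returns (visible, masked): flat patch-index arrays.
    """
    H, W = object_mask.shape
    gh, gw = H // patch_size, W // patch_size
    # A patch counts as an "object patch" if any pixel in it is on the object.
    patches = object_mask[:gh * patch_size, :gw * patch_size].reshape(
        gh, patch_size, gw, patch_size)
    is_object = patches.any(axis=(1, 3)).reshape(-1)   # (gh*gw,) per-patch flag
    object_idx = np.flatnonzero(is_object)             # non-object patches dropped
    # MAE-style random masking, restricted to the surviving object patches.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(object_idx)
    n_masked = int(len(perm) * mask_ratio)
    return perm[n_masked:], perm[:n_masked]            # visible, masked

# Toy example: a 16x16 image whose object occupies the top-left 8x8 corner,
# so only 4 of the 16 patches survive selection.
mask = np.zeros((16, 16), dtype=bool)
mask[:8, :8] = True
visible, masked = objmae_patch_selection(mask, patch_size=4, mask_ratio=0.75)
print(len(visible) + len(masked))  # → 4
```

Because the encoder only ever sees the visible subset of object patches, the sequence length (and hence attention cost) shrinks with the object's area, which is where the claimed pre-training savings would come from.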
