DenseDINO: Boosting Dense Self-Supervised Learning with Token-Based Point-Level Consistency

06/06/2023
by   Yike Yuan, et al.

In this paper, we propose DenseDINO, a simple yet effective transformer framework for self-supervised learning of dense visual representations. To exploit the spatial information that dense prediction tasks require but existing self-supervised transformers neglect, we introduce point-level supervision across views in a novel token-based way. Specifically, DenseDINO adds extra input tokens, called reference tokens, that match point-level features through a position prior. With the reference tokens, the model maintains spatial consistency and can handle complex multi-object scenes, and thus generalizes better to dense prediction tasks. Compared with vanilla DINO, our approach obtains competitive classification performance on ImageNet and achieves a large improvement (+7.2) on segmentation under the linear probing protocol.
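The following is a minimal sketch, not the authors' released code, of how reference tokens built from a position prior and a cross-view point-level consistency loss could look in a ViT-style pipeline. The helper names (make_reference_tokens, point_consistency_loss), the bilinear sampling over a positional-embedding grid, and the DINO-style temperatures are all illustrative assumptions; the abstract does not specify these details.

import torch
import torch.nn.functional as F


def make_reference_tokens(pos_embed_grid: torch.Tensor,
                          points: torch.Tensor) -> torch.Tensor:
    """Build reference tokens from sampled point coordinates (assumed scheme).

    pos_embed_grid: (1, D, H, W) positional-embedding grid of a ViT.
    points: (B, K, 2) point coordinates in [-1, 1], shared across views.
    Returns: (B, K, D) reference tokens carrying only a position prior.
    """
    B, K, _ = points.shape
    grid = points.view(B, K, 1, 2)                          # sampling locations
    feats = F.grid_sample(pos_embed_grid.expand(B, -1, -1, -1),
                          grid, align_corners=False)        # (B, D, K, 1)
    return feats.squeeze(-1).transpose(1, 2)                # (B, K, D)


def point_consistency_loss(student_ref: torch.Tensor,
                           teacher_ref: torch.Tensor,
                           temp_s: float = 0.1,
                           temp_t: float = 0.04) -> torch.Tensor:
    """DINO-style cross-entropy between reference-token outputs of two views.

    student_ref, teacher_ref: (B, K, C) projected reference-token features
    for the same K points observed under two different augmented views.
    """
    t = F.softmax(teacher_ref.detach() / temp_t, dim=-1)
    log_s = F.log_softmax(student_ref / temp_s, dim=-1)
    return -(t * log_s).sum(-1).mean()


if __name__ == "__main__":
    B, K, D, C = 2, 8, 768, 1024
    pos_grid = torch.randn(1, D, 14, 14)        # e.g. a 14x14 ViT-B/16 grid
    points = torch.rand(B, K, 2) * 2 - 1        # random points in [-1, 1]
    ref_tokens = make_reference_tokens(pos_grid, points)
    # In the full model these tokens would be appended to the patch tokens and
    # passed through the student/teacher encoders; random projections are used
    # here only to exercise the loss.
    loss = point_consistency_loss(torch.randn(B, K, C), torch.randn(B, K, C))
    print(ref_tokens.shape, loss.item())

In this sketch, the same point coordinates are fed to both views, so the loss encourages the two encoders to produce consistent features at corresponding spatial locations, which is the point-level consistency the abstract describes.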
