Variance Loss: A Confidence-Based Reweighting Strategy for Coarse Semantic Segmentation

09/11/2020
by Jingchao Liu, et al.

Coarsely-labeled semantic segmentation annotations are easy to obtain, but they bear the risk of losing edge details and introducing background noise. Although such annotations are usually used only as a supplement to finely-labeled ones, in this paper we attempt to train a model using coarse annotations alone and improve model performance with a noise-robust reweighting strategy. Specifically, the proposed confidence indicator makes it possible to design a reweighting strategy that simultaneously mines hard samples and alleviates noisy labels in the coarse annotations. In addition, the optimal reweighting strategy can be automatically derived by our Adversarial Weight Assigning Module (AWAM) with only 53 learnable parameters, and a rigorous proof of the convergence of AWAM is given. Experiments on standard datasets show that the proposed reweighting strategy brings consistent performance improvements for both coarse and fine annotations. In particular, built on top of DeeplabV3+, we improve the mIoU on the Cityscapes Coarse dataset (coarsely-labeled) and ADE20K (finely-labeled) by 2.21 and 0.91 points, respectively.
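To make the idea of confidence-based reweighting concrete, the sketch below shows one possible per-pixel weighting scheme in PyTorch: the variance of the softmax probabilities serves as a confidence indicator, and pixels where a confident prediction disagrees with the coarse label (likely annotation noise) are down-weighted. The function name `confidence_reweighted_loss`, the normalization, and the specific weighting rule are illustrative assumptions, not the paper's actual Variance Loss or AWAM formulation.

```python
import torch
import torch.nn.functional as F

def confidence_reweighted_loss(logits, labels, ignore_index=255):
    """Hypothetical sketch of confidence-based reweighting of per-pixel
    cross-entropy; not the paper's exact Variance Loss / AWAM design.

    logits: (N, C, H, W) raw class scores; labels: (N, H, W) coarse annotations.
    """
    # Per-pixel cross-entropy, kept unreduced so each pixel can be reweighted.
    pixel_loss = F.cross_entropy(
        logits, labels, ignore_index=ignore_index, reduction="none"
    )  # (N, H, W)

    probs = logits.softmax(dim=1)  # (N, C, H, W)

    # Variance of the class probabilities as a simple confidence indicator:
    # peaked (confident) predictions have high variance, flat ones near zero.
    confidence = probs.var(dim=1)                       # (N, H, W)
    confidence = confidence / (confidence.max() + 1e-8)  # normalize to [0, 1]

    # Down-weight pixels where a confident prediction contradicts the coarse
    # label (suspected label noise); leave the remaining pixels at weight 1,
    # so genuinely hard samples keep their full loss contribution.
    pred = probs.argmax(dim=1)
    disagrees = (pred != labels) & (labels != ignore_index)
    weights = torch.ones_like(pixel_loss)
    weights[disagrees] = 1.0 - confidence[disagrees]

    valid = labels != ignore_index
    return (weights * pixel_loss)[valid].mean()
```

In this sketch the weights are hand-crafted; in the paper they are instead produced automatically by the learnable AWAM, which the abstract describes as having only 53 parameters.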
