Adversarial Feature Distribution Alignment for Semi-Supervised Learning
Training deep neural networks with only a few labeled samples can lead to overfitting. This is problematic in semi-supervised learning (SSL), where only a few labeled samples are available. In this paper, we show that a consequence of overfitting in SSL is feature distribution misalignment between labeled and unlabeled samples. Hence, we propose a new feature distribution alignment method. Our method is particularly effective when using only a small number of labeled samples. We test our method on CIFAR10 and SVHN. On SVHN we achieve a test error of 3.88%, which is close to the fully supervised model's 2.89%; in comparison, the current SOTA achieves only 4.29%. Finally, we provide a theoretical insight into why feature distribution misalignment occurs and show that our method reduces it.
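The abstract does not spell out the alignment objective, but a common way to realize adversarial feature distribution alignment is a domain-adversarial setup: a discriminator learns to distinguish features of labeled samples from features of unlabeled samples, while a gradient reversal layer trains the encoder to make the two distributions indistinguishable. The PyTorch sketch below illustrates that general idea only; the encoder, classifier, discriminator, and GradReverse names, shapes, and losses are assumptions for illustration, not the paper's actual implementation.

    # Illustrative sketch of adversarial feature distribution alignment
    # between a labeled and an unlabeled batch (assumed domain-adversarial
    # setup with a gradient reversal layer; not the paper's exact method).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips the gradient sign backward."""
        @staticmethod
        def forward(ctx, x):
            return x

        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    # Toy encoder/classifier for CIFAR10-shaped inputs (hypothetical sizes).
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
    classifier = nn.Linear(128, 10)
    # Discriminator predicts whether a feature came from a labeled sample.
    discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    params = (list(encoder.parameters()) + list(classifier.parameters())
              + list(discriminator.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)

    # Dummy batches standing in for real labeled/unlabeled data loaders.
    x_lab = torch.randn(8, 3, 32, 32)
    y_lab = torch.randint(0, 10, (8,))
    x_unl = torch.randn(32, 3, 32, 32)

    f_lab, f_unl = encoder(x_lab), encoder(x_unl)

    # Supervised loss on the few labeled samples.
    loss_sup = F.cross_entropy(classifier(f_lab), y_lab)

    # Adversarial alignment: the discriminator tries to tell the two feature
    # distributions apart; the reversed gradient pushes the encoder to make
    # them indistinguishable.
    feats = GradReverse.apply(torch.cat([f_lab, f_unl], dim=0))
    domain = torch.cat([torch.ones(len(f_lab)), torch.zeros(len(f_unl))])
    loss_align = F.binary_cross_entropy_with_logits(
        discriminator(feats).squeeze(1), domain)

    opt.zero_grad()
    (loss_sup + loss_align).backward()
    opt.step()

Minimizing the supervised loss together with the reversed discriminator loss drives the labeled and unlabeled feature distributions toward each other, which corresponds to the misalignment reduction the abstract describes.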