LaRVAE: Label Replacement VAE for Semi-Supervised Disentanglement Learning
Learning interpretable and disentangled representations is a crucial yet challenging task in representation learning. In this work, we develop LaRVAE, a novel semi-supervised variational auto-encoder (VAE) for learning disentangled representations by more effectively exploiting the limited labeled data. Specifically, during training, we replace the inferred representation associated with a data point with its ground-truth representation whenever it is available. LaRVAE is theoretically inspired by our proposed general framework for semi-supervised disentanglement learning in the context of VAEs. This framework naturally motivates both the novel representation replacement strategy in LaRVAE and a supervised regularization term commonly (albeit in an ad-hoc way) used in existing semi-supervised VAEs. Extensive experiments on synthetic and real datasets demonstrate, both quantitatively and qualitatively, that LaRVAE significantly and consistently improves disentanglement with very limited supervision.
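To make the replacement strategy concrete, below is a minimal PyTorch sketch of one semi-supervised training step under our own simplifying assumptions (a fully supervised latent vector per labeled example, a Gaussian encoder, and MSE reconstruction). The names `SimpleVAE`, `training_step`, `label_mask`, and `beta` are illustrative assumptions, not taken from the paper, and the exact replacement scheme in LaRVAE may differ.

```python
# Minimal sketch of the label-replacement idea: when a ground-truth
# representation is available, it replaces the inferred latent before decoding.
# Module and variable names are illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)


def training_step(model, x, labels, label_mask, beta=1.0):
    """One semi-supervised VAE step with label replacement.

    labels:     (batch, latent_dim) ground-truth factors (zeros where unknown)
    label_mask: (batch, 1) float mask, 1.0 if the ground truth is available
    """
    mu, logvar = model.encode(x)
    z = model.reparameterize(mu, logvar)

    # Label replacement: use the ground-truth representation in place of the
    # inferred latent wherever it is available; otherwise keep the sample.
    z_used = label_mask * labels + (1.0 - label_mask) * z

    x_hat = model.decoder(z_used)
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl
```

In this sketch, supervision enters through the decoder's reconstruction gradients rather than only through an added regularizer, which is one way to read the replacement strategy described in the abstract; the paper's actual objective and any accompanying supervised regularization term should be taken from the full text.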