Improving Adversarial Discriminative Domain Adaptation

09/10/2018
by Aaron Chadha et al.

Adversarial discriminative domain adaptation (ADDA) is an efficient framework for unsupervised domain adaptation, in which the source and target domains are assumed to share the same classes but no labels are available for the target domain. While ADDA already achieves significant training efficiency and competitive accuracy compared to generative adversarial approaches, we investigate whether its convergence properties can be further improved by incorporating source label knowledge during target domain training. To this end, our approach first modifies the discriminator output to jointly predict the source labels and distinguish inputs from the target domain. We then leverage the various source/target and encoder/discriminator distribution combinations to propose two loss functions for adversarial training of the target encoder. Our final design minimizes the maximum mean discrepancy (MMD) between the source encoder and target discriminator distributions, thereby tying together the adversarial and discrepancy-based loss functions that are frequently treated independently in recent deep learning domain adaptation methods. Beyond validating our framework on the standard MNIST, MNIST-M, USPS, and SVHN datasets, we introduce and evaluate our method on a neuromorphic vision sensing (NVS) sign language recognition dataset, where the source domain consists of emulated neuromorphic spike events converted from APS video and the target domain consists of experimental spike events captured by an NVS camera. Our results on all datasets show that our proposal is both simple and efficient, as it competes with or outperforms the state of the art in unsupervised domain adaptation.
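To make the loss design concrete, below is a minimal PyTorch sketch of the two ingredients the abstract names: a discriminator with a joint class/domain output, and a Gaussian-kernel batch estimator of the squared MMD. This is an illustration under stated assumptions, not the authors' implementation; the JointDiscriminator architecture, layer sizes, and the kernel bandwidth sigma are all hypothetical choices.

import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    # Maps encoded features to (i) logits over the K source classes and
    # (ii) a single source-vs-target domain logit. The trunk and head
    # sizes here are illustrative assumptions, not the paper's design.
    def __init__(self, feat_dim=256, num_classes=10, hidden=500):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.class_head = nn.Linear(hidden, num_classes)  # source label prediction
        self.domain_head = nn.Linear(hidden, 1)           # source vs. target decision

    def forward(self, feats):
        h = self.trunk(feats)
        return self.class_head(h), self.domain_head(h)

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel on pairwise squared Euclidean distances between two batches.
    return torch.exp(-torch.cdist(x, y) ** 2 / (2.0 * sigma ** 2))

def mmd2(source_feats, target_feats, sigma=1.0):
    # Biased batch estimate of the squared maximum mean discrepancy
    # between the two feature distributions.
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

Under this sketch, the class head would be supervised with source labels, the domain head would drive the usual ADDA adversarial game, and the target encoder would additionally minimize mmd2 between the relevant feature batches; how the paper weights and combines these terms is specified in the full text.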
