Zero-Shot Deep Domain Adaptation
Existing domain adaptation (DA) methods assume that task-relevant target-domain training data is available. This assumption can be violated in practice, a scenario often ignored by prior work. To address this issue, we propose zero-shot deep domain adaptation (ZDDA), which exploits privileged information from task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation that is not only tailored to the task of interest (TOI) but also close to the target-domain representation. Consequently, the source-domain TOI solution (e.g., the classifier in a classification task), which is jointly trained with the source-domain representation, is applicable to both the source and target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA performs DA in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to sensor fusion in the SUN RGB-D scene classification task by simulating the task-relevant target-domain representations with the task-relevant source-domain data. To the best of our knowledge, ZDDA is the first DA and sensor fusion method that requires no task-relevant target-domain data.
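The joint objective described above, a classification loss on task-relevant source-domain data plus an alignment term pulling the source representation toward the target representation on task-irrelevant pairs, can be sketched as follows. All names, array shapes, and the weighting factor `lambda_align` are illustrative assumptions, not the paper's actual implementation; real ZDDA trains deep encoders, which are stubbed out here with fixed arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for encoder outputs on a batch of task-irrelevant
# dual-domain pairs (e.g., RGB/depth crops of the same scenes).
# In ZDDA the target-domain encoder is frozen while the
# source-domain encoder is trained to match it.
target_repr = rng.normal(size=(4, 8))   # frozen target encoder output
source_repr = rng.normal(size=(4, 8))   # trainable source encoder output

def alignment_loss(s, t):
    """Mean squared distance pushing source representations toward target ones."""
    return float(np.mean((s - t) ** 2))

def classification_loss(logits, labels):
    """Cross-entropy on the task-relevant *source-domain* batch only."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(labels)), labels]))

# Task-relevant source-domain batch: classifier logits and ground-truth labels.
logits = rng.normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])

# Hypothetical joint objective; lambda_align balances the two terms.
lambda_align = 1.0
total = classification_loss(logits, labels) + lambda_align * alignment_loss(source_repr, target_repr)
print(f"joint loss: {total:.4f}")
```

Because the classifier only ever sees source-domain representations during training, driving `alignment_loss` toward zero is what lets the same classifier serve target-domain inputs at test time.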