KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowledge Distillation

11/19/2020
by   Hao-Zhe Feng, et al.

Conventional unsupervised multi-source domain adaptation (UMDA) methods assume all source domains can be accessed directly. This neglects the privacy-preserving policy, i.e., that all data and computations must be kept decentralized. Three problems arise in this scenario: (1) Minimizing the domain distance requires pairwise calculations on data from the source and target domains, which is not accessible. (2) Communication cost and privacy-security requirements limit the application of existing UMDA methods (e.g., domain adversarial training). (3) Since users have no authority to check the data quality, irrelevant or malicious source domains are more likely to appear, causing negative transfer. In this study, we propose a privacy-preserving UMDA paradigm named Knowledge Distillation based Decentralized Domain Adaptation (KD3A), which performs domain adaptation through knowledge distillation on models from different source domains. KD3A solves the above problems with three components: (1) a multi-source knowledge distillation method named Knowledge Vote to learn high-quality domain consensus knowledge; (2) a dynamic weighting strategy named Consensus Focus to identify both malicious and irrelevant domains; and (3) a decentralized optimization strategy for computing domain distance named BatchNorm MMD. Extensive experiments on DomainNet demonstrate that KD3A is robust to negative transfer and brings a 100x reduction in communication cost compared with other decentralized UMDA methods. Moreover, KD3A significantly outperforms state-of-the-art UMDA approaches.
