Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference

07/22/2021
by Xiaofeng Liu, et al.

In this work, we propose a domain generalization (DG) approach that learns from several labeled source domains and transfers knowledge to a target domain that is inaccessible during training. Considering the inherent conditional and label shifts, we would expect the alignment of p(x|y) and p(y). However, the widely used domain invariant feature learning (IFL) methods rely on aligning the marginal distribution w.r.t. p(x), which rests on the unrealistic assumption that p(y) is invariant across domains. We thereby propose a novel variational Bayesian inference framework that enforces the conditional distribution alignment w.r.t. p(x|y) via prior distribution matching in a latent space, and that also accounts for the marginal label shift w.r.t. p(y) through posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to label shift and significantly improves cross-domain accuracy, thereby achieving superior performance over the conventional IFL counterparts.
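To make the idea of prior distribution matching in a latent space concrete, below is a minimal PyTorch sketch (not the authors' implementation) in which the encoder posterior q(z|x) is pulled toward a learnable class-conditional Gaussian prior p(z|y) via a KL term, encouraging alignment of p(x|y) across pooled source domains; the posterior alignment that handles label shift w.r.t. p(y) is omitted for brevity. Names such as `Encoder`, `ClassConditionalPrior`, and `kl_gaussians` are illustrative assumptions.

```python
# Sketch of class-conditional prior matching for conditional alignment (assumed, simplified).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps an input feature x to the parameters (mu, log_var) of q(z|x)."""
    def __init__(self, in_dim=512, z_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.log_var = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.log_var(h)

class ClassConditionalPrior(nn.Module):
    """Learnable Gaussian prior p(z|y): one (mu, log_var) pair per class."""
    def __init__(self, num_classes=10, z_dim=64):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_classes, z_dim))
        self.log_var = nn.Parameter(torch.zeros(num_classes, z_dim))

    def forward(self, y):
        return self.mu[y], self.log_var[y]

def kl_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dims, averaged over the batch."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=1).mean()

# Toy usage: features x and labels y pooled from several source domains.
encoder, prior, classifier = Encoder(), ClassConditionalPrior(), nn.Linear(64, 10)
x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))

mu_q, logvar_q = encoder(x)
z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterization trick
mu_p, logvar_p = prior(y)

# Conditional alignment term (prior matching in the latent space) plus a classification loss.
loss = kl_gaussians(mu_q, logvar_q, mu_p, logvar_p) + F.cross_entropy(classifier(z), y)
loss.backward()
```

Because the prior is conditioned on the label rather than shared across all classes, samples of the same class from different source domains are driven toward a common latent region, which is the intuition behind aligning p(x|y) instead of the marginal p(x).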
