Image-to-image translation for cross-domain disentanglement

05/24/2018
by Abel Gonzalez-Garcia, et al.

Deep image translation methods have recently shown excellent results, outputting high-quality images that cover multiple modes of the data distribution. There has also been increasing interest in disentangling the internal representations learned by deep methods, both to improve their performance and to achieve finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts: a shared part containing information common to both domains, and two exclusive parts containing only the factors of variation particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. The resulting model offers multiple advantages. It can output diverse samples covering multiple modes of the distributions of both domains. It can perform cross-domain retrieval without the need for labeled data. Finally, it can perform domain-specific image transfer and interpolation. We compare our model to the state of the art in multi-modal image translation and achieve better results.
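
To make the shared/exclusive split concrete, here is a minimal sketch in PyTorch. All module names, layer sizes, and the single reconstruction term shown are illustrative assumptions for this note, not the authors' architecture: it only shows the core idea of encoding each image into a shared code plus a domain-exclusive code, and reconstructing a domain-Y image from the shared code of its domain-X counterpart combined with Y's own exclusive code.

```python
# Illustrative sketch of cross-domain disentanglement; layer sizes
# and names are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image to a shared code and a domain-exclusive code."""
    def __init__(self, in_ch=3, shared_dim=64, excl_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_shared = nn.Linear(64, shared_dim)     # factors common to both domains
        self.to_exclusive = nn.Linear(64, excl_dim)    # domain-specific factors

    def forward(self, x):
        h = self.backbone(x)
        return self.to_shared(h), self.to_exclusive(h)

class Decoder(nn.Module):
    """Reconstructs an image from a (shared, exclusive) code pair."""
    def __init__(self, out_ch=3, shared_dim=64, excl_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(shared_dim + excl_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, shared, exclusive):
        return self.net(torch.cat([shared, exclusive], dim=1))

# Cross-domain autoencoding: rebuild a domain-Y image from the shared
# code of its paired domain-X image plus Y's own exclusive code.
enc_x, enc_y = Encoder(), Encoder()
dec_y = Decoder()
x = torch.randn(4, 3, 32, 32)   # toy stand-ins for paired X/Y images
y = torch.randn(4, 3, 32, 32)
shared_x, _ = enc_x(x)
_, excl_y = enc_y(y)
y_rec = dec_y(shared_x, excl_y)
loss = nn.functional.l1_loss(y_rec, y)  # one term of a larger objective
```

In the full model described in the abstract, adversarial losses from the bidirectional GAN-based translation would complement a reconstruction term of this kind; the sketch above isolates only the cross-domain autoencoder idea.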
