ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes

03/28/2018
by Taihong Xiao, et al.

Recent studies on face attribute transfer have achieved great success, especially after the emergence of generative adversarial networks (GANs). Many image-to-image translation models can transfer face attributes given an input image. However, they suffer from three limitations: (1) inability to generate images by exemplars; (2) inability to handle multiple face attributes simultaneously; (3) low quality of generated images, such as low resolution and visible artifacts. To address these limitations, we propose a novel model that receives two images with different attributes as inputs. Our model can transfer exactly the same type of attributes from one image to the other by exchanging certain parts of their latent encodings. All attributes are encoded in the latent space in a disentangled manner, which enables us to manipulate several attributes simultaneously. In addition, the model learns residual images, which facilitates training on higher-resolution images. With the help of multi-scale discriminators for adversarial training, it can generate high-quality images with finer details and fewer artifacts. We demonstrate the effectiveness of our model in overcoming the above three limitations by comparing it with other methods on the CelebA face database.
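To make the latent-exchange idea concrete, below is a minimal sketch (not the authors' code) in which each attribute is assumed to occupy a fixed, disentangled slice of the latent code, so swapping that slice between two encodings transfers the attribute from one image to the other. The encoder/decoder are omitted, and the dimensions, slice layout, and function names are hypothetical.

```python
import torch

LATENT_DIM = 512
NUM_ATTRS = 4
ATTR_DIM = LATENT_DIM // NUM_ATTRS  # each attribute owns one contiguous slice (assumed layout)


def attr_slice(attr_id: int) -> slice:
    """Return the latent slice assumed to encode attribute `attr_id`."""
    return slice(attr_id * ATTR_DIM, (attr_id + 1) * ATTR_DIM)


def exchange_attributes(z_a: torch.Tensor, z_b: torch.Tensor, attr_ids):
    """Swap the encodings of the given attributes between two latent codes."""
    z_a_new, z_b_new = z_a.clone(), z_b.clone()
    for attr_id in attr_ids:
        s = attr_slice(attr_id)
        z_a_new[:, s] = z_b[:, s]
        z_b_new[:, s] = z_a[:, s]
    return z_a_new, z_b_new


# Usage with random stand-ins for encoder outputs (batch size 1):
z_a = torch.randn(1, LATENT_DIM)  # encoding of image A (e.g. smiling)
z_b = torch.randn(1, LATENT_DIM)  # encoding of image B (e.g. not smiling)
z_a_swapped, z_b_swapped = exchange_attributes(z_a, z_b, attr_ids=[0, 2])
# Decoding z_a_swapped would yield image A carrying attributes 0 and 2 from B,
# and vice versa; the decoder and residual-image step are not shown here.
```

Because multiple slices can be swapped in one call, several attributes can be transferred simultaneously, which mirrors the disentangled multi-attribute manipulation described in the abstract.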
