Arbitrary Facial Attribute Editing: Only Change What You Want

11/29/2017
by   Zhenliang He, et al.

Facial attribute editing aims to modify single or multiple attributes of a face image. Since it is practically infeasible to collect images with arbitrarily specified attributes for each person, a generative adversarial network (GAN) and an encoder-decoder architecture are usually combined to handle this task: arbitrary attribute editing is conducted by decoding the latent representation of the face image conditioned on the specified attributes. A few existing methods attempt to establish an attribute-independent latent representation so that attributes can be changed arbitrarily. However, since the attributes portray the characteristics of the face image, an attribute-independent constraint on the latent representation is excessive; it may cause information loss and unexpected distortion in the generated images (e.g., over-smoothing), especially for identifiable attributes such as gender and race. Instead of imposing the attribute-independent constraint on the latent representation, we introduce an attribute classification constraint on the generated image, requiring only the correct change of the attributes. Meanwhile, reconstruction learning is introduced to guarantee the preservation of all attribute-excluding details in the generated image, and adversarial learning is employed for visually realistic generation. Moreover, our method naturally extends to attribute intensity manipulation. Experiments on the CelebA dataset show that our method outperforms the state of the art in generating realistic attribute-editing results with facial details well preserved.
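The abstract describes three training signals on the generated image: an attribute classification constraint, a reconstruction term, and an adversarial term. As a rough illustration only (not the authors' code), the generator-side objective could be sketched as a weighted sum of these three losses; the function name, weights, and scalar loss values below are hypothetical placeholders.

```python
def generator_objective(rec_loss, cls_loss, adv_loss,
                        lambda_rec=100.0, lambda_cls=10.0):
    """Hypothetical sketch of the combined objective described in the
    abstract: reconstruction learning preserves attribute-excluding
    details, the attribute classification constraint enforces the
    requested attribute change, and the adversarial term pushes the
    output toward visually realistic images. Weights are placeholders."""
    return lambda_rec * rec_loss + lambda_cls * cls_loss + adv_loss

# Example with dummy scalar loss values from one training step:
total = generator_objective(rec_loss=0.05, cls_loss=0.2, adv_loss=1.3)
print(total)
```

In practice each argument would be a differentiable loss computed on a batch (e.g., an L1 reconstruction error and cross-entropy attribute classification loss), and the weighting between terms would be tuned per dataset.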
