POCE: Pose-Controllable Expression Editing

by Rongliang Wu, et al.

Facial expression editing has attracted increasing attention with the advance of deep neural networks in recent years. However, most existing methods suffer from compromised editing fidelity and limited usability: they either ignore pose variations, which leads to unrealistic editing, or require paired training data, which is hard to collect, for pose control. This paper presents POCE, a pose-controllable expression editing network that generates realistic facial expressions and head poses simultaneously from unpaired training images alone. POCE achieves accessible and realistic pose-controllable expression editing by mapping face images into UV space, where facial expressions and head poses can be disentangled and edited separately. POCE has two novel designs. The first is self-supervised UV completion, which completes UV maps sampled under different head poses that often suffer from self-occlusion and missing facial texture. The second is weakly-supervised UV editing, which generates new facial expressions with minimal modification of facial identity; the synthesized expression can be controlled either by an expression label or by transplanting it directly from a reference UV map via feature transfer. Extensive experiments show that POCE learns effectively from unpaired face images, and the learned model generates realistic, high-fidelity facial expressions under various new poses.
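The core idea of the abstract can be illustrated with a toy sketch. The snippet below is not the authors' code: it stands in for POCE's learned networks with trivial placeholders (a naive horizontal-symmetry fill in place of the self-supervised UV completion network, and an additive offset in place of the learned expression edit). What it shows is the workflow the paper describes: a UV texture sampled under a head pose has self-occluded, missing texels; those texels are completed; the expression edit then happens in pose-free UV space, independent of head pose.

```python
import numpy as np

def complete_uv(uv, visible):
    """Fill occluded UV texels. A naive symmetry-based stand-in for
    POCE's learned self-supervised completion network: texels missing
    on one side are copied from their mirrored, visible counterparts."""
    filled = uv.copy()
    mirrored = uv[:, ::-1]
    mirrored_visible = visible[:, ::-1]
    # Texels occluded here but visible at the mirrored position.
    use = (~visible) & mirrored_visible
    filled[use] = mirrored[use]
    return filled

def edit_expression(uv, delta):
    """Apply an expression edit as a simple additive offset in UV space.
    In POCE this edit is learned and conditioned on an expression label
    or a reference UV map; the offset here is purely illustrative."""
    return np.clip(uv + delta, 0.0, 1.0)

# Toy 8x8 single-channel "UV map": the right half is self-occluded
# because the face was sampled under a turned head pose.
uv = np.full((8, 8), 0.5)
visible = np.zeros((8, 8), dtype=bool)
visible[:, :4] = True      # only the left half of the face was seen
uv[~visible] = 0.0         # occluded texels carry no texture

completed = complete_uv(uv, visible)           # missing half filled in
edited = edit_expression(completed, delta=0.1) # pose-free expression edit
```

After completion, the edited UV map could in principle be re-rendered under any new head pose, which is what makes the pose/expression disentanglement useful.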

