3D-Aware Scene Manipulation via Inverse Graphics

08/28/2018
by Shunyu Yao, et al.

We aim to obtain an interpretable, expressive, and disentangled scene representation that contains comprehensive structural and textural information for each object. Previous representations learned by neural networks are often uninterpretable, limited to a single object, or lacking in 3D knowledge. In this work, we address these issues by integrating 3D modeling into a deep generative model. We adopt a differentiable shape renderer to decode geometric object attributes into a shape, and a neural generator to decode learned latent codes into texture. The encoder is therefore forced to perform an inverse graphics task: transforming a scene image into a structured representation with per-object 3D attributes and learned texture latent codes. The representation supports reconstruction and a variety of 3D-aware scene manipulation applications. The disentanglement of structure and texture allows us to rotate and move objects freely while maintaining consistent texture, and to change an object's appearance without affecting its structure. We systematically evaluate our representation and demonstrate that our editing scheme outperforms its 2D counterparts.
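To make the encoder-decoder factorization concrete, here is a minimal PyTorch-style sketch. Every name in it (SceneEncoder, TextureGenerator), the layer sizes, and the per-object attribute layout are illustrative assumptions rather than the authors' implementation, and the differentiable shape renderer is omitted; the point is only how each object is split into 3D attributes and a texture latent code.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: module names, dimensions, and the attribute
# layout below are assumptions, not the paper's actual architecture.

class SceneEncoder(nn.Module):
    """Image -> per-object 3D attributes + texture latent codes."""
    def __init__(self, n_obj=4, attr_dim=9, tex_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Structural branch: e.g. position (3) + rotation (3) + scale (3)
        self.attr_head = nn.Linear(64, n_obj * attr_dim)
        # Textural branch: one learned latent code per object
        self.tex_head = nn.Linear(64, n_obj * tex_dim)
        self.dims = (n_obj, attr_dim, tex_dim)

    def forward(self, img):
        h = self.features(img)
        n, a, t = self.dims
        return (self.attr_head(h).view(-1, n, a),
                self.tex_head(h).view(-1, n, t))

class TextureGenerator(nn.Module):
    """Decodes a texture latent code into an RGB patch."""
    def __init__(self, tex_dim=64, size=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(tex_dim, size * size * 3),
                                 nn.Tanh())
        self.size = size

    def forward(self, z):
        return self.net(z).view(-1, 3, self.size, self.size)

enc, gen = SceneEncoder(), TextureGenerator()
img = torch.randn(1, 3, 64, 64)  # dummy scene image
with torch.no_grad():
    attrs, tex = enc(img)             # attrs: (1, 4, 9), tex: (1, 4, 64)
    tex[:, [0, 1]] = tex[:, [1, 0]]   # swap appearance of objects 0 and 1
    patches = gen(tex.flatten(0, 1))  # (4, 3, 32, 32), one patch per object
```

Because the structural attributes and texture codes live in separate tensors, a pose edit touches only attrs, while the texture-code swap in the last lines changes appearance without moving any geometry, which is exactly the kind of disentangled editing the abstract describes.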
