BlendGAN: Learning and Blending the Internal Distributions of Single Images by Spatial Image-Identity Conditioning

12/03/2022
by Idan Kligvasser, et al.

Training a generative model on a single image has drawn significant attention in recent years. Single-image generative methods are designed to learn the internal patch distribution of a single natural image at multiple scales. These models can be used to draw diverse samples that semantically resemble the training image, as well as to solve many image editing and restoration tasks involving that particular image. Here, we introduce an extended framework that simultaneously learns the internal distributions of several images, using a single model with spatially varying image-identity conditioning. Our BlendGAN opens the door to applications that are not supported by single-image models, including morphing, melding, and structure-texture fusion between two or more arbitrary images.
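To make the idea of spatially varying image-identity conditioning concrete, here is a minimal NumPy sketch. It is an illustrative assumption, not BlendGAN's actual implementation: each spatial location carries an identity embedding, and blending two source images amounts to interpolating their embeddings with a per-pixel weight mask before feeding the map to a generator. All names, shapes, and the concatenation scheme are hypothetical.

```python
import numpy as np

def spatial_identity_map(emb_a, emb_b, weights):
    """Blend two identity embeddings with per-pixel weights.

    emb_a, emb_b: (d,) identity embeddings for images A and B (hypothetical).
    weights: (H, W) array in [0, 1]; 1 selects A, 0 selects B.
    Returns an (H, W, d) spatial conditioning map.
    """
    w = weights[..., None]                 # (H, W, 1) for broadcasting
    return w * emb_a + (1.0 - w) * emb_b  # per-pixel convex combination

def generator_input(noise, identity_map):
    """Concatenate noise with the identity map along the channel axis,
    forming a spatially conditioned generator input (one plausible scheme)."""
    return np.concatenate([noise, identity_map], axis=-1)

H, W, d = 8, 8, 4
emb_a = np.ones(d)   # stand-in embedding for image A
emb_b = np.zeros(d)  # stand-in embedding for image B

# Left half conditioned on identity A, right half on B -- a simple
# melding mask; smooth ramps would give morphing-style blends.
weights = np.zeros((H, W))
weights[:, : W // 2] = 1.0

cond = spatial_identity_map(emb_a, emb_b, weights)
x = generator_input(np.random.randn(H, W, 2), cond)
print(cond.shape, x.shape)  # (8, 8, 4) (8, 8, 6)
```

With a smoothly varying weight mask instead of a hard split, the same mechanism interpolates identities across space, which is the intuition behind the morphing and structure-texture fusion applications mentioned above.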
