Conditional Generation from Unconditional Diffusion Models using Denoiser Representations

06/02/2023
by Alexandros Graikos, et al.

Denoising diffusion models have gained popularity as a generative modeling technique for producing high-quality and diverse images. Applying these models to downstream tasks requires conditioning, which can take the form of text, class labels, or other forms of guidance. However, providing conditioning information to these models can be challenging, particularly when annotations are scarce or imprecise. In this paper, we propose adapting pre-trained unconditional diffusion models to new conditions using the learned internal representations of the denoiser network. We demonstrate the effectiveness of our approach on various conditional generation tasks, including attribute-conditioned generation and mask-conditioned generation. Additionally, we show that augmenting the Tiny ImageNet training set with synthetic images generated by our approach improves the classification accuracy of ResNet baselines by up to 8%. Our approach provides a flexible way to adapt diffusion models to new conditions and generate high-quality augmented data for various conditional generation tasks.
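The core idea above, reading off intermediate activations of the denoiser and using them as features for conditioning, can be sketched in a toy form. Everything here (the `ToyDenoiser` class, its layer sizes, the linear probe) is an illustrative assumption, not the paper's actual architecture or code:

```python
import numpy as np

# Toy stand-in for a diffusion denoiser network. A real model would be a
# UNet predicting noise; here a tiny two-layer MLP plays that role so the
# feature-extraction pattern is visible. All names are hypothetical.
class ToyDenoiser:
    def __init__(self, dim=8, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(dim, hidden))
        self.W2 = rng.normal(scale=0.1, size=(hidden, dim))

    def __call__(self, x_t, t):
        # h is the internal representation of the denoiser at timestep t;
        # the approach described above builds conditioning on top of such
        # features rather than retraining the generative model.
        h = np.tanh(x_t @ self.W1 + t)   # hidden activations (features)
        eps_pred = h @ self.W2           # predicted noise (denoiser output)
        return eps_pred, h

denoiser = ToyDenoiser()
x_t = np.ones(8)                 # a noisy sample at some timestep
eps_pred, features = denoiser(x_t, t=0.5)

# A lightweight probe trained on the frozen features could then score how
# well x_t matches a target condition (attribute, mask, etc.) and steer
# sampling; this probe is purely illustrative.
w_probe = np.full(16, 0.1)
condition_score = float(features @ w_probe)
```

The key point is that the denoiser is never fine-tuned: only a small head on its frozen internal features is learned, which is what makes adapting a pre-trained unconditional model to new conditions cheap.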
