Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators

10/31/2019
by Reinhard Heckel et al.

Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. However, recent experimental findings challenge this view and instead suggest that a major contributing factor is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment highlighting this architectural bias is that one can remove noise and corruptions from a natural image without any training data, simply by fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the single corrupted image. While this over-parameterized network can fit the corrupted image perfectly, surprisingly, after only a few iterations of gradient descent it returns the uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of linear inverse problems such as compressive sensing. In this paper, we take a step towards demystifying this experimental phenomenon by attributing the effect to particular architectural choices of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. This result relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.
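To make the described experiment concrete, here is a minimal sketch of it in PyTorch. The names `ConvGenerator` and `denoise` are illustrative, and the architecture only follows the spirit of the generators the abstract refers to: fixed bilinear upsampling (i.e., convolution with a fixed interpolating filter, the architectural choice the paper highlights) combined with learned 1x1 convolutions, as in a deep-decoder-style network. It is not the authors' exact model or hyperparameters.

```python
import torch
import torch.nn as nn

class ConvGenerator(nn.Module):
    """A small convolutional generator: fixed bilinear upsampling
    (a convolution with a fixed, non-learned interpolating kernel)
    followed by learned 1x1 convolutions, ReLU, and normalization."""
    def __init__(self, channels=128, layers=4, out_channels=3):
        super().__init__()
        blocks = []
        for _ in range(layers):
            blocks += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.ReLU(),
                nn.BatchNorm2d(channels),
            ]
        blocks.append(nn.Conv2d(channels, out_channels, kernel_size=1))
        blocks.append(nn.Sigmoid())  # keep outputs in [0, 1]
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)

def denoise(noisy, steps=1500, lr=0.01, channels=128, layers=4):
    """Fit a randomly initialized generator to a single noisy image
    (shape 1 x 3 x H x W, values in [0, 1], H and W divisible by
    2**layers) and stop after a fixed number of steps."""
    h, w = noisy.shape[-2:]
    # Fixed random input; only the generator weights are optimized.
    z = torch.randn(1, channels, h // 2**layers, w // 2**layers)
    net = ConvGenerator(channels, layers)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    # Early stopping is the regularizer: run long enough and the
    # over-parameterized network fits the noise too, so `steps` must
    # be chosen before the fit to the corrupted image becomes perfect.
    return net(z).detach()
```

The step count plays the role of the early-stopping regularizer described in the abstract: because the generator fits the structured part of the image much faster than the noise, stopping at an intermediate iteration yields a denoised estimate even though the network could eventually fit the corrupted image perfectly.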
