On catastrophic forgetting and mode collapse in Generative Adversarial Networks

07/11/2018
by   Hoang Thanh-Tung, et al.

Generative Adversarial Networks (GANs) are among the most prominent tools for learning complicated distributions. However, problems such as mode collapse and catastrophic forgetting prevent GANs from learning the target distribution. These problems are usually studied independently of each other. In this paper, we show that both problems are present in GANs and that their combined effect makes GAN training unstable. We also show that methods such as gradient penalties and momentum-based optimizers improve the stability of GANs by effectively preventing these problems from happening. Finally, we study a mechanism by which mode collapse occurs and propagates in feedforward neural networks.
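The abstract mentions gradient penalties as a stabilizing mechanism. As a minimal illustration of the idea (not the paper's implementation), the sketch below computes a WGAN-GP-style penalty that pushes the norm of the critic's input gradient toward 1. A linear critic D(x) = w·x is used as a stand-in for a neural discriminator, since its input gradient is simply w, so the penalty can be evaluated in closed form; the interpolation between real and fake samples and the coefficient lam=10.0 follow the common WGAN-GP recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic D(x) = w . x (stand-in for a neural discriminator)."""
    return x @ w

def gradient_penalty(w, real, fake, lam=10.0):
    """WGAN-GP-style penalty: lam * E[(||grad_x D(x_hat)|| - 1)^2],
    evaluated at random interpolations x_hat of real and fake samples.
    For the linear critic above, grad_x D(x) = w at every point."""
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1 - eps) * fake        # interpolated samples
    grad = np.tile(w, (x_hat.shape[0], 1))       # analytic gradient of w . x
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))

w_bad = np.array([3.0, 4.0])   # ||w|| = 5: far from 1, large penalty
w_good = np.array([0.6, 0.8])  # ||w|| = 1: penalty vanishes
print(gradient_penalty(w_bad, real, fake))   # 10 * (5 - 1)^2 = 160.0
print(gradient_penalty(w_good, real, fake))  # 0.0
```

In a real GAN this penalty is added to the discriminator loss, and the gradient is obtained by automatic differentiation rather than analytically; the closed-form linear case only makes the penalty's effect on gradient norms easy to verify by hand.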
