Boundary-Seeking Generative Adversarial Networks

02/27/2017
by R Devon Hjelm, et al.

We introduce a novel approach to training generative adversarial networks, where we train a generator to match a target distribution that converges to the data distribution in the limit of a perfect discriminator. This objective can be interpreted as training the generator to produce samples that lie on the decision boundary of the current discriminator at each update, and we call a GAN trained with this algorithm a boundary-seeking GAN (BGAN). This approach can be used to train a generator with discrete output when the generator outputs a parametric conditional distribution. We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. Finally, we observe that the proposed boundary-seeking algorithm works even with continuous variables, and demonstrate its effectiveness on various natural image benchmarks.
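To make the boundary-seeking idea concrete, below is a minimal sketch of how a continuous-variable boundary-seeking generator loss can be written, assuming a PyTorch setup; the function name and the use of pre-sigmoid discriminator logits are illustrative choices, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F


def boundary_seeking_generator_loss(d_logits: torch.Tensor) -> torch.Tensor:
    """Push generated samples toward the discriminator's decision boundary.

    d_logits: raw (pre-sigmoid) discriminator scores for generated samples.
    Minimizing 0.5 * (log D(x) - log(1 - D(x)))**2 drives D(x) toward 0.5,
    i.e. onto the current decision boundary.
    """
    log_d = F.logsigmoid(d_logits)              # log D(x)
    log_one_minus_d = F.logsigmoid(-d_logits)   # log (1 - D(x))
    # Note: log D(x) - log(1 - D(x)) equals the raw logit, so this reduces
    # to 0.5 * d_logits**2; the expanded form mirrors the written objective.
    return 0.5 * ((log_d - log_one_minus_d) ** 2).mean()


# Hypothetical usage inside a training loop:
# d_logits = discriminator(generator(z))  # scores for fake samples
# g_loss = boundary_seeking_generator_loss(d_logits)
# g_loss.backward()
```

For discrete outputs, the paper instead treats the generator as a parametric conditional distribution and weights sampled outputs using the discriminator, so no loss of this exact form is backpropagated through the samples; the sketch above covers only the continuous case.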
