Boundary-Seeking Generative Adversarial Networks
We introduce a novel approach to training generative adversarial networks in which the generator is trained to match a target distribution that converges to the data distribution in the limit of a perfect discriminator. This objective can be interpreted as training the generator to produce samples that lie on the decision boundary of the current discriminator at each update, and we call a GAN trained with this algorithm a boundary-seeking GAN (BGAN). This approach can be used to train a generator with discrete outputs when the generator outputs a parametric conditional distribution. We demonstrate the effectiveness of the proposed algorithm on discrete image and character-based natural language generation. Finally, we observe that the boundary-seeking objective also applies to continuous variables, and demonstrate its effectiveness on several natural image benchmarks.
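To make the boundary-seeking idea concrete, the following is a minimal PyTorch sketch of the continuous-variable case, not the authors' reference implementation. It assumes a sigmoid discriminator D = sigmoid(d), so that log D(x) - log(1 - D(x)) equals the logit d(x); driving d(G(z)) toward zero then pushes generated samples onto the decision boundary D = 0.5. The network architectures and dimensions below are illustrative placeholders.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Outputs the raw logit d(x); D(x) = sigmoid(d(x))."""
    def __init__(self, dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, x):
        return self.net(x)  # unnormalized logit

class Generator(nn.Module):
    def __init__(self, z_dim=64, dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, dim), nn.Tanh()
        )

    def forward(self, z):
        return self.net(z)

def boundary_seeking_generator_loss(d_logits_fake):
    # 0.5 * (log D - log(1 - D))^2 reduces to 0.5 * d(x)^2 for a sigmoid
    # discriminator; it is minimized when D(G(z)) = 0.5, i.e. when samples
    # sit on the discriminator's decision boundary.
    return 0.5 * (d_logits_fake ** 2).mean()

def discriminator_loss(d_logits_real, d_logits_fake):
    # The discriminator is trained with the usual binary cross-entropy.
    bce = nn.functional.binary_cross_entropy_with_logits
    real = bce(d_logits_real, torch.ones_like(d_logits_real))
    fake = bce(d_logits_fake, torch.zeros_like(d_logits_fake))
    return real + fake
```

In a training loop, the discriminator step uses `discriminator_loss` on real and generated batches, while the generator step backpropagates `boundary_seeking_generator_loss` through `d_logits_fake = Discriminator()(Generator()(z))`. The discrete-output case described in the abstract instead treats the generator as a parametric conditional distribution and uses a likelihood-ratio-style gradient estimator, which this continuous sketch does not cover.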