PBGAN: Partial Binarization of Deconvolution Based Generators

02/26/2018
by Jinglan Liu, et al.

The generator is quite different from the discriminator in a generative adversarial network (GAN). Compression techniques for the latter have been studied widely, while those for the former remain largely unexplored. This work explores the binarization of the deconvolution-based generator in a GAN for memory saving and speedup. We show that some layers of the generator may need to be kept in floating-point representation to preserve performance, even though conventional convolutional neural networks can be completely binarized. As such, only partial binarization may be possible for the generator. To quickly decide whether a layer can be binarized, we establish a simple metric based on the dimension of the deconvolution operations, supported by theoretical analysis and verified by experiments. Moreover, our results indicate that the generator and discriminator should be binarized at the same time for balanced competition and better performance. Compared with the floating-point version, experimental results on CelebA suggest that our partial binarization of the generator of the deep convolutional generative adversarial network (DCGAN) can yield up to a 25.81× saving in memory consumption, and 1.96× and 1.32× speedups in inference and training respectively, with little performance loss as measured by sliced Wasserstein distance.
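The memory saving from binarizing a layer can be illustrated with a minimal sketch. The snippet below assumes an XNOR-Net-style scheme (sign of the weights plus one per-tensor floating-point scaling factor); the paper's exact binarization scheme may differ, and the `binarize` helper and tensor shape are illustrative assumptions:

```python
import numpy as np

def binarize(w):
    """Binarize a float weight tensor to {-1, +1} with a single
    per-tensor scaling factor alpha = mean(|w|) (XNOR-Net-style;
    an illustrative assumption, not necessarily the paper's scheme)."""
    alpha = float(np.abs(w).mean())
    wb = np.where(w >= 0, 1.0, -1.0).astype(np.float32)
    return alpha, wb

rng = np.random.default_rng(0)
# Hypothetical deconvolution weight tensor: 64 output channels,
# 3 input channels, 4x4 kernel.
w = rng.normal(size=(64, 3, 4, 4)).astype(np.float32)
alpha, wb = binarize(w)

# Storage: 1 bit per binarized weight plus one 32-bit alpha,
# versus 32 bits per float weight.
fp_bits = w.size * 32
bin_bits = w.size * 1 + 32
print(f"memory ratio: {fp_bits / bin_bits:.1f}x")
```

Fully binarized layers approach a 32× reduction, which is why partial binarization (keeping some layers in floating point) lands below that bound, e.g. the 25.81× figure reported above.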
