Latent Transformations for Discrete-Data Normalising Flows
Normalising flows (NFs) for discrete data are challenging because parameterising bijective transformations of discrete variables requires predicting discrete/integer parameters. Having a neural network predict discrete parameters requires a non-differentiable activation function (e.g., the step function), which precludes gradient-based learning. To circumvent this non-differentiability, previous work has employed biased proxy gradients, such as the straight-through estimator. We present an unbiased alternative: rather than deterministically parameterising one transformation, we predict a distribution over latent transformations. With stochastic transformations, the marginal likelihood of the data is differentiable and gradient-based learning is possible via score function estimation. To test the viability of discrete-data NFs, we investigate performance on binary MNIST. We observe substantial challenges with both deterministic proxy gradients and unbiased score function estimation. Whereas the former often failed to learn even a shallow transformation, the variance of the latter could not be controlled well enough to admit deeper NFs.
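To make the contrast concrete, the following is a minimal sketch, assuming PyTorch, of the score-function (REINFORCE) idea the abstract describes: instead of a network emitting a single integer parameter through a non-differentiable rounding/step activation, it emits a distribution over candidate integer shifts, and the score-function term carries gradients through that distribution. This is an illustration of the general technique, not the authors' implementation; the names SHIFT_VALUES, StochasticIntegerShift, and reinforce_surrogate are hypothetical.

    import torch
    import torch.nn as nn

    SHIFT_VALUES = torch.arange(-4, 5)  # candidate integer shifts (assumed range)

    class StochasticIntegerShift(nn.Module):
        """Predicts a distribution over integer shifts instead of a single shift."""

        def __init__(self, cond_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(cond_dim, 64),
                nn.ReLU(),
                nn.Linear(64, len(SHIFT_VALUES)),
            )

        def forward(self, x_cond):
            logits = self.net(x_cond)                             # [batch, K]
            dist = torch.distributions.Categorical(logits=logits)
            idx = dist.sample()                                   # sampled latent transformation
            shift = SHIFT_VALUES[idx]                             # integer shift per example
            return shift, dist.log_prob(idx)

    def reinforce_surrogate(log_px_given_t, log_q_t):
        """Surrogate loss whose gradient is an unbiased estimate of
        -d/dtheta E_{t ~ q_theta}[f_theta(t)], here with f = log p(x | t):
        the first term carries gradients through the conditional likelihood,
        the second (score-function) term carries gradients through q_theta."""
        reward = log_px_given_t.detach()      # treat the likelihood as a fixed reward
        return -(log_px_given_t + reward * log_q_t).mean()

In practice a baseline (for example, the batch-mean reward) is subtracted from the reward term to reduce the estimator's variance; the abstract's observation that variance could not be controlled well enough for deeper flows refers to exactly this difficulty.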