Leveraging the Exact Likelihood of Deep Latent Variable Models
Deep latent variable models combine the approximation abilities of deep neural networks with the statistical foundations of generative models. The induced data distribution is an infinite mixture model whose density is intractable to compute exactly. Variational methods are consequently used for inference, following the seminal work of Rezende et al. (2014) and Kingma and Welling (2014). We study the well-posedness of the exact problem (maximum likelihood estimation) that these techniques approximately solve. In particular, we show that most unconstrained models used for continuous data have an unbounded likelihood function. This ill-posedness, and the problems it causes, are illustrated on real data. We also show how to ensure the existence of maximum likelihood estimates, and draw useful connections with nonparametric mixture models. Furthermore, we describe an algorithm that performs missing data imputation using the exact conditional likelihood of a deep latent variable model. On several real data sets, our algorithm consistently and significantly outperforms the usual imputation scheme used within deep latent variable models.
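To make the unboundedness concrete, here is a minimal numerical sketch, not a construction from the paper: a two-component Gaussian mixture (a degenerate latent variable model with a discrete latent) whose likelihood diverges when one component collapses onto a single data point. The data values are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Illustrative only: a two-component Gaussian mixture. Centering one
# component on a data point and letting its scale shrink to zero makes
# the likelihood blow up, mirroring the unboundedness phenomenon.
x = np.array([0.0, 1.3, -0.7, 2.1])  # toy observed data
for sigma in [1.0, 0.1, 0.01, 0.001]:
    # component 1: narrow spike at x[0]; component 2: fixed N(0, 1)
    density = 0.5 * norm.pdf(x, loc=x[0], scale=sigma) \
            + 0.5 * norm.pdf(x, loc=0.0, scale=1.0)
    print(f"sigma = {sigma:6.3f}   log-likelihood = {np.log(density).sum():8.2f}")
# the log-likelihood increases without bound as sigma -> 0
```

The same mechanism applies to an unconstrained Gaussian decoder: if the network can drive its output variance to zero at one training point, the likelihood has no finite maximizer.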
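The abstract does not spell out the imputation algorithm. As one plausible way to sample from the exact conditional p(x_miss | x_obs) of a deep latent variable model with a Gaussian decoder, the sketch below runs Metropolis-within-Gibbs on the latent code together with exact conditional draws of the missing entries. The decoder interface, the standard normal prior, the step size, and all names are assumptions made for illustration, not the authors' published procedure.

```python
import numpy as np
from scipy.stats import norm

def impute(x, miss, decoder, latent_dim, n_iter=1000, step=0.3, seed=0):
    """Hypothetical sketch: draw x[miss] from p(x_miss | x_obs) under a DLVM.

    Assumes a Gaussian observation model p(x | z) = N(mu(z), diag(sigma(z)^2))
    with prior z ~ N(0, I); `decoder(z) -> (mu, sigma)` is a stand-in for a
    trained neural decoder. Not the paper's algorithm, just an illustration.
    """
    rng = np.random.default_rng(seed)
    x = x.copy()
    z = rng.standard_normal(latent_dim)

    def log_target(z):
        # collapsed target p(z | x_obs): only observed coordinates enter
        mu, sigma = decoder(z)
        return norm.logpdf(x[~miss], mu[~miss], sigma[~miss]).sum() - 0.5 * z @ z

    for _ in range(n_iter):
        # random-walk Metropolis step on z given the observed entries
        z_prop = z + step * rng.standard_normal(latent_dim)
        if np.log(rng.uniform()) < log_target(z_prop) - log_target(z):
            z = z_prop
        # exact conditional draw of the missing entries given z
        mu, sigma = decoder(z)
        x[miss] = rng.normal(mu[miss], sigma[miss])
    return x

# toy linear-Gaussian decoder standing in for a trained network
W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
decoder = lambda z: (W @ z, np.full(3, 0.1))
x = np.array([1.0, np.nan, -1.0])
miss = np.isnan(x)
x[miss] = 0.0  # arbitrary initialization of the missing entry
print(impute(x, miss, decoder, latent_dim=2))
```

The returned vector is a single draw from the conditional; averaging several independent draws would give a point imputation under this sketch.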