Fast Learning of Clusters and Topics via Sparse Posteriors
Mixture models and topic models generate each observation from a single cluster, but standard variational posteriors for each observation assign positive probability to all possible clusters. This requires dense storage and runtime costs that scale with the total number of clusters, even though typically only a few clusters have significant posterior mass for any data point. We propose a constrained family of sparse variational distributions that allow at most L non-zero entries, where the tunable threshold L trades off speed for accuracy. Previous sparse approximations have used hard assignments (L=1), but we find that moderate values of L>1 provide superior performance. Our approach easily integrates with stochastic or incremental optimization algorithms to scale to millions of examples. Experiments training mixture models of image patches and topic models for news articles show that our approach produces better-quality models in far less time than baseline methods.
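The core operation the abstract describes — restricting each observation's posterior to its L highest-weight clusters and renormalizing over just those entries — can be sketched as follows. This is only an illustrative sketch of a top-L truncation under simple assumptions; the function name `sparse_responsibilities` and the NumPy implementation are hypothetical and not taken from the paper.

```python
import numpy as np

def sparse_responsibilities(log_weights, L):
    """Keep only the top-L clusters for one observation's posterior.

    log_weights : shape (K,) array of unnormalized log posterior
        weights, one per cluster.
    L : maximum number of non-zero responsibilities to retain.

    Returns (indices, probs): retained cluster indices and their
    renormalized probabilities, which sum to one.
    """
    K = log_weights.shape[0]
    L = min(L, K)
    # Indices of the L largest log-weights (order within the set is irrelevant).
    top = np.argpartition(log_weights, K - L)[K - L:]
    # Renormalize over the retained clusters only (numerically stable softmax).
    w = log_weights[top] - log_weights[top].max()
    probs = np.exp(w)
    probs /= probs.sum()
    return top, probs

# Example: a dense posterior over K=1000 clusters, stored with only L=4 entries.
rng = np.random.default_rng(0)
log_w = rng.normal(size=1000)
idx, r = sparse_responsibilities(log_w, L=4)
print(idx, r, r.sum())
```

Storing only the (index, probability) pairs makes per-observation memory and update cost scale with L rather than with the total number of clusters K, which is the speed/accuracy trade-off the tunable threshold L controls.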