Quasi-Bayes properties of a recursive procedure for mixtures

02/27/2019
by Sandra Fortini, et al.

Bayesian methods are attractive and often optimal, yet the pressure for fast computation, especially with streaming data and online learning, has renewed interest in faster, although possibly sub-optimal, solutions. To what extent such algorithms approximate a Bayesian solution is a question of interest that is not always resolved. Against this background, in this paper we revisit a sequential procedure proposed by Smith and Makov (1978) for unsupervised learning and classification in finite mixtures, and developed by M. Newton and Zhang (1999) for nonparametric mixtures. Newton's algorithm is simple, fast, and theoretically intriguing. Although originally proposed as an approximation of the Bayesian solution, its quasi-Bayes properties remain unclear. We propose a novel methodological approach: we regard the algorithm as a probabilistic learning rule that implicitly defines an underlying probabilistic model, and we find this model. We can then prove that it is, asymptotically, a Bayesian, exchangeable mixture model. Moreover, while the algorithm only offers a point estimate, our approach yields an asymptotic posterior distribution and asymptotic credible intervals for the mixing distribution. Our results also provide practical guidance for tuning the algorithm so that it has desirable properties, as we illustrate in a simulation study. Beyond mixture models, our study suggests a theoretical framework that may be of interest for recursive quasi-Bayes methods in other settings.
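For intuition, Newton's recursion updates the current estimate of the mixing distribution as a convex combination of the previous estimate and its one-step posterior given the new observation: G_n = (1 - a_n) G_{n-1} + a_n k(y_n | .) dG_{n-1} / integral of k(y_n | theta) dG_{n-1}(theta). The following Python sketch discretizes the mixing distribution on a grid; the Gaussian kernel, the grid, and the weights a_n = 1/(n+1) are illustrative assumptions, not the paper's exact specification.

import numpy as np

def newton_recursive_mixture(y, theta_grid, kernel, alpha=None):
    # Discretized version of Newton's recursive update:
    #   G_n = (1 - a_n) G_{n-1} + a_n * k(y_n | theta) dG_{n-1} / normalizer.
    # g holds the probability masses of G on theta_grid.
    if alpha is None:
        alpha = lambda n: 1.0 / (n + 1)  # common weight choice (assumption)
    g = np.full(len(theta_grid), 1.0 / len(theta_grid))  # uniform initial guess G_0
    for n, y_n in enumerate(y, start=1):
        lik = kernel(y_n, theta_grid)  # kernel k(y_n | theta) evaluated on the grid
        post = lik * g
        post /= post.sum()             # one-step "posterior" given G_{n-1}
        a = alpha(n)
        g = (1.0 - a) * g + a * post   # convex quasi-Bayes update
    return g

# Illustration: unit-variance Gaussian kernel, data from a two-component mixture.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
rng.shuffle(y)
theta_grid = np.linspace(-6.0, 6.0, 121)
gauss = lambda y_n, t: np.exp(-0.5 * (y_n - t) ** 2)
g_hat = newton_recursive_mixture(y, theta_grid, gauss)  # estimated mixing masses

The weight sequence a_n is the natural tuning knob: it controls how quickly the recursion forgets its initial guess, which is the kind of choice the paper's results on tuning the algorithm speak to.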
