Approximations in Probabilistic Programs

12/14/2019
by Ekansh Sharma, et al.

We study the first-order probabilistic programming language introduced by Staton et al. (2016), but with an additional language construct, stat, that, like the fixpoint operator of Atkinson et al. (2018), converts the description of the Markov kernel of an ergodic Markov chain into a sample from its unique stationary distribution. Up to minor changes in how certain error conditions are handled, we show that norm and score are eliminable from the extended language, in the sense of Felleisen (1991). We do so by giving an explicit program transformation and a proof of correctness. In fact, our program transformation implements a Markov chain Monte Carlo algorithm, in the spirit of the "Trace-MH" algorithm of Wingate et al. (2011) and Goodman et al. (2008), but simplified to enable analysis. We then explore the problem of approximately implementing the semantics of the language with potentially nested stat expressions, in a language without stat. For a single stat term, the error introduced by the finite unrolling proposed by Atkinson et al. (2018) vanishes only asymptotically; in the general case, no guarantees exist. Under uniform ergodicity assumptions, we are able to give quantitative error bounds and convergence results for the approximate implementation of the extended first-order language.
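To illustrate the idea of finite unrolling, here is a minimal Python sketch (not the paper's construction): a `stat` term applied to a Markov kernel is approximated by running the chain for a fixed number of steps from an initial state. The names `unroll_stat` and `rw_metropolis_kernel` are hypothetical, and the random-walk Metropolis kernel is just a convenient example of an ergodic kernel whose stationary distribution is a known target; under uniform ergodicity, the distribution of the unrolled state converges to that target as the number of steps grows.

```python
import numpy as np

def unroll_stat(kernel, x0, n_steps, rng):
    """Hypothetical approximation of stat(kernel): run the chain for a
    fixed number of steps from x0 instead of sampling exactly from the
    stationary distribution (finite unrolling)."""
    x = x0
    for _ in range(n_steps):
        x = kernel(x, rng)
    return x

def rw_metropolis_kernel(log_density, step_size=0.5):
    """One step of random-walk Metropolis targeting exp(log_density).
    Its unique stationary distribution is the target, so an exact
    stat(kernel) would denote a sample from that target."""
    def kernel(x, rng):
        proposal = x + step_size * rng.standard_normal()
        log_accept = log_density(proposal) - log_density(x)
        if np.log(rng.uniform()) < log_accept:
            return proposal
        return x
    return kernel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Example target: a standard normal distribution.
    kernel = rw_metropolis_kernel(lambda x: -0.5 * x**2)
    samples = [unroll_stat(kernel, x0=10.0, n_steps=500, rng=rng)
               for _ in range(2000)]
    # With enough steps, mean and std should be near 0 and 1;
    # for a fixed finite unrolling, a bias remains.
    print(np.mean(samples), np.std(samples))
```

The sketch also makes the limitation visible: the approximation error depends on the number of unrolled steps and on how fast the chain mixes, which is why the paper's quantitative bounds require ergodicity assumptions.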
