A Divergence Proof for Latuszynski's Counter-Example Approaching Infinity with Probability "Near" One

08/30/2018
by   Yufan Li, et al.

This note is a technical supplement to latuszynski2013adaptive. In that paper, the authors explore various convergence conditions for adaptive Gibbs samplers. A significant portion of the paper is devoted to refuting a set of convergence conditions proposed in an earlier paper, levine2006optimizing, by constructing a counter-example (essentially a state-dependent, time-dependent random walk on R^2) and proving that it approaches infinity with probability strictly greater than 0. Based on numerical simulation, the authors noted that this random walk very likely approaches infinity with probability 1 (see Proposition 3.2, Remark 3.3), but, owing to technical difficulties, they were only able to prove that the process tends to infinity with probability strictly greater than 0 (Remark 3.3). Upon examining their proof, we notice that their approach can be simplified and that an alternative approach yields a stronger result. We detail our method and result here out of technical interest.
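For intuition only, the sketch below shows how one might numerically probe divergence of a state- and time-dependent random walk on R^2, in the spirit of the simulations mentioned above. The drift function is a hypothetical placeholder, not the actual counter-example of latuszynski2013adaptive, which is a specific adaptive random-scan Gibbs sampler.

```python
# Minimal sketch: track the norm of a state- and time-dependent random walk
# on R^2 to see whether it appears to drift to infinity. The drift below is
# a made-up illustration, NOT the construction from latuszynski2013adaptive.
import numpy as np

def simulate_walk(n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    norms = np.empty(n_steps)
    for t in range(1, n_steps + 1):
        # Hypothetical step: outward radial drift whose strength depends on
        # the current state and decays mildly in time, plus Gaussian noise.
        r = np.linalg.norm(x)
        drift = 0.5 * x / (1.0 + r) * (1.0 + 1.0 / t)
        x = x + drift + rng.normal(size=2)
        norms[t - 1] = np.linalg.norm(x)
    return norms

if __name__ == "__main__":
    norms = simulate_walk()
    print("final distance from origin:", norms[-1])
```

Plotting `norms` over repeated runs (with different seeds) gives a crude empirical check of whether trajectories escape to infinity, which is the kind of evidence cited in Remark 3.3; it is of course no substitute for the proof developed in the note.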
