Faster Convergence of Stochastic Gradient Langevin Dynamics for Non-Log-Concave Sampling

10/19/2020
by Difan Zou, et al.

We establish a new convergence analysis of stochastic gradient Langevin dynamics (SGLD) for sampling from a class of distributions that can be non-log-concave. At the core of our approach is a novel conductance analysis of SGLD using an auxiliary time-reversible Markov chain. Under certain conditions on the target distribution, we prove that Õ(d^4 ϵ^-2) stochastic gradient evaluations suffice to guarantee ϵ-sampling error in total variation distance, where d is the problem dimension. This improves existing results on the convergence rate of SGLD (Raginsky et al., 2017; Xu et al., 2018). We further show that, under an additional Hessian Lipschitz condition on the log-density function, SGLD achieves ϵ-sampling error within Õ(d^15/4 ϵ^-3/2) stochastic gradient evaluations. Our proof technique provides a new way to study the convergence of Langevin-based algorithms and sheds light on the design of fast stochastic gradient-based sampling algorithms.
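For context, the sketch below illustrates the basic SGLD update, x_{k+1} = x_k + η g_k + sqrt(2η) ξ_k, where g_k is a (possibly mini-batch) estimate of the gradient of the log-density and ξ_k is standard Gaussian noise. This is a minimal illustration, not the paper's analysis; the function names, the toy two-mode Gaussian-mixture target, and the step-size choice are assumptions made here for demonstration only.

```python
import numpy as np

def sgld_sample(grad_log_density, x0, step_size, n_steps, rng=None):
    """Minimal SGLD sketch: x <- x + eta * g + sqrt(2*eta) * N(0, I),
    where g estimates the gradient of the log-density of the target."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        g = grad_log_density(x)                      # stochastic (or exact) gradient of log pi(x)
        noise = rng.standard_normal(x.shape)         # injected Gaussian noise
        x = x + step_size * g + np.sqrt(2.0 * step_size) * noise
        samples.append(x.copy())
    return np.array(samples)

# Illustrative non-log-concave target: an equal-weight mixture of two unit-variance
# Gaussians in 2D. The exact gradient is used here as a stand-in for a mini-batch estimate.
def grad_log_mixture(x, mus=(np.array([-2.0, 0.0]), np.array([2.0, 0.0]))):
    weights = np.array([np.exp(-0.5 * np.sum((x - mu) ** 2)) for mu in mus])
    weights /= weights.sum()
    return sum(w * (mu - x) for w, mu in zip(weights, mus))

draws = sgld_sample(grad_log_mixture, x0=np.zeros(2), step_size=0.05, n_steps=5000)
```

In practice the gradient callback would compute a mini-batch estimate over data, which is what makes the iteration "stochastic gradient" Langevin dynamics rather than plain Langevin Monte Carlo.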
