On Testing of Samplers

10/24/2020
by Kuldeep S. Meel et al.

Given a set of items ℱ and a weight function 𝚠𝚝: ℱ → (0,1), the problem of sampling seeks to sample an item with probability proportional to its weight. Sampling is a fundamental problem in machine learning. The daunting computational complexity of sampling with formal guarantees leads designers to propose heuristics-based techniques for which no rigorous theoretical analysis exists to quantify the quality of the generated distributions. This poses a challenge in designing a methodology to test whether a sampler under test generates samples according to a given distribution. Only recently, Chakraborty and Meel (2019) designed the first scalable verifier, called Barbarik1, for samplers in the special case when the weight function 𝚠𝚝 is constant, that is, when the sampler is supposed to sample uniformly from ℱ. The techniques in Barbarik1, however, fail to handle general weight functions. The primary contribution of this paper is an affirmative answer to the above challenge: motivated by Barbarik1 but using different techniques and analysis, we design Barbarik2, an algorithm to test whether the distribution generated by a sampler is ε-close to or η-far from any target distribution. In contrast to black-box sampling techniques that require a number of samples proportional to |ℱ|, Barbarik2 requires only Õ(tilt(𝚠𝚝,φ)^2/η(η − 6ε)^3) samples, where the tilt is the maximum ratio of the weights of two satisfying assignments. Barbarik2 can handle arbitrary weight functions. We present a prototype implementation of Barbarik2 and use it to test three state-of-the-art samplers.
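To make the two quantities in the abstract concrete, here is a minimal illustrative sketch (not the Barbarik2 algorithm itself) of sampling an item with probability proportional to its weight, and of computing the tilt, i.e. the maximum ratio of weights over the support. The item names and weight values are hypothetical examples chosen for illustration.

```python
import random

def weighted_sample(items, wt, rng=random.random):
    """Draw one item with probability proportional to wt(item)."""
    total = sum(wt(x) for x in items)
    r = rng() * total
    acc = 0.0
    for x in items:
        acc += wt(x)
        if r <= acc:
            return x
    return items[-1]  # guard against floating-point round-off

def tilt(items, wt):
    """Maximum ratio of the weights of two items in the support."""
    weights = [wt(x) for x in items]
    return max(weights) / min(weights)

# Hypothetical example: four items with weights in (0, 1)
items = ["a", "b", "c", "d"]
wt = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4}.get

print(tilt(items, wt))  # 4.0: max weight 0.4 over min weight 0.1
```

The sample-complexity bound quoted above depends quadratically on this tilt, which is why Barbarik2 stays practical when the weight ratios are bounded even if |ℱ| is astronomically large.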
