Approximating the Permanent with Deep Rejection Sampling
We present a randomized approximation scheme for the permanent of a matrix with nonnegative entries. Our scheme extends a recursive rejection sampling method of Huber and Law (SODA 2008) by replacing the upper bound for the permanent with a linear combination of the subproblem bounds at a moderately large depth of the recursion tree. This method, which we call deep rejection sampling, is empirically shown to outperform the basic, depth-zero variant, as well as a related method by Kuck et al. (NeurIPS 2019). We analyze the expected running time of the scheme on random (0, 1)-matrices in which each entry is independently 1 with probability p. Our bound is superior to a previous one when p is less than 1/5, and matches another bound that was known to hold only when every row and column has density exactly p.
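To make the depth-zero idea concrete, the following is a minimal Python sketch of permanent estimation by recursive rejection sampling. It is an illustration under assumptions, not the paper's implementation: it uses the crude row-sum product bound as a nestable upper bound on the permanent, whereas Huber and Law's scheme (and the deep variant, which swaps in a linear combination of subproblem bounds at depth d) relies on sharper bounds; the function names here are hypothetical.

import random

def row_sum_bound(A, rows, cols):
    """Product of row sums over the submatrix rows x cols: a simple,
    nestable upper bound on that submatrix's permanent (illustrative
    stand-in for the sharper bound used by Huber and Law)."""
    U = 1.0
    for i in rows:
        U *= sum(A[i][j] for j in cols)
    return U

def estimate_permanent(A, trials=100_000, seed=0):
    """Rejection-sampling estimate of per(A) for a nonnegative square
    matrix A.  Each trial tries to build a permutation row by row; the
    acceptance probability is per(A) / U(A), so U(A) times the observed
    acceptance rate is an unbiased estimate of the permanent."""
    rng = random.Random(seed)
    n = len(A)
    U_full = row_sum_bound(A, range(n), range(n))
    if U_full == 0:
        return 0.0
    accepted = 0
    for _ in range(trials):
        cols = list(range(n))
        ok = True
        for i in range(n):
            U_cur = row_sum_bound(A, range(i, n), cols)
            if U_cur == 0:
                ok = False
                break
            # Choose column j with probability A[i][j] * U(subproblem) / U_cur;
            # the leftover probability mass is an explicit rejection, which is
            # nonnegative because the bound is nestable.
            r = rng.random() * U_cur
            chosen = None
            for j in cols:
                sub = [c for c in cols if c != j]
                r -= A[i][j] * row_sum_bound(A, range(i + 1, n), sub)
                if r < 0:
                    chosen = j
                    break
            if chosen is None:
                ok = False  # rejected at this level of the recursion
                break
            cols.remove(chosen)
        if ok:
            accepted += 1
    return U_full * accepted / trials

if __name__ == "__main__":
    A = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # per(A) = 3! = 6
    print(estimate_permanent(A, trials=20_000))

On the all-ones 3x3 matrix the output should be close to 3! = 6. A tighter upper bound lowers the rejection rate and hence the number of trials needed, which is the effect that replacing the bound with a linear combination of deeper subproblem bounds is intended to amplify.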