Generalized Regret Analysis of Thompson Sampling using Fractional Posteriors

09/12/2023
by Prateek Jaiswal, et al.

Thompson sampling (TS) is one of the earliest and most popular algorithms for solving stochastic multi-armed bandit problems. We consider a variant of TS, named α-TS, in which a fractional or α-posterior (α ∈ (0,1)) is used in place of the standard posterior distribution. To compute an α-posterior, the likelihood in the definition of the standard posterior is tempered with a factor α. For α-TS, we obtain both an instance-dependent 𝒪(∑_{k ≠ i^*} Δ_k(log(T)/(C(α)Δ_k²) + 1/2)) and an instance-independent 𝒪(√(KT log K)) frequentist regret bound under very mild conditions on the prior and reward distributions, where Δ_k is the gap between the true mean rewards of the k-th arm and the best arm, and C(α) is a known constant. Both the sub-Gaussian and exponential family models satisfy our general conditions on the reward distribution. Our conditions on the prior distribution require only that its density be positive, continuous, and bounded. We also establish another instance-dependent regret upper bound that matches (up to constants) that of the improved UCB algorithm [Auer and Ortner, 2010]. Our regret analysis carefully combines recent theoretical developments in non-asymptotic concentration analysis and Bernstein-von Mises type results for the α-posterior distribution. Moreover, our analysis does not require additional structural properties such as closed-form posteriors or conjugate priors.
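To make the tempering step concrete, here is a minimal sketch of α-TS for Bernoulli rewards with a Beta prior, a conjugate case chosen purely for brevity (the paper's analysis explicitly does not require conjugacy or closed-form posteriors). Raising the Bernoulli likelihood to the power α scales the observed success and failure counts by α in the posterior update, so the α-posterior is Beta(a₀ + α·s_k, b₀ + α·f_k). The function name and parameters below are illustrative, not from the paper.

```python
import numpy as np

def alpha_ts_bernoulli(true_means, alpha=0.5, T=10_000, seed=0):
    """Illustrative alpha-TS for Bernoulli bandits.

    With a Beta(a0, b0) prior, tempering the Bernoulli likelihood by a
    factor alpha gives the conjugate alpha-posterior
        Beta(a0 + alpha * successes, b0 + alpha * failures),
    so alpha-TS only rescales the counts in the usual TS update.
    """
    rng = np.random.default_rng(seed)
    K = len(true_means)
    a0, b0 = 1.0, 1.0            # flat Beta(1, 1) prior on each arm's mean
    succ = np.zeros(K)           # per-arm success counts
    fail = np.zeros(K)           # per-arm failure counts
    best = max(true_means)
    regret = 0.0
    for _ in range(T):
        # Draw one mean per arm from its alpha-posterior, play the argmax.
        theta = rng.beta(a0 + alpha * succ, b0 + alpha * fail)
        k = int(np.argmax(theta))
        reward = rng.random() < true_means[k]
        succ[k] += reward
        fail[k] += 1 - reward
        regret += best - true_means[k]   # accumulate pseudo-regret
    return regret

print(alpha_ts_bernoulli([0.3, 0.5, 0.7], alpha=0.5))
```

Setting alpha=1.0 in this sketch recovers standard Thompson sampling; smaller α flattens the posterior, which is the mechanism the paper's concentration analysis exploits.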
