Thompson Sampling for Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms

09/07/2018
by   Alihan Hüyük, et al.

We analyze the regret of combinatorial Thompson sampling (CTS) for the combinatorial multi-armed bandit with probabilistically triggered arms under the semi-bandit feedback setting. We assume that the learner has access to an exact optimization oracle but does not know the expected base arm outcomes beforehand. When the expected reward function is Lipschitz continuous in the expected base arm outcomes, we derive an O(∑_{i=1}^m log T / (p_i Δ_i)) regret bound for CTS, where m denotes the number of base arms, p_i denotes the minimum non-zero triggering probability of base arm i, and Δ_i denotes the minimum suboptimality gap of base arm i. We also show that CTS outperforms the combinatorial upper confidence bound (CUCB) algorithm in numerical experiments.
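The abstract describes the CTS setup: Beta posteriors over base-arm outcomes, an exact optimization oracle, and semi-bandit feedback on the arms triggered in each round. Below is a minimal illustrative sketch of that loop, assuming Bernoulli base-arm outcomes; the `oracle` and `env` objects and all names are hypothetical placeholders, not code from the paper.

```python
import numpy as np

def cts(m, oracle, env, horizon):
    """Sketch of combinatorial Thompson sampling with Beta priors.

    m       -- number of base arms
    oracle  -- assumed exact oracle: maps sampled means to a super arm (list of indices)
    env     -- assumed environment: env.play(super_arm) returns the triggered base
               arms and their binary outcomes (semi-bandit feedback)
    horizon -- number of rounds T
    """
    a = np.ones(m)  # Beta posterior success counts
    b = np.ones(m)  # Beta posterior failure counts
    for t in range(horizon):
        theta = np.random.beta(a, b)               # sample expected outcomes from posteriors
        super_arm = oracle(theta)                  # oracle picks the best super arm for theta
        triggered, outcomes = env.play(super_arm)  # observe outcomes of triggered arms only
        for i, x in zip(triggered, outcomes):
            a[i] += x                              # Bayesian update for each triggered arm
            b[i] += 1 - x
    return a / (a + b)                             # posterior mean estimates after T rounds
```

Only triggered arms are updated each round, which is why the minimum non-zero triggering probabilities p_i appear in the regret bound.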

