Batched Thompson Sampling

10/01/2021
by Cem Kalkanli, et al.

We introduce a novel anytime Batched Thompson sampling policy for multi-armed bandits in which the agent observes the rewards of her actions and adjusts her policy only at the end of a small number of batches. We show that this policy simultaneously achieves a problem-dependent regret of order O(log(T)) and a minimax regret of order O(√(T log(T))), while the number of batches can be bounded by O(log(T)) independently of the problem instance over a time horizon T. We also show that, in expectation, the number of batches used by our policy can be bounded by an instance-dependent bound of order O(log log(T)). These results indicate that Thompson sampling maintains the same performance in this batched setting as when instantaneous feedback is available after each action, while requiring minimal feedback. They also indicate that Thompson sampling performs competitively with recently proposed algorithms tailored to the batched setting, which optimize the batch structure for a given time horizon T and prioritize exploration at the beginning of the experiment to eliminate suboptimal actions. We show that Thompson sampling combined with an adaptive batching strategy can achieve similar performance without knowing the time horizon T and without having to carefully optimize the batch structure to achieve a target regret bound (i.e., problem-dependent vs. minimax regret) for a given T.
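The abstract describes the policy only at a high level. As a rough illustration of the batched feedback model it refers to, below is a minimal sketch of Thompson sampling for a Bernoulli bandit where the posterior is refreshed only at batch boundaries. The Beta(1, 1) priors, the Bernoulli reward model, and the doubling-style rule for closing a batch are illustrative assumptions and are not taken from the paper; the authors' exact adaptive batching rule may differ.

```python
# Sketch of batched Thompson sampling on a Bernoulli bandit.
# Assumptions (illustrative, not from the paper): Beta(1, 1) priors and a
# doubling-style rule that closes a batch once the chosen arm's pulls within
# the current batch reach its pull count at the start of the batch.
import numpy as np


def batched_thompson_sampling(true_means, horizon, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    k = len(true_means)

    # Posterior parameters, updated only at the end of each batch.
    alpha = np.ones(k)
    beta = np.ones(k)

    # Statistics accumulated within the current batch.
    batch_successes = np.zeros(k)
    batch_pulls = np.zeros(k)
    pulls_at_batch_start = np.zeros(k)

    num_batches = 0
    total_reward = 0.0

    for _ in range(horizon):
        # Sample from the posterior frozen at the last batch boundary.
        samples = rng.beta(alpha, beta)
        arm = int(np.argmax(samples))

        reward = float(rng.random() < true_means[arm])
        total_reward += reward
        batch_successes[arm] += reward
        batch_pulls[arm] += 1

        # Adaptive batch rule (illustrative): end the batch when the chosen
        # arm has been pulled as many times in this batch as it had been
        # pulled in total when the batch began.
        if batch_pulls[arm] >= max(1.0, pulls_at_batch_start[arm]):
            alpha += batch_successes
            beta += batch_pulls - batch_successes
            pulls_at_batch_start += batch_pulls
            batch_successes[:] = 0.0
            batch_pulls[:] = 0.0
            num_batches += 1

    return total_reward, num_batches


if __name__ == "__main__":
    reward, batches = batched_thompson_sampling([0.3, 0.5, 0.7], horizon=10_000)
    print(f"total reward: {reward:.0f}, batches used: {batches}")
```

Note that the batch schedule here is determined adaptively by the observed pull counts rather than fixed in advance, which is what lets the policy run without knowing the horizon T.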

