Multi-armed Bandits with Compensation

11/05/2018
by Siwei Wang, et al.

We propose and study the known-compensation multi-armed bandit (KCMAB) problem, in which a system controller offers a set of arms to many short-term players over T steps. In each step, one short-term player arrives at the system. Upon arrival, the player aims to select the arm with the current best average reward and receives a stochastic reward associated with that arm. To incentivize players to explore other arms, the controller provides a proper payment compensation to players. The objective of the controller is to maximize the total reward collected by players while minimizing the compensation. We first provide a compensation lower bound Θ(∑_i Δ_i log T / KL_i), where Δ_i and KL_i are, respectively, the expected reward gap and the Kullback-Leibler (KL) divergence between the distributions of arm i and the best arm. We then analyze three algorithms for the KCMAB problem and derive their regrets and compensations. We show that all three algorithms achieve O(log T) regret and O(log T) compensation, matching the theoretical lower bound. Finally, we present experimental results to demonstrate the performance of the algorithms.
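The abstract's setup can be made concrete with a small simulation. The sketch below (an illustrative assumption, not the paper's exact algorithm) runs a controller using standard UCB1 exploration over Bernoulli arms; each arriving short-term player would myopically pull the arm with the highest empirical mean, so whenever the controller's UCB rule selects a different arm, it pays the player the empirical-mean gap as compensation. All function and variable names here are hypothetical.

```python
import math
import random

def simulate_kcmab(means, horizon, seed=0):
    """Simulate a KCMAB-style controller with UCB1 exploration.

    means   -- true success probabilities of the Bernoulli arms
    horizon -- number of steps T (one short-term player per step)
    Returns (total_reward, total_compensation).
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    total_reward = 0.0
    total_compensation = 0.0

    # Pull each arm once to initialize empirical means.
    for i in range(k):
        r = 1.0 if rng.random() < means[i] else 0.0
        counts[i] += 1
        sums[i] += r
        total_reward += r

    for t in range(k, horizon):
        emp = [sums[i] / counts[i] for i in range(k)]
        ucb = [emp[i] + math.sqrt(2 * math.log(t + 1) / counts[i])
               for i in range(k)]
        greedy = max(range(k), key=lambda i: emp[i])  # player's myopic choice
        chosen = max(range(k), key=lambda i: ucb[i])  # controller's choice
        if chosen != greedy:
            # Pay the empirical-mean gap so the myopic player is
            # willing to pull the explored arm instead.
            total_compensation += emp[greedy] - emp[chosen]
        r = 1.0 if rng.random() < means[chosen] else 0.0
        counts[chosen] += 1
        sums[chosen] += r
        total_reward += r

    return total_reward, total_compensation
```

Under this scheme, compensation accrues only on exploratory pulls; since UCB1 makes O(log T) suboptimal pulls, both the regret and the compensation grow logarithmically in T, consistent with the bounds stated above.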
