Top K Ranking for Multi-Armed Bandit with Noisy Evaluations
We consider a multi-armed bandit setting where, at the beginning of each round, the learner receives noisy, independent, and possibly biased evaluations of the true reward of each arm, and it selects K arms with the objective of accumulating as much reward as possible over T rounds. Under the assumption that at each round the true reward of each arm is drawn from a fixed distribution, we derive different algorithmic approaches and theoretical guarantees depending on how the evaluations are generated. First, we show an O(T^(2/3)) regret in the general case where the observation functions are a generalized linear function of the true rewards. On the other hand, we show that an improved O(√T) regret can be derived when the observation functions are noisy linear functions of the true rewards. Finally, we report an empirical validation that confirms our theoretical findings, provides a thorough comparison to alternative approaches, and further supports the interest of this setting in practice.
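For concreteness, here is a minimal sketch of the interaction protocol described above, not of the paper's algorithms: true rewards are drawn each round from fixed (here Gaussian) distributions, the learner only observes noisy linear evaluations with an unknown per-arm slope and bias, and a naive baseline that ranks arms by raw evaluations accumulates regret against the hindsight-optimal top K. All distributions, parameter values, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, K, T = 10, 3, 1000

# Fixed per-arm reward distributions: true reward r_i ~ N(mu_i, 1) each round (assumed).
mu = rng.uniform(0.0, 1.0, size=n_arms)

# Noisy linear observation function (illustrative): eval_i = a_i * r_i + b_i + noise,
# where slope a_i and bias b_i are unknown to the learner.
a = rng.uniform(0.5, 1.5, size=n_arms)
b = rng.uniform(-0.2, 0.2, size=n_arms)

regret = 0.0
for t in range(T):
    r = mu + rng.normal(0.0, 1.0, size=n_arms)              # true rewards this round
    evals = a * r + b + rng.normal(0.0, 0.1, size=n_arms)   # noisy, biased evaluations
    chosen = np.argsort(evals)[-K:]                         # naive: top K by raw evaluation
    oracle = np.argsort(r)[-K:]                             # best K arms in hindsight
    regret += r[oracle].sum() - r[chosen].sum()

print(f"naive top-K regret over {T} rounds: {regret:.1f}")
```

Because the per-arm slopes and biases distort the ranking, the naive baseline incurs linear regret here, which is the gap the paper's generalized-linear and noisy-linear approaches are designed to close.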