Cooperative Stochastic Multi-agent Multi-armed Bandits Robust to Adversarial Corruptions
We study the problem of stochastic bandits with adversarial corruptions in the cooperative multi-agent setting, where V agents interact with a common K-armed bandit problem, and each pair of agents can communicate with each other to expedite the learning process. The rewards are sampled independently across all agents and rounds from fixed distributions, but they may be corrupted by an adversary. Our goal is to minimize both the overall regret and the communication cost across all agents. We first show that an additive regret term proportional to the corruption level is unavoidable for any algorithm in this problem. We then propose a new algorithm that is agnostic to the level of corruption. Our algorithm not only achieves near-optimal regret in the stochastic setting, but also incurs only an additive corruption term in its regret in the corrupted setting, while maintaining efficient communication. The algorithm is also applicable to the single-agent corruption problem, where it achieves a high-probability regret bound that removes the multiplicative factor of K from the corruption term. Our result for the single-agent case resolves an open question from Gupta et al. [2019].
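To make the setting concrete, here is a minimal sketch of the environment the abstract describes: V agents pulling arms of a common K-armed bandit with stochastic rewards, and an adversary that may perturb observed rewards subject to a total corruption budget. The class name, the Bernoulli reward model, and the particular corruption strategy are illustrative assumptions, not the paper's construction.

```python
import random

class CorruptedMultiAgentBandit:
    """Toy model of the problem setting (illustrative only): V agents,
    K arms with Bernoulli rewards sampled independently per agent and
    round, and an adversary constrained by a total corruption budget C."""

    def __init__(self, means, num_agents, corruption_budget, seed=0):
        self.means = means            # true mean reward of each of the K arms
        self.V = num_agents           # number of cooperating agents
        self.C = corruption_budget    # total corruption the adversary may spend
        self.spent = 0.0              # corruption spent so far
        self.rng = random.Random(seed)
        self.best_arm = max(range(len(means)), key=means.__getitem__)

    def pull(self, agent, arm):
        """Return the (possibly corrupted) reward agent observes for arm."""
        # Stochastic reward, sampled independently per agent and round.
        reward = 1.0 if self.rng.random() < self.means[arm] else 0.0
        # One simple adversary: push the best arm's rewards down to 0
        # while budget remains (the model allows arbitrary strategies).
        if arm == self.best_arm and reward == 1.0 and self.spent + 1.0 <= self.C:
            self.spent += 1.0   # corruption cost = size of the change
            reward = 0.0
        return reward
```

An algorithm that is agnostic to the corruption level must perform well on this environment both when C = 0 (the purely stochastic setting) and when C is large, without being told C in advance.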