On the Re-Solving Heuristic for (Binary) Contextual Bandits with Knapsacks

11/25/2022
by Rui Ai, et al.

In the problem of (binary) contextual bandits with knapsacks (CBwK), the agent receives an i.i.d. context in each of the T rounds and chooses an action, resulting in a random reward and a random consumption of resources, both of which depend on an i.i.d. external factor. The agent's goal is to maximize the accumulated reward subject to the initial resource constraints. In this work, we combine the re-solving heuristic, which has proved successful in revenue management, with distribution estimation techniques to solve this problem. We consider two information feedback models, with full and partial information, which differ in how difficult it is to obtain a sample of the external factor. Under both feedback models we establish two results: (1) For general problems, our algorithm achieves an O(T^{α_u} + T^{α_v} + T^{1/2}) regret against the fluid benchmark, where α_u and α_v reflect the complexity of the context and external factor distributions, respectively. This result is comparable to existing results. (2) When the fluid problem is a linear program with a unique and non-degenerate optimal solution, our algorithm achieves an O(1) regret. To the best of our knowledge, this is the first O(1) regret result for the CBwK problem under either information feedback model. We further verify our results with numerical experiments.
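
To make the re-solving idea concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): a toy CBwK instance with finitely many context types, a binary accept/skip action, and a single resource. At every round the agent re-solves the fluid LP with plug-in empirical estimates and the remaining budget per remaining round. All names, parameters, and the pseudo-observation warm start are illustrative assumptions; the paper's method additionally handles continuous contexts, an external factor, and the full/partial feedback distinction, which this sketch omits.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical toy instance (illustrative only, not from the paper):
# finitely many context types, a binary accept/skip action, one resource.
K = 3                                   # number of context types
T = 5000                                # horizon
B = 0.3 * T                             # initial resource budget
p_true = np.array([0.5, 0.3, 0.2])      # context distribution (unknown to the agent)
r_true = np.array([1.0, 0.7, 0.4])      # mean reward when the agent accepts
c_true = np.array([0.9, 0.5, 0.2])      # mean consumption when the agent accepts


def solve_fluid_lp(p_hat, r_hat, c_hat, budget_rate):
    """Fluid relaxation:
        max_q  sum_x p(x) q(x) r(x)
        s.t.   sum_x p(x) q(x) c(x) <= budget_rate,   0 <= q(x) <= 1.
    Returns acceptance probabilities q per context type."""
    res = linprog(
        c=-(p_hat * r_hat),                     # linprog minimizes, so negate
        A_ub=(p_hat * c_hat).reshape(1, -1),
        b_ub=[budget_rate],
        bounds=[(0.0, 1.0)] * len(p_hat),
        method="highs",
    )
    return res.x


# Plug-in estimates, updated online; the re-solving step recomputes the LP
# every round using the *remaining* budget per remaining round.
ctx_counts = np.zeros(K)
pull_counts = np.ones(K)                # one pseudo-observation per type (assumption)
r_obs = np.full(K, 0.5)                 # neutral warm start (assumption)
c_obs = np.full(K, 0.5)
budget, total_reward = B, 0.0

for t in range(T):
    x = rng.choice(K, p=p_true)         # observe the i.i.d. context
    ctx_counts[x] += 1
    p_hat = (ctx_counts + 1) / (ctx_counts.sum() + K)      # smoothed estimate
    q = solve_fluid_lp(p_hat, r_obs / pull_counts, c_obs / pull_counts,
                       budget / max(T - t, 1))
    if budget >= 1.0 and rng.random() < q[x]:              # accept this context
        reward = rng.normal(r_true[x], 0.05)
        cost = float(np.clip(rng.normal(c_true[x], 0.05), 0.0, 1.0))
        total_reward += reward
        budget -= cost
        pull_counts[x] += 1
        r_obs[x] += reward
        c_obs[x] += cost

print(f"total reward: {total_reward:.1f}, budget left: {budget:.1f}")
```

The key re-solving step is passing budget / (T - t) as the constraint level each round, so the fluid LP always reflects the current resource position; the empirical counts here only crudely stand in for the distribution estimation techniques used in the paper.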
