Staged Multi-armed Bandits
In this paper, we introduce a new class of reinforcement learning methods referred to as staged multi-armed bandits (S-MAB). In S-MAB, the learner proceeds in rounds, each composed of several stages; in each stage it chooses an action and observes a feedback signal. Moreover, in each stage it can take a special action, called the stop action, which ends the current round. After the stop action is taken, the learner collects a terminal reward and observes the costs and terminal rewards associated with each stage of the round. The goal of the learner is to maximize its cumulative gain (i.e., the terminal reward minus costs) over all rounds by learning to choose the best sequence of actions based on the feedback it receives about these actions. First, we define an oracle benchmark, which sequentially selects the actions that maximize the expected immediate gain. This benchmark is known to be approximately optimal when the reward sequence associated with the selected actions is adaptive submodular. Then, we propose our online learning algorithm, named Feedback Adaptive Learning (FAL), and show (i) a problem-independent confidence bound on the performance of the selected actions, (ii) a finite regret bound that holds with high probability, and (iii) a logarithmic bound on the expected regret. S-MAB can be used to model numerous applications, ranging from personalized medical screening to personalized web-based education, where the learner does not obtain rewards after each action, but only after sequences of actions are taken, intermediate feedback is observed, and a final decision is made, based on which a terminal reward is obtained. Our illustrative results show that S-MAB can be used to model medical screening and that FAL outperforms existing approaches in this field.
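To make the round structure concrete, the following is a minimal, hypothetical sketch in Python of a staged bandit round: non-stop actions incur costs and produce noisy feedback, the stop action ends the round and reveals a terminal reward, and a simple optimistic (UCB-style) index decides when to continue and when to stop. The environment, the index, and all names (ToyStagedEnv, play_round) are illustrative assumptions for exposition only; they are not the paper's FAL algorithm or its oracle benchmark.

    import math
    import random
    from collections import defaultdict

    STOP = "stop"  # special action that ends the current round

    class ToyStagedEnv:
        """Toy environment: each non-stop action has a random cost and a hidden
        contribution to the terminal reward collected when the round is stopped."""
        def __init__(self, actions, seed=0):
            self.actions = actions
            self.rng = random.Random(seed)
            self._progress = 0.0

        def reset(self):
            self._progress = 0.0

        def step(self, action):
            # Non-stop action: pay a cost, observe a noisy feedback signal.
            cost = self.rng.uniform(0.05, 0.15)
            gain = self.rng.uniform(0.0, 0.4)           # hidden contribution
            self._progress += gain
            feedback = gain + self.rng.gauss(0.0, 0.05)  # noisy signal
            return cost, feedback

        def stop(self):
            # Terminal reward is observed only after the stop action.
            return self._progress

    def play_round(env, counts, means, t, max_stages=10, explore=1.0):
        """One round: in each stage, pick the action with the highest optimistic
        estimated immediate gain; take the stop action when no estimate is positive."""
        env.reset()
        total_cost = 0.0
        for _ in range(max_stages):
            def index(a):
                if counts[a] == 0:
                    return float("inf")                  # force exploration
                bonus = explore * (2.0 * math.log(max(t, 2)) / counts[a]) ** 0.5
                return means[a] + bonus
            best = max(env.actions, key=index)
            if counts[best] > 0 and index(best) <= 0.0:
                break                                    # take the stop action
            cost, feedback = env.step(best)
            total_cost += cost
            counts[best] += 1
            means[best] += (feedback - cost - means[best]) / counts[best]
        terminal_reward = env.stop()
        return terminal_reward - total_cost              # gain of the round

A short usage example under the same assumptions: run the policy for a number of rounds and track the average gain per round.

    env = ToyStagedEnv(actions=["a1", "a2", "a3"])
    counts, means = defaultdict(int), defaultdict(float)
    gains = [play_round(env, counts, means, t=t + 1) for t in range(200)]
    print(sum(gains) / len(gains))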