Lazy Restless Bandits for Decision Making with Limited Observation Capability: Applications in Wireless Networks

01/04/2018
by   Kesav Kaza, et al.

In this work, we formulate the problem of restless multi-armed bandits with cumulative feedback and partially observable states. We call these bandits lazy restless bandits (LRB) because they are slow to act and allow multiple system state transitions during each decision interval. Rewards for each action are state-dependent, and the states of the arms are hidden from the decision maker. The goal of the decision maker is to choose one of the M arms at the beginning of each decision interval such that the long-term cumulative reward is maximized. This work is motivated by applications in wireless networks such as relay selection, opportunistic channel access, and downlink scheduling under evolving channel conditions. The Whittle index policy for solving the LRB problem is analyzed, and in the course of this analysis various structural properties of the value functions are proved. Further, closed-form index expressions are provided for two sets of special cases; for the general case, an algorithm for index computation is provided. A comparative study based on extensive numerical simulations is presented; the performance of the Whittle index and myopic policies is compared with that of other policies such as uniform random, non-uniform random, and round-robin.
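
To make the setup concrete, the following is a minimal simulation sketch, not the paper's algorithm or its Whittle index: it assumes M identical two-state arms with a shared per-slot transition matrix, per-slot rewards of 0 and 1, and a decision interval of L slots, and, as a further simplification of the cumulative-feedback model, it assumes the played arm's terminal state is revealed at the end of each interval. It compares a belief-based myopic rule against uniform random and round-robin selection on this toy instance.

```python
# A minimal sketch of a toy lazy restless bandit (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

M, L, HORIZON = 4, 5, 2000          # arms, slots per decision interval, intervals
P = np.array([[0.9, 0.1],            # per-slot transition matrix (state 0 = bad)
              [0.2, 0.8]])           # shared by all arms for simplicity
r = np.array([0.0, 1.0])             # per-slot reward in each state
PL = np.linalg.matrix_power(P, L)    # L-step transition matrix

def run(policy):
    states = rng.integers(0, 2, size=M)   # true hidden states
    belief = np.full(M, 0.5)              # P(state == good) at interval start
    total, rr_ptr = 0.0, 0
    for _ in range(HORIZON):
        if policy == "random":
            a = rng.integers(M)
        elif policy == "round_robin":
            a, rr_ptr = rr_ptr, (rr_ptr + 1) % M
        else:                              # myopic rule on current beliefs
            a = int(np.argmax(belief))
        # simulate L slots; only the chosen arm yields reward,
        # but all arms keep evolving ("restless")
        for _ in range(L):
            total += r[states[a]]
            for i in range(M):
                states[i] = rng.choice(2, p=P[states[i]])
        # belief update: unplayed arms propagate L steps; the played arm's
        # terminal state is assumed observable here (a simplifying assumption)
        belief = np.array([np.array([1 - b, b]) @ PL for b in belief])[:, 1]
        belief[a] = float(states[a])
    return total / (HORIZON * L)           # average per-slot reward

for pol in ["random", "round_robin", "myopic"]:
    print(pol, round(run(pol), 3))
```

Even on this toy instance, the belief-based myopic rule typically earns a higher average per-slot reward than the state-agnostic baselines, which is the kind of gap the paper quantifies for the Whittle index and myopic policies.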
