Stabilizing a Queue Subject to Action-Dependent Server Performance

03/01/2019
by Michael Lin, et al.

We consider a discrete-time system comprising an unbounded queue that logs tasks, a non-preemptive server, and a stationary non-work-conserving scheduler. Time is uniformly divided into intervals we call epochs, within which new tasks arrive according to a Bernoulli process. At the start of each epoch, the server is either busy working on a task or is available. In the latter case, the scheduler either assigns a new task to the server or allows a rest epoch. In addition to the aforementioned availability state, we assume that the server has an additional action-dependent state that takes values in a finite set of positive integers. The action-dependent state, which evolves according to a given Markov decision process whose input is the action chosen by the scheduler, is non-increasing during rest periods, and is non-decreasing otherwise. The action-dependent state determines the probability that a task can be completed within an epoch. The scheduler has access to the queue size and the entire state of the server. In this article, we show that stability, whenever viable, can be achieved by a simple scheduler whose decisions are based on the availability state, a threshold applied to the action-dependent state, and a flag that indicates when the queue is empty. The supremum of the long-term service rates achievable by the system, subject to the requirement that the queue remains stable, can be determined by a finite search.
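To make the setup concrete, the following is a minimal simulation sketch of the system under a threshold scheduler of the kind described above. The specific numbers are illustrative assumptions, not taken from the paper: the completion probability is taken to decrease in the action-dependent state as mu(s) = 0.9/s, and the state is assumed to move up or down one level with probability 0.5 while working or resting, respectively.

```python
import random

def simulate(T=20000, lam=0.3, threshold=3, n_states=5, seed=0):
    """Simulate the queue/server system under a threshold scheduler.

    Illustrative assumptions (not from the paper): completion
    probability mu(s) = 0.9 / s decreases in the action-dependent
    state s; working raises s one level w.p. 0.5 and resting lowers
    it one level w.p. 0.5, clamped to {1, ..., n_states}.
    """
    rng = random.Random(seed)
    queue = 0        # tasks waiting (unbounded)
    busy = False     # availability state of the non-preemptive server
    s = 1            # action-dependent state
    arrivals = completed = 0
    queue_sum = 0
    for _ in range(T):
        # Bernoulli arrival at the start of the epoch.
        if rng.random() < lam:
            queue += 1
            arrivals += 1
        if not busy:
            # Threshold policy: assign a task only if the queue is
            # nonempty and the action-dependent state is at most the
            # threshold; otherwise allow a rest epoch.
            if queue > 0 and s <= threshold:
                queue -= 1
                busy = True
        if busy:
            # The task completes within this epoch w.p. mu(s).
            if rng.random() < 0.9 / s:
                completed += 1
                busy = False
            # Working epoch: state is non-decreasing.
            if rng.random() < 0.5:
                s = min(n_states, s + 1)
        else:
            # Rest epoch: state is non-increasing.
            if rng.random() < 0.5:
                s = max(1, s - 1)
        queue_sum += queue
    return {"arrivals": arrivals, "completed": completed,
            "in_service": 1 if busy else 0, "queue": queue,
            "mean_queue": queue_sum / T}
```

Sweeping the threshold (and comparing the resulting long-term completion rate against the arrival rate) mirrors the finite search mentioned in the abstract: only the availability state, the threshold on the action-dependent state, and the empty-queue flag drive the scheduling decision.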
