Model-Based Reinforcement Learning Framework of Online Network Resource Allocation
Online Network Resource Allocation (ONRA) for service provisioning is a fundamental problem in communication networks. As a problem of sequential decision-making under uncertainty, ONRA is a promising target for Reinforcement Learning (RL). However, RL solutions suffer from sample complexity: a large number of interactions with the environment is needed to find an efficient policy. This is a barrier to applying RL to ONRA because, on the one hand, it is not practical to train the RL agent offline due to the lack of information about future requests, and on the other hand, online training in the real network leads to significant performance loss caused by the sub-optimal policy during the prolonged learning time. This performance degradation is even more pronounced in non-stationary ONRA, where the agent must continually adapt its policy to changes in service requests. To address this issue, we develop a general resource allocation framework, named RADAR, using model-based RL for a class of ONRA problems in which the immediate reward of each action is known. RADAR improves sample efficiency by exploring the state space in the background and exploiting the policy at decision time, using synthetic samples generated by a model of the environment that is trained on real interactions. Applying RADAR to the multi-domain service federation problem, where the goal is to maximize profit by selecting appropriate domains for deploying service requests, demonstrates its continual learning capability and up to 44% improvement over the standard RL solution.
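The abstract does not spell out RADAR's training loop, but the idea of learning an environment model from real interactions and then updating the policy in the background with synthetic samples is the classic Dyna pattern. The sketch below illustrates that pattern only; the tabular Q-learning choice, the class and parameter names, and the hyperparameter values are illustrative assumptions, not RADAR's actual implementation.

```python
# Minimal Dyna-style sketch of background planning with a learned model.
# Illustrative only: names, tabular Q-learning, and defaults are assumptions.
import random
from collections import defaultdict

class DynaQAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, planning_steps=20):
        self.q = defaultdict(float)   # Q-value table keyed by (state, action)
        self.model = {}               # learned model: (state, action) -> (reward, next_state)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.planning_steps = planning_steps

    def act(self, state):
        # epsilon-greedy selection on the current Q estimates (decision time)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def _q_update(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    def observe(self, s, a, r, s_next):
        # 1) direct RL update from the real interaction
        self._q_update(s, a, r, s_next)
        # 2) refine the environment model with the real sample
        self.model[(s, a)] = (r, s_next)
        # 3) background planning: replay synthetic samples drawn from the model
        for _ in range(self.planning_steps):
            (ps, pa), (pr, ps_next) = random.choice(list(self.model.items()))
            self._q_update(ps, pa, pr, ps_next)

# Usage sketch: on each request arrival, pick an action for the current state,
# observe the reward and next state from the real network, then learn and plan:
#   agent = DynaQAgent(actions=candidate_domains)
#   a = agent.act(s); agent.observe(s, a, r, s_next)
```

Because the planning updates reuse the learned model rather than new real interactions, far fewer live decisions are needed before the policy becomes efficient, which is the sample-efficiency benefit the abstract attributes to RADAR.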