Dynamic Channel Access and Power Control in Wireless Interference Networks via Multi-Agent Deep Reinforcement Learning
Due to the scarcity of wireless spectrum and limited energy resources, especially in mobile applications, efficient resource allocation strategies are critical in wireless networks. Motivated by recent advances in deep reinforcement learning (DRL), we address multi-agent DRL-based joint dynamic channel access and power control in a wireless interference network. We first propose a multi-agent DRL algorithm with centralized training (DRL-CT) to tackle the joint resource allocation problem. In this case, training is performed at a central unit (CU), and after training, the users make autonomous decisions on their transmission strategies using only local information. We demonstrate that, with limited information exchange and faster convergence, the DRL-CT algorithm can achieve 90% of the performance achieved by the combination of the weighted minimum mean square error (WMMSE) algorithm for power control and exhaustive search for dynamic channel access. In the second part of this paper, we consider a distributed multi-agent DRL scenario in which each user conducts its own training and makes its decisions individually, acting as a DRL agent. Finally, as a compromise between the centralized and fully distributed scenarios, we consider federated DRL (FDRL) to approach the performance of DRL-CT with the use of a central unit in training, while limiting information exchange and preserving the privacy of the users in the wireless system. Via simulation results, we show that the proposed learning frameworks lead to efficient adaptive channel access and power control policies in dynamic environments.
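The abstract does not specify the FDRL aggregation rule; a minimal FedAvg-style sketch of the central unit's role (all function and variable names hypothetical) could look like:

```python
import numpy as np

def federated_average(local_weights, coeffs=None):
    """FedAvg-style aggregation: the central unit averages the users'
    local DRL model parameters, so only model weights (not raw local
    observations) leave each user -- limiting information exchange and
    preserving privacy, as the FDRL compromise described above intends.

    local_weights: list over users; each entry is a list of parameter
    tensors (e.g. the layers of that user's policy/Q-network).
    """
    n = len(local_weights)
    if coeffs is None:
        coeffs = [1.0 / n] * n  # equal weighting of the users
    # Element-wise weighted average of each parameter tensor.
    return [sum(c * w[i] for c, w in zip(coeffs, local_weights))
            for i in range(len(local_weights[0]))]

# Toy example: three users, each with two parameter tensors.
users = [[np.ones((2, 2)) * u, np.ones(3) * u] for u in (1.0, 2.0, 3.0)]
global_model = federated_average(users)
print(global_model[0])  # every entry equals the mean, 2.0
```

After each aggregation round, the averaged global model would be broadcast back to the users, who continue local training from it.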