Multi-Agent Deep Stochastic Policy Gradient for Event Based Dynamic Spectrum Access
We consider the dynamic spectrum access (DSA) problem in which K Internet of Things (IoT) devices compete for the T time slots constituting a frame. The devices collectively monitor M events, and each event may be monitored by multiple devices. When at least one of its monitored events is active, a device picks one such event and a time slot in which to transmit the corresponding event information. If multiple devices select the same time slot, a collision occurs and all transmitted packets are discarded. To capture the fact that devices observing the same event may transmit redundant information, we maximize the average sum event rate of the system instead of the classical frame throughput. We propose a multi-agent reinforcement learning approach based on a stochastic variant of Multi-Agent Deep Deterministic Policy Gradient (MADDPG) that accesses the frame by exploiting device-level correlation and the time correlation of events. Through numerical simulations, we show that the proposed approach efficiently exploits these correlations and outperforms benchmark solutions such as standard multiple access protocols and the widely used Independent Deep Q-Network (IDQN) algorithm.
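To make the setup concrete, the sketch below (not the authors' code) shows what a stochastic MADDPG-style actor for a single IoT device could look like in PyTorch: the device's local observation of active events is mapped to a categorical policy over joint (event, time slot) choices, and the policy is updated with a score-function gradient driven by a centralized critic's value estimate. The dimensions, network sizes, and the placeholder critic output are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the authors' implementation): a
# stochastic actor for one IoT device in a stochastic-MADDPG-style setup.
import torch
import torch.nn as nn

K, M, T = 4, 3, 5            # devices, events, time slots (assumed values)
A = M * T + 1                # joint (event, slot) actions plus "stay silent"

class StochasticActor(nn.Module):
    """Maps a device's local observation (active-event flags) to a categorical
    policy over which event to report and in which time slot to transmit."""
    def __init__(self, obs_dim=M, hidden=64, n_actions=A):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

# One illustrative policy-gradient step for device k. In MADDPG-style training
# the learning signal would come from a centralized critic; here q_k is a
# hypothetical placeholder for that critic's action-value estimate.
actor = StochasticActor()
optim = torch.optim.Adam(actor.parameters(), lr=1e-3)

obs_k = torch.randint(0, 2, (1, M)).float()   # local view of active events
dist = actor(obs_k)
action = dist.sample()                        # sampled (event, slot) choice
q_k = torch.tensor([0.7])                     # placeholder critic output

loss = -(dist.log_prob(action) * q_k).mean()  # score-function gradient
optim.zero_grad(); loss.backward(); optim.step()
```

In a full system, one such actor would be trained per device with a centralized critic observing all devices' actions and event states, while execution remains decentralized; the reward would be derived from the sum event rate rather than raw per-slot throughput, so that redundant reports of the same event are not double-counted.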