Multi-Power Level Q-Learning Algorithm for Random Access in NOMA mMTC Systems

The massive machine-type communications (mMTC) service will be part of the new services planned for beyond-fifth-generation (B5G) wireless communication systems. In mMTC, thousands of devices sporadically access the available resource blocks of the network. In this scenario, the massive random access (RA) problem arises when two or more devices collide by selecting the same resource block. Several techniques deal with this problem. One of them deploys Q-learning (QL), in which devices store in their Q-tables the rewards sent by the central node indicating the quality of the transmission performed; each device thus learns which resource blocks to select for transmission in order to avoid collisions. We propose a multi-power level QL (MPL-QL) algorithm that uses a non-orthogonal multiple access (NOMA) transmission scheme to create transmit-power diversity and accommodate more than one device in the same time-slot, as long as the signal-to-interference-plus-noise ratio (SINR) exceeds a threshold value. The numerical results reveal that the best performance-complexity trade-off is obtained with a higher number of power levels, typically eight. The proposed MPL-QL delivers better throughput and lower latency than other recent QL-based algorithms found in the literature.
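To make the idea concrete, the following is a minimal Python sketch of distributed, stateless Q-learning over (time-slot, power-level) actions with NOMA-style reception. The epsilon-greedy policy, the ±1 reward, the geometric power levels, the SINR threshold, and the idealized successive interference cancellation (SIC) model are all illustrative assumptions; the paper's exact reward design, update rule, and channel model may differ.

```python
import numpy as np

# Assumed simulation parameters (not taken from the paper).
rng = np.random.default_rng(0)
N_DEVICES, N_SLOTS, N_LEVELS = 20, 10, 8            # devices, time-slots per frame, power levels
POWERS = np.array([2.0 ** k for k in range(N_LEVELS)])  # hypothetical received-power levels
NOISE, SINR_TH = 1.0, 1.0                           # noise power and SINR threshold (assumed)
ALPHA, EPS, FRAMES = 0.1, 0.1, 2000                 # learning rate, exploration rate, frames

# One Q-table per device over (time-slot, power-level) actions.
Q = np.zeros((N_DEVICES, N_SLOTS, N_LEVELS))

def choose_action(q):
    """Epsilon-greedy pick of a (slot, level) pair from a device's Q-table."""
    if rng.random() < EPS:
        return int(rng.integers(N_SLOTS)), int(rng.integers(N_LEVELS))
    return np.unravel_index(np.argmax(q), q.shape)

for _ in range(FRAMES):
    actions = [choose_action(Q[d]) for d in range(N_DEVICES)]
    for slot in range(N_SLOTS):
        users = [(d, lvl) for d, (s, lvl) in enumerate(actions) if s == slot]
        # NOMA reception: decode from strongest to weakest received power,
        # treating not-yet-cancelled signals plus noise as interference.
        users.sort(key=lambda u: POWERS[u[1]], reverse=True)
        residual = sum(POWERS[lvl] for _, lvl in users)
        sic_ok = True
        for d, lvl in users:
            interference = residual - POWERS[lvl]
            success = sic_ok and POWERS[lvl] / (interference + NOISE) >= SINR_TH
            reward = 1.0 if success else -1.0        # assumed reward shaping
            s, a = actions[d]
            Q[d, s, a] += ALPHA * (reward - Q[d, s, a])  # stateless QL update
            if success:
                residual -= POWERS[lvl]              # cancel decoded signal before next decode
            else:
                sic_ok = False                       # SIC chain broken: weaker signals also fail
```

In this sketch, collisions on the same time-slot are no longer automatic failures: devices that happen to pick sufficiently separated power levels can all be decoded via SIC, which is the power-diversity effect the MPL-QL algorithm exploits.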
