Multiple Access in Dynamic Cell-Free Networks: Outage Performance and Deep Reinforcement Learning-Based Design
In future cell-free (or cell-less) wireless networks, a large number of devices in a geographical area will be served simultaneously in non-orthogonal multiple access scenarios by a large number of distributed access points (APs), which coordinate with a centralized processing pool. For such a centralized cell-free network with a static, predefined beamforming design, we first derive a closed-form expression for the uplink per-user probability of outage. To significantly reduce the complexity of jointly processing the users' signals in the presence of a large number of devices and APs, we propose a novel dynamic cell-free network architecture. In this architecture, the distributed APs are partitioned (i.e., clustered) into a set of subgroups, with each subgroup acting as a virtual AP equipped with a distributed antenna system (DAS). The conventional static cell-free network is a special case of this dynamic cell-free network when the cluster size is one. For this dynamic cell-free network, we propose a successive interference cancellation (SIC)-enabled signal detection method and an inter-user-interference (IUI)-aware receive diversity combining scheme for the DAS. We then formulate the general problem of clustering the APs and designing the beamforming vectors with the objective of maximizing either the sum rate or the minimum rate. To this end, we propose a hybrid deep reinforcement learning (DRL) model, namely, a deep deterministic policy gradient (DDPG)-deep double Q-network (DDQN) model, to solve the optimization problem for online implementation with low complexity. The DRL model for sum-rate optimization significantly outperforms that for maximizing the minimum rate in terms of average per-user rate performance. Also, in our system setting, the proposed DDPG-DDQN scheme is found to achieve around 78% of the rate achievable through an exhaustive search-based design.
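The hybrid DRL design pairs a continuous-action component (DDPG, for the beamforming vectors) with a discrete-action component (DDQN, for the AP clustering decision). Below is a minimal Python/PyTorch sketch of how such a hybrid action could be composed at inference time; the network sizes, the `Actor`, `ClusterQNet`, and `select_action` names, and the epsilon-greedy exploration are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """DDPG-style actor: maps channel-state features to continuous beamforming weights."""
    def __init__(self, state_dim: int, beam_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, beam_dim), nn.Tanh(),  # bounded raw weights
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        w = self.net(state)
        # Normalize so the beamforming vector satisfies a unit-norm (power) constraint.
        return w / (w.norm(dim=-1, keepdim=True) + 1e-9)

class ClusterQNet(nn.Module):
    """DDQN-style value network over a finite set of candidate AP clusterings."""
    def __init__(self, state_dim: int, n_clusterings: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_clusterings),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(actor: Actor, qnet: ClusterQNet, state: torch.Tensor, epsilon: float = 0.1):
    """Compose the hybrid action: discrete clustering via epsilon-greedy on the Q-network,
    continuous beamforming via the deterministic actor (exploration noise omitted)."""
    with torch.no_grad():
        q_values = qnet(state)
        if torch.rand(1).item() < epsilon:
            cluster_idx = int(torch.randint(q_values.shape[-1], (1,)).item())
        else:
            cluster_idx = int(q_values.argmax(dim=-1).item())
        beam_vector = actor(state)
    return cluster_idx, beam_vector

# Example usage with hypothetical dimensions (channel-state feature size, beam size,
# number of candidate clusterings): these values are placeholders, not from the paper.
state = torch.randn(32)
cluster, beam = select_action(Actor(32, 8), ClusterQNet(32, 16), state)
```

In this kind of hybrid scheme, the chosen clustering index would determine which APs form each virtual DAS, while the continuous output would parameterize the beamforming applied within the clusters; both components would be trained against a reward reflecting the chosen objective (sum rate or minimum rate).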