A Joint Reinforcement-Learning Enabled Caching and Cross-Layer Network Code for Sum-Rate Maximization in F-RAN with D2D Communications

03/22/2021
by Mohammed S. Al-Abiad, et al.

In this paper, we leverage reinforcement learning (RL) and cross-layer network coding (CLNC) to efficiently pre-fetch users' contents to local caches and deliver these contents to users in a downlink fog radio access network (F-RAN) with device-to-device (D2D) communications. In the considered system, fog access points (F-APs) and cache-enabled D2D (CE-D2D) users are equipped with local caches that alleviate the traffic burden at the fronthaul while allowing users' contents to be accommodated easily and quickly. In CLNC, the coding decisions take into account users' contents, their rates, and the power levels of F-APs and CE-D2D users, while RL optimizes the caching strategy. Towards this goal, joint content placement and delivery is formulated as an optimization problem whose objective is to maximize the system sum-rate. For this NP-hard problem, we first develop a decentralized CLNC coalition formation (CLNC-CF) algorithm that obtains a stable solution to the content delivery problem, in which F-APs and CE-D2D users utilize CLNC resource allocation. Taking the behavior of F-APs and CE-D2D users into account, we then develop a multi-agent RL (MARL) algorithm that optimizes the content placements at both F-APs and CE-D2D users. Simulation results show that the proposed joint CLNC-CF and RL framework improves the sum-rate by up to 30%, 60%, and 150% compared to: 1) an optimal uncoded algorithm, 2) a standard rate-aware NC algorithm, and 3) a benchmark classical NC with network-layer optimization, respectively.
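The MARL content-placement step can be illustrated with a minimal, stateless multi-agent Q-learning sketch. Everything below is an illustrative assumption rather than the paper's actual formulation: the library size, the number of caching nodes, the Zipf-like popularity profile, and the use of local cache-hit rate as a stand-in for the sum-rate reward.

```python
import random

random.seed(0)

NUM_AGENTS = 3    # caching nodes (F-APs / CE-D2D users); illustrative size
NUM_FILES = 5     # content library size; illustrative
EPISODES = 2000
ALPHA, EPSILON = 0.1, 0.1

# Zipf-like request popularity (file 0 most popular); illustrative assumption.
popularity = [1.0 / (i + 1) for i in range(NUM_FILES)]
total = sum(popularity)
popularity = [p / total for p in popularity]

# Stateless Q-table: Q[agent][file] = estimated team reward of caching that file.
Q = [[0.0] * NUM_FILES for _ in range(NUM_AGENTS)]

def sample_request():
    # Draw one content request from the popularity distribution.
    r, acc = random.random(), 0.0
    for f, p in enumerate(popularity):
        acc += p
        if r < acc:
            return f
    return NUM_FILES - 1

def reward(cached):
    # Proxy for sum-rate: fraction of a batch of requests served from a local cache.
    hits = sum(1 for _ in range(20) if sample_request() in cached)
    return hits / 20.0

for _ in range(EPISODES):
    # Each agent independently picks one file to cache (epsilon-greedy on its own Q-values).
    actions = []
    for a in range(NUM_AGENTS):
        if random.random() < EPSILON:
            actions.append(random.randrange(NUM_FILES))
        else:
            actions.append(max(range(NUM_FILES), key=lambda f: Q[a][f]))
    r = reward(set(actions))  # shared (team) reward couples the agents' decisions
    for a in range(NUM_AGENTS):
        Q[a][actions[a]] += ALPHA * (r - Q[a][actions[a]])

# Final greedy placement per agent.
placement = [max(range(NUM_FILES), key=lambda f: Q[a][f]) for a in range(NUM_AGENTS)]
print(placement)
```

Because all agents receive the same team reward, an agent that caches an already-cached file adds nothing to the hit rate, so exploration tends to push agents toward caching distinct popular files, mirroring how the paper's MARL algorithm accounts for the behavior of the other caching nodes.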

