Energy Efficient Edge Computing: When Lyapunov Meets Distributed Reinforcement Learning

03/31/2021
by   Mohamed Sana, et al.

In this work, we study the problem of energy-efficient computation offloading enabled by edge computing. In the considered scenario, multiple users simultaneously compete for limited radio and edge computing resources to get their offloaded tasks processed under a delay constraint, with the possibility of exploiting low-power sleep modes at all network nodes. The radio resource allocation takes into account inter- and intra-cell interference, and the duty cycles of the radio and computing equipment must be jointly optimized to minimize the overall energy consumption. To address this issue, we formulate the underlying problem as a dynamic long-term optimization. Then, using Lyapunov stochastic optimization tools, we decouple the formulated problem into a CPU scheduling problem and a radio resource allocation problem, both solved on a per-slot basis. Whereas the former can be solved optimally and efficiently with a fast iterative algorithm, the latter, due to its non-convexity and NP-hardness, is solved via distributed multi-agent reinforcement learning. The resulting framework achieves up to 96.5% of the performance of an optimal strategy based on exhaustive search, while drastically reducing complexity. The proposed solution also increases the network's energy efficiency compared to a benchmark heuristic approach.
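The Lyapunov decoupling described above can be illustrated with a minimal drift-plus-penalty sketch. The snippet below is a simplified toy model, not the paper's algorithm: a single virtual queue tracks task backlog, and on each slot the controller picks a (service rate, power) pair — hypothetical CPU speed levels — that minimizes the drift-plus-penalty bound `V * power - Q * service`, where `V` trades energy for delay.

```python
def drift_plus_penalty_action(Q, actions, V):
    """Pick the action minimizing the per-slot drift-plus-penalty
    bound: V * power - Q * service_rate.
    A large backlog Q pushes the controller toward faster (more
    power-hungry) service; a large V pushes it toward sleeping."""
    return min(actions, key=lambda a: V * a[1] - Q * a[0])


def run(T=1000, V=50.0, arrival=1.0):
    """Simulate T slots with constant arrivals.
    The candidate (service_rate, power) pairs below are hypothetical
    CPU speed levels with convex power cost; (0, 0) is sleep mode."""
    actions = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
    Q, energy = 0.0, 0.0
    for _ in range(T):
        service, power = drift_plus_penalty_action(Q, actions, V)
        energy += power
        # virtual-queue update: serve, then admit new arrivals
        Q = max(Q - service, 0.0) + arrival
    return Q, energy / T
```

With this setup the queue stays bounded near `V` (the controller sleeps until the backlog makes serving worthwhile), while the long-run average power approaches the minimum needed to sustain the arrival rate — the standard O(1/V) energy vs. O(V) delay trade-off of Lyapunov optimization.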
