Eco-Vehicular Edge Networks for Connected Transportation: A Decentralized Multi-Agent Reinforcement Learning Approach
This paper introduces an energy-efficient, software-defined vehicular edge network for intelligent connected transportation systems. A joint user-centric virtual cell formation and resource allocation problem is investigated to deliver eco-friendly solutions at the edge. This joint problem aims to curb the power consumption of edge nodes while maintaining guaranteed reliability and data rate. More specifically, by prioritizing the downlink communication of dynamic eco-routing, highly mobile autonomous vehicles are served by multiple low-powered access points simultaneously for ubiquitous connectivity and guaranteed network reliability. The formulated optimization problem is hard to solve in polynomial time due to its combinatorial structure. Hence, a decentralized multi-agent reinforcement learning (D-MARL) algorithm is proposed for eco-vehicular edges. First, the algorithm segments the centralized action space into multiple smaller groups. Using model-free decentralized Q-learning, each edge agent then takes actions from its respective group. Moreover, at each learning state, a software-defined controller selects the global best action from the individual bests of the distributed agents. Numerical results validate that the proposed learning solution outperforms existing baseline schemes and achieves near-optimal performance.
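The abstract's D-MARL workflow can be sketched in a minimal toy form: partition a centralized action space among edge agents, run a model-free tabular Q-learning update per agent, and let a controller pick the global best among the agents' local bests. This is an illustrative assumption-laden sketch, not the paper's actual algorithm; the state/action sizes, reward, and learning parameters are all hypothetical placeholders.

```python
import numpy as np

N_STATES = 4     # discretized network states (hypothetical)
N_ACTIONS = 12   # size of the centralized action space (hypothetical)
N_AGENTS = 3     # edge agents; the action space is split among them

# Step 1: segment the centralized action space into per-agent groups
groups = np.array_split(np.arange(N_ACTIONS), N_AGENTS)

# One Q-table per agent, restricted to that agent's action group
q_tables = [np.zeros((N_STATES, len(g))) for g in groups]

def q_update(agent, state, local_action, reward, next_state,
             alpha=0.1, gamma=0.9):
    """Step 2: standard model-free Q-learning update on the agent's table."""
    q = q_tables[agent]
    td_target = reward + gamma * np.max(q[next_state])
    q[state, local_action] += alpha * (td_target - q[state, local_action])

def controller_select(state):
    """Step 3: the software-defined controller gathers each agent's best
    local action and returns the global best as (agent, local, global)."""
    best_q, best_action = -np.inf, None
    for agent, group in enumerate(groups):
        local = int(np.argmax(q_tables[agent][state]))
        q_val = q_tables[agent][state][local]
        if q_val > best_q:
            best_q, best_action = q_val, (agent, local, int(group[local]))
    return best_action
```

In this sketch each agent searches only its own (smaller) group, so the per-agent argmax cost shrinks by roughly the number of agents, which mirrors the motivation for segmenting the combinatorial action space.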