Dynamic Content Update for Wireless Edge Caching via Deep Reinforcement Learning

10/19/2019
by Pingyang Wu, et al.

This letter studies a basic wireless caching network in which a source server is connected to a cache-enabled base station (BS) that serves multiple requesting users. A critical problem is how to improve the cache hit rate under dynamic content popularity. To address it, the primary contribution of this work is a novel dynamic content update strategy based on deep reinforcement learning. Since the BS is unaware of the content popularities, the proposed strategy dynamically updates the BS cache according to the time-varying user requests and the currently cached contents. Towards this end, we model the cache update problem as a Markov decision process (MDP) and put forth an efficient algorithm that builds upon a long short-term memory (LSTM) network and external memory to enhance the decision-making ability of the BS. Simulation results show that the proposed algorithm achieves not only a higher average reward than the deep Q-network (DQN), but also a higher cache hit rate than existing replacement policies such as least recently used (LRU), first-in first-out (FIFO), and the DQN-based algorithm.
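The abstract's MDP framing can be made concrete. Below is a minimal sketch, assuming a content library of size N, a BS cache of size C, and requests drawn from a hidden Zipf popularity that the BS never observes; all names and parameters here (CacheUpdateMDP, n_contents, zipf_a, the round-robin eviction policy) are illustrative assumptions, not the authors' exact formulation.

```python
import random

# Sketch of the cache-update MDP described in the abstract.
# State:  (current user request, BS cached contents)
# Action: which cache slot to overwrite on a miss (or None to keep the cache)
# Reward: 1 if the request is a cache hit, 0 otherwise
# The Zipf popularity is hidden from the agent, matching the setting where
# the BS is unaware of content popularities.

class CacheUpdateMDP:
    def __init__(self, n_contents=100, cache_size=10, zipf_a=1.1, seed=0):
        self.rng = random.Random(seed)
        self.n = n_contents
        self.cache = list(range(cache_size))      # initial BS cache contents
        w = [1.0 / (k + 1) ** zipf_a for k in range(n_contents)]
        s = sum(w)
        self.pop = [x / s for x in w]             # hidden, assumed Zipf popularity
        self.request = self._sample()

    def _sample(self):
        return self.rng.choices(range(self.n), weights=self.pop)[0]

    def state(self):
        # Observation available to the BS: the request and its cached contents.
        return (self.request, tuple(self.cache))

    def step(self, evict_slot):
        """evict_slot: cache slot to replace with the requested content, or None."""
        hit = self.request in self.cache
        reward = 1.0 if hit else 0.0              # reward = cache hit indicator
        if not hit and evict_slot is not None:
            self.cache[evict_slot] = self.request # dynamic cache update on a miss
        self.request = self._sample()             # next time-varying request
        return self.state(), reward


# Usage: a simple round-robin eviction policy, which behaves like the
# FIFO baseline mentioned in the abstract.
env = CacheUpdateMDP()
slot, hits, steps = 0, 0.0, 10_000
for t in range(steps):
    _, cache = env.state()
    _, r = env.step(evict_slot=slot)
    if r == 0.0:                                  # a miss triggered an eviction
        slot = (slot + 1) % len(cache)            # advance to the next slot
    hits += r
print(f"FIFO-like hit rate: {hits / steps:.3f}")
```

In the paper's setting, the hand-coded eviction rule above would be replaced by a learned agent whose state encoder is an LSTM with external memory, trained to maximize the cumulative cache-hit reward.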
