Paper Title
User Preference Learning-Aided Collaborative Edge Caching for Small Cell Networks
Paper Authors
Paper Abstract
While next-generation wireless communication networks intend to leverage edge caching for improved spectral efficiency, quality of service, end-to-end latency, and content sharing cost, several aspects of it are yet to be addressed to make it a reality. One of the fundamental challenges in a cache-enabled network is predicting what content to cache and where to cache it so that high cached-content availability is achieved. For simplicity, most legacy systems rely on a static estimate based on the Zipf distribution, which, in reality, may not be adequate to capture the dynamic behavior of content popularity. Forecasting users' preferences allows caching resources to be allocated proactively and the needed contents to be cached in advance, which is especially important in a dynamic environment with real-time service needs. Motivated by this, we propose a long short-term memory (LSTM) based sequential model that captures the temporal dynamics of the users' preferences for the contents available in the content library. Furthermore, for a more efficient edge caching solution, nodes in proximity can collaborate to help each other. Based on the forecast, a non-convex optimization problem is formulated to minimize the content sharing cost among these nodes, and a greedy algorithm is used to obtain a sub-optimal solution. Through mathematical analysis and simulation results, we show that the proposed algorithm outperforms existing schemes.
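The abstract does not give the model architecture or the exact cost model, so the two snippets below are only minimal sketches of the general idea, not the authors' implementation. The first assumes an LSTM (here via PyTorch) that maps a window of per-user request histograms over the content library to a predicted preference distribution for the next time slot; the class name `PreferenceLSTM`, the hidden size, and the input encoding are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's): an LSTM that reads a
# window of past per-user content-request histograms and predicts a preference
# (popularity) distribution over the content library for the next time slot.
import torch
import torch.nn as nn

class PreferenceLSTM(nn.Module):
    def __init__(self, library_size, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=library_size,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, library_size)

    def forward(self, x):
        # x: (batch, time_steps, library_size) request histograms
        out, _ = self.lstm(x)
        # score every content item from the last hidden state
        logits = self.head(out[:, -1, :])
        # softmax -> predicted preference distribution per user
        return torch.softmax(logits, dim=-1)

# example: 4 users, 10 past time slots, library of 50 contents
model = PreferenceLSTM(library_size=50)
history = torch.rand(4, 10, 50)
pred = model(history)   # (4, 50) predicted preferences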