Paper Title

MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

Paper Authors

Gabriele Tolomei, Lorenzo Takanen, Fabio Pinelli

Abstract


In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed like existing policies. We formulate the page request prediction problem as a categorical time series forecasting task. Then, our method queries the learned page request forecaster to obtain the next $k$ predicted page memory references to better approximate the optimal Bélády's replacement algorithm. We implement several forecasting techniques using advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments run on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%.
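The core idea above — evicting the cached page whose next predicted use lies farthest in the future, using the forecaster's next k predicted references in place of Bélády's oracle — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; the function names (`evict_belady_like`, `simulate`, `forecast`) and the toy cache model are assumptions made for exposition.

```python
# Hypothetical sketch of MUSTACHE's eviction idea: approximate Belady's
# optimal policy using the next k *predicted* page references instead of
# the true future. Names and structure are illustrative only.

def evict_belady_like(cache, predicted_refs):
    """Return the cached page whose next predicted use is farthest away.

    cache          -- set of page ids currently cached
    predicted_refs -- list of the next k predicted page references,
                      e.g. produced by a learned forecaster
    """
    def next_use(page):
        # Position of the page's next predicted reference; pages never
        # predicted again within the horizon are evicted first.
        try:
            return predicted_refs.index(page)
        except ValueError:
            return float("inf")

    return max(cache, key=next_use)

def simulate(requests, cache_size, forecast):
    """Toy cache simulation: on a miss with a full cache, evict the page
    predicted to be reused farthest in the future; return the hit ratio."""
    cache, hits = set(), 0
    for i, page in enumerate(requests):
        if page in cache:
            hits += 1
            continue
        if len(cache) >= cache_size:
            cache.remove(evict_belady_like(cache, forecast(i)))
        cache.add(page)
    return hits / len(requests)
```

With a perfect forecaster that returns the true next k requests, this policy reduces to a horizon-truncated version of Bélády's algorithm; MUSTACHE replaces that oracle with a learned categorical time-series model.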
