Paper Title

Reinforcement Learning Based Resource Allocation for Network Slices in O-RAN Midhaul

Authors

Nien Fang Cheng, Turgay Pamuklu, Melike Erol-Kantarci

Abstract

Network slicing envisions 5th generation (5G) mobile network resource allocation based on the differing requirements of different services, such as Ultra-Reliable Low Latency Communication (URLLC) and Enhanced Mobile Broadband (eMBB). The Open Radio Access Network (O-RAN) proposes an open and disaggregated RAN by modularizing its functionalities into independent components. Network slicing in O-RAN can significantly improve performance. Therefore, this study proposes an advanced resource allocation solution for network slicing in O-RAN by applying Reinforcement Learning (RL). The research demonstrates an RL-compatible, simplified edge network simulator with three components: user equipment (UE), an Edge O-Cloud, and a Regional O-Cloud. The simulator is then used to discover how to improve throughput for targeted network slices by dynamically allocating unused bandwidth from other slices. Increasing the throughput of certain network slices also benefits end users with a higher average data rate, a higher peak rate, or shorter transmission times. The results show that, compared to balanced and eMBB-focused baselines, the RL model can provide eMBB traffic with a higher peak rate and URLLC traffic with shorter transmission times.
