Paper Title
Distributed Resource Allocation for URLLC in IIoT Scenarios: A Multi-Armed Bandit Approach
Paper Authors
Abstract
This paper addresses the problem of enabling inter-machine Ultra-Reliable Low-Latency Communication (URLLC) in future 6G Industrial Internet of Things (IIoT) networks. As far as the Radio Access Network (RAN) is concerned, centralized pre-configured resource allocation requires scheduling grants to be disseminated to the User Equipments (UEs) before uplink transmission, which is inefficient for URLLC, especially in the case of flexible or unpredictable traffic. To alleviate this burden, we study a distributed, user-centric scheme based on machine learning in which UEs autonomously select their uplink radio resources without waiting for scheduling grants or relying on pre-configured connections. Using simulation, we demonstrate that a Multi-Armed Bandit (MAB) approach is a desirable solution for allocating resources with URLLC constraints in an IIoT environment, for both periodic and aperiodic traffic, even in highly populated networks with aggressive traffic loads.
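The core idea described above, each UE learning which uplink resource to pick from the outcomes of its own transmissions, can be illustrated with a minimal epsilon-greedy bandit sketch. This is not the paper's algorithm; the class name, the epsilon-greedy policy, and the ACK-based binary reward are illustrative assumptions.

```python
import random

class EpsilonGreedyUE:
    """Hypothetical UE agent: treats each uplink channel as a bandit arm
    and learns which one to transmit on from ACK/NACK feedback."""

    def __init__(self, n_channels, epsilon=0.1):
        self.epsilon = epsilon                # exploration probability (assumed)
        self.counts = [0] * n_channels        # transmissions attempted per channel
        self.values = [0.0] * n_channels      # running mean reward per channel

    def select_channel(self):
        # With probability epsilon, explore a random channel;
        # otherwise exploit the channel with the best estimated reward.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def update(self, channel, reward):
        # Incremental mean update of the observed reward (1 = ACK, 0 = collision).
        self.counts[channel] += 1
        self.values[channel] += (reward - self.values[channel]) / self.counts[channel]
```

In a simulation loop, each UE would call `select_channel()` before every uplink transmission and `update()` once the outcome is known, so no scheduling grant from the gNB is needed.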