In this paper, we design a resource management scheme to support stateful applications, which will be prevalent in 6G networks. Unlike stateless applications, stateful applications require context data while executing computing tasks from user terminals (UTs). Using a multi-tier computing paradigm with servers deployed at the core network, gateways, and base stations to support stateful applications, we aim to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost of reconfiguring resource reservation. The coupling among different resources and the impact of UT mobility create challenges in resource management. To address these challenges, we develop digital twin (DT) empowered network planning with two elements, i.e., multi-resource reservation and resource reservation reconfiguration. First, DTs are designed for collecting UT status data, based on which UTs are grouped according to their mobility patterns. Second, an algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Finally, a meta-learning-based approach is developed to reconfigure resource reservation, balancing network resource usage against reconfiguration cost. Simulation results demonstrate that the proposed DT-empowered network planning outperforms benchmark frameworks by using fewer resources and incurring lower reconfiguration costs.
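The abstract does not give implementation details; as a minimal illustration of the first element (grouping UTs by mobility patterns from DT-collected status data), one could cluster per-UT mobility features. The feature choice and the use of k-means below are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical DT-collected status data: one row per UT with simple
# mobility features (mean speed, cell-transition rate). These features
# and the use of k-means are assumptions for illustration only.
rng = np.random.default_rng(0)
ut_features = np.vstack([
    rng.normal([1.0, 0.1], 0.2, size=(50, 2)),   # mostly static UTs
    rng.normal([15.0, 2.5], 1.0, size=(50, 2)),  # highly mobile UTs
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ut_features)

# Resource reservation can then be customized per group, e.g., reserving
# more communication resources for the high-mobility group.
for g in np.unique(groups):
    print(f"group {g}: {np.sum(groups == g)} UTs")
```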
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
Recent technological advancements in space, air, and ground components have made possible a new network paradigm called the "space-air-ground integrated network" (SAGIN). Unmanned aerial vehicles (UAVs) play a key role in SAGINs; however, their high dynamics and complexity make real-world deployment a major barrier to realizing such networks. Compared to the space and terrestrial components, UAVs are expected to meet performance requirements with high flexibility and dynamics using limited resources. Therefore, employing UAVs in various usage scenarios requires well-designed algorithmic planning. In this paper, we provide a comprehensive review of recent learning-based algorithmic approaches. We consider possible reward functions and discuss the state-of-the-art algorithms for optimizing them, including Q-learning, deep Q-learning, multi-armed bandit (MAB), particle swarm optimization (PSO), and satisfaction-based learning algorithms. Unlike other survey papers, we focus on the methodological perspective of the optimization problem, which is applicable to various UAV-assisted missions on a SAGIN using these algorithms. We simulate users and environments according to real-world scenarios and compare the learning-based and PSO-based methods in terms of throughput, load, fairness, computation time, etc. We also implement and evaluate the 2-dimensional (2D) and 3-dimensional (3D) variations of these algorithms to reflect different deployment cases. Our simulations suggest that the 3D satisfaction-based learning algorithm outperforms the other approaches for most metrics in most cases. We discuss some open challenges at the end; our findings aim to provide design guidelines for algorithm selection when optimizing the deployment of UAV-assisted SAGINs.
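As a concrete instance of one of the surveyed algorithm families, the sketch below applies the standard UCB1 multi-armed bandit rule to choosing among candidate UAV placements; the Bernoulli reward model is a stand-in assumption, not the paper's simulation setup.

```python
import math
import random

# UCB1 bandit over candidate UAV placements. The Bernoulli reward per
# position is a toy stand-in for user satisfaction/throughput feedback.
true_quality = [0.3, 0.5, 0.8, 0.4]   # hidden per-position service quality
counts = [0] * len(true_quality)
values = [0.0] * len(true_quality)

for t in range(1, 2001):
    # Play each arm once, then pick the arm maximizing the UCB index.
    if 0 in counts:
        arm = counts.index(0)
    else:
        arm = max(range(len(counts)),
                  key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < true_quality[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print("estimated best placement:", values.index(max(values)))
```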
With the global rollout of fifth generation (5G) networks, it is necessary to look beyond 5G and envision 6G networks. 6G networks are expected to feature space-air-ground integrated networks, advanced network virtualization, and ubiquitous intelligence. This paper presents an artificial intelligence (AI)-native network slicing architecture for 6G networks to enable the synergy of AI and network slicing, thereby facilitating intelligent network management and supporting emerging AI services. AI-based solutions are first discussed across the network slicing lifecycle to intelligently manage network slices, i.e., AI for slicing. Then, network slicing solutions are investigated to support emerging AI services by constructing AI instances and performing efficient resource management, i.e., slicing for AI. Finally, a case study is presented, followed by a discussion of open research issues that are essential for AI-native network slicing in 6G.
In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can potentially be overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends of both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we identify open challenges and present future research directions for beyond-5G networks, such as 6G.
The future Internet involves several emerging technologies, such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with many interrelated network entities. Each entity may need to make local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy by interacting with an unknown environment. However, such algorithms fail to model the cooperation or competition among network entities, simply treating other entities as part of the environment, which may lead to non-stationarity issues. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities and has recently been used to solve various problems in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL, together with a comprehensive survey of MARL applications in the next-generation Internet. Specifically, we first introduce single-agent RL and MARL. Then, we review a number of applications of MARL to solve emerging problems in the future Internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV-aided networks, and network security issues.
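To make the single-agent/multi-agent distinction concrete, the snippet below shows a standard tabular Q-learning update and the "independent learners" shortcut the paper critiques, in which each entity treats the others as part of the environment; the environment itself is omitted, and all dimensions are placeholders.

```python
import numpy as np

# Tabular Q-learning update for one network entity (agent).
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# "Independent learners": each agent keeps its own table and applies the
# same rule, implicitly treating the other agents as part of the environment.
# This is exactly the setup that becomes non-stationary: as other agents'
# policies change, the transition/reward statistics seen by each agent
# drift, which is what MARL methods (e.g., joint observations or a
# centralized critic) are designed to handle.
n_states, n_actions, n_agents = 10, 4, 3
Q_tables = [np.zeros((n_states, n_actions)) for _ in range(n_agents)]
```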
The deployment flexibility and maneuverability of unmanned aerial vehicles (UAVs) have increased their adoption in various applications, such as wildfire tracking and border monitoring. In many critical applications, UAVs capture images and other sensory data and then send the captured data to remote servers for inference and data processing. However, this approach is not always practical in real-time applications due to connection instability, limited bandwidth, and end-to-end latency. One promising solution is to divide each inference request into multiple parts (layers or segments), with each part executed on a different UAV based on the available resources. Furthermore, some applications require the UAVs to traverse certain areas and capture incidents; planning their paths thus becomes critical, particularly for reducing the latency of the collaborative inference process. Specifically, planning the UAVs' trajectories can reduce data transmission latency by communicating with devices in close proximity while mitigating transmission interference. This work designs a model for distributed collaborative inference requests and path planning in a UAV swarm while respecting the resource constraints imposed by the computational load and memory usage of the inference requests. The model is formulated as an optimization problem that minimizes latency. Since the formulated problem is NP-hard, finding the optimal solution is quite complex; this paper therefore introduces a real-time, dynamic solution for online applications using deep reinforcement learning. We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
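The abstract's central quantity is the end-to-end latency of an inference request split across UAVs; a minimal latency model, with invented compute and transfer numbers, might look like the following (the paper's real formulation also covers memory constraints and interference).

```python
# Toy latency model for a DNN split across a chain of UAVs:
# each segment incurs compute time on its host UAV plus the time to
# transmit its intermediate activations to the next hop. All numbers
# below are illustrative assumptions, not values from the paper.
segments = [
    {"flops": 2e9, "out_bytes": 1e6},
    {"flops": 4e9, "out_bytes": 5e5},
    {"flops": 1e9, "out_bytes": 1e4},  # final output, sent to the sink
]
uav_flops_per_s = [5e9, 8e9, 5e9]     # compute capacity of each hosting UAV
link_bytes_per_s = [2e6, 2e6, 2e6]    # rate of the link leaving each UAV

total_latency = sum(
    seg["flops"] / uav_flops_per_s[i] + seg["out_bytes"] / link_bytes_per_s[i]
    for i, seg in enumerate(segments)
)
print(f"end-to-end latency: {total_latency:.2f} s")
```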
Collaboration among industrial Internet of Things (IoT) devices and edge networks is essential to support computation-intensive deep neural network (DNN) inference services that require low delay and high accuracy. Sampling rate adaption, which dynamically configures the sampling rates of industrial IoT devices according to network conditions, is the key to minimizing the service delay. In this paper, we investigate the collaborative DNN inference problem in industrial IoT networks. To capture the channel variation and task arrival randomness, we formulate the problem as a constrained Markov decision process (CMDP). Specifically, sampling rate adaption, inference task offloading, and edge computing resource allocation are jointly considered to minimize the average service delay while guaranteeing the long-term accuracy requirements of different inference services. Since the CMDP cannot be directly solved by general reinforcement learning (RL) algorithms due to the intractable long-term constraints, we first transform the CMDP into an MDP by leveraging the Lyapunov optimization technique. Then, a deep RL-based algorithm is proposed to solve the MDP. To expedite the training process, an optimization subroutine is embedded in the proposed algorithm to directly obtain the optimal edge computing resource allocation. Extensive simulation results demonstrate that the proposed RL-based algorithm can significantly reduce the average service delay while preserving long-term inference accuracy with high probability.
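The Lyapunov step mentioned in the abstract typically replaces each long-term accuracy constraint with a virtual queue whose stability enforces the constraint, and folds the queue backlog into the per-slot reward (drift-plus-penalty). A hedged sketch of that transformation, with notation assumed rather than taken from the paper:

```python
# Lyapunov-style handling of a long-term accuracy constraint.
# Virtual queue: Z(t+1) = max(Z(t) + a_req - a(t), 0), where a(t) is the
# achieved accuracy at slot t and a_req the long-term requirement.
# The per-slot RL reward then trades off delay against the queue backlog
# (drift-plus-penalty with weight V). Names and values are assumptions.
V = 10.0          # delay-vs-constraint trade-off weight
a_req = 0.9       # long-term accuracy requirement
Z = 0.0           # virtual queue backlog

def step_reward(delay, accuracy):
    global Z
    reward = -(V * delay + Z * (a_req - accuracy))
    Z = max(Z + a_req - accuracy, 0.0)   # queue grows when accuracy falls short
    return reward

print(step_reward(delay=0.05, accuracy=0.92))
```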
Multi-access edge computing (MEC) is an emerging computing paradigm that extends cloud computing to the network edge to support resource-intensive applications on mobile devices. As a crucial problem in MEC, service migration needs to decide how to migrate user services to maintain quality of service as users roam among MEC servers with limited coverage and capacity. However, finding the optimal migration policy is intractable due to the dynamic MEC environment and user mobility. Many existing studies make centralized migration decisions based on complete system-level information, which is time-consuming and lacks desirable scalability. To address these challenges, we propose a novel learning-driven method, which is user-centric and can make effective online migration decisions using incomplete system-level information. Specifically, the service migration problem is modeled as a partially observable Markov decision process (POMDP). To solve the POMDP, we design a new encoder network that combines a long short-term memory (LSTM) network and an embedding matrix to effectively extract hidden information, and further propose a tailored off-policy actor-critic algorithm for efficient training. Extensive experimental results based on real-world mobility traces demonstrate that this new method consistently outperforms both heuristic and state-of-the-art learning-driven algorithms, and can achieve near-optimal results in various MEC scenarios.
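A minimal PyTorch rendering of the described encoder — an embedding matrix for discrete observations (here, MEC-server indices) feeding an LSTM whose final hidden state summarizes the agent's belief — could look like this; all dimensions are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MigrationEncoder(nn.Module):
    """Embedding matrix + LSTM encoder over a history of partial
    observations (here: discrete MEC-server indices). Dimensions are
    placeholder assumptions, not the paper's configuration."""
    def __init__(self, n_servers=16, emb_dim=8, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_servers, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, obs_seq):                 # obs_seq: (batch, T) int64
        x = self.embed(obs_seq)                 # (batch, T, emb_dim)
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]                          # (batch, hidden_dim) belief state

enc = MigrationEncoder()
belief = enc(torch.randint(0, 16, (4, 10)))     # 4 histories of length 10
print(belief.shape)                             # torch.Size([4, 32])
```

The belief vector would then feed the actor and critic heads of the off-policy actor-critic algorithm the abstract describes.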
Unmanned aerial vehicle (UAV) swarms are considered a promising technique for next-generation communication networks due to their flexibility, mobility, low cost, and ability to collaboratively and autonomously provide services. Distributed learning (DL) enables UAV swarms to intelligently provide communication services, multi-directional remote surveillance, and target tracking. In this survey, we first introduce several popular DL algorithms, such as federated learning (FL), multi-agent reinforcement learning (MARL), distributed inference, and split learning, and present a comprehensive overview of their applications for UAV swarms, such as trajectory design, power control, wireless resource allocation, user assignment, perception, and satellite communications. Then, we present several state-of-the-art applications of UAV swarms in wireless communication systems, such as reconfigurable intelligent surfaces (RIS), virtual reality (VR), and semantic communications, and discuss the problems and challenges that DL-enabled UAV swarms can solve in these applications. Finally, we describe open problems of using DL in UAV swarms and future research directions for DL-enabled UAV swarms. In summary, this paper provides a comprehensive survey of various DL applications for UAV swarms across extensive scenarios.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Compared to LTE networks, the vision of 5G lies in providing higher data rates, low latency (to enable near-real-time applications), greatly increased base station capacity, and near-perfect quality of service (QoS) for users. To provide such services, 5G systems will support various combinations of access technologies such as LTE, NR, NR-U, and Wi-Fi. Each radio access technology (RAT) provides a different type of access, and these should be optimally allocated and managed among the users. Besides resource management, 5G systems will also support dual connectivity services. Hence, the orchestration of the network is a more difficult problem for system managers than with legacy access technologies. In this paper, we propose a federated meta-learning (FML)-based RAT allocation algorithm that enables the RAN Intelligent Controller (RIC) to adapt more quickly to dynamically changing environments. We design a simulation environment containing LTE and 5G NR service technologies. In the simulation, our objective is to meet UE demands within the transmission deadline to provide higher QoS values. We compare the proposed algorithm with a single RL agent, the Reptile algorithm, and a rule-based heuristic. Simulation results show that the proposed FML method achieves higher caching rates at the first deployment round, by 21% and 12%, respectively. Moreover, the proposed approach adapts to new tasks and environments fastest among the compared methods.
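The Reptile-style meta-update underlying such federated meta-learning is compact enough to sketch: each client adapts the shared initialization on its local task, and the server moves the initialization toward the average of the adapted weights. Everything below (toy tasks, dimensions, learning rates) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def local_adapt(theta, grad_fn, steps=5, lr=0.01):
    """A few SGD steps on one client's local task."""
    w = theta.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

# Federated Reptile-style meta-step: average the clients' adapted weights
# and move the shared initialization toward that average.
def fml_round(theta, client_grad_fns, meta_lr=0.5):
    adapted = [local_adapt(theta, g) for g in client_grad_fns]
    return theta + meta_lr * (np.mean(adapted, axis=0) - theta)

# Toy quadratic tasks with different optima standing in for per-cell
# RAT-allocation objectives.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
grads = [lambda w, t=t: 2 * (w - t) for t in targets]
theta = np.zeros(2)
for _ in range(50):
    theta = fml_round(theta, grads)
print(theta)  # ends up near the centroid of the task optima
```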
The performance of vehicle-to-vehicle (V2V) communication depends largely on the scheduling method used. While centralized network schedulers offer high V2V communication reliability, their operation is typically limited to areas with full cellular network coverage. In contrast, in out-of-coverage areas, relatively inefficient distributed radio resource management is used. To leverage the benefits of the centralized approach for enhancing the reliability of V2V communication on roads lacking cellular coverage, we propose VRLS (Vehicular Reinforcement Learning Scheduler), a centralized scheduler that proactively assigns resources for out-of-coverage V2V communications before vehicles leave the cellular network coverage. By training in a simulated vehicular environment, VRLS can learn a scheduling policy that adapts to environmental changes, eliminating the need for targeted (re)training in complex real-life environments. We evaluate the performance of VRLS under different mobility, network load, wireless channel, and resource configurations. VRLS outperforms the state-of-the-art distributed scheduling algorithms in areas without cellular network coverage, halving the packet error rate under high-load conditions and achieving near-maximum reliability in low-load scenarios.
The unprecedented surge of data volumes in wireless networks empowered with artificial intelligence (AI) opens up new horizons for providing ubiquitous data-driven intelligent services. Traditional cloud-centric machine learning (ML)-based services are implemented by centrally collecting datasets and training models. However, this conventional training technique encompasses two challenges: (i) high communication and energy costs due to increased data communication, and (ii) threats to data privacy by allowing untrusted parties to exploit this information. Recently, in light of these limitations, a new emerging technique, namely federated learning (FL), has arisen to bring ML to the edge of wireless networks. By training a global model in a distributed manner, orchestrated by an FL server, the benefits locked in data silos can be extracted. FL exploits decentralized datasets and the computing resources of participating clients to develop a generalized ML model without compromising data privacy. In this paper, we present a comprehensive survey of the fundamentals and enabling technologies of FL. Moreover, an extensive study is presented detailing various applications of FL in wireless networks and highlighting their challenges and limitations. The efficacy of FL is further explored in emerging prospective beyond fifth generation (B5G) and sixth generation (6G) communication systems. The purpose of this survey is to provide an overview of FL in key wireless technologies that will serve as a foundation for establishing a firm understanding of the topic. Finally, we offer a road forward for future research directions.
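For readers new to the topic, the FL training loop the survey builds on reduces to federated averaging: clients train locally and the server aggregates their weights, so raw data never leaves the device. A minimal NumPy sketch with uniform client weighting assumed:

```python
import numpy as np

def client_update(w, X, y, lr=0.1, epochs=5):
    """Local linear-regression SGD on one client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, client_data):
    locals_ = [client_update(w_global, X, y) for X, y in client_data]
    return np.mean(locals_, axis=0)   # server aggregates; raw data stays local

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(30, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=30)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)   # approaches w_true without pooling any client's data
```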
This paper proposes an effective and novel multi-agent deep reinforcement learning (MADRL)-based method for solving the joint virtual network function (VNF) placement and routing (P&R) problem, where multiple service requests with differentiated requirements are served simultaneously. The differentiated requirements of the service requests reflect their delay- and cost-sensitivity factors. We first construct the VNF P&R problem to jointly minimize a weighted sum of service delay and resource consumption cost, which is NP-complete. Then, the joint VNF P&R problem is decoupled into two iterative subtasks: a placement subtask and a routing subtask, each consisting of multiple concurrent parallel sequential decision processes. By invoking the deep deterministic policy gradient method and multi-agent techniques, an MADRL-P&R framework is designed to perform the two subtasks. A new joint reward and internal reward mechanism is proposed to match the goals and constraints of the placement and routing subtasks. We also propose a parameter-migration-based model retraining method to deal with changing network topologies. Experiments confirm that the proposed MADRL-P&R framework outperforms its alternatives in terms of service cost and delay, and offers higher flexibility for personalized service demands. The parameter-migration-based model retraining method can effectively accelerate convergence under moderate network topology changes.
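The weighted-sum objective driving the MADRL agents can be made concrete with a small reward function; the weighting parameter and the normalizers below are assumptions for illustration, not the paper's reward design.

```python
# Per-request reward matching a weighted sum of delay and resource cost.
# alpha encodes the request's delay- vs cost-sensitivity; the normalizers
# are illustrative assumptions.
def pr_reward(delay_s, resource_cost, alpha, delay_norm=1.0, cost_norm=100.0):
    return -(alpha * delay_s / delay_norm
             + (1.0 - alpha) * resource_cost / cost_norm)

print(pr_reward(delay_s=0.2, resource_cost=35.0, alpha=0.7))  # delay-sensitive
print(pr_reward(delay_s=0.2, resource_cost=35.0, alpha=0.2))  # cost-sensitive
```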
Although deep neural networks (DNNs) have become the backbone technology of several ubiquitous applications, their deployment on resource-constrained machines, such as Internet of Things (IoT) devices, remains challenging. To meet the resource requirements of such a paradigm, collaborative deep inference with IoT synergy has been introduced. However, the distribution of DNN networks suffers from severe data leakage. Various threats have been identified, including black-box attacks, where a malicious participant can recover arbitrary inputs fed into their device. Although many countermeasures aim to achieve privacy-preserving DNNs, most of them result in additional computation and lower accuracy. In this paper, we present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy, without sacrificing model performance. In particular, we examine the different DNN partitions that make the model susceptible to black-box threats, and derive the amount of data that should be allocated to each device to conceal the original input. We formulate this methodology as an optimization, establishing a trade-off between the latency of co-inference and the privacy level of the data. Next, to scale up the optimal solution, we shape our approach as a reinforcement learning (RL) design that supports heterogeneous devices as well as multiple DNNs/datasets.
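The core optimization — picking a partition that balances co-inference latency against input leakage — can be illustrated by brute-force enumeration over candidate split points. The leakage proxy (shallower splits expose more recoverable input) and all numbers below are assumptions for illustration.

```python
# Enumerate DNN split points and score each by latency plus a privacy
# penalty. leak[i] is a toy proxy for how much of the input a black-box
# adversary could recover when the first i layers stay on the IoT device
# (earlier splits leak more). All values are illustrative assumptions.
latency = [0.9, 0.7, 0.6, 0.8, 1.1]   # co-inference latency per split point
leak    = [0.9, 0.6, 0.3, 0.1, 0.05]  # input-recovery risk per split point
lam = 2.0                             # privacy weight in the trade-off

best = min(range(len(latency)), key=lambda i: latency[i] + lam * leak[i])
print(f"chosen split point: {best}, "
      f"objective: {latency[best] + lam * leak[best]:.2f}")
```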
Network slicing allows mobile network operators to virtualize infrastructures and provide customized slices to support various use cases with heterogeneous requirements. Online deep reinforcement learning (DRL) has shown promising potential in solving network problems and eliminating the simulation-to-reality discrepancy. Optimizing cross-domain resources with online DRL is, however, challenging, as the random exploration of DRL violates the service level agreements (SLAs) of slices and the resource constraints of infrastructures. In this paper, we propose OnSlicing, an online end-to-end network slicing system, to achieve minimal resource usage while satisfying slices' SLAs. OnSlicing allows individualized learning for each slice and maintains its SLA by using a novel constraint-aware policy update method and a proactive baseline switching mechanism. OnSlicing complies with the resource constraints of infrastructures through action modification in slices and parameter coordination in infrastructures. OnSlicing further mitigates the poor performance of online learning during the early learning stage by imitating a rule-based solution. In addition, we design four new domain managers to enable dynamic resource configuration in radio access, transport, core, and edge networks, respectively, on a subsecond timescale. We implement OnSlicing on an end-to-end slicing testbed based on OpenAirInterface with both 4G LTE and 5G NR, the OpenDaylight SDN platform, and the OpenAir-CN core network. Experimental results show that OnSlicing achieves a 61.3% usage reduction as compared to the rule-based solution while maintaining nearly zero violations (0.06%) throughout the online learning phase. As the online learning converges, OnSlicing reduces usage by 12.5% without any violations as compared to the state-of-the-art online DRL solution.
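The "proactive baseline switching" idea can be sketched in a few lines: whenever the learned policy's proposed action is predicted to risk an SLA violation, the system falls back to the safe rule-based action. The risk predictor and threshold here are placeholder assumptions, not OnSlicing's actual mechanism.

```python
# Proactive baseline switching: prefer the DRL action unless it is
# predicted to risk an SLA violation, in which case fall back to the
# rule-based baseline. The risk model is a placeholder assumption.
def choose_action(state, drl_policy, rule_policy, violation_risk, eps=0.05):
    a = drl_policy(state)
    if violation_risk(state, a) > eps:
        a = rule_policy(state)   # safe fallback keeps SLA violations near zero
    return a
```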
As data generation increasingly takes place on devices without a wired connection, machine learning (ML)-related traffic will be ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
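Analog over-the-air computation — the first theme — exploits the superposition property of the wireless multiple-access channel so the channel itself sums the clients' model updates. A sketch under the idealized assumption of perfect channel inversion at the transmitters:

```python
import numpy as np

# Analog over-the-air aggregation: K devices transmit their (pre-equalized)
# gradient vectors simultaneously; the multiple-access channel physically
# sums them, and the receiver divides by K. Perfect channel inversion is
# assumed here for simplicity.
rng = np.random.default_rng(2)
K, d = 8, 16
gradients = rng.normal(size=(K, d))           # local gradients at K devices

noise = 0.01 * rng.normal(size=d)             # receiver noise
received = gradients.sum(axis=0) + noise      # superposition over the air
estimate = received / K                       # estimated average gradient

print(np.allclose(estimate, gradients.mean(axis=0), atol=0.01))
```

The appeal is that the uplink cost is independent of the number of devices, which is precisely why these methods scale better than per-device digital transmission for ML traffic.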
Network slicing (NS) is crucial for efficiently enabling divergent network applications in next-generation networks. Nevertheless, the complex quality of service (QoS) requirements and the diversity and heterogeneity of network services demand high computation time for network slicing provisioning (NSP) optimization. Traditional optimization methods struggle to meet the low latency and high reliability requirements of network applications. To this end, we model the real-time NSP as an online network slicing provisioning (ONSP) problem. Specifically, we formulate the ONSP problem as an online multi-objective integer programming optimization (MOIPO) problem. Then, we approximate the solution of the MOIPO problem by applying the proximal policy optimization (PPO) method to traffic demand prediction. Our simulation results show the effectiveness of the proposed method compared to state-of-the-art MOIPO solvers, with a lower SLA violation rate and network operation cost.
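The PPO method named in the abstract optimizes the clipped surrogate objective; a self-contained NumPy rendering of that loss (the sample batch is invented) is:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO clipped surrogate loss (to be minimized)."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# Invented sample batch: log-probabilities under the new/old policies and
# advantages (e.g., slicing decisions evaluated against predicted demand).
logp_new = np.array([-0.9, -1.2, -0.4])
logp_old = np.array([-1.0, -1.0, -1.0])
adv = np.array([0.5, -0.3, 1.2])
print(ppo_clip_loss(logp_new, logp_old, adv))
```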
Event processing is a cornerstone of the dynamic and responsive Internet of Things (IoT). Recent approaches in this area are based on representational state transfer (REST) principles, which allow event processing tasks to be placed on any device that follows the same principles. However, the tasks should be properly distributed among edge devices to ensure fair resource utilization and guarantee seamless execution. This paper investigates the use of deep learning to fairly distribute the tasks. An attention-based neural network model is proposed to generate efficient load balancing solutions under different scenarios. The proposed model is based on the Transformer and Pointer Network architectures and is trained by an advantage actor-critic reinforcement learning algorithm. The model is designed to scale with the number of event processing tasks and the number of edge devices, with no need for hyperparameter re-tuning or even retraining. Extensive experimental results show that the proposed model outperforms conventional heuristics in many key performance indicators. The generic design and the obtained results show that the proposed model can potentially be applied to several other load balancing problem variations, which makes the proposal an attractive option for use in real-world scenarios thanks to its scalability and efficiency.
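The pointer mechanism at the heart of such a model scores each edge device against the current task query and "points" to one via a softmax, which is also why it scales to any number of devices without retraining. A minimal PyTorch sketch with placeholder dimensions (not the paper's architecture):

```python
import torch
import torch.nn as nn

class DevicePointer(nn.Module):
    """Additive pointer attention: score each edge-device embedding against
    a task query and output a distribution over devices. Dimensions are
    placeholder assumptions."""
    def __init__(self, dim=32):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)
        self.W_k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, task_query, device_embs):   # (B, dim), (B, N, dim)
        scores = self.v(torch.tanh(
            self.W_q(task_query).unsqueeze(1) + self.W_k(device_embs)
        )).squeeze(-1)                             # (B, N)
        return torch.softmax(scores, dim=-1)       # assignment probabilities

ptr = DevicePointer()
probs = ptr(torch.randn(2, 32), torch.randn(2, 5, 32))
print(probs.shape, probs.sum(dim=-1))              # torch.Size([2, 5]), ones
```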