With the rapid development of cloud computing, virtual machine scheduling has become one of the most important but challenging problems for the cloud computing community, especially for practical heterogeneous request sequences. By analyzing the impact of request heterogeneity on some popular heuristic schedulers, we find that existing scheduling algorithms cannot handle request heterogeneity properly or efficiently. In this paper, a plug-and-play virtual machine scheduling intensifier, called Resource Assigner (ReAssigner), is proposed to enhance the scheduling efficiency of any given scheduler for heterogeneous requests. The key idea of ReAssigner is to pre-assign roles to physical resources and let resources of the same role form a virtual cluster to handle homogeneous requests. ReAssigner can cooperate with arbitrary schedulers by restricting their scheduling space to virtual clusters. In evaluations on a real dataset from Huawei Cloud, the proposed ReAssigner achieves significant scheduling performance improvement over several state-of-the-art scheduling methods.
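To make the virtual-cluster idea above concrete, the following Python sketch (hypothetical names and a toy first-fit base scheduler; not the authors' implementation) pre-assigns a role to each physical machine and then restricts an arbitrary base scheduler to the machines whose role matches the incoming request.

```python
from collections import defaultdict

def pre_assign_roles(machines, expected_share):
    """Split machines into virtual clusters, one per request type,
    proportionally to the expected share of each type."""
    clusters, start = defaultdict(list), 0
    total = len(machines)
    for req_type, share in expected_share.items():
        count = max(1, int(round(share * total)))
        clusters[req_type] = machines[start:start + count]
        start += count
    return clusters

def schedule(request, clusters, base_scheduler):
    # Restrict the base scheduler's search space to the matching virtual cluster.
    candidates = clusters.get(request["type"], [])
    return base_scheduler(request, candidates)

def first_fit(request, machines):
    for m in machines:
        if m["free_cores"] >= request["cores"]:
            m["free_cores"] -= request["cores"]
            return m["id"]
    return None  # rejected

machines = [{"id": i, "free_cores": 16} for i in range(6)]
clusters = pre_assign_roles(machines, {"small": 0.5, "large": 0.5})
print(schedule({"type": "small", "cores": 2}, clusters, first_fit))
```

Any other base scheduler could be dropped in place of first_fit, which is the plug-and-play aspect the abstract describes.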
A novel simulator called VMagent is introduced to help RL researchers better explore new methods, especially for virtual machine scheduling. VMagent is inspired by practical virtual machine (VM) scheduling tasks and provides an efficient simulation platform that reflects the real situations of cloud computing. Three scenarios (fading, recovering, and expansion) are drawn from practical cloud computing, corresponding to several reinforcement learning challenges (high-dimensional state and action spaces, high non-stationarity, and life-long demand). VMagent provides flexible configurations for RL researchers to design customized scheduling environments that take different problem features into account. From the VM scheduling perspective, VMagent also helps to explore better learning-based scheduling solutions.
We develop a reinforcement learning (RL) framework for applications that involve sequential decision-making under exogenous uncertainty, such as resource allocation and inventory management. In these applications, the uncertainty is only due to exogenous variables like future demand. A popular approach is to predict the exogenous variables using historical data and then plan against the predictions. However, this indirect approach requires a high-fidelity model of the exogenous process to ensure good downstream decisions, which can be impractical when the exogenous process is complex. In this work, we propose an alternative approach based on hindsight learning that sidesteps modeling the exogenous process. Our key insight is that, unlike in sim2real RL, in these applications we can revisit past decisions in the historical data and derive counterfactual consequences for other actions. Our framework uses hindsight-optimal actions as the policy training signal and has strong theoretical guarantees on decision-making performance. We develop an algorithm using the framework to allocate compute resources for real-world Microsoft Azure workloads. The results show that our approach learns better policies than domain-specific heuristics and sim2real RL baselines.
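A minimal sketch of the hindsight idea, under toy assumptions (a single resource, demand fully revealed in the historical trace, and a linear policy; none of this is the paper's actual algorithm): the realized exogenous demand makes the hindsight-optimal action computable, and the policy is simply regressed onto those actions.

```python
import numpy as np

def hindsight_optimal_action(demand, capacity=10):
    # With the realized demand revealed, the best allocation is simply to match it.
    return min(demand, capacity)

def train_policy(history, lr=0.05, epochs=200):
    # Linear policy: predicted_action = w * observed_feature + b, fit by
    # regression onto the hindsight-optimal actions (the training signal).
    w, b = 0.0, 0.0
    feats = np.array([h["feature"] for h in history], dtype=float)
    targets = np.array([hindsight_optimal_action(h["demand"]) for h in history], dtype=float)
    for _ in range(epochs):
        pred = w * feats + b
        err = pred - targets
        w -= lr * np.mean(err * feats)
        b -= lr * np.mean(err)
    return w, b

history = [{"feature": f, "demand": int(2 * f + np.random.randint(0, 3))} for f in range(1, 8)]
w, b = train_policy(history)
print("policy:", round(w, 2), round(b, 2))
```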
Training deep neural networks (DNNs) is widely popular in both enterprise and cloud data centers. Existing DNN training schedulers treat GPUs as the dominant resource and allocate other resources, such as CPU and memory, proportionally to the number of GPUs requested by a job. Unfortunately, these schedulers do not consider a job's sensitivity to its CPU, memory, and storage allocations. In this work, we propose Synergy, a resource-sensitive scheduler for shared GPU clusters. Through optimistic profiling, Synergy infers the sensitivity of DNNs to different resources; some jobs may benefit from more than their GPU-proportional allocation, while others may be unaffected by less than their GPU-proportional allocation. Synergy performs such multi-resource, workload-aware assignments across a set of jobs scheduled on a shared multi-tenant cluster using a new near-optimal online algorithm. Our experiments show that workload-aware CPU and memory allocation can improve the average JCT by up to 3.4x compared with traditional GPU-proportional scheduling.
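An illustrative contrast (field names and numbers are assumptions, and this is not the paper's near-optimal online algorithm): GPU-proportional allocation gives every job the same CPUs per GPU, whereas a sensitivity-aware allocation skews CPUs toward the jobs whose profiles say they benefit from them.

```python
def gpu_proportional(jobs, cpus_per_gpu=4):
    return {j["name"]: j["gpus"] * cpus_per_gpu for j in jobs}

def sensitivity_aware(jobs, total_cpus):
    # Weight each job by gpus * profiled CPU sensitivity, then scale to capacity.
    weights = {j["name"]: j["gpus"] * j["cpu_sensitivity"] for j in jobs}
    total_w = sum(weights.values()) or 1.0
    return {name: round(total_cpus * w / total_w, 1) for name, w in weights.items()}

jobs = [
    {"name": "vision-job", "gpus": 2, "cpu_sensitivity": 2.0},   # CPU-hungry data loading
    {"name": "language-job", "gpus": 2, "cpu_sensitivity": 0.5}, # barely uses CPU
]
print(gpu_proportional(jobs))          # both get 8 CPUs
print(sensitivity_aware(jobs, 16))     # 12.8 vs 3.2 CPUs
```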
The core of the computing business now offers subscription-based, on-demand services with the help of cloud computing. Virtualization, which creates a virtual instance of a computer system running on an abstracted hardware layer, lets us share resources among multiple users. In contrast to early distributed computing models, cloud computing provides seemingly unlimited computing capacity through massive cloud data centers, and it has become extremely popular in recent years owing to its continually growing infrastructure, user base, and hosted data volume. This article proposes a conceptual framework for a workload management paradigm in cloud settings that is both secure and performance-efficient. In this paradigm, a resource management unit performs energy- and performance-efficient virtual machine allocation, ensures the safe execution of users' applications, and protects in real time against data breaches caused by unauthorized virtual machine access. A secure virtual machine management unit controls the resource management unit and is designed to report unlawful access or inter-VM communication. Additionally, a workload analyzer unit runs concurrently to estimate resource consumption and help the resource management unit allocate virtual machines more effectively. The proposed model combines several mechanisms to serve this objective, including data encryption and decryption prior to transfer and a trust-based access mechanism that prevents unauthorized access to virtual machines, which introduces additional computational overhead.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Latency-critical services have been widely deployed in cloud environments. For cost efficiency, multiple services are usually co-located on a server. Runtime resource scheduling therefore becomes the pivot of QoS control in these complex co-location cases. However, the scheduling exploration space expands rapidly as server resources grow, making it nearly impossible for a scheduler to deliver an ideal solution quickly. More importantly, we observe that there are "resource cliffs" in the scheduling exploration space. They hurt exploration efficiency and consistently lead to severe QoS fluctuations, and they cannot be easily avoided by previous schedulers. To address these problems, we propose OSML, a new ML-based intelligent scheduler. It learns the correlations among architectural hints (e.g., IPC, cache misses, memory footprint, etc.), scheduling solutions, and QoS demands, based on a dataset we collected from 11 widely deployed services running on off-the-shelf servers. OSML employs multiple ML models working collaboratively to predict QoS variations, adjust the scheduling, and recover from QoS violations in complex co-location cases. OSML can intelligently avoid resource cliffs during scheduling and reach an optimal solution for co-located LC services faster than prior approaches. Experimental results show that, compared with previous studies, OSML supports higher loads and meets QoS targets with lower overhead and shorter convergence time.
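As a toy illustration of steering allocations with a learned model (the 1-nearest-neighbour predictor, features, and numbers below are assumptions, not OSML's models), a predictor of latency from architectural hints plus a candidate cache allocation can be used to filter out allocations that would fall off a resource cliff.

```python
import numpy as np

train_x = np.array([  # [ipc, cache_miss_rate, mem_gb, cache_ways]
    [1.2, 0.05, 8, 10], [1.2, 0.05, 8, 6], [1.2, 0.05, 8, 4],
])
train_y = np.array([2.0, 2.4, 9.5])  # ms; latency explodes below 6 ways (the cliff)

def predict_latency(sample):
    dists = np.linalg.norm(train_x - sample, axis=1)
    return train_y[np.argmin(dists)]  # 1-nearest-neighbour lookup

def safe_allocations(hints, candidate_ways, qos_ms):
    return [w for w in candidate_ways
            if predict_latency(np.array(hints + [w])) <= qos_ms]

print(safe_allocations([1.2, 0.05, 8], [4, 6, 8, 10], qos_ms=3.0))  # [6, 8, 10]
```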
As the number of distributed services (or microservices) of cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications with minimal human effort and operational overhead. To this end, we ask: can we systematically break resource management down into subproblems solvable by practical policies? Based on the notion of a CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level and agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show CPU core savings of up to 26.21% over the best-performing baselines across applications, while maintaining the latency SLO.
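A minimal sketch of the local-controller idea, assuming a simple proportional update rule and made-up gains (the abstract does not specify the controllers' internals): each microservice's CPU limit is nudged so that its observed CPU-throttle ratio tracks the target handed down by the global level.

```python
def adjust_cpu_limit(cpu_limit, observed_throttle, throttle_target,
                     step=0.1, min_limit=0.2, max_limit=16.0):
    """Simple proportional controller on the throttle error (per control period)."""
    error = observed_throttle - throttle_target
    # Too much throttling -> grant more CPU; too little -> reclaim CPU.
    new_limit = cpu_limit * (1.0 + step * error / max(throttle_target, 1e-6))
    return min(max(new_limit, min_limit), max_limit)

limit = 2.0
for observed in [0.30, 0.25, 0.18, 0.12]:   # measured throttle ratios
    limit = adjust_cpu_limit(limit, observed, throttle_target=0.20)
    print(round(limit, 3))
```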
In ride-hailing systems with electric vehicle fleets, charging is a complex decision-making process. Most electric vehicle (EV) taxi services require drivers to make egoistic decisions, which leads to decentralized, ad-hoc charging strategies. The current state of the mobility system is usually unknown or not shared among vehicles, so optimal decisions cannot be made. Most existing approaches neither combine time, location, and duration into a comprehensive control algorithm nor are suitable for real-time operation. We therefore present a real-time predictive charging method for ride-hailing services with a single operator, called Idle Time Exploitation (ITX), which predicts the periods in which vehicles are idle and exploits these periods to harvest energy. It relies on graph convolutional networks and a linear assignment algorithm to devise an optimal pairing of vehicles and charging stations that maximizes the exploited idle time. We evaluate our approach in an extensive simulation study on a real-world dataset from New York City. The results show that ITX outperforms all baseline methods by at least 5% (equivalent to $70,000 for an operation with 6,000 vehicles) in terms of a monetary reward function modeled to replicate the profitability of real-world ride-hailing systems. Moreover, ITX reduces delays by at least 4.68% compared with the baseline methods and generally improves passenger comfort by facilitating a better spread of customers across the fleet. Our results also show that ITX enables vehicles to harvest energy during the day, stabilizes battery levels, and increases resilience to unexpected surges in demand. Finally, peak loads are reduced by 17.39% compared with the best-performing baseline strategy, which benefits grid operators and paves the way for more sustainable use of the electrical grid.
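The pairing step can be illustrated with a standard linear assignment solver (the score matrix below is invented, and the GCN that would predict idle periods is omitted): given a predicted exploitable idle time for every vehicle-charger pair, the assignment maximizing total exploited idle time is computed directly.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# rows = idle vehicles, cols = charging stations; entry = exploitable idle minutes
score = np.array([
    [35, 10,  5],
    [20, 30, 15],
    [ 5, 25, 40],
])

vehicles, stations = linear_sum_assignment(score, maximize=True)
for v, s in zip(vehicles, stations):
    print(f"vehicle {v} -> station {s} ({score[v, s]} min of idle time used)")
print("total exploited idle time:", score[vehicles, stations].sum(), "min")
```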
Machine learning (ML) models can leak information about users, and differential privacy (DP) provides a rigorous way to bound that leakage under a given budget. This DP budget can be regarded as a new type of compute resource in workloads of multiple ML models training on user data. Once used, the DP budget is consumed forever. It is therefore crucial to allocate it as efficiently as possible, to train as many models as possible. This paper presents a scheduler for privacy budgets that optimizes for efficiency. We formulate privacy scheduling as a new type of multidimensional knapsack problem, called privacy knapsack, which maximizes DP budget efficiency. We show that privacy knapsack is NP-hard, so practical algorithms are necessarily approximate. We develop an approximation algorithm for privacy knapsack, DPK, and evaluate it on microbenchmarks and on a new synthetic private-ML workload we developed from the Alibaba ML cluster trace. We show that DPK: (1) often approaches the efficiency-optimal schedule, (2) consistently schedules more tasks than a state-of-the-art privacy scheduling algorithm that focuses on fairness (1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some level of fairness for efficiency. Using DPK, DP ML operators should therefore be able to train more models on the same amount of user data while offering the same privacy guarantee to their users.
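For intuition only, a greedy efficiency-first packing over per-block privacy budgets is sketched below (this is not the DPK algorithm, and the task and budget numbers are made up): tasks are admitted in order of utility per unit of demanded budget, as long as every touched block still has budget left.

```python
def greedy_privacy_pack(tasks, block_budget):
    remaining = dict(block_budget)           # per-block remaining epsilon
    order = sorted(tasks, key=lambda t: t["utility"] / sum(t["demand"].values()),
                   reverse=True)
    admitted = []
    for t in order:
        if all(remaining[b] >= eps for b, eps in t["demand"].items()):
            for b, eps in t["demand"].items():
                remaining[b] -= eps
            admitted.append(t["name"])
    return admitted

tasks = [
    {"name": "model-A", "utility": 10, "demand": {"blk1": 0.5, "blk2": 0.5}},
    {"name": "model-B", "utility": 4,  "demand": {"blk1": 0.2}},
    {"name": "model-C", "utility": 9,  "demand": {"blk2": 0.9}},
]
print(greedy_privacy_pack(tasks, {"blk1": 1.0, "blk2": 1.0}))
```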
The end of Dennard scaling and the slowing of Moore's Law have put data center energy usage on an unsustainable path. Data centers already account for a large share of worldwide electricity use, and application demand is scaling rapidly. We argue that a dramatic reduction in the carbon intensity of data center computing can be achieved with a software-centric approach: by making energy and carbon visible to application developers through modified system APIs, making it possible to make informed trade-offs between performance and carbon emissions, and by raising the level of application programming so that more energy-efficient compute and storage methods can be used flexibly. We also lay out a research agenda for systems software to reduce the carbon footprint of data center computing.
In this paper, we design a resource management scheme to support stateful applications, which will be prevalent in 6G networks. Different from stateless applications, stateful applications require context data while executing computing tasks from user terminals (UTs). Using a multi-tier computing paradigm with servers deployed at the core network, gateways, and base stations to support stateful applications, we aim to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost from reconfiguring resource reservation. The coupling among different resources and the impact of UT mobility create challenges in resource management. To address the challenges, we develop digital twin (DT) empowered network planning with two elements, i.e., multi-resource reservation and resource reservation reconfiguration. First, DTs are designed for collecting UT status data, based on which UTs are grouped according to their mobility patterns. Second, an algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Last, a Meta-learning-based approach is developed to reconfigure resource reservation for balancing the network resource usage and the reconfiguration cost. Simulation results demonstrate that the proposed DT-empowered network planning outperforms benchmark frameworks by using less resources and incurring lower reconfiguration costs.
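A toy sketch of the grouping-then-reservation idea (the mobility feature, thresholds, and sizing rule are all assumptions, not the paper's algorithm): user terminals are grouped by a mobility statistic, and each group gets computing and storage reservations sized to its own demand.

```python
def group_by_mobility(uts, speed_threshold_kmh=30):
    groups = {"high_mobility": [], "low_mobility": []}
    for ut in uts:
        key = "high_mobility" if ut["avg_speed_kmh"] > speed_threshold_kmh else "low_mobility"
        groups[key].append(ut)
    return groups

def reserve(groups, headroom=1.2):
    # Reserve per-group compute and storage with a fixed headroom factor.
    plan = {}
    for name, uts in groups.items():
        compute = max((u["compute_demand"] for u in uts), default=0) * len(uts)
        storage = sum(u["context_mb"] for u in uts)
        plan[name] = {"cpu_units": round(compute * headroom, 1),
                      "storage_mb": round(storage * headroom, 1)}
    return plan

uts = [
    {"id": 1, "avg_speed_kmh": 50, "compute_demand": 2.0, "context_mb": 30},
    {"id": 2, "avg_speed_kmh": 5,  "compute_demand": 1.0, "context_mb": 20},
    {"id": 3, "avg_speed_kmh": 45, "compute_demand": 1.5, "context_mb": 25},
]
print(reserve(group_by_mobility(uts)))
```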
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
With the rapid growth of information from smart devices, raising the quality of human life requires the introduction of various computational paradigms, including the Internet of Things, fog, and cloud computing. Among these three paradigms, the cloud computing paradigm, as an emerging technology, brings cloud-layer services to the edge of the network so that resource allocation operations occur close to the end user, reducing resource processing time and network traffic overhead. Hence, the resource allocation problem for providers, in terms of offering a suitable platform using these computational paradigms, is considered a challenge. In general, resource allocation approaches fall into two categories: auction-based methods (whose goals are to increase profits for service providers and to increase user satisfaction and usability) and optimization-based methods (targeting energy, cost, network exploitation, runtime, and reduction of time delay). In this paper, based on the latest scientific achievements, a comprehensive literature study (CLS) of artificial-intelligence methods for resource allocation optimization, excluding auction-based methods, is provided across various computing environments such as cloud computing, vehicular fog computing, wireless networks, IoT, vehicular networks, 5G networks, vehicular cloud architecture, machine-to-machine (M2M) communication, train-to-train (T2T) communication networks, and peer-to-peer (P2P) networks. Since deep learning methods based on artificial intelligence are among the most important methods for resource allocation problems, this paper also covers deep-learning-based resource allocation approaches in the mentioned computing environments, such as deep reinforcement learning, the Q-learning technique, reinforcement learning, and online learning, as well as classical learning methods such as Bayesian learning, Cummins clustering, and Markov decision processes.
Deep reinforcement learning (DRL)-based neural schedulers have shown great potential for solving real-world resource allocation problems, as they have demonstrated significant performance gains in the cluster computing domain. In this paper, we investigate the feasibility of neural schedulers for the domain of system-on-chip (SoC) resource allocation through extensive experiments and comparison with non-neural, heuristic schedulers. The key findings are threefold. First, neural schedulers designed for the cluster computing domain do not work well for SoCs, because of (i) the heterogeneity of SoC computing resources and (ii) the variable action set caused by the randomness of incoming jobs. Second, our novel neural scheduler technique, Eclectic Interaction Matching (EIM), overcomes these challenges and significantly improves on existing neural schedulers; we also rationalize the root causes behind the performance gains of the EIM-based neural scheduler. Third, we find that the ratio of the average processing element (PE) switching delay to the average PE computation time also significantly affects the performance of neural SoC schedulers, even with EIM. Future neural SoC scheduler designs must therefore take this metric and its implementation overhead into account to be practical.
In recent years, the exponential proliferation of smart devices with their intelligent applications poses severe challenges to conventional cellular networks. Such challenges can be potentially overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends of both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resources integration. Then, we discuss the integration of sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we propose open challenges and present future research directions for beyond 5G networks, such as 6G.
This paper proposes an effective and novel multi-agent deep reinforcement learning (MADRL)-based method for solving the joint virtual network function (VNF) placement and routing (P&R) problem, where multiple service requests with differentiated demands are served simultaneously. The differentiated demands of the service requests are reflected by their delay- and cost-sensitivity factors. We first formulate the joint VNF P&R problem, which is NP-complete, to minimize a weighted sum of service delay and resource consumption cost. The joint problem is then decoupled into two iterative subtasks: a placement subtask and a routing subtask, each consisting of multiple concurrent parallel sequential decision processes. By invoking the deep deterministic policy gradient method and multi-agent techniques, an MADRL-P&R framework is designed to perform the two subtasks. A new joint reward and internal reward mechanism is proposed to match the objectives and constraints of the placement and routing subtasks. We also propose a parameter-migration-based model retraining method to handle changing network topologies. Experiments confirm that the proposed MADRL-P&R framework outperforms its alternatives in terms of service cost and delay and offers higher flexibility for personalized service demands, and that the parameter-migration-based model retraining method effectively accelerates convergence under moderate network topology changes.
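The weighted objective described above can be illustrated with a tiny worked example (weights and numbers are invented): each request scores a candidate placement by a convex combination of delay and resource cost, so delay-sensitive and cost-sensitive requests prefer different placements.

```python
def request_objective(delay_ms, resource_cost, delay_sensitivity):
    """Weighted sum; delay_sensitivity in [0, 1], cost weight is the complement."""
    return delay_sensitivity * delay_ms + (1.0 - delay_sensitivity) * resource_cost

# A latency-sensitive request prefers the faster, more expensive placement...
print(request_objective(delay_ms=20, resource_cost=80, delay_sensitivity=0.9))  # 26.0
print(request_objective(delay_ms=60, resource_cost=30, delay_sensitivity=0.9))  # 57.0
# ...while a cost-sensitive one prefers the cheaper placement.
print(request_objective(delay_ms=20, resource_cost=80, delay_sensitivity=0.1))  # 74.0
print(request_objective(delay_ms=60, resource_cost=30, delay_sensitivity=0.1))  # 33.0
```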
Edge computing enables smart IoT-based systems via concurrent and continuous execution of latency-sensitive machine learning (ML) applications. These edge-based ML systems are often battery-powered (i.e., energy-constrained) and use heterogeneous resources with different computing performance (e.g., CPU, GPU, and/or FPGA) to meet the latency constraints of the ML applications. The challenge is to allocate requests for different ML applications on such heterogeneous edge computing systems (HEC) with respect to both the energy and latency constraints of these systems. To this end, we study and analyze resource allocation solutions that can increase the on-time task completion rate while considering the energy constraint. Importantly, we investigate edge-friendly (lightweight) multi-objective mapping heuristics that do not become biased toward a particular application type to achieve these goals; instead, the heuristics consider "fairness" across the concurrent ML applications in their mapping decisions. Performance evaluations show that the proposed heuristics outperform heuristics widely used in heterogeneous systems with respect to the latency and energy objectives, especially at low to medium request arrival rates. We observe an improvement of up to 8.9% in the on-time task completion rate and 12.6% in energy savings, without imposing any noticeable overhead on the edge system.
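An illustrative fairness-aware mapping step (not the paper's heuristics; all names, profiles, and numbers are assumptions): the application type with the lowest on-time completion share is served first, and its request goes to the cheapest machine that can still meet the deadline.

```python
def pick_next_app(completed, arrived):
    # Fairness: serve the application type with the lowest completion ratio first.
    return min(arrived, key=lambda a: completed.get(a, 0) / arrived[a])

def map_request(machines, exec_profile, deadline_ms):
    feasible = [(m, exec_profile[m]) for m in machines if exec_profile[m][0] <= deadline_ms]
    if not feasible:
        return None  # drop: no machine can meet the deadline
    return min(feasible, key=lambda x: x[1][1])[0]  # cheapest energy among feasible

completed = {"face-det": 40, "speech": 10}
arrived = {"face-det": 50, "speech": 30}
app = pick_next_app(completed, arrived)                      # 'speech' (33% < 80%)
profile = {"cpu": (90, 0.8), "gpu": (25, 2.5), "fpga": (35, 1.1)}  # (latency ms, energy J)
print(app, "->", map_request(["cpu", "gpu", "fpga"], profile, deadline_ms=40))
```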
Edge federation is a new computing paradigm that seamlessly interconnects the resources of multiple edge service providers. A key challenge in such systems is deploying latency-critical and AI-based, resource-intensive applications on constrained devices. To address this challenge, we propose a novel memory-efficient deep learning model called generative optimization networks (GON). Unlike GANs, GONs use a single network to both discriminate inputs and generate samples, significantly reducing their memory footprint. Leveraging the low memory footprint of GONs, we propose a decentralized fault-tolerance method called DRAGON that runs simulations (as per a digitally modeled twin) to quickly predict and optimize the performance of the edge federation. Extensive experiments with real-world edge computing benchmarks on multiple Raspberry Pi based federated edge configurations show that DRAGON outperforms the baseline methods on fault-detection and quality of service (QoS) metrics. Specifically, the proposed method achieves higher F1 scores for fault detection than the best deep learning (DL) methods, while requiring less memory than the heuristic methods. This yields improvements of up to 74%, 63%, and 82% in energy consumption, response time, and service-level-agreement violations, respectively.
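A toy numpy sketch of the single-network generative idea (the quadratic "network" and its hand-written gradient are assumptions purely for illustration): one scoring function both judges inputs and, via gradient ascent on the input, generates new samples, so no separate generator network is kept in memory.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))          # fixed, pretend-trained weights
target = np.array([1.0, -0.5, 0.3, 0.8])

def score(x):                        # higher = "looks more like normal data"
    return -np.sum((W @ x - target) ** 2)

def generate(steps=500, lr=0.01):
    x = rng.normal(size=3)           # start from random noise
    for _ in range(steps):
        grad = -2.0 * W.T @ (W @ x - target)   # d(score)/dx
        x += lr * grad               # ascend the score surface
    return x

sample = generate()
print("generated sample:", np.round(sample, 3), "score:", round(score(sample), 4))
```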
Event processing is the cornerstone of the dynamic and responsive Internet of Things (IoT). Recent approaches in this area are based on representational state transfer (REST) principles, which allow event processing tasks to be placed on any device that follows the same principles. However, the tasks should be properly distributed among edge devices to ensure fair resource utilization and guarantee seamless execution. This paper investigates the use of deep learning to distribute the tasks fairly. An attention-based neural network model is proposed to produce efficient load balancing solutions under different scenarios. The proposed model is based on the Transformer and Pointer Network architectures and is trained by an advantage actor-critic reinforcement learning algorithm. The model is designed to scale with the number of event processing tasks and the number of edge devices, requiring no re-tuning or even retraining. Extensive experimental results show that the proposed model outperforms conventional heuristics on many key performance indicators. The generic design and the obtained results suggest that the proposed model is potentially applicable to several other load balancing problem variants, which makes it an attractive option for real-world scenarios thanks to its scalability and efficiency.
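One decoding step of a pointer-style placement can be sketched in a few lines of numpy (dimensions, embeddings, and the masking rule are assumptions, not the trained model): the decoder state for the current task attends over device embeddings, and the attention distribution points at the device to use.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
device_embed = rng.normal(size=(5, d))     # 5 candidate edge devices
task_state = rng.normal(size=d)            # decoder state for the current task
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def point_at_device(state, devices, mask=None):
    q = Wq @ state
    scores = (devices @ Wk.T) @ q / np.sqrt(d)   # scaled dot-product attention
    if mask is not None:
        scores = np.where(mask, scores, -1e9)    # hide overloaded devices
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

mask = np.array([True, True, False, True, True])  # device 2 is already saturated
choice, probs = point_at_device(task_state, device_embed, mask)
print("place task on device", choice, "probs:", np.round(probs, 3))
```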