This paper introduces a novel simulator called VMAgent to help RL researchers better explore new methods, especially for virtual machine scheduling. VMAgent is inspired by practical virtual machine (VM) scheduling tasks and provides an efficient simulation platform that reflects the real situations of cloud computing. Three scenarios (fading, recovering, and expansion) are drawn from practical cloud computing and correspond to many reinforcement learning challenges (high-dimensional state and action spaces, high non-stationarity, and life-long demand). VMAgent provides flexible configurations for RL researchers to design customized scheduling environments that account for different problem features. From the VM scheduling perspective, VMAgent also helps to explore better learning-based scheduling solutions.
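To make the simulator-driven workflow concrete, below is a minimal sketch of a gym-style VM scheduling loop. The environment class, its observation layout, and its reward are illustrative assumptions, not VMAgent's actual API.

```python
# A gym-style loop over a toy VM-scheduling environment. The class,
# observation layout, and reward below are illustrative assumptions,
# not VMAgent's actual interface.
import random

class ToyVMSchedulingEnv:
    """Place each incoming VM request on one of a fixed set of servers."""

    def __init__(self, num_servers=5, server_cpu=16, seed=0):
        self.num_servers = num_servers
        self.server_cpu = server_cpu
        self.rng = random.Random(seed)

    def reset(self):
        self.free = [self.server_cpu] * self.num_servers
        self.request = self.rng.choice([1, 2, 4, 8])  # CPU demand of next VM
        return self._obs()

    def _obs(self):
        return self.free + [self.request]

    def step(self, action):
        # action: index of the server chosen for the pending request
        if self.free[action] >= self.request:
            self.free[action] -= self.request
            reward = 1.0                    # request served
        else:
            reward = -1.0                   # infeasible placement
        done = max(self.free) < 1           # cluster exhausted
        self.request = self.rng.choice([1, 2, 4, 8])
        return self._obs(), reward, done, {}

env = ToyVMSchedulingEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    # Trivial baseline policy: pick the server with the most free CPU.
    action = max(range(env.num_servers), key=lambda i: obs[i])
    obs, reward, done, _ = env.step(action)
    total += reward
print("episode return:", total)
```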
With the rapid development of cloud computing, virtual machine scheduling has become one of the most important but challenging problems for the cloud computing community, especially for practical heterogeneous request sequences. By analyzing the impact of request heterogeneity on some popular heuristic schedulers, we find that existing scheduling algorithms cannot handle request heterogeneity properly or efficiently. In this paper, a plug-and-play virtual machine scheduling intensifier, called Resource Assigner (ReAssigner), is proposed to enhance the scheduling efficiency of any given scheduler for heterogeneous requests. The key idea of ReAssigner is to pre-assign roles to physical resources and let resources of the same role form a virtual cluster to handle homogeneous requests. ReAssigner can cooperate with arbitrary schedulers by restricting their scheduling space to virtual clusters. In evaluations on a real dataset from Huawei Cloud, the proposed ReAssigner achieves significant scheduling performance improvements compared with state-of-the-art scheduling methods.
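A minimal sketch of the pre-assignment idea as described: servers are partitioned into role-tagged virtual clusters, and an arbitrary inner scheduler is restricted to the cluster matching the request's type. The role names and the first-fit inner scheduler are our own assumptions, not ReAssigner's implementation.

```python
# Illustrative sketch of the pre-assignment idea: give each server a
# role, then restrict any inner scheduler to the virtual cluster whose
# role matches the request. Role names and the first-fit inner
# scheduler are our own assumptions, not ReAssigner's implementation.

def first_fit(servers, demand):
    """Baseline inner scheduler: index of the first server that fits."""
    for i, free in enumerate(servers):
        if free >= demand:
            return i
    return None

class RoleBasedAssigner:
    def __init__(self, roles):
        # roles maps a role name to the free CPUs of its servers,
        # e.g. {"small": [16, 16], "large": [64]}.
        self.clusters = {role: list(free) for role, free in roles.items()}

    def schedule(self, role, demand, inner=first_fit):
        cluster = self.clusters[role]       # restrict the scheduling space
        idx = inner(cluster, demand)
        if idx is None:
            return False                    # reject: virtual cluster full
        cluster[idx] -= demand
        return True

sched = RoleBasedAssigner({"small": [16, 16], "large": [64]})
print(sched.schedule("small", 4))    # True: placed in the small-VM cluster
print(sched.schedule("large", 48))   # True: never fragments small servers
```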
The core of the computer business now offers subscription-based, on-demand services with the help of cloud computing. Virtualization, which creates a virtual instance of a computer system running in an abstracted hardware layer, allows resources to be shared among multiple users. In contrast to early distributed computing models, cloud computing provides seemingly infinite computing capability through its massive datacenters, and it has been incredibly popular in recent years because of its continually growing infrastructure, user base, and hosted data volume. This article suggests a conceptual framework for a workload management paradigm in cloud settings that is both secure and performance-efficient. In this paradigm, a resource management unit performs energy-efficient and performance-aware virtual machine allocation, ensures the safe execution of users' applications, and protects against data breaches caused by unauthorized virtual machine access in real time. A secure virtual machine management unit controls the resource management unit and is designed to report unlawful access or intercommunication. Additionally, a workload analyzer unit runs concurrently to estimate resource consumption, helping the resource management unit allocate virtual machines more effectively. The suggested model takes several measures toward the same objective, including encrypting and decrypting data prior to transfer and using a trust-based access mechanism to prevent unauthorized access to virtual machines, which adds extra computational overhead.
Multi-tenant machine learning services have become emerging data-intensive workloads in datacenters with heavy usage of GPU resources. Due to the large scale, the many tuning parameters, and the heavy resource usage, it is usually impractical to evaluate and benchmark machine learning services on real clusters. In this demonstration, we present AnalySim, a cluster simulator that enables efficient design exploration for multi-tenant machine learning services. Specifically, through trace-driven cluster workload simulation, AnalySim can easily test and analyze various scheduling policies against many performance metrics, such as GPU resource utilization. AnalySim simulates cluster computing resources based on both physical topology and logical partitions. The tool has been used to understand the impact of different scheduling policies with traces from a real production cluster of more than 1,000 GPUs. We find that preemption and migration can significantly reduce average job completion time and mitigate the resource fragmentation problem.
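Below is a minimal sketch of what a trace-driven scheduling simulation loop can look like. The trace format and the FIFO policy are illustrative assumptions, not AnalySim's design.

```python
# Toy trace-driven simulation of a FIFO GPU-cluster scheduler. The
# trace format and policy are illustrative assumptions, not AnalySim's
# actual design.
import heapq

def simulate(trace, total_gpus=8):
    """Replay a job trace under FIFO; return the average job completion time."""
    running = []                        # min-heap of (finish_time, gpus_held)
    clock, free, jcts = 0, total_gpus, []
    for submit, job_id, gpus, duration in sorted(trace):
        clock = max(clock, submit)
        # FIFO: the head-of-line job waits until enough GPUs free up.
        while free < gpus:
            finish, released = heapq.heappop(running)
            clock = max(clock, finish)
            free += released
        free -= gpus
        heapq.heappush(running, (clock + duration, gpus))
        jcts.append(clock + duration - submit)
    return sum(jcts) / len(jcts)

trace = [  # (submit_time, job_id, num_gpus, duration)
    (0, "a", 4, 10), (1, "b", 2, 5), (2, "c", 8, 3),
]
print("average JCT:", simulate(trace))
```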
We develop a reinforcement learning (RL) framework for applications that involve sequential decisions and exogenous uncertainty, such as resource allocation and inventory management. In these applications, the uncertainty is due only to exogenous variables such as future demand. A popular approach is to predict the exogenous variables using historical data and then plan against the predictions. However, this indirect approach requires a high-fidelity model of the exogenous process to guarantee good downstream decisions, which can be impractical when the exogenous process is complex. In this work, we propose an alternative approach based on hindsight learning that sidesteps modeling the exogenous process. Our key insight is that, unlike in Sim2Real RL, in these applications we can revisit past decisions in the historical data and derive counterfactual consequences for other actions. Our framework uses hindsight-optimal actions as the policy training signal and has strong theoretical guarantees on decision-making performance. We use the framework to develop an algorithm that allocates compute resources for real-world Microsoft Azure workloads. The results show that our approach learns better policies than domain-specific heuristics and Sim2Real RL baselines.
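A toy sketch of the hindsight idea on an inventory problem: because demand is exogenous, the logged trace can be replayed to compute the hindsight-optimal action at each step, which then serves as a supervised training target for the policy. The setting and all names are illustrative assumptions.

```python
# Toy illustration of hindsight-optimal supervision on an inventory
# problem. Because demand is exogenous, the logged trace can be
# replayed to score counterfactual actions; the setting and names are
# illustrative assumptions, not the paper's algorithm.
import random

random.seed(0)
demand_trace = [random.randint(0, 9) for _ in range(100)]  # logged exogenous demand

def cost(order, demand, hold=1.0, stockout=5.0):
    """Holding cost for leftover stock, stockout cost for unmet demand."""
    return hold * max(order - demand, 0) + stockout * max(demand - order, 0)

# Hindsight-optimal label: the order quantity that minimizes cost
# given the demand that actually materialized.
labels = [min(range(10), key=lambda a: cost(a, d)) for d in demand_trace]

# Policy "training" then reduces to supervised learning on
# (context, hindsight label) pairs; here the policy is a lookup table
# keyed by the previous day's demand.
policy = {}
for prev, label in zip(demand_trace, labels[1:]):
    policy.setdefault(prev, []).append(label)
policy = {ctx: round(sum(v) / len(v)) for ctx, v in policy.items()}
print(policy)  # learned order quantity per observed context
```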
With the increasing growth of information through smart devices, raising the quality of human life requires various computational paradigms, including the Internet of Things, fog, and cloud. Among these three paradigms, cloud computing as an emerging technology adds cloud-layer services to the edge of the network, so that resource allocation occurs close to the end user, reducing resource processing time and network traffic overhead. Hence, the resource allocation problem for providers, in terms of presenting a suitable platform using these computational paradigms, is considered a challenge. In general, resource allocation approaches are divided into two classes: auction-based methods (whose goals are to increase profits for service providers and to improve user satisfaction and usability) and optimization-based methods (targeting energy, cost, network exploitation, runtime, and reduction of time delay). In this paper, according to the latest scientific achievements, we provide a comprehensive literature study (CLS) on artificial intelligence methods for optimization-based resource allocation, excluding auction-based methods, in various computing environments such as cloud computing, vehicular fog computing, wireless networks, IoT, vehicular networks, 5G networks, vehicular cloud architecture, machine-to-machine (M2M) communication, train-to-train (T2T) communication networks, and peer-to-peer (P2P) networks. Since deep learning methods are among the most important methods used for resource allocation problems, this paper also covers deep-learning-based resource allocation approaches in the mentioned environments, such as deep reinforcement learning, the Q-learning technique, reinforcement learning, and online learning, as well as classical learning methods such as Bayesian learning, Cummins clustering, and Markov decision processes.
Latency-critical (LC) services have been widely deployed in cloud environments. For cost efficiency, multiple services are usually co-located on a server. Thus, in these complicated co-location cases, run-time resource scheduling becomes the pivot of QoS control. However, the scheduling exploration space grows rapidly with increasing server resources, making it almost impossible for schedulers to deliver ideal solutions quickly. More importantly, we observe that there are "resource cliffs" in the scheduling exploration space: they hurt exploration efficiency and always lead to severe QoS fluctuations, and they cannot be easily avoided by previous schedulers. To address these problems, we propose a novel ML-based intelligent scheduler, OSML. It learns the correlations among architectural hints (e.g., IPC, cache misses, memory footprint), scheduling solutions, and QoS demands based on a dataset we collected from 11 widely deployed services running on off-the-shelf servers. OSML employs multiple ML models that work collaboratively to predict QoS variations, adjust schedules, and recover from QoS violations in complicated co-location cases. OSML can intelligently avoid resource cliffs during scheduling and reach an optimal solution faster than previous approaches for co-located LC services. Experimental results show that, compared with previous studies, OSML supports higher loads and meets QoS targets with lower scheduling overheads and shorter convergence times.
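A minimal sketch of the kind of mapping such a scheduler learns: architectural counters in, a predicted QoS signal out. The synthetic data and the least-squares model are illustrative assumptions, not OSML's actual models.

```python
# Toy sketch: learn a mapping from architectural hints (IPC, cache
# misses, memory footprint) to a QoS signal. The synthetic data and
# least-squares model are illustrative assumptions, not OSML's models.
import numpy as np

rng = np.random.default_rng(0)
n = 200
ipc = rng.uniform(0.5, 2.0, n)
cache_misses = rng.uniform(0.0, 1.0, n)   # normalized miss rate
mem_fp = rng.uniform(0.0, 1.0, n)         # normalized memory footprint
# Synthetic ground truth: latency worsens with misses and footprint.
latency = 10 - 3 * ipc + 8 * cache_misses + 5 * mem_fp + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), ipc, cache_misses, mem_fp])
coef, *_ = np.linalg.lstsq(X, latency, rcond=None)

def predict_latency(ipc, misses, footprint):
    """Query the fitted model before shifting resources."""
    return coef @ np.array([1.0, ipc, misses, footprint])

print("predicted latency:", predict_latency(1.2, 0.3, 0.4))
```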
As computing power has become a core productive force in the digital economy era, the concept of computing and network convergence (CNC), which dynamically schedules and allocates network and computing resources according to users' demands, has attracted widespread attention. Based on the attributes of a task, the network orchestration plane needs to flexibly deploy the task to an appropriate computing node and arrange a path to that node. This is an orchestration problem involving both resource scheduling and path arrangement. Since CNC is relatively new, in this paper we first review existing research and applications of CNC. We then design a CNC orchestration method using reinforcement learning (RL), which, to our knowledge, is the first such attempt; it flexibly allocates and schedules computing and network resources, aiming at high profit and low latency. Meanwhile, we use multiple factors to determine the optimization objective, so that the orchestration strategy is optimized according to overall performance across different aspects, such as cost, profit, latency, and system overload in our experiments. The experiments show that the proposed RL-based method can achieve higher profit and lower latency than greedy, random-selection, and balanced-resource methods. We demonstrate that RL is suitable for CNC orchestration. This paper initiates the application of RL to CNC orchestration.
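A small sketch of the multi-factor objective described above: a scalar reward that combines profit, cost, latency, and overload with tunable weights. The weights and scaling are illustrative assumptions, not the paper's formulation.

```python
# Toy multi-factor reward combining the aspects named above (cost,
# profit, latency, overload). The weights and scaling are illustrative
# assumptions, not the paper's formulation.
def orchestration_reward(profit, cost, latency, overload,
                         w_profit=1.0, w_cost=0.5,
                         w_latency=0.3, w_overload=0.2):
    """Higher is better: reward profit; penalize cost, latency, overload."""
    return (w_profit * profit
            - w_cost * cost
            - w_latency * latency
            - w_overload * overload)

# An RL agent would receive this scalar after each orchestration decision:
print(orchestration_reward(profit=10.0, cost=4.0, latency=2.5, overload=0.1))
```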
Resource-disaggregated data centres (RDDC) propose a resource-centric data centre (DC) architecture that avoids resource fragmentation and achieves high utilization by allowing pools of resources of arbitrary size to be allocated to tasks, rather than server-sized blocks. RDDC typically places greater demands on the network, requiring more infrastructure and incurring extra cost and power, so new resource allocation algorithms that co-manage server and network resources are essential to ensure that allocations are not bottlenecked by the network and that requests can be served successfully with minimal network resources. We apply reinforcement learning (RL) to this problem for the first time and show that a graph-neural-network-based RL policy can learn end-to-end resource allocation policies that achieve up to 22.0%, 42.6%, and 22.6% higher acceptance ratio, CPU utilization, and memory utilization, respectively, generalize to RDDC topologies 10^2 times larger than those seen during training, and achieve performance comparable to the best baseline while using 5.3x fewer network resources.
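Below is a bare-bones sketch of one message-passing step such a graph-neural-network policy might use to score nodes for an allocation decision. The random weights, node features, and readout are illustrative assumptions, not the paper's architecture.

```python
# Minimal hand-rolled message-passing step that scores datacenter
# nodes for an allocation decision. The random weights, features, and
# readout are illustrative assumptions, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
# Tiny topology: adjacency matrix over 4 nodes (e.g., racks or servers).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.uniform(size=(4, 3))   # node features: free CPU, free memory, link load

W_self = rng.normal(size=(3, 8))
W_nbr = rng.normal(size=(3, 8))
deg = A.sum(axis=1, keepdims=True)
H = np.tanh(X @ W_self + (A @ X / deg) @ W_nbr)  # aggregate neighbour features

w_out = rng.normal(size=8)
scores = H @ w_out             # per-node allocation scores
print("chosen node:", int(scores.argmax()))
```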
The exponential growth in demand for digital services drives massive datacenter energy consumption and negative environmental impacts. Promoting sustainable solutions to pressing energy and digital infrastructure challenges is crucial. Several hyperscale cloud providers have announced plans to power their datacenters using renewable energy. However, integrating renewables to power the datacenters is challenging because the power generation is intermittent, necessitating approaches to tackle power supply variability. Hand-engineering domain-specific, heuristics-based schedulers to meet specific objective functions in such complex, dynamic green datacenter environments is time-consuming, expensive, and requires extensive tuning by domain experts. Green datacenters need smart systems and system software that employ multiple renewable energy sources (wind and solar) by intelligently adapting computing to renewable energy generation. We present RARE (Renewable energy Aware REsource management), a Deep Reinforcement Learning (DRL) job scheduler that automatically learns effective job scheduling policies while continually adapting to the datacenter's complex, dynamic environment. The resulting DRL scheduler performs better than heuristic scheduling policies across different workloads and adapts to the intermittent power supply from renewables. We identify DRL scheduler system design parameters that, when tuned correctly, produce better performance. Finally, we demonstrate that the DRL scheduler can learn from and improve upon existing heuristic policies using Offline Learning.
Training deep neural networks (DNNs) is a popular workload in both enterprises and cloud data centers. Existing DNN training schedulers treat GPUs as the dominant resource and allocate other resources, such as CPU and memory, proportionally to the number of GPUs requested by a job. Unfortunately, these schedulers do not account for a job's sensitivity to its CPU, memory, and storage allocations. In this work, we propose Synergy, a resource-sensitive scheduler for shared GPU clusters. Synergy infers the sensitivity of DNNs to different resources using optimistic profiling; some jobs may benefit from more than the GPU-proportional allocation, while others may be unaffected by less than the GPU-proportional allocation. Synergy performs such multi-resource, workload-aware assignments across a set of jobs scheduled on shared multi-tenant clusters using a new near-optimal online algorithm. Our experiments show that workload-aware CPU and memory allocation can improve average JCT by up to 3.4x compared with traditional GPU-proportional scheduling.
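A toy contrast between GPU-proportional and workload-aware CPU allocation; the job sensitivities and cluster sizes are invented for illustration, not taken from Synergy's profiler.

```python
# Toy contrast between GPU-proportional and workload-aware CPU
# allocation. Job sensitivities and sizes are invented for illustration.
jobs = [  # (job_id, gpus_requested, profiled_cpu_demand)
    ("vision", 4, 24),   # data-loading heavy: benefits from extra CPU
    ("nlp", 4, 8),       # GPU-bound: tolerates fewer CPUs
]
TOTAL_CPUS, CPUS_PER_GPU = 32, 4

def proportional(jobs):
    """Baseline: CPUs strictly proportional to the GPUs requested."""
    return {job_id: gpus * CPUS_PER_GPU for job_id, gpus, _ in jobs}

def workload_aware(jobs, total=TOTAL_CPUS):
    """Grant each job its profiled CPU demand while the budget lasts."""
    alloc, remaining = {}, total
    for job_id, _, demand in sorted(jobs, key=lambda t: t[2]):
        alloc[job_id] = min(demand, remaining)
        remaining -= alloc[job_id]
    return alloc

print("GPU-proportional:", proportional(jobs))   # starves the vision job's loaders
print("workload-aware:  ", workload_aware(jobs)) # matches profiled sensitivity
```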
Event processing is the cornerstone of the dynamic and responsive Internet of Things (IoT). Recent approaches in this area are based on representational state transfer (REST) principles, which allow event processing tasks to be placed on any device that follows the same principles. However, tasks should be properly distributed among edge devices to ensure fair resource utilization and guarantee seamless execution. This article investigates the use of deep learning to distribute the tasks fairly. An attention-based neural network model is proposed to generate efficient load balancing solutions under different scenarios. The proposed model is based on the Transformer and Pointer Network architectures and is trained by an advantage actor-critic reinforcement learning algorithm. The model is designed to scale with the number of event processing tasks and the number of edge devices, requiring no re-tuning or even retraining. Extensive experimental results show that the proposed model outperforms conventional heuristics on many key performance indicators. The generic design and the obtained results show that the proposed model can potentially be applied to several other load balancing problem variations, making it an attractive option for real-world scenarios thanks to its scalability and efficiency.
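A toy sketch of pointer-style attention for placement: a task embedding attends over device embeddings, and the softmax weights are read as a placement distribution. Dimensions, weights, and features are illustrative assumptions, not the paper's model.

```python
# Toy pointer-style attention: one task embedding attends over device
# embeddings, and the softmax weights are read as a placement
# distribution. Dimensions, weights, and features are illustrative
# assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
d_model = 8
devices = rng.normal(size=(5, d_model))  # device embeddings (load, capacity, ...)
task = rng.normal(size=d_model)          # embedding of the task to place

W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
q = task @ W_q
keys = devices @ W_k
logits = keys @ q / np.sqrt(d_model)     # scaled dot-product attention scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax over candidate devices
print("placement distribution:", np.round(probs, 3))
print("chosen device:", int(probs.argmax()))
```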
With the recent prevalence of reinforcement learning (RL), there has been widespread interest in utilizing RL in recommendation platforms such as e-commerce and news feed sites. To achieve better allocation, the input of recent RL-based ad allocation methods has been upgraded from point-wise single items to list-wise arrangements of items. However, this also results in a high-dimensional space of state-action pairs, making it difficult to learn list representations with good generalization ability. This further hinders the exploration of RL agents and causes poor sample efficiency. To address this problem, we propose a novel RL-based approach for ad allocation that learns better list representations by leveraging task-specific signals on the Meituan food delivery platform. Specifically, we propose three auxiliary tasks, based on reconstruction, prediction, and contrastive learning respectively, according to prior domain knowledge of ad allocation. We conduct extensive experiments on the Meituan food delivery platform to evaluate the effectiveness of the proposed auxiliary tasks. Both offline and online experimental results show that the proposed method learns better list representations and achieves higher platform revenue than state-of-the-art baselines.
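A schematic sketch of combining an RL objective with the three kinds of auxiliary losses the abstract names. The loss forms and weights are illustrative assumptions, not the paper's exact formulation.

```python
# Schematic combination of an RL loss with the three auxiliary
# objectives named above. The loss forms and weights are illustrative
# assumptions, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=16)   # learned representation of an item list

def reconstruction_loss(z, item_feats, decoder):
    """Rebuild the input list from its representation."""
    return np.mean((decoder(z) - item_feats) ** 2)

def prediction_loss(z, target, head):
    """Predict a task-specific signal (e.g., click-through rate)."""
    return np.mean((head(z) - target) ** 2)

def contrastive_loss(z, z_pos, z_neg):
    """Pull a view of the same list closer than a different list."""
    sim = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return -np.log(np.exp(sim(z, z_pos)) /
                   (np.exp(sim(z, z_pos)) + np.exp(sim(z, z_neg))))

def total_loss(rl_loss, aux_losses, weights=(0.1, 0.1, 0.1)):
    return rl_loss + sum(w * l for w, l in zip(weights, aux_losses))

aux = (reconstruction_loss(z, rng.normal(size=8), lambda v: v[:8]),
       prediction_loss(z, 0.3, lambda v: v.mean()),
       contrastive_loss(z, z + 0.1 * rng.normal(size=16), rng.normal(size=16)))
print("total loss:", total_loss(1.0, aux))
```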
The dynamic job-shop scheduling problem (DJSP) is a class of scheduling tasks that specifically considers the inherent uncertainties of realistic smart manufacturing settings, such as changing order requirements and possible machine failures. Because conventional methods cannot dynamically generate effective scheduling policies in the face of environmental disturbances, we formulate the DJSP as a Markov decision process (MDP) to be tackled by reinforcement learning (RL). To this end, we propose a flexible hybrid architecture that uses a disjunctive graph as the state and a set of general scheduling rules as the action space, requiring minimal prior domain knowledge. An attention mechanism serves as the graph representation learning (GRL) module for feature extraction from states, and a double dueling deep Q-network with prioritized replay and noisy networks (D3QPN) maps each state to the most appropriate scheduling rule. In addition, we present Gymjsp, a public benchmark based on the well-known OR-Library, providing a standardized, off-the-shelf facility for the RL and DJSP research communities. Comprehensive experiments on various DJSP instances confirm that our proposed framework is superior to the baseline algorithms, with smaller makespans across all instances, and provide empirical justification for the effectiveness of each component of the hybrid architecture.
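A toy sketch of the "scheduling rules as actions" idea: a stand-in Q-function scores a small set of classic dispatching rules for the current state, and the chosen rule picks the next job. The features, weights, and rule set are illustrative assumptions, not the paper's trained D3QPN.

```python
# Toy "scheduling rules as actions" setup: a stand-in Q-function
# scores classic dispatching rules for the current state, and the
# chosen rule selects the next job. Features, weights, and the rule
# set are illustrative assumptions, not the paper's trained D3QPN.
import numpy as np

rng = np.random.default_rng(0)
jobs = [  # (job_id, processing_time, remaining_work)
    ("j1", 5, 12), ("j2", 3, 20), ("j3", 8, 8),
]

RULES = {
    "SPT": lambda js: min(js, key=lambda j: j[1]),   # shortest processing time
    "LPT": lambda js: max(js, key=lambda j: j[1]),   # longest processing time
    "MWKR": lambda js: max(js, key=lambda j: j[2]),  # most work remaining
}

def state_features(js):
    times = np.array([j[1] for j in js], dtype=float)
    return np.array([times.mean(), times.std(), len(js)])

W = rng.normal(size=(3, len(RULES)))   # stand-in for a trained Q-network
q_values = state_features(jobs) @ W
rule_name = list(RULES)[int(q_values.argmax())]
print("rule:", rule_name, "-> next job:", RULES[rule_name](jobs)[0])
```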
Fog computing was introduced to mitigate the limitations of cloud computing by bringing cloud resources into the proximity of users. A fog environment makes its limited resources available to a large number of users for deploying their serverless applications, each composed of multiple serverless functions. The main intention behind introducing the fog environment is to fulfill the demands of latency- and location-sensitive serverless applications through its limited resources. Recent research has mostly focused on allocating the maximum available resources from fog nodes to these applications, without fully utilizing the cloud environment. This has the negative effect of reducing the resources available to the maximum number of connected users. To address this issue, in this paper we investigate the optimal percentage of a user's requests that should be fulfilled by the fog and by the cloud. Hence, we propose DeF-DReL, which systematically deploys serverless functions in fog and cloud environments using deep reinforcement learning, based on several real-life parameters, such as the user's distance and latency to nearby fog nodes, the priority of the user, and the priority of the serverless application and its resource demand. From the simulation and comparison results against recent related algorithms, its superiority over other algorithms and its applicability to real-life scenarios can be clearly observed.
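A toy sketch of the fog-versus-cloud split decision, scoring each target from the request features the abstract lists. The weights and threshold are illustrative assumptions, not DeF-DReL's learned policy.

```python
# Toy fog-vs-cloud routing over the features the abstract mentions
# (distance, latency sensitivity, priority, resource demand). Weights
# and the threshold are illustrative assumptions, not DeF-DReL's policy.
def route(distance_km, latency_sensitive, priority, demand_cores,
          fog_free_cores=8):
    fog_score = (2.0 * latency_sensitive      # latency-sensitive -> prefer fog
                 + 1.0 * priority             # high-priority users get fog first
                 - 0.1 * distance_km          # distant users gain less from fog
                 - 0.5 * max(demand_cores - fog_free_cores, 0))
    return "fog" if fog_score > 1.0 else "cloud"

print(route(distance_km=2, latency_sensitive=1, priority=1, demand_cores=2))    # fog
print(route(distance_km=40, latency_sensitive=0, priority=0, demand_cores=16))  # cloud
```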
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
In this paper, we present a comprehensive, in-depth survey of reinforcement learning approaches to decision optimization problems in typical ride-sharing systems. Papers on the topics of ride matching, vehicle repositioning, ride-pooling, routing, and dynamic pricing are covered. Most of this literature has appeared in the past few years, and it continues to address several core challenges: model complexity, agent coordination, and joint optimization of multiple levers. We therefore also introduce popular datasets and open simulation environments to facilitate further research and development. Subsequently, we discuss a number of challenges and opportunities for reinforcement learning research in this important field.
Machine learning (ML) models can leak information about users, and differential privacy (DP) provides a rigorous way to bound that leakage under a given budget. This DP budget can be regarded as a new type of compute resource in workloads of multiple ML models training on user data. Once it is used, the DP budget is forever consumed. Therefore, it is crucial to allocate it most efficiently to train as many models as possible. This paper presents a scheduler for privacy budgets that optimizes for efficiency. We formulate privacy scheduling as a new type of multidimensional knapsack problem, called privacy knapsack, which maximizes DP budget efficiency. We show that privacy knapsack is NP-hard, hence practical algorithms are necessarily approximate. We develop an approximation algorithm for privacy knapsack, DPK, and evaluate it on microbenchmarks and on a new, synthetic private-ML workload we developed from the Alibaba ML cluster trace. We show that DPK: (1) often approaches the efficiency-optimal schedule, (2) consistently schedules more tasks compared to a state-of-the-art privacy scheduling algorithm that focused on fairness (1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some level of fairness for efficiency. Therefore, using DPK, DP ML operators should be able to train more models on the same amount of user data while offering the same privacy guarantee to their users.
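A toy greedy sketch of the privacy-knapsack flavor: tasks demand epsilon across data blocks (the knapsack dimensions), and tasks are packed by value density. This is a generic knapsack heuristic used for illustration, not the DPK algorithm itself.

```python
# Toy greedy heuristic for a multidimensional (privacy) knapsack:
# each task consumes epsilon from several data blocks; pack tasks by
# value per unit of budget. A generic heuristic for illustration,
# not the paper's DPK algorithm.
capacity = {"block_a": 1.0, "block_b": 1.0}          # epsilon budget per block

tasks = [  # (task_id, value, {block: epsilon demand})
    ("t1", 3.0, {"block_a": 0.5}),
    ("t2", 5.0, {"block_a": 0.6, "block_b": 0.6}),
    ("t3", 2.0, {"block_b": 0.3}),
]

def density(task):
    _, value, demand = task
    return value / sum(demand.values())

scheduled = []
for task_id, value, demand in sorted(tasks, key=density, reverse=True):
    if all(capacity[b] >= eps for b, eps in demand.items()):
        for b, eps in demand.items():
            capacity[b] -= eps                       # budget is consumed forever
        scheduled.append(task_id)

print("scheduled:", scheduled, "remaining budget:", capacity)
```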
Deep reinforcement learning algorithms have succeeded in several challenging domains. Classic online RL job schedulers can learn efficient scheduling strategies but often take thousands of timesteps to explore the environment and adapt from a randomly initialized DNN policy. Existing RL schedulers overlook the importance of learning from historical data and improving upon custom heuristic policies. Offline reinforcement learning presents the prospect of policy optimization from pre-recorded datasets without online environment interaction. Following the recent success of data-driven learning, we explore two RL methods: 1) Behaviour Cloning and 2) Offline RL, which aim to learn policies from logged data without interacting with the environment. These methods address the challenges concerning the cost of data collection and safety, particularly pertinent to real-world applications of RL. Although the data-driven RL methods generate good results, we show that the performance is highly dependent on the quality of the historical datasets. Finally, we demonstrate that by effectively incorporating prior expert demonstrations to pre-train the agent, we short-circuit the random exploration phase and learn a reasonable policy with online training. We utilize Offline RL as a launchpad to learn effective scheduling policies from prior experience collected using Oracle or heuristic policies. Such a framework is effective for pre-training from historical datasets and is well suited to continuous improvement with online data collection.
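A toy sketch of the behaviour-cloning step: fit a policy to (state, action) pairs logged from a heuristic scheduler, then use it to initialize online training. The data and the nearest-centroid classifier are illustrative assumptions, not the paper's setup.

```python
# Toy behaviour cloning from a logged heuristic scheduler: fit a
# policy to (state, action) pairs, then use it as the starting point
# for online RL. Data and the nearest-centroid "policy" are
# illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
# Logged dataset: state = (queue_length, free_gpus), action = slot chosen.
states = rng.uniform(0, 1, size=(500, 2))
actions = (states[:, 0] > states[:, 1]).astype(int)   # the heuristic's choices

# Behaviour cloning with a nearest-centroid classifier.
centroids = np.stack([states[actions == a].mean(axis=0) for a in (0, 1)])

def cloned_policy(state):
    dists = np.linalg.norm(centroids - state, axis=1)
    return int(dists.argmin())

# The pre-trained policy replaces random exploration at the start of online RL.
print("action for (0.9, 0.1):", cloned_policy(np.array([0.9, 0.1])))
print("action for (0.1, 0.9):", cloned_policy(np.array([0.1, 0.9])))
```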