Multi-tenant machine learning services have become emerging data-intensive workloads in data centers, with heavy usage of GPU resources. Due to the large scale, many tuning parameters, and heavy resource usage, it is usually impractical to evaluate and benchmark machine learning services on real clusters. In this demonstration, we present AnalySim, a cluster simulator that enables efficient design exploration for multi-tenant machine learning services. Specifically, through trace-driven cluster workload simulation, AnalySim can easily test and analyze various scheduling policies against a number of performance metrics, such as GPU resource utilization. AnalySim simulates the cluster's computational resources based on both the physical topology and the logical partition. The tool has been used in practice to understand the impact of different scheduling policies with traces from a real production cluster of over 1,000 GPUs. We find that preemption and migration can significantly reduce the average job completion time and mitigate the resource fragmentation problem.
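To make the trace-driven idea concrete, the toy simulator below replays a three-job trace under plain FIFO and under shortest-remaining-first preemption. This is a minimal sketch in the spirit of the abstract, not AnalySim itself; the trace format, unit timesteps, and both policies are illustrative assumptions.

```python
# Toy trace-driven GPU-cluster simulator (illustrative, not AnalySim).
# trace: list of (arrival_time, gpus_needed, duration) tuples.
import itertools

def simulate(trace, total_gpus=4, preempt=False):
    jobs = [dict(arr=a, gpus=g, rem=d, finish=None, running=False)
            for a, g, d in trace]
    for t in itertools.count():
        active = [j for j in jobs if j["arr"] <= t and j["finish"] is None]
        if preempt:
            # Re-pick the running set every step: shortest-remaining-first.
            active.sort(key=lambda j: j["rem"])
            for j in jobs:
                j["running"] = False
        free = total_gpus - sum(j["gpus"] for j in active if j["running"])
        for j in active:                 # admit jobs into the free GPUs
            if not j["running"] and j["gpus"] <= free:
                j["running"] = True
                free -= j["gpus"]
        for j in active:                 # advance running jobs by one step
            if j["running"]:
                j["rem"] -= 1
                if j["rem"] == 0:
                    j["finish"], j["running"] = t + 1, False
        if all(j["finish"] is not None for j in jobs):
            return sum(j["finish"] - j["arr"] for j in jobs) / len(jobs)

trace = [(0, 4, 10), (1, 2, 2), (1, 2, 2)]  # one long job, two short jobs
print(simulate(trace, preempt=False))  # ~10.7: short jobs wait behind long
print(simulate(trace, preempt=True))   # ~5.3: preemption halves average JCT
```

On this trace the two short jobs are otherwise stuck behind the long one, so preemption roughly halves the average job completion time, the same effect the abstract reports at production scale.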
Training deep neural networks (DNNs) is popular in both enterprise and cloud data centers. Existing schedulers for DNN training consider GPUs as the dominant resource and allocate other resources, such as CPU and memory, proportional to the number of GPUs requested by a job. Unfortunately, these schedulers do not consider the impact of a job's sensitivity to the allocation of CPU, memory, and storage resources. In this work, we propose Synergy, a resource-sensitive scheduler for shared GPU clusters. Synergy infers the sensitivity of DNNs to different resources using optimistic profiling; some jobs may benefit from more than the GPU-proportional allocation, while others may be unaffected by less than the GPU-proportional allocation. Synergy performs such workload-aware multi-resource assignments for a set of jobs scheduled on shared multi-tenant clusters using a new near-optimal online algorithm. Our experiments show that workload-aware CPU and memory allocation can improve the average JCT by up to 3.4x compared to traditional GPU-proportional scheduling.
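To make the contrast concrete, the sketch below compares a GPU-proportional allocator with a profile-driven, workload-aware one on a single server. It is an illustrative toy, not Synergy's near-optimal online algorithm, and the job names, server dimensions, and profiled CPU demands are assumptions.

```python
# Illustrative comparison of GPU-proportional vs. workload-aware CPU
# allocation on one server (a sketch, not Synergy's actual algorithm).
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    gpus: int
    best_cpus: int  # CPU demand inferred by profiling the input pipeline

SERVER_GPUS, SERVER_CPUS = 8, 64
CPUS_PER_GPU = SERVER_CPUS // SERVER_GPUS  # proportional share: 8 CPUs/GPU

def proportional(jobs):
    """Baseline: CPUs strictly proportional to the GPUs requested."""
    return {j.name: j.gpus * CPUS_PER_GPU for j in jobs}

def workload_aware(jobs):
    """Give each job its profiled demand when feasible."""
    if sum(j.best_cpus for j in jobs) > SERVER_CPUS:
        return proportional(jobs)  # fall back when demands do not fit
    return {j.name: j.best_cpus for j in jobs}

jobs = [Job("cv-train", 4, 40),   # vision job: CPU-hungry data pipeline
        Job("nlp-train", 4, 12)]  # language job: barely touches the CPU
print(proportional(jobs))    # {'cv-train': 32, 'nlp-train': 32}
print(workload_aware(jobs))  # {'cv-train': 40, 'nlp-train': 12}
```

The CPU-hungry vision job receives more than its proportional share while the language job cedes CPUs it would not use, which is the intuition behind the reported JCT improvements.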
Latency-critical (LC) services have been widely deployed in cloud environments. For cost efficiency, multiple services are usually co-located on a server. Therefore, run-time resource scheduling becomes the pivot for QoS control in these complicated co-location cases. However, the scheduling exploration space enlarges rapidly with increasing server resources, making it hard for schedulers to provide ideal solutions quickly. More importantly, we observe that there are "resource cliffs" in the scheduling exploration space: they affect exploration efficiency, always lead to severe QoS fluctuations, and could not easily be avoided by prior schedulers. To address these problems, we propose a novel ML-based intelligent scheduler, OSML. It learns the correlations among architectural hints (e.g., IPC, cache misses, memory footprint, etc.), scheduling solutions, and QoS demands based on a dataset we collected from 11 widely deployed services running on off-the-shelf servers. OSML employs multiple ML models that work collaboratively to predict QoS variations, shepherd the scheduling, and recover from QoS violations in complicated co-location cases. OSML can intelligently avoid resource cliffs during scheduling and reach an optimal solution much faster than previous approaches for co-located LC services. Experimental results show that, compared with previous studies, OSML supports higher loads and meets QoS targets with lower scheduling overheads and shorter convergence time.
Distributed training has become a pervasive and effective approach for training large neural network (NN) models with large-scale data. However, satisfying the requirements from various NN models, diverse computing resources, and their dynamic changes during a training job is quite challenging. In this study, we design our distributed training framework from a systematic end-to-end view to provide built-in adaptive ability for different scenarios, especially for industrial applications and production environments, by fully considering resource allocation, model partitioning, task placement, and distributed execution. Based on a unified distributed graph and a unified cluster object, our adaptive framework is equipped with a global cost model and a global planner, which enable arbitrary parallelism, resource-aware placement, multi-mode execution, fault tolerance, and elastic distributed training. Experiments demonstrate that our framework satisfies various requirements arising from the diversity of applications and the heterogeneity of resources, with highly competitive performance. The ERNIE language model with 260 billion parameters is efficiently trained on thousands of AI processors with 91.7% weak scalability. The throughput of models from recommender systems can be increased to 2.1x and 3.3x that of GPU-only and CPU-only training, respectively, by adopting heterogeneous pipeline asynchronous execution. Moreover, fault tolerance and elastic distributed training have been successfully applied to online industrial applications, reducing the number of long-running training jobs by 34.49% and increasing the global scheduling efficiency by 33.91% in the production environment.
Deep learning (DL) models have achieved superior performance in many application domains, including vision, language, medicine, commercial advertising, entertainment, etc. With this rapid development, both DL applications and the underlying serving hardware have demonstrated strong scaling trends, i.e., model scaling and compute scaling; for example, recent pre-trained models have hundreds of billions of parameters with ~TB-level memory consumption, while the latest GPU accelerators provide hundreds of TFLOPS. With both scaling trends, new problems and challenges emerge in DL inference serving systems, which gradually trend towards large-scale deep learning serving systems (LDS). This survey aims to summarize and categorize the emerging challenges and optimization opportunities for large-scale deep learning serving systems. By providing a novel taxonomy, summarizing the computing paradigms, and elaborating on recent technical advances, we hope this survey can shed light on new optimization perspectives and motivate novel works in large-scale deep learning system optimization.
The core of the computer business now offers subscription-based on-demand services with the help of cloud computing. Virtualization, which creates a virtual instance of a computer system running in an abstracted hardware layer, now lets us share resources among multiple users. In contrast to early distributed computing models, cloud computing provides virtually unlimited computing capabilities through its massive datacenters, and it has been incredibly popular in recent years owing to its continually growing infrastructure, user base, and hosted data volume. This article proposes a conceptual framework for a workload management paradigm in cloud settings that is both secure and performance-efficient. In this paradigm, a resource management unit performs energy- and performance-efficient virtual machine allocation, ensures the safe execution of users' applications, and protects against data breaches caused by unauthorised virtual machine access in real time. A secure virtual machine management unit controls the resource management unit and is designed to produce data on unlawful access or intercommunication. Additionally, a workload analyzer unit works simultaneously to estimate resource consumption data, helping the resource management unit be more effective during virtual machine allocation. The suggested model functions differently from existing approaches that serve the same objective, such as encrypting and decrypting data prior to transfer and using trust-based access mechanisms to prevent unauthorised access to virtual machines, which introduce extra computational overhead.
The exponential growth in demand for digital services drives massive datacenter energy consumption and negative environmental impacts. Promoting sustainable solutions to pressing energy and digital infrastructure challenges is crucial. Several hyperscale cloud providers have announced plans to power their datacenters using renewable energy. However, integrating renewables to power datacenters is challenging because the power generation is intermittent, necessitating approaches to tackle power supply variability. Hand-engineering domain-specific, heuristics-based schedulers to meet specific objective functions in such complex, dynamic green datacenter environments is time-consuming, expensive, and requires extensive tuning by domain experts. Green datacenters need smart systems and system software that employ multiple renewable energy sources (wind and solar) by intelligently adapting computing to renewable energy generation. We present RARE (Renewable energy Aware REsource management), a Deep Reinforcement Learning (DRL) job scheduler that automatically learns effective job scheduling policies while continually adapting to the datacenter's complex, dynamic environment. The resulting DRL scheduler performs better than heuristic scheduling policies under different workloads and adapts to the intermittent power supply from renewables. We demonstrate the DRL scheduler's system design parameters that, when tuned correctly, produce better performance. Finally, we demonstrate that the DRL scheduler can learn from and improve upon existing heuristic policies using offline learning.
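To ground the idea, here is a toy gym-style environment in which a scheduler earns revenue for jobs it can power from the current renewable budget and pays a small penalty for delays. It is a sketch under stated assumptions, not the RARE implementation: the power trace, job tuples, and reward shaping are illustrative, and the random policy merely stands in for a trained DRL agent.

```python
# Toy green-datacenter scheduling environment (illustrative, not RARE).
import random

class GreenDCEnv:
    def __init__(self, power_trace, jobs):
        self.power_trace = power_trace  # renewable watts per timestep
        self.queue = list(jobs)         # (cores, watts_per_core, value)
        self.t = 0

    def step(self, action):
        """action: index of a queued job to run, or None to wait."""
        budget = self.power_trace[self.t]
        reward = 0.0
        if action is not None:
            cores, watts, value = self.queue[action]
            if cores * watts <= budget:  # enough renewable power right now
                self.queue.pop(action)
                reward = value           # revenue for completing the job
            else:
                reward = -0.1            # penalty: the job must be delayed
        self.t += 1
        done = self.t >= len(self.power_trace) or not self.queue
        return (budget, len(self.queue)), reward, done

# A random baseline policy; a DRL agent would learn this choice instead.
env = GreenDCEnv(power_trace=[120, 40, 200, 80],
                 jobs=[(4, 20, 1.0), (2, 15, 0.5)])
done, total = False, 0.0
while not done:
    action = random.randrange(len(env.queue))
    _, reward, done = env.step(action)
    total += reward
print("episode reward:", total)
```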
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Coflow is a recently proposed networking abstraction to help improve the communication performance of data-parallel computing jobs. In multi-stage jobs, each job consists of multiple coflows and is represented by a directed acyclic graph (DAG). Efficiently scheduling coflows is critical to improving data-parallel computing performance in data centers. Compared with hand-tuned scheduling heuristics, the existing work DeepWeave [1] utilizes a reinforcement learning (RL) framework to generate highly efficient coflow scheduling policies automatically. It employs a graph neural network (GNN) to encode the job information into a set of embedding vectors, and feeds a flat embedding vector containing the whole job information to the policy network. However, this method has poor scalability, as it cannot cope with jobs represented by DAGs of arbitrary sizes and shapes, which would require a large policy network for processing high-dimensional embedding vectors that is difficult to train. In this paper, we first utilize a directed acyclic graph neural network (DAGNN) to process the input and propose a novel Pipelined-DAGNN that can effectively speed up the feature extraction process of the DAGNN. Next, instead of feeding a flat embedding of all coflows to the policy network, we feed it a sequence of embeddings of the schedulable coflows and output a priority sequence, so that the size of the policy network depends only on the dimension of the features rather than on the product of the feature dimension and the number of nodes in a job's DAG. Furthermore, to improve the accuracy of the priority-based scheduling policy, we incorporate a self-attention mechanism into the deep RL model to capture the interactions between different parts of the embedding sequence and make the output priority scores relevant. Based on this model, we develop a coflow scheduling algorithm for online multi-stage jobs.
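The scalability argument can be seen in a few lines: if the policy network scores each schedulable coflow's embedding independently, its parameter count depends only on the feature dimension, never on the DAG size. The sketch below uses a plain linear scoring head as a stand-in for the paper's Pipelined-DAGNN and self-attention model; all dimensions are illustrative assumptions.

```python
# Per-coflow priority scoring with a fixed-size policy head (illustrative).
import numpy as np

FEAT = 16                        # per-coflow embedding dimension
rng = np.random.default_rng(0)
W = rng.normal(size=(FEAT, 1))   # policy head: size independent of the DAG

def priority_scores(embeddings):
    """embeddings: (num_schedulable_coflows, FEAT) -> one score per coflow."""
    return (embeddings @ W).ravel()

def schedule_order(embeddings):
    # Higher score = higher priority; works for any number of coflows.
    return np.argsort(-priority_scores(embeddings))

emb_small = rng.normal(size=(3, FEAT))   # a job with 3 schedulable coflows
emb_large = rng.normal(size=(50, FEAT))  # a much larger job, same network
print(schedule_order(emb_small))
print(schedule_order(emb_large)[:5])
```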
Computer architecture and systems have long been optimized for the efficient execution of machine learning (ML) models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity, and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to computer architecture and system design. First, we perform a high-level taxonomy by considering the typical roles that ML techniques take in architecture/system design, i.e., either for fast predictive modeling or as the design methodology. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques, and the typical ML techniques employed to resolve each of them. Besides emphasizing computer architecture in the narrow sense, we adopt the concept that data centers can be recognized as warehouse-scale computers; sketchy discussions are provided on adjacent computer systems, such as code generation and compilers; we also give attention to how ML techniques can aid and transform design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
We design a user-friendly and scalable knowledge graph construction (KGC) system for extracting structured knowledge from unstructured corpora. Different from existing KGC systems, Gbuilder provides a flexible and user-defined pipeline that can embrace the rapid development of IE models. More built-in template-based or heuristic operators and programmable operators are available for adapting to data from different domains. Furthermore, we also design a cloud-based self-adaptive task scheduling scheme for Gbuilder to ensure its scalability for large-scale knowledge graph construction. Experimental evaluations not only demonstrate Gbuilder's ability to organize multiple information extraction models in a unified platform, but also confirm its high scalability on large-scale KGC tasks.
Deep reinforcement learning (DRL)-based neural schedulers have shown great potential for solving real-world resource allocation problems, having demonstrated significant performance gains in the domain of cluster computing. In this paper, we investigate the feasibility of neural schedulers for the domain of system-on-chip (SoC) resource allocation through extensive experiments and comparisons with non-neural, heuristic schedulers. The key findings are threefold. First, neural schedulers designed for the cluster computing domain do not work well for SoCs, due to i) the heterogeneity of SoC computing resources and ii) the variable action set caused by the randomness in incoming jobs. Second, our novel neural scheduler technique, Eclectic Interaction Matching (EIM), overcomes the above challenges, thereby significantly improving existing neural schedulers; specifically, we rationalize the underlying reasons behind the performance gains of EIM-based neural schedulers. Third, we discover that the ratio of the average processing element (PE) switching delay to the average PE computation time also significantly affects the performance of neural SoC schedulers, even with EIM. Consequently, future neural SoC scheduler designs must consider this metric, as well as its implementation overhead, for practicality.
Radio access network (RAN) technologies continue to witness massive growth, with Open RAN gaining the most recent momentum. In the O-RAN specifications, the RAN intelligent controller (RIC) serves as an automation host. This article introduces principles for machine learning (ML), in particular reinforcement learning (RL), relevant to the O-RAN stack. Furthermore, we review state-of-the-art research in wireless networks and cast it onto the RAN framework and the hierarchy of the O-RAN architecture. We provide a taxonomy of the challenges faced by ML/RL models throughout their development lifecycle: from system specification to production deployment (data acquisition, model design, testing, management, etc.). To address the challenges, we integrate a set of existing MLOps principles with the unique characteristics that arise when RL agents are considered. This paper discusses a systematic lifecycle model development, testing, and validation pipeline, termed RLOps. We discuss all fundamental parts of RLOps, including model specification, development and distillation, production environment serving, operations monitoring, safety/security, and the data engineering platform. Based on these principles, we propose best practices for RLOps to achieve an automated and reproducible model development process.
As the number of distributed services (or microservices) of cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications, with minimal human effort and operational overhead. To this end, we ask: can we systematically break resource management down into subproblems solvable by practical policies? Based on the notion of a CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level and agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show that Autothrottle saves up to 26.21% of CPU cores over the best-performing baselines across applications, while maintaining the latency SLO.
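As a sketch of the local level of such a two-level design, the loop below adjusts one microservice's CPU limit so that its observed CPU-throttle ratio tracks a target handed down by the global controller. The AIMD-style update rule and all constants are illustrative assumptions, not Autothrottle's exact controller.

```python
# Per-microservice throttle-feedback controller (illustrative sketch).
def local_controller(cpu_limit, throttle_ratio, target,
                     step_up=0.1, step_down=0.02, min_cores=0.2):
    """One control iteration: return the new CPU limit (in cores)."""
    if throttle_ratio > target:   # too much throttling -> add CPU quickly
        return cpu_limit + step_up
    # below target -> reclaim CPU slowly to avoid latency SLO violations
    return max(min_cores, cpu_limit - step_down)

limit = 2.0
for observed in [0.30, 0.25, 0.12, 0.05, 0.02]:  # sampled throttle ratios
    limit = local_controller(limit, observed, target=0.10)
    print(f"throttle={observed:.2f} -> cpu_limit={limit:.2f}")
```

Raising the limit aggressively under heavy throttling protects the SLO, while reclaiming slack slowly is what yields the CPU savings.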
Research process automation -- the reliable, efficient, and reproducible execution of processes involving scientific instruments, computers, data stores, and other resources -- has emerged as an essential element of modern science. We report here on new services within the Globus research data management platform that enable the specification of diverse research processes as reusable sets of actions, called flows, and the execution of such flows in heterogeneous research environments. To support flows with broad spatial extent (e.g., from scientific instrument to remote data center) and temporal extent (from seconds to weeks), these Globus automation services feature: 1) cloud hosting for the reliable execution of long-lived flows despite sporadic failures; 2) a declarative notation and an extensible asynchronous action provider API for defining and executing a wide variety of actions and flow specifications involving arbitrary resources; 3) authorization delegation mechanisms for the secure invocation of actions. These services allow researchers to outsource and automate the management of a broad range of research tasks to a reliable, scalable, and secure cloud platform. We present use cases for Globus automation services.
Scientific discovery increasingly requires sophisticated and scalable workflows. Workflows have become the "new applications", wherein multi-scale computing campaigns comprise multiple and heterogeneous executable tasks. In particular, the introduction of AI/ML models into traditional HPC workflows has been an enabler of highly accurate modeling, typically with reduced computational needs compared to traditional methods. This chapter discusses various modes of integrating AI/ML models into HPC computations, resulting in distinct types of AI-coupled HPC workflows. The increasing need for coupling AI/ML and HPC across scientific domains is motivated, and then exemplified by a number of production-grade use cases for each mode. We also discuss the primary challenges of extreme-scale AI-coupled HPC campaigns -- task heterogeneity, adaptivity, performance -- and several framework and middleware solutions that aim to address them. While both HPC workflow and AI/ML computing paradigms are independently effective, we highlight how their integration, and ultimate convergence, is leading to significant improvements in scientific performance across a range of domains, ultimately enabling scientific explorations otherwise unattainable.
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
With the increasing number of machine and deep learning applications in high energy physics, convenient access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a generative adversarial network (GAN) in a parallel environment, using the TensorFlow data-parallel strategy. More specifically, we parallelize the training process over multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom loop, optimized to give higher control over the elements assigned to each GPU worker or TPU core. The quality of the generated data is compared to Monte Carlo simulation. Linear speed-ups of the training process are obtained, while retaining most of the performance in terms of physics results. Additionally, we benchmark the aforementioned approaches, at scale, over multiple GPU nodes, deploying the training process on different public cloud providers, seeking overall efficiency and cost-effectiveness. The combination of data science, cloud deployment options, and associated economics allows us to burst out heterogeneously, exploring the full potential of cloud-based services.
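For reference, a minimal sketch of the built-in data-parallel logic with tf.distribute is shown below: tf.distribute.MirroredStrategy replicates the model across local GPUs and model.fit averages gradients across replicas; for TPUs one would substitute tf.distribute.TPUStrategy. The tiny dense generator, the MSE stand-in loss, and all shapes are illustrative assumptions; a full GAN would need a custom training loop, which is the second variant the work compares.

```python
# Data-parallel training with TensorFlow's built-in logic (illustrative).
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # replicates across local GPUs
print("replicas:", strategy.num_replicas_in_sync)

with strategy.scope():  # variables created here are mirrored per replica
    generator = tf.keras.Sequential([
        tf.keras.Input(shape=(64,)),               # latent noise vector
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(25 * 25, activation="sigmoid"),  # toy image
    ])
    generator.compile(optimizer="adam", loss="mse")

# Scale the global batch with the number of replicas.
global_batch = 64 * strategy.num_replicas_in_sync
noise = tf.random.normal((1024, 64))
target = tf.random.uniform((1024, 25 * 25))
dataset = tf.data.Dataset.from_tensor_slices((noise, target)).batch(global_batch)

generator.fit(dataset, epochs=1)  # built-in logic averages gradients
```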
This paper studies a model for online job scheduling in green datacenters. In green datacenters, resource availability depends on the power supply from the renewables. Intermittent power supply from renewables leads to intermittent resource availability, inducing job delays (and associated costs). Green datacenter operators must intelligently manage their workloads and available power supply to extract maximum benefits. The scheduler's objective is to schedule jobs on a set of resources to maximize the total value (revenue) while minimizing the overall job delay. A trade-off exists between achieving high job value on the one hand and low expected delays on the other; the aims of achieving high rewards and low costs are thus in opposition. In addition, datacenter operators often prioritize multiple objectives, including high system utilization and job completion. To accomplish the opposing goals of maximizing total job value and minimizing job delays, we apply Proportional-Integral-Derivative (PID) Lagrangian methods from Deep Reinforcement Learning to the job scheduling problem in the green datacenter environment. Lagrangian methods are widely used algorithms for constrained optimization problems. We adopt a controls perspective to learn the Lagrange multiplier with proportional, integral, and derivative control, achieving favorable learning dynamics. Feedback control defines cost terms for the learning agent, monitors the cost limits during training, and continuously adjusts the learning parameters to achieve stable performance. Our experiments demonstrate improved performance compared to scheduling policies without the PID Lagrangian methods. Experimental results illustrate the effectiveness of the Constraint Controlled Reinforcement Learning (CoCoRL) scheduler, which simultaneously satisfies multiple objectives.
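The control law itself is compact. The sketch below applies a PID update to the Lagrange multiplier that weights the delay cost in the policy loss, in the spirit of PID Lagrangian methods; the gains, cost limit, and synthetic cost sequence are illustrative assumptions rather than the paper's configuration.

```python
# PID-controlled Lagrange multiplier for a cost constraint (illustrative).
class PIDLagrangian:
    def __init__(self, kp=0.5, ki=0.1, kd=0.2, cost_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.cost_limit = cost_limit
        self.integral = 0.0
        self.prev_delta = 0.0

    def update(self, episode_cost):
        """Return the multiplier weighting the cost term in the loss."""
        delta = episode_cost - self.cost_limit          # constraint violation
        self.integral = max(0.0, self.integral + delta) # projected integral
        derivative = max(0.0, delta - self.prev_delta)  # react to increases
        self.prev_delta = delta
        return max(0.0, self.kp * delta
                        + self.ki * self.integral
                        + self.kd * derivative)

pid = PIDLagrangian()
for cost in [2.0, 1.6, 1.2, 0.9, 0.8]:  # average job delay per episode
    lam = pid.update(cost)
    # The policy objective would be reward - lam * cost, so lam rises
    # while delays exceed the limit and relaxes once they fall below it.
    print(f"cost={cost:.1f} -> lambda={lam:.3f}")
```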
Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling up a cluster involves non-negligible time to bootstrap new instances, as happens frequently in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status in which the system is expected to evolve in the near future. Our approach leverages time-series forecasting techniques, such as those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and applies a threshold-based scaling policy to them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger appropriate scaling actions ahead of time to accommodate the expected increase in traffic. We prototyped our approach as an open-source OpenStack component, which relies on, and extends, the monitoring capabilities offered by Monasca, adding predictive metrics that can be leveraged by orchestration components such as Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, compared with simple linear regression and a traditional non-predictive auto-scaling policy. However, the proposed framework allows the prediction policy to be easily customized as needed.
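The forecast-then-threshold idea fits in a few lines. The sketch below extrapolates recent CPU usage with the simple linear-regression baseline mentioned above and scales out if the forecast, rather than the current value, crosses the threshold; the window size, horizon, and threshold are illustrative assumptions.

```python
# Predictive threshold-based autoscaling policy (illustrative sketch).
import numpy as np

def forecast(history, horizon=3):
    """Fit a line to recent samples and extrapolate `horizon` steps ahead."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    future_t = np.arange(len(history), len(history) + horizon)
    return slope * future_t + intercept

def scaling_decision(history, threshold=80.0):
    predicted = forecast(history)
    if predicted.max() > threshold:
        return "scale-out"  # bootstrap instances before the peak arrives
    return "steady"

cpu_usage = [35, 42, 51, 63, 72]    # % CPU, rising toward the threshold
print(forecast(cpu_usage))          # ~[81.1, 90.6, 100.1]
print(scaling_decision(cpu_usage))  # "scale-out", triggered ahead of time
```

A purely reactive rule would wait until usage actually exceeded 80%, by which time newly bootstrapped instances would arrive too late for the peak.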