Past research has proposed many hardware prefetching techniques, most of which rely on exploiting one specific type of program context information (e.g., program counter, cacheline address) to predict future memory accesses. These techniques either completely ignore a prefetcher's undesirable effects (e.g., memory bandwidth usage) on the overall system, or incorporate system-level feedback as an afterthought to a system-unaware prefetch algorithm. We show that prior prefetchers often lose their performance benefit over a wide range of workloads and system configurations due to their inherent inability to take multiple different types of program context and system-level feedback information into account while prefetching. In this paper, we make a case for designing a holistic prefetch algorithm that learns to prefetch using multiple different types of program context and system-level feedback information. To this end, we propose Pythia, which formulates the prefetcher as a reinforcement learning agent. For every demand request, Pythia observes multiple different types of program context information to make a prefetch decision. For every prefetch decision, Pythia receives a numerical reward that evaluates prefetch quality under the current memory bandwidth usage. Pythia uses this reward to reinforce the correlation between program context information and prefetch decisions, in order to generate highly accurate, timely, and system-aware prefetch requests in the future. Our extensive evaluations using simulation and hardware synthesis show that Pythia outperforms multiple state-of-the-art prefetchers over a wide range of workloads and system configurations, while incurring only 1.03% area overhead over a desktop-class processor and requiring no software changes in workloads. The source code of Pythia can be freely downloaded from https://github.com/cmu-safari/pythia.
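For intuition, the reward-driven formulation described above can be sketched as a tiny tabular Q-learning agent that maps a (program context, prefetch offset) pair to a learned value and discounts the reward for useful prefetches when memory bandwidth is under pressure. The context tuple, offset list, reward values, and hyperparameters below are illustrative assumptions, not Pythia's actual design.

```python
import random
from collections import defaultdict

class RLPrefetcherSketch:
    """Toy Q-learning prefetcher: state = program context, action = prefetch offset."""

    OFFSETS = [0, 1, 2, 4, 8]  # 0 stands for "do not prefetch" (assumed action set)

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.q = defaultdict(float)              # Q[(state, offset)] -> learned value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select_offset(self, state):
        if random.random() < self.epsilon:       # occasional exploration
            return random.choice(self.OFFSETS)
        return max(self.OFFSETS, key=lambda o: self.q[(state, o)])

    def update(self, state, offset, reward, next_state):
        best_next = max(self.q[(next_state, o)] for o in self.OFFSETS)
        td_target = reward + self.gamma * best_next
        self.q[(state, offset)] += self.alpha * (td_target - self.q[(state, offset)])

def reward(prefetch_useful, bandwidth_utilization):
    """Illustrative reward: useful prefetches are worth less under bandwidth pressure."""
    if prefetch_useful:
        return 1.0 if bandwidth_utilization < 0.75 else 0.25
    return -1.0 if bandwidth_utilization >= 0.75 else -0.25

# Example step; the context tuple (load PC, recent address delta) is made up.
agent = RLPrefetcherSketch()
state = (0x400f3a, 2)
offset = agent.select_offset(state)
agent.update(state, offset, reward(True, bandwidth_utilization=0.4), next_state=state)
```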
Long-latency load requests continue to limit the performance of high-performance processors. To increase the latency tolerance of a processor, architects have primarily relied on two key techniques: sophisticated data prefetchers and large on-chip caches. In this work, we show that: 1) even a state-of-the-art prefetcher can predict only half of the off-chip load requests on average across a wide range of workloads, and 2) due to the increasing size and complexity of on-chip caches, a large fraction of the latency of an off-chip load request is spent accessing the on-chip cache hierarchy. The goal of this work is to accelerate off-chip load requests by removing the on-chip cache access latency from their critical path. To this end, we propose a new technique called Hermes, whose key idea is to: 1) accurately predict which load requests are likely to go off-chip, and 2) speculatively fetch the data required by the predicted off-chip loads directly from main memory, while also concurrently accessing the cache hierarchy for such loads. To enable Hermes, we develop a new lightweight, perceptron-based off-chip load prediction technique that learns to identify off-chip load requests using multiple program features (e.g., the sequence of program counters). For every load request, the predictor observes a set of program features to predict whether the load will go off-chip. If the load is predicted to go off-chip, Hermes issues a speculative request directly to the memory controller as soon as the load's physical address is generated. If the prediction is correct, the load eventually misses the cache hierarchy and waits for the ongoing speculative request to finish, thus hiding the on-chip cache hierarchy access latency from the critical path of the off-chip load. Our evaluation shows that Hermes significantly improves performance over a state-of-the-art baseline. We open-source Hermes.
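The prediction mechanism described above can be illustrated with a minimal hashed-perceptron-style sketch: several program features index per-feature weight tables, the indexed weights are summed, and the sum is compared against a threshold; weights are nudged up or down once the load's real cache outcome is known. The table size, threshold, weight range, and example feature tuple are assumptions, not Hermes' actual configuration.

```python
class OffChipPerceptronSketch:
    """Toy hashed-perceptron predictor for 'will this load go off-chip?'."""

    TABLE_SIZE = 1024
    THRESHOLD = 2            # assumed activation threshold
    W_MAX, W_MIN = 15, -16   # saturating weight range (assumption)

    def __init__(self, n_features=3):
        self.tables = [[0] * self.TABLE_SIZE for _ in range(n_features)]

    def _indices(self, features):
        return [hash(f) % self.TABLE_SIZE for f in features]

    def predict(self, features):
        score = sum(t[i] for t, i in zip(self.tables, self._indices(features)))
        return score >= self.THRESHOLD        # True -> issue a speculative memory request

    def train(self, features, went_off_chip):
        step = 1 if went_off_chip else -1
        for t, i in zip(self.tables, self._indices(features)):
            t[i] = max(self.W_MIN, min(self.W_MAX, t[i] + step))

# Example usage with a made-up feature tuple (load PC, previous PC, page offset).
pred = OffChipPerceptronSketch()
features = (0x401a2c, 0x401a10, 0x3f)
if pred.predict(features):
    pass  # the core would send a speculative request to the memory controller here
pred.train(features, went_off_chip=True)  # update once the real cache outcome is known
```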
While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by the extremely long time needed to evaluate realistic applications under investigation. This work describes a concerted effort in which machine learning (ML) is used to accelerate discrete-event simulation. First, an ML-based instruction latency prediction framework that accounts for both static instruction properties and dynamic processor state is constructed. Then, a GPU-accelerated parallel simulator is implemented based on the proposed instruction latency predictor, and its simulation accuracy and throughput are validated and evaluated against a state-of-the-art simulator. Leveraging modern GPUs, the ML-based simulator significantly outperforms the traditional simulator.
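A rough sketch of how a learned latency predictor can drive a simulation loop: each instruction's static properties and a bit of dynamic state are mapped to a predicted latency, and the simulator simply advances time by that amount instead of modeling the pipeline cycle by cycle. The feature set, the linear stand-in model, and the toy miss heuristic below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Instr:
    opcode_class: int      # e.g., 0 = ALU, 1 = load, 2 = store, 3 = branch (assumed encoding)
    depends_on_load: bool

def predict_latency(instr, recent_miss, weights=(1.0, 3.0, 2.0)):
    """Stand-in for a learned regressor: a linear combination of simple features."""
    w_base, w_load, w_miss = weights
    latency = w_base
    if instr.opcode_class == 1:
        latency += w_load
    if instr.depends_on_load and recent_miss:
        latency += w_miss
    return latency

def simulate(trace):
    """Simulation loop that advances time by predicted latencies."""
    cycle, recent_miss = 0.0, False
    for instr in trace:
        cycle += predict_latency(instr, recent_miss)
        recent_miss = (instr.opcode_class == 1 and cycle % 7 < 1)  # toy miss model
    return cycle

print(simulate([Instr(1, False), Instr(0, True), Instr(3, False)]))
```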
Computer architecture and systems have long been optimized for the efficient execution of machine learning (ML) models. Now it is time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity, and completing a virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to computer architecture and system design. First, we consider the typical roles of ML techniques in architecture/system design, i.e., either for fast predictive modeling or as the design methodology, and perform a high-level taxonomy. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques and the typical ML techniques employed to resolve each of them. In addition to emphasizing computer architecture in the narrow sense, we adopt the view that data centers can be recognized as warehouse-scale computers; we provide brief discussions of adjacent computer systems topics, such as code generation and compilers; and we also pay attention to how ML techniques can help and transform design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
We develop a reinforcement learning (RL) framework for applications that deal with sequential decisions and exogenous uncertainty, such as resource allocation and inventory management. In these applications, the uncertainty is only due to exogenous variables such as future demands. A popular approach is to predict the exogenous variables using historical data and then plan against the predictions. However, this indirect approach requires a high-fidelity model of the exogenous process to guarantee good downstream decisions, which can be impractical when the exogenous process is complex. In this work, we propose an alternative approach based on hindsight learning that sidesteps modeling the exogenous process. Our key insight is that, unlike in Sim2Real RL, in these applications we can revisit past decisions in the historical data and derive counterfactual consequences for other actions. Our framework uses hindsight-optimal actions as the policy training signal and has strong theoretical guarantees on decision-making performance. We develop an algorithm using our framework to allocate compute resources for real-world Microsoft Azure workloads. The results show that our approach learns better policies than domain-specific heuristics and Sim2Real RL baselines.
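A minimal sketch of the hindsight-learning idea on a toy single-resource allocation task: because demand is exogenous, the logged trace lets us compute the action that would have been optimal in hindsight at each step, and a policy is then fit by supervised imitation of those actions. The toy objective, features, and least-squares policy class are assumptions, not the paper's algorithm.

```python
import numpy as np

def hindsight_optimal_action(future_demand, capacity=10):
    """With the exogenous future known, match the peak upcoming demand
    (toy objective: avoid shortage without over-allocating)."""
    return min(capacity, int(np.max(future_demand)))

def build_hindsight_dataset(demand_trace, horizon=5):
    """Label each timestep's state with the action that is optimal in hindsight."""
    states, labels = [], []
    for t in range(len(demand_trace) - horizon):
        state = demand_trace[max(0, t - 3):t + 1]          # recent demand as features
        states.append(np.pad(state, (4 - len(state), 0)))
        labels.append(hindsight_optimal_action(demand_trace[t + 1:t + 1 + horizon]))
    return np.array(states, dtype=float), np.array(labels, dtype=float)

def fit_linear_policy(states, labels):
    """Supervised imitation of hindsight-optimal actions (least squares as a stand-in
    for whatever policy class one would actually use)."""
    X = np.hstack([states, np.ones((len(states), 1))])
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return lambda s: float(np.dot(np.append(s, 1.0), w))

demand = np.random.default_rng(0).integers(0, 8, size=200)
X, y = build_hindsight_dataset(demand)
policy = fit_linear_policy(X, y)
print(round(policy(X[0]), 2))
```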
The exponential growth in demand for digital services drives massive datacenter energy consumption and negative environmental impacts. Promoting sustainable solutions to pressing energy and digital infrastructure challenges is crucial. Several hyperscale cloud providers have announced plans to power their datacenters using renewable energy. However, integrating renewables to power the datacenters is challenging because the power generation is intermittent, necessitating approaches to tackle power supply variability. Hand engineering domain-specific heuristics-based schedulers to meet specific objective functions in such complex dynamic green datacenter environments is time-consuming, expensive, and requires extensive tuning by domain experts. The green datacenters need smart systems and system software to employ multiple renewable energy sources (wind and solar) by intelligently adapting computing to renewable energy generation. We present RARE (Renewable energy Aware REsource management), a Deep Reinforcement Learning (DRL) job scheduler that automatically learns effective job scheduling policies while continually adapting to datacenters' complex dynamic environment. The resulting DRL scheduler performs better than heuristic scheduling policies with different workloads and adapts to the intermittent power supply from renewables. We demonstrate DRL scheduler system design parameters that, when tuned correctly, produce better performance. Finally, we demonstrate that the DRL scheduler can learn from and improve upon existing heuristic policies using Offline Learning.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Deep reinforcement learning (DRL)-based neural schedulers have shown great potential for solving real-world resource allocation problems, as they have demonstrated significant performance gains in the domain of cluster computing. In this paper, we investigate the feasibility of neural schedulers for the domain of system-on-chip (SoC) resource allocation through extensive experiments and comparison with non-neural, heuristic schedulers. The key findings are threefold. First, neural schedulers designed for the cluster computing domain do not work well for the SoC domain, due to i) the heterogeneity of SoC computing resources and ii) the variable action set caused by randomness in incoming jobs. Second, our novel neural scheduler technique, Eclectic Interaction Matching (EIM), overcomes these challenges and thus significantly improves upon existing neural schedulers; we further rationalize the underlying reasons behind the performance gains of EIM-based neural schedulers. Third, we find that the ratio of the average processing element (PE) switching delay to the average PE computation time also significantly affects the performance of neural SoC schedulers, even with EIM. Consequently, future neural SoC scheduler designs must take this metric, as well as its implementation overhead, into account to be practical.
2048 is a single-player stochastic puzzle game. This intriguing and addictive game has been popular worldwide and has attracted researchers to develop game-playing programs. Due to its simplicity and complexity, 2048 has become an interesting and challenging platform for evaluating the effectiveness of machine learning methods. This dissertation conducts comprehensive research on reinforcement learning and computer game algorithms for 2048. First, this dissertation proposes optimistic temporal difference learning, which significantly improves the quality of learning by employing optimistic initialization to encourage exploration for 2048. Furthermore, based on this approach, a state-of-the-art program for 2048 is developed, which achieves the highest performance among all learning-based programs, namely an average score of 625377 points and a rate of 72% for reaching 32768-tiles. Second, this dissertation investigates several techniques related to 2048, including the n-tuple network ensemble learning, Monte Carlo tree search, and deep reinforcement learning. These techniques are promising for further improving the performance of the current state-of-the-art program. Finally, this dissertation discusses pedagogical applications related to 2048 by proposing course designs and summarizing the teaching experience. The proposed course designs use 2048-like games as materials for beginners to learn reinforcement learning and computer game algorithms. The courses have been successfully taught to graduate-level students and were well received according to student feedback.
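A minimal sketch of the optimistic temporal-difference idea on a toy chain environment: state values start at an optimistically high constant, which draws the agent toward less-visited states, and TD(0) updates gradually pull the values down toward realistic estimates. The environment, constants, and learning rate are assumptions; this is not the dissertation's n-tuple-network implementation for 2048.

```python
import random

def optimistic_td_demo(n_states=6, optimistic_init=100.0, alpha=0.1, episodes=500):
    """TD(0) on a toy chain: start at state 0, move right/left at random,
    reward 1.0 only when the right end is reached."""
    values = [optimistic_init] * n_states          # optimism encourages exploration
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            s_next = min(n_states - 1, s + 1) if random.random() < 0.6 else max(0, s - 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            target = r + (0.0 if s_next == n_states - 1 else values[s_next])
            values[s] += alpha * (target - values[s])   # TD(0) update
            s = s_next
    return [round(v, 2) for v in values[:-1]]  # terminal state's value is never updated

print(optimistic_td_demo())
```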
The future Internet involves several emerging technologies, such as 5G and beyond-5G networks, vehicular networks, unmanned aerial vehicle (UAV) networks, and the Internet of Things (IoT). Moreover, the future Internet is becoming heterogeneous and decentralized, with a large number of involved network entities. Each entity may need to make its own local decisions to improve network performance under dynamic and uncertain network environments. Standard learning algorithms, such as single-agent reinforcement learning (RL) or deep reinforcement learning (DRL), have recently been used to enable each network entity, as an agent, to adaptively learn an optimal decision-making policy through interaction with the unknown environment. However, such algorithms fail to model the cooperation or competition among network entities and simply treat other entities as part of the environment, which may lead to the non-stationarity issue. Multi-agent reinforcement learning (MARL) allows each network entity to learn its optimal policy by observing not only the environment but also the policies of other entities. As a result, MARL can significantly improve the learning efficiency of network entities, and it has recently been used to solve various issues in emerging networks. In this paper, we therefore review the applications of MARL in emerging networks. In particular, we provide a tutorial on MARL and a comprehensive survey of MARL applications in the next-generation Internet. We first introduce single-agent RL and MARL, and then review a number of applications of MARL that address emerging issues in the future Internet, including network access, transmit power control, computation offloading, content caching, packet routing, trajectory design for UAV networks, and network security.
This paper is a technical overview of DeepMind and Google's recent work on reinforcement learning for controlling commercial cooling systems. Building on expertise that began with cooling Google's data centers more efficiently, we recently conducted live experiments on two real-world facilities in partnership with Trane Technologies, a building management system provider. These live experiments had a variety of challenges in areas such as evaluation, learning from offline data, and constraint satisfaction. Our paper describes these challenges in the hope that awareness of them will benefit future applied RL work. We also describe the way we adapted our RL system to deal with these challenges, resulting in energy savings of approximately 9% and 13% respectively at the two live experiment sites.
Performance metrics-driven context caching has a profound impact on throughput and response time in distributed context management systems for real-time context queries. This paper proposes a reinforcement learning-based approach to adaptively cache context with the objective of minimizing the cost incurred by context management systems in responding to context queries. Our novel algorithms enable context queries and sub-queries to reuse and repurpose cached context in an efficient manner. This approach differs from traditional data caching approaches in three main ways. First, we make selective context cache admissions with no prior knowledge of the context or the context query load. Second, we develop and incorporate innovative heuristic models to calculate the expected performance of caching an item when making the decisions. Third, our strategy defines a time-aware continuous cache action space. We present two reinforcement learning agents, a value-function-estimating actor-critic agent and a policy search agent using the deep deterministic policy gradient method. The paper also proposes adaptive policies such as eviction and cache memory scaling to complement our objective. Our method is evaluated using a synthetically generated load of context sub-queries and a synthetic data set inspired by real-world data and query samples. We further investigate optimal adaptive caching configurations under different settings. This paper presents, compares, and discusses our findings that the proposed selective caching methods reach short- and long-term cost- and performance-efficiency. The paper demonstrates that the proposed methods outperform other modes of context management, such as redirector mode and database mode, as well as the cache-all policy, by up to 60% in cost efficiency.
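In the spirit of the selective admission described above, a minimal value-based sketch: estimate the expected benefit of caching an item from its observed access rate, the retrieval cost avoided per hit, and a chosen time-to-live, and admit it only when that estimate is positive. The estimator, cost model, and TTL are illustrative assumptions, not the paper's heuristic models or its actor-critic/DDPG agents.

```python
import time
from collections import defaultdict

class SelectiveContextCacheSketch:
    """Admit a context item only when its estimated caching benefit is positive."""

    def __init__(self, retrieval_cost=5.0, storage_cost_per_s=0.01):
        self.cache = {}
        self.access_counts = defaultdict(int)
        self.first_seen = {}
        self.retrieval_cost = retrieval_cost
        self.storage_cost_per_s = storage_cost_per_s

    def _expected_benefit(self, key, ttl_s):
        """Expected hits during the TTL times cost avoided per hit, minus storage cost."""
        now = time.time()
        age = max(now - self.first_seen.get(key, now), 1e-3)
        rate = self.access_counts[key] / age            # observed accesses per second
        return rate * ttl_s * self.retrieval_cost - ttl_s * self.storage_cost_per_s

    def get(self, key, fetch_fn, ttl_s=30.0):
        self.first_seen.setdefault(key, time.time())
        self.access_counts[key] += 1
        if key in self.cache:
            return self.cache[key]                      # cache hit: reuse cached context
        value = fetch_fn(key)                           # cache miss: retrieve from source
        if self._expected_benefit(key, ttl_s) > 0:      # selective admission decision
            self.cache[key] = value
        return value

cache = SelectiveContextCacheSketch()
print(cache.get("room42/temperature", lambda k: 21.5))
```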
Monte Carlo Tree Search (MCTS) is a powerful approach to designing game-playing bots and solving sequential decision problems. The method relies on an intelligent tree search that balances exploration and exploitation: MCTS performs random sampling in the form of simulations and stores statistics of actions to make better-informed choices in each subsequent iteration. The method has become a state-of-the-art technique for combinatorial games; however, in more complex games (e.g., those with a high branching factor or real-time ones), as well as in various practical domains (e.g., transportation, scheduling, or security), effective MCTS application often requires problem-dependent modifications or integration with other techniques. Such domain-specific modifications and hybrid approaches are the main focus of this survey. The last major MCTS survey was published in 2012; contributions that have appeared since its release are of particular interest.
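A compact UCT-style sketch of the loop summarized above: selection with an upper-confidence rule, expansion of one untried move, a caller-supplied simulation/evaluation of the reached state, and backpropagation of the statistics. The toy counting game and the exploration constant are assumptions for illustration.

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct_search(root_state, legal_moves, apply_move, rollout_value, iters=1000, c=1.4):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend while all moves are expanded, using the UCB1 rule.
        while node.children and len(node.children) == len(legal_moves(node.state)):
            node = max(node.children.values(),
                       key=lambda n: n.value / n.visits
                       + c * math.sqrt(math.log(node.visits) / n.visits))
        # Expansion: add one untried move, if any.
        untried = [m for m in legal_moves(node.state) if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(apply_move(node.state, m), parent=node)
            node = node.children[m]
        # Simulation (caller-supplied) and backpropagation of statistics.
        reward = rollout_value(node.state)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

# Toy problem: reach exactly 10 by adding 1 or 2; reward 1.0 only for hitting 10.
best = uct_search(0,
                  legal_moves=lambda s: [1, 2] if s < 10 else [],
                  apply_move=lambda s, m: s + m,
                  rollout_value=lambda s: 1.0 if s == 10 else random.random() * 0.1)
print(best)
```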
Deep reinforcement learning (RL) has led to many recent and groundbreaking advances. However, these advances have often come at the cost of both increased scale in the underlying architectures being trained and increased complexity of the RL algorithms used to train them. These increases have in turn made it more difficult for researchers to rapidly prototype new ideas or reproduce published RL algorithms. To address these concerns, this work describes Acme, a framework for constructing novel RL algorithms that is specifically designed to enable agents built from simple, modular components that can be used at various scales of execution. While the primary goal of Acme is to provide a framework for algorithm development, a secondary goal is to provide simple reference implementations of important or state-of-the-art algorithms. These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research. In this work, we describe the major design decisions made within Acme and give further details on how its components can be used to implement various algorithms. Our experiments provide baselines for a number of common and state-of-the-art algorithms and show how these algorithms can be scaled up for much larger and more complex environments. This highlights one of the major benefits of Acme, namely that it can be used to implement large, distributed RL algorithms that run at very large scales while still maintaining the inherent readability of the implementation. This work presents a second version of the paper, which coincides with increased modularity, additional emphasis on offline, imitation, and learning-from-demonstrations algorithms, and various new agents implemented as part of Acme.
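To illustrate the kind of actor/learner modularity such a framework encourages, here is a generic sketch (not Acme's actual API): an actor interacts with an environment and writes transitions to a replay buffer, while a learner samples batches from it and updates its parameters. The class names, interfaces, toy environment, and "update" rule are assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.items = deque(maxlen=capacity)
    def add(self, transition):
        self.items.append(transition)
    def sample(self, batch_size):
        return random.sample(list(self.items), min(batch_size, len(self.items)))

class Actor:
    """Interacts with an environment and writes transitions to replay."""
    def __init__(self, env, policy, replay):
        self.env, self.policy, self.replay = env, policy, replay
    def run_episode(self):
        obs, done = self.env.reset(), False
        while not done:
            action = self.policy(obs)
            next_obs, reward, done = self.env.step(action)
            self.replay.add((obs, action, reward, next_obs, done))
            obs = next_obs

class Learner:
    """Consumes sampled batches and updates parameters (toy average-reward 'update')."""
    def __init__(self, replay):
        self.replay, self.baseline = replay, 0.0
    def step(self, batch_size=32):
        batch = self.replay.sample(batch_size)
        if batch:
            rewards = [t[2] for t in batch]
            self.baseline += 0.1 * (sum(rewards) / len(rewards) - self.baseline)

class CoinFlipEnv:
    """Minimal stand-in environment: one step per episode, random reward."""
    def reset(self): return 0
    def step(self, action): return 0, random.random(), True

replay = ReplayBuffer()
Actor(CoinFlipEnv(), policy=lambda obs: 0, replay=replay).run_episode()
Learner(replay).step()
```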
As the number of distributed services (or microservices) of cloud-native applications grows, resource management becomes a challenging task. These applications tend to be user-facing and latency-sensitive, and our goal is to continuously minimize the amount of CPU resources allocated while still satisfying the application latency SLO. Although previous efforts have proposed simple heuristics and sophisticated ML-based techniques, we believe that a practical resource manager should accurately scale CPU resources for diverse applications, with minimum human efforts and operation overheads. To this end, we ask: can we systematically break resource management down to subproblems solvable by practical policies? Based on the notion of CPU-throttle-based performance target, we decouple the mechanisms of SLO feedback and resource control, and implement a two-level framework -- Autothrottle. It combines a lightweight learned controller at the global level, and agile per-microservice controllers at the local level. We evaluate Autothrottle on three microservice applications, with both short-term and 21-day production workload traces. Empirical results show Autothrottle's superior CPU core savings up to 26.21% over the best-performing baselines across applications, while maintaining the latency SLO.
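A rough sketch of the two-level idea, under assumptions: a global controller turns end-to-end latency feedback into a per-service CPU-throttle target, and a local per-microservice controller scales that service's CPU allocation to track the target. The thresholds, step sizes, and interfaces are made up for illustration and are not Autothrottle's actual algorithm.

```python
class GlobalController:
    """Turns end-to-end latency feedback into a per-service throttle-ratio target."""
    def __init__(self, latency_slo_ms, target=0.05, step=0.01):
        self.latency_slo_ms, self.target, self.step = latency_slo_ms, target, step
    def update(self, observed_latency_ms):
        if observed_latency_ms > self.latency_slo_ms:
            self.target = max(0.0, self.target - self.step)  # latency too high: tolerate less throttling
        else:
            self.target = min(0.5, self.target + self.step)  # headroom: tolerate more throttling
        return self.target

class LocalController:
    """Scales one microservice's CPU allocation to track the throttle-ratio target."""
    def __init__(self, cores=2.0, min_cores=0.2, step=0.1):
        self.cores, self.min_cores, self.step = cores, min_cores, step
    def update(self, observed_throttle_ratio, target_ratio):
        if observed_throttle_ratio > target_ratio:
            self.cores += self.step                                   # throttled too often: add CPU
        else:
            self.cores = max(self.min_cores, self.cores - self.step)  # reclaim CPU
        return self.cores

g = GlobalController(latency_slo_ms=200)
local = LocalController()
target = g.update(observed_latency_ms=180)
print(round(local.update(observed_throttle_ratio=0.08, target_ratio=target), 2))
```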
Training machine learning (ML) algorithms is a computationally intensive process that is frequently memory-bound due to repeated accesses to large training datasets. As a result, processor-centric systems (e.g., CPU, GPU) suffer from costly data movement between memory units and processing units, which consumes large amounts of energy and execution cycles. Memory-centric computing systems, i.e., systems with processing-in-memory (PIM) capabilities, can alleviate this data movement bottleneck. Our goal is to understand the potential of modern general-purpose PIM architectures to accelerate ML training. To do so, we (1) implement several representative classic ML algorithms (namely linear regression, logistic regression, decision tree, and K-means clustering) on a real-world general-purpose PIM architecture, (2) rigorously evaluate and characterize them in terms of accuracy, performance, and scaling, and (3) compare them to their counterpart implementations on CPU and GPU. Our evaluation on a real memory-centric computing system with more than 2500 PIM cores shows that general-purpose PIM architectures can greatly accelerate memory-bound ML workloads when the necessary operations and datatypes are natively supported by the PIM hardware. For example, our PIM implementation of decision tree is 27x faster than a state-of-the-art CPU version on an 8-core Intel Xeon, and 1.34x faster than a state-of-the-art GPU version on an NVIDIA A100. Our K-means clustering on PIM is 2.8x and 3.2x faster than state-of-the-art CPU and GPU versions, respectively. To our knowledge, our work is the first to evaluate ML training on a real-world PIM architecture. We conclude with key observations, takeaways, and recommendations that can inspire users of ML workloads, programmers of PIM architectures, and hardware designers and architects of future memory-centric computing systems.
For the past 25 years, we have witnessed extensive application of machine learning to the compiler space, e.g., for the selection and phase-ordering problems. However, limited work has been upstreamed into the state-of-the-art compiler, i.e., LLVM, to seamlessly integrate such techniques into a compiler's optimization pipeline so that they can be readily deployed by users. MLGO was one of the first such projects, and it strives only to reduce the code size of a binary with an ML-based inliner using reinforcement learning. This paper presents MLGOPerf, the first end-to-end framework capable of optimizing performance using LLVM's ML inliner. It employs a secondary ML model to generate rewards for training a retargeted reinforcement learning agent, which was previously used by MLGO as the primary model. It does so by predicting the post-inlining speedup of a function under analysis, which enables a fast training framework for the primary model that would otherwise be impractical. Experimental results show that MLGOPerf gains up to 1.8% and 2.2% over LLVM's optimization at O3 on the SPEC CPU2006 and Cbench benchmarks, respectively. Furthermore, the proposed approach provides up to 26% more opportunities to autotune code regions in our benchmarks, which can be translated into an additional 3.7% speedup.
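A toy sketch of the reward-shaping idea, under assumptions: a stand-in speedup predictor maps simple post-inlining function features to an estimated speedup, and the difference between the inlined and non-inlined estimates serves as the reward for an RL inlining decision. The feature names and linear weights are invented; this is not MLGO's or MLGOPerf's actual model or feature set.

```python
def predicted_speedup(features):
    """Stand-in for a trained regressor estimating post-inlining speedup of a function.
    Hypothetical features: callee instruction count, call-site count, loop depth."""
    w = {"callee_insts": -0.002, "call_sites": 0.03, "loop_depth": 0.05}
    return 1.0 + sum(w[k] * v for k, v in features.items())

def inlining_reward(features_if_inlined, features_if_not):
    """Reward for the RL inlining agent: relative benefit predicted for inlining."""
    return predicted_speedup(features_if_inlined) - predicted_speedup(features_if_not)

# Example: a small, hot callee inside a loop looks profitable to inline.
r = inlining_reward(
    features_if_inlined={"callee_insts": 40, "call_sites": 3, "loop_depth": 2},
    features_if_not={"callee_insts": 40, "call_sites": 3, "loop_depth": 0},
)
print(round(r, 3))
```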
GPU compilers are complex software programs with many optimizations specific to the target hardware. These optimizations are often controlled by heuristics hand-designed by compiler experts using time- and resource-intensive processes. In this paper, we develop a GPU compiler autotuning framework that uses off-policy deep reinforcement learning to generate heuristics that improve the frame rates of graphics applications. Furthermore, we demonstrate the resilience of these learned heuristics to frequent compiler updates by analyzing their stability across a year of code check-ins without retraining. We show that our machine-learning-based compiler autotuning framework matches or surpasses the frame rates of 98% of graphics benchmarks, with an average uplift of 1.6% and up to 15.8%.
In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed like existing policies. We formulate the page request prediction problem as a categorical time series forecasting task. Then, our method queries the learned page request forecaster to obtain the next $k$ predicted page memory references to better approximate the optimal Bélády's replacement algorithm. We implement several forecasting techniques using advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments run on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%.
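A minimal sketch of forecaster-guided eviction approximating Bélády's rule: on a miss with a full cache, query a (stand-in) forecaster for the next predicted page references and evict the resident page whose next predicted use is farthest away, treating unpredicted pages as never used again. The toy forecaster, cache size, and trace are assumptions; MUSTACHE's deep-learning forecaster and simulator integration are not reproduced here.

```python
def evict_via_forecast(cache, predicted_next_pages):
    """Evict the cached page whose next predicted reference is farthest in the future.
    Pages absent from the forecast are treated as 'never used again' and evicted first."""
    def next_use(page):
        try:
            return predicted_next_pages.index(page)
        except ValueError:
            return float("inf")
    victim = max(cache, key=next_use)
    cache.remove(victim)
    return victim

class ForecastCacheSketch:
    def __init__(self, capacity, forecaster):
        self.capacity, self.forecaster = capacity, forecaster
        self.cache = set()
        self.hits = self.misses = 0

    def access(self, page, history):
        if page in self.cache:
            self.hits += 1
            return
        self.misses += 1
        if len(self.cache) >= self.capacity:
            evict_via_forecast(self.cache, self.forecaster(history))
        self.cache.add(page)

# Stand-in forecaster: predicts that the most recent distinct pages will recur (toy assumption).
toy_forecaster = lambda history, k=4: list(dict.fromkeys(reversed(history)))[:k]

cache = ForecastCacheSketch(capacity=3, forecaster=toy_forecaster)
trace = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2]
for i, p in enumerate(trace):
    cache.access(p, trace[:i])
print(cache.hits, cache.misses)
```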
Persistent Memory (PMEM), also known as Non-Volatile Memory (NVM), can deliver higher density and lower cost per bit when compared with DRAM. Its main drawback is that it is typically slower than DRAM. On the other hand, DRAM has scalability problems due to its cost and energy consumption. Soon, PMEM will likely coexist with DRAM in computer systems, but the biggest challenge is to know which data to allocate on each type of memory. This paper describes a methodology for identifying and characterizing application objects that have the most influence on the application's performance using Intel Optane DC Persistent Memory. In the first part of our work, we built a tool that automates the profiling and analysis of application objects. In the second part, we built a machine learning model to predict the most critical object within large-scale graph-based applications. Our results show that using isolated features does not bring the same benefit compared to using a carefully chosen set of features. By performing data placement using our predictive model, we can reduce the execution time degradation by 12% (average) and 30% (max) when compared to the baseline approach based on the LLC-misses indicator.
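A minimal sketch of the data-placement decision such a model enables, under assumptions: a stand-in criticality score combines profiled features (LLC misses, access count), and the highest-scoring objects are placed in DRAM until a budget is exhausted, with the remainder going to PMEM. The features, weights, and object names are illustrative, not the paper's trained model.

```python
from dataclasses import dataclass

@dataclass
class ObjProfile:
    name: str
    size_mb: float
    llc_misses: int
    accesses: int

def criticality_score(obj):
    """Stand-in for the learned model: miss-heavy, hot objects are most critical."""
    return 0.7 * obj.llc_misses + 0.3 * obj.accesses

def place_objects(objects, dram_budget_mb):
    """Greedy placement: most critical objects go to DRAM until the budget is spent."""
    placement, used = {}, 0.0
    for obj in sorted(objects, key=criticality_score, reverse=True):
        if used + obj.size_mb <= dram_budget_mb:
            placement[obj.name] = "DRAM"
            used += obj.size_mb
        else:
            placement[obj.name] = "PMEM"
    return placement

objs = [ObjProfile("edge_list", 4096, 9_000_000, 2_000_000),
        ObjProfile("visited_bitmap", 64, 1_500_000, 8_000_000),
        ObjProfile("scratch_buffer", 2048, 40_000, 90_000)]
print(place_objects(objs, dram_budget_mb=512))
```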