With more and more machine and deep learning applications in high energy physics, convenient access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a parallel environment, using TensorFlow data-parallel strategies. More specifically, we parallelize the training process over multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom loop, optimized to give higher control over the elements assigned to each GPU worker or TPU core. The quality of the generated data is compared to Monte Carlo simulation. Linear speed-up of the training process is obtained, while retaining most of the performance in terms of physics results. In addition, we benchmark the training process at scale over multiple GPU nodes, deploying it on different public cloud providers and seeking overall efficiency and cost-effectiveness. The combination of data science, cloud deployment options, and the associated economics allows bursting heterogeneously and exploring the full potential of cloud-based services.
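As a rough illustration of the two approaches contrasted above, the following sketch trains a toy Keras model (a stand-in for the GAN in the paper) under `tf.distribute.MirroredStrategy`, first with the built-in `Model.fit` logic and then with a custom loop that controls what each replica computes; the model, data, and hyperparameters are placeholders.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()           # one replica per visible GPU
GLOBAL_BATCH = 64 * strategy.num_replicas_in_sync

# Toy dataset standing in for the detector images used in the paper.
ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([1024, 32]), tf.random.normal([1024, 1]))
).batch(GLOBAL_BATCH)

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="relu"),
                                 tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)      # reduced manually per replica
    model.compile(optimizer=optimizer, loss="mse")     # 1) built-in distribution logic

model.fit(ds, epochs=1)                                # Keras splits batches over replicas

# 2) Custom loop: explicit control over what each replica computes.
dist_ds = strategy.experimental_distribute_dataset(ds)

@tf.function
def train_step(x, y):
    def step_fn(x, y):
        with tf.GradientTape() as tape:
            per_example = loss_fn(y, model(x, training=True))
            loss = tf.nn.compute_average_loss(per_example,
                                              global_batch_size=GLOBAL_BATCH)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    per_replica = strategy.run(step_fn, args=(x, y))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)

for x, y in dist_ds:
    train_step(x, y)
```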
We aim to address this problem by introducing a comprehensive distributed deep learning (DDL) profiler, which can determine the various execution "stalls" that DDL suffers from while running on a public cloud. We have implemented the profiler by extending prior work to estimate two types of communication stalls: interconnect stalls and network stalls. We train popular DNN models using the profiler to characterize various AWS GPU instances, and we list their advantages and shortcomings so that users can make an informed decision. We observe that the more expensive GPU instances may not be the most performant for all DNN models, and that AWS may sub-optimally allocate hardware interconnect resources. Specifically, intra-machine interconnects can introduce communication overheads of up to 90% of DNN training time, and network-connected instances can suffer from up to a 5x slowdown compared to training on a single instance. Further, we model the impact of DNN macroscopic features, such as the number of layers and the number of gradients, on the communication stalls. Finally, we propose a measurement-based recommendation model to help users lower the monetary cost of DDL on the public cloud.
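A crude way to approximate the communication stall of an iteration, in the spirit of (but much simpler than) the profiler described above, is to time a DDP backward pass with and without gradient synchronization; the model and sizes below are placeholders, and the script assumes a single-node launch via `torchrun`.

```python
import time
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

model = DDP(torch.nn.Linear(4096, 4096).cuda(rank), device_ids=[rank])
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(256, 4096, device=rank)

def step(sync: bool) -> float:
    ctx = torch.enable_grad() if sync else model.no_sync()  # no_sync() skips the all-reduce
    torch.cuda.synchronize(); t0 = time.time()
    with ctx:
        loss = model(x).sum()
        loss.backward()              # with sync=True, DDP all-reduces gradients here
    torch.cuda.synchronize()
    opt.zero_grad(set_to_none=True)
    return time.time() - t0

compute_only = min(step(sync=False) for _ in range(10))
with_comm = min(step(sync=True) for _ in range(10))
if rank == 0:
    stall = with_comm - compute_only
    print(f"estimated communication stall: {stall * 1e3:.1f} ms "
          f"({100 * stall / with_comm:.0f}% of iteration)")
```

Because DDP overlaps the all-reduce with backward computation, this difference underestimates the raw communication time; it only illustrates the measurement idea.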
Responsible for moving data from storage to the GPU while a machine learning model is being trained, data loaders can significantly improve the performance of training jobs. Recent advances have shown promise not only by considerably decreasing training time but also by offering new features, such as loading data from remote storage like S3. In this paper, we are the first to distinguish the data loader as a separate component of the deep learning (DL) workflow and to outline its structure and features. Finally, we offer a comprehensive comparison of the different data loading libraries available, their trade-offs in terms of functionality, usability, and performance, and the insights derived from them.
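To make the remote-storage scenario concrete, here is a minimal sketch of a PyTorch dataset that reads images straight from S3 via `boto3`; the bucket name, keys, and label handling are hypothetical, and the libraries compared in the paper offer far more than this.

```python
import io
import boto3
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class S3ImageDataset(Dataset):
    """Loads images directly from S3 instead of a local drive."""
    def __init__(self, bucket, keys):
        self.bucket, self.keys = bucket, keys
        self.s3 = None                       # created lazily, once per worker process
        self.tf = transforms.Compose([transforms.Resize((224, 224)),
                                      transforms.ToTensor()])

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, idx):
        if self.s3 is None:
            self.s3 = boto3.client("s3")
        obj = self.s3.get_object(Bucket=self.bucket, Key=self.keys[idx])
        img = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")
        return self.tf(img), 0               # dummy label

# Hypothetical bucket/keys; multiple workers hide part of the network latency.
ds = S3ImageDataset("my-dataset-bucket", [f"train/{i}.jpg" for i in range(1000)])
loader = DataLoader(ds, batch_size=32, num_workers=8)
```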
The field of deep learning has witnessed a remarkable shift towards extremely compute- and memory-intensive neural networks. These newer, larger models have enabled researchers to advance the state of the art across a variety of fields. This phenomenon has spurred the development of algorithms for distributed training of neural networks over a larger number of hardware accelerators. In this paper, we discuss and compare current state-of-the-art frameworks for large-scale distributed deep learning. First, we survey current practices in distributed learning and identify the different types of parallelism used. Then, we present empirical results on the performance of large image and language training tasks. Additionally, we address their statistical efficiency and memory consumption behavior. Based on our results, we discuss the algorithmic and implementation aspects of each framework that hinder performance.
Extreme weather amplified by climate change is causing increasingly devastating impacts across the globe. The current use of physics-based numerical weather prediction (NWP) limits accuracy due to high computational cost and strict time-to-solution limits. We report that the data-driven deep learning Earth system emulator FourCastNet can predict global weather and generate forecasts five orders of magnitude faster than NWP, while approaching state-of-the-art accuracy. FourCastNet is optimized for, and scales efficiently on, three supercomputing systems (Selene, Perlmutter, and JUWELS Booster, up to 3,808 NVIDIA A100 GPUs), attaining 140.8 petaFLOPS in mixed precision (11.9% of peak at that scale). The time-to-solution for training FourCastNet, measured on JUWELS Booster on 3,072 GPUs, is 67.4 minutes, which in inference corresponds to a dramatically faster time-to-solution relative to state-of-the-art NWP. FourCastNet produces accurate instantaneous weather predictions up to a week in advance, enables enormous ensembles that better capture extreme weather, and supports higher global forecast resolutions.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated high-performance computing clusters, whose construction and maintenance are both costly and far beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative language model pretraining with 40 participants.
Novel artificial intelligence (AI) technology has expedited various scientific research, e.g., cosmology, physics and bioinformatics, inevitably becoming a significant category of workload on high performance computing (HPC) systems. Existing AI benchmarks tend to customize well-recognized AI applications, so as to evaluate the AI performance of HPC systems under predefined problem size, in terms of datasets and AI models. Due to lack of scalability on the problem size, static AI benchmarks might be inadequate for understanding the performance trend of evolving AI applications on HPC systems, in particular, the scientific AI applications on large-scale systems. In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems with scaling the problem sizes of customized AI applications. To enable scalability, SAIH builds a set of novel mechanisms for augmenting problem sizes. As the data and model constantly scale, we can investigate the trend and range of AI performance on HPC systems, and further diagnose system bottlenecks. To verify our methodology, we augment a cosmological AI application to evaluate a real HPC system equipped with GPUs as a case study of SAIH.
Deep learning training is an expensive process that makes extensive use of GPUs, but not all model training saturates modern, powerful GPUs. Multi-Instance GPU (MIG) is a new technology introduced by NVIDIA that can partition a GPU to better fit workloads that do not require all of the memory and compute resources of a full GPU. In this paper, we examine the performance of a MIG-enabled A100 GPU under deep learning workloads of three sizes, focusing on image recognition training with ResNet models. We investigate the behavior of these workloads when running in isolation on the various MIG instances allowed by the GPU, in addition to running them in parallel on homogeneous instances co-located on the same GPU. Our results demonstrate that employing MIG can significantly improve the utilization of the GPU when the workload is too small to utilize the whole GPU in isolation. By training multiple small models in parallel, more work can be performed by the GPU per unit of time, despite the increase in per-epoch time, leading to ~3x throughput. In contrast, for medium- and large-sized workloads, which already utilize the whole GPU well, MIG provides only marginal performance improvements. Nevertheless, we observe that training models in parallel using separate MIG partitions does not exhibit interference, underlining the value of having MIG functionality on modern GPUs.
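Operationally, running several small trainings on one MIG-partitioned GPU reduces to pinning each process to a MIG device UUID via `CUDA_VISIBLE_DEVICES`; a minimal sketch, where the UUIDs (as listed by `nvidia-smi -L`) and the training script are placeholders:

```python
import os
import subprocess

# MIG device identifiers as reported by `nvidia-smi -L` (placeholders here).
mig_devices = [
    "MIG-GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/1/0",
    "MIG-GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/2/0",
]

procs = []
for uuid in mig_devices:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=uuid)   # pin this process to one MIG slice
    procs.append(subprocess.Popen(["python", "train_resnet.py"], env=env))

for p in procs:
    p.wait()
```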
Deep learning based recommendation models (DLRM) are widely used in several business critical applications. Training such recommendation models efficiently is challenging primarily because they consist of billions of embedding-based parameters which are often stored remotely, leading to significant overheads from embedding access. By profiling existing DLRM training, we observe that only 8.5% of the iteration time is spent in the forward/backward pass, while the remaining time is spent on embedding and model synchronization. Our key insight in this paper is that access to embeddings has a specific structure and pattern which can be used to accelerate training. We observe that embedding accesses are heavily skewed, with almost 1% of embeddings representing more than 92% of total accesses. Further, we observe that during training we can look ahead at future batches to determine exactly which embeddings will be needed at what iteration in the future. Based on these insights, we propose Bagpipe, a system for training deep recommendation models that uses caching and prefetching to overlap remote embedding accesses with the computation. We designed an Oracle Cacher, a new system component which uses our lookahead algorithm to generate optimal cache update decisions and provide strong consistency guarantees. Our experiments using three datasets and two models show that our approach provides a speed-up of up to 6.2x compared to state-of-the-art baselines, while providing the same convergence and reproducibility guarantees as synchronous training.
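A simplified sketch of the lookahead-driven caching idea (not Bagpipe's actual Oracle Cacher; `DictStore` stands in for the remote embedding storage) could look like this:

```python
from collections import Counter

class DictStore:
    """Stand-in for a remote embedding/parameter store."""
    def __init__(self, dim=4):
        self.dim = dim
    def fetch(self, ids):
        return {i: [float(i)] * self.dim for i in ids}   # fake embedding rows

class LookaheadCache:
    """Keep locally the embedding rows that upcoming batches will need."""
    def __init__(self, remote_store, capacity, lookahead=100):
        self.remote = remote_store
        self.capacity = capacity
        self.lookahead = lookahead
        self.cache = {}                                   # embedding id -> vector

    def plan(self, future_batches):
        """Decide which ids to keep based on the next `lookahead` batches."""
        needed = Counter()
        for batch in future_batches[: self.lookahead]:
            needed.update(batch)                          # batch = iterable of embedding ids
        keep = [i for i, _ in needed.most_common(self.capacity)]
        missing = [i for i in keep if i not in self.cache]
        fetched = self.remote.fetch(missing)              # bulk remote lookup (prefetch)
        self.cache = {i: self.cache.get(i, fetched.get(i)) for i in keep}

    def lookup(self, ids):
        # Hot ids (the ~1% that cover >92% of accesses) come from the local cache.
        return [self.cache[i] if i in self.cache else self.remote.fetch([i])[i]
                for i in ids]

cache = LookaheadCache(DictStore(), capacity=1000, lookahead=10)
batches = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]
cache.plan(batches)
print(cache.lookup(batches[0]))
```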
This project aims to explore the process of deploying machine learning models on Kubernetes using an open-source tool called Kubeflow [1], an end-to-end ML stack orchestration toolkit. We create end-to-end machine learning models in the form of pipelines and analyze various points, including the setup, the deployment of models, performance, limitations, restrictions, and features. We hope our project serves almost like a workshop/introductory report that can help vanilla cloud/Kubernetes users with zero knowledge of Kubeflow to deploy ML models using Kubeflow. From setup on different clouds to serving a trained model over the internet, we provide details and metrics detailing the performance of Kubeflow.
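For orientation, a Kubeflow pipeline of the kind described is defined with the `kfp` SDK roughly as below (a minimal sketch against the v2 API; the component bodies and base image are placeholders, and the compiled YAML would then be uploaded to a Kubeflow cluster):

```python
from kfp import dsl, compiler

@dsl.component(base_image="python:3.10")
def preprocess() -> str:
    return "s3://bucket/preprocessed"        # stand-in for real feature engineering

@dsl.component(base_image="python:3.10")
def train(data: str) -> str:
    print(f"training on {data}")             # stand-in for model training
    return "model-v1"

@dsl.pipeline(name="demo-training-pipeline")
def demo_pipeline():
    prep = preprocess()
    train(data=prep.output)                  # chains the two steps in the DAG

# Compile to a spec that can be uploaded through the Kubeflow Pipelines UI or client.
compiler.Compiler().compile(pipeline_func=demo_pipeline, package_path="pipeline.yaml")
```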
Deep learning based models have dominated the current landscape of production recommender systems. Moreover, recent years have witnessed an exponential growth in model scale: from Google's 2016 model with 1 billion parameters to the latest Facebook model with 12 trillion parameters. Every jump in model capacity has brought significant quality improvements, which makes us believe the era of 100-trillion-parameter models is around the corner. However, the training of such models is challenging even within industrial-scale data centers. This difficulty is inherited from the staggering heterogeneity of the training computation: the model's embedding layer can comprise more than 99.99% of the total model size and is extremely memory-intensive, while the rest of the neural network is increasingly computation-intensive. To support the training of such huge models, an efficient distributed training system is in urgent need. In this paper, we resolve this challenge by carefully co-designing the optimization algorithm and the distributed system architecture. Specifically, in order to ensure both training efficiency and training accuracy, we design a novel hybrid training algorithm in which the embedding layer and the dense neural network are handled by different synchronization mechanisms; we then build a system called Persia (short for parallel recommendation training system with hybrid acceleration) to support this hybrid training algorithm. Both theoretical demonstrations and empirical studies up to 100 trillion parameters have been conducted to justify the system design and implementation of Persia. We make Persia publicly available (at https://github.com/persiamml/persia) so that anyone can easily train a recommender model at the scale of 100 trillion parameters.
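The hybrid synchronization idea can be pictured with the following sketch: embedding rows are pulled from and pushed to a parameter server asynchronously, while dense gradients are all-reduced synchronously. This is only an illustration, not Persia's implementation, and `ps_client` is a hypothetical asynchronous parameter-server client.

```python
import torch
import torch.distributed as dist

def hybrid_step(dense_model, optimizer, ps_client, batch):
    ids, dense_features, labels = batch

    # Embedding layer: fetched from and updated on a parameter server
    # asynchronously, with no global barrier across workers.
    emb = ps_client.pull(ids)                 # possibly slightly stale rows (hypothetical API)
    emb.requires_grad_(True)

    out = dense_model(torch.cat([emb, dense_features], dim=1))
    loss = torch.nn.functional.binary_cross_entropy_with_logits(out, labels)
    loss.backward()

    ps_client.push_async(ids, emb.grad)       # fire-and-forget embedding update (hypothetical API)

    # Dense part: classic synchronous data parallelism via all-reduce.
    world = dist.get_world_size()
    for p in dense_model.parameters():
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return loss.item()
```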
A growing number of Machine Learning Frameworks recently made Deep Learning accessible to a wider audience of engineers, scientists, and practitioners, by allowing straightforward use of complex neural network architectures and algorithms. However, since deep learning is rapidly evolving, not only through theoretical advancements but also with respect to hardware and software engineering, ML frameworks often lose backward compatibility and introduce technical debt that can lead to bottlenecks and sub-optimal resource utilization. Moreover, the focus is in most cases not on deep learning engineering, but rather on new models and theoretical advancements. In this work, however, we focus on engineering, more specifically on the data loading pipeline in the PyTorch Framework. We designed a series of benchmarks that outline performance issues of certain steps in the data loading process. Our findings show that for classification tasks that involve loading many files, like images, the training wall-time can be significantly improved. With our new, modified ConcurrentDataloader we can reach improvements in GPU utilization and significantly reduce batch loading time, up to 12X. This allows for the use of cloud-based, S3-like object storage for datasets, with training times comparable to those obtained when datasets are stored on local drives.
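A miniature version of such a data-loading benchmark, timing batch fetches for different `num_workers` settings (not the paper's benchmark suite or its ConcurrentDataloader), can be written as:

```python
import time
import torch
from torch.utils.data import Dataset, DataLoader

class SlowDataset(Dataset):
    """Simulates per-sample I/O latency, e.g. decoding an image file."""
    def __len__(self):
        return 2048

    def __getitem__(self, idx):
        time.sleep(0.002)                     # stand-in for disk / S3 / decode cost
        return torch.randn(3, 224, 224), idx % 10

if __name__ == "__main__":
    for workers in (0, 2, 4, 8):
        loader = DataLoader(SlowDataset(), batch_size=64, num_workers=workers)
        start = time.time()
        for _ in loader:                      # measures pure batch-loading time
            pass
        print(f"num_workers={workers}: {time.time() - start:.2f}s")
```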
Distributed deep learning (DDL) systems strongly depend on network performance. Current electronic packet switched (EPS) network architectures and technologies suffer from variable diameter topologies, low-bisection bandwidth and over-subscription affecting completion time of communication and collective operations. We introduce a near-exascale, full-bisection bandwidth, all-to-all, single-hop, all-optical network architecture with nanosecond reconfiguration called RAMP, which supports large-scale distributed and parallel computing systems (12.8~Tbps per node for up to 65,536 nodes). For the first time, a custom RAMP-x MPI strategy and a network transcoder is proposed to run MPI collective operations across the optical circuit switched (OCS) network in a schedule-less and contention-less manner. RAMP achieves 7.6-171$\times$ speed-up in completion time across all MPI operations compared to realistic EPS and OCS counterparts. It can also deliver a 1.3-16$\times$ and 7.8-58$\times$ reduction in Megatron and DLRM training time respectively, while offering 42-53$\times$ and 3.3-12.4$\times$ improvement in energy consumption and cost respectively.
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
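The dataflow model described here is still visible in today's API: `tf.function` traces Python code into a graph of operations over shared, mutable state that TensorFlow can then place across devices. A small illustration:

```python
import tensorflow as tf

w = tf.Variable(tf.random.normal([128, 10]))   # shared, mutable state in the graph

@tf.function                                   # traces the Python code into a dataflow graph
def predict(x):
    return tf.nn.softmax(tf.matmul(x, w))

x = tf.random.normal([32, 128])
print(predict(x).shape)                        # (32, 10)

# Inspect the dataflow graph TensorFlow built from the traced function.
graph = predict.get_concrete_function(x).graph
print(len(graph.get_operations()), "ops, e.g.",
      [op.name for op in graph.get_operations()[:5]])
```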
During the past decade, novel deep learning (DL) algorithms, workloads and hardware have been developed to tackle a wide variety of problems. Despite the advances in the workload and hardware ecosystems, the programming methodology of DL systems is stagnant. DL workloads leverage either highly-optimized, yet platform-specific and inflexible kernels from DL libraries, or, in the case of novel operators, reference implementations built via DL framework primitives with underwhelming performance. This work introduces the Tensor Processing Primitives (TPP), a programming abstraction striving for efficient, portable implementation of DL workloads. TPPs define a compact yet versatile set of 2D-tensor operators (or a virtual Tensor ISA), which can subsequently be used as building blocks to construct complex operators on high-dimensional tensors. The TPP specification is platform-agnostic, so code expressed via TPPs is portable, whereas the TPP implementation is highly optimized and platform-specific. We demonstrate the efficacy and viability of our approach using standalone kernels and end-to-end DL & HPC workloads expressed entirely via TPPs, which outperform state-of-the-art implementations on multiple platforms.
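To convey the compositional idea (a NumPy analogy only, not the actual TPP virtual ISA or its optimized back-ends): small 2D-tensor primitives are combined into a higher-level operator over a blocked, higher-dimensional tensor.

```python
import numpy as np

# "Primitive" 2D-tensor operators, analogous in spirit to TPPs.
def prim_gemm(a, b):          # 2D matrix multiply
    return a @ b

def prim_add_bias(x, bias):   # broadcasted bias add
    return x + bias

def prim_relu(x):             # unary elementwise op
    return np.maximum(x, 0.0)

# A higher-level operator (an MLP block over a blocked 3D tensor) is built
# purely by looping the 2D primitives over the leading (block) dimension.
def mlp_block(x3d, w, bias):
    return np.stack([prim_relu(prim_add_bias(prim_gemm(x2d, w), bias))
                     for x2d in x3d])

x = np.random.rand(8, 16, 32)     # (blocks, rows, cols)
w = np.random.rand(32, 64)
out = mlp_block(x, w, np.zeros(64))
print(out.shape)                  # (8, 16, 64)
```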
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model while keeping their training data on-device, thereby decoupling the ability to do machine learning from the need to store data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and systems heterogeneity. Although there are a number of research frameworks for simulating FL algorithms, they do not support the study of scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower, a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments and to consider richly heterogeneous FL device scenarios. Our experiments show that Flower can perform FL experiments at large client scales using only a pair of high-end GPUs. Researchers can then seamlessly migrate experiments to real devices to examine other parts of the design space. We believe Flower provides the community with a critical new tool for FL research and development.
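A minimal Flower simulation in the spirit of the above, with a toy NumPy "model" averaged by the default FedAvg strategy (the exact simulation API varies across Flower releases, so treat this as a sketch):

```python
import numpy as np
import flwr as fl

class ToyClient(fl.client.NumPyClient):
    """Toy client: the 'model' is a single weight vector."""
    def __init__(self, cid):
        self.weights = [np.zeros(10)]

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        # Pretend local training nudges the global weights with local "data".
        self.weights = [parameters[0] + np.random.normal(scale=0.1, size=10)]
        return self.weights, 32, {}

    def evaluate(self, parameters, config):
        return float(np.linalg.norm(parameters[0])), 32, {}

def client_fn(cid: str):
    return ToyClient(cid).to_client()

fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=10,
    config=fl.server.ServerConfig(num_rounds=3),
)
```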
GPUs are favored for training deep learning models due to their massively parallel architecture. As a result, most research on training optimization focuses on GPUs. However, there is often a trade-off between cost and efficiency when deciding how to choose the proper training hardware. In particular, CPU servers can be beneficial if training on CPUs is sufficiently efficient, as they incur fewer hardware update costs and make better use of existing infrastructure. This paper makes several contributions to training deep learning models on CPUs. First, it presents a method for optimizing deep learning model training on Intel CPUs, together with a toolkit called ProfileDNN, which we developed to improve performance profiling. Second, we describe a generic training optimization method that guides our workflow and explore several case studies in which we identified performance issues and then optimized the Intel Extension for PyTorch, resulting in an overall 2x training performance improvement for the RetinaNet-ResNeXt50 model. Third, we show how to leverage the visualization capability of ProfileDNN, which allowed us to pinpoint bottlenecks and create a custom focal loss kernel that is faster than the official reference PyTorch implementation.
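The CPU-side optimization path referred to (Intel Extension for PyTorch) is typically applied as below; the model and data are placeholders, and the custom focal-loss kernel from the paper is not reproduced here.

```python
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU(),
                            torch.nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Applies CPU-oriented optimizations (operator fusion, memory layout, bf16 here).
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

x, y = torch.randn(64, 512), torch.randint(0, 10, (64,))
with torch.cpu.amp.autocast(dtype=torch.bfloat16):   # bf16 forward on CPU
    loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```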
We present an open-source library for distributed matrix factorization written in JAX. Our design allows for efficient use of the TPU architecture and scales to matrix factorization problems of O(B) rows/columns by scaling the number of available TPU cores. To spur future research on large-scale matrix factorization methods, and to illustrate the scalability properties of our own implementation, we also built a real-world web link prediction dataset called WebGraph. This dataset can easily be modeled as a matrix factorization problem. We created several variants of this dataset based on the locality and sparsity properties of sub-graphs. The largest variant of WebGraph has around 365M nodes, and training a single epoch finishes in about 20 minutes with 256 TPU cores. We include speed and performance numbers of ALX on all variants of WebGraph. Both the framework code and the dataset are open-sourced.
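A toy flavor of data-parallel matrix factorization in JAX, sharding user factors across devices with `pmap` (illustrative only, and far simpler than ALX's ALS-based design):

```python
from functools import partial
import jax
import jax.numpy as jnp

n_dev = jax.device_count()
users, items, dim = 8 * n_dev, 64, 16
key = jax.random.PRNGKey(0)
ratings = jax.random.uniform(key, (users, items))                  # dense toy matrix

U = jax.random.normal(key, (n_dev, users // n_dev, dim)) * 0.1     # user factors, sharded
V = jnp.broadcast_to(jax.random.normal(key, (items, dim)) * 0.1,
                     (n_dev, items, dim))                          # item factors, replicated

@partial(jax.pmap, axis_name="devices")
def sgd_step(u_shard, v, r_shard):
    def loss(u, v):
        return jnp.mean((u @ v.T - r_shard) ** 2)
    gu, gv = jax.grad(loss, argnums=(0, 1))(u_shard, v)
    gv = jax.lax.pmean(gv, axis_name="devices")   # keep item factors in sync across devices
    return u_shard - 0.5 * gu, v - 0.5 * gv

r_sharded = ratings.reshape(n_dev, users // n_dev, items)
for _ in range(10):
    U, V = sgd_step(U, V, r_sharded)
```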
Increasingly, scientific discovery requires sophisticated and scalable workflows. Workflows have become the "new applications", wherein multi-scale computing campaigns comprise multiple and heterogeneous executable tasks. In particular, the introduction of AI/ML models into traditional HPC workflows has become an enabler of highly accurate modeling, often reducing computational needs compared to traditional methods. This chapter discusses various modes of integrating AI/ML models into HPC computations, resulting in distinct types of AI-coupled HPC workflows. The increasing need for coupling AI/ML with HPC across scientific domains is motivated and then exemplified with a number of production-grade use cases for each mode. We also discuss the primary challenges of extreme-scale AI-coupled HPC campaigns (task heterogeneity, adaptivity, performance) and several framework and middleware solutions that aim to address them. Although both HPC workflows and AI/ML computing paradigms are independently effective, we highlight how their integration, and eventual convergence, is leading to significant improvements in scientific performance across a range of domains, ultimately enabling scientific explorations that would otherwise be unattainable.