Dataloaders, which are responsible for moving data from storage to the GPU while machine learning models are being trained, may hold the key to drastically improving the performance of training jobs. Recent advances have shown promise not only by considerably decreasing training time but also by offering new features such as loading data from remote storage like S3. In this paper, we are the first to distinguish the dataloader as a separate component in the deep learning (DL) workflow and to outline its structure and features. Finally, we offer a comprehensive comparison of the different data loading libraries available, their trade-offs in terms of functionality, usability, and performance, and the insights derived from them.
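To make the dataloader's role concrete, here is a minimal sketch of a dataset that streams samples from S3-style object storage and hands batches to the GPU, assuming boto3 and PyTorch as the stack; the bucket name, key layout, and label handling are hypothetical placeholders, not the pipeline studied in the paper.

```python
# A minimal sketch (not from the paper) of a dataloader that reads samples from
# S3-style object storage and moves batches onto the GPU.
import io

import boto3
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class S3ImageDataset(Dataset):
    def __init__(self, bucket, keys):
        self.bucket = bucket
        self.keys = keys                      # object keys, e.g. "train/img_0001.jpg" (placeholder layout)
        self.s3 = None
        self.tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def _client(self):
        # Create the client lazily so each DataLoader worker process gets its own connection.
        if self.s3 is None:
            self.s3 = boto3.client("s3")
        return self.s3

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, idx):
        obj = self._client().get_object(Bucket=self.bucket, Key=self.keys[idx])
        img = Image.open(io.BytesIO(obj["Body"].read())).convert("RGB")
        label = 0                             # placeholder; real labels would come from metadata
        return self.tf(img), label


if __name__ == "__main__":
    ds = S3ImageDataset("my-dataset-bucket", [f"train/img_{i:04d}.jpg" for i in range(1000)])
    # Worker processes overlap remote reads with GPU compute; pin_memory speeds up host-to-GPU copies.
    loader = DataLoader(ds, batch_size=64, num_workers=8, pin_memory=True)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    for images, labels in loader:
        images = images.to(device, non_blocking=True)
        # ... forward/backward pass would go here ...
```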
A growing number of Machine Learning Frameworks recently made Deep Learning accessible to a wider audience of engineers, scientists, and practitioners, by allowing straightforward use of complex neural network architectures and algorithms. However, since deep learning is rapidly evolving, not only through theoretical advancements but also with respect to hardware and software engineering, ML frameworks often lose backward compatibility and introduce technical debt that can lead to bottlenecks and sub-optimal resource utilization. Moreover, the focus is in most cases not on deep learning engineering, but rather on new models and theoretical advancements. In this work, however, we focus on engineering, more specifically on the data loading pipeline in the PyTorch Framework. We designed a series of benchmarks that outline performance issues of certain steps in the data loading process. Our findings show that for classification tasks that involve loading many files, like images, the training wall-time can be significantly improved. With our new, modified ConcurrentDataloader we can reach improvements in GPU utilization and significantly reduce batch loading time, up to 12X. This allows the use of cloud-based, S3-like object storage for datasets while achieving training times comparable to storing the datasets on local drives.
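The kind of batch-loading bottleneck this work targets can be observed with the stock PyTorch DataLoader; the sketch below, which does not use the paper's ConcurrentDataloader, simply times loading (no model compute) as the number of worker processes varies, with a synthetic dataset standing in for real image files.

```python
# A small benchmark sketch of per-batch loading time vs. number of workers,
# using only stock PyTorch/torchvision components.
import time

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.FakeData(size=2000, image_size=(3, 256, 256), transform=tf)

for workers in (0, 2, 4, 8):
    loader = DataLoader(dataset, batch_size=64, num_workers=workers, pin_memory=True)
    start = time.perf_counter()
    for batch, _ in loader:
        pass                                  # loading and decoding only; no model compute
    elapsed = time.perf_counter() - start
    print(f"num_workers={workers}: {elapsed / len(loader):.3f} s per batch")
```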
The field of deep learning has witnessed a remarkable shift towards extremely compute- and memory-intensive neural networks. These newer, larger models have enabled researchers to advance the state of the art in a variety of fields. This phenomenon has spurred the development of algorithms for distributed training of neural networks over a larger number of hardware accelerators. In this paper, we discuss and compare current state-of-the-art frameworks for large-scale distributed deep learning. First, we survey current practices in distributed learning and identify the different types of parallelism used. Then, we present empirical results comparing their performance on large image and language training tasks. In addition, we address their statistical efficiency and memory consumption behavior. Based on our results, we discuss the algorithmic and implementation portions of each framework that hinder performance.
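For readers unfamiliar with the most common form of parallelism surveyed here, the following is a minimal data-parallel training sketch with PyTorch DistributedDataParallel; the model and data are toy placeholders, and it would be launched with something like `torchrun --nproc_per_node=<num_gpus> ddp_sketch.py`.

```python
# Minimal data-parallelism sketch: one process per GPU, gradients all-reduced across replicas.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    dist.init_process_group(backend="nccl")        # torchrun supplies rank/world size via env vars
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda()
    model = DDP(model, device_ids=[local_rank])    # wraps the model for gradient synchronization
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):
        x = torch.randn(64, 1024, device="cuda")   # each rank would read its own data shard
        y = torch.randint(0, 10, (64,), device="cuda")
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()                            # DDP overlaps the gradient all-reduce with backward
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```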
We aim to address this problem by introducing a comprehensive distributed deep learning (DDL) profiler that can determine the various execution "stalls" that DDL suffers from while running on public clouds. We implement the profiler by extending prior work to estimate two types of communication stalls: interconnect stalls and network stalls. We use the profiler to train popular DNN models in order to characterize various AWS GPU instances, and we list their advantages and shortcomings so that users can make informed decisions. We observe that the more expensive GPU instances may not be the most performant for all DNN models, and that AWS may sub-optimally allocate hardware interconnect resources. Specifically, the intra-machine interconnect can introduce communication overheads of up to 90% of DNN training time, and network-connected instances can suffer from up to a 5x slowdown compared to training on a single instance. Furthermore, we model the impact of DNN macroscopic features, such as the number of layers and the number of gradients, on communication stalls. Finally, we propose a measurement-based recommendation model for users to lower the monetary cost of DDL on public clouds.
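As a rough illustration of how communication stalls can be isolated (this is not the paper's profiler), the sketch below times a gradient-sized NCCL all-reduce separately from any compute; launched with torchrun on the instances being compared, the measured time reflects the NVLink/PCIe or network path, and the buffer size is arbitrary.

```python
# Time a gradient-sized all-reduce to estimate communication overhead in isolation.
import os
import time

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

grad_buffer = torch.randn(25_000_000, device="cuda")   # ~100 MB of fp32 "gradients"

for _ in range(5):                                      # warm up NCCL before timing
    dist.all_reduce(grad_buffer)
torch.cuda.synchronize()

start = time.perf_counter()
for _ in range(20):
    dist.all_reduce(grad_buffer)                        # communication over the interconnect or network
torch.cuda.synchronize()
comm_time = (time.perf_counter() - start) / 20

if dist.get_rank() == 0:
    print(f"mean all-reduce time: {comm_time * 1e3:.1f} ms")
dist.destroy_process_group()
```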
Deep learning training is an expensive process that extensively uses GPUs, but not all model training saturates modern powerful GPUs. Multi-Instance GPU (MIG) is a new technology introduced by NVIDIA that can partition a GPU to better fit workloads that do not require all the memory and compute resources of a full GPU. In this paper, we examine the performance of a MIG-enabled A100 GPU under deep learning workloads of three sizes, focusing on image recognition training with ResNet models. We investigate the behavior of these workloads when running in isolation on a variety of MIG instances allowed by the GPU, as well as when running them in parallel on homogeneous instances co-located on the same GPU. Our results demonstrate that when workloads are too small to utilize the whole GPU in isolation, using MIG can significantly improve GPU utilization. By training multiple small models in parallel, more work can be performed by the GPU per unit time despite the increase in per-model training time, leading to ~3x throughput. In contrast, for medium- and large-sized workloads that already utilize the whole GPU well, MIG provides only marginal performance improvements. Nevertheless, we observe that training models in parallel on separate MIG partitions does not exhibit interference, underscoring the value of having MIG capability on modern GPUs.
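One documented way to pin separate training processes to separate MIG partitions is via the CUDA_VISIBLE_DEVICES environment variable; the sketch below assumes the partitions were already created with nvidia-smi, uses placeholder MIG UUIDs (the real ones come from `nvidia-smi -L`), and refers to a hypothetical `train_resnet.py` training script.

```python
# Launch one training process per MIG partition; each process sees only its own GPU slice.
import os
import subprocess

mig_devices = [
    "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",   # placeholder UUIDs from `nvidia-smi -L`
    "MIG-yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy",
]

procs = []
for uuid in mig_devices:
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=uuid)          # restrict the process to one MIG slice
    procs.append(subprocess.Popen(["python", "train_resnet.py"], env=env))  # hypothetical script

for p in procs:
    p.wait()
```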
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Accurate reporting of energy and carbon usage is essential for understanding the potential climate impacts of machine learning research. We introduce a framework that makes this easier by providing a simple interface for tracking realtime energy consumption and carbon emissions, as well as generating standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient reinforcement learning algorithms to incentivize responsible research in this area as an example for other areas of machine learning. Finally, based on case studies using our framework, we propose strategies for mitigation of carbon emissions and reduction of energy consumption. By making accounting easier, we hope to further the sustainable development of machine learning experiments and spur more research into energy efficient algorithms.
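As a generic illustration of real-time energy accounting (not the framework introduced in this paper), the sketch below samples GPU power draw through NVML, integrates it into energy, and converts to emissions using an assumed grid carbon intensity; the sampling duration and intensity value are placeholders.

```python
# Sample GPU power with NVML, integrate to kWh, and estimate CO2 with an assumed carbon intensity.
import time

import pynvml

CARBON_INTENSITY_KG_PER_KWH = 0.4        # assumption; varies by region and time of day

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

energy_joules = 0.0
last = time.time()
for _ in range(600):                      # sample for ~10 minutes at 1 Hz while a workload runs
    time.sleep(1.0)
    now = time.time()
    watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0   # NVML reports milliwatts
    energy_joules += watts * (now - last)
    last = now

kwh = energy_joules / 3.6e6
print(f"GPU energy: {kwh:.4f} kWh, estimated emissions: {kwh * CARBON_INTENSITY_KG_PER_KWH:.4f} kg CO2")
pynvml.nvmlShutdown()
```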
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model while keeping their training data local, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and systems heterogeneity. While there are a number of research frameworks available to simulate FL algorithms, they do not support scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower, a comprehensive FL framework that distinguishes itself from existing platforms by offering new facilities to execute large-scale FL experiments and to account for richly heterogeneous FL device scenarios. Our experiments show that Flower can perform FL experiments at large client scales using only a pair of high-end GPUs. Researchers can then seamlessly migrate experiments to real devices to examine other parts of the design space. We believe Flower provides the community with a critical new tool for FL research and development.
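To make the paradigm concrete, the following is a toy sketch of the federated averaging idea that frameworks such as Flower orchestrate at scale; it is plain NumPy, not Flower's API, and all data, client counts, and learning rates are illustrative assumptions.

```python
# Toy federated averaging: each simulated client trains locally on private data;
# only model weights are shared and averaged by the "server".
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

# Each client holds private data that never leaves the "device".
clients = []
for _ in range(10):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(3)
for rnd in range(20):                          # communication rounds
    local_updates = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(5):                     # local SGD steps on the client's own data
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_updates.append(w)
    global_w = np.mean(local_updates, axis=0)  # server averages the client weights

print("recovered weights:", np.round(global_w, 3))
```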
Traditional data lakes provide critical data infrastructure for analytical workloads by enabling time travel, running SQL queries, ingesting data with ACID transactions, and visualizing petabyte-scale datasets in cloud storage. They allow organizations to break down data silos, unlock data-driven decision-making, improve operational efficiency, and reduce costs. However, as deep learning takes over common analytical workflows, traditional data lakes become less useful for applications such as natural language processing (NLP), audio processing, computer vision, and applications involving non-tabular datasets. This paper presents Deep Lake, an open-source lakehouse for deep learning applications developed at Activeloop. Deep Lake retains the benefits of a vanilla data lake with one key distinction: it stores complex data, such as images, videos, and annotations, as well as tabular data, in the form of tensors and rapidly streams the data over the network to (a) a Tensor Query Language, (b) an in-browser visualization engine, or (c) deep learning frameworks, without sacrificing GPU utilization. Datasets stored in Deep Lake can be accessed from PyTorch, TensorFlow, and JAX, and integrate with numerous MLOps tools.
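A hedged sketch of the streaming workflow described above is shown below; the dataset path points at a public Activeloop dataset, but the tensor names and exact method signatures are assumptions and may differ across Deep Lake versions.

```python
# Assumed sketch: open a remote Deep Lake dataset and stream it into PyTorch.
import deeplake

ds = deeplake.load("hub://activeloop/mnist-train")                # lazily opens the remote dataset
loader = ds.pytorch(batch_size=64, num_workers=4, shuffle=True)   # streams tensors into PyTorch

for batch in loader:
    images, labels = batch["images"], batch["labels"]             # tensor names are assumptions
    # ... training step would go here; data is fetched and decoded on the fly ...
    break
```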
Input pipelines, which ingest and transform input data, are an essential part of training machine learning (ML) models. However, implementing efficient input pipelines is challenging because it requires reasoning about parallelism, asynchrony, and variability in fine-grained profiling information. Our analysis of over two million ML jobs in Google datacenters reveals that a significant number of model training jobs could benefit from faster input data pipelines. At the same time, our analysis reveals that most jobs do not saturate host hardware, pointing in the direction of software-based bottlenecks. Motivated by these findings, we propose Plumber, a tool for finding bottlenecks in ML input pipelines. Plumber uses an extensible and interpretable operational-analysis model to automatically tune parallelism, prefetching, and caching under host resource constraints. Across five representative ML pipelines, Plumber obtains speedups of up to 46x for misconfigured pipelines. By automating caching, Plumber obtains end-to-end speedups of over 40% compared to state-of-the-art tuners.
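The knobs such a tool tunes automatically are shown below set by hand on a toy tf.data pipeline: parallel reads and transformations, caching after the expensive step, and prefetching; the file pattern and parsing logic are placeholders, and this is a generic illustration rather than a pipeline from the paper.

```python
# Hand-tuned tf.data pipeline illustrating parallelism, caching, and prefetching.
import tensorflow as tf

files = tf.data.Dataset.list_files("data/train-*.tfrecord")   # placeholder file pattern


def parse(record):
    # Placeholder parser; a real pipeline would decode images/features here.
    return tf.io.parse_single_example(record, {"x": tf.io.FixedLenFeature([], tf.string)})


dataset = (
    tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)   # parallel transformation
    .cache()                                           # cache after the expensive step
    .shuffle(10_000)
    .batch(128)
    .prefetch(tf.data.AUTOTUNE)                        # overlap the input pipeline with training
)
```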
Modern deep learning applications require increasingly more compute to train state-of-the-art models. To address this demand, large corporations and institutions use dedicated high-performance computing clusters, whose construction and maintenance are both costly and far beyond the budget of most organizations. As a result, some research directions become the exclusive domain of a few large industrial and even fewer academic actors. To alleviate this disparity, smaller groups may pool their computational resources and run collaborative experiments that benefit all participants. This paradigm, known as grid or volunteer computing, has seen successful applications in numerous scientific areas. However, using this approach for machine learning is difficult due to high latency, asymmetric bandwidth, and several challenges unique to volunteer computing. In this work, we carefully analyze these constraints and propose a novel algorithmic framework designed specifically for collaborative training. We demonstrate the effectiveness of our approach for SwAV and ALBERT pretraining in realistic conditions and achieve performance comparable to traditional setups at a fraction of the cost. Finally, we provide a detailed report of successful collaborative pretraining of a language model with 40 participants.
With an increasing number of machine and deep learning applications in high energy physics, convenient access to dedicated infrastructure is a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a parallel environment using the TensorFlow data-parallel strategy. More specifically, we parallelize the training process over multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom training loop, optimized to have higher control over the elements assigned to each GPU worker or TPU core. The quality of the generated data is compared to Monte Carlo simulation. Linear speed-ups of the training process are obtained, while retaining most of the performance in terms of physics results. Additionally, we benchmark the aforementioned approaches at scale over multiple GPU nodes, deploying the training process on different public cloud providers, seeking overall efficiency and cost-effectiveness. The combination of data science, cloud deployment options, and associated economics allows bursting out heterogeneously, exploring the full potential of cloud-based services.
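For reference, the built-in data-parallel path in TensorFlow looks roughly like the sketch below, where `Model.fit` runs under a distribution strategy (the custom-loop variant compared in the paper would instead call `strategy.run` per step); the model and data are toy placeholders, not the GAN from this work.

```python
# Data-parallel Keras training under a TensorFlow distribution strategy.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()            # one replica per local GPU
# For TPUs one would instead resolve a TPU cluster and use tf.distribute.TPUStrategy.

with strategy.scope():                                  # variables are created under the strategy
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((4096, 64))
y = tf.random.normal((4096, 1))
# The global batch is split across replicas; gradients are all-reduced each step.
model.fit(x, y, epochs=2, batch_size=256)
```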
As the computational requirements of machine learning systems and the size and complexity of machine learning frameworks increase, essential framework innovation has become challenging. While computational needs have driven recent advances in compilers, networking, and hardware, utilization of those advances by machine learning tools has occurred at a slower pace. This is in part due to the difficulties involved in prototyping new computational paradigms with existing frameworks. Large frameworks prioritize machine learning researchers and practitioners as end users and pay comparatively little attention to the systems researchers who can push frameworks forward — we argue that both are equally important stakeholders. We introduce Flashlight, an open-source library built to spur innovation in machine learning tools and systems by prioritizing open, modular, customizable internals and state-of-the-art, research-ready models and training setups. Flashlight allows systems researchers to rapidly prototype and experiment with new ideas in machine learning computation with low overhead, competing with and often outperforming other popular machine learning frameworks. We see Flashlight as a tool enabling research that can benefit widely used libraries and bring machine learning and systems researchers closer together. Flashlight is available at https://github.com/flashlight/flashlight.
Comparing different AutoML frameworks is notoriously challenging and often done incorrectly. We introduce an open and extensible benchmark that follows best practices and avoids common mistakes when comparing AutoML frameworks. We conduct a thorough comparison of 9 well-known AutoML frameworks across 71 classification and 33 regression tasks. The differences between the AutoML frameworks are explored with a multi-faceted analysis, evaluating model accuracy, its trade-offs with inference time, and framework failures. We also use Bradley-Terry trees to discover subsets of tasks where the relative AutoML framework rankings differ. The benchmark comes with an open-source tool that integrates with many AutoML frameworks and automates the empirical evaluation process end to end: from framework installation and resource allocation to in-depth evaluation. The benchmark uses public datasets, can easily be extended with other AutoML frameworks and tasks, and has a website with up-to-date results.
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
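A small illustration of the dataflow-graph model described above, using the modern `tf.function` API rather than the TF1-era sessions the paper discusses: Python code is traced into a graph whose operations can be placed on specific devices.

```python
# Trace Python code into a dataflow graph and place an op on an explicit device.
import tensorflow as tf


@tf.function                                   # traces the Python body into a dataflow graph
def affine(x, w, b):
    with tf.device("/CPU:0"):                  # explicit op placement; GPUs/TPUs work the same way
        y = tf.matmul(x, w)
    return y + b


x = tf.random.normal((4, 8))
w = tf.Variable(tf.random.normal((8, 2)))
b = tf.Variable(tf.zeros(2))
print(affine(x, w, b))
graph = affine.get_concrete_function(x, w, b).graph
print(len(graph.as_graph_def().node), "nodes in the traced dataflow graph")
```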
Deep learning based recommendation models (DLRM) are widely used in several business critical applications. Training such recommendation models efficiently is challenging primarily because they consist of billions of embedding-based parameters which are often stored remotely, leading to significant overheads from embedding access. By profiling existing DLRM training, we observe that only 8.5% of the iteration time is spent in the forward/backward pass while the remaining time is spent on embedding and model synchronization. Our key insight in this paper is that access to embeddings has a specific structure and pattern which can be used to accelerate training. We observe that embedding accesses are heavily skewed, with almost 1% of embeddings representing more than 92% of total accesses. Further, we observe that during training we can look ahead at future batches to determine exactly which embeddings will be needed at what iteration in the future. Based on these insights, we propose Bagpipe, a system for training deep recommendation models that uses caching and prefetching to overlap remote embedding accesses with the computation. We designed an Oracle Cacher, a new system component which uses our lookahead algorithm to generate optimal cache update decisions and provide strong consistency guarantees. Our experiments using three datasets and two models show that our approach provides a speedup of up to 6.2x compared to state-of-the-art baselines, while providing the same convergence and reproducibility guarantees as synchronous training.
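A toy illustration of the lookahead idea (not Bagpipe's implementation) follows: because future batches are known, the embedding rows an iteration will need can be prefetched into a local cache before that iteration runs and evicted once they fall out of the lookahead window; the batches, lookahead depth, and remote-fetch stub are placeholders.

```python
# Lookahead-driven prefetching and eviction of embedding rows over a known batch stream.
LOOKAHEAD = 4                                    # how many future batches to inspect

# Placeholder batches of embedding IDs; in DLRM training these come from the input pipeline.
batches = [[1, 5, 7], [5, 9], [2, 5, 7], [9, 11], [1, 2], [5, 7, 11]]

cache = {}                                       # embedding_id -> vector, kept on the trainer


def fetch_remote(ids):
    # Stand-in for a remote parameter-server / embedding-table lookup.
    return {i: [float(i)] * 4 for i in ids}


for step, batch in enumerate(batches):
    # Prefetch: gather ids needed in the next LOOKAHEAD batches that are not cached yet.
    upcoming = {i for future in batches[step: step + LOOKAHEAD] for i in future}
    cache.update(fetch_remote(upcoming - cache.keys()))   # overlaps with compute in a real system

    vectors = [cache[i] for i in batch]          # all cache hits by construction
    # ... forward/backward over `vectors` would go here ...

    # Evict ids not needed within the lookahead window to bound cache size.
    needed_later = {i for future in batches[step + 1: step + 1 + LOOKAHEAD] for i in future}
    for i in list(cache.keys() - needed_later):
        del cache[i]

print("final cache keys:", sorted(cache))
```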
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training remain preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating various optimization techniques used in distributed GNN training. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
Both in the field of feature selection and the field of explainable AI, there is a desire to "rank" features based on their importance. Such a feature importance ranking can then be used to either: (1) reduce dataset size or (2) interpret the machine learning model. In the literature, however, such feature rankings are not evaluated in a systematic, consistent way. Many papers argue in different ways about which feature importance ranking works best. This paper fills this gap by proposing a new evaluation methodology. By using synthetic datasets, feature importance scores can be known beforehand, allowing a more systematic evaluation. To facilitate large-scale experiments using the new methodology, a benchmarking framework called fseval was built in Python. The framework allows running experiments in parallel and distributed over machines on HPC systems. By integrating with an online platform called Weights and Biases, charts can be explored interactively on a live dashboard. The software is released as open source and published as a package on the PyPI platform. The research concludes by exploring one such large-scale experiment, to find the strengths and weaknesses of the participating algorithms along many dimensions.
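A condensed sketch of the evaluation idea (not the fseval package itself) is shown below: generate synthetic data whose informative features are known, rank features with a model's importance scores, and check how many truly informative features land at the top; the model, dataset sizes, and metric are illustrative choices.

```python
# Evaluate a feature ranking against ground-truth informative features of a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

N_INFORMATIVE = 5
# With shuffle=False, the first N_INFORMATIVE columns are the informative ones.
X, y = make_classification(
    n_samples=2000, n_features=20, n_informative=N_INFORMATIVE,
    n_redundant=0, shuffle=False, random_state=0,
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(model.feature_importances_)[::-1]        # best feature first

top_k = set(ranking[:N_INFORMATIVE])
recall_at_k = len(top_k & set(range(N_INFORMATIVE))) / N_INFORMATIVE
print(f"informative features recovered in top-{N_INFORMATIVE}: {recall_at_k:.0%}")
```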
Automated machine learning (AutoML) strives to automatically configure machine learning algorithms and their composition into an overall (software) solution — a machine learning pipeline — tailored to the learning task (dataset) at hand. Over the past decade, AutoML has become a hot research topic with hundreds of contributions. While AutoML offers many prospects, it is also known to be quite resource-intensive, which is one of its main points of criticism. The primary cause of its high resource consumption is that many approaches rely on the (costly) evaluation of many ML pipelines while searching for good candidates. This problem is amplified in the context of research on AutoML methods, due to large-scale experiments conducted with many datasets and approaches, each of them being repeated several times to rule out random effects. In the spirit of recent work on Green AI, this paper aims to raise the awareness of AutoML researchers for this problem and to elaborate on possible remedies. To this end, we identify four categories of actions the community may take towards more sustainable AutoML research, namely approach design, benchmarking, research incentives, and transparency.
This project aims to explore the process of deploying machine learning models on Kubernetes using an open-source tool called Kubeflow [1], an end-to-end ML stack orchestration toolkit. We create an end-to-end machine learning model in the form of a pipeline and analyze various aspects, including setup, model deployment, performance, limitations, and features. We hope our project serves almost as a workshop/introductory report that can help vanilla cloud/Kubernetes users with zero knowledge of Kubeflow to deploy ML models using Kubeflow. From setup on different clouds to serving the trained model over the internet, we provide details and metrics describing the performance of Kubeflow.
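For orientation, a tiny Kubeflow Pipelines sketch in the kfp v1 SDK style is shown below; the component body, pipeline name, and endpoint are placeholders, this is not the pipeline built in the project, and the exact API differs in newer kfp releases.

```python
# Define a one-step pipeline and submit it to a Kubeflow Pipelines endpoint (kfp v1-style SDK).
import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def train(message: str) -> str:
    # Placeholder training step; a real component would pull data and fit a model.
    print("training with:", message)
    return message


train_op = create_component_from_func(train, base_image="python:3.9")


@dsl.pipeline(name="toy-training-pipeline", description="Minimal illustrative pipeline.")
def pipeline():
    train_op("hello from kfp")


if __name__ == "__main__":
    client = kfp.Client(host="http://<kubeflow-pipelines-endpoint>")   # placeholder endpoint
    client.create_run_from_pipeline_func(pipeline, arguments={})
```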