In this paper, we present PARTIME, a software library written in Python and based on PyTorch, designed specifically to speed up neural networks whenever data is continuously streamed over time, for both learning and inference. Existing libraries are designed to exploit data-level parallelism, assuming that samples are batched, a condition that is not naturally met in applications based on streamed data. In contrast, PARTIME starts processing each data sample as soon as it becomes available from the stream. PARTIME wraps the code that implements a feed-forward multi-layer network and distributes the layer-wise processing among multiple devices, such as Graphics Processing Units (GPUs). Thanks to its pipeline-based computational scheme, PARTIME allows the devices to perform computations in parallel. At inference time, this yields scaling capabilities that are theoretically linear in the number of devices. During the learning stage, PARTIME can leverage the non-i.i.d. nature of streamed data, whose samples evolve smoothly over time, for efficient gradient computations. Experiments empirically compare PARTIME with classic non-parallel neural computations in online learning, distributing operations over up to 8 NVIDIA GPUs and showing significant speedups that are almost linear in the number of devices, while mitigating the impact of data transfer overhead.
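To make the scheme concrete, below is a minimal sketch in plain PyTorch (not PARTIME's actual API) of layer-wise pipelining over a stream: each layer lives on its own device, and at every stream tick stage i consumes what stage i-1 produced at the previous tick, so the devices can work concurrently. All module and variable names are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of stream pipelining (not PARTIME's API): stage i at
# tick t processes the activation stage i-1 emitted at tick t-1, so every
# device can run in parallel; outputs emerge with a fixed delay.
devices = [f"cuda:{i}" for i in range(torch.cuda.device_count())] or ["cpu"]
stages = [nn.Linear(256, 256).to(d) for d in devices]
boundary = [None] * (len(stages) - 1)   # buffered activations between stages

@torch.no_grad()                        # inference-only sketch
def pipeline_step(x_t):
    out = None
    for i in reversed(range(len(stages))):      # read before overwrite
        inp = x_t if i == 0 else boundary[i - 1]
        if inp is None:                          # pipeline still filling up
            continue
        y = stages[i](inp.to(devices[i], non_blocking=True))
        if i == len(stages) - 1:
            out = y
        else:
            boundary[i] = y
    return out   # None for the first len(stages)-1 ticks

for t in range(10):                      # samples arriving from a stream
    y_t = pipeline_step(torch.randn(1, 256))
```

Note that each output refers to the input received len(stages)-1 ticks earlier; the learning mode described in the abstract additionally exploits the smooth evolution of consecutive stream samples to keep gradient computations efficient under this delay.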
Deep neural network (DNN) models are usually trained sequentially from one layer to the next, which gives rise to forward, backward, and update locking problems and leads to poor training-time performance. Existing parallel strategies that mitigate these problems deliver suboptimal runtime performance. In this work, we propose a novel layer-wise partitioning and merging, forward and backward pass parallel framework that provides better training performance. The novelty of the proposed work includes: 1) a layer-wise partitioning and merging model that minimizes the communication overhead between devices during training, without the memory cost of existing strategies; 2) forward pass and backward pass parallelization and optimization that address the update-locking problem and minimize the total training cost. Experimental evaluation on real use cases shows that the proposed method outperforms the state of the art in training speed, achieving almost linear speedup without compromising the accuracy of the non-parallel approach.
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production; we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
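For readers unfamiliar with the dataflow model, the sketch below uses the modern tf.function API as a stand-in for the graph-building interface the paper describes: the Python body is traced once into a graph of operations over shared mutable state, which TensorFlow can then place and optimize across devices.

```python
import tensorflow as tf

W = tf.Variable(tf.random.normal([4, 2]))   # shared, mutable state
b = tf.Variable(tf.zeros([2]))

@tf.function                                # traces the body into a dataflow graph
def predict(x):
    return tf.nn.softmax(tf.matmul(x, W) + b)

print(predict(tf.ones([1, 4])).numpy())     # runs the compiled graph
```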
The field of deep learning has witnessed a remarkable shift towards extremely compute- and memory-intensive neural networks. These newer, larger models have enabled researchers to advance the state of the art across a wide range of domains. This phenomenon has spurred the development of algorithms for the distributed training of neural networks over more hardware accelerators. In this paper, we discuss and compare current state-of-the-art frameworks for large-scale distributed deep learning. First, we survey current practices in distributed learning and identify the different types of parallelism used. Then we present empirical results on the performance of large image and language training tasks. In addition, we address their statistical efficiency and memory-consumption behavior. Based on our results, we discuss the algorithmic and implementation aspects of each framework that hinder performance.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
As the size of emerging deep neural network (DNN) models continues to grow, using large GPU clusters to train DNNs is a basic requirement for achieving acceptable training times. In this paper, we consider a scenario in which future increases in cluster size will cause the global batch size used for training models to reach a fundamental limit: beyond a certain point, larger global batch sizes reduce sample efficiency, increasing the overall time to accuracy. Therefore, to achieve further improvements in training performance, we must consider "strong scaling" strategies that keep the global batch size constant and assign smaller batches to each GPU. Unfortunately, this makes it difficult to use cluster resources efficiently. We present DeepPool, a system that addresses this efficiency challenge through two key ideas. First, burst parallelism assigns a large number of GPUs to foreground jobs in bursts, exploiting the unevenness of parallelism across layers. Second, GPU multiplexing prioritizes the throughput of foreground training jobs while packing background training jobs to reclaim underutilized GPU resources, thereby improving cluster-wide utilization. Together, these two ideas enable DeepPool to achieve a 2.2-2.4x improvement in total cluster throughput over standard data parallelism with a single task when the cluster scale is large.
Over the past decade, the size of deep neural networks (DNNs) has grown exponentially, leaving only those with massive datacenter-based resources with the ability to develop and train such models. One of the main challenges for the long tail of researchers, who may have only limited resources (e.g., a single multi-GPU server), is GPU memory capacity relative to model size. The problem is so severe that the memory requirements of training a large-scale DNN model can often exceed the combined capacity of all available GPUs on a single server; this problem only gets worse with the trend towards ever-growing model sizes. Current solutions that rely on virtualizing GPU memory (by swapping to/from CPU memory) incur excessive swapping overhead. In this paper, we present a new training framework, Harmony, and advocate rethinking how DNN frameworks schedule computation and move data in order to push the boundaries of training large-scale models efficiently on a single commodity server. Across various large DNN models, Harmony is able to reduce the swap load by up to two orders of magnitude and obtain a training throughput speedup of up to 7.6x over highly optimized baselines with virtualized memory.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
We introduce Breadth-First Pipeline Parallelism, a novel training schedule which optimizes the combination of pipeline and data parallelism. Breadth-First Pipeline Parallelism lowers training time, cost and memory usage by combining a high GPU utilization with a small batch size per GPU, and by making use of fully sharded data parallelism. Experimentally, we observed increases of up to 53% in training speed.
Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.
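A few lines suffice to illustrate the "code as a model" point made above: the network below is ordinary Python, including data-dependent control flow, and autograd records operations as they execute. Names and sizes are illustrative.

```python
import torch
import torch.nn as nn

# The imperative, "code as a model" style in a nutshell: the network is
# ordinary Python (control flow included) and autograd records operations
# as they run, so standard debugging tools apply.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 1)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        if h.mean() > 0:              # data-dependent Python control flow
            h = h * 2
        return self.fc2(h)

net = TinyNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x, y = torch.randn(32, 8), torch.randn(32, 1)
loss = nn.functional.mse_loss(net(x), y)
loss.backward()                       # gradients from the recorded tape
opt.step()
```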
Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models.
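The sketch below, in plain PyTorch rather than GPipe's actual API, shows the batch-splitting idea on two stages: a mini-batch is cut into micro-batches whose losses are scaled so the accumulated gradient matches full-batch training. A real pipeline would additionally interleave the per-micro-batch forward and backward passes across devices, which this sequential loop does not do.

```python
import torch
import torch.nn as nn

# Conceptual batch-splitting (not GPipe's API): the mini-batch is cut into
# micro-batches whose scaled losses accumulate into one gradient; a real
# pipeline would also interleave these forwards/backwards across devices.
devs = ["cuda:0", "cuda:1"] if torch.cuda.device_count() >= 2 else ["cpu", "cpu"]
stage0 = nn.Sequential(nn.Linear(128, 256), nn.ReLU()).to(devs[0])
stage1 = nn.Linear(256, 10).to(devs[1])
opt = torch.optim.SGD([*stage0.parameters(), *stage1.parameters()], lr=0.01)

def train_step(x, y, num_micro=4):
    opt.zero_grad()
    for mx, my in zip(x.chunk(num_micro), y.chunk(num_micro)):
        h = stage0(mx.to(devs[0]))
        out = stage1(h.to(devs[1]))
        loss = nn.functional.cross_entropy(out, my.to(devs[1]))
        (loss / num_micro).backward()   # gradient accumulation
    opt.step()                          # one update per logical mini-batch

train_step(torch.randn(64, 128), torch.randint(0, 10, (64,)))
```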
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs), a new computational formulation of spatio-temporal 3D CNNs in which videos are processed frame by frame rather than clip by clip. In online tasks demanding frame-wise predictions, Co3D CNNs dispense with the computational redundancy of regular 3D CNNs, namely the repeated convolutions over frames that appear in overlapping clips. We show that Continual 3D CNNs can reuse pre-existing 3D CNN weights to reduce the floating-point operations (FLOPs) per prediction in proportion to the temporal receptive field, while retaining similar memory requirements and accuracy. This is validated with multiple models on Kinetics-400 and Charades: our CoX3D models achieve a state-of-the-art complexity/accuracy trade-off on Kinetics-400, reducing FLOPs by 12.1-15.3x and improving accuracy by 2.3-3.8% compared with regular X3D models, while lowering peak memory consumption by 48%. Moreover, we investigate the transient response of Co3D CNNs at start-up and perform an extensive benchmark of the hardware processing characteristics of publicly available 3D CNNs.
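A toy example of the continual principle applied to a single temporal convolution (illustrative only, not the CoX3D implementation): caching the features of the last k-1 frames lets each new frame cost only one output position's worth of computation.

```python
import torch
import torch.nn as nn

class ContinualTemporalConv(nn.Module):
    """Toy continual temporal convolution: per-frame inference via a cache."""
    def __init__(self, channels=8, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, k) * 0.1)
        self.register_buffer("cache", torch.zeros(1, channels, k))  # last k frames

    @torch.no_grad()
    def forward(self, frame_feat):               # frame_feat: (1, channels)
        self.cache = torch.roll(self.cache, shifts=-1, dims=2)
        self.cache[:, :, -1] = frame_feat        # append the newest frame
        # one output position of the temporal conv, reusing cached frames
        return torch.einsum("bct,oct->bo", self.cache, self.weight)

conv = ContinualTemporalConv()
for _ in range(5):                               # frames arriving one by one
    out = conv(torch.randn(1, 8))
```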
Modern consumer electronic devices have adopted deep-learning-based intelligence services for their key functions. Vendors have recently begun to execute intelligence services on devices in order to keep personal data on the device and to reduce network and cloud costs. We identify the case of updating neural networks with user data without exposing the data outside the device: on-device training. For example, we may add a new class, "my dog Alpha", to a robot vacuum cleaner, adapt speech recognition to a user's accent, or let text-to-speech sound as if the user were speaking. However, the resource constraints of target devices pose significant difficulties. We propose NNTrainer, a lightweight on-device training framework. We describe the optimization techniques for neural networks implemented by NNTrainer, which are evaluated alongside conventional ones. The evaluation shows that NNTrainer can reduce memory consumption to as little as 1/28 without deteriorating accuracy or training time, and that it effectively personalizes applications on devices. NNTrainer is cross-platform, practical, open-source software that is being deployed to millions of devices within the authors' affiliation.
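The personalization pattern described here can be pictured with a generic transfer-learning sketch in plain PyTorch (NNTrainer itself is a C/C++ framework, and all names below are illustrative): the pretrained backbone stays frozen, so only a small head is trained on-device, keeping memory and compute low.

```python
import torch
import torch.nn as nn

# Generic on-device personalization sketch (not NNTrainer's API): freeze a
# pretrained backbone and train only a small head, e.g. to add the new
# class "my dog Alpha" to an existing classifier.
backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU())  # stand-in for a
for p in backbone.parameters():                          # pretrained model
    p.requires_grad = False

head = nn.Linear(128, 3)                  # old classes + the new class
opt = torch.optim.SGD(head.parameters(), lr=0.01)

def on_device_step(x, y):
    with torch.no_grad():                 # frozen features: cheap on memory
        feats = backbone(x)
    loss = nn.functional.cross_entropy(head(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

on_device_step(torch.randn(4, 64), torch.tensor([0, 1, 2, 2]))
```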
The past several years have witnessed the success of Transformer-based models, whose scale and application scenarios continue to grow aggressively. The current landscape of Transformer models is increasingly diverse: model sizes vary drastically, with the largest reaching hundreds of billions of parameters; model characteristics differ due to the sparsity introduced by Mixture-of-Experts; target application scenarios can be latency-critical or throughput-oriented; and deployment hardware can be single- or multi-GPU systems with different types of memory and storage. With such increasing diversity and the rapid evolution of Transformer models, designing a high-performance and efficient inference system is extremely challenging. In this paper, we present DeepSpeed Inference, a comprehensive system solution for Transformer model inference that addresses the above challenges. DeepSpeed Inference consists of (1) a multi-GPU inference solution that minimizes latency while maximizing throughput for both dense and sparse Transformer models when they fit in aggregate GPU memory, and (2) a heterogeneous inference solution that leverages CPU and NVMe memory in addition to GPU memory and compute to enable high inference throughput for large models that do not fit in aggregate GPU memory. DeepSpeed Inference reduces latency by up to 7x over the state of the art for latency-oriented scenarios and increases throughput by over 1.5x for throughput-oriented scenarios. Moreover, it enables inference at unprecedented parameter scales under real-time latency constraints by leveraging hundreds of GPUs. It can serve models 25x larger than GPU-only solutions while delivering 84 TFLOPS (over 50% of the A6000 peak).
Decentralized algorithms are a form of computation that achieves a global goal through local dynamics relying on low-cost communication between directly connected agents. In large-scale optimization tasks involving distributed datasets, decentralized algorithms have shown strong, and sometimes superior, performance compared to distributed algorithms with a central node. Recently, the development of decentralized algorithms for deep learning has attracted great attention, as they are regarded as low-communication-overhead alternatives to approaches that use a parameter server or the Ring-Allreduce protocol. However, the lack of an easy-to-use and efficient software package has kept most decentralized algorithms on paper only. To fill this gap, we introduce BlueFog, a Python library for straightforward, high-performance implementations of diverse decentralized algorithms. Based on a unified abstraction of various communication operations, BlueFog offers intuitive interfaces for implementing a spectrum of decentralized algorithms, from those using a static undirected graph for synchronous operations to those using dynamic and directed graphs for asynchronous operations. BlueFog also adopts several system-level acceleration techniques to further optimize performance on deep learning tasks. On mainstream DNN training tasks, BlueFog reaches much higher throughput and achieves an overall 1.2x-1.8x speedup over a state-of-the-art Ring-Allreduce-based distributed deep learning package. BlueFog is open source at https://github.com/bluefog-lib/bluefog.
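The core decentralized step can be simulated in a few lines (a self-contained toy, not BlueFog's interface): each agent mixes its model with its ring neighbors through a doubly stochastic matrix W and then takes a local gradient step, with no central server involved.

```python
import torch

# Toy simulation of decentralized gradient descent (not BlueFog's API):
# agents on a ring average with neighbors via a mixing matrix W and take
# local gradient steps; no central parameter server is involved.
n = 5                                  # number of agents on a ring
W = torch.zeros(n, n)
for i in range(n):                     # doubly stochastic ring weights
    for j in (i, (i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0

targets = torch.randn(n, 4)            # each agent's private optimum
x = torch.zeros(n, 4)                  # agents' local models

for _ in range(200):
    grad = x - targets                 # gradient of 0.5 * ||x_i - t_i||^2
    x = W @ x - 0.1 * grad             # neighbor averaging + local step

# every row of x ends close to targets.mean(0); exact consensus would
# need a decaying step size or gradient tracking
print((x - targets.mean(0)).norm())
```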
Data-intensive workloads and applications, such as machine learning, are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data-movement operations and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as near-data processing (NDP) and accelerators for machine learning, in particular neural networks (NNs), has grown significantly. Emerging memory technologies such as ReRAM and 3D-stacked memory are promising for efficiently architecting NDP-based accelerators for NNs, owing to their ability to work both as high-density/low-energy storage and as near-memory computation/search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques according to the memory technology employed, we highlight their similarities and differences. Finally, we discuss the open challenges and future perspectives that need to be explored in order to improve and extend NDP architectures for future computing platforms. This paper will be valuable to computer architects, chip designers, and researchers in the area of machine learning.
Deep learning uses neural networks parameterized by their weights. The networks are usually trained by tuning the weights to directly minimize a given loss function. In this paper, we propose re-parameterizing the weights into targets for the firing strengths of the individual nodes in the network. Given a set of targets, it is possible to compute the weights that make the firing strengths best satisfy those targets. It is argued that training with targets, through a process we call cascade untangling, solves the problem of exploding gradients and makes the loss-function surface smoother, thereby leading to easier and faster training and potentially better generalization of the neural network. It also allows deeper and recurrent network structures to be learned more easily. The necessary conversion of targets into weights incurs an extra computational cost, which is manageable in many cases. Learning in target space can be combined with existing neural-network optimizers for additional gains. Experimental results demonstrate the speed of learning in target space, along with examples of improved generalization for fully connected and convolutional networks, as well as the ability to recall and process long time sequences and to perform natural language processing with recurrent networks.
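A single-layer toy version of this re-parameterization (illustrative only, not the paper's full cascade-untangling procedure): the trainable quantities are activation targets, and the weights that best realize them are recovered inside the training loop by ridge-regularized least squares, which keeps the whole computation differentiable with respect to the targets.

```python
import torch

# Single-layer toy of learning in target space: the optimizer tunes
# activation *targets* T, and the weights realizing them are recovered by
# ridge-regularized least squares at every step.
torch.manual_seed(0)
X = torch.randn(100, 8)                        # fixed layer inputs
Y_true = torch.randn(100, 4)                   # supervision for the toy layer
T = torch.zeros(100, 4, requires_grad=True)    # trainable activation targets

def weights_from_targets(X, T, ridge=1e-4):
    # argmin_W ||X W - T||^2 + ridge * ||W||^2, differentiable w.r.t. T
    A = X.T @ X + ridge * torch.eye(X.shape[1])
    return torch.linalg.solve(A, X.T @ T)

opt = torch.optim.Adam([T], lr=0.05)
for _ in range(100):
    W = weights_from_targets(X, T)             # weights induced by targets
    loss = ((X @ W - Y_true) ** 2).mean()      # loss evaluated via the weights
    opt.zero_grad()
    loss.backward()
    opt.step()
```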
Deploying modern TinyML tasks on small, battery-constrained IoT devices requires high computational energy efficiency. Analog in-memory computing (IMC) based on non-volatile memory (NVM) promises major efficiency gains for deep neural network (DNN) inference and serves as on-chip memory storage for the DNN weights. However, the functional-flexibility limitations of IMC and their impact on performance, energy, and area efficiency are not yet fully understood at the system level. To target practical end-to-end IoT applications, IMC arrays must be enclosed in heterogeneous programmable systems, which introduces new system-level challenges that we aim to address in this work. We present a heterogeneous, tightly coupled cluster architecture integrating 8 RISC-V cores, an in-memory computing accelerator (IMA), and digital accelerators. We benchmark the system on a highly heterogeneous workload such as a bottleneck layer from MobileNetV2, showing an 11.5x performance and a 9.5x energy-efficiency improvement over highly optimized parallel execution on the cores. Furthermore, we explore the requirements for end-to-end inference of a full mobile-grade DNN (MobileNetV2) in terms of IMC array resources by scaling our heterogeneous architecture to a multi-array accelerator. Our results show that, on end-to-end inference of MobileNetV2, our solution performs better in execution latency than existing programmable architectures, and an order of magnitude better than state-of-the-art heterogeneous solutions integrating in-memory-computing analog cores.
Recent deep learning models are difficult to train using a large batch size, because commodity machines may not have enough memory to accommodate both the model and a large data batch. The batch size is one of the hyper-parameters of training, and it is dependent on and limited by the target machine's memory capacity, because the batch can only occupy the memory remaining after the model is loaded. Moreover, the size of each data item matters: the larger each item, the smaller the batch that fits into the remaining memory. This paper proposes a framework called Micro-Batch Streaming (MBS) to address this problem. MBS splits a batch into micro-batches sized to fit in the remaining memory and streams them through the model sequentially, using a loss-normalization algorithm based on gradient accumulation to maintain performance. The purpose of our method is to let deep learning models train with batch sizes that exceed a system's memory capacity, without increasing the memory size or using multiple devices (GPUs).
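A minimal sketch of this idea in PyTorch (hypothetical helper names, not the paper's code): split the logical batch into micro-batches that fit in memory, stream them sequentially, and normalize each loss so the accumulated gradient matches full-batch training.

```python
import torch
import torch.nn as nn

# Sketch of micro-batch streaming: one optimizer update per *logical* batch,
# built by accumulating normalized losses over memory-sized micro-batches.
model = nn.Linear(512, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def mbs_step(x, y, micro_batch_size=16):
    opt.zero_grad()
    chunks = list(zip(x.split(micro_batch_size), y.split(micro_batch_size)))
    for mx, my in chunks:
        loss = nn.functional.cross_entropy(model(mx), my)
        # loss normalization: accumulated gradients match the full-batch
        # mean (exact when all micro-batches have equal size)
        (loss / len(chunks)).backward()
    opt.step()

mbs_step(torch.randn(64, 512), torch.randint(0, 10, (64,)))
```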