Choosing the appropriate programming paradigm for high-performance computing on low-power devices can be helpful to speed up calculations. Many Android devices have an integrated GPU, and, although not officially supported, the OpenCL framework can be used on Android devices to address these GPUs. OpenCL supports thread and data parallelism. Applications that use the GPU must take into account that they can be suspended by the user or by the Android operating system at any moment. We have created a wrapper library that enables the use of OpenCL on Android devices. Already written OpenCL programs can be executed with almost no modification. We use this library to compare the performance of the DBSCAN and k-means algorithms on the integrated GPU of an ARMv7 tablet against other single- and multi-threaded implementations on the same device. We investigate which programming paradigms and languages allow the best trade-off between execution speed and energy consumption. Using GPUs for HPC on Android devices can help carry out computationally intensive machine learning or data mining tasks in remote areas, under harsh environmental conditions, and in domains where energy supply is an issue.
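The authors' Android wrapper library is not shown here; as a hedged illustration of the data-parallel pattern it targets, the following PyOpenCL sketch implements the k-means assignment step as an OpenCL kernel with one work-item per point (kernel source and host code are this sketch's own, not the paper's):

```python
# Data-parallel k-means assignment step in OpenCL, via PyOpenCL.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void assign(__global const float *pts,   // n x d, row-major
                     __global const float *ctrs,  // k x d, row-major
                     __global int *labels,
                     const int n, const int d, const int k) {
    int i = get_global_id(0);                     // one work-item per point
    if (i >= n) return;
    float best = INFINITY; int arg = 0;
    for (int c = 0; c < k; ++c) {
        float dist = 0.0f;
        for (int j = 0; j < d; ++j) {
            float diff = pts[i*d + j] - ctrs[c*d + j];
            dist += diff * diff;
        }
        if (dist < best) { best = dist; arg = c; }
    }
    labels[i] = arg;
}
"""

n, d, k = 10_000, 2, 5
pts = np.random.rand(n, d).astype(np.float32)
ctrs = np.random.rand(k, d).astype(np.float32)
labels = np.empty(n, dtype=np.int32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
pts_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pts)
ctrs_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=ctrs)
lbl_g = cl.Buffer(ctx, mf.WRITE_ONLY, labels.nbytes)

prg = cl.Program(ctx, KERNEL).build()
prg.assign(queue, (n,), None, pts_g, ctrs_g, lbl_g,
           np.int32(n), np.int32(d), np.int32(k))
cl.enqueue_copy(queue, labels, lbl_g)
print(np.bincount(labels, minlength=k))   # cluster sizes
```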
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
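A minimal sketch of the two ideas at the heart of this abstract, using the public TensorFlow API: computation traced into a dataflow graph (tf.function) and explicit shared state mutated by that graph (tf.Variable), here for repeated SGD steps on a linear model:

```python
# Dataflow graph (tf.function) plus explicit, mutable shared state
# (tf.Variable): one SGD step on a tiny linear model.
import tensorflow as tf

w = tf.Variable([0.0, 0.0])         # shared state mutated by the graph
b = tf.Variable(0.0)

@tf.function                        # traces the Python body into a graph
def sgd_step(x, y, lr=0.1):
    with tf.GradientTape() as tape:
        pred = tf.tensordot(x, w, axes=1) + b
        loss = tf.reduce_mean((pred - y) ** 2)
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(lr * dw)           # state-mutating ops inside the graph
    b.assign_sub(lr * db)
    return loss

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.constant([5.0, 6.0])
for _ in range(100):
    loss = sgd_step(x, y)
print(float(loss))                  # loss after 100 graph executions
```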
Agent-based modeling (ABM), simulation (ABS), and distributed computing (ABC) are established methods, and the Internet and web-based technologies are suitable carriers. This paper is a technical report with some tutorial character on the JavaScript Agent Machine (JAM) platform and on programming agents with AgentJS, a subset of the widely used JavaScript programming language for programming mobile, state-based reactive agents. Beyond motivating particular design choices and introducing the core concepts of the architecture and of agent programming in JavaScript, short examples illustrate the capabilities of the JAM platform and its components for deploying large-scale multi-agent systems in strongly heterogeneous environments such as the Internet. JAM is suitable for deployment in strongly heterogeneous and mobile environments. Finally, JAM can be used for ABC as well as for ABS in a unified methodology, ultimately enabling mobile crowd sensing combined with simulation.
Machine learning algorithms must be able to cope with large data sets efficiently. They must therefore scale well on any modern system and be able to exploit the computing power of accelerators independently of the vendor. In the field of supervised learning, support vector machines (SVMs) are widely used. However, even modern and optimized implementations such as LIBSVM or ThunderSVM do not scale well on cutting-edge hardware for large, non-trivial dense data sets: most SVM implementations are based on sequential minimal optimization, an inherently sequential algorithm. Hence, they are not suitable for highly parallel GPUs. Furthermore, we are not aware of a performance-portable implementation that supports CPUs and GPUs from different vendors. We have developed the PLSSVM library to solve both issues. First, we reformulate the SVM as a least-squares problem. Training an SVM then boils down to solving a system of linear equations, for which highly parallel algorithms are known. Second, we provide a hardware-independent yet efficient implementation: PLSSVM uses different interchangeable backends (OpenMP, CUDA, OpenCL, SYCL) to support modern hardware from various vendors such as NVIDIA, AMD, or Intel, on multiple GPUs. PLSSVM can be used as a drop-in replacement for LIBSVM. Compared to LIBSVM and ThunderSVM, we observe speedups of up to 10 on CPUs and GPUs. Our implementation scales on many-core CPUs with a parallel speedup of 74.7 on up to 256 CPU threads, and on multiple GPUs with a parallel speedup of 3.71 on four GPUs. Code, utility scripts, and documentation are all available on GitHub: https://github.com/sc-sgs/plssvm.
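A sketch of the least-squares reformulation the abstract describes, not PLSSVM's actual code: with an L2 loss, SVM training reduces to one linear system, which a highly parallel iterative solver such as conjugate gradient can handle (bias term omitted for brevity):

```python
# Least-squares SVM in miniature: training = solving (K + I/C) a = y
# with conjugate gradient, then predicting with the kernel expansion.
import numpy as np
from scipy.sparse.linalg import cg

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances via the |a|^2 - 2ab + |b|^2 identity.
    sq = (A**2).sum(1)[:, None] - 2 * A @ B.T + (B**2).sum(1)[None, :]
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.sign(X[:, 0] + X[:, 1])               # labels in {-1, +1}

C = 1.0
K = rbf_kernel(X, X)
alpha, info = cg(K + np.eye(len(X)) / C, y)  # the LS-SVM linear system
assert info == 0                             # CG converged

def predict(Xq):
    return np.sign(rbf_kernel(Xq, X) @ alpha)

print((predict(X) == y).mean())              # training accuracy
```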
This paper presents a new domain-specific embedded language (DSEL) dedicated to software-defined radio (SDR). From a set of carefully designed components, it enables the construction of efficient software digital communication systems able to exploit the parallelism of modern processor architectures in a simple and expressive way. In particular, the proposed DSEL enables the combination of pipelining and sequence duplication techniques to extract both temporal and spatial parallelism from digital communication systems. We leverage the DSEL capabilities on a real use case: a fully digital transceiver for the widely used DVB-S2 standard, designed entirely in software. Through evaluation, we show how the proposed software DVB-S2 transceiver gets the most out of a modern high-end multicore CPU target.
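The DSEL itself targets C++ multicore CPUs; the following toy Python thread pipeline only illustrates the two named techniques: pipelining chains stages so they run concurrently (temporal parallelism), while sequence duplication runs replicas of a slow stage side by side (spatial parallelism):

```python
# Toy pipeline: stage 1 works on frame i+1 while the duplicated
# stage-2 replicas are still processing frame i.
import threading, queue

def stage(fn, inq, outq, pills_out=1):
    while True:
        item = inq.get()
        if item is None:                  # shutdown signal
            for _ in range(pills_out):
                outq.put(None)
            return
        outq.put(fn(item))

N_DUP = 2                                 # replicas of the slow stage
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()

threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2, N_DUP)).start()
for _ in range(N_DUP):                    # "sequence duplication"
    threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

for frame in range(8):
    q1.put(frame)
q1.put(None)

results, pills = [], 0
while pills < N_DUP:
    item = q3.get()
    if item is None:
        pills += 1
    else:
        results.append(item)
print(sorted(results))                    # 2*frame + 1 for each frame
```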
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
The data loader, responsible for moving data from storage to the GPU while training machine learning models, may dramatically affect the performance of training jobs. Recent advances have shown promise not only by considerably decreasing training time, but also by offering new features such as loading data from remote storage like S3. In this paper, we are the first to distinguish the data loader as a separate component in the deep learning (DL) workflow and to outline its structure and features. Finally, we offer a comprehensive comparison of the different data loaders available, their trade-offs in terms of functionality, usability, and performance, and the insights derived from them.
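A standard PyTorch setup sketching the component this paper isolates: the data loader overlaps storage I/O and preprocessing with GPU computation via worker processes, pinned memory, and prefetching (the dataset class here is a synthetic stand-in):

```python
# The data-loader component in isolation: parallel fetch/decode workers
# feeding batches to the training loop.
import torch
from torch.utils.data import Dataset, DataLoader

class RandomImages(Dataset):
    """Stand-in for a dataset that reads and decodes files from storage."""
    def __len__(self):
        return 10_000
    def __getitem__(self, idx):
        return torch.randn(3, 224, 224), idx % 10

if __name__ == "__main__":        # required on spawn-based platforms
    loader = DataLoader(
        RandomImages(),
        batch_size=64,
        shuffle=True,
        num_workers=4,            # parallel fetch/decode processes
        pin_memory=True,          # faster host-to-GPU copies
        prefetch_factor=2,        # batches prefetched per worker
    )
    for images, labels in loader:
        pass                      # training step would consume the batch
```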
We present new open-source software for the fast evaluation of quantum circuits and adiabatic evolution which takes full advantage of hardware accelerators. The growing interest in quantum computing and the recent developments of quantum hardware devices motivate the development of new advanced computational tools focused on performance and usage simplicity. In this work we introduce a new quantum simulation framework that enables developers to delegate all complicated aspects of hardware or platform implementation to the library, so they can focus on the problem and quantum algorithms at hand. This software is designed from scratch with simulation performance, code simplicity, and a user-friendly interface as target goals. It takes advantage of hardware acceleration such as multi-threading CPUs, single-GPU, and multi-GPU devices.
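This is not the library's API; as a rough sketch of what such simulators accelerate, the NumPy snippet below applies a single-qubit gate to an n-qubit statevector by tensor contraction, the core operation that maps well to GPUs:

```python
# Statevector simulation in miniature: a gate on qubit q is a small
# matrix contracted against the 2^n-amplitude state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit state."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))  # contract on axis q
    psi = np.moveaxis(psi, 0, q)                    # restore axis order
    return psi.reshape(-1)

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # |000>
for q in range(n):
    state = apply_1q(state, H, q, n)    # uniform superposition
print(np.round(np.abs(state) ** 2, 3))  # all 8 outcomes at prob 0.125
```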
Modern life is driven by electronic devices connected to the Internet. The emerging research field of the Internet of Things (IoT) has become popular, just as the number of connected devices has steadily increased, now exceeding 50 billion. Since many of these devices are used to perform computer vision (CV) tasks, it is essential to understand their power consumption in relation to their performance. We report the power consumption profile and analysis of the NVIDIA Jetson Nano board while performing object classification. The authors conducted an extensive analysis of per-frame power consumption and frames-per-second (FPS) output using the YOLOv5 models. The results show that YOLOv5n outperforms the other YOLOv5 variants in terms of throughput (i.e., 12.34 FPS) and low power consumption (i.e., 0.154 mWh/frame).
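A back-of-the-envelope consistency check of the reported figures, under the assumption that energy per frame equals average power divided by throughput:

```python
# Implied average power from the reported YOLOv5n numbers, assuming
# mWh/frame = P[W] / (FPS * 3600 s/h) * 1000.
fps = 12.34                 # reported YOLOv5n throughput
mwh_per_frame = 0.154       # reported YOLOv5n energy per frame

implied_power_w = mwh_per_frame / 1000 * fps * 3600
print(f"implied average power: {implied_power_w:.2f} W")   # ~6.8 W
```

The result, roughly 6.8 W, is a plausible operating point for a Jetson Nano, which lends the two reported numbers internal consistency.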
General matrix multiplication, or GEMM, kernels take center stage in high-performance computing and machine learning. Recent NVIDIA GPUs include GEMM accelerators, such as NVIDIA's Tensor Cores. Their exploitation is hampered by the two-language problem: it requires either low-level programming, which implies low programmer productivity, or the use of libraries that only offer a limited set of components. Because rephrasing algorithms in terms of established components often introduces overhead, the libraries' lack of flexibility limits the freedom to explore new algorithms. Researchers using GEMMs can hence not enjoy programming productivity, high performance, and research flexibility all at once. In this paper we solve this problem. We present three sets of abstractions and interfaces to program GEMMs within the scientific Julia programming language. The interfaces and abstractions are co-designed for researchers' needs and Julia's features to achieve sufficient separation of concerns and the flexibility to easily extend basic GEMMs without paying a performance price. Comparing our GEMMs to the state-of-the-art libraries cuBLAS and CUTLASS, we demonstrate that our performance is in the same ballpark as the libraries, and in some cases even exceeds it, without having to write a single line of code in CUDA C++ or assembly, and without facing flexibility limitations.
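The paper's interfaces are in Julia; the NumPy sketch below only illustrates the structure such GEMM abstractions expose for customization: the computation decomposes into tiles, and the per-tile epilogue is a natural point to swap in fused operators without rewriting the kernel:

```python
# Tiled GEMM with a customizable per-tile epilogue (here: fused ReLU).
import numpy as np

def tiled_gemm(A, B, tile=64, epilogue=lambda acc: acc):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            acc = np.zeros((min(tile, m - i), min(tile, n - j)), A.dtype)
            for p in range(0, k, tile):       # accumulate over k tiles
                acc += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
            C[i:i+tile, j:j+tile] = epilogue(acc)   # customizable step
    return C

A = np.random.rand(200, 300).astype(np.float32)
B = np.random.rand(300, 150).astype(np.float32)
assert np.allclose(tiled_gemm(A, B), A @ B, atol=1e-2)
C_relu = tiled_gemm(A, B, epilogue=lambda acc: np.maximum(acc, 0))
```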
Privacy and security challenges in machine learning (ML) have become a critical topic, driven by ML's pervasive development and the recent demonstration of its large attack surface. As a mature systems-oriented approach, confidential computing is increasingly used in both academia and industry to improve privacy and security in various ML scenarios. In this paper, we systematize the findings on confidential computing-assisted ML security and privacy techniques that provide i) confidentiality guarantees and ii) integrity guarantees. We further identify key challenges and provide a dedicated analysis of the limitations of existing trusted execution environment (TEE) systems for ML use cases. We discuss prospective work, including grounded privacy definitions, partitioned ML execution, dedicated TEE designs for ML, TEE-aware ML, and ML full-pipeline guarantees. These potential solutions can help achieve strong TEE-assisted ML guarantees without introducing excessive computation and system costs.
This work focuses on an efficient, agile design methodology for domain-specific accelerators. We employ feature-by-feature enhancement of a vertical development stack and apply it to the TVM/VTA inference accelerator. We have enhanced the VTA design space and enabled end-to-end support for additional workloads. This is accomplished by augmenting the VTA micro-architecture and instruction set architecture (ISA), as well as by enhancing the TVM compilation stack to support a wide range of VTA configurations. The VTA TSIM implementation (based on Chisel) has been enhanced with fully pipelined versions of the ALU/GEMM execution units. In TSIM, the memory width can now range between 8 and 64 bytes. Field widths have been made more flexible to support larger scratchpads. New instructions have been added: element-wise 8-bit multiplication to support depthwise convolution, and load with a choice of pad values to support max pooling. Support for more layers and improved double buffering have also been added. Fully pipelining the ALU/GEMM helps significantly: 4.9x fewer cycles with a minimal area change for running ResNet-18 under the default configuration. Configurations featuring a further 11.5x reduction in cycle count at the cost of 12x greater area can be instantiated. Many points on the area-performance Pareto curve are shown, showcasing the balance of execution unit sizing, memory interface width, and scratchpad sizing. Finally, VTA is now able to run MobileNet 1.0 and all ResNet layers, including the previously disabled pooling and fully connected layers. The TVM/VTA architecture has always supported end-to-end workload evaluation on RTL within minutes. With our modifications, it now offers a much larger number of feasible configurations with a wide range of cost vs. performance trade-offs. All the capabilities mentioned are available in open-source forks, and a subset of these capabilities has already been upstreamed.
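A sketch (not VTA code) of why an element-wise multiply primitive enables depthwise convolution: each channel is convolved with its own filter independently, so the operation reduces to per-channel shifted element-wise multiply-accumulates rather than a cross-channel GEMM:

```python
# Depthwise convolution decomposed into shifted element-wise
# multiply-accumulates, one scalar weight per channel per tap.
import numpy as np

def depthwise_conv2d(x, w):
    """x: (C, H, W) input; w: (C, kh, kw), one filter per channel."""
    C, H, W = x.shape
    _, kh, kw = w.shape
    out = np.zeros((C, H - kh + 1, W - kw + 1), dtype=x.dtype)
    for di in range(kh):
        for dj in range(kw):
            # Shifted window times a per-channel scalar: a pure
            # element-wise multiply-accumulate, no cross-channel sum.
            out += (x[:, di:di+out.shape[1], dj:dj+out.shape[2]]
                    * w[:, di, dj][:, None, None])
    return out

x = np.random.rand(8, 16, 16).astype(np.float32)
w = np.random.rand(8, 3, 3).astype(np.float32)
print(depthwise_conv2d(x, w).shape)   # (8, 14, 14)
```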
K-nearest neighbor search is one of the fundamental tasks in various applications, and the hierarchical navigable small world (HNSW) algorithm has recently drawn attention in large-scale cloud services, as it easily scales up the database while offering fast search. On the other hand, computational storage devices (CSDs), which combine programmable logic and storage modules on a single board, are becoming popular as a way to address the data-bandwidth bottleneck of modern computing systems. In this paper, we propose a computational storage platform that accelerates a graph-based nearest neighbor search algorithm on SmartSSD CSDs. To this end, we modify the algorithm to make it more hardware-friendly and implement two types of accelerators using HLS- and RTL-based methodologies with various optimization methods. In addition, we scale the proposed platform out to four SmartSSDs and apply graph parallelism to boost system performance further. As a result, the proposed computational storage platform achieves a throughput of 75.59 queries per second on the SIFT1B dataset at 258.66 W power dissipation, which is 12.83x faster, and 10.43x and 24.33x more energy-efficient, than conventional CPU-based and GPU-based server platforms, respectively. With multi-terabyte storage and custom acceleration capability, we believe the proposed computational storage platform is a promising solution for cost-sensitive cloud datacenters.
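The paper's accelerators are HLS/RTL designs on SmartSSDs; the snippet below only shows the host-side HNSW workload being accelerated, using the common hnswlib package: build a graph index, then issue k-NN queries against it:

```python
# HNSW index construction and k-NN querying with hnswlib.
import numpy as np
import hnswlib

dim, num = 128, 10_000                       # SIFT-like dimensionality
data = np.random.rand(num, dim).astype(np.float32)

index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=num, ef_construction=200, M=16)
index.add_items(data, np.arange(num))
index.set_ef(50)                             # search-time accuracy knob

labels, dists = index.knn_query(data[:5], k=10)
print(labels.shape)                          # (5, 10) nearest neighbors
```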
Array programming provides a powerful, compact, expressive syntax for accessing, manipulating, and operating on data in vectors, matrices, and higher-dimensional arrays [1]. NumPy is the primary array programming library for the Python language [2,3,4,5]. It plays an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, material science, engineering, finance, and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [6] and the first imaging of a black hole [7]. Here we show how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring, and analyzing scientific data. NumPy is the foundation upon which the entire scientific Python universe is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Because of its central position in the ecosystem, NumPy increasingly plays the role of an interoperability layer between these new array computation libraries.
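The array paradigm the passage describes, in brief: whole-array expressions replace explicit loops, and broadcasting pairs arrays of different shapes, here standardizing the columns of a matrix:

```python
# Vectorization and broadcasting: standardize each column of a matrix
# in a single whole-array expression.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))

# (1000, 3) broadcasts against the (3,) per-column mean and std.
standardized = (data - data.mean(axis=0)) / data.std(axis=0)

print(standardized.mean(axis=0).round(6))   # ~[0, 0, 0]
print(standardized.std(axis=0).round(6))    # ~[1, 1, 1]
```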
There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms, such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs), requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that is competitive with state-of-the-art, hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.
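A hedged sketch of the flow the abstract describes, using TVM's public Relay API (details vary across releases, so treat this as approximate): import a model into the graph IR, apply graph- and operator-level optimization, and build for a chosen back-end:

```python
# Import -> optimize -> build -> run with TVM Relay (approximate API).
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Hypothetical model file; any ONNX graph with a known input shape works.
onnx_model = onnx.load("resnet18.onnx")
shape_dict = {"input": (1, 3, 224, 224)}   # input name must match the model
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Graph-level passes (including operator fusion) run at opt_level=3.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)  # CPU back-end

dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)
```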
Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. high-level dynamic programs have to be slow; 2. one must prototype in one language and then rewrite in another language for speed or deployment; and 3. there are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design, a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience.
Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.
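The imperative, code-as-model style the abstract describes, in miniature: the forward pass is ordinary Python, so data-dependent control flow and debugging work line by line, while autograd and GPU execution still apply:

```python
# Imperative PyTorch: the model is plain Python code, executed eagerly.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 16)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        if h.mean() > 0.5:          # data-dependent Python control flow
            h = h * 2
        return self.fc2(h)

model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()                     # eager autograd, no separate graph step
opt.step()
print(loss.item())
```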
A growing number of Machine Learning Frameworks recently made Deep Learning accessible to a wider audience of engineers, scientists, and practitioners, by allowing straightforward use of complex neural network architectures and algorithms. However, since deep learning is rapidly evolving, not only through theoretical advancements but also with respect to hardware and software engineering, ML frameworks often lose backward compatibility and introduce technical debt that can lead to bottlenecks and sub-optimal resource utilization. Moreover, the focus is in most cases not on deep learning engineering, but rather on new models and theoretical advancements. In this work, however, we focus on engineering, more specifically on the data loading pipeline in the PyTorch Framework. We designed a series of benchmarks that outline performance issues of certain steps in the data loading process. Our findings show that for classification tasks that involve loading many files, like images, the training wall-time can be significantly improved. With our new, modified ConcurrentDataloader we can reach improvements in GPU utilization and significantly reduce batch loading time, up to 12X. This allows for the use of cloud-based, S3-like object storage for datasets while keeping training time comparable to storing datasets on local drives.
We present TOD, a system for efficient and scalable outlier detection (OD) on distributed multi-GPU machines. A key idea behind TOD is decomposing OD applications into basic tensor algebra operations. This decomposition enables TOD to accelerate OD computations by leveraging recent advances in deep learning infrastructure in both hardware and software. Moreover, to deploy costly OD algorithms on modern GPUs with limited memory, we introduce two key techniques. First, provable quantization speeds up OD computations and reduces their memory footprint by performing specific floating-point operations in lower precision while provably guaranteeing no accuracy loss. Second, to exploit the aggregated compute resources and memory capacity of multiple GPUs, we introduce automatic batching, which decomposes OD computations into small batches for parallel execution on multiple GPUs. TOD supports a comprehensive and diverse set of OD algorithms, such as LOF, PCA, and HBOS, as well as utility functions. Extensive evaluation on real and synthetic OD datasets shows that TOD is on average 11.6x faster (with a maximum speedup of 38.9x) than PyOD, the leading CPU-based OD system, and can handle much larger datasets than existing GPU baselines. Notably, TOD allows straightforward integration of additional OD algorithms and provides a unified framework for combining classical OD algorithms with deep learning methods. These combinations result in an unbounded number of OD methods, many of which are novel and can be easily prototyped in TOD.
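A miniature of the decomposition TOD relies on (not TOD's own code): a kNN-distance outlier score expressed as two batched tensor operations, a pairwise-distance matrix and a top-k reduction, both GPU-friendly:

```python
# kNN-distance outlier scores as tensor algebra: cdist + topk.
import torch

def knn_outlier_scores(X, k=5):
    d = torch.cdist(X, X)                    # (n, n) pairwise distances
    d.fill_diagonal_(float("inf"))           # exclude self-distance
    knn, _ = torch.topk(d, k, dim=1, largest=False)
    return knn[:, -1]                        # distance to k-th neighbor

X = torch.randn(1000, 8)
X[0] += 10                                   # plant one outlier
if torch.cuda.is_available():                # same code runs on GPU
    X = X.cuda()
scores = knn_outlier_scores(X)
print(int(scores.argmax()))                  # 0: the planted outlier
```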
Deep learning training is an expensive process that makes extensive use of GPUs, but not all model training saturates a modern, powerful GPU. Multi-Instance GPU (MIG) is a new technology introduced by NVIDIA that can partition a GPU to better fit workloads that do not require all of the memory and compute resources of a full GPU. In this paper, we examine the performance of a MIG-enabled A100 GPU under deep learning workloads of three sizes, focusing on image recognition training with ResNet models. We also examine the behavior of these workloads when running in isolation on the various MIG instances allowed by the GPU, in addition to running them in parallel on homogeneous instances co-located on the same GPU. Our results show that when the workloads are too small to utilize the whole GPU in isolation, using MIG can significantly improve GPU utilization. By training multiple small models in parallel, more work can be performed by the GPU per unit of time despite the increase in time per epoch, leading to roughly 3x throughput. In contrast, for medium- and large-sized workloads that already utilize the whole GPU well, MIG provides only marginal performance improvements. Nevertheless, we observe that training models in parallel on separate MIG partitions exhibits no interference, underscoring the value of having MIG functionality on modern GPUs.