Embedded machine learning (ML) systems have become the dominant platform for deploying ML serving tasks and are projected to become equally important for training ML models. With this comes the challenge of overall efficient deployment, in particular low-power and high-throughput implementations, under stringent memory constraints. In this context, non-volatile memory (NVM) technologies such as STT-MRAM and SOT-MRAM offer significant advantages over conventional SRAM due to their non-volatility, higher cell density, and scalability. While prior work has investigated several architectural implications of NVM for generic applications, in this work we present DeepNVM++, a comprehensive framework to characterize, model, and analyze NVM-based caches in GPU architectures for deep learning (DL) applications by combining technology-specific circuit-level models with the actual memory behavior of various DL workloads. DeepNVM++ relies on iso-capacity and iso-area performance and energy models for last-level caches implemented with conventional SRAM and emerging STT-MRAM and SOT-MRAM technologies. In the iso-capacity case, STT-MRAM and SOT-MRAM provide up to 3.8x and 4.7x energy-delay product (EDP) reduction and 2.4x and 2.8x area reduction compared to conventional SRAM, respectively. Under iso-area assumptions, STT-MRAM and SOT-MRAM provide up to 2.2x and 2.4x EDP reduction and accommodate 2.3x and 3.3x more cache capacity than SRAM, respectively. We also perform a scalability analysis and show that STT-MRAM and SOT-MRAM achieve EDP reductions over SRAM for large cache capacities. DeepNVM++ is demonstrated on STT-/SOT-MRAM technologies but can be used for the characterization, modeling, and analysis of any NVM technology for last-level caches in GPUs running DL applications.
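To make the metric behind these comparisons concrete, here is a minimal sketch of how an energy-delay product comparison is formed; the per-access energy and latency values below are placeholders, not numbers from the paper's circuit-level models.

```python
def edp(energy_nj, delay_ns):
    """Energy-delay product: lower is better; the figure of merit used for
    iso-capacity and iso-area cache technology comparisons."""
    return energy_nj * delay_ns

# Placeholder per-access numbers (NOT from the paper's models).
sram = edp(energy_nj=1.00, delay_ns=1.00)
stt  = edp(energy_nj=0.55, delay_ns=0.70)
print(f"EDP reduction vs. SRAM: {sram / stt:.2f}x")
```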
As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators, varied precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while providing fast and accurate power, performance, and area models. In this work, we present QUIDAM, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework facilitates future research on design space exploration of DNN accelerators across various design choices such as bit precision, processing element type, scratchpad size of the processing elements, global buffer size, total number of processing elements, and DNN configuration. Our results show that different bit precisions and processing element types lead to significant differences in terms of performance per area and energy. Specifically, our framework identifies a large number of design points where performance per area and per energy differ by more than 5x and 35x, respectively. With the proposed framework, we show that lightweight processing elements achieve on-par accuracy and up to 5.7x improvement in performance per area and energy compared to the best INT16-based implementation. Finally, owing to the efficiency of the pre-characterized power, performance, and area models, QUIDAM can speed up the design exploration process by 3-4 orders of magnitude, as it removes the need for expensive synthesis and characterization of each design.
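As a rough illustration of the kind of sweep such a framework enables (this is not QUIDAM's actual model or API), the sketch below ranks hypothetical design points by performance per area using made-up pre-characterized per-PE numbers; all names and values are assumptions.

```python
from itertools import product

# Hypothetical pre-characterized per-PE models: (throughput GOP/s, power W, area mm^2).
PE_MODELS = {"int16_mac": (2.0, 0.020, 0.015),
             "int8_mac": (2.0, 0.008, 0.006),
             "lightweight_shift_add": (1.6, 0.004, 0.003)}

def explore(pe_counts=(64, 128, 256), buffer_kb=(128, 256)):
    points = []
    for (pe, (gops, watt, mm2)), n, buf in product(PE_MODELS.items(), pe_counts, buffer_kb):
        perf = gops * n                    # assume perfect PE utilization
        area = mm2 * n + 0.01 * buf        # crude buffer area model (assumed)
        power = watt * n
        points.append((perf / area, perf / power, pe, n, buf))
    return sorted(points, reverse=True)    # rank by performance per area

for perf_per_area, perf_per_watt, pe, n, buf in explore()[:5]:
    print(f"{pe:>22} x{n:<4} buf={buf}KB  perf/area={perf_per_area:6.1f}  perf/W={perf_per_watt:6.1f}")
```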
Traditional computing systems based on the von Neumann architecture are fundamentally limited for data-intensive workloads and applications such as machine learning. As data movement operations and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as near-data processing (NDP) and accelerators for machine learning, and in particular neural networks (NNs), has grown significantly. Emerging memory technologies such as ReRAM and 3D-stacked memory are promising candidates for efficiently architecting NDP-based accelerators for NNs, owing to their ability to serve both as high-density/low-energy storage and as near-memory computation/search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques according to the memory technology employed, we highlight their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend NDP architectures for future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
Deep neural networks (DNNs) have achieved great success in various machine learning (ML) applications, delivering high-quality inference solutions in computer vision, natural language processing, virtual reality, and beyond. However, DNN-based ML applications also bring greatly increased computation and storage requirements, which are particularly challenging for embedded systems with limited compute/storage resources, tight power budgets, and small form factors. Further challenges come from diverse application-specific requirements, including real-time responses, high-throughput performance, and reliable inference accuracy. To address these challenges, we introduce a series of effective design methodologies, including efficient ML model design, customized hardware accelerator design, and hardware/software co-design strategies, to enable efficient ML applications on embedded systems.
Increasing popularity of deep-learning-powered applications raises the issue of vulnerability of neural networks to adversarial attacks. In other words, hardly perceptible changes in input data lead to output errors in neural networks, hindering their utilization in applications that involve decisions with security risks. A number of previous works have already thoroughly evaluated the most commonly used configuration - Convolutional Neural Networks (CNNs) - against different types of adversarial attacks. Moreover, recent works demonstrated transferability of some adversarial examples across different neural network models. This paper studied the robustness of new emerging models such as SpinalNet-based neural networks and Compact Convolutional Transformers (CCT) on the image classification problem of the CIFAR-10 dataset. Each architecture was tested against four white-box attacks and three black-box attacks. Unlike the VGG and SpinalNet models, the attention-based CCT configuration demonstrated a large span between strong robustness and vulnerability to adversarial examples. Eventually, a study of transferability between the VGG, VGG-inspired SpinalNet, and pretrained CCT 7/3x1 models was conducted. It was shown that despite the high effectiveness of an attack on a certain individual model, this does not guarantee transferability to other models.
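For context, the simplest white-box attack family used in such evaluations is the Fast Gradient Sign Method; the abstract does not name the specific attacks evaluated, so the sketch below is a generic illustration with an assumed epsilon and a placeholder model.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one white-box step in the direction that
    increases the loss, clipped back to the valid image range."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder model and CIFAR-10-sized inputs, purely for illustration.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```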
As its core computation, the self-attention mechanism gauges pairwise correlations across the entire input sequence. Despite favorable performance, computing these pairwise correlations is prohibitively costly. While recent work has shown the benefits of runtime pruning of elements with low attention scores, the quadratic complexity of the self-attention mechanism and its on-chip memory capacity demands have been overlooked. This work addresses these constraints by building an accelerator, called Sprint, which leverages the inherent parallelism of ReRAM crossbar arrays to compute attention scores in an approximate manner. Our design prunes low attention scores using lightweight analog thresholding circuitry within the ReRAM, enabling Sprint to fetch only a small subset of relevant data to on-chip memory. To mitigate the potential negative impact on model accuracy, Sprint recomputes the attention scores for the few fetched data in digital. The combined in-memory pruning and on-chip recomputation of the relevant attention scores allows Sprint to transform quadratic complexity to a merely linear one. In addition, we identify and exploit dynamic spatial locality between adjacent attention operations, even after pruning, which eliminates costly yet redundant data fetches. We evaluate our proposed technique on various state-of-the-art transformer models. On average, Sprint yields 7.5x speedup and 19.6x energy reduction when using a total of 16KB of on-chip memory, while remaining virtually on par with the accuracy of the baseline models (0.36% degradation on average).
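A minimal sketch of the pruning-plus-recomputation idea (not Sprint's actual circuitry): approximate scores are thresholded, and exact scores are recomputed only for the surviving keys; the threshold, noise model, and tensor shapes below are illustrative assumptions.

```python
import numpy as np

def pruned_attention(q, K, V, threshold=0.0, noise_std=0.05):
    """Threshold approximate scores, then recompute exact scores only for
    the surviving keys (all parameters here are illustrative assumptions)."""
    d = q.shape[-1]
    # Stand-in for the approximate, in-memory dot products (modeled as noisy).
    approx = (K @ q) / np.sqrt(d) + np.random.normal(0.0, noise_std, K.shape[0])
    keep = approx > threshold                  # analog-threshold-style pruning
    if not np.any(keep):                       # degenerate case: keep everything
        keep[:] = True
    exact = (K[keep] @ q) / np.sqrt(d)         # exact digital recomputation
    w = np.exp(exact - exact.max())
    w /= w.sum()
    return w @ V[keep]                         # attention output over kept keys

q, K, V = np.random.randn(64), np.random.randn(128, 64), np.random.randn(128, 64)
out = pruned_attention(q, K, V)
```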
Low-power edge-AI capabilities are essential for on-device extended reality (XR) applications that support the Metaverse. In this work, we investigate two representative XR workloads, (i) hand detection and (ii) eye segmentation, for hardware design space exploration. For both applications, we train deep neural networks and analyze the impact of quantization and hardware-specific bottlenecks. Through simulations, we evaluate a CPU and two systolic inference accelerator implementations. Next, we compare these hardware solutions at advanced technology nodes and evaluate the impact of integrating state-of-the-art emerging non-volatile memory technologies (STT/SOT/VGSOT MRAM) into the XR-AI inference pipeline. We find that significant energy benefits can be achieved for hand detection (IPS = 40) and eye segmentation (IPS = 6), where IPS denotes inferences per second, by introducing non-volatile memories into designs at the 7nm node. Moreover, owing to the smaller form factor of MRAM compared to traditional SRAM, we can substantially reduce area (>= 30%).
The importance and complexity of neural networks (NNs) keep growing. A neural network's performance (and energy efficiency) can be bound either by computation or by memory resources. The processing-in-memory (PIM) paradigm, which places computation near or inside memory arrays, is a viable solution for accelerating memory-bound NNs. However, PIM architectures come in different forms, and different PIM approaches lead to different trade-offs. Our goal is to analyze DRAM-based PIM architectures in terms of NN performance and energy efficiency. To do so, we analyze three state-of-the-art PIM architectures: (1) UPMEM, which integrates processors and DRAM arrays into a single 2D chip; (2) Mensa, a 3D-stacking-based PIM architecture tailored for edge devices; and (3) SIMDRAM, which uses the analog principles of DRAM to execute bit-serial operations. Our analysis reveals that PIM greatly benefits memory-bound NNs: (1) UPMEM provides 23x the performance of a high-end GPU when the GPU requires memory oversubscription for a general matrix-vector multiplication kernel; (2) Mensa improves energy efficiency and throughput by 3.0x and 3.1x over the Google Edge TPU for 24 Google edge NN models; and (3) SIMDRAM outperforms a CPU/GPU by 16.7x/1.4x for three binary NNs. We conclude that, due to the inherent architectural design choices, the ideal PIM architecture for an NN model depends on the model's distinct attributes.
Deploying modern TinyML tasks on small, battery-constrained IoT devices requires high computational energy efficiency. Analog in-memory computing (IMC) based on non-volatile memory (NVM) promises major efficiency improvements for deep neural network (DNN) inference while also serving as on-chip memory storage for DNN weights. However, the functional flexibility limitations of IMC and their impact on performance, energy, and area efficiency are not yet fully understood at the system level. To target practical end-to-end IoT applications, IMC arrays must be enclosed in heterogeneous programmable systems, which introduces new system-level challenges that we aim to address in this work. We present a heterogeneous, tightly coupled cluster architecture integrating 8 RISC-V cores, an in-memory computing accelerator (IMA), and digital accelerators. We benchmark the system on a highly heterogeneous workload such as the bottleneck layer from MobileNetV2, showing 11.5x performance and 9.5x energy efficiency improvements compared to highly optimized parallel execution on the cores alone. Furthermore, by scaling our heterogeneous architecture to a multi-array accelerator, we explore the IMC array resources required for end-to-end inference of a full mobile-grade DNN (MobileNetV2). Our results show that, for end-to-end inference of MobileNetV2, our solution outperforms existing programmable architectures in execution latency and is orders of magnitude better than state-of-the-art heterogeneous solutions that integrate analog in-memory computing cores.
State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS/s working directly on a compressed network, corresponding to 3 TOPS/s on an uncompressed network, and processes FC layers of AlexNet at 1.88×10^4 frames/sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.
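The kernel EIE accelerates can be sketched in a few lines: a sparse matrix-vector product where each nonzero weight is stored as a small codebook index and columns with zero activations are skipped entirely. The CSC-style layout and codebook below are illustrative assumptions rather than EIE's exact encoding.

```python
import numpy as np

def shared_weight_spmv(M, col_ptr, row_idx, weight_idx, codebook, x):
    """y = W @ x where W's nonzeros are stored column-wise (CSC-like) as
    small codebook indices; columns with zero activations are skipped."""
    y = np.zeros(M)
    for j, xj in enumerate(x):
        if xj == 0.0:                          # skip zero activations (e.g., ReLU output)
            continue
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += codebook[weight_idx[k]] * xj
    return y

# Tiny example: a 3x4 matrix with a 16-entry weight codebook (hypothetical values).
codebook = np.linspace(-1.0, 1.0, 16)
col_ptr = np.array([0, 2, 2, 3, 4])            # column 1 holds no nonzeros
row_idx = np.array([0, 2, 1, 2])
weight_idx = np.array([3, 15, 8, 0])
x = np.array([0.5, 1.2, 0.0, 2.0])             # x[2] == 0 is skipped
print(shared_weight_spmv(3, col_ptr, row_idx, weight_idx, codebook, x))
```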
Edge computing is a popular target for accelerating machine learning algorithms supporting mobile devices without incurring the communication latencies of processing them in the cloud. Edge deployments of machine learning primarily consider traditional concerns such as the SWaP constraints (size, weight, and power) of their installations. However, such metrics are not sufficient to account for the environmental impact of computing, given the significant contributions of embodied energy and carbon. In this paper, we explore the trade-offs of convolutional neural network acceleration engines for both inference and online training. In particular, we explore processing-in-memory (PIM) approaches, mobile GPU accelerators, and recently released FPGAs, and compare them with a novel racetrack memory PIM. Replacing PIM-enabled DDR3 with racetrack memory PIM can recover its embodied energy in as little as one year. For high activity ratios, mobile GPUs can be more sustainable, but they have a higher embodied energy to overcome compared to PIM-enabled racetrack memory.
The widespread deployment of machine learning (ML) is raising serious concerns about protecting the privacy of users who contributed to the collection of training data. Differential privacy (DP), a practical standard for privacy protection, is rapidly gaining momentum in industry. Despite DP's importance, however, little has been explored within the computer systems community regarding the implications of this emerging ML algorithm on system design. In this work, we conduct a detailed workload characterization of a state-of-the-art differentially private ML training algorithm named DP-SGD. We uncover several unique properties of DP-SGD (e.g., its high memory capacity and computation demands compared to non-private ML) and root-cause its key bottlenecks. Based on our analysis, we propose an accelerator for differentially private ML named DiVa, which provides a significant improvement in compute utilization, leading to 2.6x higher energy efficiency than a conventional systolic array.
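For reference, a minimal sketch of the DP-SGD step whose workload the paper characterizes: per-example gradients are clipped to an L2 bound and Gaussian noise is added before averaging. The clipping norm, noise multiplier, and shapes are placeholder values.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.05):
    """One DP-SGD update on a stack of per-example gradients (shape [B, D]).
    Each gradient is clipped to an L2 bound, then Gaussian noise is added to
    the sum before averaging -- this is why per-example (not per-batch)
    gradients are needed, making the workload heavier than ordinary SGD."""
    B = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=per_example_grads.shape[1])
    noisy_mean = (clipped.sum(axis=0) + noise) / B
    return -lr * noisy_mean                    # parameter update

grads = np.random.randn(32, 1000)              # 32 examples, 1000 parameters
update = dp_sgd_step(grads)
```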
With an ever-growing number of parameters defining increasingly complex networks, Deep Learning has led to several breakthroughs surpassing human performance. As a result, data movement for these millions of model parameters causes a growing imbalance known as the memory wall. Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories. On the software side, the sequential Backpropagation algorithm prevents efficient parallelization and thus fast convergence. A novel method, Direct Feedback Alignment, resolves inherent layer dependencies by directly passing the error from the output to each layer. At the intersection of hardware/software co-design, there is a demand for developing algorithms that are tolerant of hardware nonidealities. Therefore, this work explores the interrelationship of implementing bio-plausible learning in-situ on neuromorphic hardware, emphasizing energy, area, and latency constraints. Using the benchmarking framework DNN+NeuroSim, we investigate the impact of hardware nonidealities and quantization on algorithm performance, as well as how network topologies and algorithm-level design choices can scale latency, energy and area consumption of a chip. To the best of our knowledge, this work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa. The best results achieved for accuracy remain Backpropagation-based, notably when facing hardware imperfections. Direct Feedback Alignment, on the other hand, allows for significant speedup due to parallelization, reducing training time by a factor approaching N for N-layered networks.
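A simplified NumPy sketch of the Direct Feedback Alignment update for a small MLP, assuming tanh hidden layers and a squared-error loss: the output error is projected to every hidden layer through a fixed random matrix, so the layer updates do not depend on one another and can be computed in parallel. Layer sizes and hyperparameters are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 128, 10]
W = [rng.normal(0, 0.1, (n_out, n_in)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices: project the output error directly to each hidden layer.
B = [rng.normal(0, 0.1, (n_hidden, sizes[-1])) for n_hidden in sizes[1:-1]]

def dfa_step(x, y_target, lr=0.01):
    # Forward pass: tanh hidden layers, linear output.
    h = [x]
    for Wl in W[:-1]:
        h.append(np.tanh(Wl @ h[-1]))
    y = W[-1] @ h[-1]
    e = y - y_target                                  # output error (squared loss)
    # DFA: each hidden layer's error signal depends only on e, not on the layers above it.
    for l in range(len(W) - 1):
        delta = (B[l] @ e) * (1.0 - h[l + 1] ** 2)    # tanh derivative
        W[l] -= lr * np.outer(delta, h[l])
    W[-1] -= lr * np.outer(e, h[-1])                  # output layer uses e directly

x = rng.normal(size=784)
y_target = np.eye(10)[3]
dfa_step(x, y_target)
```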
Hyperdimensional computing (HDC) is an emerging computational framework inspired by the brain, which operates on vectors with thousands of dimensions to emulate cognition. Unlike conventional computing frameworks that operate on numbers, HDC, like the brain, uses high-dimensional random vectors and is capable of one-shot learning. HDC is based on a well-defined set of arithmetic operations and is highly error-resilient. The core operations of HDC manipulate hypervectors in a bulk bit-wise fashion, offering many opportunities to exploit parallelism. Unfortunately, on conventional von Neumann architectures, the continuous movement of hypervectors between memory and the processor can make cognition tasks prohibitively slow and energy-intensive. Hardware accelerators improve the related metrics only marginally. In contrast, only partial implementations of the HDC framework using emerging memristive devices have reported considerable performance/energy gains. This paper presents an architecture based on racetrack memory (RTM) to conduct and accelerate the entire HDC framework within memory. The proposed solution requires minimal additional CMOS circuitry and uses a read operation across multiple domains in RTM, called transverse read (TR), to realize exclusive-OR (XOR) and addition operations. To minimize the CMOS circuitry overhead, we propose an RTM-nanowire-based counting mechanism that leverages the TR operation and standard RTM operations. Using language recognition as a use case, the overall runtime and energy consumption are reduced by 7.8x and 5.3x, respectively, compared to an FPGA design. Compared with the state-of-the-art in-memory implementation, the proposed HDC system reduces energy consumption by 8.6x.
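A small binary-HDC sketch of the operations the proposed RTM design performs in memory, namely XOR binding, majority bundling, and Hamming-distance search, applied to a toy n-gram language-recognition encoder; the dimensionality and encoding details are simplifying assumptions.

```python
import numpy as np

D = 4096                                              # hypervector dimensionality (assumed)
rng = np.random.default_rng(1)
item_memory = {c: rng.integers(0, 2, D, dtype=np.uint8)
               for c in "abcdefghijklmnopqrstuvwxyz "}

def encode(text, n=3):
    """Bundle XOR-bound, position-permuted character n-grams into one binary hypervector."""
    grams = []
    for i in range(len(text) - n + 1):
        g = np.zeros(D, dtype=np.uint8)
        for k, c in enumerate(text[i:i + n]):
            g ^= np.roll(item_memory[c], k)           # bind: bit-wise XOR; roll encodes position
        grams.append(g)
    return (np.sum(grams, axis=0) > len(grams) / 2).astype(np.uint8)   # bundle: majority

def classify(query, class_hvs):
    """Nearest class hypervector by Hamming distance (associative search)."""
    return min(class_hvs, key=lambda lang: np.count_nonzero(class_hvs[lang] ^ query))

class_hvs = {"en": encode("the quick brown fox jumps over the lazy dog"),
             "toy": encode("zzzz qqqq xxxx zzzz qqqq xxxx")}
print(classify(encode("a quick brown fox"), class_hvs))
```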
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
This paper explores the environmental impact of the super-linear growth trends of AI from a holistic perspective, spanning data, algorithms, and system hardware. We characterize the carbon footprint of AI computing by examining the model development cycle across industry-scale machine learning use cases, while also considering the life cycle of system hardware. Taking a step further, we capture the operational and manufacturing carbon footprints of AI computing and present an end-to-end analysis of what and how hardware-software design and at-scale optimization can help reduce the overall carbon footprint of AI. Based on industry experience and lessons learned, we share key challenges and chart out important development directions across the many dimensions of AI. We hope the key messages and insights presented in this paper can inspire the community to advance the field of AI in an environmentally responsible manner.
Resistive Random-Access Memory (RRAM) is well-suited to accelerate neural network (NN) workloads as RRAM-based Processing-in-Memory (PIM) architectures natively support highly-parallel multiply-accumulate (MAC) operations that form the backbone of most NN workloads. Unfortunately, NN workloads such as transformers require support for non-MAC operations (e.g., softmax) that RRAM cannot provide natively. Consequently, state-of-the-art works either integrate additional digital logic circuits to support the non-MAC operations or offload the non-MAC operations to CPU/GPU, resulting in significant performance and energy efficiency overheads due to data movement. In this work, we propose NEON, a novel compiler optimization to enable the end-to-end execution of the NN workload in RRAM. The key idea of NEON is to transform each non-MAC operation into a lightweight yet highly-accurate neural network. Utilizing neural networks to approximate the non-MAC operations provides two advantages: 1) We can exploit the key strength of RRAM, i.e., highly-parallel MAC operation, to flexibly and efficiently execute non-MAC operations in memory. 2) We can simplify RRAM's microarchitecture by eliminating the additional digital logic circuits while reducing the data movement overheads. Acceleration of the non-MAC operations in memory enables NEON to achieve a 2.28x speedup compared to an idealized digital logic-based RRAM. We analyze the trade-offs associated with the transformation and demonstrate feasible use cases for NEON across different substrates.
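NEON's key idea is to replace each non-MAC operator with a small neural network. As a toy illustration only (not the paper's actual transformation or training procedure), the sketch below fits a tiny MAC-plus-ReLU network to the exponential, the dominant non-MAC part of softmax, over an assumed input range; the network size, random-feature fit, and value range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-8.0, 0.0, 2048)[:, None]            # softmax logits after max-subtraction
target = np.exp(x)                                    # the non-MAC operator to approximate

# One hidden layer with fixed random weights + ReLU; fit the output layer by
# least squares.  Matrix multiplies plus ReLU map onto MAC-friendly hardware.
H = 64
W1, b1 = rng.normal(0, 1, (1, H)), rng.uniform(-8, 0, H)
phi = np.maximum(x @ W1 + b1, 0.0)                    # hidden activations
w2, *_ = np.linalg.lstsq(phi, target, rcond=None)     # output weights

def nn_exp(z):
    return np.maximum(z @ W1 + b1, 0.0) @ w2          # MAC-only approximation of exp

def nn_softmax(logits):
    z = logits - logits.max()
    e = np.maximum(nn_exp(z[:, None]).ravel(), 1e-6)  # keep the approximation positive
    return e / e.sum()                                # normalization kept exact for clarity

print(nn_softmax(np.array([2.0, 1.0, 0.1])))          # close to the exact softmax
```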
Most of today's computer vision pipelines are built around deep neural networks, where convolution operations account for most of the generally high compute effort. The Winograd convolution algorithm computes convolutions with fewer MACs than the standard algorithm, reducing the operation count of 3x3 convolutions by a factor of 2.25x when using the variant with 2x2-sized tiles, $F_2$. Even though the gain is significant, the Winograd algorithm with a larger tile size, i.e., $F_4$, has even greater potential to improve throughput and energy efficiency, as it reduces the required MACs by 4x. Unfortunately, the Winograd algorithm with larger tile sizes introduces numerical issues that prevent its use on integer domain-specific accelerators, as well as higher computational overhead for transforming input and output data between the spatial and Winograd domains. To unlock the full potential of Winograd $F_4$, we propose a novel tap-wise quantization method that overcomes the numerical issues of using larger tiles, enabling integer-only inference. Moreover, we present custom hardware units that process the Winograd transformations in a power- and area-efficient way, and we show how to integrate such custom modules into an industrial-grade, programmable DSA. An extensive experimental evaluation on a large set of state-of-the-art computer vision benchmarks shows that the tap-wise quantization algorithm makes the quantized Winograd $F_4$ networks almost as accurate as the FP32 baselines. The Winograd-enhanced DSA achieves up to 1.85x higher energy efficiency and up to 1.83x end-to-end speedup for state-of-the-art segmentation and detection networks.
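The 2.25x and 4x figures follow from the standard Winograd operation count: an $m \times m$ output tile of an $r \times r$ convolution needs $(m+r-1)^2$ multiplications instead of the $m^2 r^2$ required by the direct method:

```latex
\begin{align*}
F_2 = F(2\times 2,\, 3\times 3):\quad
  \frac{m^2 r^2}{(m+r-1)^2} &= \frac{2^2 \cdot 3^2}{4^2} = \frac{36}{16} = 2.25\times \\
F_4 = F(4\times 4,\, 3\times 3):\quad
  \frac{m^2 r^2}{(m+r-1)^2} &= \frac{4^2 \cdot 3^2}{6^2} = \frac{144}{36} = 4\times
\end{align*}
```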
We aim to address this problem by introducing a comprehensive distributed deep learning (DDL) profiler, which can determine the various execution "stalls" that DDL suffers from while running on a public cloud. We implement the profiler by extending prior work to estimate two types of communication stalls: interconnect stalls and network stalls. We train popular DNN models using the profiler to characterize various AWS GPU instances and list their advantages and shortcomings so that users can make an informed decision. We observe that the more expensive GPU instances may not be the most performant for all DNN models, and that AWS may allocate hardware interconnect resources sub-optimally. Specifically, the intra-machine interconnect can introduce communication overheads of up to 90% of DNN training time, and network-connected instances can suffer from up to 5x slowdown compared to training on a single instance. Moreover, we model the impact of DNN macroscopic features, such as the number of layers and the number of gradients, on communication stalls. Finally, we propose a measurement-based recommendation model to help users lower the public-cloud monetary cost of DDL.