Computer vision on low-power edge devices enables applications such as search-and-rescue and security. State-of-the-art computer vision algorithms, such as deep neural networks (DNNs), are too large for inference on low-power edge devices. To improve efficiency, some existing approaches parallelize DNN inference across multiple edge devices. However, these techniques introduce significant communication and synchronization overheads or fail to balance the workload across devices. This paper demonstrates that hierarchical DNN architectures are well suited to parallel processing on multiple edge devices. We design a novel method that creates a parallel inference pipeline for computer vision problems that use hierarchical DNNs. The method balances the load across the collaborating devices and reduces communication costs so that multiple video frames can be processed simultaneously with higher throughput. Our experiments consider a representative computer vision problem in which image recognition is performed on every video frame, running on multiple Raspberry Pi 4Bs. With four collaborating low-power edge devices, our approach achieves 3.21x higher throughput, 68% lower per-device energy consumption per frame, and 58% less memory than the existing single-device hierarchical DNN.
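As a rough illustration of the pipeline-parallel idea above, the sketch below splits a hierarchical model into per-device stages connected by queues so that several frames are in flight at once. The stage function, stage count, and queue sizes are hypothetical placeholders, not the paper's actual implementation.

```python
# Sketch: a pipeline of collaborating devices, each running one stage of a
# hierarchical DNN on successive video frames. Stages are placeholder functions
# standing in for the per-level classifiers of a hierarchical model.
import multiprocessing as mp

def stage_fn(level, frame):
    # Placeholder for "run the level-th hierarchical classifier on the frame".
    return f"{frame}->L{level}"

def stage_worker(level, in_q, out_q):
    while True:
        frame = in_q.get()
        if frame is None:          # sentinel: shut the pipeline down
            out_q.put(None)
            break
        out_q.put(stage_fn(level, frame))

if __name__ == "__main__":
    n_stages = 4                   # e.g., one stage per Raspberry Pi
    queues = [mp.Queue(maxsize=8) for _ in range(n_stages + 1)]
    workers = [mp.Process(target=stage_worker, args=(i, queues[i], queues[i + 1]))
               for i in range(n_stages)]
    for w in workers:
        w.start()
    for frame_id in range(10):     # frames enter as soon as stage 0 is free,
        queues[0].put(f"frame{frame_id}")   # so several frames are in flight at once
    queues[0].put(None)
    while (result := queues[-1].get()) is not None:
        print(result)
    for w in workers:
        w.join()
```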
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
With the vigorous development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are changing people's lifestyles and production efficiency. However, the massive amounts of computation and data generated at the network edge have become a major bottleneck, and the traditional cloud-based computing paradigm cannot meet the real-time processing requirements of such tasks. To address this problem, edge intelligence (EI), which embeds AI model training and inference capabilities into the network edge, has become a cutting-edge direction in the AI field. Furthermore, collaborative DNN inference among the cloud, the edge, and end devices provides a promising way to enhance EI. However, EI-oriented collaborative DNN inference is still at an early stage, and a systematic classification and discussion of existing research efforts is lacking. We therefore conduct a comprehensive survey of recent studies on EI-oriented collaborative DNN inference. In this paper, we first review the background and motivation of EI. We then classify four typical DNN inference paradigms for EI and analyze their characteristics and key technologies. Finally, we summarize the current challenges of collaborative DNN inference, discuss future development trends, and outline future research directions.
Recently, there has been explosive growth in mobile and embedded applications that use convolutional neural networks (CNNs). To alleviate their excessive computational demands, developers have traditionally resorted to cloud offloading, which incurs high infrastructure costs and a strong dependence on network conditions. On the other hand, the advent of powerful SoCs is gradually enabling on-device execution. Nevertheless, low- and mid-tier platforms still struggle to run state-of-the-art CNNs adequately. In this paper, we present DynO, a distributed inference framework that combines the best of both worlds to address several challenges, such as device heterogeneity, varying bandwidth, and multi-objective requirements. Key to achieving this are its novel CNN-specific data-packing method, which exploits the variability in precision needs of different parts of the CNN when onloading computation, and its novel scheduler, which jointly tunes the partition point and the precision of the transferred data at run time to adapt inference to its execution environment. Quantitative evaluation shows that DynO outperforms the current state of the art, improving throughput by over an order of magnitude over device-only execution and competing CNN offloading systems, with up to 60x less data transferred.
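A minimal sketch of the kind of joint decision such a scheduler makes: pick a partition point and a transfer precision that minimize estimated end-to-end latency under the current bandwidth. The per-layer timings, activation sizes, and bandwidth figure below are invented for illustration, not DynO's actual profiles or algorithm.

```python
# Sketch: jointly choose a CNN split point and a transfer precision so that the
# estimated end-to-end latency is minimized. All profile numbers are placeholders.
device_ms = [15.0, 25.0, 40.0, 30.0, 20.0]      # per-layer latency on the device
server_ms = [0.5, 0.8, 1.2, 0.9, 0.6]           # per-layer latency on the remote side
out_kbits = [900, 400, 150, 60, 10]             # activation size after each layer
raw_kbits = 1_500                               # size of the raw input frame
bandwidth_kbps = 2_000
precision_scale = {32: 1.0, 16: 0.5, 8: 0.25}   # fraction of full-precision bits sent

def latency_ms(split, bits):
    """Layers [0, split) run on the device, the rest remotely."""
    compute = sum(device_ms[:split]) + sum(server_ms[split:])
    if split == len(device_ms):                 # fully on-device: nothing to transfer
        return compute
    kbits = raw_kbits if split == 0 else out_kbits[split - 1]
    return compute + kbits * precision_scale[bits] / bandwidth_kbps * 1000

best_lat, best_split, best_bits = min(
    (latency_ms(s, b), s, b)
    for s in range(len(device_ms) + 1) for b in precision_scale)
print(f"split after layer {best_split}, {best_bits}-bit transfer: {best_lat:.1f} ms")
```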
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems.This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry.The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
Deep neural networks (DNNs) have become ubiquitous techniques in mobile and embedded systems for image/object recognition and classification. The trend of executing multiple DNNs simultaneously exacerbates the existing limitations of meeting stringent latency/accuracy requirements on resource-constrained mobile devices. Prior art sheds light on exploring the accuracy-resource trade-off by scaling model sizes in accordance with resource dynamics. However, such model-scaling approaches face imminent challenges: (i) a large exploration space of model sizes, and (ii) prohibitively long training times for different model combinations. In this paper, we present LegoDNN, a lightweight, block-grained scaling solution for running multi-DNN workloads in mobile vision systems. LegoDNN guarantees short model training times by extracting and training only a small number of common blocks in a DNN (e.g., 5 in VGG and 8 in ResNet). At run time, LegoDNN optimally combines descendant models of these blocks to maximize accuracy under specific resource and latency constraints, while reducing switching overhead through smart block-level scaling of the DNN. We implement LegoDNN in TensorFlow Lite and extensively evaluate it against state-of-the-art techniques (FLOP scaling, knowledge distillation, and model compression) using a set of prevalent DNN models. The evaluation results show that LegoDNN provides 1,296x to 279,936x more options in model size without increasing training time, achieving up to 31.74% higher inference accuracy and 71.07% lower scaling energy consumption.
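To make the block-grained idea concrete, here is a toy selection step: choose one descendant variant per block so that an estimated accuracy proxy is maximized under a latency budget. The block variants, profiled numbers, and accuracy proxy are invented for illustration; LegoDNN's actual optimizer is more sophisticated.

```python
# Sketch: pick one descendant variant per block to maximize an accuracy proxy
# under a latency budget. All profile numbers are illustrative placeholders.
from itertools import product

# (latency_ms, accuracy_proxy) for each variant of each block
blocks = [
    [(12.0, 0.95), (8.0, 0.93), (5.0, 0.90)],   # block 0: original + 2 compressed variants
    [(20.0, 0.97), (14.0, 0.94), (9.0, 0.91)],  # block 1
    [(15.0, 0.96), (10.0, 0.92)],               # block 2
]
latency_budget_ms = 35.0

best_combo, best_acc = None, -1.0
for combo in product(*blocks):
    lat = sum(v[0] for v in combo)
    acc = min(v[1] for v in combo)   # pessimistic proxy: the weakest block dominates
    if lat <= latency_budget_ms and acc > best_acc:
        best_combo, best_acc = combo, acc
print(best_combo, best_acc)
```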
In surveillance and search-and-rescue applications, it is important to perform multi-target tracking (MOT) in real time on low-end devices. Today's MOT solutions employ deep neural networks, which tend to have high computational complexity. Recognizing the effect of frame size on tracking performance, we propose DeepScale, a model-agnostic frame-size selection approach that operates on top of existing fully convolutional network-based trackers to accelerate tracking throughput. In the training stage, we incorporate detectability scores into a one-shot tracker architecture so that DeepScale learns representation estimation for different frame sizes in a self-supervised manner. During inference, it can adapt frame sizes according to the complexity of the visual content, based on user-controlled parameters. To exploit the computational resources of an edge server, we propose two computation partitioning modes for offloading MOT, namely, edge-server-only tracking with adaptive frame-size transmission and edge-server-assisted tracking. Extensive experiments and benchmarks on MOT datasets demonstrate the effectiveness and flexibility of DeepScale. Compared with a state-of-the-art tracker, DeepScale++, a variant of DeepScale, achieves a 1.57x speedup with only moderate degradation in tracking accuracy on the MOT15 dataset in one configuration. We have implemented and evaluated DeepScale++ and the proposed computation partitioning schemes on a small-scale testbed consisting of an NVIDIA Jetson TX2 board and a GPU server. The experiments show non-trivial trade-offs between tracking performance and latency compared with server-only or smart-camera-only solutions.
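A toy stand-in for the frame-size policy: map an estimated scene-complexity score and a user-controlled speed/accuracy knob to one of a few candidate resolutions. The size ladder, scoring rule, and thresholds are invented; DeepScale learns this mapping rather than hard-coding it.

```python
# Sketch: content-adaptive frame-size selection. Higher complexity (e.g., many
# small or crowded targets) favors larger frames; a user knob trades accuracy
# for speed. The size ladder and scoring rule are placeholders.
FRAME_SIZES = [(1088, 608), (864, 480), (640, 352), (480, 256)]

def pick_frame_size(complexity, aggressiveness=0.5):
    """complexity in [0, 1]: higher means more small/crowded targets.
    aggressiveness in [0, 1]: the user knob trading accuracy for speed."""
    score = complexity * (1.0 - aggressiveness)   # bias toward smaller frames as the knob grows
    idx = min(int((1.0 - score) * len(FRAME_SIZES)), len(FRAME_SIZES) - 1)
    return FRAME_SIZES[idx]

print(pick_frame_size(0.8, aggressiveness=0.2))   # crowded scene -> larger frame
print(pick_frame_size(0.2, aggressiveness=0.7))   # sparse scene  -> smallest frame
```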
Deep neural network (DNN) models are typically trained sequentially, layer by layer, which causes forward, backward, and update locking problems and leads to poor training-time performance. Existing parallelism strategies that mitigate these problems deliver suboptimal runtime performance. In this work, we propose a novel layer-wise partitioning and merging, forward and backward pass parallel framework that provides better training performance. The novelty of the proposed work includes (1) a layer-wise partition and merging model that minimizes the communication overhead between devices without incurring the memory cost of existing strategies during training; and (2) forward-pass and backward-pass parallelization and optimization that address the update-locking problem and minimize the total training cost. Experimental evaluation on real use cases shows that the proposed method outperforms state-of-the-art approaches in training speed and achieves almost linear speedup without compromising the accuracy of the non-parallel baseline.
In recent years, image and video delivery systems have begun integrating deep learning super-resolution (SR) approaches, leveraging their unprecedented visual enhancement capabilities while reducing reliance on networking conditions. Nevertheless, deploying these solutions on mobile devices still remains an active challenge as SR models are excessively demanding with respect to workload and memory footprint. Despite recent progress on on-device SR frameworks, existing systems either penalize visual quality, lead to excessive energy consumption or make inefficient use of the available resources. This work presents NAWQ-SR, a novel framework for the efficient on-device execution of SR models. Through a novel hybrid-precision quantization technique and a runtime neural image codec, NAWQ-SR exploits the multi-precision capabilities of modern mobile NPUs in order to minimize latency, while meeting user-specified quality constraints. Moreover, NAWQ-SR selectively adapts the arithmetic precision at run time to equip the SR DNN's layers with wider representational power, improving visual quality beyond what was previously possible on NPUs. Altogether, NAWQ-SR achieves an average speedup of 7.9x, 3x and 1.91x over the state-of-the-art on-device SR systems that use heterogeneous processors (MobiSR), CPU (SplitSR) and NPU (XLSR), respectively. Furthermore, NAWQ-SR delivers an average of 3.2x speedup and 0.39 dB higher PSNR over status-quo INT8 NPU designs, but most importantly mitigates the negative effects of quantization on visual quality, setting a new state-of-the-art in the attainable quality of NPU-based SR.
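As a rough illustration of the hybrid-precision idea, the sketch below greedily upgrades quantized layers until an assumed cumulative PSNR-drop budget is met, favoring layers that recover the most quality per unit of added latency. The layer names, per-layer numbers, and the greedy rule are all assumptions, not NAWQ-SR's actual algorithm.

```python
# Sketch: greedy hybrid-precision assignment. Start with every layer at low
# precision, then upgrade the layers whose estimated quality gain per unit of
# extra latency is largest, until a user-specified PSNR-drop budget is met.
layers = {                 # name: (psnr_drop_if_low_precision_dB, extra_ms_if_high_precision)
    "conv1": (0.20, 0.6),
    "body3": (0.45, 1.1),
    "body7": (0.30, 0.9),
    "upsample": (0.60, 1.4),
}
max_total_drop_db = 0.8

low_prec = set(layers)     # everything starts quantized

def total_drop():
    return sum(layers[name][0] for name in low_prec)

while total_drop() > max_total_drop_db:
    # upgrade the layer with the best dB recovered per added millisecond
    name = max(low_prec, key=lambda n: layers[n][0] / layers[n][1])
    low_prec.remove(name)

print("kept at low precision:", sorted(low_prec))
```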
Data-intensive workloads and applications, such as machine learning, are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data-movement operations and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as near-data processing (NDP) and accelerators for machine learning, particularly neural networks (NNs), has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked memories, are promising for efficiently architecting NDP-based NN accelerators because of their capabilities for high-density/low-energy storage and near-memory computation/search. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques based on the memory technology employed, we highlight their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend NDP architectures for future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
Deep neural networks (DNNs) have achieved great success in a variety of machine learning (ML) applications, delivering high-quality inference solutions in computer vision, natural language processing, virtual reality, and more. However, DNN-based ML applications also bring greatly increased computation and storage requirements, which is particularly challenging for embedded systems with limited compute/storage resources, tight power budgets, and small form factors. The challenges also come from diverse application-specific requirements, including real-time responses, high-throughput performance, and reliable inference accuracy. To address these challenges, we introduce a series of effective design methodologies, including efficient ML model design, customized hardware accelerator design, and hardware/software co-design strategies, to enable efficient ML applications on embedded systems.
Distributed deep learning (DDL) systems strongly depend on network performance. Current electronic packet switched (EPS) network architectures and technologies suffer from variable diameter topologies, low-bisection bandwidth and over-subscription affecting completion time of communication and collective operations. We introduce a near-exascale, full-bisection bandwidth, all-to-all, single-hop, all-optical network architecture with nanosecond reconfiguration called RAMP, which supports large-scale distributed and parallel computing systems (12.8~Tbps per node for up to 65,536 nodes). For the first time, a custom RAMP-x MPI strategy and a network transcoder is proposed to run MPI collective operations across the optical circuit switched (OCS) network in a schedule-less and contention-less manner. RAMP achieves 7.6-171$\times$ speed-up in completion time across all MPI operations compared to realistic EPS and OCS counterparts. It can also deliver a 1.3-16$\times$ and 7.8-58$\times$ reduction in Megatron and DLRM training time respectively while offering 42-53$\times$ and 3.3-12.4$\times$ improvement in energy consumption and cost respectively.
In this paper, we present PARTIME, a software library written in Python and based on PyTorch, designed specifically to speed up neural networks whenever data is continuously streamed over time, for both learning and inference. Existing libraries are designed to exploit data-level parallelism, assuming that samples are batched, a condition that is not naturally met in applications that are based on streamed data. Differently, PARTIME starts processing each data sample at the time in which it becomes available from the stream. PARTIME wraps the code that implements a feed-forward multi-layer network and it distributes the layer-wise processing among multiple devices, such as Graphics Processing Units (GPUs). Thanks to its pipeline-based computational scheme, PARTIME allows the devices to perform computations in parallel. At inference time this results in scaling capabilities that are theoretically linear with respect to the number of devices. During the learning stage, PARTIME can leverage the non-i.i.d. nature of the streamed data with samples that are smoothly evolving over time for efficient gradient computations. Experiments are performed in order to empirically compare PARTIME with classic non-parallel neural computations in online learning, distributing operations on up to 8 NVIDIA GPUs, showing significant speedups that are almost linear in the number of devices, mitigating the impact of the data transfer overhead.
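A back-of-the-envelope model of why such a pipeline scheme scales almost linearly with the number of devices, under the simplifying assumptions that the network splits into equal-cost stages and that data-transfer overhead is ignored (both assumptions are mine, not PARTIME's measured behavior):

```python
# Sketch: with D devices each owning one stage of cost t, a stream of N samples
# finishes in (N + D - 1) * t instead of the N * D * t a single device would need,
# so throughput approaches D times the serial rate for long streams.
def pipeline_speedup(n_samples, n_devices):
    serial = n_samples * n_devices          # in units of one stage time t
    pipelined = n_samples + n_devices - 1
    return serial / pipelined

for d in (2, 4, 8):
    print(f"{d} devices: {pipeline_speedup(10_000, d):.2f}x speedup over single-device serial")
```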
This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is separated and cooperatively processed by an edge server and an IoT device over a network. Thus, the architecture of the neural network model significantly affects the communication payload size, model accuracy, and computational load. In this paper, we address the challenge of optimizing a neural network architecture for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and a split point so as to meet a latency requirement (i.e., the total latency of computation and communication is smaller than a certain threshold). NASC employs a one-shot NAS that does not require repeated model training, enabling a computationally efficient architecture search. Our performance evaluation using hardware (HW)-NAS-Bench benchmark data shows that the proposed NASC can improve the communication-latency/model-accuracy trade-off, i.e., reduce latency by approximately 40-60% from the baseline with slight accuracy degradation.
Efficient and adaptive computer vision systems have been proposed to optimize computer vision tasks, such as image classification and object detection, for embedded or mobile devices. These solutions, quite recent in their origin, focus on optimizing the model (a deep neural network, DNN) or the system by designing an adaptive system with approximation knobs. Despite several recent efforts, we show that existing solutions suffer from two major drawbacks. First, the system does not consider the energy consumption of the models while making the decision on which model to run. Second, the evaluations do not consider the practical scenario of contention on the device due to other co-resident workloads. In this work, we propose Virtuoso, an efficient and adaptive video object detection system that is jointly optimized for accuracy, energy efficiency, and latency. Underlying Virtuoso is a multi-branch execution kernel that is capable of running at different operating points in the accuracy-energy-latency space, and a lightweight runtime scheduler that selects the best execution branch to satisfy the user requirement. To compare fairly with Virtuoso, we benchmark 15 state-of-the-art or widely used protocols, including Faster R-CNN (FRCNN), YOLO v3, SSD, EfficientDet, SELSA, MEGA, REPP, FastAdapt, and our in-house adaptive variants FRCNN+, YOLO+, SSD+, and EfficientDet+ (our variants with enhanced efficiency for mobiles). With this comprehensive benchmark, Virtuoso shows superiority over all the above protocols, leading the accuracy frontier at every efficiency level on the NVIDIA Jetson mobile GPU. Specifically, Virtuoso achieves an accuracy of 63.9%, which is more than 10% higher than some popular object detection models, such as FRCNN at 51.1% and YOLO at 49.5%.
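A toy version of the lightweight scheduler described above: from branches profiled offline for accuracy, latency, and energy, pick the most accurate one that satisfies the user's latency and energy limits under the current contention level. Branch names and all numbers are illustrative placeholders, not Virtuoso's measured profiles.

```python
# Sketch: a runtime scheduler over profiled multi-branch operating points.
branches = [  # (name, accuracy, latency_ms, energy_mJ) profiled offline
    ("frcnn_full",   0.639, 210.0, 950.0),
    ("yolo_medium",  0.560,  95.0, 430.0),
    ("ssd_light",    0.495,  45.0, 240.0),
    ("tracker_only", 0.410,  18.0,  90.0),
]

def schedule(latency_limit_ms, energy_limit_mJ, contention=1.0):
    # contention > 1 inflates latency when co-resident workloads share the GPU
    feasible = [b for b in branches
                if b[2] * contention <= latency_limit_ms and b[3] <= energy_limit_mJ]
    # fall back to the cheapest branch when nothing meets the requirement
    return max(feasible, key=lambda b: b[1]) if feasible else branches[-1]

print(schedule(latency_limit_ms=100, energy_limit_mJ=500, contention=1.3))
```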
The ubiquity of camera-embedded devices and the advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services for privacy and robustness concerns. However, the performance of these applications is constrained by the raw video streams, which tend to be taken with small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment to mobile devices due to their complex models and their ignorance of system dynamics like energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system on mobile devices. It achieves real-time video enhancement with competitive visual quality while allowing runtime behavior adaptation to the platform-imposed dynamic energy budgets. We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions.
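As an illustration of budget-driven adaptation, the sketch below adjusts a single enhancement-quality knob from feedback on per-window energy spending; the cost model, thresholds, and step size are assumptions rather than AdaEnlight's actual policy.

```python
# Sketch: a feedback controller that adapts an enhancement-quality knob to a
# dynamic per-window energy budget. All numbers are hypothetical.
def adapt_knob(knob, spent_mJ, budget_mJ, step=0.1):
    """knob in [0, 1]: higher means stronger (and costlier) enhancement."""
    if spent_mJ > budget_mJ:          # over budget: back off
        knob -= step
    elif spent_mJ < 0.8 * budget_mJ:  # comfortable headroom: improve quality
        knob += step
    return min(1.0, max(0.0, knob))

knob, budget = 0.5, 120.0
for frame in range(5):
    spent = 60.0 + 140.0 * knob       # hypothetical per-frame energy cost (mJ)
    knob = adapt_knob(knob, spent, budget)
    print(f"frame {frame}: spent {spent:.0f} mJ, next knob {knob:.2f}")
```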
Modern retrospective analytics systems leverage cascade architectures to mitigate the bottleneck of computing deep neural networks (DNNs). However, existing cascades have two limitations: (1) the decoding bottleneck is either neglected or circumvented, paying significant computation and storage costs for pre-processing; and (2) the systems are specialized for temporal queries and lack spatial query support. This paper presents CoVA, a novel cascade architecture that splits the cascade computation between the compressed domain and the pixel domain to address the decoding bottleneck, supporting both temporal and spatial queries. CoVA cascades the analysis into three major stages, where the first two stages are performed in the compressed domain and the last one in the pixel domain. First, CoVA detects occurrences of moving objects (called blobs) over a set of compressed frames (called tracks). Then, using the track results, CoVA prudently selects a minimal set of frames to obtain label information and decodes only those frames to compute the full DNNs, thereby alleviating the decoding bottleneck. Finally, CoVA associates tracks with labels to produce the final analysis results, on which users can process both temporal and spatial queries. Our experiments show that CoVA offers a 4.8x throughput improvement over modern cascade systems while imposing modest accuracy loss.
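A simplified view of the frame-selection step: given blob tracks found in the compressed domain, decode the fewest frames that cover every track before running the full DNN. The track spans and the greedy interval cover below are illustrative; CoVA's actual selection logic may differ.

```python
# Sketch: select a minimal set of frames so that every compressed-domain blob
# track is decoded at least once. Track spans are made-up examples.
tracks = {            # track id -> (first_frame, last_frame) where the blob appears
    "t1": (0, 120),
    "t2": (80, 200),
    "t3": (150, 400),
    "t4": (390, 450),
}

def select_frames(tracks):
    """Greedy interval point cover: decode the fewest frames that hit every track."""
    chosen = []
    for start, end in sorted(tracks.values(), key=lambda iv: iv[1]):
        if not chosen or chosen[-1] < start:   # the last chosen frame misses this track
            chosen.append(end)                 # decoding the track's last frame covers it
    return chosen

frames_to_decode = select_frames(tracks)
print(frames_to_decode)   # only these frames go to the pixel-domain DNN for labels
```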
Recently, applications of deep neural networks (DNNs) have become prominent in many fields, such as computer vision (CV) and natural language processing (NLP), owing to their superior feature-extraction performance. However, high-dimensional parameter models and large-scale mathematical computations restrict execution efficiency, especially for Internet of Things (IoT) devices. Different from the previous cloud/edge-only modes, which place tremendous pressure on uplink communication, and the device-only mode, which shoulders unaffordable computation intensity, we highlight collaborative computation between the device and the edge for DNN models, which can achieve a good balance between communication load and execution accuracy. Specifically, a systematic on-demand co-inference framework is proposed to exploit a multi-branch structure, in which a pre-trained AlexNet is right-sized through early exits and partitioned at an intermediate DNN layer. Integer quantization is applied to further compress the transmitted bits. On this basis, we establish a new deep reinforcement learning (DRL) optimizer, Soft Actor-Critic for Discrete (SAC-D), which generates the exit point, partition point, and compression bits via soft policy iteration. Thanks to its latency- and accuracy-aware reward design, this optimizer adapts well to complex environments such as dynamic wireless channels and arbitrary CPU processing, and is able to support 5G URLLC. Real-world experiments on a Raspberry Pi 4 and a PC show the effectiveness of the proposed solution.
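To make the decision space concrete, the sketch below enumerates discrete (exit point, partition point, quantization bits) actions and scores them with a latency- and accuracy-aware reward of the kind a SAC-D-style optimizer would maximize. The cost model, weights, and profile numbers are assumptions, not the paper's exact formulation.

```python
# Sketch: discrete action space and a latency/accuracy-aware reward for
# device-edge co-inference. All constants below are illustrative placeholders.
from itertools import product

exit_points = [1, 2, 3]          # which early exit of the multi-branch network to use
partition_points = [0, 1, 2, 3]  # layer after which the device hands off to the edge
bit_widths = [8, 16, 32]         # integer quantization of the transmitted activation
actions = list(product(exit_points, partition_points, bit_widths))

def reward(action, channel_mbps, target_ms=50.0, alpha=1.0, beta=0.02):
    exit_pt, part_pt, bits = action
    accuracy = 0.70 + 0.05 * exit_pt                 # deeper exits are more accurate
    device_ms = 6.0 * part_pt                        # crude per-layer device cost
    payload_mbit = 0.4 * bits / 32 * max(1, 4 - part_pt)
    comm_ms = payload_mbit / channel_mbps * 1000     # the wireless channel varies over time
    latency = device_ms + comm_ms
    return alpha * accuracy - beta * max(0.0, latency - target_ms)

best = max(actions, key=lambda a: reward(a, channel_mbps=5.0))
print("chosen (exit, partition, bits):", best)
```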
Keyword spotting (KWS) is an important function that enables interaction with the many ubiquitous smart devices in our surroundings, either by activating them through a wake word or directly as a human-machine interface. For many applications, KWS is the entry point of our interactions with a device and is therefore an always-on workload. Many smart devices are mobile, and their battery lifetime is heavily impacted by continuously running services. KWS and similar always-on services are thus a focal point when optimizing overall power consumption. This work addresses KWS energy efficiency on low-cost microcontroller units (MCUs). We combine analog binary feature extraction with binary neural networks. By replacing the digital preprocessing with the proposed analog front-end, we show that the energy required for data acquisition and preprocessing can be reduced by 29x, cutting its share from a dominating 85% to a mere 16% of the overall energy consumption of our reference KWS application. Experimental evaluation on the Speech Commands dataset shows that the proposed system outperforms the state of the art in accuracy and energy efficiency, achieving a 1% improvement and a 4.3x reduction, respectively, on a 10-class dataset, while providing a compelling accuracy-energy trade-off, including a 71x energy reduction at a 2% accuracy drop.
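A minimal sketch of the binarized inference path: with {-1, +1} weights and activations, a dense layer's dot products are exactly what XNOR-and-popcount hardware computes cheaply. The layer sizes and random weights below are placeholders, and the analog front-end is represented only by a pre-binarized feature vector.

```python
# Sketch: inference through a tiny binarized network for keyword classification.
import numpy as np

rng = np.random.default_rng(0)

def binary_dense(x, w):
    # +/-1 weights and inputs: this matmul is what XNOR+popcount computes on an MCU
    return np.sign(w @ x)

features = np.sign(rng.standard_normal(64))        # binary features from the analog front-end
w_hidden = np.sign(rng.standard_normal((32, 64)))  # binarized hidden layer
w_out = np.sign(rng.standard_normal((10, 32)))     # 10 keyword classes

hidden = binary_dense(features, w_hidden)
scores = w_out @ hidden                            # keep integer scores for the argmax
print("predicted keyword class:", int(np.argmax(scores)))
```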
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training remain preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating various optimization techniques used in distributed GNN training. First, distributed GNN training is classified into several categories according to their workflows. In addition, their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.