Although mission-critical applications require the use of deep neural networks (DNNs), their continuous execution on mobile devices results in a significant increase in energy consumption. While edge offloading can reduce energy consumption, erratic patterns in channel quality, network load, and edge server load can lead to severe disruption of the system's key operations. An alternative approach, called split computing, generates compressed representations within the model (called "bottlenecks") to reduce bandwidth usage and energy consumption. Prior work has proposed approaches that introduce additional layers, at the expense of energy consumption and latency. For this reason, we propose a new framework called BottleFit, which, in addition to targeted DNN architecture modifications, includes a novel training strategy that achieves high accuracy even with strong compression rates. We apply BottleFit to image classification and show that BottleFit achieves 77.1% data compression with up to 0.6% accuracy loss on the ImageNet dataset, whereas the state of the art, such as SPINN, loses up to 6% accuracy. We experimentally measure the power consumption and latency of an image classification application running on an NVIDIA Jetson Nano board (GPU-based) and a Raspberry Pi board (GPU-less). We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.) local computing, and by 37% and 55% w.r.t. edge offloading. We also compare BottleFit with state-of-the-art autoencoder-based approaches and show that (i) BottleFit reduces power consumption and execution time by up to 54% and 44% on the Jetson Nano and by 40% and 62% on the Raspberry Pi; (ii) the size of the head model executed on the mobile device is 83 times smaller. The code repository will be published for full reproducibility of the results.
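To make the bottleneck idea concrete, the minimal PyTorch sketch below inserts a small encoder/decoder pair at an assumed split point of a ResNet-50 (after layer1, with 12 bottleneck channels). The split point, channel count, and the module itself are illustrative assumptions, not BottleFit's actual architecture or training recipe.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class Bottleneck(nn.Module):
    """Tiny encoder/decoder pair at the split point; the encoder output is the
    compressed tensor actually sent from the device to the edge server."""
    def __init__(self, in_ch=256, bottleneck_ch=12):
        super().__init__()
        self.encoder = nn.Sequential(          # runs on the mobile device
            nn.Conv2d(in_ch, bottleneck_ch, kernel_size=2, stride=2),
            nn.BatchNorm2d(bottleneck_ch), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(          # runs on the edge server
            nn.ConvTranspose2d(bottleneck_ch, in_ch, kernel_size=2, stride=2),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = resnet50(weights=None).eval()
# Head (device): stem + layer1.  Tail (server): layer2 onwards.
head = nn.Sequential(model.conv1, model.bn1, model.relu, model.maxpool, model.layer1)
tail = nn.Sequential(model.layer2, model.layer3, model.layer4,
                     model.avgpool, nn.Flatten(), model.fc)
bottleneck = Bottleneck(in_ch=256, bottleneck_ch=12).eval()

with torch.no_grad():
    h = head(torch.randn(1, 3, 224, 224))
    z = bottleneck.encoder(h)                  # compressed tensor to transmit
    y = tail(bottleneck.decoder(z))            # server-side reconstruction + tail
print(f"split-tensor size ratio: {z.numel() / h.numel():.3f}, logits: {tuple(y.shape)}")
# In practice the bottleneck (and typically the tail) would be trained or fine-tuned
# so that accuracy is preserved despite the aggressive channel reduction.
```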
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others. However, continuously executing the entire DNN on mobile devices can quickly deplete their battery. Although task offloading to cloud/edge servers may decrease the mobile device's computational burden, erratic patterns in channel quality, network load, and edge server load can lead to a significant delay in task execution. Recently, approaches based on split computing (SC) have been proposed, where the DNN is split into a head model and a tail model, executed on the mobile device and on the edge server, respectively. Ultimately, this may reduce bandwidth usage as well as energy consumption. Another approach, called early exiting (EE), trains models that present multiple "exits" along the architecture, each providing increasingly higher target accuracy. Therefore, the trade-off between accuracy and delay can be tuned according to the current conditions or application demands. In this paper, we provide a comprehensive survey of SC and EE strategies by presenting a comparison of the most relevant approaches. We conclude the paper by providing a set of compelling research challenges.
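As a toy illustration of the early-exiting idea described above (not any specific scheme from the surveyed papers), the sketch below adds one intermediate classifier to a small CNN and exits early when its softmax confidence passes an assumed threshold; all layer sizes and the threshold are assumptions.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy backbone with one intermediate exit; exits early when confident enough."""
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(32, num_classes))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(64, num_classes))
        self.threshold = threshold

    def forward(self, x):
        h = self.block1(x)
        logits1 = self.exit1(h)
        conf1 = logits1.softmax(dim=-1).max(dim=-1).values
        if conf1.item() >= self.threshold:   # early exit (batch size 1 assumed)
            return logits1, "exit1"
        return self.exit2(self.block2(h)), "exit2"

net = EarlyExitNet().eval()
with torch.no_grad():
    logits, taken = net(torch.randn(1, 3, 32, 32))
print(taken, tuple(logits.shape))
```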
Split computing has recently emerged as a paradigm for the implementation of DNN-based AI workloads, in which a DNN model is split into two parts, one executed on a mobile/client device and the other on an edge server (or cloud). Data compression is applied to the intermediate tensor of the DNN that needs to be transmitted, in order to address the challenge of optimizing the rate-accuracy-complexity trade-off. Existing split computing approaches adopt ML-based data compression, but require that the parameters of the entire DNN model (or a large portion of it) be retrained for different compression levels. This incurs a high computational and storage burden: training a full DNN model from scratch is computationally demanding, maintaining multiple copies of the DNN parameters increases storage requirements, and switching the full set of weights during inference increases memory bandwidth. In this paper, we present an approach that addresses all these challenges. It involves the systematic design and training of bottleneck units - simple, low-cost neural networks - that can be inserted at the split point. Compared with existing methods, our approach is remarkably lightweight, both during training and inference, and highly efficient, requiring only a small fraction of the compute and storage overhead.
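A minimal sketch of the "train only the bottleneck unit" idea follows; the stand-in head/tail modules, the latent width, and the plain cross-entropy objective are assumptions for illustration, not the paper's actual design or loss.

```python
import torch
import torch.nn as nn

def train_bottleneck_only(head, bottleneck, tail, loader, epochs=1, lr=1e-3):
    """Train only the bottleneck unit; the pretrained head and tail stay frozen,
    so no full copy of the DNN has to be retrained per compression level."""
    for p in list(head.parameters()) + list(tail.parameters()):
        p.requires_grad_(False)
    opt = torch.optim.Adam(bottleneck.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                h = head(x)                     # frozen on-device part
            loss = ce(tail(bottleneck(h)), y)   # frozen tail, trainable bottleneck
            opt.zero_grad()
            loss.backward()
            opt.step()
    return bottleneck

# Toy demonstration with stand-in modules and random data (shapes are assumptions).
head = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
bottleneck = nn.Sequential(nn.Linear(64, 8), nn.ReLU(), nn.Linear(8, 64))  # 8-dim latent
tail = nn.Linear(64, 10)
loader = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(5)]
train_bottleneck_only(head, bottleneck, tail, loader)
```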
Recently, there has been an explosive growth of mobile and embedded applications using convolutional neural networks (CNNs). To alleviate their excessive computational demands, developers have traditionally resorted to cloud offloading, which induces high infrastructure costs and a strong dependence on networking conditions. On the other hand, the emergence of powerful SoCs is gradually enabling on-device execution. Nonetheless, low- and mid-tier platforms still struggle to run state-of-the-art CNNs adequately. In this paper, we present DynO, a distributed inference framework that combines the best of both worlds to address several challenges, such as device heterogeneity, varying bandwidth, and multi-objective requirements. This is enabled by its novel CNN-specific data packing method, which exploits the variability in precision needs across different parts of the CNN when onloading computation, and by its novel scheduler, which jointly tunes the partition point and the precision of the transferred data at run time to adapt inference to its execution environment. Quantitative evaluation shows that DynO outperforms the current state of the art, improving throughput by over an order of magnitude over competing CNN offloading systems while transferring up to 60x less data.
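The sketch below illustrates one way the transmitted tensor could be packed at a variable precision, via simple uniform affine quantization; this is an assumed scheme for illustration, not DynO's actual data packing method, and the tensor shape is arbitrary.

```python
import torch

def quantize(t: torch.Tensor, bits: int):
    """Uniform affine quantization of a tensor to the given bit-width."""
    qmax = 2 ** bits - 1
    t_min, t_max = t.min(), t.max()
    scale = (t_max - t_min).clamp(min=1e-8) / qmax
    q = ((t - t_min) / scale).round().clamp(0, qmax).to(torch.int32)
    return q, scale, t_min

def dequantize(q, scale, t_min):
    return q.to(torch.float32) * scale + t_min

x = torch.randn(1, 256, 28, 28)           # example split-point activation
for bits in (8, 4, 2):
    q, scale, zero = quantize(x, bits)
    err = (dequantize(q, scale, zero) - x).abs().mean().item()
    payload_bytes = q.numel() * bits / 8  # ideal packed size, ignoring headers
    print(f"{bits}-bit: ~{payload_bytes/1024:.0f} KiB payload, mean abs error {err:.4f}")
```

Lower bit-widths shrink the transferred payload at the cost of reconstruction error, which is the precision/traffic trade-off a scheduler can tune at run time.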
Deploying deep neural networks (DNNs) on IoT and mobile devices is a challenging task due to their limited computational resources. Thus, demanding tasks are often entirely offloaded to edge servers, which can accelerate inference but also incurs communication costs and raises privacy concerns. Moreover, this approach leaves the computational capacity of the end devices unused. Split computing is a paradigm in which a DNN is split into two sections; the first section is executed on the end device, and its output is transmitted to the edge server, which executes the final section. Here, we introduce dynamic split computing, where the optimal split location is dynamically selected based on the state of the communication channel. By using natural bottlenecks that already exist in modern DNN architectures, dynamic split computing avoids retraining and hyperparameter optimization and has no negative impact on the final accuracy of the DNN. Through extensive experiments, we show that dynamic split computing achieves faster inference in edge computing environments where the data rate and the server load vary over time.
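A minimal sketch of channel-aware split selection is given below: for each candidate split, an end-to-end latency is estimated from an assumed device throughput, server speed-up, and the current data rate, and the split with the lowest estimate is chosen. The latency model and every constant are illustrative assumptions, not the paper's formulation.

```python
DEVICE_GFLOPS_PER_S = 5.0   # assumed device compute throughput

def choose_split(candidates, data_rate_mbps, server_speedup):
    """candidates: dicts with per-split device/server GFLOPs and the size (in
    megabits) of the tensor transmitted at that split point."""
    best, best_latency = None, float("inf")
    for c in candidates:
        t_device = c["device_gflops"] / DEVICE_GFLOPS_PER_S
        t_tx = c["tx_megabits"] / data_rate_mbps
        t_server = c["server_gflops"] / (DEVICE_GFLOPS_PER_S * server_speedup)
        latency = t_device + t_tx + t_server
        if latency < best_latency:
            best, best_latency = c["name"], latency
    return best, best_latency

candidates = [  # made-up numbers for three candidate split locations
    {"name": "after_block1", "device_gflops": 0.4, "server_gflops": 3.8, "tx_megabits": 6.4},
    {"name": "after_block3", "device_gflops": 2.1, "server_gflops": 2.1, "tx_megabits": 1.6},
    {"name": "device_only",  "device_gflops": 4.2, "server_gflops": 0.0, "tx_megabits": 0.0},
]
print(choose_split(candidates, data_rate_mbps=20.0, server_speedup=10.0))
```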
Recently, deploying deep neural network (DNN) models via collaborative inference, which splits a pre-trained model into two parts and executes them on the user equipment (UE) and the edge server respectively, has become attractive. However, the large intermediate features of DNNs hinder flexible decoupling, and existing approaches either focus on the single-UE scenario or simply define tasks by the required CPU cycles while ignoring the indivisibility of a single DNN layer. In this paper, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple UEs. Our goal is to achieve fast and energy-efficient inference for all UEs. To achieve this goal, we first design a lightweight autoencoder-based method to compress the large intermediate features. Then we define tasks according to the inference overhead of DNNs and formulate the problem as a Markov decision process (MDP). Finally, we propose a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm to solve the optimization problem with a hybrid action space. We conduct extensive experiments with different types of networks, and the results show that our method can reduce the inference latency by 56% and save up to 72% of the energy consumption.
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
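As a small, hedged illustration of two of the technique families discussed above (pruning and quantization), the PyTorch sketch below applies global magnitude pruning followed by dynamic INT8 quantization to a toy model; it is not the article's experimental pipeline, and the model and pruning ratio are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 128), nn.ReLU(),
                      nn.Linear(128, 10))

# 1) Prune 60% of the smallest-magnitude weights, globally across the linear layers.
params_to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(params_to_prune,
                          pruning_method=prune.L1Unstructured, amount=0.6)
for m, name in params_to_prune:
    prune.remove(m, name)   # make the pruning permanent (bake zeros into the weights)

# 2) Dynamic quantization: weights stored as int8, activations quantized at run time.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 784)).shape)   # torch.Size([1, 10])
```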
State-of-the-art performance for many emerging edge applications is achieved by deep neural networks (DNNs). Often, these DNNs are location and time sensitive, and the parameters of a specific DNN must be delivered from an edge server to the edge device rapidly and efficiently to carry out time-sensitive inference tasks. In this paper, we introduce AirNet, a novel training and transmission method that allows efficient wireless delivery of DNNs under stringent transmit power and latency constraints. We first train the DNN with noise injection to counter the wireless channel noise. Then we employ pruning to reduce the network size to the available channel bandwidth, and perform knowledge distillation from a larger model to achieve satisfactory performance, despite pruning. We show that AirNet achieves significantly higher test accuracy compared to digital alternatives under the same bandwidth and power constraints. The accuracy of the network at the receiver also exhibits graceful degradation with channel quality, which reduces the requirement for accurate channel estimation. We further improve the performance of AirNet by pruning the network below the available bandwidth, and using channel expansion to provide better robustness against channel noise. We also benefit from unequal error protection (UEP) by selectively expanding more important layers of the network. Finally, we develop an ensemble training approach, which trains a whole spectrum of DNNs, each of which can be used at a different channel condition, resolving the impractical memory requirements.
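A hedged sketch of noise-injection training is shown below: during training, Gaussian noise scaled to an assumed SNR is added to the layer weights so the model learns to tolerate noisy delivery. The layer design and noise model are illustrative assumptions, not AirNet's actual analog transmission scheme.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed with Gaussian noise during training,
    with noise power set by a target SNR (assumed formulation)."""
    def __init__(self, in_f, out_f, snr_db=10.0):
        super().__init__(in_f, out_f)
        self.snr_db = snr_db

    def forward(self, x):
        if self.training:
            signal_power = self.weight.pow(2).mean()
            noise_power = signal_power / (10 ** (self.snr_db / 10))
            w = self.weight + torch.randn_like(self.weight) * noise_power.sqrt()
        else:
            w = self.weight
        return nn.functional.linear(x, w, self.bias)

layer = NoisyLinear(128, 10, snr_db=10.0)
layer.train()
print(layer(torch.randn(4, 128)).shape)   # torch.Size([4, 10])
```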
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is separated and cooperatively processed across an edge server and IoT devices over the network. Thus, the architecture of the neural network model significantly affects the communication payload size, the model accuracy, and the computational load. In this paper, we address the challenge of optimizing neural network architectures for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and a split point so as to meet a latency requirement (i.e., the total latency of computation and communication is smaller than a given threshold). NASC employs a one-shot NAS that does not require repeated model training, enabling a computationally efficient architecture search. Our performance evaluation using hardware (HW)-NAS-Bench benchmark data shows that the proposed NASC can improve the trade-off between communication latency and model accuracy, i.e., reduce the latency by approximately 40-60% from the baseline, with slight accuracy degradation.
Deep learning technologies have demonstrated remarkable effectiveness in a wide variety of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models usually incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only little storage, and incur only low computational overhead. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify a model's effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art deep model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
Recently, deep neural networks (DNNs) have become very prominent in many fields, such as computer vision (CV) and natural language processing (NLP), owing to their superior feature extraction performance. However, high-dimensional parameter models and large-scale mathematical computations limit execution efficiency, especially for Internet of Things (IoT) devices. Different from the previous cloud/edge-only mode, which puts enormous pressure on uplink communication, and the device-only mode, which bears an unaffordable computation intensity, we highlight collaborative computation between the device and the edge for DNN models, which can achieve a good balance between communication load and execution accuracy. Specifically, a systematic on-demand co-inference framework is proposed to exploit a multi-branch structure, in which a pre-trained AlexNet is right-sized through early exits and partitioned at an intermediate DNN layer. Integer quantization is applied to further compress the transmitted bits. As a result, we establish a new deep reinforcement learning (DRL) optimizer, Soft Actor-Critic for discrete (SAC-d), which generates the exit point, partition point, and compression bit-width via soft policy iteration. Based on a latency- and accuracy-aware reward design, this optimizer can adapt well to complex environments such as dynamic wireless channels and arbitrary CPU processing, and is capable of supporting 5G URLLC. Real-world experiments on a Raspberry Pi 4 and a PC demonstrate the performance of the proposed solution.
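The fragment below sketches, under stated assumptions, how such a discrete action space and a latency- and accuracy-aware reward might look; the action values, reward form, and coefficients are illustrative, not the paper's exact definitions.

```python
# Assumed setup: the DRL agent's discrete action jointly fixes the early-exit point,
# the partition point and the quantization bit-width; the reward favours accuracy and
# penalizes end-to-end latency beyond a budget, so the policy adapts the three knobs
# to the current channel and CPU conditions.
ACTIONS = [(e, p, b)
           for e in (1, 2, 3)        # early-exit branch index (assumed 3 exits)
           for p in (2, 5, 8)        # candidate partition layers (assumed)
           for b in (8, 16)]         # integer quantization bit-widths (assumed)

def reward(accuracy, latency_s, budget_s=0.10, alpha=1.0, beta=5.0):
    return alpha * accuracy - beta * max(0.0, latency_s - budget_s)

print(len(ACTIONS), "discrete actions")
print(reward(accuracy=0.91, latency_s=0.08))   # within the budget -> 0.91
print(reward(accuracy=0.95, latency_s=0.20))   # 0.10 s over budget -> 0.45
```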
This work makes a substantial step in the field of split computing, i.e., how to split a deep neural network so that its early part is hosted on an embedded device while the remainder runs on a server. So far, potential split locations have been identified by exploiting purely architectural aspects, i.e., based on layer sizes. Under this paradigm, the efficacy of a split can only be evaluated after performing the split and retraining the entire pipeline, making an exhaustive evaluation of all plausible split points prohibitive in terms of time. Here we show that not only does the architecture of the layers matter, but also the importance of the neurons they contain. A neuron is considered important if its gradient with respect to the correct class decision is high. It follows that a split should be applied right after a layer with a high density of important neurons, in order to preserve the information flowing through it. Based on this idea, we propose Interpretable Split (I-SPLIT): a procedure that identifies the most suitable split point by providing, ahead of its actual implementation, a reliable indication of how well the split will perform in terms of classification accuracy. As a further major contribution of I-SPLIT, we show that the best choice of the split point for a multiclass classification problem also depends on the specific classes the network has to deal with. Exhaustive experiments have been carried out on two networks, VGG16 and ResNet-50, and three datasets, Tiny-ImageNet-200, notMNIST, and Chest X-Ray Pneumonia. The source code is available at https://github.com/vips4/i-split.
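The sketch below computes a Grad-CAM-style, gradient-based importance score for one candidate layer of a VGG16; the chosen layer, the random stand-in input, the class index, and the exact score are illustrative assumptions in the spirit of the idea above, not I-SPLIT's exact procedure.

```python
import torch
from torchvision.models import vgg16

model = vgg16(weights=None).eval()   # randomly initialized stand-in model
activations, gradients = {}, {}

def capture(name):
    def fwd_hook(_module, _inputs, output):
        activations[name] = output
        output.register_hook(lambda grad: gradients.update({name: grad}))
    return fwd_hook

layer_name = "features.15"           # candidate split point (assumed choice of layer)
dict(model.named_modules())[layer_name].register_forward_hook(capture(layer_name))

x = torch.randn(1, 3, 224, 224)      # stand-in input image
target_class = 3                     # assumed "correct" class index
model(x)[0, target_class].backward()

act, grad = activations[layer_name], gradients[layer_name]
# Grad-CAM-style score: channel-wise gradient weights times activations, ReLU'd, summed.
importance = (grad.mean(dim=(2, 3), keepdim=True) * act).clamp(min=0).sum().item()
print(f"importance score at {layer_name}: {importance:.3f}")
```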
In recent years, deep learning (DL) models have demonstrated remarkable achievements on non-trivial tasks such as speech recognition and natural language understanding. One of the significant contributors to its success is the proliferation of end devices that acted as a catalyst to provide data for data-hungry DL models. However, computing DL training and inference is the main challenge. Usually, central cloud servers are used for the computation, but it opens up other significant challenges, such as high latency, increased communication costs, and privacy concerns. To mitigate these drawbacks, considerable efforts have been made to push the processing of DL models to edge servers. Moreover, the confluence point of DL and edge has given rise to edge intelligence (EI). This survey paper focuses primarily on the fifth level of EI, called all in-edge level, where DL training and inference (deployment) are performed solely by edge servers. All in-edge is suitable when the end devices have low computing resources, e.g., Internet-of-Things, and other requirements such as latency and communication cost are important in mission-critical applications, e.g., health care. Firstly, this paper presents all in-edge computing architectures, including centralized, decentralized, and distributed. Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers. Thirdly, model adaptation techniques based on model compression and conditional computation are described because the standard cloud-based DL deployment cannot be directly applied to all in-edge due to its limited computational resources. Fourthly, this paper discusses eleven key performance metrics to evaluate the performance of DL at all in-edge efficiently. Finally, several open research challenges in the area of all in-edge are presented.
Mobile devices increasingly rely on object detection (OD) through deep neural networks (DNNs) to perform critical tasks. Due to their high complexity, the execution of these DNNs requires excessive time and energy. Low-complexity object tracking (OT) can be used together with OD, with the latter applied periodically to generate "fresh" references for tracking. However, the frames processed with OD incur large delays, which may make the references outdated and degrade the tracking quality. Here, we propose to use edge computing in this context and establish parallel OT (on the mobile device) and OD (at the edge server) processes that are resilient to large OD latency. We propose Katch-Up, a novel tracking mechanism that improves the system's resilience to excessive OD delay. However, while Katch-Up significantly improves performance, it also increases the computational load on the mobile device. Hence, we design SmartDet, a low-complexity controller based on deep reinforcement learning (DRL) that learns the trade-off between resource utilization and OD performance. SmartDet takes as input context-related information on the current video content and the current network conditions to optimize the frequency and type of OD offloading, as well as the Katch-Up utilization. We extensively evaluate SmartDet on a testbed where the mobile device is connected through a Wi-Fi link to an edge server equipped with a GTX 980 Ti. Experimental results show that SmartDet achieves an optimal balance between tracking performance - mean average recall (MAR) - and resource usage. With respect to a baseline with full Katch-Up usage and maximum channel usage, we still increase MAR by 4% while using 50% of the channel and 30% of the power resources associated with Katch-Up. With respect to a fixed strategy using the minimum resources, we increase MAR by 20% while using Katch-Up on 1/3 of the frames.
The main challenge in deploying deep neural networks (DNNs) over mobile edge networks is how to split the DNN model so as to match the network architecture as well as the computation and communication capacities of all the nodes. This essentially involves two highly coupled procedures: model generation and model splitting. In this paper, a joint model split and neural architecture search (JMSNAS) framework is proposed to automatically generate and deploy DNN models over a mobile edge network. Considering both the computation and communication resource constraints, a computational graph search problem is formulated to find the multiple split points of the DNN model, and the model is then trained to meet a given accuracy requirement. Moreover, the trade-off between model accuracy and completion latency is achieved through a proper design of the objective function. Experimental results confirm the superiority of the proposed framework over state-of-the-art split machine learning design methods.
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in the early studies. Therefore, we propose a clustering based hint selection methodology, where the layers of teacher model are clustered with respect to several metrics and the cluster centers are used as the hint points. Our method is applicable to any student network, once it is applied on a chosen teacher network. The proposed approach is validated on CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
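A hedged sketch of clustering-based hint selection follows: each teacher layer is described by a few assumed per-layer features, the layers are clustered with k-means, and the layer nearest each cluster center is taken as a hint point. The descriptors, layer count, and cluster count are illustrative, not the paper's metrics.

```python
import numpy as np
from sklearn.cluster import KMeans

layer_names = [f"layer{i}" for i in range(12)]
# Assumed per-layer descriptors, e.g. (relative depth, log #channels, activation norm).
features = np.random.rand(12, 3)

k = 4                                        # number of hint points to select
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
hints = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
    hints.append(layer_names[members[dists.argmin()]])   # layer closest to the center
print("selected hint layers:", sorted(hints))
```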
By exploiting the diversity of data samples, early-exit networks have recently emerged as a prominent neural network architecture to accelerate deep learning inference. However, the intermediate classifiers of early exits introduce additional computational overhead, which is detrimental to resource-constrained edge artificial intelligence (AI). In this paper, we propose an early exit prediction mechanism to reduce the on-device computational overhead in a device-edge co-inference system supported by early-exit networks. Specifically, we design a low-complexity module, namely the exit predictor, to guide some distinctly "hard" samples to bypass the computation of the early exits. Furthermore, considering varying communication bandwidth, we extend the early exit prediction mechanism to latency-aware edge inference, which adapts the prediction thresholds of the exit predictor and the confidence thresholds of the early-exit network via a few simple regression models. Extensive experimental results demonstrate that the exit predictor achieves a better trade-off between accuracy and on-device computational overhead for early-exit networks. Besides, compared with the baseline methods, the proposed method for latency-aware edge inference attains higher inference accuracy under different bandwidth conditions.
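The sketch below illustrates the exit-predictor idea under stated assumptions: a very cheap module scores the feature map at the early-exit branch and, below an assumed threshold, the sample bypasses the early-exit classifier entirely; the module design and threshold are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ExitPredictor(nn.Module):
    """Cheap module predicting whether the early exit is likely to suffice."""
    def __init__(self, in_ch=32, threshold=0.5):
        super().__init__()
        self.score = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(in_ch, 1), nn.Sigmoid())
        self.threshold = threshold

    def forward(self, feat):
        return self.score(feat).squeeze(-1)   # probability that the early exit suffices

predictor = ExitPredictor(in_ch=32)
early_exit = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10))

feat = torch.randn(1, 32, 16, 16)             # feature map at the early-exit branch
if predictor(feat).item() >= predictor.threshold:
    logits = early_exit(feat)                 # cheap path: classify on-device and stop
    print("early exit taken:", logits.argmax(dim=-1).item())
else:
    print("predicted hard sample: skip the early exit and continue/offload the rest")
```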
Large transformer models can highly improve Answer Sentence Selection (AS2) tasks, but their high computational costs prevent their use in many real-world applications. In this paper, we explore the following research question: How can we make the AS2 models more accurate without significantly increasing their model complexity? To address the question, we propose a Multiple Heads Student architecture (named CERBERUS), an efficient neural network designed to distill an ensemble of large transformers into a single smaller model. CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike traditional distillation techniques, each of them is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members. The resulting model captures the knowledge of heterogeneous transformer models by using just a few extra parameters. We show the effectiveness of CERBERUS on three English datasets for AS2; our proposed approach outperforms all single-model distillations we consider, rivaling the state-of-the-art large AS2 models that have 2.7x more parameters and run 2.5x slower. Code for our model is available at https://github.com/amazon-research/wqa-cerberus
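A minimal sketch of a shared-encoder, multi-head student follows; the dimensions, vocabulary size, and first-token pooling are assumptions for illustration, and this is not the released CERBERUS model.

```python
import torch
import torch.nn as nn

class MultiHeadStudent(nn.Module):
    """Shared transformer encoder feeding several ranking heads; each head would be
    distilled from a different teacher so the ensemble's diversity is preserved."""
    def __init__(self, vocab_size=30522, d_model=256, n_layers=4, n_heads=4, n_rankers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=512,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.rankers = nn.ModuleList([nn.Linear(d_model, 1) for _ in range(n_rankers)])

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))[:, 0]    # first-token pooling (assumed)
        scores = torch.cat([head(h) for head in self.rankers], dim=-1)
        return scores, scores.mean(dim=-1)               # per-head and ensemble scores

student = MultiHeadStudent()
scores, final = student(torch.randint(0, 30522, (2, 64)))
print(scores.shape, final.shape)   # torch.Size([2, 3]) torch.Size([2])
```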
Deploying machine learning (ML) on milliwatt-scale edge devices (TinyML) is gaining popularity due to recent breakthroughs in ML and IoT. However, the capabilities of TinyML are restricted by strict power and compute constraints. Most contemporary research in TinyML focuses on model compression techniques, such as model pruning and quantization, to fit ML models onto low-end devices. Nevertheless, the improvements in energy consumption and inference time obtained by existing techniques are limited, because aggressive compression quickly shrinks model capacity and accuracy. Another approach to improving inference time and/or reducing power while preserving model capacity is through early-exit networks. These networks place intermediate classifiers along a baseline neural network that facilitate early exit from neural network computation if an intermediate classifier exhibits sufficient confidence in its prediction. Prior work on early-exit networks has focused on large networks, beyond what is typically used for TinyML applications. In this paper, we discuss the challenges of adding early exits to state-of-the-art tiny CNNs and design an early-exit architecture, T-RECX, that addresses these challenges. In addition, we develop a method to alleviate the effect of network overthinking at the final exit by leveraging the high-level representations learned by the early exit. We evaluate T-RECX on three CNNs from the MLPerf Tiny benchmark suite for image classification, keyword spotting, and visual wake word detection tasks. Our results show that T-RECX improves the accuracy of the baseline networks and significantly reduces the average inference time of tiny CNNs. T-RECX achieves a 32.58% average reduction in FLOPS in exchange for a 1% accuracy loss across all evaluated models. In addition, our technique increases the accuracy of the baseline network for two out of the three models we evaluate.