In edge-cloud collaborative intelligence (CI), an unreliable transmission channel exists in the information path of the AI model performing the inference. It is important to be able to simulate the performance of the CI system over imperfect channels in order to understand system behavior and develop appropriate error control strategies. In this paper, we present a simulation framework called DFTS2, which enables researchers to define the components of a CI system in TensorFlow 2, select a packet-based channel model with various parameters, and simulate system behavior under various channel conditions and error/loss control strategies. Using DFTS2, we also present the most comprehensive study to date of packet-loss concealment methods for collaborative image classification models.
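The sketch below shows, in TensorFlow 2 to match the framework's setting, the kind of packet-based channel simulation described above; it is a minimal illustration, not DFTS2's actual API. Values of an intermediate feature tensor are grouped into fixed-size packets, each packet is dropped i.i.d. with some probability, and lost values are zeroed as a trivial concealment. The split layer name is a hypothetical choice.

```python
import numpy as np
import tensorflow as tf

def simulate_packet_loss(features: tf.Tensor, packet_size: int = 256,
                         p_loss: float = 0.1) -> tf.Tensor:
    """Drop whole packets of feature values i.i.d. with probability p_loss."""
    flat = tf.reshape(features, [-1])
    n_values = int(flat.shape[0])
    n_packets = int(np.ceil(n_values / packet_size))
    keep = np.random.rand(n_packets) >= p_loss               # one decision per packet
    mask = np.repeat(keep, packet_size)[:n_values].astype(np.float32)
    return tf.reshape(flat * mask, features.shape)           # lost values zeroed

# Split a Keras model at a (hypothetical) layer and corrupt the "transmitted" tensor.
model = tf.keras.applications.MobileNetV2(weights=None)
head = tf.keras.Model(model.input, model.get_layer("block_8_add").output)
received = simulate_packet_loss(head(tf.random.normal([1, 224, 224, 3])), p_loss=0.2)
```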
Distributed inference (DI) frameworks have gained traction as a technique for running cutting-edge deep machine learning (ML) on resource-constrained Internet of Things (IoT) devices in real-time applications. In DI, computational tasks are offloaded from IoT devices to an edge server over lossy IoT networks. In general, however, there is a communication-system-level trade-off between communication latency and reliability; thus, to deliver accurate DI results, a reliable but high-latency communication system must be adopted, resulting in non-negligible end-to-end latency for DI. This motivates us to improve the trade-off between communication latency and accuracy through ML techniques. Specifically, we propose communication-oriented model tuning (ComTune), which aims to achieve highly accurate DI over low-latency but unreliable communication links. The key idea in ComTune is to fine-tune the model against the effects of unreliable communication links by applying the dropout technique. This enables the DI system to obtain robustness against unreliable communication links. Our ML experiments show that ComTune enables accurate prediction with low latency over lossy networks.
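A minimal sketch of the core idea follows, under the assumption that dropout applied at the split point stands in for random feature loss on the link; the wrapper name and rate are illustrative, not the paper's exact setup.

```python
import tensorflow as tf

def make_link_robust(tail: tf.keras.Model, drop_rate: float = 0.2) -> tf.keras.Model:
    """Wrap the server-side tail so fine-tuning sees randomly dropped features."""
    inp = tf.keras.Input(shape=tail.input_shape[1:])
    # Dropout emulates the unreliable link: each received feature element is
    # lost (zeroed) with probability drop_rate, even at fine-tuning time.
    noisy = tf.keras.layers.Dropout(drop_rate)(inp, training=True)
    return tf.keras.Model(inp, tail(noisy))
```

Fine-tuning against this wrapper is what would give the deployed model its tolerance to feature loss at inference time.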
This paper aims to design robust Edge Intelligence using semantic communication for time-critical IoT applications. We systematically analyze the effect of image DCT coefficients on inference accuracy and propose a channel-agnostic effectiveness encoding for offloading that transmits the most meaningful task data first. This scheme makes full use of all available communication resources and strikes a balance between transmission latency and inference accuracy. Then, we design an effectiveness decoding by implementing a novel image augmentation process for convolutional neural network (CNN) training, through which an original CNN model is transformed into a Robust CNN model. We use the proposed training method to generate Robust MobileNet-v2 and Robust ResNet-50. The proposed Edge Intelligence framework consists of the proposed effectiveness encoding and effectiveness decoding. The experimental results show that the effectiveness decoding using the Robust CNN models performs consistently better under various image distortions caused by channel errors or limited communication resources. The proposed Edge Intelligence framework using semantic communication significantly outperforms the conventional approach under latency and data rate constraints, in particular under ultra-stringent deadlines and low data rates.
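As a rough illustration of "most meaningful data first", the sketch below ranks 2-D DCT coefficients by magnitude and keeps only the fraction that fits the channel budget. The magnitude-based ranking is our assumption; the paper derives importance from its own analysis of DCT coefficients versus inference accuracy.

```python
import numpy as np
from scipy.fft import dctn, idctn

def effectiveness_encode(img: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the highest-energy DCT coefficients for transmission."""
    coeffs = dctn(img, norm="ortho")
    k = max(1, int(keep_ratio * coeffs.size))
    threshold = np.partition(np.abs(coeffs).ravel(), -k)[-k]   # k-th largest magnitude
    return np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

def receiver_decode(coeffs: np.ndarray) -> np.ndarray:
    """Inverse DCT of whatever coefficients arrived before the deadline."""
    return idctn(coeffs, norm="ortho")
```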
Recently, there has been explosive growth in mobile and embedded applications that use convolutional neural networks (CNNs). To alleviate their excessive computational demands, developers have traditionally resorted to cloud offloading, incurring high infrastructure costs and a strong dependence on networking conditions. At the other end, the emergence of powerful SoCs is progressively enabling on-device execution. Nonetheless, low- and mid-tier platforms still struggle to run state-of-the-art CNNs adequately. In this paper, we present DynO, a distributed inference framework that combines the best of both worlds to address several challenges, such as device heterogeneity, varying bandwidth, and multi-objective requirements. Key to achieving this are its novel CNN-specific data-packing method, which exploits the variability in precision needs across different parts of a CNN when onloading computation, and its novel scheduler, which jointly tunes the partition point and the precision of the transferred data at run time to adapt inference to its execution environment. Quantitative evaluation shows that DynO outperforms the current state of the art, improving throughput by over an order of magnitude compared with competing CNN offloading systems, with up to 60x less data transferred.
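A toy version of the precision-adaptive packing idea is sketched below: uniform quantization of the split-point tensor to a bitwidth the scheduler picks per transfer. DynO's actual packing format and scheduling policy are more involved; this only shows the precision knob.

```python
import numpy as np

def pack(features: np.ndarray, bits: int):
    """Uniformly quantize the split-point tensor to `bits` bits before transfer."""
    lo, hi = float(features.min()), float(features.max())
    levels = 2 ** bits - 1
    q = np.round((features - lo) / (hi - lo + 1e-12) * levels)
    return q.astype(np.uint8 if bits <= 8 else np.uint16), (lo, hi, bits)

def unpack(q: np.ndarray, meta) -> np.ndarray:
    """Receiver-side dequantization from the packed payload and metadata."""
    lo, hi, bits = meta
    return q.astype(np.float32) / (2 ** bits - 1) * (hi - lo) + lo
```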
Communication systems to date have primarily aimed at reliably communicating bit sequences. Such an approach yields efficient engineering designs that are agnostic to the meaning of the messages or to the goal that the message exchange is intended to achieve. Next-generation systems, however, can be enriched by folding message semantics and the goals of communication into their design. Moreover, these systems can be made aware of the context in which the communication takes place, providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from the early adaptations, through semantic-aware and task-oriented communications, covering the foundations, algorithms, and potential implementations. The focus is on approaches that use information theory to provide the foundations, as well as on the significant role of learning in semantic and task-aware communications.
State-of-the-art performance for many emerging edge applications is achieved by deep neural networks (DNNs). Often, these DNNs are location and time sensitive, and the parameters of a specific DNN must be delivered from an edge server to the edge device rapidly and efficiently to carry out time-sensitive inference tasks. In this paper, we introduce AirNet, a novel training and transmission method that allows efficient wireless delivery of DNNs under stringent transmit power and latency constraints. We first train the DNN with noise injection to counter the wireless channel noise. Then we employ pruning to reduce the network size to the available channel bandwidth, and perform knowledge distillation from a larger model to achieve satisfactory performance, despite pruning. We show that AirNet achieves significantly higher test accuracy compared to digital alternatives under the same bandwidth and power constraints. The accuracy of the network at the receiver also exhibits graceful degradation with channel quality, which reduces the requirement for accurate channel estimation. We further improve the performance of AirNet by pruning the network below the available bandwidth, and using channel expansion to provide better robustness against channel noise. We also benefit from unequal error protection (UEP) by selectively expanding more important layers of the network. Finally, we develop an ensemble training approach, which trains a whole spectrum of DNNs, each of which can be used at a different channel condition, resolving the otherwise impractical memory requirements.
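The analog delivery step can be pictured as below: the weights go over an AWGN channel with no digital coding and the receiver simply loads the noisy values; training with matching noise injection is what makes this tolerable. Power normalization and SNR handling are simplified assumptions here.

```python
import numpy as np

def transmit_over_awgn(weights, snr_db: float):
    """Send each weight array uncoded over an AWGN channel at the given SNR."""
    received = []
    for w in weights:
        signal_power = np.mean(w ** 2)
        sigma = np.sqrt(signal_power / 10 ** (snr_db / 10))
        received.append(w + np.random.normal(0.0, sigma, w.shape))
    return received  # the receiver loads these directly into the pruned DNN
```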
By exploiting the diversity of data samples, early-exit networks have recently emerged as a prominent neural network architecture for accelerating deep learning inference. However, the intermediate classifiers at the early exits introduce additional computational overhead, which is unfavorable for resource-constrained edge artificial intelligence (AI). In this paper, we propose an early-exit prediction mechanism to reduce the on-device computational overhead in device-edge co-inference systems supported by early-exit networks. Specifically, we design a low-complexity module, named the exit predictor, to guide clearly "hard" samples to bypass the computation of the early exits. Furthermore, considering varying communication bandwidths, we extend the early-exit prediction mechanism to latency-aware edge inference, which adapts the prediction thresholds of the exit predictor and the confidence thresholds of the early-exit network via a few simple regression models. Extensive experimental results demonstrate that the exit predictor achieves a better trade-off between accuracy and on-device computational overhead for early-exit networks. Moreover, compared with baseline methods, the proposed latency-aware edge inference method attains higher inference accuracy under different bandwidth conditions.
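A sketch of the resulting control flow, with all module names illustrative: a tiny predictor first flags "hard" samples; easy ones try the on-device early exit and stop if its confidence clears a threshold, while hard ones bypass it and go straight to the server.

```python
import numpy as np

def co_inference(x, exit_predictor, early_exit, edge_model, conf_threshold=0.8):
    if not exit_predictor(x):              # predicted "easy": try the early exit
        probs = early_exit(x)              # cheap on-device classifier
        if np.max(probs) >= conf_threshold:
            return int(np.argmax(probs))   # terminate on-device
    return int(np.argmax(edge_model(x)))   # "hard" or low confidence: offload
```

The latency-aware extension would then adapt both thresholds to the measured bandwidth via the regression models mentioned above.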
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
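A minimal FedAvg-style round, the canonical pattern behind FL as described above: each device trains on its own data and returns only weight updates, which the server averages, here weighted by local dataset size. `local_train` is an assumed stand-in for a few epochs of local SGD.

```python
import numpy as np

def federated_round(global_weights, client_datasets, local_train):
    updates, sizes = [], []
    for data in client_datasets:                      # raw data never leaves the device
        local_w = local_train(global_weights, data)   # e.g., a few local SGD epochs
        updates.append([lw - gw for lw, gw in zip(local_w, global_weights)])
        sizes.append(len(data))
    total = float(sum(sizes))
    return [gw + sum(n / total * u[i] for n, u in zip(sizes, updates))
            for i, gw in enumerate(global_weights)]   # weighted model averaging
```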
The success of deep neural networks (DNNs) depends heavily on computational resources. While DNNs are often deployed on cloud servers, there is a growing need to run DNNs on edge devices. Edge devices are typically limited in their computational resources, yet multiple edge devices are often deployed in the same environment and can communicate with each other reliably. In this work, we propose to facilitate the deployment of DNNs at the edge by allowing multiple users to collaborate during inference to improve their accuracy. Our mechanism, coined edge ensembles, is based on having diverse predictors at each device that form an ensemble of models during inference. To mitigate the communication overhead, the users share quantized features, and we propose a method for aggregating multiple decisions into a single inference rule. We analyze the latency induced by edge ensembles, showing that their performance gain comes at the cost of a minor delay under common assumptions about the communication network. Our experiments demonstrate that collaborative inference via edge ensembles equipped with compact DNNs substantially improves accuracy over letting each user infer locally, and can outperform a single centralized DNN larger than all the networks in the ensemble combined.
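A sketch of one possible aggregation rule, with the 8-bit quantizer as our assumption: each device shares a coarsely quantized probability vector from its own compact model, and the fused decision is the argmax of the average.

```python
import numpy as np

def quantize(p: np.ndarray, bits: int = 8) -> np.ndarray:
    levels = 2 ** bits - 1
    return np.round(p * levels) / levels              # coarse probabilities, fewer bits

def ensemble_predict(x, device_models) -> int:
    shared = [quantize(m(x)) for m in device_models]  # exchanged between devices
    return int(np.argmax(np.mean(shared, axis=0)))    # single fused inference rule
```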
This paper studies task-oriented communication for multi-device cooperative edge inference, in which a group of distributed low-end edge devices transmit the extracted features of local samples to a powerful edge server for inference. While cooperative edge inference can overcome the limited sensing capability of a single device, it substantially increases the communication overhead and may incur excessive latency. To enable low-latency cooperative inference, we propose a learning-based communication scheme that optimizes local feature extraction and distributed feature encoding in a task-oriented manner, i.e., removing data redundancy and transmitting the information essential for the downstream inference task rather than reconstructing the data samples at the edge server. Specifically, we leverage the information bottleneck (IB) principle to extract the task-relevant features at each edge device, and adopt a distributed information bottleneck (DIB) framework to formalize a single-letter characterization of the optimal rate-relevance trade-off for distributed feature encoding. To admit flexible control of the communication overhead, we extend the DIB framework to a distributed deterministic information bottleneck (DDIB) objective that explicitly incorporates the representational costs of the encoded features. Since IB-based objectives are computationally prohibitive for high-dimensional data, we adopt variational approximations to make the optimization problems tractable. To compensate for the potential performance loss caused by the variational approximations, we also develop a selective retransmission (SR) mechanism that identifies redundancy in the encoded features of multiple edge devices, achieving an additional reduction in communication overhead. Extensive experiments demonstrate that the proposed task-oriented communication scheme achieves a better rate-relevance trade-off than baseline methods.
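For reference, the standard single-source IB objective that the scheme builds on is shown below; the paper's DIB/DDIB formulations extend it to multiple distributed encoders and explicit representational costs.

```latex
% Information bottleneck: learn an encoding p(z|x) whose representation Z
% compresses the observation X while staying relevant to the task label Y;
% \beta trades rate (compression) against relevance.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```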
Tensor completion aims to recover the missing entries of a partially observed tensor by exploiting its low-rank structure, and has been applied to visual data recovery. In applications where data arrive sequentially, e.g., streaming video completion, the missing entries of the tensor need to be recovered dynamically in a streaming fashion. Traditional streaming tensor completion algorithms treat the entire visual data as one tensor, which may not work satisfactorily when the tensor subspace changes drastically along the temporal dimension, e.g., due to strong motion across video frames. In this paper, we develop a novel patch-tracking-based streaming tensor ring completion framework for visual data recovery. Given a newly arriving frame, small patches are tracked from the previous frame. Meanwhile, for each tracked patch, a patch tensor is constructed by stacking similar patches from the new frame. The patch tensors are then completed using a streaming tensor ring completion algorithm, and the incoming frame is recovered from the completed patch tensors. We propose a new patch-tracking strategy that can track patches accurately and efficiently despite missing data. Furthermore, a new streaming tensor ring completion algorithm is proposed, which can efficiently and accurately update the latent core tensors and complete the missing entries of the patch tensors. Extensive experimental results demonstrate the superior performance of the proposed algorithm compared with state-of-the-art batch and streaming tensor completion methods.
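For concreteness, the standard tensor ring model underlying the completion algorithm expresses each entry of an N-th order tensor as the trace of a product of core slices; low TR rank is what lets the observed entries constrain the missing ones.

```latex
% Tensor ring decomposition: \mathbf{G}_n(i_n) is the i_n-th lateral slice of
% the n-th core tensor, with ring closure r_{N+1} = r_1.
\mathcal{T}(i_1, i_2, \dots, i_N)
  = \operatorname{Tr}\!\left( \mathbf{G}_1(i_1)\, \mathbf{G}_2(i_2) \cdots \mathbf{G}_N(i_N) \right),
\qquad \mathbf{G}_n(i_n) \in \mathbb{R}^{r_n \times r_{n+1}} .
```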
With the rapid development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are transforming people's lifestyles and improving production efficiency. However, the massive amounts of computation and data generated at the network edge have become a major bottleneck, and the traditional cloud-based computing paradigm cannot meet the requirements of real-time processing tasks. To address these issues, edge intelligence (EI), which embeds AI model training and inference capabilities into the network edge, has become a cutting-edge direction in the AI field. Moreover, collaborative DNN inference among the cloud, the edge, and end devices provides a promising way to enhance EI. Nevertheless, EI-oriented collaborative DNN inference is still at an early stage, lacking a systematic classification and discussion of existing research efforts. Therefore, we conduct a comprehensive survey of recent research on EI-oriented collaborative DNN inference. In this paper, we first review the background and motivation of EI. Then, we classify four typical DNN inference paradigms for EI and analyze their characteristics and key technologies. Finally, we summarize the current challenges of collaborative DNN inference, discuss future development trends, and provide future research directions.
In recent years, image and video delivery systems have begun integrating deep learning super-resolution (SR) approaches, leveraging their unprecedented visual enhancement capabilities while reducing reliance on networking conditions. Nevertheless, deploying these solutions on mobile devices remains an open challenge as SR models are excessively demanding with respect to workload and memory footprint. Despite recent progress on on-device SR frameworks, existing systems either penalize visual quality, lead to excessive energy consumption or make inefficient use of the available resources. This work presents NAWQ-SR, a novel framework for the efficient on-device execution of SR models. Through a novel hybrid-precision quantization technique and a runtime neural image codec, NAWQ-SR exploits the multi-precision capabilities of modern mobile NPUs in order to minimize latency, while meeting user-specified quality constraints. Moreover, NAWQ-SR selectively adapts the arithmetic precision at run time to equip the SR DNN's layers with wider representational power, improving visual quality beyond what was previously possible on NPUs. Altogether, NAWQ-SR achieves an average speedup of 7.9x, 3x and 1.91x over the state-of-the-art on-device SR systems that use heterogeneous processors (MobiSR), CPU (SplitSR) and NPU (XLSR), respectively. Furthermore, NAWQ-SR delivers an average of 3.2x speedup and 0.39 dB higher PSNR over status-quo INT8 NPU designs, but most importantly mitigates the negative effects of quantization on visual quality, setting a new state-of-the-art in the attainable quality of NPU-based SR.
As data generation increasingly takes place on devices without a wired connection, machine learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes in the literature: analog over-the-air computation, and digital radio resource management optimized for ML. This survey provides a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
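A toy illustration of the first theme, analog over-the-air computation: when devices transmit pre-scaled values simultaneously, the channel's superposition itself computes their sum in a single channel use. Perfect channel inversion is assumed here; real schemes need truncation and power control.

```python
import numpy as np

def over_the_air_sum(values, channel_gains, noise_std=0.01):
    tx = [v / h for v, h in zip(values, channel_gains)]  # pre-invert known fading
    rx = sum(h * t for h, t in zip(channel_gains, tx))   # superposition in the air
    return rx + np.random.normal(0.0, noise_std)         # noisy sum of all values
```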
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
Although mission-critical applications require the use of deep neural networks (DNNs), their continuous execution on mobile devices leads to a significant increase in energy consumption. While edge offloading can reduce energy consumption, erratic patterns in channel quality, network load, and edge server load can cause severe disruption of the system's key operations. An alternative approach, called split computing, generates compressed representations within the model (called "bottlenecks") to reduce bandwidth usage and energy consumption. Prior work has proposed approaches that introduce additional layers, to the detriment of energy consumption and latency. We therefore propose a new framework called BottleFit, which, in addition to targeted DNN architecture modifications, includes a novel training strategy that achieves high accuracy even at strong compression rates. We apply BottleFit to image classification and show that it achieves 77.1% data compression with up to 0.6% accuracy loss on the ImageNet dataset, whereas state-of-the-art approaches such as SPINN lose up to 6% accuracy. We experimentally measure the power consumption and latency of an image classification application running on an NVIDIA Jetson Nano board (GPU-based) and a Raspberry Pi board (GPU-less). We show that BottleFit reduces power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.) local computing, and by 37% and 55% w.r.t. edge offloading. We also compare BottleFit with state-of-the-art autoencoder-based approaches and show that (i) BottleFit reduces power consumption and execution time by up to 54% and 44% on the Jetson and by 40% and 62% on the Raspberry Pi, respectively; and (ii) the size of the head model executed on the mobile device is 83 times smaller. The code repository will be published for full reproducibility of the results.
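The structural idea can be sketched as below: a thin 1x1-convolution encoder compresses the split-point tensor before transmission, and a matching decoder restores its width on the server. Channel counts are illustrative; BottleFit's actual architecture modifications and training strategy go beyond this.

```python
import tensorflow as tf

def make_bottleneck(c_in: int, c_code: int = 8):
    encoder = tf.keras.layers.Conv2D(c_code, 1, name="bn_encoder")  # device side
    decoder = tf.keras.layers.Conv2D(c_in, 1, name="bn_decoder")    # server side
    return encoder, decoder

enc, dec = make_bottleneck(c_in=96)          # e.g., a 96-channel split tensor
feat = tf.random.normal([1, 28, 28, 96])
restored = dec(enc(feat))                    # 96 -> 8 channels on the link -> 96
```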
This paper proposes a neural architecture search (NAS) method for split computing. Split computing is an emerging machine learning inference technique that addresses the privacy and latency challenges of deploying deep learning in IoT systems. In split computing, a neural network model is partitioned and processed cooperatively by an edge server and an IoT device over the network. Hence, the architecture of the neural network model significantly affects the communication payload size, model accuracy, and computational load. In this paper, we address the challenge of optimizing the neural network architecture for split computing. To this end, we propose NASC, which jointly explores the optimal model architecture and split point so as to meet the latency requirement (i.e., the total latency of computation and communication is kept below a certain threshold). NASC employs one-shot NAS, which requires no repeated model training, for a computationally efficient architecture search. Our performance evaluation using benchmark data from hardware (HW)-NAS-Bench shows that the proposed NASC can improve the trade-off between communication latency and model accuracy, i.e., reduce latency by approximately 40-60% from the baseline, with slight accuracy degradation.
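The joint search can be summarized by the sketch below, where the latency and accuracy estimators (e.g., lookups into HW-NAS-Bench-style tables) are assumed given: keep the (architecture, split point) pairs that meet the deadline and return the most accurate survivor.

```python
def search(candidates, latency_ms_of, accuracy_of, deadline_ms):
    """candidates: iterable of (architecture, split_point) pairs."""
    feasible = [(arch, split) for arch, split in candidates
                if latency_ms_of(arch, split) <= deadline_ms]  # compute + comm
    return max(feasible, key=lambda c: accuracy_of(*c), default=None)
```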
Split computing has emerged as a recent paradigm for implementing DNN-based AI workloads, in which a DNN model is split into two parts, one executed on the mobile/client device and the other on an edge server (or the cloud). Data compression is applied to the intermediate tensor of the DNN that needs to be transmitted, to address the challenge of optimizing the rate-accuracy-complexity trade-off. Existing split computing approaches adopt ML-based data compression, but require the parameters of the entire DNN model, or a large portion of it, to be retrained for different compression levels. This incurs a high computational and storage burden: training a full DNN model from scratch is computationally demanding, maintaining multiple copies of the DNN parameters increases storage requirements, and switching the full set of weights during inference increases memory bandwidth. In this paper, we propose an approach that addresses all of these challenges. It involves the systematic design and training of bottleneck units, which are simple, low-cost neural networks that can be inserted at the split point. Our approach is remarkably lightweight during both training and inference, and highly effective, at a small fraction of the compute and storage overhead of existing methods.
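A sketch of the lightweight training regime this implies, under our assumption of a frozen pretrained backbone: only the bottleneck unit between the (fixed) head and tail is trained, so each new compression level costs one small unit rather than another full copy of the DNN.

```python
import tensorflow as tf

def train_bottleneck_unit(head: tf.keras.Model, unit: tf.keras.Model,
                          tail: tf.keras.Model) -> tf.keras.Model:
    head.trainable = False                 # backbone parameters stay fixed
    tail.trainable = False
    inp = tf.keras.Input(shape=head.input_shape[1:])
    model = tf.keras.Model(inp, tail(unit(head(inp))))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model                           # fit() now updates only the unit
```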
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
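As a compact illustration of two of the surveyed technique families, the sketch below applies global magnitude pruning followed by symmetric uniform post-training quantization to a weight array; the sparsity and bitwidth are illustrative defaults.

```python
import numpy as np

def prune(w: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    threshold = np.quantile(np.abs(w), sparsity)      # zero out the smallest 90%
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize(w: np.ndarray, bits: int = 8) -> np.ndarray:
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale                # symmetric uniform quantizer

w_compressed = quantize(prune(np.random.randn(4096)))
```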
Advanced wearable devices increasingly incorporate high-resolution multi-camera systems. Since the state-of-the-art neural networks for processing the resulting image data are computationally demanding, there has been growing interest in leveraging fifth-generation (5G) wireless connectivity and mobile edge computing to offload this processing to the cloud. To assess this possibility, this paper presents a detailed simulation and evaluation of 5G wireless offloading for object detection in a powerful new smart wearable called VIS4ION, designed for the blind and visually impaired (BVI). The current VIS4ION system is an instrumented book bag with high-resolution cameras, vision processing, and haptic and audio feedback. The paper considers uploading the camera data to a mobile edge cloud to perform real-time object detection and transmitting the detection results back to the wearable. To determine the video requirements, the paper evaluates the effect of video bit rate and resolution on object detection accuracy and range. A new street-scene dataset with labeled objects relevant to BVI navigation is leveraged for the analysis. The vision evaluation is combined with a detailed full-stack wireless network simulation to determine the distributions of throughput and latency, using real navigation paths and ray tracing based on new high-resolution 3D models of an urban environment. For comparison, the wireless simulation considers both a standard 4G Long Term Evolution (LTE) carrier and a high-rate 5G millimeter-wave (mmWave) carrier. The work thus provides a thorough and realistic assessment of edge computing with mmWave connectivity for an application with both high-bandwidth and low-latency requirements.