Practical applications employing deep learning must guarantee inference quality. However, we found that the inference quality of state-of-the-art and state-of-the-practice models in practical applications follows a long-tail distribution. In the real world, many tasks have strict requirements for the quality of deep learning inference, such as safety-critical and mission-critical tasks. Fluctuations in inference quality seriously affect these applications, and the quality at the tail of the distribution may lead to severe consequences. State-of-the-art and state-of-the-practice models with outstanding inference quality when designed and trained under loose constraints still suffer poor inference quality under constraints of practical significance. On the one hand, neural network models must be deployed on complex systems with limited resources. On the other hand, safety-critical and mission-critical tasks must meet additional metric constraints while ensuring high inference quality. We coin a new term, ``tail quality,'' to characterize this essential requirement and challenge. We also propose a new metric, ``X-Critical-Quality,'' to measure inference quality under such constraints. This article reveals factors contributing to the failure of state-of-the-art and state-of-the-practice algorithms and systems in real scenarios, and it calls for establishing innovative methodologies and tools to tackle this enormous challenge.
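As a rough illustration of measuring quality under constraints, the sketch below scores a prediction as successful only if it is both correct and served within a latency deadline. The function name, deadline, and synthetic workload are illustrative assumptions, not the paper's X-Critical-Quality definition.

```python
import numpy as np

def constrained_quality(correct, latencies_ms, deadline_ms):
    """Fraction of requests that are both correct and served within the
    deadline; late answers count as failures regardless of correctness.

    Illustrative stand-in for a 'quality under constraints' metric, not
    the paper's exact X-Critical-Quality definition.
    """
    correct = np.asarray(correct, dtype=bool)
    latencies_ms = np.asarray(latencies_ms, dtype=float)
    return float(np.mean(correct & (latencies_ms <= deadline_ms)))

# Example: 1000 requests, ~99% accurate, but with a heavy latency tail.
rng = np.random.default_rng(0)
correct = rng.random(1000) < 0.99
latencies = rng.lognormal(mean=2.0, sigma=0.8, size=1000)  # milliseconds
print(constrained_quality(correct, latencies, deadline_ms=15.0))
```

Under such a metric, a model with excellent average accuracy can still score poorly once the tail of its latency distribution is taken into account.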
Deep neural networks (DNNs) are widely used in autonomous driving due to their high accuracy in perception, decision-making, and control. In safety-critical systems such as autonomous vehicles, executing tasks like sensing and perception in real time is vital to the vehicle's safety, which requires the execution time of such applications to be predictable. However, non-negligible time variation is observed in DNN inference. Current DNN inference studies either ignore the time-variation problem or rely on a scheduler to handle it. None of the existing work explains the root causes of DNN inference time variation. Understanding the time variation of DNN inference is therefore a fundamental challenge for real-time scheduling in autonomous driving. In this work, we analyze the time variation of DNN inference from six perspectives: data, I/O, model, runtime, hardware, and the end-to-end perception system, and we derive six insights toward understanding it.
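A minimal sketch of how such time variation can be observed in practice: repeated timed inferences after a warm-up phase, summarized by tail percentiles. The model choice and CPU-only timing are assumptions for illustration, not the paper's measurement methodology (GPU timing would additionally need synchronization).

```python
import time
import numpy as np
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # any perception model works here
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # Warm-up runs exclude one-time costs (allocator, caches, lazy init).
    for _ in range(20):
        model(x)

    samples = []
    for _ in range(500):
        t0 = time.perf_counter()
        model(x)
        samples.append((time.perf_counter() - t0) * 1e3)  # ms

p = np.percentile(samples, [50, 90, 99, 99.9])
print(f"p50={p[0]:.2f}ms p90={p[1]:.2f}ms p99={p[2]:.2f}ms p99.9={p[3]:.2f}ms")
```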
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), has begun to attract widespread attention. Nevertheless, only a few loosely related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances in EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., its definition and architectures) remain ambiguous and are neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Deep neural networks (DNNs) have become an essential component of many application domains, including web-based services. These services require high throughput and (near) real-time capabilities, e.g., to respond or react to users' requests or to process a stream of incoming data on time. However, the trend in DNN design is toward larger models with many layers and parameters to achieve more accurate results. Although these models are often pre-trained, the computational complexity of such large models remains significant, hindering low inference latency. Implementing a caching mechanism is a typical system-engineering solution for speeding up a service's response time. However, traditional caching is often unsuitable for DNN-based services. In this paper, we propose an end-to-end automated solution to improve the performance of DNN-based services in terms of computational complexity and inference latency. Our caching method adopts the ideas of self-distillation of DNN models and early exits. The proposed solution is an automated online layer-caching mechanism that allows inference to exit the large model early if a cached model in one of the early exits is sufficiently confident. One of the main contributions of this paper is that we implement the idea as an online cache, meaning the cached models do not require access to training data and are built solely from the incoming data at runtime, making the approach suitable for applications that use pre-trained models. Our experimental results on two downstream tasks (face and object classification) show that, on average, caching can reduce the computational complexity of these services by up to 58% (in terms of FLOP count) and improve their inference latency by up to 46%, with little to no reduction in accuracy.
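A minimal sketch of the confidence-gated early exit this abstract describes: a cheap exit head answers when it is sufficiently confident, and only otherwise do the remaining layers run. The two-stage architecture, the threshold, and the single-input gate are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    """Backbone with one intermediate exit head: if the exit's softmax
    confidence clears the threshold, skip the remaining (expensive)
    layers. A simplified stand-in for the paper's online layer caching."""

    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.exit1 = nn.Linear(32, num_classes)   # cheap "cached" head
        self.stage2 = nn.Sequential(nn.Linear(32, 256), nn.ReLU())
        self.final = nn.Linear(256, num_classes)  # full-model head
        self.threshold = threshold

    def forward(self, x):
        # Gate assumes single-input inference (batch size 1).
        h = self.stage1(x)
        early = self.exit1(h)
        if F.softmax(early, dim=1).max() >= self.threshold:
            return early, "early"                 # confident: exit now
        return self.final(self.stage2(h)), "full"

model = EarlyExitNet().eval()
with torch.no_grad():
    logits, path = model(torch.randn(1, 3, 32, 32))
print(path)
```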
Following the development of digitization, a growing number of large Original Equipment Manufacturers (OEMs) are adopting computer vision or natural language processing in a wide range of applications, such as anomaly detection and quality inspection in plants. Deployment of such systems is becoming an extremely important topic. Our work starts with the least-automated deployment technologies for machine learning systems, proceeds through several iterations of updates, and ends with a comparison of automated deployment techniques. The objective is, on the one hand, to compare the advantages and disadvantages of various technologies in theory and practice, so as to help later adopters avoid common mistakes when implementing actual use cases and thereby choose a better strategy for their own enterprises. On the other hand, it is to raise awareness of the evaluation framework for the deployment of machine learning systems, and to promote more comprehensive and useful evaluation metrics (e.g., Table 2) rather than a focus on a single factor (e.g., company cost). This is especially important for decision-makers in the industry.
In recent years, deep learning (DL) models have demonstrated remarkable achievements on non-trivial tasks such as speech recognition and natural language understanding. One of the significant contributors to this success is the proliferation of end devices, which act as a catalyst by providing data for data-hungry DL models. However, computing DL training and inference remains a major challenge. Usually, central cloud servers are used for the computation, but this opens up other significant challenges, such as high latency, increased communication costs, and privacy concerns. To mitigate these drawbacks, considerable efforts have been made to push the processing of DL models to edge servers. Moreover, the confluence of DL and edge computing has given rise to edge intelligence (EI). This survey paper focuses primarily on the fifth level of EI, called the all in-edge level, where DL training and inference (deployment) are performed solely by edge servers. All in-edge is suitable when the end devices have low computing resources, e.g., Internet-of-Things devices, and when other requirements such as latency and communication cost are important, as in mission-critical applications, e.g., health care. Firstly, this paper presents all in-edge computing architectures, including centralized, decentralized, and distributed ones. Secondly, it presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers. Thirdly, model adaptation techniques based on model compression and conditional computation are described, because standard cloud-based DL deployment cannot be directly applied all in-edge due to limited computational resources. Fourthly, this paper discusses eleven key performance metrics to efficiently evaluate the performance of DL at the all in-edge level. Finally, several open research challenges in the area of all in-edge are presented.
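To make the split-learning enabler concrete, here is a minimal sketch of one training step in which only cut-layer activations and their gradients cross the device-server boundary; the layer shapes, optimizer, and in-process "transmission" are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Split learning sketch: the end device holds the first layers, the edge
# server holds the rest; only cut-layer activations and their gradients
# would cross the network in a real deployment.
device_part = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
server_part = nn.Sequential(nn.Linear(32, 10))
opt = torch.optim.SGD(list(device_part.parameters()) +
                      list(server_part.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))

smashed = device_part(x)                  # device-side forward
sent = smashed.detach().requires_grad_()  # "transmitted" activations
loss = loss_fn(server_part(sent), y)      # server-side forward + loss
opt.zero_grad()
loss.backward()                           # server backward fills sent.grad
smashed.backward(sent.grad)               # gradient "returned" to device
opt.step()
print(float(loss))
```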
Recently, there has been explosive growth of mobile and embedded applications using convolutional neural networks (CNNs). To alleviate their excessive computational demands, developers have traditionally resorted to cloud offloading, which incurs high infrastructure costs and a strong dependence on network conditions. On the other hand, the advent of powerful SoCs is gradually enabling on-device execution. Nevertheless, low- and mid-tier platforms still struggle to run state-of-the-art CNNs adequately. In this paper, we present DynO, a distributed inference framework that combines the best of both worlds to address several challenges, such as device heterogeneity, varying bandwidth, and multi-objective requirements. Key to enabling this is its novel CNN-specific data-packing method, which exploits the variability in precision needs of different parts of the CNN when onloading computation, together with its novel scheduler, which jointly tunes the partition point and the transferred data precision at run time to adapt inference to its execution environment. Quantitative evaluation shows that DynO outperforms the current state of the art, improving throughput by over an order of magnitude over competing CNN offloading systems while transferring up to 60x less data.
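As a toy illustration of variable-precision packing of intermediate activations (the general idea behind such data packing, not DynO's actual codec), the sketch below linearly quantizes a feature map to a configurable bit width before "transmission":

```python
import numpy as np

def pack(t, bits):
    """Linearly quantize a float tensor to `bits` bits for transmission.
    A toy stand-in for CNN-specific variable-precision packing."""
    lo, hi = float(t.min()), float(t.max())
    levels = (1 << bits) - 1
    q = np.round((t - lo) / max(hi - lo, 1e-8) * levels).astype(np.uint16)
    return q, lo, hi, levels

def unpack(q, lo, hi, levels):
    return q.astype(np.float32) / levels * (hi - lo) + lo

feat = np.random.randn(64, 28, 28).astype(np.float32)  # intermediate activations
q, lo, hi, levels = pack(feat, bits=6)
err = np.abs(unpack(q, lo, hi, levels) - feat).max()
print(f"6-bit packing, max abs error {err:.4f}")
```

Parts of the network that are more tolerant of quantization noise can then be packed with fewer bits, trading reconstruction error for transfer volume.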
Radio access network (RAN) technologies continue to witness massive growth, with Open RAN gaining the most recent momentum. In the O-RAN specifications, the RAN intelligent controller (RIC) serves as an automation host. This article introduces principles for machine learning (ML), in particular reinforcement learning (RL), relevant to the O-RAN stack. Furthermore, we review state-of-the-art research in wireless networks and project it onto the RAN framework and the hierarchy of the O-RAN architecture. We provide a taxonomy of the challenges faced by ML/RL models throughout the development lifecycle: from system specification to production deployment (data acquisition, model design, testing, and management, etc.). To address these challenges, we integrate a set of existing MLOps principles with the unique characteristics that arise when RL agents are considered. This paper discusses a systematic lifecycle pipeline for model development, testing, and validation, termed RLOps. We discuss all fundamental parts of RLOps, including model specification, development and distillation, production environment serving, operations monitoring, safety/security, and data engineering platforms. Based on these principles, we propose best practices to achieve an automated and reproducible model development process.
This paper explores the environmental impact of the super-linear growth trends of AI from a holistic perspective, spanning data, algorithms, and system hardware. We characterize the carbon footprint of AI computing by examining the model development cycle across industry-scale machine learning use cases, while taking the life cycle of system hardware into account. Taking a step further, we capture the operational and manufacturing carbon footprint of AI computing and present an end-to-end analysis of what hardware-software design and at-scale optimization can do to reduce the overall carbon footprint of AI. Based on industry experience and lessons learned, we share key challenges and chart out important development directions across the many dimensions of AI. We hope the key messages and insights presented in this paper can inspire the community to advance the field of AI in an environmentally responsible manner.
Edge computing is a paradigm that shifts data-processing services toward the edge of the network, where the data are generated. While such an architecture provides faster processing and response, among other benefits, it also raises critical security issues and challenges that must be addressed. This paper discusses the security threats and vulnerabilities emerging in edge network architectures, from the hardware layer to the system layer. We further discuss privacy and regulatory-compliance challenges in such networks. Finally, we argue that a holistic approach is needed to analyze the security posture of edge networks, one that must take the knowledge of each layer into account.
Advanced wearable devices are increasingly incorporating high-resolution multi-camera systems. As state-of-the-art neural networks for processing the resulting image data are computationally demanding, there has been growing interest in leveraging fifth-generation (5G) wireless connectivity and mobile edge computing to offload this processing to the cloud. To assess this possibility, this paper presents a detailed simulation and evaluation of 5G wireless offloading for object detection within a powerful new smart wearable called VIS4ION, for the blind and visually impaired (BVI). The current VIS4ION system is an instrumented backpack with high-resolution cameras, vision processing, and haptic and audio feedback. The paper considers uploading the camera data to a mobile edge cloud to perform real-time object detection and transmitting the detection results back to the wearable. To determine the video requirements, the paper evaluates the impact of video bitrate and resolution on object-detection accuracy and range. A new street-scene dataset with labeled objects relevant to BVI navigation is used for the analysis. The vision evaluation is combined with a detailed full-stack wireless network simulation to determine the distribution of throughputs and latencies, with realistic navigation paths and ray tracing from new high-resolution 3D models of an urban environment. For comparison, the wireless simulation considers both a standard 4G Long Term Evolution (LTE) carrier and a high-rate 5G millimeter-wave (mmWave) carrier. The work thus provides a thorough and realistic assessment of edge computing with mmWave connectivity in an application with both high-bandwidth and low-latency requirements.
There has been an increase in the deployment of deep learning (DL) software systems in real-world applications. Usually, DL models are developed and trained using DL frameworks that each have their own internal mechanisms/formats to represent and train DL models, and usually those formats cannot be recognized by other frameworks. Moreover, trained models are usually deployed in environments different from the one in which they were developed. To solve the interoperability issue and make DL models compatible with different frameworks/environments, some exchange formats for DL models, such as ONNX and CoreML, have been introduced. However, ONNX and CoreML have never been empirically evaluated by the community to reveal the prediction accuracy, performance, and robustness of models after conversion. Poor accuracy or non-robust behavior of converted models may lead to poor quality of deployed DL-based software systems. In this paper, we conduct the first empirical study to evaluate ONNX and CoreML for converting trained DL models. In our systematic approach, two popular DL frameworks, Keras and PyTorch, are used to train five widely used DL models on three popular datasets. The trained models are then converted to ONNX and CoreML and transferred to two runtime environments designated for those formats, to be evaluated. We investigate the prediction accuracy before and after conversion. Our results reveal that the prediction accuracy of converted models is at the same level as the originals. The performance (time cost and memory consumption) of converted models is also studied. The size of the models is reduced after conversion, which can lead to optimized DL-based software deployment. Converted models are generally assessed as being at the same level as the originals. However, the obtained results show that CoreML models are more vulnerable to adversarial attacks compared to ONNX.
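For readers unfamiliar with such conversions, a minimal sketch of the PyTorch-to-ONNX path this study exercises, checking that the converted model predicts at the same level as the original. The model choice, file name, and single-input equivalence check are illustrative, not the study's full protocol.

```python
import numpy as np
import torch
import torchvision.models as models
import onnxruntime as ort

# Export a PyTorch model to ONNX, then compare the original and the
# converted model on the same input.
model = models.mobilenet_v2(weights=None).eval()
x = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, x, "model.onnx",
                  input_names=["input"], output_names=["logits"])

sess = ort.InferenceSession("model.onnx")
ort_out = sess.run(None, {"input": x.numpy()})[0]
with torch.no_grad():
    torch_out = model(x).numpy()

print("max abs diff:", np.abs(ort_out - torch_out).max())
print("same top-1:", ort_out.argmax() == torch_out.argmax())
```

Small numerical differences are expected; what the study measures is whether such differences change predictions, performance, or robustness at scale.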
Deploying deep neural networks (DNNs) on IoT and mobile devices is a challenging task due to their limited computational resources. Thus, demanding tasks are often entirely offloaded to edge servers, which can accelerate inference; however, this also causes communication costs and evokes privacy concerns. In addition, this approach leaves the computational capacity of end devices unused. Split computing is a paradigm in which a DNN is split into two sections: the first section is executed on the end device, and its output is transmitted to the edge server, which executes the final section. Here, we introduce dynamic split computing, where the optimal split location is dynamically selected based on the state of the communication channel. By using natural bottlenecks that already exist in modern DNN architectures, dynamic split computing avoids retraining and hyperparameter optimization and has no negative impact on the final accuracy of the DNN. Through extensive experiments, we show that dynamic split computing achieves faster inference in edge computing environments where the data rate and server load vary over time.
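A minimal sketch of the selection logic: given candidate splits at natural bottlenecks, pick the one minimizing estimated end-to-end latency under the current data rate and server load. All latency and size numbers are made-up placeholders, not measurements from the paper.

```python
# Candidate splits at natural bottlenecks of a hypothetical DNN:
# (name, device-side compute in ms, bottleneck payload in bytes,
#  server-side compute in ms).
candidates = [
    ("after_block2", 12.0, 200_000, 30.0),
    ("after_block4", 25.0,  50_000, 18.0),
    ("full_on_device", 80.0,      0,  0.0),
]

def best_split(bandwidth_bps, server_load_factor=1.0):
    """Pick the split minimizing device + transfer + server latency."""
    def total_ms(c):
        _, dev_ms, nbytes, srv_ms = c
        tx_ms = nbytes * 8 / bandwidth_bps * 1e3
        return dev_ms + tx_ms + srv_ms * server_load_factor
    return min(candidates, key=total_ms)

for bw in (1e6, 20e6, 100e6):  # 1 Mbps .. 100 Mbps
    print(f"{bw/1e6:>5.0f} Mbps -> {best_split(bw)[0]}")
```

Re-evaluating this choice as channel state changes is what makes the split "dynamic"; because the candidate points are existing bottlenecks, no retraining is needed.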
Deep neural networks (DNNs) have achieved great success in various machine learning (ML) applications, delivering high-quality inference solutions in computer vision, natural language processing, and virtual reality, among others. However, DNN-based ML applications also bring greatly increased computation and storage requirements, which are particularly challenging for embedded systems with limited compute/storage resources, tight power budgets, and small form factors. Challenges also come from diverse application-specific requirements, including real-time responses, high-throughput performance, and reliable inference accuracy. To address these challenges, we introduce a series of effective design methodologies, including efficient ML model design, customized hardware accelerator design, and hardware/software co-design strategies, to enable efficient ML applications on embedded systems.
With the vigorous development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are changing people's lifestyles and production efficiency. However, the huge amount of computation and data generated at the network edge has become a major bottleneck, and the traditional cloud-based computing mode cannot meet the requirements of real-time processing tasks. To solve these problems, edge intelligence (EI), which embeds AI model training and inference capabilities into the network edge, has become a cutting-edge direction in the field of AI. Furthermore, collaborative DNN inference among the cloud, edge, and end devices provides a promising way to enhance EI. Nevertheless, EI-oriented collaborative DNN inference is still at an early stage, lacking a systematic classification and discussion of existing research efforts. We have therefore conducted a comprehensive survey of recent studies on EI-oriented collaborative DNN inference. In this paper, we first review the background and motivation of EI. Then, we classify four typical collaborative DNN inference paradigms for EI and analyze their characteristics and key technologies. Finally, we summarize the current challenges of collaborative DNN inference, discuss future development trends, and provide future research directions.
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final, and much-anticipated, cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
Automated driving systems (ADS) open up a new domain for the automotive industry and offer new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching higher levels of autonomy. This paper assesses the influence and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys solutions for inclement weather conditions. State-of-the-art techniques for perception enhancement with respect to each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, as well as the weather-condition coverage of currently available datasets, simulators, and experimental facilities with weather chambers, are distinctly sorted out. By pointing out the major weather problems the autonomous driving field is currently facing and reviewing both the hardware and computer-science solutions of recent years, this survey outlines the obstacles and directions of ADS development with respect to adverse weather driving conditions.
With the popularity of the Internet of Things (IoT), edge computing, and cloud computing, more and more stream analytics applications are being developed on top of IoT sensing data, including real-time trend prediction and object detection. One popular type of stream analytics is recurrent neural network (RNN)-based deep learning models for time-series or sequence data prediction and forecasting. Unlike traditional analytics, which assumes that data are available in advance and will not change, stream analytics deals with data that are generated continuously, and the data trend/distribution may change (a.k.a. concept drift), causing prediction/forecasting accuracy to degrade over time. Another challenge is finding the best resource provisioning for stream analytics to achieve good overall latency. In this paper, we study how to best leverage edge and cloud resources to achieve better accuracy and latency for stream analytics using an RNN model called long short-term memory (LSTM). We propose a novel edge-cloud integrated framework for hybrid stream analytics that supports low-latency inference on the edge and high-capacity training on the cloud. To achieve flexible deployment, we study different approaches to deploying our hybrid learning framework, including edge-centric, cloud-centric, and edge-cloud integrated. Furthermore, our hybrid learning framework can adaptively combine the inference results of an LSTM model pre-trained on historical data and another LSTM model periodically retrained with the latest data. Using real-world and simulated stream datasets, our experiments show that the proposed edge-cloud deployment is the best of all three deployment types in terms of latency. For accuracy, the experiments show that our dynamic learning approach performs the best among all learning approaches across all three concept-drift scenarios.
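A toy sketch of the adaptive-combination idea, assuming stub predictors in place of the two LSTMs: each model is weighted by its inverse error over a sliding window, so the periodically retrained model dominates after drift. The weighting rule and the synthetic drifted stream are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
pretrained = lambda x: 0.5 * x  # stand-in for the LSTM trained on history
retrained = lambda x: 0.9 * x   # stand-in for the LSTM retrained recently

window = 50
err_pre, err_re = [], []
for t in range(500):
    x = rng.normal()
    truth = 0.9 * x + rng.normal(scale=0.05)  # stream after concept drift
    # Inverse mean error over the recent window decides each model's weight.
    w_pre = 1.0 / (np.mean(err_pre[-window:]) + 1e-6) if err_pre else 1.0
    w_re = 1.0 / (np.mean(err_re[-window:]) + 1e-6) if err_re else 1.0
    combined = (w_pre * pretrained(x) + w_re * retrained(x)) / (w_pre + w_re)
    err_pre.append(abs(pretrained(x) - truth))
    err_re.append(abs(retrained(x) - truth))

print(f"final weights: pretrained={w_pre:.1f}, retrained={w_re:.1f}")
```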
Machine learning (ML) has provided promising results in various applications and domains in recent years. However, in many cases, qualities such as reliability or even safety need to be ensured. To this end, one important aspect is to determine whether an ML component is deployed in situations that are appropriate for its application scope. For components whose environments are open and variable, such as those found in autonomous vehicles, it is therefore important to monitor their operational situation to determine its distance from the scope the ML component was trained for. If this distance is deemed too large, the application may choose to consider the ML component's outcome unreliable and switch to an alternative, e.g., using human-operator input instead. SafeML is a model-agnostic approach for performing such monitoring, using distance measures based on statistical testing of the training and operational datasets. Limitations in setting up SafeML properly include the lack of a systematic approach for determining, for a given application, how many operational samples are needed to yield reliable distance information, as well as for determining an appropriate distance threshold. In this work, we address these limitations by providing a practical approach and demonstrating its use on the well-known traffic-sign-recognition problem, as well as on an example using the CARLA open-source automotive simulator.
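A minimal sketch of a SafeML-style monitor under stated assumptions: per-feature two-sample Kolmogorov-Smirnov statistics between training and operational data, averaged and compared against a threshold. The aggregation rule and the fixed threshold here are illustrative; determining such parameters systematically is precisely what the paper addresses.

```python
import numpy as np
from scipy.stats import ks_2samp

def dist_to_training(train_features, op_features):
    """Mean per-feature Kolmogorov-Smirnov statistic between training
    and operational feature distributions (illustrative aggregation)."""
    stats = [ks_2samp(train_features[:, j], op_features[:, j]).statistic
             for j in range(train_features.shape[1])]
    return float(np.mean(stats))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(5000, 8))
in_scope = rng.normal(0.0, 1.0, size=(300, 8))   # matches training scope
shifted = rng.normal(1.5, 1.0, size=(300, 8))    # out-of-scope operation

THRESHOLD = 0.2  # would be calibrated per application, as the paper discusses
for name, op in [("in-scope", in_scope), ("shifted", shifted)]:
    d = dist_to_training(train, op)
    print(f"{name}: distance={d:.3f} -> {'reject' if d > THRESHOLD else 'accept'}")
```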
Comparing different AutoML frameworks is notoriously challenging and often done incorrectly. We introduce an open and extensible benchmark that follows best practices and avoids common mistakes when comparing AutoML frameworks. We conduct a thorough comparison of 9 well-known AutoML frameworks across 71 classification and 33 regression tasks. The differences between the AutoML frameworks are explored with a multi-faceted analysis, evaluating model accuracy, its trade-offs with inference time, and framework failures. We also use Bradley-Terry trees to discover subsets of tasks where the relative AutoML framework rankings differ. The benchmark comes with an open-source tool that integrates with many AutoML frameworks and automates the empirical evaluation process end to end: from framework installation and resource allocation to in-depth evaluation. The benchmark uses public datasets, can easily be extended with other AutoML frameworks and tasks, and has a website with up-to-date results.
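For intuition about the Bradley-Terry analysis, a minimal fit of Bradley-Terry strengths from a pairwise win matrix using the classic minorization-maximization updates; the win counts below are made-up, not benchmark results.

```python
import numpy as np

# wins[i, j] = number of tasks on which framework i beat framework j.
wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]], dtype=float)
n = wins + wins.T  # comparisons played per pair

p = np.ones(len(wins))  # strength parameters, to be estimated
for _ in range(200):
    for i in range(len(p)):
        # MM update: p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j)
        denom = sum(n[i, j] / (p[i] + p[j]) for j in range(len(p)) if j != i)
        p[i] = wins[i].sum() / denom
    p /= p.sum()  # fix the scale; only ratios are identified

print("relative strengths:", np.round(p, 3))
```

Bradley-Terry trees extend this idea by recursively splitting the task set on task properties wherever the fitted strengths differ significantly between subsets.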