Collaborative inference has gained significant research interest in machine learning as a tool for distributing computational load, reducing latency, and addressing privacy preservation in communication. Recent collaborative inference frameworks adopt dynamic inference approaches such as early exit and run-time partitioning of neural networks. However, as machine learning deployments scale up, for example in surveillance applications, fault tolerance with respect to device failures needs to be considered. This paper introduces the Edge-Prune distributed computing framework, built on a formally defined computational model, which provides a flexible infrastructure for fault-tolerant collaborative inference. The experimental part of this work presents results on the inference time savings achievable through collaborative inference, demonstrates fault-tolerant system topologies, and analyzes their cost in terms of execution time overhead.
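As a generic illustration of the failure mode such a framework has to handle (this is only a sketch, not the Edge-Prune API; remote_infer and local_infer are hypothetical callables), a device can fall back to a purely local path when its edge collaborator fails or times out:

```python
import socket

def collaborative_infer(x, remote_infer, local_infer, timeout_s=0.5):
    """Try the edge collaborator for inference on x; fall back to a full
    on-device model if it fails or times out, so a dead edge node degrades
    latency rather than breaking inference."""
    try:
        return remote_infer(x, timeout=timeout_s)
    except (socket.timeout, ConnectionError, TimeoutError):
        return local_infer(x)
```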
With the vigorous development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are changing people's lifestyles and production efficiency. However, the huge amount of computation and data generated at the network edge has become a major bottleneck, and the traditional cloud-based computing paradigm cannot meet the real-time processing requirements of such tasks. To address these issues, edge intelligence (EI), which embeds AI model training and inference capabilities into the network edge, has become a cutting-edge direction in the AI field. Furthermore, collaborative DNN inference among the cloud, the edge, and end devices provides a promising way to enhance EI. However, EI-oriented collaborative DNN inference is still at an early stage, lacking a systematic classification and discussion of existing research efforts. We have therefore conducted a comprehensive survey of recent research on EI-oriented collaborative DNN inference. In this paper, we first review the background and motivation of EI. Then we classify four typical DNN inference paradigms for EI and analyze their characteristics and key technologies. Finally, we summarize the current challenges of collaborative DNN inference, discuss future development trends, and provide future research directions.
Recently, there has been an explosive growth of mobile and embedded applications using convolutional neural networks (CNNs). To mitigate their excessive computational demands, developers have traditionally resorted to cloud offloading, incurring high infrastructure costs and a strong dependence on network conditions. On the other hand, the emergence of powerful SoCs is gradually enabling on-device execution; nevertheless, low- and mid-tier platforms still struggle to run state-of-the-art CNNs adequately. In this paper, we present DynO, a distributed inference framework that combines the best of both worlds to address several challenges, such as device heterogeneity, varying bandwidth, and multi-objective requirements. Enabling this are its novel CNN-specific data packing method, which exploits the variability in precision requirements of different parts of the CNN when onloading computation, and its novel scheduler, which jointly tunes the partition point and the transferred data precision at run time so that inference adapts to its execution environment. Quantitative evaluation shows that DynO outperforms the current state of the art, improving throughput over competing CNN offloading systems by more than an order of magnitude with up to 60x less data transferred.
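As a rough illustration of the kind of joint decision such a scheduler makes (the additive latency model, function names, and candidate bit-widths below are assumptions for the sketch, not DynO's actual implementation):

```python
def estimate_latency_ms(device_ms, server_ms, feature_elems, split, bits, bandwidth_mbps):
    """Assumed additive cost: device time for layers [0, split), transfer of the
    intermediate tensor at `bits` per element, then server time for the rest.
    feature_elems[k] is the element count of the tensor at cut point k
    (feature_elems[0] is the network input)."""
    t_device = sum(device_ms[:split])
    t_server = sum(server_ms[split:])
    t_transfer = feature_elems[split] * bits / (bandwidth_mbps * 1e6) * 1e3  # ms
    return t_device + t_transfer + t_server

def choose_split_and_bits(device_ms, server_ms, feature_elems, bandwidth_mbps,
                          candidate_bits=(32, 16, 8)):
    """Exhaustively search split points and transfer precisions; return the best."""
    best = None
    for split in range(len(device_ms) + 1):   # 0 = full offload, L = fully on-device
        for bits in candidate_bits:
            lat = estimate_latency_ms(device_ms, server_ms, feature_elems,
                                      split, bits, bandwidth_mbps)
            if best is None or lat < best[0]:
                best = (lat, split, bits)
    return best                               # (latency_ms, split_point, transfer_bits)
```

In practice the per-layer timings would come from profiling and the bandwidth from run-time measurement; the scheduler simply picks the (split, bits) pair with the lowest estimated latency.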
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Distributed training has become a pervasive and effective approach for training large-scale neural network (NN) models that process massive data. However, it is very challenging to satisfy the requirements of various NN models and diverse computing resources, and to handle the dynamic changes that occur during a training job. In this study, we design our distributed training framework from a systematic end-to-end view in order to provide built-in adaptive ability for different scenarios, especially for industrial applications and production environments, by fully considering resource allocation, model partitioning, task placement, and distributed execution. Based on a unified distributed graph and a unified cluster object, our adaptive framework is equipped with a global cost model and a global planner, which enable arbitrary parallelism, resource-aware placement, multi-mode execution, fault tolerance, and elastic distributed training. Experiments demonstrate that our framework can satisfy the various requirements stemming from the diversity of applications and the heterogeneity of resources with highly competitive performance. The ERNIE language model with 260 billion parameters is efficiently trained on thousands of AI processors with 91.7% weak scalability. By employing heterogeneous pipeline asynchronous execution, the throughput of models from recommender systems can be increased to 2.1 times and 3.3 times that of GPU-only and CPU-only training, respectively. Moreover, fault tolerance and elastic distributed training have been successfully applied to online industrial applications, reducing the number of long-term training jobs by 34.49% and increasing global scheduling efficiency by 33.91% in the production environment.
TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with a focus on training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.
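To make the dataflow abstraction concrete, the following minimal sketch uses TensorFlow's modern public tf.function API (rather than the graph/session interface described in the paper): the traced Python function becomes a graph of operations, and the tf.Variable holds the mutable shared state that the graph reads (and, during training, would update):

```python
import tensorflow as tf

w = tf.Variable(tf.random.normal([4, 2]))   # shared, mutable state

@tf.function                                 # traces the Python function into a dataflow graph
def predict(x):
    return tf.nn.softmax(tf.matmul(x, w))    # stateless ops reading the shared variable

x = tf.random.normal([3, 4])
print(predict(x).numpy())                    # executes the compiled graph
```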
Spurred by the great success of deep learning enabled by cloud computing and the rapid development of edge chips, research in artificial intelligence (AI) has shifted to both of these computing paradigms, i.e., cloud computing and edge computing. In recent years, we have witnessed ever more advanced AI models being developed on cloud servers that surpass traditional deep learning models, thanks to model innovations (e.g., Transformers and pretrained model families), the explosion of training data, and soaring computing power. However, edge computing, and especially edge-cloud collaborative computing, is still in its infancy, with only very limited algorithms deployed so far owing to resource-constrained IoT scenarios. In this survey, we conduct a systematic review of cloud and edge AI. Specifically, we are the first to set up a collaborative learning mechanism between cloud and edge modeling, together with a thorough review of the architectures that enable such a mechanism. We also discuss the potential and practical experience of several ongoing advanced edge AI topics, including pretrained models, graph neural networks, and reinforcement learning. Finally, we discuss promising directions and challenges in this field.
Deploying deep neural networks (DNNs) on IoT and mobile devices is a challenging task due to their limited computational resources. Demanding tasks are therefore often offloaded entirely to edge servers, which can accelerate inference; however, this also incurs communication costs and raises privacy concerns. In addition, this approach leaves the computational capacity of the end device unused. Split computing is a paradigm in which a DNN is divided into two parts: the first part is executed on the end device, and its output is transmitted to the edge server, which executes the final part. Here, we introduce dynamic split computing, in which the optimal split location is dynamically selected based on the state of the communication channel. By using natural bottlenecks that already exist in modern DNN architectures, dynamic split computing avoids retraining and hyperparameter optimization and has no negative impact on the final accuracy of the DNN. Through extensive experiments, we show that dynamic split computing achieves faster inference in edge computing environments where the data rate and server load vary over time.
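A minimal sketch of the run-time decision, assuming the candidate cuts are the architecture's pre-identified natural bottlenecks (all field names, timings, and sizes below are hypothetical); because the cuts already exist in the network, no retraining is involved:

```python
def pick_split(bottlenecks, data_rate_bps, full_device_ms):
    """bottlenecks: list of dicts with device_ms (time up to the cut),
    server_ms (time after the cut), and payload_bits (bottleneck tensor size).
    Returns the chosen cut index (None = run fully on device) and its latency."""
    best_cut, best_ms = None, full_device_ms
    for i, b in enumerate(bottlenecks):
        total = b['device_ms'] + b['payload_bits'] / data_rate_bps * 1e3 + b['server_ms']
        if total < best_ms:
            best_cut, best_ms = i, total
    return best_cut, best_ms
```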
As machine learning (ML) technologies and applications are rapidly changing many domains of computing, the security issues associated with ML are also emerging. In the field of systems security, many efforts have been made to ensure the confidentiality of ML models and data. ML computations are often inevitably performed in untrusted environments and therefore entail complex multi-party security requirements. Researchers have consequently leveraged Trusted Execution Environments (TEEs) to build confidential ML computation systems. This paper conducts a systematic and comprehensive survey by classifying attack vectors and mitigations in untrusted environments, analyzing the multi-party ML security requirements, and discussing the related engineering challenges.
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training remain only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to workflow. In addition, the computational patterns and communication patterns of these categories, as well as the optimization techniques proposed by recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.
There is a growing demand for shifting AI capabilities from data centers in the cloud to the edge or to end devices, exemplified by fast, real-time AI applications running on smartphones, AR/VR devices, autonomous vehicles, and various IoT devices. This shift has, however, been seriously hampered by the large and growing gap between the computational demands of DNNs and the computing power available on edge or end devices. This paper presents the design of XGen, an optimization framework for DNNs that aims to bridge this gap. XGen takes cross-cutting co-design as its first-order consideration. Its full-stack, AI-oriented optimizations consist of a number of innovative optimizations at every layer of the DNN software stack, all designed in a cooperative manner. The unique technology makes XGen able to optimize various DNNs, including those with extreme depth (e.g., BERT, GPT, and other Transformers), and to generate code that runs several times faster than code from existing DNN frameworks, while delivering the same level of accuracy.
The recent breakthroughs in machine learning (ML) and deep learning (DL) have enabled many new capabilities across plenty of application domains. While most existing machine learning models require large memory and computing power, efforts have been made to deploy some models on resource-constrained devices as well. There are several systems that perform inference on the device, while direct training on the device still remains a challenge. On-device training, however, is attracting more and more interest because: (1) it enables training models on local data without needing to share data over the cloud, thus enabling privacy preserving computation by design; (2) models can be refined on devices to provide personalized services and cope with model drift in order to adapt to the changes of the real-world environment; and (3) it enables the deployment of models in remote, hardly accessible locations or places without stable internet connectivity. We summarize and analyze the state-of-the-art systems research to provide the first survey of on-device training from a systems perspective.
Federated Learning (FL) has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model while keeping their training data on-device, thereby decoupling the ability to do machine learning from the need to store the data in the cloud. However, FL is difficult to implement realistically, both in terms of scale and of system heterogeneity. Although there are many research frameworks for simulating FL algorithms, they do not support scalable FL workloads on heterogeneous edge devices. In this paper, we present Flower, a comprehensive FL framework that distinguishes itself from existing platforms by providing new facilities to execute large-scale FL experiments and to account for richly heterogeneous FL device scenarios. Our experiments show that Flower can perform FL experiments at large client scales using only a pair of high-end GPUs. Researchers can then seamlessly migrate the experiments to real devices to examine other parts of the design space. We believe Flower provides the community with a critical new tool for FL research and development.
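For readers unfamiliar with the aggregation step that such frameworks orchestrate, the following is a minimal NumPy sketch of one FedAvg round (the plain algorithm only, not Flower's API; all shapes and client sizes are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation step: average each layer's parameters across
    clients, weighted by the number of local samples per client."""
    total = float(sum(client_sizes))
    layers = len(client_weights[0])
    return [sum(w[l] * (n / total) for w, n in zip(client_weights, client_sizes))
            for l in range(layers)]

# Hypothetical round with 3 clients and a 2-layer model.
clients = [[np.ones((4, 2)) * c, np.ones(2) * c] for c in (1.0, 2.0, 3.0)]
sizes = [100, 50, 50]
global_model = fedavg(clients, sizes)   # each layer is a sample-weighted mean
```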
In recent years, the exponential proliferation of smart devices with their intelligent applications poses severe challenges on conventional cellular networks. Such challenges can be potentially overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends of both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resources integration. Then, we discuss integration of sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we propose open challenges and present future research directions for beyond 5G networks, such as 6G.
Federated learning (FL) on deep neural networks facilitates new applications at the edge, especially for wearable and Internet-of-Things devices. Such devices capture a large and diverse amount of data, but they have memory, compute, power, and connectivity constraints which hinder their participation in FL. We propose Centaur, a multitier FL framework, enabling ultra-constrained devices to efficiently participate in FL on large neural nets. Centaur combines two major ideas: (i) a data selection scheme to choose a portion of samples that accelerates the learning, and (ii) a partition-based training algorithm that integrates both constrained and powerful devices owned by the same user. Evaluations, on four benchmark neural nets and three datasets, show that Centaur gains ~10% higher accuracy than local training on constrained devices with ~58% energy saving on average. Our experimental results also demonstrate the superior efficiency of Centaur when dealing with imbalanced data, client participation heterogeneity, and various network connection probabilities.
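As a toy sketch of what a loss-based data selection scheme might look like (the keep ratio, function name, and criterion are illustrative assumptions, not Centaur's actual policy), a constrained device can rank its cached samples by current loss and train only on the hardest fraction:

```python
import numpy as np

def select_informative(losses, keep_ratio=0.3):
    """Keep the fraction of cached samples with the highest current loss, so the
    constrained device only trains on the data that contributes most to learning."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[-k:]           # indices of the k hardest samples

losses = np.array([0.02, 1.3, 0.4, 0.9, 0.05])
idx = select_informative(losses, keep_ratio=0.4)   # -> indices 3 and 1
```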
Recently, deploying deep neural network (DNN) models via collaborative inference, which splits a pre-trained model into two parts and executes them on the user equipment (UE) and the edge server respectively, has become attractive. However, the large intermediate features of DNNs hinder flexible decoupling, and existing approaches either focus on the single-UE scenario or simply define tasks by the required CPU cycles while ignoring the indivisibility of individual DNN layers. In this paper, we study the multi-agent collaborative inference scenario, in which a single edge server coordinates the inference of multiple UEs. Our goal is to achieve fast and energy-efficient inference for all UEs. To achieve this goal, we first design a lightweight autoencoder-based method to compress the large intermediate features. Then we define tasks according to the inference overhead of the DNN and formulate the problem as a Markov decision process (MDP). Finally, we propose a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm to solve the optimization problem with a hybrid action space. We conduct extensive experiments with different types of networks, and the results show that our method can reduce inference latency by up to 56% and save up to 72% of energy consumption.
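As a rough sketch of a lightweight feature compressor of the kind described above (channel counts, layer choices, and the class name are illustrative assumptions, not the paper's exact design), a 1x1-convolution autoencoder can shrink the channel dimension of an intermediate feature map before transmission and restore it on the server; it would typically be trained with a reconstruction loss against the original features:

```python
import torch
import torch.nn as nn

class FeatureCodec(nn.Module):
    """Channel-reducing autoencoder for an intermediate feature map
    (e.g., shape [N, 256, H, W]); the encoder runs on the UE and the
    decoder on the edge server."""
    def __init__(self, channels=256, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(channels, bottleneck, kernel_size=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(bottleneck, channels, kernel_size=1), nn.ReLU())

    def forward(self, feat):
        code = self.encoder(feat)      # compact tensor sent over the UE-server link
        return self.decoder(code)      # reconstructed before the remaining layers

codec = FeatureCodec()
rec = codec(torch.randn(1, 256, 14, 14))   # reconstructed map, same shape as the input
```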
Energy-harvesting (EH) IoT devices that operate intermittently, combined with advances in deep neural networks (DNNs), open up new opportunities for enabling sustainable intelligent applications. However, implementing such computation- and memory-intensive intelligent algorithms on EH devices is extremely difficult due to limited resources and an intermittent power supply that causes frequent failures. To address these challenges, this paper proposes a methodology that enables ultra-fast deep learning with a low-energy accelerator for tiny energy-harvesting devices. We first propose a resource-aware structured DNN training framework (RAD), which employs block-circulant matrices with ADMM to achieve high compression and model quantization, exploiting the advantages of various vector-operation accelerators. We then propose a DNN implementation method (ACE) that employs the low-energy accelerator to achieve maximum performance with small energy consumption. Finally, we further design FLEX, system support for intermittent computation in energy-harvesting situations. Experimental results on three different DNN models show that RAD, ACE, and FLEX can enable ultra-fast and correct inference on energy-harvesting devices, with up to 4.26x runtime reduction and up to 7.7x energy reduction, while keeping accuracy higher than the state of the art.
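To make the block-circulant compression idea concrete, here is a small NumPy sketch (shapes and block sizes are hypothetical) of a matrix-vector product when every block of the weight matrix is a circulant matrix stored only by its first column, so each block product reduces to an FFT-based circular convolution:

```python
import numpy as np

def block_circulant_matvec(W_cols, x, block):
    """y = W @ x where W is (p*block, q*block) and block (i, j) is the circulant
    matrix whose first column is W_cols[i, j] (length `block`)."""
    p, q, _ = W_cols.shape
    Xf = np.fft.fft(x.reshape(q, block), axis=1)   # FFT of each input block
    Wf = np.fft.fft(W_cols, axis=2)                # FFT of each block's defining column
    Yf = np.einsum('pqk,qk->pk', Wf, Xf)           # multiply elementwise, sum over input blocks
    return np.fft.ifft(Yf, axis=1).real.reshape(p * block)

# Quick check against a dense reconstruction (hypothetical sizes).
p, q, b = 2, 3, 4
W_cols = np.random.randn(p, q, b)
x = np.random.randn(q * b)
dense = np.block([[np.stack([np.roll(W_cols[i, j], s) for s in range(b)], axis=1)
                   for j in range(q)] for i in range(p)])
assert np.allclose(dense @ x, block_circulant_matvec(W_cols, x, b))
```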
In recent years, deep learning (DL) models have demonstrated remarkable achievements on non-trivial tasks such as speech recognition and natural language understanding. One of the significant contributors to its success is the proliferation of end devices that acted as a catalyst to provide data for data-hungry DL models. However, computing DL training and inference is the main challenge. Usually, central cloud servers are used for the computation, but it opens up other significant challenges, such as high latency, increased communication costs, and privacy concerns. To mitigate these drawbacks, considerable efforts have been made to push the processing of DL models to edge servers. Moreover, the confluence point of DL and edge has given rise to edge intelligence (EI). This survey paper focuses primarily on the fifth level of EI, called all in-edge level, where DL training and inference (deployment) are performed solely by edge servers. All in-edge is suitable when the end devices have low computing resources, e.g., Internet-of-Things, and other requirements such as latency and communication cost are important in mission-critical applications, e.g., health care. Firstly, this paper presents all in-edge computing architectures, including centralized, decentralized, and distributed. Secondly, this paper presents enabling technologies, such as model parallelism and split learning, which facilitate DL training and deployment at edge servers. Thirdly, model adaptation techniques based on model compression and conditional computation are described because the standard cloud-based DL deployment cannot be directly applied to all in-edge due to its limited computational resources. Fourthly, this paper discusses eleven key performance metrics to evaluate the performance of DL at all in-edge efficiently. Finally, several open research challenges in the area of all in-edge are presented.
With the increasing ubiquity of autonomous robotic solutions, interest in their connectivity and in cooperation within multi-robot systems is on the rise. Two aspects of current research problems are robot security and securing multi-robot collaboration against Byzantine agents. Blockchain and other distributed ledger technologies (DLTs) have been proposed to address the challenges in both areas. However, key challenges remain, including scalability and deployment in real-world networks. This paper presents an approach that integrates IOTA and ROS 2 towards more scalable DLT-based robotic systems while allowing for network partition tolerance after deployment. To the best of our knowledge, this is the first implementation of IOTA smart contracts for robotic systems, and the first integrated design with ROS 2, in contrast to the vast majority of the literature, which relies on Ethereum. We present a general IOTA + ROS 2 architecture leading to a partition-tolerant decision-making process that also inherits Byzantine fault tolerance properties from the embedded blockchain structure. We demonstrate the effectiveness of the proposed framework for a cooperative mapping application in a system with intermittent network connectivity. We show superior performance with respect to an Ethereum-based solution in the presence of network partitions, with a negligible impact on computational resource utilization. These results open the path for a wider integration of blockchain solutions in distributed robotic systems with lower connectivity and computational requirements.
Recently, applications of deep neural networks (DNNs) have become very prominent in many fields, such as computer vision (CV) and natural language processing (NLP), owing to their superior feature extraction performance. However, high-dimensional parameter models and large-scale mathematical computation restrict execution efficiency, especially for Internet of Things (IoT) devices. Unlike previous cloud/edge-only paradigms, which put enormous pressure on uplink communication, and device-only paradigms, which cannot sustain the computational intensity, we highlight collaborative computation between the device and the edge for DNN models, which can achieve a good balance between communication load and execution accuracy. Specifically, a systematic on-demand co-inference framework is proposed to exploit a multi-branch structure, in which the pre-trained AlexNet is right-sized through early exits and partitioned at an intermediate DNN layer. Integer quantization is applied to further compress the transmitted bits. As a result, we establish a new deep reinforcement learning (DRL) optimizer, Soft Actor-Critic for Discrete (SAC-D), which generates the exit point, the partition point, and the compression bits via soft policy iteration. Based on a latency- and accuracy-aware reward design, this optimizer can adapt well to complex environments such as dynamic wireless channels and arbitrary CPU processing, and is able to support 5G URLLC. Real-world experiments on a Raspberry Pi 4 and a PC show the effectiveness of the proposed solution.
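As an illustration of the integer quantization step (a generic uniform affine scheme with an assumed 8-bit precision, not necessarily the exact scheme used in the paper), the intermediate tensor produced at the partition point can be mapped to unsigned integers before transmission and mapped back on the edge side:

```python
import numpy as np

def quantize(feat, bits=8):
    """Uniform affine quantization of an intermediate feature tensor so that only
    `bits` per element (plus a scale and offset) cross the device-edge link.
    For bits < 8 the values would additionally need bit-packing."""
    qmax = 2 ** bits - 1
    lo, hi = float(feat.min()), float(feat.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.clip(np.round((feat - lo) / scale), 0, qmax).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Edge-side reconstruction of the transmitted feature tensor."""
    return q.astype(np.float32) * scale + lo
```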