Edge computing enables intelligent IoT-based systems by concurrently and continuously executing latency-sensitive machine learning (ML) applications. These edge-based ML systems are often battery-powered (i.e., energy-constrained). They use heterogeneous resources with diverse computing performance (e.g., CPUs, GPUs, and/or FPGAs) to fulfill the latency constraints of ML applications. The challenge is to allocate requests of different ML applications on such heterogeneous edge computing (HEC) systems with respect to their energy and latency constraints. To this end, we study and analyze resource allocation solutions that can increase the on-time task completion rate while considering the energy constraint. Importantly, we investigate edge-friendly (lightweight) multi-objective mapping heuristics that do not bias toward a particular application type to achieve the goals; instead, the heuristics consider "fairness" across the concurrent ML applications in their mapping decisions. Performance evaluations show that the proposed heuristics outperform the heuristics widely used in heterogeneous systems with respect to both latency and energy objectives, particularly at low-to-moderate request arrival rates. We observed up to an 8.9% improvement in the on-time task completion rate and 12.6% in energy savings, without imposing any noticeable overhead on the edge system.
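As a rough illustration of the kind of lightweight multi-objective mapping heuristic the abstract above describes, the sketch below greedily assigns each task to the machine minimizing a weighted latency/energy score, and approximates fairness by dispatching tasks round-robin across application types. All field names (`work`, `speed`, `power`) and the fairness mechanism are illustrative assumptions, not the paper's actual heuristic.

```python
def fairness_aware_map(tasks, machines, alpha=0.5):
    """Greedy multi-objective mapping sketch: each task goes to the machine
    minimizing alpha*latency + (1-alpha)*energy. Tasks are interleaved by
    application type so no single app type monopolizes the fast machines."""
    # Group tasks by application type, then interleave (simple fairness proxy).
    by_type = {}
    for t in tasks:
        by_type.setdefault(t["app"], []).append(t)
    order, queues = [], list(by_type.values())
    while any(queues):
        for q in queues:
            if q:
                order.append(q.pop(0))
    free_at = {m["name"]: 0.0 for m in machines}  # machine ready times
    assignment = {}
    for t in order:
        def score(m):
            latency = free_at[m["name"]] + t["work"] / m["speed"]
            energy = t["work"] * m["power"] / m["speed"]
            return alpha * latency + (1 - alpha) * energy
        best = min(machines, key=score)
        free_at[best["name"]] += t["work"] / best["speed"]
        assignment[t["id"]] = best["name"]
    return assignment
```

Such a heuristic runs in O(tasks × machines), which is what makes it "edge-friendly" relative to search-based mappers.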
Emerging real-time multi-model ML (RTMM) workloads such as AR/VR and drone control often involve dynamic behaviors at various levels: task, model, and layers (or ML operators) within a model. Such dynamic behaviors pose new challenges to the system software in an ML system, because the overall system load is unpredictable, unlike in traditional ML workloads. In addition, real-time processing requires meeting deadlines, and multi-model workloads involve highly heterogeneous models. As RTMM workloads often run on resource-constrained devices (e.g., VR headsets), developing an effective scheduler is an important research problem. Therefore, we propose a new scheduler, SDRM3, that effectively handles the various dynamicity in RTMM-style workloads, targeting multi-accelerator systems. To make scheduling decisions, SDRM3 quantifies the unique requirements of RTMM workloads and utilizes the quantified scores to drive scheduling decisions, considering the current system load and other inference jobs on different models and input frames. SDRM3 has tunable parameters that provide fast adaptivity to dynamic workload changes based on a gradient-descent-like online optimization, which typically converges within five steps for new workloads. In addition, we propose a method to exploit model-level dynamicity, based on Supernet, for exploiting the trade-off between scheduling effectiveness and model performance (e.g., accuracy), which dynamically selects a proper sub-network in a Supernet based on the system load. In our evaluation on five realistic RTMM workload scenarios, SDRM3 reduces the overall UXCost, which is an energy-delay-product (EDP)-equivalent metric for real-time applications defined in the paper, by 37.7% and 53.2% on geometric mean (up to 97.6% and 97.1%) compared to state-of-the-art baselines, which shows the efficacy of our scheduling methodology.
Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to efficiently utilize compute resources to meet users' service requirements. Recently proposed scheduling methods leverage the low response times of edge computing platforms to optimize application Quality of Service (QoS). However, scheduling workflow applications in mobile edge-cloud systems is challenging due to computational heterogeneity, changing latencies of mobile devices, and the volatile resource requirements of workloads. To overcome these difficulties, it is essential, yet challenging, to develop a long-sighted optimization scheme that efficiently models the QoS objectives. In this work, we propose MCDS: Monte Carlo learning using Deep Surrogate models to efficiently schedule workflow applications in mobile edge-cloud computing systems. MCDS is an Artificial Intelligence (AI)-based scheduling approach that uses a tree-based search strategy and a deep-neural-network-based surrogate model to estimate the long-term QoS impact of immediate actions, for robust optimization of scheduling decisions. Experiments on physical and simulated edge-cloud testbeds show that MCDS can improve over the state-of-the-art methods in terms of energy consumption, response time, SLA violations, and cost by at least 6.13, 4.56, 45.09 and 30.71 percent, respectively.
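The core idea above, tree search whose leaf evaluations come from a learned surrogate rather than expensive rollouts, can be sketched in miniature. The toy below does a UCB-style search over root actions only; `step` stands in for the environment transition and `surrogate` for the paper's deep QoS model (both are assumptions for illustration, not MCDS's actual components).

```python
import math

def mcds_style_choice(state, actions, step, surrogate, n_iter=100, c=1.4):
    """Pick the scheduling action with the best estimated long-term QoS.
    Each candidate's value comes from a surrogate scoring the resulting
    state, combined with a UCB exploration bonus over repeated trials."""
    visits = {a: 0 for a in actions}
    value = {a: 0.0 for a in actions}
    for i in range(1, n_iter + 1):
        def ucb(a):
            if visits[a] == 0:
                return float("inf")  # try every action at least once
            return value[a] / visits[a] + c * math.sqrt(math.log(i) / visits[a])
        a = max(actions, key=ucb)
        v = surrogate(step(state, a))  # surrogate replaces a costly rollout
        visits[a] += 1
        value[a] += v
    return max(actions, key=lambda a: value[a] / max(visits[a], 1))
```

The surrogate is what makes the search "long-sighted" at low cost: one network evaluation replaces simulating the workflow to completion.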
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Training Deep Neural Networks (DNNs) has become widespread in both enterprise and cloud data centers. Existing DNN training schedulers treat the GPU as the dominant resource and allocate other resources, such as CPU and memory, proportionally to the number of GPUs requested by a job. Unfortunately, these schedulers do not consider the impact of a job's sensitivity to CPU, memory, and storage allocations. In this work, we propose Synergy, a resource-sensitive scheduler for shared GPU clusters. Synergy infers the sensitivity of DNNs to different resources using optimistic profiling; some jobs might benefit from more than the GPU-proportional allocation, while some jobs might not be affected by less than the GPU-proportional allocation. Synergy performs such multi-resource workload-aware assignments across a set of jobs scheduled on shared multi-tenant clusters using a new near-optimal online algorithm. Our experiments show that workload-aware CPU and memory allocation can improve average JCT by up to 3.4x compared to traditional GPU-proportional scheduling.
Intelligent IoT environments (iIoTe) are comprised of heterogeneous devices that can collaboratively execute semi-autonomous IoT applications; examples include highly automated manufacturing cells and autonomously interacting harvesting machines. Energy efficiency is key in such edge environments, since they are often based on an infrastructure of wireless and battery-run devices, e.g., e-tractors, drones, Automated Guided Vehicles (AGVs) and robots. The total energy consumption draws contributions from multiple technologies that enable edge computing and communication, distributed learning, as well as distributed ledgers and smart contracts. This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the trade-off among resources, latency, privacy and energy consumption. Finally, the paper provides a vision for integrating these enabling technologies in energy-efficient iIoTe and a roadmap to address the open research challenges.
Aerial base stations (ABSs) allow smart farms to offload the processing of complex tasks from Internet of Things (IoT) devices to the ABSs. IoT devices have limited energy and computing resources, hence advanced solutions are needed for systems that require ABS support. This paper introduces a novel multi-actor-based risk-sensitive reinforcement learning approach for ABS task scheduling in smart agriculture. The problem is defined as task offloading with the strict condition that IoT tasks must complete before their deadlines. Moreover, the algorithm must also account for the limited energy capacity of the ABSs. The results show that our proposed approach outperforms several heuristics and the classic Q-learning approach. Furthermore, we provide a mixed integer linear programming solution to determine a lower bound on the performance and shed light on the gap between our risk-sensitive solution and the optimal solution. Our extensive simulation results show that our approach is a promising way to offer a guaranteed task-processing service to IoT tasks in smart farms, while increasing the hovering time of the ABS over the farm.
Edge Computing (EC) has emerged as a promising paradigm that provides multiple computation and analytics capabilities close to data sources, opening new pathways for novel applications. However, the limited computational capabilities of EC nodes and the expectation of ensuring high levels of QoS during task execution impose strict requirements on innovative management approaches. Motivated by the need to maintain a minimum QoS level during the operation of EC nodes, we elaborate a distributed and intelligent decision-making approach for task scheduling. Our aim is to enhance the behavior of EC nodes, making them capable of securing high QoS levels. We propose that nodes continuously monitor QoS levels and systematically evaluate the probability of violating them, in order to proactively decide which tasks to offload to peer nodes or the Cloud. We present, describe and evaluate the proposed scheme through multiple experimental scenarios, revealing its performance and the benefits of the envisioned monitoring mechanism when serving processing requests in highly dynamic environments like EC.
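The proactive decision described above can be reduced to a small rule: estimate the empirical probability of a QoS violation from a sliding window of recent latencies, and offload once it crosses a threshold. The function below is a minimal sketch under that assumption; the window-based estimator and the 0.2 threshold are illustrative, not the paper's actual mechanism.

```python
def should_offload(latencies, qos_limit, threshold=0.2):
    """Decide whether an EC node should offload upcoming tasks.

    latencies: sliding window of recently observed task latencies
    qos_limit: maximum acceptable latency (the QoS constraint)
    threshold: empirical violation probability that triggers offloading
    """
    violations = sum(1 for l in latencies if l > qos_limit)
    p_violation = violations / len(latencies)
    return p_violation > threshold
```

A node would run this check continuously, routing new tasks to a peer node or the cloud whenever it returns `True`, before violations actually occur.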
In recent years, the exponential proliferation of smart devices and their intelligent applications has posed severe challenges to conventional cellular networks. Such challenges can be potentially overcome by integrating communication, computing, caching, and control (i4C) technologies. In this survey, we first give a snapshot of different aspects of the i4C, comprising background, motivation, leading technological enablers, potential applications, and use cases. Next, we describe different models of communication, computing, caching, and control (4C) to lay the foundation of the integration approach. We review current state-of-the-art research efforts related to the i4C, focusing on recent trends of both conventional and artificial intelligence (AI)-based integration approaches. We also highlight the need for intelligence in resource integration. Then, we discuss integrated sensing and communication (ISAC) and classify the integration approaches into various classes. Finally, we outline open challenges and present future research directions for beyond-5G networks, such as 6G.
Fog computing was introduced to alleviate the limitations of cloud computing by bringing cloud resources into users' proximity. The fog environment makes its limited resources available to a large number of users to deploy their serverless applications, composed of multiple serverless functions. The main intention behind introducing the fog environment is to fulfill the demands of latency- and location-sensitive serverless applications through its limited resources. Recent research mainly focuses on assigning the maximum resources to such applications from the fog node, without fully utilizing the cloud environment. This has a negative impact on the provision of resources to the maximum number of connected users. To address this issue, in this paper we investigate the optimal percentage of a user's request that should be fulfilled by the fog and by the cloud. Accordingly, we propose DeF-DReL, a systematic deployment of serverless functions in fog and cloud environments using deep reinforcement learning, which uses several real-life parameters, such as users' distance and latency from the nearest fog node, users' priority, the priority of the serverless applications and their resource demands, etc. From the simulation and comparison results, its superiority over related algorithms and its applicability to real-life scenarios can be clearly observed.
With the vigorous development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are changing people's lifestyles and improving production efficiency. However, the huge amount of computation and data generated at the network edge has become a major bottleneck, and the traditional cloud-based computing paradigm cannot meet the real-time processing requirements of such tasks. To address these problems, Edge Intelligence (EI), which embeds AI model training and inference capabilities into the network edge, has become a cutting-edge direction in the field of AI. Furthermore, collaborative DNN inference among the cloud, the edge, and end devices provides a promising way to boost EI. Nevertheless, EI-oriented collaborative DNN inference is still in its early stage, lacking a systematic classification and discussion of existing research efforts. Thus, we conduct a comprehensive survey of recent studies on EI-oriented collaborative DNN inference. In this paper, we first review the background and motivation of EI. Then, we classify four typical DNN inference paradigms for EI and analyze their characteristics and key technologies. Finally, we summarize the current challenges of collaborative DNN inference, discuss future development trends, and provide future research directions.
Deep Reinforcement Learning (DRL)-based neural schedulers have shown great potential for solving real-world resource allocation problems, as they have demonstrated significant performance gains in the domain of cluster computing. In this paper, we investigate the feasibility of neural schedulers for the domain of System-on-Chip (SoC) resource allocation through extensive experiments and comparison with non-neural, heuristic schedulers. The key findings are threefold. First, neural schedulers designed for the cluster computing domain do not work well for SoCs, due to (i) the heterogeneity of SoC compute resources and (ii) the variable action set caused by the randomness of incoming jobs. Second, our novel neural scheduler technique, Eclectic Interaction Matching (EIM), overcomes the above challenges, thereby significantly improving existing neural schedulers. Specifically, we rationalize the underlying reasons behind the performance gains of EIM-based neural schedulers. Third, we discover that the ratio of the average processing element (PE) switching delay to the average PE computation time also significantly affects the performance of neural SoC schedulers, even with EIM. Consequently, future neural SoC scheduler designs must consider this metric, as well as their implementation overhead, for practicality.
Edge Federation is a new computing paradigm that seamlessly interconnects the resources of multiple edge service providers. A key challenge in such systems is the deployment of latency-critical and AI-based resource-intensive applications on constrained devices. To address this challenge, we propose a novel memory-efficient deep learning model, namely Generative Optimization Networks (GONs). Unlike GANs, GONs use a single network to both discriminate inputs and generate samples, significantly reducing their memory footprint. Leveraging the low memory footprint of GONs, we propose a decentralized fault-tolerance method called DRAGON, which runs simulations (as a digital modeling twin) to quickly predict and optimize the performance of the edge federation. Extensive experiments with real-world edge computing benchmarks on multiple Raspberry Pi based federated edge configurations show that DRAGON can outperform the baseline methods on fault detection and Quality of Service (QoS) metrics. Specifically, the proposed method gives higher F1 scores for fault detection than the best deep learning (DL) method, while consuming less memory than the heuristic methods. This enables improvements in energy consumption, response time and service level agreement violations by up to 74, 63 and 82 percent, respectively.
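The single-network trick attributed to GONs above, reusing a discriminator-style scoring network as a generator by optimizing its input, can be illustrated without any deep learning framework. Below, a hand-written score function stands in for the trained network, and finite differences stand in for autograd; both substitutions are assumptions made to keep the sketch self-contained.

```python
import numpy as np

def gon_style_generate(score_fn, dim, steps=100, lr=0.1, eps=1e-4, seed=0):
    """GON-style sampling sketch: start from random noise z and
    gradient-ascend it until the scoring network assigns it a high score,
    so one network serves as both discriminator and generator."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for i in range(dim):
            e = np.zeros(dim)
            e[i] = eps
            # central finite difference, standing in for backprop through z
            grad[i] = (score_fn(z + e) - score_fn(z - e)) / (2 * eps)
        z += lr * grad
    return z
```

The memory saving comes from storing one set of weights instead of a generator/discriminator pair; generation cost shifts to the inner optimization loop.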
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislations and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
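The FL training loop described above (server broadcasts a model, clients train on private data, only weight updates are aggregated) is commonly instantiated as Federated Averaging. The sketch below uses a linear least-squares model as a stand-in for any local ML model; the model choice and hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent on
    linear least squares, using only that client's private (X, y)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(clients, dim, rounds=20):
    """FedAvg sketch: clients send back updated weights, never raw data;
    the server averages them weighted by local dataset size."""
    w = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_step(w, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w
```

Communication cost per round is one weight vector per client, which is the property that makes FL attractive at the mobile edge.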
Fifth- and sixth-generation wireless communication networks are enabling tools such as Internet of Things (IoT) devices, Unmanned Aerial Vehicles (UAVs), and artificial intelligence to improve the agricultural landscape, using a network of devices to automatically monitor farmlands. Surveying a large area requires performing many image classification tasks within a specific period of time, in order to prevent damage to the farm in case of an incident such as fire or flood. UAVs have limited energy and computing power, and may not be able to perform all of the intensive image classification tasks locally and within an appropriate amount of time. Hence, the UAVs are assumed to be able to partially offload their workload to nearby multi-access edge computing devices. The UAVs need a decision-making algorithm that determines where each task will be performed, while also considering the time constraints and energy levels of the other UAVs in the network. In this paper, we introduce a Deep Q-Learning (DQL) approach to solve this multi-objective problem. The proposed method is compared with Q-learning and three heuristic baselines, and the simulation results show that our proposed DQL-based method achieves comparable results in terms of the UAVs' remaining battery levels and the percentage of deadline violations. Furthermore, our method is able to reach convergence 13 times faster than Q-learning.
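To make the offloading decision above concrete, the sketch below trains a tabular Q-learning policy over discrete (battery level, etc.) states and local-vs-offload actions. The paper uses a deep Q-network; a table stands in here so the example stays tiny, and the reward encoding via `env_step` is an assumption for illustration.

```python
import random

def train_offload_policy(env_step, n_states, n_actions,
                         episodes=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Q-learning sketch of the offload decision. env_step(s, a) returns
    (next_state, reward, done), where the reward encodes deadline hits,
    deadline violations, and energy spent executing locally vs. on an
    edge device."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a: Q[s][a])
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

A deep variant replaces the table with a network mapping continuous state features (battery, queue lengths, deadlines) to action values, which is what enables the faster convergence the abstract reports.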
As data generation increasingly takes place on devices without a wired connection, Machine Learning (ML) related traffic will become ubiquitous in wireless networks. Many studies have shown that traditional wireless protocols are inefficient or unsustainable for supporting ML, which creates the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support ML services over distributed datasets. Currently, there are two clear themes within the literature: analog over-the-air computation and digital radio resource management optimized for ML. This survey gives a comprehensive introduction to these methods, reviews the most important works, highlights open problems, and discusses application scenarios.
The deployment flexibility and maneuverability of Unmanned Aerial Vehicles (UAVs) have increased their adoption in various applications, such as wildfire tracking, border monitoring, etc. In many critical applications, UAVs capture images and other sensory data and then send the captured data to remote servers for inference and data processing tasks. However, this approach is not always practical in real-time applications due to connection instability, limited bandwidth, and end-to-end latency. One promising solution is to divide the inference requests into multiple parts (layers or segments), with each part being executed on a different UAV based on the available resources. Furthermore, some applications require the UAVs to traverse certain areas and capture incidents; thus, planning their paths becomes critical, particularly to reduce the latency of the collaborative inference process. Specifically, planning the UAVs' trajectories can reduce the data transmission latency by communicating with devices in the same proximity while mitigating transmission interference. This work aims to design a model for distributed collaborative inference requests and path planning in a UAV swarm, while respecting the resource constraints due to the computational load and memory usage of the inference requests. The model is formulated as an optimization problem that aims to minimize latency. The formulated problem is NP-hard, so finding the optimal solution is quite complex; thus, this paper introduces a real-time and dynamic solution for online applications using deep reinforcement learning. We conduct extensive simulations and compare our results to state-of-the-art studies, demonstrating that our model outperforms the competing models.
Smart eHealth applications deliver personalized and preventive digital healthcare services to clients through remote sensing, continuous monitoring, and data analytics. Smart eHealth applications sense input data from multiple modalities, transmit the data to edge and/or cloud nodes, and process the data with compute-intensive machine learning (ML) algorithms. Continuous noisy input data, unreliable network connectivity, the computational requirements of ML algorithms, and the choice of compute placement among the sensor-edge-cloud layers all affect the efficiency of ML-driven eHealth applications. In this chapter, we present techniques for optimized compute placement, exploration of accuracy-performance trade-offs, and cross-layer sense-compute co-optimization for ML-driven eHealth applications. We demonstrate the practical use of smart eHealth applications in everyday settings through an objective pain assessment case study on a sensor-edge-cloud framework.
Recently, there has been an explosive growth of mobile and embedded applications using convolutional neural networks (CNNs). To alleviate their excessive computational demands, developers have traditionally resorted to cloud offloading, inducing high infrastructure costs and a strong dependence on networking conditions. On the other hand, the emergence of powerful SoCs is gradually enabling on-device execution. Nonetheless, low- and mid-tier platforms still struggle to run state-of-the-art CNNs sufficiently well. In this paper, we present DynO, a distributed inference framework that combines the best of both worlds to address several challenges, such as device heterogeneity, varying bandwidth, and multi-objective requirements. Enabling this are its novel CNN-specific data packing method, which exploits the variability in precision needs of different parts of the CNN when onloading computation, and its novel scheduler, which jointly tunes the partition point and the transferred data precision at run time to adapt inference to its execution environment. Quantitative evaluation shows that DynO outperforms the current state-of-the-art, improving throughput by more than an order of magnitude over competing CNN offloading systems, with up to 60x less data transferred.
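The partition-point decision at the heart of such split-inference schedulers can be sketched as a one-dimensional search: for each candidate split layer, estimate on-device time for the head, transfer time for the intermediate activation, and server time for the tail, then pick the minimum. The cost model below is a deliberately simple assumption, not DynO's actual scheduler (which additionally tunes data precision).

```python
def best_split(layer_flops, out_bytes, input_bytes, dev_flops, srv_flops, bw):
    """Return (k, latency): run the first k layers on-device, transmit the
    activation after layer k, and run the rest remotely.

    layer_flops: per-layer compute cost
    out_bytes:   activation size after each layer
    input_bytes: size of the raw input (transmitted if k == 0)
    dev_flops / srv_flops: device and server compute rates
    bw:          link bandwidth (bytes per second)
    """
    n = len(layer_flops)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        t_dev = sum(layer_flops[:k]) / dev_flops
        t_srv = sum(layer_flops[k:]) / srv_flops
        # raw input if nothing runs on-device; tiny final result if everything does
        tx = input_bytes if k == 0 else (out_bytes[k - 1] if k < n else 0.0)
        t = t_dev + tx / bw + t_srv
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t
```

Splitting at a layer with a small activation (a natural "bottleneck") often beats both device-only and full offloading, which is the effect the paper exploits.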
Machine learning (ML) models can leak information about users, and differential privacy (DP) provides a rigorous way to bound that leakage under a given budget. This DP budget can be regarded as a new type of compute resource in workloads of multiple ML models training on user data. Once it is used, the DP budget is forever consumed. Therefore, it is crucial to allocate it most efficiently to train as many models as possible. This paper presents a scheduler for privacy budgets that optimizes for efficiency. We formulate privacy scheduling as a new type of multidimensional knapsack problem, called privacy knapsack, which maximizes DP budget efficiency. We show that privacy knapsack is NP-hard, hence practical algorithms are necessarily approximate. We develop an approximation algorithm for privacy knapsack, DPK, and evaluate it on microbenchmarks and on a new, synthetic private-ML workload we developed from the Alibaba ML cluster trace. We show that DPK: (1) often approaches the efficiency-optimal schedule, (2) consistently schedules more tasks compared to a state-of-the-art privacy scheduling algorithm that focused on fairness (1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some level of fairness for efficiency. Therefore, using DPK, DP ML operators should be able to train more models on the same amount of user data while offering the same privacy guarantee to their users.
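To ground the knapsack framing, the sketch below schedules DP training tasks under a single global epsilon budget using the classic profit-per-weight greedy density heuristic. This is a textbook knapsack approximation standing in for the paper's DPK, which handles the harder multidimensional case (per-block budgets); the task fields are illustrative.

```python
def greedy_privacy_schedule(tasks, budget):
    """Greedy density sketch of privacy-budget scheduling: admit tasks in
    decreasing order of profit per unit of epsilon consumed, until the
    global DP budget runs out. Consumed epsilon is never reclaimed."""
    chosen, spent = [], 0.0
    for t in sorted(tasks, key=lambda t: t["profit"] / t["eps"], reverse=True):
        if spent + t["eps"] <= budget:
            chosen.append(t["id"])
            spent += t["eps"]
    return chosen, spent
```

Greedy density maximizes efficiency but can starve low-density tasks indefinitely, which mirrors the fairness trade-off item (3) above describes.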