Building a fault-tolerant edge system that can quickly react to node overloads or failures is challenging due to the unreliability of edge devices and the strict service deadlines of modern applications. Moreover, unnecessary task migrations can stress the system network, giving rise to the need for a smart and parsimonious failure recovery scheme. Prior approaches often fail to adapt to highly volatile workloads or to accurately detect and diagnose faults for optimal remediation. There is thus a need for a robust and proactive fault-tolerance mechanism to meet service level objectives. In this work, we propose PreGAN, a composite AI model that uses a Generative Adversarial Network (GAN) to predict preemptive migration decisions for proactive fault-tolerance in containerized edge deployments. PreGAN uses co-simulations in tandem with a GAN to learn a few-shot anomaly classifier and proactively predict migration decisions for reliable computing. Extensive experiments on a Raspberry-Pi based edge environment show that PreGAN can outperform state-of-the-art baseline methods in fault detection, diagnosis and classification, thus achieving a high quality of service. Compared to the best method among the considered baselines, PreGAN accomplishes 5.1% more accurate fault detection, higher diagnosis scores and 23.8% lower overheads.
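The preemptive, parsimonious migration logic described above can be sketched as follows. This is a hypothetical illustration, not PreGAN's actual GAN-based model: it assumes per-node fault probabilities (as a few-shot anomaly classifier might emit) and made-up thresholds, and shows only the "migrate only when it pays off" decision rule.

```python
def plan_migrations(fault_prob, load, threshold=0.8, benefit_margin=0.1):
    """Pick (source, target) migrations only when the predicted fault risk
    justifies the network cost of moving a container (parsimonious recovery)."""
    risky = [n for n, p in fault_prob.items() if p >= threshold]
    safe = sorted((n for n in fault_prob if fault_prob[n] < threshold),
                  key=lambda n: load[n])  # prefer the least-loaded healthy node
    plan = []
    for src in risky:
        for dst in safe:
            # migrate only if the risk reduction exceeds a fixed margin
            if fault_prob[src] - fault_prob[dst] > benefit_margin:
                plan.append((src, dst))
                break
    return plan

probs = {"edge0": 0.95, "edge1": 0.10, "edge2": 0.30}
loads = {"edge0": 0.9, "edge1": 0.2, "edge2": 0.6}
print(plan_migrations(probs, loads))  # [('edge0', 'edge1')]
```

Only the node flagged as likely to fail sheds work, and it moves to the least-loaded healthy host, keeping network stress low.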
The emergence of latency-critical AI applications has been supported by the evolution of the edge computing paradigm. However, edge solutions are typically resource-constrained, posing reliability challenges due to heightened contention for compute and communication capacities and faulty application behavior in the presence of overload conditions. Although a large amount of generated log data can be mined for fault prediction, labeling this data for training is a manual process and thus a limiting factor for automation. Due to this, many companies resort to unsupervised fault-tolerance models. Yet, failure models of this kind can incur a loss of accuracy when they need to adapt to non-stationary workloads and diverse host characteristics. To cope with this, we propose a novel modeling approach, called DeepFT, to proactively avoid system overloads and their adverse effects by optimizing the task scheduling and migration decisions. DeepFT uses a deep surrogate model to accurately predict and diagnose faults in the system and co-simulation based self-supervised learning to dynamically adapt the model in volatile settings. It offers a highly scalable solution as the model size scales by only 3 and 1 percent per unit increase in the number of active tasks and hosts. Extensive experimentation on a Raspberry-Pi based edge cluster with DeFog benchmarks shows that DeepFT can outperform state-of-the-art baseline methods in fault-detection and QoS metrics. Specifically, DeepFT gives the highest F1 scores for fault-detection, reducing service deadline violations by up to 37% while also improving response time by up to 9%.
Edge Federation is a new computing paradigm that seamlessly interconnects the resources of multiple edge service providers. A key challenge in such systems is the deployment of latency-critical and AI-based resource-intensive applications in constrained devices. To address this challenge, we propose a novel memory-efficient deep learning based model, namely generative optimization networks (GON). Unlike GANs, GONs use a single network to both discriminate input and generate samples, significantly reducing their memory footprint. Harnessing the low memory footprint of GONs, we propose a decentralized fault-tolerance method called DRAGON that runs simulations (as per a digital modeling twin) to quickly predict and optimize the performance of the edge federation. Extensive experiments with real-world edge computing benchmarks on multiple Raspberry-Pi based federated edge configurations show that DRAGON can outperform the baseline methods in fault-detection and Quality of Service (QoS) metrics. Specifically, the proposed method gives a higher F1 score for fault-detection than the best deep learning (DL) method, while consuming lower memory than the heuristic methods. This allows improvements in energy consumption, response time and service level agreement violations by up to 74, 63 and 82 percent, respectively.
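The core GON idea, one network that both scores inputs and generates samples, can be illustrated in one dimension. This sketch replaces the neural discriminator with a toy score function and uses numerical gradients; the function name, learning rate, and step count are illustrative choices, not the paper's.

```python
import random

def gon_generate(score, x0=0.0, steps=200, lr=0.05, eps=1e-4):
    """Generative Optimization Network idea in one dimension: instead of a
    separate generator network, start from noise and climb the discriminator's
    score surface by (numerical) gradient ascent until the sample looks real."""
    x = x0
    for _ in range(steps):
        grad = (score(x + eps) - score(x - eps)) / (2 * eps)
        x += lr * grad
    return x

# Toy 'discriminator': real data is centred at 3.0, so the score peaks there.
score = lambda x: -(x - 3.0) ** 2
sample = gon_generate(score, x0=random.uniform(-1, 1))
print(round(sample, 2))  # ~3.0, the mode of the 'real' data
```

Because no generator network is kept in memory, only the single scoring network's parameters are stored, which is the source of the memory savings claimed above.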
Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to efficiently utilize compute resources to meet users' service requirements. Recently proposed scheduling methods leverage the low response times of edge computing platforms to optimize application Quality of Service (QoS). However, scheduling workflow applications in mobile edge-cloud systems is challenging due to computational heterogeneity, changing latencies of mobile devices and the volatile nature of workload resource requirements. To overcome these difficulties, it is essential, but at the same time challenging, to develop a long-sighted optimization scheme that efficiently models the QoS objectives. In this work, we propose MCDS: Monte Carlo learning using Deep Surrogate models to efficiently schedule workflow applications in mobile edge-cloud computing systems. MCDS is an Artificial Intelligence (AI) based scheduling approach that uses a tree-based search strategy and a deep-neural-network based surrogate model to estimate the long-term QoS impact of immediate actions, enabling robust optimization of scheduling decisions. Experiments on physical and simulated edge-cloud testbeds show that MCDS can improve over the state-of-the-art methods in terms of energy consumption, response time, SLA violations and cost by at least 6.13, 4.56, 45.09 and 30.71 percent, respectively.
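A minimal sketch of the Monte Carlo idea (not MCDS's actual tree search or trained DNN surrogate): estimate the long-term cost of each immediate placement by averaging surrogate-scored random rollouts, then act greedily on the estimates. The host names and cost model below are invented for illustration.

```python
import random

def mc_schedule(actions, surrogate, rollouts=50, horizon=5):
    """Monte-Carlo evaluation of immediate scheduling actions: estimate the
    long-term QoS cost of each action by averaging surrogate-scored random
    rollouts, then pick the action with the lowest estimated cost."""
    def estimate(a):
        total = 0.0
        for _ in range(rollouts):
            trace = [a] + [random.choice(actions) for _ in range(horizon - 1)]
            total += surrogate(trace)
        return total / rollouts
    return min(actions, key=estimate)

# Toy surrogate: host 'h1' is cheap, and early placements matter most.
cost = {"h0": 3.0, "h1": 1.0, "h2": 2.0}
surrogate = lambda trace: sum(cost[h] / (t + 1) for t, h in enumerate(trace))
random.seed(0)
print(mc_schedule(["h0", "h1", "h2"], surrogate))  # h1
```

The rollouts stand in for the look-ahead that lets the scheduler weigh the long-term impact of an immediate action rather than its instantaneous cost alone.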
Recently, intelligent scheduling approaches using surrogate models have been proposed to efficiently allocate volatile tasks in heterogeneous fog environments. Advances such as deterministic surrogate models, deep neural networks (DNNs) and gradient-based optimization allow low energy consumption and response times to be reached. However, deterministic surrogate models, which estimate objective values for optimization, do not consider the uncertainties in the Quality of Service (QoS) objective function, which can lead to high Service Level Agreement (SLA) violation rates. Moreover, the brittle nature of DNN training prevents such models from reaching minimal energy or response times. To overcome these difficulties, we present a novel scheduler: GOSH, i.e., Gradient based Optimization using Second order derivatives and Heteroscedastic deep surrogate models. GOSH uses a second-order gradient based optimization approach to obtain better QoS and reduce the number of iterations required to converge to a scheduling decision, subsequently lowering the scheduling time. Instead of a vanilla DNN, GOSH uses a Natural Parameter Network to approximate objective scores. Further, a Lower Confidence Bound optimization approach allows GOSH to find an optimal trade-off between greedy minimization and uncertainty reduction by employing error-based exploration. Thus, GOSH and its co-simulation based extension, GOSH*, can adapt quickly and reach better objective scores than baseline methods. We show that GOSH* reaches better objective scores than GOSH, but is suitable only for high resource availability settings, whereas GOSH is apt for limited resource settings. Real system experiments for both GOSH and GOSH* show significant improvements against state-of-the-art schemes in terms of energy consumption, response time and SLA violations of up to 18, 27 and 82 percent, respectively.
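The second-order optimization over a Lower Confidence Bound can be sketched in one dimension. The surrogate here is a hand-written mean/uncertainty pair rather than a Natural Parameter Network, and the Newton iteration uses numerical derivatives; all names and constants are illustrative.

```python
def newton_minimize(f, x0, steps=20, eps=1e-4):
    """Second-order (Newton) descent with numerical derivatives: each step
    divides the gradient by the local curvature, typically converging in far
    fewer iterations than first-order gradient descent."""
    x = x0
    for _ in range(steps):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        h = (f(x + eps) - 2 * f(x) + f(x - eps)) / eps ** 2
        if abs(h) < 1e-9:
            break
        x -= g / h
    return x

# Heteroscedastic surrogate: a predicted mean QoS cost plus input-dependent
# uncertainty; the Lower Confidence Bound trades exploitation for exploration.
mu = lambda x: (x - 2.0) ** 2
sigma = lambda x: 0.1 * abs(x)
lcb = lambda x, k=1.0: mu(x) - k * sigma(x)
x_star = newton_minimize(lcb, x0=5.0)
print(round(x_star, 2))  # 2.05: pulled slightly past the mean's minimum
```

Minimizing the LCB rather than the mean alone biases the search toward points whose outcome is uncertain, which is the error-based exploration mentioned above.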
Time series anomaly detection has applications in a wide range of research fields and applications, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open issues in research and challenges faced while adopting deep anomaly detection models.
In recent years, with the wide spread of sensors and smart devices, the data generation speed of Internet of Things (IoT) systems has increased dramatically. In IoT systems, massive volumes of data must often be processed, transformed and analyzed to enable various IoT services and functionalities. Machine learning (ML) approaches have demonstrated their capacity for IoT data analytics. However, applying ML models to IoT data analysis tasks still faces many difficulties and challenges, specifically effective model selection, design/tuning and updating, which place great demands on experienced data scientists. Additionally, the dynamic nature of IoT data may introduce concept drift issues, causing model performance degradation. To reduce human effort, Automated Machine Learning (AutoML) has become a popular field that aims to automatically select, construct, tune and update machine learning models to achieve the best performance on specified tasks. In this paper, we review existing methods for the model selection, tuning and updating procedures in the AutoML area, to identify and summarize the optimal solutions for each step of applying ML algorithms to IoT data analytics. To justify our findings and help industrial users and researchers better implement AutoML approaches, a case study of applying AutoML to an IoT anomaly detection problem is presented in this work. Finally, we discuss and classify the challenges and research directions for this domain.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Computer architectures and systems have long been optimized for the efficient execution of machine learning (ML) models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way computer architectures and systems are designed. This carries a twofold meaning: improving designers' productivity, and completing a virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to computer architecture and system design. First, we perform a high-level taxonomy by considering the typical role that ML techniques play in architecture/system design, i.e., either for fast predictive modeling or as the design methodology. Then, we summarize the common problems in computer architecture/system design that can be solved by ML techniques, and the typical ML techniques employed to resolve each of them. Besides emphasizing computer architecture in the narrow sense, we adopt the view that data centers can be recognized as warehouse-scale computers; sketchy discussions are provided on adjacent computer systems topics such as code generation and compilers; we also pay attention to how ML techniques can aid and transform design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
Modern industrial facilities generate large volumes of raw sensor data during the production process. This data is used to monitor and control the processes, and can be analyzed to detect and predict process abnormalities. Typically, the data has to be annotated by experts to be usable in predictive modeling. Most of today's research focuses either on unsupervised anomaly detection algorithms or on supervised methods that require manually annotated data. These studies are often conducted on data generated by process simulators for a narrow class of events, and the proposed algorithms are rarely verified on publicly available datasets. In this paper, we propose a novel method for unsupervised fault detection and diagnosis on industrial chemical sensor data. We demonstrate our model's performance on two publicly available datasets of the Tennessee Eastman Process with various fault types. The results show that our method significantly outperforms existing approaches (+0.2-0.3 TPR at a fixed FPR) and detects most of the process faults without using expert annotation. Additionally, we performed experiments to show that our method is suitable for real-world applications where the number of fault types is not known in advance.
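The "+0.2-0.3 TPR at a fixed FPR" comparison can be made concrete with a small sketch of the metric itself. The anomaly scores below are invented, and the function is a simplified quantile-threshold version of the evaluation.

```python
def tpr_at_fixed_fpr(scores_normal, scores_fault, fpr=0.05):
    """Pick the detection threshold so that the false-positive rate on normal
    data is at most `fpr`, then report the true-positive rate on faulty data:
    the comparison metric quoted in the abstract."""
    idx = min(int((1 - fpr) * len(scores_normal)), len(scores_normal) - 1)
    cut = sorted(scores_normal)[idx]
    return sum(s > cut for s in scores_fault) / len(scores_fault)

# Hypothetical detector scores: normal operation vs. fault episodes.
normal = [0.1, 0.2, 0.15, 0.3, 0.25, 0.12, 0.18, 0.22, 0.28, 0.16]
fault = [0.9, 0.8, 0.2, 0.85, 0.95]
print(tpr_at_fixed_fpr(normal, fault, fpr=0.1))  # 0.8
```

Fixing the FPR on normal data makes detectors with different score scales directly comparable, which is why the paper reports TPR gains at a fixed FPR.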
With the vigorous development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are changing people's lifestyles and improving production efficiency. However, the huge amount of computation and data generated at the network edge has become a major bottleneck, and the traditional cloud-based computing mode can no longer meet the real-time processing requirements of such tasks. To address this problem, Edge Intelligence (EI), which embeds AI model training and inference capabilities into the network edge, has become a cutting-edge direction in the AI field. Moreover, collaborative DNN inference among the cloud, edge and end devices provides a promising way to boost EI. Nevertheless, EI-oriented collaborative DNN inference is still in its early stage, lacking a systematic classification and discussion of existing research efforts. Therefore, we have conducted a comprehensive survey of recent studies on EI-oriented collaborative DNN inference. In this paper, we first review the background and motivation of EI. Then, we classify four typical collaborative DNN inference paradigms for EI and analyze their characteristics and key technologies. Finally, we summarize the current challenges of collaborative DNN inference, discuss future development trends and provide future research directions.
Advances in machine learning have created new opportunities to bring intelligence to low-end internet-of-things nodes such as microcontrollers. The high memory and compute footprints of conventional machine learning deployments hinder their direct use on ultra-resource-constrained microcontrollers. This paper highlights the unique requirements of enabling onboard machine learning for microcontroller-class devices. Researchers use specialized model development workflows for resource-limited applications to ensure that the compute and latency budgets stay within device limits while still maintaining the desired performance. We characterize a broadly applicable closed-loop workflow for machine learning model development for microcontroller-class devices, and show that several classes of applications adopt specific instances of it. We present qualitative and numerical insights into different stages of model development by showcasing multiple use cases. Finally, we identify open research challenges and unresolved questions demanding careful consideration going forward.
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
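The aggregation step described above, where clients send model updates and the server combines them, is commonly realized as federated averaging. A minimal sketch follows, with plain lists standing in for model weight vectors; FedAvg is the canonical algorithm, though the survey covers many variants.

```python
def fed_avg(updates, sizes):
    """Federated averaging: the server combines client model updates weighted
    by each client's local dataset size; raw data never leaves the devices."""
    total = sum(sizes)
    dim = len(updates[0])
    return [sum(w[i] * n for w, n in zip(updates, sizes)) / total
            for i in range(dim)]

# Three clients send weight vectors trained on 10, 30 and 60 local samples.
clients = [[1.0, 0.0], [0.5, 1.0], [0.0, 2.0]]
print(fed_avg(clients, [10, 30, 60]))  # [0.25, 1.5]
```

Weighting by dataset size means the global model reflects the empirical data distribution across clients, even though the server never sees a single raw sample.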
We present a new four-pronged approach to build firefighters' situational awareness, for the first time in the literature. We construct a series of deep learning frameworks built on top of one another to enhance the safety, efficiency and successful completion of rescue missions conducted by firefighters in emergency first-response settings. First, we use a deep convolutional neural network (CNN) system to classify and identify objects of interest from thermal imagery in real time. Next, we extend this CNN framework with object detection, tracking and segmentation using a Mask RCNN framework, and with scene description using a multimodal natural language processing (NLP) framework. Third, we build a deep Q-learning based agent, immune to stress-induced disorientation and anxiety, capable of making clear navigation decisions based on the observed and stored facts in live fire environments. Finally, we use a low-computational unsupervised learning technique called tensor decomposition to perform meaningful feature extraction for anomaly detection in real time. With these ad hoc deep learning structures, we build the backbone of an artificial intelligence system for firefighters' situational awareness. To bring the designed system into use by firefighters, we designed a physical structure where the processed results are used as inputs in the creation of an augmented reality capable of advising firefighters of their location and the key features around them that are critical to the rescue operation at hand, as well as a path planning feature that acts as a virtual guide to assist disoriented first responders in getting back to safety. When combined, these four approaches present a novel method of information understanding, transfer and synthesis that could dramatically improve firefighter response and efficacy and reduce loss of life.
The Internet of Medical Things (IoMT) allows the use of sensors to collect physiological data that is then transmitted to remote servers, enabling physicians and health professionals to analyze this data continuously and permanently and to detect diseases at an early stage. However, transmitting data over wireless communication exposes it to cyberattacks, and the sensitive and private nature of this data can represent a prime interest for attackers. Traditional security methods are ineffective on devices with limited storage and computing capabilities. On the other hand, using machine learning for intrusion detection can provide an adaptive security response suited to the requirements of IoMT systems. In this context, this paper provides a comprehensive survey of how machine learning (ML) based intrusion detection systems can address security and privacy issues in IoMT systems. To this end, a generic three-layer architecture of IoMT is provided, along with the security requirements of IoMT systems. Then, the various threats that can affect IoMT security are presented, and the advantages, disadvantages, methods and datasets used in each ML-based solution are identified. Finally, some challenges and limitations of applying ML at each layer of IoMT are discussed, which can serve as future research directions.
Non-intrusive load monitoring (NILM) is the task of disaggregating the total power consumption into its individual sub-components. Over the years, signal processing and machine learning algorithms have been combined to achieve this goal. Many publications and extensive research efforts have been devoted to state-of-the-art methods. The initial interest of the scientific community in formulating and describing the NILM problem with machine learning tools has shifted towards a more practical NILM. Nowadays, we are in the mature NILM period, where NILM is attempted in real-life application scenarios. Therefore, the complexity of the algorithms, their transferability, reliability, practicality and overall trustworthiness are the main issues of concern. This review narrows the gap between the early, immature NILM era and the mature one. In particular, this paper provides a comprehensive literature review of NILM methods for residential appliances only. It analyzes, summarizes and presents the outcomes of a large number of recently published scholarly articles. Furthermore, it discusses the highlights of these methods and introduces the research dilemmas that researchers should consider in applying NILM methods. Finally, we show the need to transfer traditional classification models into a practical and trustworthy framework.
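The disaggregation task itself can be illustrated with a deliberately naive steady-state sketch: brute force over appliance subsets. The appliance ratings are invented, and the practical NILM methods reviewed above rely on signal processing and learned models rather than exhaustive search.

```python
from itertools import combinations

def disaggregate(total, appliances, tol=5.0):
    """Brute-force NILM on steady-state power: find the appliance subset whose
    rated powers best explain the aggregate meter reading, or None if nothing
    comes within `tol` watts."""
    best, best_err = (), float("inf")
    names = list(appliances)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            err = abs(total - sum(appliances[a] for a in combo))
            if err < best_err:
                best, best_err = combo, err
    return set(best) if best_err <= tol else None

loads = {"fridge": 120.0, "kettle": 2000.0, "tv": 90.0, "lamp": 40.0}
print(disaggregate(2212.0, loads))  # {'fridge', 'kettle', 'tv'}
```

The exponential cost of the subset search is one reason the field moved to the learned disaggregation models this review surveys.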
Efficient and accurate incident prediction in spatio-temporal systems is critical to minimize service downtime and optimize performance. This work aims to utilize historic data to predict and diagnose incidents using spatio-temporal forecasting. We consider the specific use case of road traffic systems, where incidents take the form of anomalous events such as accidents or broken-down vehicles. To tackle this, we develop a neural model, called RadNet, which forecasts system parameters such as average vehicle speeds for a future timestep. As such systems largely follow daily or weekly periodicity, we compare RadNet's predictions against historical averages to label incidents. Unlike prior work, RadNet infers spatial and temporal trends in both permutations, finally combining the dense representations before forecasting. This facilitates informed inference and more accurate incident detection. Experiments with two publicly available and one new road traffic dataset show that the proposed model gives up to 8% higher prediction F1 scores compared to the state-of-the-art methods.
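The "compare predictions with historical behaviour to label incidents" step can be sketched as residual thresholding. This is a simplification of RadNet's actual labeling; the speeds and the 3-sigma rule here are illustrative.

```python
def detect_incidents(observed, predicted, hist_std, k=3.0):
    """Label a time slot anomalous when the observed speed deviates from the
    model's forecast by more than k historical standard deviations."""
    return [abs(o - p) > k * s
            for o, p, s in zip(observed, predicted, hist_std)]

observed = [58.0, 60.0, 22.0, 59.0]   # km/h; slot 2 has an accident
predicted = [57.0, 59.0, 58.0, 60.0]  # forecasts from the periodic model
hist_std = [4.0, 4.0, 4.0, 4.0]       # historical variability per slot
print(detect_incidents(observed, predicted, hist_std))  # [False, False, True, False]
```

The better the forecaster captures daily and weekly periodicity, the tighter the residuals on normal days, and hence the more sensitive this labeling becomes.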
Emerging real-time multi-model ML (RTMM) workloads such as AR/VR and drone control often involve dynamic behaviors at various levels: task, model, and layer (or ML operator) within a model. Such dynamic behaviors pose new challenges to the system software in an ML system because, unlike with traditional ML workloads, the overall system load is unpredictable. Also, real-time processing requires meeting deadlines, and multi-model workloads involve highly heterogeneous models. As RTMM workloads often run on resource-constrained devices (e.g., VR headsets), developing an effective scheduler is an important research problem. Therefore, we propose a new scheduler, SDRM3, that effectively handles various dynamicity in RTMM-style workloads targeting multi-accelerator systems. To make scheduling decisions, SDRM3 quantifies the unique requirements of RTMM workloads and utilizes the quantified scores to drive scheduling decisions, considering the current system load and other inference jobs on different models and input frames. SDRM3 has tunable parameters that provide fast adaptivity to dynamic workload changes based on a gradient-descent-like online optimization, which typically converges within five steps for new workloads. In addition, we propose a method to exploit model-level dynamicity based on Supernet, which dynamically selects a proper sub-network in a Supernet based on the system load, exploiting the trade-off between scheduling effectiveness and model performance (e.g., accuracy). In our evaluation on five realistic RTMM workload scenarios, SDRM3 reduces the overall UXCost, an energy-delay-product (EDP)-equivalent metric for real-time applications defined in the paper, by 37.7% and 53.2% on geometric mean (up to 97.6% and 97.1%) compared to state-of-the-art baselines, which shows the efficacy of our scheduling methodology.
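A toy version of score-driven dispatch follows. This is plain least-slack-first, far simpler than SDRM3's UXCost-driven scheduler, with invented jobs; it only illustrates how per-job scores can drive the scheduling decision.

```python
def pick_task(ready, now):
    """Score each ready inference job by its urgency (slack to its deadline,
    given its expected runtime) and dispatch the most urgent one."""
    def score(task):
        slack = task["deadline"] - now - task["runtime"]
        return -slack  # least slack first
    return max(ready, key=score)

jobs = [{"name": "detect", "deadline": 40, "runtime": 10},
        {"name": "track", "deadline": 25, "runtime": 12},
        {"name": "segment", "deadline": 60, "runtime": 30}]
print(pick_task(jobs, now=0)["name"])  # track
```

Real RTMM scheduling must additionally weigh model heterogeneity, accelerator availability and per-frame load, which is what the quantified scores in SDRM3 capture.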
Due to the issue that existing wireless sensor network (WSN)-based anomaly detection methods only consider and analyze temporal features, in this paper, a self-supervised learning-based anomaly node detection method based on an autoencoder is designed. This method integrates temporal WSN data flow feature extraction, spatial position feature extraction and intermodal WSN correlation feature extraction into the design of the autoencoder to make full use of the spatial and temporal information of the WSN for anomaly detection. First, a fully connected network is used to extract the temporal features of nodes by considering a single mode from a local spatial perspective. Second, a graph neural network (GNN) is used to introduce the WSN topology from a global spatial perspective for anomaly detection and extract the spatial and temporal features of the data flows of nodes and their neighbors by considering a single mode. Then, an adaptive fusion method involving weighted summation is used to extract the relevant features between different modes. In addition, this paper introduces a gated recurrent unit (GRU) to solve the long-term dependence problem of the time dimension. Finally, the reconstructed output of the decoder and the hidden layer representation of the autoencoder are fed into a fully connected network to calculate the anomaly probability of the current system. Since the spatial feature extraction operation is advanced, the designed method can be applied to the task of large-scale network anomaly detection by adding a clustering operation. Experiments show that the designed method outperforms the baselines, and the F1 score reaches 90.6%, which is 5.2% higher than those of the existing anomaly detection methods based on unsupervised reconstruction and prediction. Code and model are available at https://github.com/GuetYe/anomaly_detection/GLSL
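The final step, turning the autoencoder's reconstruction into an anomaly probability, can be sketched as follows. The squashing function and the `tau` scale are illustrative choices, not the paper's exact formulation.

```python
import math

def anomaly_score(x, x_hat, tau=1.0):
    """Turn an autoencoder's reconstruction error into an anomaly probability:
    the worse the decoder reproduces the input, the more anomalous the node."""
    err = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return 1.0 - math.exp(-err / tau)  # in [0, 1); 0 = perfect reconstruction

# Sensor readings reconstructed well (normal) vs. badly (faulty node).
normal = anomaly_score([0.5, 0.4, 0.6], [0.52, 0.41, 0.58])
faulty = anomaly_score([0.5, 0.4, 0.6], [0.9, 0.1, 0.2])
print(normal < 0.01 < faulty)  # True
```

Because the autoencoder is trained only on (mostly normal) unlabeled traffic, high reconstruction error is itself the self-supervised anomaly signal.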
We demonstrate an end-to-end framework to improve the resilience of man-made systems to unforeseen events. The framework is based on a physics-based digital twin model and three modules tasked with real-time fault diagnosis, prognostics and reconfiguration. The fault diagnosis module uses model-based diagnosis algorithms to detect and isolate faults, and generates interventions in the system to disambiguate uncertain diagnosis solutions. We scale the fault diagnosis algorithm to the required real-time performance through the use of parallelization and surrogate models of the physics-based digital twin. The prognostics module tracks fault progressions and trains online degradation models to compute the remaining useful life of system components. In addition, we use the degradation models to assess the impact of fault progression on the operational requirements. The reconfiguration module uses PDDL-based planning endowed with semantic attachments to adjust the system controls so that the fault impact on system operation is minimized. We define a resilience metric and use the example of a fuel system model to demonstrate how the metric improves with our framework.
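The prognostics module's remaining-useful-life computation can be sketched with the simplest possible degradation model: fit a linear trend to observed component health and extrapolate to the failure threshold. This linear fit is a stand-in for the online degradation models described above; the threshold and data are illustrative.

```python
def remaining_useful_life(t, health, failure_threshold=0.2):
    """Least-squares linear fit of the degradation trend, extrapolated to the
    time at which health crosses the failure threshold; RUL is that crossing
    time minus the time of the last observation."""
    n = len(t)
    mt, mh = sum(t) / n, sum(health) / n
    slope = (sum((a - mt) * (b - mh) for a, b in zip(t, health))
             / sum((a - mt) ** 2 for a in t))
    intercept = mh - slope * mt
    t_fail = (failure_threshold - intercept) / slope
    return t_fail - t[-1]

# Health degrades from 1.0 towards the 0.2 failure threshold over time.
print(remaining_useful_life([0, 1, 2, 3], [1.0, 0.9, 0.8, 0.7]))
```

The framework would re-fit such a model as new health observations arrive, so the RUL estimate (and hence the reconfiguration decisions) stay current with the actual fault progression.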