Semiconductor lasers, one of the key components of optical communication systems, have been evolving rapidly to meet the requirements of next-generation optical networks with respect to high speed, low power consumption, small form factor, etc. However, these demands have brought severe challenges to semiconductor laser reliability. Therefore, a great deal of attention has been devoted to improving reliability and thereby ensuring reliable transmission. In this paper, a predictive maintenance framework using machine learning techniques is proposed for real-time health monitoring and prognosis of semiconductor lasers, thus enhancing their reliability. The proposed approach is composed of three stages: i) real-time performance degradation prediction, ii) degradation detection, and iii) remaining useful life (RUL) prediction. First, an attention-based gated recurrent unit (GRU) model is adopted for real-time prediction of performance degradation. Then, a convolutional autoencoder is used to detect degradation or abnormal behavior of a laser, given the predicted degradation performance values. Once an abnormal state is detected, an attention-based deep learning RUL prediction model is applied. Afterwards, the estimated RUL serves as input for decision making and maintenance planning. The proposed framework is validated using experimental data derived from accelerated aging tests conducted on semiconductor tunable lasers. The proposed approach achieves very good degradation performance prediction capability with a small root mean square error (RMSE) of 0.01, a good anomaly detection accuracy of 94.24%, and better RUL estimation capability than existing ML-based laser RUL prediction models.
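As a rough illustration of the first stage only, the sketch below shows one way an attention-based GRU regressor for next-step degradation prediction could be wired up in PyTorch; the window length, hidden size, and single monitored performance parameter are placeholder assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class AttentionGRURegressor(nn.Module):
    """GRU encoder with additive attention over time steps, predicting the
    next value of a monitored laser performance parameter (illustrative)."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn_score = nn.Linear(hidden, 1)      # scores each time step
        self.head = nn.Linear(hidden, 1)            # regression output

    def forward(self, x):                           # x: (batch, time, features)
        h, _ = self.gru(x)                          # (batch, time, hidden)
        weights = torch.softmax(self.attn_score(h), dim=1)   # (batch, time, 1)
        context = (weights * h).sum(dim=1)          # attention-weighted summary
        return self.head(context).squeeze(-1)       # (batch,)

# Toy usage: sliding windows of a monitored parameter -> next-step prediction.
model = AttentionGRURegressor()
windows = torch.randn(32, 50, 1)                    # 32 windows of 50 samples
pred = model(windows)
loss = nn.functional.mse_loss(pred, torch.randn(32))
loss.backward()
```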
Semiconductor lasers have been rapidly evolving to meet the demands of next-generation optical networks. This imposes much more stringent requirements on laser reliability, which is dominated by degradation mechanisms (e.g., sudden degradation) that limit the semiconductor laser lifetime. Physics-based approaches are often used to characterize the degradation behavior analytically, yet explicit domain knowledge and accurate mathematical models are required. Building such models can be very challenging due to a lack of full understanding of the complex physical processes inducing the degradation under various operating conditions. To overcome the aforementioned limitations, we propose a new data-driven approach, extracting useful insights from the operational monitored data to predict the degradation trend without requiring any specific knowledge or using any physical model. The proposed approach is based on an unsupervised technique, a conditional variational autoencoder, and is validated using vertical-cavity surface-emitting laser (VCSEL) and tunable edge-emitting laser reliability data. The experimental results confirm that our model (i) achieves good degradation prediction and generalization performance by yielding an F1 score of 95.3%, (ii) outperforms several baseline ML-based anomaly detection techniques, and (iii) helps to shorten the aging tests by predicting the failed devices early, before the end of the test, thereby saving costs.
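For orientation, a minimal conditional variational autoencoder is sketched below in PyTorch; the window length, latent size, and the use of operating conditions as the conditioning variable are illustrative assumptions rather than the paper's actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Minimal CVAE: encode a degradation window x conditioned on c,
    then reconstruct it from the latent code and the same condition."""
    def __init__(self, x_dim=50, c_dim=3, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, hidden), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hidden, z_dim), nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

x, c = torch.randn(16, 50), torch.randn(16, 3)   # windows + operating conditions
model = ConditionalVAE()
x_hat, mu, logvar = model(x, c)
cvae_loss(x_hat, x, mu, logvar).backward()
# At test time, a high reconstruction error would flag a likely degrading device.
```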
Time series anomaly detection has applications in a wide range of research fields and applications, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey focuses on providing a structured and comprehensive overview of state-of-the-art deep learning-based time series anomaly detection models. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, their advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. Finally, it summarises open issues in research and challenges faced when adopting deep anomaly detection models.
Earthquakes, as natural phenomena, have historically caused damage and loss of human life. Earthquake prediction is an essential aspect of any societal planning; it can increase public preparedness and reduce damage to a large extent. However, due to the stochastic character of earthquakes and the challenge of achieving an efficient and reliable model for earthquake prediction, efforts so far have been insufficient, and new methods are needed to tackle this problem. Aware of these issues, this paper proposes a novel prediction method based on attention mechanism (AM), convolutional neural network (CNN), and bidirectional long short-term memory (BiLSTM) models, which can predict the number and maximum magnitude of earthquakes in each region of mainland China based on that region's earthquake catalog. The model exploits LSTM and CNN with an attention mechanism to better focus on effective earthquake characteristics and produce more accurate predictions. First, a zero-order hold technique is applied as preprocessing on the earthquake data so that the model's input data are more suitable. Second, to make effective use of spatial information and reduce the dimensionality of the input data, the CNN is used to capture the spatial dependencies among the earthquake data. Third, a Bi-LSTM layer is used to capture the temporal dependencies. Fourth, an AM layer is introduced to highlight important features and achieve better prediction performance. The results show that the proposed method has better performance and generalization ability than other prediction methods.
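The sketch below illustrates, under assumed input sizes, how a CNN, Bi-LSTM, and attention layer could be stacked to map a window of regional catalog features to the predicted number and maximum magnitude of earthquakes; it is a simplified stand-in, not the authors' model.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    """Illustrative CNN -> Bi-LSTM -> attention stack predicting the number
    and maximum magnitude of earthquakes for one region (placeholder sizes)."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 2)        # [count, max magnitude]

    def forward(self, x):                           # x: (batch, time, features)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)   # local/spatial features
        h, _ = self.bilstm(h)                       # temporal dependencies
        w = torch.softmax(self.attn(h), dim=1)      # attention over time steps
        return self.head((w * h).sum(dim=1))

model = CNNBiLSTMAttention()
catalog_windows = torch.randn(4, 24, 8)             # 4 regions x 24 periods x 8 features
print(model(catalog_windows).shape)                 # torch.Size([4, 2])
```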
Anomaly detection for multivariate time series is meaningful for system behavior monitoring. This paper proposes an anomaly detection method based on unsupervised short- and long-term mask representation learning (SLMR). The main idea is to extract the short-term local dependency patterns and long-term global trend patterns of the multivariate time series using multi-scale residual convolutions and gated recurrent units (GRU), respectively. Moreover, our method can comprehend temporal contexts and feature correlations by combining spatio-temporal masked self-supervised representation learning with sequence splitting. Considering that the importance of features differs, we introduce an attention mechanism to adjust the contribution of each feature. Finally, a prediction-based model and a reconstruction-based model are integrated to focus on single-timestamp prediction and the latent representation of the time series. Experiments show that our method outperforms other state-of-the-art models on three real-world datasets. Further analysis shows that our method performs well in terms of interpretability.
Several data-driven approaches exist for modeling time series data, including traditional regression-based modeling approaches (i.e., ARIMA). More recently, deep learning techniques have been introduced and explored in the context of time series analysis and forecasting. The main research question investigated is how these variants of deep learning techniques perform in forecasting time series data. This paper compares two prominent deep learning modeling techniques: the recurrent neural network (RNN)-based long short-term memory (LSTM) and the convolutional neural network (CNN)-based temporal convolutional network (TCN) are compared, and their performance and training time are reported. According to our experimental results, both modeling techniques perform comparably, with the TCN-based model slightly outperforming the LSTM. In addition, the CNN-based TCN model builds a stable model faster than the RNN-based LSTM model.
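As a minimal illustration of what distinguishes a TCN from a recurrent model, the sketch below shows a dilated causal convolution block, the basic building unit of a TCN; the channel count and depth are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated causal convolution block, the building unit of a TCN:
    left-pad so the output at time t depends only on inputs up to t."""
    def __init__(self, channels=32, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return torch.relu(out) + x              # residual connection

# Stack blocks with doubling dilation to grow the receptive field exponentially.
tcn = nn.Sequential(*[CausalConvBlock(dilation=2 ** i) for i in range(4)])
x = torch.randn(8, 32, 100)
print(tcn(x).shape)                             # torch.Size([8, 32, 100])
```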
Unsupervised anomaly detection aims to build models that effectively detect unseen anomalies by training only on normal data. Although previous reconstruction-based methods have made fruitful progress, their generalization ability is limited due to two critical challenges. First, the training dataset only contains normal patterns, which limits the model's generalization ability. Second, the feature representations learned by existing models often lack representativeness, which hampers the ability to preserve the diversity of normal patterns. In this paper, we propose a novel method called Adaptive Memory Network with Self-supervised Learning (AMSL) to address these challenges and enhance the generalization ability in unsupervised anomaly detection. Based on a convolutional autoencoder structure, AMSL incorporates a self-supervised learning module to learn general normal patterns and an adaptive memory fusion module to learn rich feature representations. Experiments on four public multivariate time series datasets show that AMSL significantly improves performance compared with other state-of-the-art methods. Specifically, on the largest CAP sleep stage detection dataset with 900 million samples, AMSL outperforms the second-best baseline by 4%+ in both accuracy and F1 score. Apart from the enhanced generalization ability, AMSL is also more robust against input noise.
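The toy sketch below shows only the memory-addressing idea, i.e. re-expressing the latent code as an attention-weighted combination of learned memory items before decoding; AMSL's self-supervised learning and adaptive fusion modules are omitted, and all sizes are placeholders.

```python
import torch
import torch.nn as nn

class MemoryAutoencoder(nn.Module):
    """Toy memory-augmented autoencoder: the latent code is re-expressed as an
    attention-weighted combination of learned memory items before decoding."""
    def __init__(self, x_dim=64, z_dim=16, n_mem=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(),
                                     nn.Linear(32, z_dim))
        self.memory = nn.Parameter(torch.randn(n_mem, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(),
                                     nn.Linear(32, x_dim))

    def forward(self, x):
        z = self.encoder(x)                               # (batch, z_dim)
        attn = torch.softmax(z @ self.memory.t(), dim=1)  # address memory items
        z_hat = attn @ self.memory                        # recombine normal patterns
        return self.decoder(z_hat)

model = MemoryAutoencoder()
x = torch.randn(8, 64)
recon_error = ((model(x) - x) ** 2).mean(dim=1)   # anomaly score per sample
```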
Accurate traffic flow prediction, a hotspot for intelligent transportation research, is the prerequisite for mastering traffic and making travel plans. The speed of traffic flow can be affected by road conditions, weather, holidays, etc. Furthermore, the sensors that capture traffic flow information are subject to interference from environmental factors such as illumination, collection time, occlusion, etc. Therefore, the traffic flow in a practical transportation system is complicated, uncertain, and challenging to predict accurately. This paper proposes a deep encoder-decoder prediction framework based on variational Bayesian inference. A Bayesian neural network is constructed by combining variational inference with gated recurrent units (GRU) and used as the deep neural network unit of the encoder-decoder framework to mine the intrinsic dynamics of traffic flow. Then, variational inference is introduced into the multi-head attention mechanism to avoid noise-induced deterioration of prediction accuracy. The proposed model achieves superior prediction performance on the Guangzhou urban traffic flow dataset over the benchmarks, particularly for long-term prediction.
In intelligent transportation systems, detecting anomalous traffic congestion is of paramount importance. Transportation agencies have two goals: monitoring the general traffic conditions in an area of interest and locating road segments in abnormally congested states. Modeling congestion patterns can accomplish these goals for city-wide roads, which amounts to learning the distribution of multivariate time series (MTS). However, existing works are either not scalable or unable to capture the spatial information in MTS simultaneously. To this end, we propose a principled and comprehensive framework consisting of a data-driven generative approach that performs tractable density estimation to detect traffic anomalies. Our method first clusters segments in the feature space and then uses conditional normalizing flows to identify anomalous temporal snapshots at the cluster level in an unsupervised setting. We then identify anomalies at the segment level by applying a kernel density estimator on the anomalous clusters. Extensive experiments on synthetic datasets show that our method significantly outperforms several state-of-the-art congestion anomaly detection and diagnosis methods in terms of recall and F1 score. We also use the generative model to sample labeled data, which can be used to train classifiers in a supervised setting, alleviating the lack of labeled data for anomaly detection in sparse settings.
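As a sketch of the final segment-level step only, the snippet below scores segments of a flagged cluster with a scikit-learn kernel density estimator and thresholds on low log-density; the features, bandwidth, and cut-off quantile are assumptions, and the conditional normalizing flow used for cluster-level detection is not reproduced.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Illustrative segment-level scoring: fit a kernel density estimator on the
# feature vectors of segments belonging to a cluster flagged as anomalous,
# then rank segments by low log-density.
rng = np.random.default_rng(0)
cluster_features = rng.normal(size=(200, 4))      # placeholder segment features

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(cluster_features)
log_density = kde.score_samples(cluster_features)
threshold = np.quantile(log_density, 0.05)        # bottom 5% as an example cut-off
anomalous_segments = np.where(log_density < threshold)[0]
print(anomalous_segments[:10])
```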
Anomaly detection concerns a wide range of applications such as fault detection, system monitoring, and event detection. Identifying anomalies in the metering data obtained from smart metering systems is a key task for improving the reliability, stability, and efficiency of the power system. This paper presents an anomaly detection process to find outliers observed in smart metering systems. In the proposed approach, an autoencoder using bidirectional long short-term memory (BiLSTM) is applied to find anomalous data points. It computes the reconstruction error with an autoencoder trained on non-anomalous data, and outliers to be classified as anomalies are separated from the non-anomalous data by a predefined threshold. The BiLSTM autoencoder-based anomaly detection method is tested with the metering data of four energy types (electricity/water/heating/hot water) collected from 985 households.
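A minimal sketch of such a BiLSTM autoencoder with a reconstruction-error threshold is given below; the window length, the four-channel input, and the mean-plus-three-sigma threshold rule are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class BiLSTMAutoencoder(nn.Module):
    """Sketch of a BiLSTM autoencoder for metering windows: encode a window,
    repeat the summary vector, and decode it back to the input length."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True,
                               bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                           # x: (batch, time, features)
        enc, _ = self.encoder(x)
        summary = enc[:, -1, :]                     # last step as window summary
        repeated = summary.unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(repeated)
        return self.out(dec)

model = BiLSTMAutoencoder()
windows = torch.randn(16, 24, 4)                    # e.g. 24 hourly readings, 4 utilities
recon = model(windows)
errors = ((recon - windows) ** 2).mean(dim=(1, 2))
threshold = errors.mean() + 3 * errors.std()        # example rule, not the paper's
anomalies = errors > threshold
```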
As the number of heterogeneous IP-connected devices and traffic volume increase, so does the potential for security breaches. The undetected exploitation of these breaches can bring severe cybersecurity and privacy risks. Anomaly-based intrusion detection systems (IDSs) play an essential role in network security. In this paper, we present a practical unsupervised anomaly-based deep learning detection system called ARCADE (Adversarially Regularized Convolutional Autoencoder for unsupervised network anomaly DEtection). With a convolutional autoencoder (AE), ARCADE automatically builds a profile of the normal traffic using a subset of raw bytes of a few initial packets of network flows so that potential network anomalies and intrusions can be efficiently detected before they cause more damage to the network. ARCADE is trained exclusively on normal traffic. An adversarial training strategy is proposed to regularize and decrease the AE's capability to reconstruct network flows that are out of the normal distribution, thereby improving its anomaly detection capabilities. The proposed approach is more effective than state-of-the-art deep learning approaches for network anomaly detection. Even when examining only two initial packets of a network flow, ARCADE can effectively detect malware infection and network attacks. ARCADE has 20 times fewer parameters than the baselines, achieving significantly faster detection speed and reaction time.
With the latest advances in deep learning generative models, it did not take long for them to excel in the time series domain as well. Deep neural networks used to work with time series depend heavily on the breadth and consistency of the datasets used in training. These characteristics are usually not abundant in the real world, where data are often limited and usually come with privacy constraints that must be guaranteed. Therefore, an effective way to increase the amount of data is to use data augmentation (DA) techniques, either by adding noise or permutations or by generating new synthetic data. This work systematically reviews the current state of the art in the field to provide an overview of all available algorithms and proposes a taxonomy of the most relevant research. The efficiency of the different variants is evaluated; as an essential part of the process, the different metrics for evaluating performance and the main problems concerning each model are analyzed. The ultimate aim of this study is to summarize the evolution and performance of those areas that produce better results, in order to guide future researchers in this field.
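Two of the simplest augmentation transforms mentioned above, noise injection (jittering) and segment permutation, could look roughly as follows; the noise scale and number of segments are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(series, sigma=0.03):
    """Add Gaussian noise to a (time, features) array."""
    return series + rng.normal(scale=sigma, size=series.shape)

def permute_segments(series, n_segments=4):
    """Split the series into time segments and reassemble them in random order."""
    segments = np.array_split(series, n_segments, axis=0)
    order = rng.permutation(n_segments)
    return np.concatenate([segments[i] for i in order], axis=0)

original = np.sin(np.linspace(0, 6 * np.pi, 120)).reshape(-1, 1)
augmented = [jitter(original), permute_segments(original)]
print(augmented[0].shape, augmented[1].shape)      # both (120, 1)
```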
Time series analysis has achieved great success in diverse applications such as cybersecurity, environmental monitoring, and medical informatics. Learning the similarity between different time series is a crucial problem, since it is the foundation of downstream analyses such as clustering and anomaly detection. Due to the complex temporal dynamics of time series generated by event-triggered sensing, which is common in various applications including autonomous driving, interactive healthcare, and smart home automation, it is often unclear which distance metric is appropriate for similarity learning. The overall goal of this paper is to develop an unsupervised learning framework that is capable of learning task-aware similarities among unlabeled event-triggered time series. From a machine learning vantage point, the proposed framework harnesses the power of both a hierarchical multi-scale sequence autoencoder and a Gaussian mixture model (GMM) to effectively learn low-dimensional representations of the time series. Finally, the obtained similarity measure can be easily visualized for interpretation. The proposed framework aspires to offer a stepping stone that gives rise to a systematic approach to modeling and learning similarities among a multitude of event-triggered time series. Through extensive qualitative and quantitative experiments, it is revealed that the proposed method significantly outperforms state-of-the-art approaches.
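A heavily simplified stand-in for the second half of such a pipeline is sketched below: assuming the series have already been embedded (random placeholders here instead of the hierarchical multi-scale sequence autoencoder), a GMM is fitted and series are compared through their posterior membership vectors; the cosine-based similarity is one possible choice, not necessarily the paper's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder embeddings standing in for autoencoder latent codes.
rng = np.random.default_rng(1)
embeddings = rng.normal(size=(300, 8))             # 300 series, 8-d latent codes

gmm = GaussianMixture(n_components=5, random_state=0).fit(embeddings)
memberships = gmm.predict_proba(embeddings)        # (300, 5) soft assignments

def similarity(i, j):
    """Cosine similarity of posterior membership vectors (one possible metric)."""
    a, b = memberships[i], memberships[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(round(similarity(0, 1), 3))
```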
Individual appliance load and energy consumption feedback is one of the important approaches for encouraging users to save energy in residences. It can help identify faulty appliances and the energy wasted by appliances left on when not in use. The main challenge is to identify and estimate the energy consumption of individual appliances without intrusive sensors on each appliance. Non-intrusive load monitoring (NILM), or energy disaggregation, is a blind source separation problem that requires a system to estimate the electricity usage of individual appliances from the aggregated household energy consumption. In this paper, we propose a deep neural network-based approach for performing load disaggregation on low-frequency power data obtained from residential households. We combine a series of one-dimensional convolutional neural networks and long short-term memory (1D CNN-LSTM) to extract features that can identify active appliances and retrieve their power consumption given the aggregated household power values. We use the CNN to extract features from the mains readings in a given time frame and then use these features to classify whether a given appliance is active in that time period. Following this, the extracted features are used to model a generation problem using LSTM. We train the LSTM to generate the disaggregated energy consumption of a particular appliance. Our neural network is capable of generating detailed feedback on the demand side, providing vital insights to the end user. The algorithm is designed for low-power offline devices such as the ESP32. Empirical calculations show that our model outperforms the state of the art on the Reference Energy Disaggregation Dataset (REDD).
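A rough sketch of a single-appliance 1D CNN-LSTM disaggregator in PyTorch is shown below; the kernel sizes, window length, and the two output heads (window-level on/off classification plus per-step power regression) are illustrative assumptions rather than the authors' exact network.

```python
import torch
import torch.nn as nn

class CNNLSTMDisaggregator(nn.Module):
    """Illustrative 1D CNN-LSTM for one target appliance: classify whether the
    appliance is active in the window and regress its per-step consumption."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.on_off = nn.Linear(hidden, 1)          # appliance active in window?
        self.power = nn.Linear(hidden, 1)           # per-step appliance power

    def forward(self, mains):                       # mains: (batch, time)
        h = self.cnn(mains.unsqueeze(1)).transpose(1, 2)   # (batch, time, 32)
        h, _ = self.lstm(h)
        state_logit = self.on_off(h[:, -1, :]).squeeze(-1)
        power = self.power(h).squeeze(-1)           # (batch, time)
        return state_logit, power

model = CNNLSTMDisaggregator()
mains = torch.randn(8, 600)                         # 8 windows of aggregate power
state_logit, appliance_power = model(mains)
```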
A "trajectory" refers to the trace generated by a moving object in geographical space, usually represented by a series of chronologically ordered points, where each point consists of a set of geospatial coordinates and a timestamp. Rapid advances in location sensing and wireless communication technologies have allowed us to collect and store massive amounts of trajectory data. Consequently, many researchers have used trajectory data to analyze the mobility of various moving objects. In this paper, we focus on the "urban vehicle trajectory", i.e., the trajectories of vehicles in urban traffic networks, and on "urban vehicle trajectory analytics". Urban vehicle trajectory analytics offers unprecedented opportunities to understand vehicle movement patterns in urban traffic networks, including both user-centric travel experiences and system-wide spatiotemporal patterns. The spatiotemporal features of urban vehicle trajectory data are structurally correlated with each other, and many previous researchers have used various methods to understand this structure. In particular, deep learning models have attracted the attention of many researchers due to their powerful function approximation and feature representation abilities. Accordingly, the purpose of this paper is to develop deep learning-based models for urban vehicle trajectory analytics so as to better understand the mobility patterns of urban traffic networks. In particular, this paper focuses on two research topics with high necessity, importance, and applicability: next location prediction and synthetic trajectory generation. In this study, we present various novel models for urban vehicle trajectory analytics using deep learning.
Remaining Useful Life (RUL) estimation plays a critical role in Prognostics and Health Management (PHM). Traditional machine health maintenance systems are often costly, requiring sufficient prior expertise, and are difficult to fit into highly complex and changing industrial scenarios. With the widespread deployment of sensors on industrial equipment, building the Industrial Internet of Things (IIoT) to interconnect these devices has become an inexorable trend in the development of the digital factory. Using the device's real-time operational data collected by IIoT to get the estimated RUL through the RUL prediction algorithm, the PHM system can develop proactive maintenance measures for the device, thus reducing maintenance costs and decreasing failure times during operation. This paper carries out research into the remaining useful life prediction model for multi-sensor devices in the IIoT scenario. We investigated the mainstream RUL prediction models and summarized the basic steps of RUL prediction modeling in this scenario. On this basis, a data-driven approach for RUL estimation is proposed in this paper. It employs a Multi-Head Attention Mechanism to fuse the multi-dimensional time-series data output from multiple sensors, in which the attention on features is used to capture the interactions between features and the attention on sequences is used to learn the weights of time steps. Then, a Long Short-Term Memory network is applied to learn the features of the time series. We evaluate the proposed model on two benchmark datasets (C-MAPSS and PHM08), and the results demonstrate that it outperforms the state-of-the-art models. Moreover, through the interpretability of the multi-head attention mechanism, the proposed model can provide a preliminary explanation of engine degradation. Therefore, this approach is promising for predictive maintenance in IIoT scenarios.
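The sketch below shows one simplified way to combine multi-head attention with an LSTM for RUL regression in PyTorch; it applies a single self-attention step over time rather than the paper's separate feature-level and sequence-level attention, and the sensor count and window length are assumed C-MAPSS-like placeholders.

```python
import torch
import torch.nn as nn

class AttentionLSTMRUL(nn.Module):
    """Sketch: multi-head self-attention fuses multi-sensor windows, an LSTM
    models the temporal dynamics, and a linear head outputs the RUL estimate."""
    def __init__(self, n_sensors=14, d_model=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(n_sensors, d_model)
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                           # x: (batch, time, sensors)
        h = self.embed(x)
        h, _ = self.attn(h, h, h)                   # self-attention across time
        h, _ = self.lstm(h)
        return self.head(h[:, -1, :]).squeeze(-1)   # RUL at the window end

model = AttentionLSTMRUL()
windows = torch.randn(32, 30, 14)                   # placeholder sensor windows
rul_pred = model(windows)
loss = nn.functional.mse_loss(rul_pred, torch.rand(32) * 125)
loss.backward()
```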
With the digitalization of energy systems, sensors and smart meters are increasingly used to monitor production, operation, and demand. Anomaly detection based on smart meter data is crucial for identifying potential risks and unusual events at an early stage, which can serve as a reference for timely initiating appropriate actions and improving management. However, smart meter data from energy systems often lack labels and contain noise and diverse patterns without distinct periodicity. Meanwhile, the vague definition of anomalies in different energy scenarios and the highly complex temporal correlations pose great challenges to anomaly detection. Many traditional unsupervised anomaly detection algorithms (such as cluster-based or distance-based models) are not robust to noise and do not fully exploit the temporal dependencies within a time series or the dependencies among multiple variables (sensors). This paper proposes an unsupervised anomaly detection method based on a variational recurrent autoencoder with an attention mechanism. With the "dirty" data from smart meters, our method pre-detects missing values and global anomalies to shrink their contribution during training. This paper makes a quantitative comparison with a VAE-based baseline approach and four other unsupervised learning methods, demonstrating its effectiveness and superiority. The paper further validates the proposed method through a real case study of detecting abnormal supply water temperatures of an industrial heating plant.
Reed relays are fundamental components in functional testing, which is closely associated with the successful quality inspection of electronic products. To provide accurate remaining useful life (RUL) estimation for reed relays, a hybrid deep learning network with degradation pattern clustering is proposed based on the following three considerations. First, multiple degradation behaviors are observed for reed relays, and hence a dynamics-based K-means clustering is offered to distinguish the degradation patterns from each other. Second, although proper feature selection is of great significance, few studies are available to guide the selection; the proposed method recommends operational rules for easy implementation. Third, a neural network for remaining useful life estimation (RULNet) is proposed to address the weakness of convolutional neural networks (CNN) in capturing the temporal information of sequential data, incorporating temporal correlation capability after the high-level feature representation of the convolution operation. In this way, three variants of RULNet are constructed with health indicators, features with self-organizing maps, or features with curve fitting. Finally, the proposed hybrid model is compared with typical baseline models, including CNN and the long short-term memory network (LSTM), through a practical reed relay dataset with two distinct degradation modes. The results of both degradation cases show that the proposed method outperforms CNN and LSTM in terms of root mean squared error.
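The snippet below sketches only the clustering step, grouping fixed-length degradation trajectories into patterns with plain Euclidean K-means as a simplification; the synthetic curves and cluster count are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rough sketch: group reed-relay degradation trajectories (e.g. a health
# indicator resampled to a fixed length) into distinct degradation patterns
# before training a pattern-specific RUL model.
rng = np.random.default_rng(0)
slow = np.cumsum(rng.normal(0.5, 0.1, size=(20, 100)), axis=1)   # placeholder curves
fast = np.cumsum(rng.normal(1.5, 0.1, size=(20, 100)), axis=1)
trajectories = np.vstack([slow, fast])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))    # each cluster would then get its own RUL model
```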
With the evolution of power systems into more intelligent and interactive systems, with increasing flexibility and a larger penetration of renewable energy sources, demand prediction at a short-term resolution will inevitably become more and more crucial in designing and managing the future grid, especially at the individual household level. Projecting the electricity demand of a single energy user, as opposed to the aggregated power consumption of residential load on a wide scale, is difficult because of a considerable number of volatile and uncertain factors. This paper proposes a customized Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) architecture to address this challenging problem. LSTM and GRU are comparatively newer and among the most widely adopted deep learning approaches. The electricity consumption datasets were obtained from individual household smart meters. The comparison shows that the LSTM model performs better for home-level forecasting than the alternative prediction technique, GRU, in this case. To contrast the NN-based models with a conventional statistical technique, an ARIMA-based model was also developed and benchmarked against the LSTM and GRU model outcomes in this study to show the performance of the proposed models on the collected time series data.
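A compact way to run such a comparison is to build the same forecaster twice with only the recurrent cell swapped, as in the hedged sketch below; the window length, hidden size, and random data are placeholders, and the ARIMA baseline is omitted.

```python
import torch
import torch.nn as nn

def build_forecaster(cell="lstm", n_features=1, hidden=50):
    """Return a small recurrent forecaster; swap the recurrent cell to compare
    LSTM and GRU on the same household-load forecasting task."""
    rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU

    class Forecaster(nn.Module):
        def __init__(self):
            super().__init__()
            self.rnn = rnn_cls(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                       # x: (batch, time, features)
            h, _ = self.rnn(x)
            return self.head(h[:, -1, :]).squeeze(-1)

    return Forecaster()

history = torch.randn(64, 48, 1)                    # 48 past half-hourly readings
target = torch.randn(64)                            # next reading
for cell in ("lstm", "gru"):
    model = build_forecaster(cell)
    loss = nn.functional.mse_loss(model(history), target)
    print(cell, float(loss))
```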
Bearings are one of the vital components of rotating machines that are prone to unexpected faults. Therefore, bearing fault diagnosis and condition monitoring are essential for reducing operational costs and downtime in numerous industries. Under various production conditions, bearings can be operated over a range of loads and speeds, which leads to different vibration patterns associated with each fault type. Normal data are plentiful, since systems usually work in the desired conditions. On the other hand, fault data are rare, and in many cases no data are recorded for the fault classes. Access to fault data is crucial for developing data-driven fault diagnosis tools that can improve both the performance and the safety of operations. To this end, a novel algorithm based on conditional generative adversarial networks (CGANs) is introduced. Trained on the normal and fault data of any actual fault condition, this algorithm generates fault data from the normal data of target conditions. The proposed method is validated on a real-world dataset, and fault data are generated for different conditions. Several state-of-the-art classifiers and visualization models are implemented to evaluate the quality of the synthesized data. The results demonstrate the efficacy of the proposed algorithm.
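A toy conditional GAN on vibration windows, showing one discriminator step and one generator step, is sketched below; the signal length, condition encoding, and network sizes are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Both networks receive a one-hot condition (e.g. a load/speed setting).
Z_DIM, COND_DIM, SIG_LEN = 32, 4, 256

generator = nn.Sequential(
    nn.Linear(Z_DIM + COND_DIM, 128), nn.ReLU(),
    nn.Linear(128, SIG_LEN), nn.Tanh())

discriminator = nn.Sequential(
    nn.Linear(SIG_LEN + COND_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(16, SIG_LEN)                     # placeholder vibration windows
cond = nn.functional.one_hot(torch.randint(0, COND_DIM, (16,)), COND_DIM).float()

# One discriminator step and one generator step (full training loop omitted).
fake = generator(torch.cat([torch.randn(16, Z_DIM), cond], dim=1))
d_loss = bce(discriminator(torch.cat([real, cond], dim=1)), torch.ones(16, 1)) + \
         bce(discriminator(torch.cat([fake.detach(), cond], dim=1)), torch.zeros(16, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

g_loss = bce(discriminator(torch.cat([fake, cond], dim=1)), torch.ones(16, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```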