The number of end devices connected through last-mile wireless links has grown considerably with the rise of smart infrastructure, and these devices require reliable functioning to support smooth and efficient business processes. To manage such massive wireless networks effectively, more advanced and accurate network monitoring and fault detection solutions are needed. In this paper, we present a first analysis of image-based representation techniques for wireless anomaly detection using recurrence plots and Gramian angular fields, and propose a new deep learning architecture that enables accurate anomaly detection. We elaborate on the design considerations for developing a resource-aware architecture and propose a new model that uses time-series-to-image transformation via recurrence plots. We show that the proposed model a) outperforms models based on Gramian angular fields by up to 14 percentage points, b) outperforms models based on dynamic time warping by up to 24 percentage points, c) outperforms classical machine learning models by up to 24 percentage points, d) outperforms or performs on par with mainstream architectures such as AlexNet and VGG11 while having more than 10 times fewer weights and up to 8 times lower computational complexity, and e) outperforms the state of the art in the respective application area by up to 55 percentage points. Finally, we also explain, on randomly selected examples, how the classifier arrives at its decisions.
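The recurrence-plot transform used above can be sketched in a few lines; this is a minimal univariate version (the function name, threshold `eps`, and toy signal are illustrative, not taken from the paper):

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary recurrence plot: R[i, j] = 1 when |x[i] - x[j]| <= eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (dist <= eps).astype(np.uint8)

# A periodic signal yields the characteristic diagonal-line texture
# that a CNN can then classify.
t = np.linspace(0, 4 * np.pi, 64)
rp = recurrence_plot(np.sin(t), eps=0.15)
```

The resulting image is symmetric with a unit diagonal, which is why compact convolutional models can work well on it.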
Energy management systems (EMS) rely on (non-)intrusive load monitoring ((N)ILM) to monitor and manage appliances, helping residents be more energy efficient and therefore more frugal. Because they are trained and evaluated on relatively limited data, the generalization ability and transfer potential of the most promising machine learning solutions for (N)ILM are not yet fully understood. In this paper, we propose a new approach for building EMS (BEMS) based on dimensionality expansion of time series combined with transfer learning. We carry out an extensive evaluation on 5 different low-frequency datasets. The proposed feature dimensionality expansion, using video-like transformation and a deep learning architecture, achieves an average weighted F1 score of 0.88 over the 29 appliances in the datasets and is up to 6 times more computationally efficient than state-of-the-art image-based approaches. Investigating the applicability of our approach for cross-dataset transfer learning, we find that 1) it achieves an average weighted F1 score of 0.80 while requiring 3 times fewer training epochs than the non-transfer approach, 2) only 230 data samples are needed to reach an F1 score of 0.75, and 3) our transfer approach outperforms the state of the art, reducing the precision drop on unseen appliances by up to 12 percentage points.
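As a rough illustration of dimensionality expansion, here is one way a 1-D load curve could be sliced into overlapping windows and stacked into a 2-D "frame"; the paper's actual video-like transformation is not specified in the abstract, so the function name and parameters are hypothetical:

```python
import numpy as np

def expand_to_frame(x, win, hop):
    """Slice a 1-D series into overlapping windows and stack them into a
    2-D array; stacking several such frames over time yields a video-like
    tensor. Window/hop values here are arbitrary."""
    x = np.asarray(x, dtype=float)
    starts = range(0, len(x) - win + 1, hop)
    return np.stack([x[s:s + win] for s in starts])

frames = expand_to_frame(np.arange(10.0), win=4, hop=2)  # shape (4, 4)
```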
Several data-driven approaches exist that enable us to model time series data, including traditional regression-based modeling approaches (e.g., ARIMA). More recently, deep learning techniques have been introduced and explored in the context of time series analysis and forecasting. The main research question investigated here is how these variants of deep learning techniques perform in forecasting time series data. This paper compares two prominent deep learning modeling techniques: the recurrent neural network (RNN)-based long short-term memory (LSTM) and the convolutional neural network (CNN)-based temporal convolutional network (TCN), and reports on their performance and training time. According to our experimental results, both modeling techniques perform comparably, with the TCN-based model slightly outperforming the LSTM. Moreover, the CNN-based TCN model builds a stable model faster than the RNN-based LSTM.
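The building block that distinguishes a TCN from an ordinary CNN is the causal dilated convolution; a minimal hand-rolled sketch (not the paper's implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal convolution: the output at time t only sees
    x[t], x[t-d], x[t-2d], ...; zero left-padding keeps the length."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(len(x))])

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially, which is what lets a TCN model long histories cheaply.
y = causal_dilated_conv([1, 2, 3, 4, 5], w=[1.0, 1.0], dilation=2)
```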
Unsupervised anomaly detection in time-series has been extensively investigated in the literature. Notwithstanding the relevance of this topic in numerous application fields, a complete and extensive evaluation of recent state-of-the-art techniques is still missing. Few efforts have been made to compare existing unsupervised time-series anomaly detection methods rigorously, and those comparisons usually consider only standard performance metrics, namely precision, recall, and F1-score. Essential aspects for assessing their practical relevance are therefore neglected. This paper proposes an original and in-depth evaluation study of recent unsupervised anomaly detection techniques in time-series. Instead of relying solely on standard performance metrics, additional yet informative metrics and protocols are taken into account. In particular, (1) more elaborate performance metrics specifically tailored for time-series are used; (2) the model size and the model stability are studied; (3) an analysis of the tested approaches with respect to the anomaly type is provided; and (4) a clear and unique protocol is followed for all experiments. Overall, this extensive analysis aims to assess the maturity of state-of-the-art time-series anomaly detection, give insights regarding their applicability under real-world setups, and provide the community with a more complete evaluation protocol.
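One example of a performance protocol tailored to time-series is point adjustment, where detecting any point inside a ground-truth anomaly segment counts as detecting the whole segment. The abstract does not name the exact metrics used, so this sketch is illustrative:

```python
import numpy as np

def point_adjust(pred, label):
    """If any point inside a ground-truth anomaly segment is flagged,
    mark the whole segment as detected."""
    pred, label = np.asarray(pred).copy(), np.asarray(label)
    i, n = 0, len(label)
    while i < n:
        if label[i] == 1:
            j = i
            while j < n and label[j] == 1:
                j += 1
            if pred[i:j].any():
                pred[i:j] = 1
            i = j
        else:
            i += 1
    return pred

def f1(pred, label):
    tp = int(((pred == 1) & (label == 1)).sum())
    fp = int(((pred == 1) & (label == 0)).sum())
    fn = int(((pred == 0) & (label == 1)).sum())
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

label = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0])
pred = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0])
adjusted = point_adjust(pred, label)  # first segment now fully credited
```

Point adjustment is known to be lenient, which is precisely why evaluations that rely on a single metric can be misleading.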
The detection of anomalies in time series data is crucial in a wide range of applications, such as system monitoring, health care, or cyber security. While the vast number of available methods already makes selecting the right method for a given application hard enough, different methods also have different strengths, e.g. regarding the type of anomalies they are able to find. In this work, we compare six unsupervised anomaly detection methods of varying complexity to answer two questions: Do the more complex methods usually perform better? And are there specific anomaly types that those methods are tailored to? The comparison is done on the UCR anomaly archive, a recent benchmark dataset for anomaly detection. We compare the six methods by analyzing the experimental results on a dataset and anomaly-type level after tuning the necessary hyperparameters for each method. Additionally, we examine the ability of individual methods to incorporate prior knowledge about the anomalies and analyse the differences between point-wise and sequence-wise features. We show with broad experiments that the classical machine learning methods achieve superior performance compared to the deep learning methods across a wide range of anomaly types.
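A sliding-window z-score detector is representative of the simple classical baselines that such comparisons find competitive; a sketch, with the window size chosen arbitrarily:

```python
import numpy as np

def sliding_zscore(x, win=20):
    """Score each point by its deviation from the mean of the preceding
    window, in units of that window's standard deviation."""
    x = np.asarray(x, dtype=float)
    scores = np.zeros(len(x))
    for t in range(win, len(x)):
        w = x[t - win:t]
        scores[t] = abs(x[t] - w.mean()) / (w.std() + 1e-12)
    return scores

x = np.concatenate([np.zeros(40), [8.0], np.zeros(9)])
scores = sliding_zscore(x)  # the spike at index 40 dominates
```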
The increasing complexity of modern high-performance computing (HPC) systems calls for automated and data-driven methods that support system administrators' efforts to increase system availability. Anomaly detection is an integral part of improving availability, as it eases the burden on system administrators and shortens the time between an anomaly and its resolution. However, current state-of-the-art detection approaches are supervised or semi-supervised, and thus require human-labeled datasets containing anomalies, which are usually impractical to collect on production HPC systems. Clustering-based unsupervised anomaly detection approaches, which aim to alleviate the need for accurate anomaly data, have so far shown poor performance. In this work, we overcome these limitations by proposing RUAD, a novel unsupervised anomaly detection model. RUAD achieves better results than the current semi-supervised and unsupervised state-of-the-art (SoA) approaches. It does so by considering temporal dependencies in the data and by including long short-term memory cells in the model architecture. The proposed approach is evaluated on the complete history of a tier-0 system (Marconi100 at CINECA, with 980 nodes). RUAD achieves an area under the curve (AUC) of 0.763 with semi-supervised training and an AUC of 0.767 with unsupervised training, improving on the SoA approach, which achieves an AUC of 0.747 with semi-supervised training and an AUC of 0.734 with unsupervised training. It also vastly outperforms the current clustering-based SoA unsupervised anomaly detection approach, which has an AUC of 0.548.
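The AUC figures reported above can be computed directly from anomaly scores (e.g., the reconstruction errors of an LSTM-based model) via the rank statistic; a small sketch:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the probability that a random anomaly outscores a random normal point,
    with half credit for ties."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Reconstruction errors of a model like RUAD would play the role of
# `scores`: anomalies should reconstruct poorly and therefore score high.
a = auc([0.9, 0.8, 0.3, 0.2, 0.1], [1, 1, 0, 0, 0])
```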
Automatic log file analysis enables the early detection of relevant incidents such as system failures. In particular, self-learning anomaly detection techniques capture patterns in log data and subsequently report unexpected log event occurrences to system operators, without the need to provide or manually model anomalous scenarios in advance. Recently, an increasing number of approaches leveraging deep learning neural networks have been proposed for this purpose. These approaches have demonstrated superior detection performance in comparison to conventional machine learning techniques and simultaneously resolve issues with unstable data formats. However, there exist many different deep learning architectures, and encoding raw and unstructured log data to be analyzed by neural networks is non-trivial. We therefore carry out a systematic literature review that provides an overview of deployed models, data pre-processing mechanisms, anomaly detection techniques, and evaluations. The survey does not quantitatively compare existing approaches but instead aims to help readers understand relevant aspects of different model architectures and emphasizes open issues for future work.
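A common way to encode parsed log data for a detector, regardless of architecture, is to count event-template occurrences per session; a minimal sketch (the template vocabulary and sessions are made up):

```python
from collections import Counter

def event_count_vectors(sessions, vocab):
    """Encode each session (a list of log-event template ids) as a
    fixed-length event-count vector."""
    index = {e: i for i, e in enumerate(vocab)}
    vecs = []
    for session in sessions:
        v = [0] * len(vocab)
        for event, count in Counter(session).items():
            if event in index:      # templates unseen at training time are dropped
                v[index[event]] = count
        vecs.append(v)
    return vecs

vocab = ["open", "read", "write", "close"]
vecs = event_count_vectors([["open", "read", "read", "close"],
                            ["open", "write", "crash"]], vocab)
```

How to handle templates outside the vocabulary (dropped here) is exactly the kind of encoding decision the survey highlights as non-trivial.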
The Internet of Things (IoT) is a system that connects physical computing devices, sensors, software, and other technologies. Data can be collected, transferred, and exchanged with other devices over the network without requiring human interaction. One challenge the development of IoT faces is the existence of anomalous data in the network. Therefore, research on anomaly detection in the IoT environment has become popular and necessary in recent years. This survey provides an overview of the current progress of different anomaly detection algorithms and how they can be applied in the context of the Internet of Things. We categorize the widely used anomaly detection machine learning and deep learning techniques in IoT into three types: clustering-based, classification-based, and deep learning-based. For each category, we introduce some state-of-the-art anomaly detection methods and evaluate the advantages and limitations of each technique.
As the Border Gateway Protocol (BGP) is the default protocol for exchanging routing reachability information on the Internet, abnormal behavior in BGP traffic is closely related to Internet anomaly events. BGP anomaly detection models ensure stable routing services on the Internet through their real-time monitoring and alerting capabilities. Previous studies focused either on the feature selection problem or on the memory characteristics of the data, while ignoring the relationships among features and the precise temporal correlations within features (whether long- or short-term dependencies). In this paper, we propose a multi-view model for capturing anomalous behavior from BGP update traffic, in which seasonal-trend decomposition using Loess (STL) is applied to reduce the noise in the original time-series data, and graph attention networks (GAT) are used to discover the relationships among features and the temporal correlations within features, respectively. Our method outperforms state-of-the-art approaches on the anomaly detection task, with average F1 scores of up to 96.3% and 93.2% on balanced and imbalanced datasets, respectively. Moreover, our model can be extended to classify multiple anomalies and to detect unknown events.
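The role STL plays here, splitting a series into trend, seasonal, and residual parts, can be illustrated with a naive moving-average decomposition (STL proper fits each component with Loess smoothing, which this sketch does not do):

```python
import numpy as np

def decompose(x, period):
    """Trend from a centred moving average, seasonal component from
    per-phase means of the detrended series, residual as what is left."""
    x = np.asarray(x, dtype=float)
    trend = np.convolve(x, np.ones(period) / period, mode="same")
    detrended = x - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, len(x) // period + 1)[:len(x)]
    residual = x - trend - seasonal
    return trend, seasonal, residual

x = np.tile([1.0, 3.0, 2.0, 4.0], 8)          # period-4 pattern
trend, seasonal, residual = decompose(x, period=4)
```

Denoising here means feeding the smoother trend/seasonal parts (or the residual, for anomaly scoring) to the downstream model instead of the raw series.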
Time series anomaly detection has applications in a wide range of research fields and applications, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection. It provides a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open issues in research and challenges faced while adopting deep anomaly detection models.
As the number of heterogeneous IP-connected devices and traffic volume increase, so does the potential for security breaches. The undetected exploitation of these breaches can bring severe cybersecurity and privacy risks. Anomaly-based \acp{IDS} play an essential role in network security. In this paper, we present a practical unsupervised anomaly-based deep learning detection system called ARCADE (Adversarially Regularized Convolutional Autoencoder for unsupervised network anomaly DEtection). With a convolutional \ac{AE}, ARCADE automatically builds a profile of the normal traffic using a subset of raw bytes of a few initial packets of network flows so that potential network anomalies and intrusions can be efficiently detected before they cause more damage to the network. ARCADE is trained exclusively on normal traffic. An adversarial training strategy is proposed to regularize and decrease the \ac{AE}'s capability to reconstruct network flows that are outside the normal distribution, thereby improving its anomaly detection capabilities. The proposed approach is more effective than state-of-the-art deep learning approaches for network anomaly detection. Even when examining only two initial packets of a network flow, ARCADE can effectively detect malware infection and network attacks. ARCADE has 20 times fewer parameters than the baselines, achieving significantly faster detection speed and reaction time.
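The raw-byte input described above, a fixed-size normalized vector built from the first bytes of a flow's first packets, might look like the following sketch; the exact packet/byte counts ARCADE uses are not given here, so the sizes are illustrative:

```python
import numpy as np

def flow_to_input(packets, n_packets=2, n_bytes=100):
    """Concatenate the first `n_bytes` of the first `n_packets` packets of
    a flow, zero-padded, scaled to [0, 1] for an autoencoder."""
    buf = np.zeros(n_packets * n_bytes)
    for i, pkt in enumerate(packets[:n_packets]):
        b = np.frombuffer(pkt[:n_bytes], dtype=np.uint8)
        buf[i * n_bytes: i * n_bytes + len(b)] = b / 255.0
    return buf

vec = flow_to_input([b"\x10" * 120, b"\xff" * 40])  # 200-dimensional input
```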
This paper presents a low-cost and highly accurate ECG monitoring system intended for personalized early arrhythmia detection on wearable mobile sensors. Earlier supervised approaches to personalized ECG monitoring require both abnormal and normal heartbeats to train a dedicated classifier. However, in a real-world scenario where the personalized algorithm is embedded in a wearable device, such training data is not available for healthy people with no history of cardiac disorders. In this study, (i) we perform a null-space analysis of the healthy signal space obtained via sparse dictionary learning and investigate how a simple null-space projection or, alternatively, a regular least-squares-based classification approach can reduce the computational complexity without sacrificing detection accuracy compared to sparse-representation-based classification. (ii) We then introduce a sparse-representation-based domain adaptation technique to project the abnormal and normal signals of other existing users onto the signal space of a new user, which enables us to train a dedicated classifier without any abnormal heartbeats from the new user. Zero-shot learning can thus be achieved without synthetic abnormal heartbeat generation. An extensive set of experiments performed on the benchmark MIT-BIH ECG dataset shows that when this domain-adaptation-based training data generator is used with a simple 1-D CNN classifier, the method outperforms prior work by a significant margin. (iii) Then, by combining (i) and (ii), we propose an ensemble classifier that further improves the performance. This approach to zero-shot arrhythmia detection achieves an average accuracy level of 98.2% and an F1 score of 92.8%. Finally, a personalized, energy-efficient ECG monitoring scheme is proposed using the above innovations.
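The least-squares classification idea in (i) reduces to measuring how far a beat lies from the subspace spanned by a healthy dictionary; a toy sketch, with a random matrix standing in for a dictionary learned by sparse dictionary learning:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 8))        # stand-in for a learned healthy dictionary

def residual_score(D, s):
    """Distance from signal s to the span of D's columns: a beat in the
    healthy subspace has (near-)zero residual; an anomalous one does not.
    This is equivalent to projecting s onto the null space of that subspace."""
    coef, *_ = np.linalg.lstsq(D, s, rcond=None)
    return float(np.linalg.norm(s - D @ coef))

normal = D @ rng.normal(size=8)     # lies inside the healthy subspace
abnormal = rng.normal(size=50)      # generic signal, mostly outside it
```

Thresholding this residual gives a detector that needs only healthy training data, which is the point of the null-space view.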
Machine Learning (ML) approaches have been used to enhance the detection capabilities of Network Intrusion Detection Systems (NIDSs). Recent work has achieved near-perfect performance on binary- and multi-class network anomaly detection tasks. Such systems depend on the availability of both (benign and malicious) network data classes during the training phase. However, attack data samples are often challenging to collect in most organisations due to security controls preventing the penetration of known malicious traffic into their networks. Therefore, this paper proposes a Deep One-Class (DOC) classifier for network intrusion detection trained only on benign network data samples. The novel one-class classification architecture consists of a histogram-based deep feed-forward classifier that extracts useful network data features and enables efficient outlier detection. The DOC classifier has been extensively evaluated using two benchmark NIDS datasets. The results demonstrate its superiority over current state-of-the-art one-class classifiers in terms of detection and false positive rates.
Multi-class ensemble classification remains a popular focus of investigation within the research community. The popularization of cloud services has sped up its adoption, owing to the ease of deploying large-scale machine-learning models. It has also drawn the attention of the industrial sector because of its ability to identify common problems in production. However, there are challenges in composing an ensemble classifier, namely the proper selection and effective training of the pool of classifiers, the definition of a suitable architecture for multi-class classification, and uncertainty quantification of the ensemble classifier. The robustness and effectiveness of the ensemble classifier lie in the selection of the pool of classifiers as well as in the learning process; hence, the selection and training procedure of the pool of classifiers play a crucial role. An (ensemble) classifier learns to detect the classes that were used during supervised training. However, when data from unknown conditions is injected, the trained classifier will still attempt to predict one of the classes learned during training. To this end, the uncertainty of the individual classifiers and of the ensemble could be used to assess their learning capability. We present a novel approach to novelty detection using ensemble classification and evidence theory. A pool selection strategy is presented to build a solid ensemble classifier. We present an architecture for multi-class ensemble classification and an approach to quantify the uncertainty of the individual classifiers and the ensemble classifier, and we use this uncertainty for anomaly detection. Finally, we perform experiments on the Tennessee Eastman benchmark to test the ensemble classifier's prediction and anomaly detection capabilities.
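As a simple stand-in for the evidence-theoretic uncertainty used here, the entropy of the ensemble's averaged class distribution already separates known from novel inputs; a sketch with made-up member outputs:

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Entropy of the averaged class distribution across ensemble members:
    high entropy flags inputs the ensemble cannot place in a trained class."""
    p = np.clip(np.mean(member_probs, axis=0), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Members agree on a trained class -> low uncertainty.
known = [[0.90, 0.05, 0.05], [0.85, 0.10, 0.05], [0.95, 0.03, 0.02]]
# Members disagree -> high uncertainty, a candidate novel condition.
novel = [[0.60, 0.20, 0.20], [0.10, 0.80, 0.10], [0.20, 0.20, 0.60]]
u_known, u_novel = ensemble_uncertainty(known), ensemble_uncertainty(novel)
```

Evidence theory (Dempster-Shafer) refines this by distinguishing conflict between members from shared ignorance, which plain entropy cannot do.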
In recent years, with the widespread deployment of sensors and smart devices, the rate of data generation in Internet of Things (IoT) systems has increased dramatically. In IoT systems, large volumes of data must often be processed, transformed, and analyzed to enable various IoT services and functionalities. Machine learning (ML) approaches have demonstrated their capability for IoT data analytics. However, applying ML models to IoT data analysis tasks still faces many difficulties and challenges, in particular effective model selection, design/tuning, and updating, which create a huge demand for experienced data scientists. In addition, the dynamic nature of IoT data may introduce concept drift, causing model performance to degrade. To reduce the human effort involved, automated machine learning (AutoML) has become a popular field that aims to automatically select, construct, tune, and update machine learning models to achieve the best possible performance on a specified task. In this paper, we review the existing methods for model selection, tuning, and updating in the AutoML area in order to identify and summarize the best solutions for each step of applying ML algorithms to IoT data analytics. To demonstrate our findings and help industrial users and researchers better apply AutoML methods, a case study applying AutoML to an IoT anomaly detection problem is presented in this work. Finally, we discuss and classify the challenges and research directions for this field.
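The model-selection step of an AutoML pipeline boils down to scoring candidate models under a common criterion and keeping the best; a minimal cross-validation sketch with two toy candidates (not any specific AutoML library):

```python
import numpy as np

def cv_score(model_fit, X, y, k=5):
    """Mean squared error across k folds -- the selection criterion."""
    idx = np.arange(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        predict = model_fit(X[train], y[train])
        errs.append(((predict(X[fold]) - y[fold]) ** 2).mean())
    return float(np.mean(errs))

def mean_model(X, y):                       # candidate 1: predict the mean
    m = y.mean()
    return lambda Xq: np.full(len(Xq), m)

def linear_model(X, y):                     # candidate 2: least-squares line
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.c_[Xq, np.ones(len(Xq))] @ w

X = np.linspace(0.0, 1.0, 50)[:, None]
y = 3.0 * X[:, 0] + 1.0
best = min([mean_model, linear_model], key=lambda m: cv_score(m, X, y))
```

Real AutoML systems add hyperparameter search and drift-triggered retraining on top of exactly this kind of loop.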
Fifth-generation (5G) networks and beyond envision a massive Internet of Things (IoT) rollout to support disruptive applications such as extended reality (XR), augmented/virtual reality (AR/VR), industrial automation, autonomous driving, and smart everything, all of which bring a massive and diverse set of IoT devices occupying the radio frequency (RF) spectrum. Along with the spectrum crunch and throughput challenges, such a massive scale of wireless devices exposes unprecedented threat surfaces. RF fingerprinting is heralded as a candidate technology that can be combined with cryptographic and zero-trust security measures to ensure data privacy, confidentiality, and integrity in wireless networks. Motivated by the relevance of this subject to future communication networks, in this work we present a comprehensive survey of RF fingerprinting approaches, ranging from traditional methods to the most recent deep learning (DL)-based algorithms. Existing surveys have mostly focused on a constrained presentation of wireless fingerprinting approaches, leaving many aspects untold. In this work, we mitigate this by covering every aspect needed to elucidate the topic to the reader in an encyclopedic manner: background on signal intelligence (SIGINT), applications, relevant DL algorithms, a systematic literature review of RF fingerprinting techniques spanning the past two decades, a discussion of datasets, and potential research avenues.
X-ray imaging technology has been used for decades in clinical tasks to reveal the internal condition of different organs, and in recent years, it has become more common in other areas such as industry, security, and geography. The recent development of computer vision and machine learning techniques has also made it easier to automatically process X-ray images and several machine learning-based object (anomaly) detection, classification, and segmentation methods have been recently employed in X-ray image analysis. Due to the high potential of deep learning in related image processing applications, it has been used in most of the studies. This survey reviews the recent research on using computer vision and machine learning for X-ray analysis in industrial production and security applications and covers the applications, techniques, evaluation metrics, datasets, and performance comparison of those techniques on publicly available datasets. We also highlight some drawbacks in the published research and give recommendations for future research in computer vision-based X-ray analysis.
Anomaly detection is an essential problem in machine learning. Application domains include network security, health care, fraud detection, etc., and typically involve high-dimensional datasets. A typical anomaly detection system always faces the class-imbalance problem, i.e., a huge difference between the sample sizes of the different classes, and these classes often overlap as well. In this study, we used capsule networks for the anomaly detection task. To the best of our knowledge, this is the first instance in which capsule networks are analyzed for the anomaly detection task in a high-dimensional, complex, non-image data setting. We also address the related novelty and outlier detection problems. The architecture of the capsule network was adapted for a binary classification task. Capsule networks offer a good option for anomaly detection because of the viewpoint invariance captured in the predictions of the internal capsule architecture. We used a six-layered under-complete autoencoder architecture whose second and third layers contain capsules. The capsules were trained using the dynamic routing algorithm. We created 10 imbalanced datasets from the original MNIST dataset and compared the performance of the capsule network against 5 baseline models. Our leading test-set measures are the F1 score on the minority class and the area under the ROC curve. We found that capsule networks outperform all the other baseline models on the anomaly detection task using only 10 training epochs and without any additional data-level or algorithm-level methods. We therefore conclude that capsule networks excel at modeling complex, high-dimensional, imbalanced datasets for the anomaly detection task.
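The dynamic routing algorithm that trains the capsule layers can be sketched in numpy: prediction vectors route toward output capsules they agree with, and the squash nonlinearity keeps capsule lengths below 1 (shapes and iteration count here are illustrative):

```python
import numpy as np

def squash(v, axis=-1):
    """Keep a capsule vector's direction; map its length into [0, 1)."""
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + 1e-9)

def dynamic_routing(u_hat, iters=3):
    """Routing by agreement over prediction vectors u_hat of shape
    (n_in, n_out, dim): coupling grows toward outputs that agree."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # softmax
        s = (c[:, :, None] * u_hat).sum(axis=0)
        v = squash(s)
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)           # agreement
    return v

rng = np.random.default_rng(1)
v = dynamic_routing(rng.normal(size=(6, 2, 4)))  # 2 output capsules of dim 4
```

The length of each output capsule then acts as a class probability, which is what makes the binary (normal vs. anomalous) adaptation natural.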
Detecting anomalous traffic congestion is of paramount importance in intelligent transportation systems. Transportation agencies' goals are two-fold: to monitor the general traffic conditions in an area of interest and to locate road segments in abnormally congested states. Modeling congestion patterns can achieve these goals for citywide roadways, which amounts to learning the distribution of multivariate time series (MTS). However, existing works are either not scalable or unable to simultaneously capture the spatial information in MTS. To this end, we propose a principled and comprehensive framework consisting of a data-driven generative approach that performs tractable density estimation for detecting traffic anomalies. Our approach first clusters segments in the feature space and then uses conditional normalizing flows to identify anomalous temporal snapshots at the cluster level in an unsupervised setting. We then identify anomalies at the segment level by applying a kernel density estimator to the anomalous clusters. Extensive experiments on synthetic datasets show that our approach significantly outperforms several state-of-the-art congestion anomaly detection and diagnosis methods in terms of recall and F1 score. We also use the generative model to sample labeled data, which can be used to train classifiers in a supervised setting, alleviating the lack of labeled data for anomaly detection in sparse settings.
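The segment-level step, scoring points by a kernel density estimate and flagging low-density ones, can be sketched in 1-D (the bandwidth and data are illustrative):

```python
import numpy as np

def kde_score(train, x, bandwidth=1.0):
    """Gaussian kernel density estimate of x under the training points;
    low density marks x as anomalous. 1-D for brevity."""
    train = np.asarray(train, dtype=float)
    z = (x - train) / bandwidth
    return float(np.exp(-0.5 * z ** 2).sum()
                 / (len(train) * bandwidth * np.sqrt(2.0 * np.pi)))

train = [0.0, 0.1, -0.1, 0.05, -0.05]
dense, sparse = kde_score(train, 0.0), kde_score(train, 5.0)
```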
To allow machine learning algorithms to extract knowledge from raw data, these data must first be cleaned, transformed, and put into an appropriate form. These often very time-consuming phases are referred to as preprocessing. An important step in the preprocessing phase is feature selection, which aims to improve prediction models by reducing the number of features in a dataset. Within such datasets, instances of different events are often imbalanced, meaning that certain normal events are over-represented while other, rare events occur only very rarely. These rare events are typically of particular interest, since they are more discriminative than normal events. The aim of this work is to filter the instances provided to feature selection methods down to these rare instances, thereby positively influencing the feature selection process. In the course of this work, we were able to show that this kind of filtering has a positive effect on the performance of classification models and that outlier detection methods are suitable for performing it. For some datasets the resulting performance increase amounts to only a few percentage points, but for other datasets we were able to achieve increases of up to 16%. This work should lead to improved prediction models and to better interpretability of feature selection during the preprocessing phase. In the spirit of open science, and to increase the transparency of our research field, we have made all our source code and experimental results available in a publicly accessible repository.
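The proposed instance filtering can be approximated with any outlier score; here is a sketch using distance from the feature-wise mean in standard-deviation units (the work evaluates proper outlier detection methods, which this toy score only stands in for):

```python
import numpy as np

def outlier_filter(X, keep_frac=0.2):
    """Keep only the most outlying fraction of instances, scored by their
    distance from the feature-wise mean in standard-deviation units."""
    X = np.asarray(X, dtype=float)
    z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    score = np.linalg.norm(z, axis=1)
    k = max(1, int(len(X) * keep_frac))
    return X[np.argsort(score)[-k:]]

# Eight common instances and two rare ones; the filter keeps the rare pair,
# which would then be handed to the feature selection method.
X = np.vstack([np.zeros((8, 3)), [[5.0, 5.0, 5.0], [6.0, 6.0, 6.0]]])
rare = outlier_filter(X, keep_frac=0.2)
```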