Cross-domain graph anomaly detection (CD-GAD) describes the problem of detecting anomalous nodes in an unlabelled target graph using auxiliary, related source graphs with labelled anomalous and normal nodes. Although it presents a promising approach to address the notoriously high false positive issue in anomaly detection, little work has been done in this line of research. There are numerous domain adaptation methods in the literature, but it is difficult to adapt them for GAD due to the unknown distributions of the anomalies and the complex node relations embedded in graph data. To this end, we introduce a novel domain adaptation approach, namely Anomaly-aware Contrastive alignmenT (ACT), for GAD. ACT is designed to jointly optimise: (i) unsupervised contrastive learning of normal representations of nodes in the target graph, and (ii) anomaly-aware one-class alignment that aligns these contrastive node representations and the representations of labelled normal nodes in the source graph, while enforcing significant deviation of the representations of the normal nodes from the labelled anomalous nodes in the source graph. In doing so, ACT effectively transfers anomaly-informed knowledge from the source graph to learn the complex node relations of the normal class for GAD on the target graph without any specification of the anomaly distributions. Extensive experiments on eight CD-GAD settings demonstrate that our approach ACT achieves substantially improved detection performance over 10 state-of-the-art GAD methods. Code is available at https://github.com/QZ-WANG/ACT.
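The anomaly-aware one-class alignment component can be illustrated with a minimal sketch. The centroid-based pull/push formulation, the hinge margin, and all function names below are illustrative assumptions for exposition, not ACT's actual objective:

```python
import numpy as np

def one_class_alignment_loss(z_tgt, z_src_normal, z_src_anomaly, margin=1.0):
    """Toy anomaly-aware one-class alignment: pull target node
    representations toward the centroid of labelled source normals and
    push them at least `margin` away from the centroid of labelled
    source anomalies (hinge term). Illustrative only."""
    c_normal = z_src_normal.mean(axis=0)
    c_anomaly = z_src_anomaly.mean(axis=0)
    # alignment: squared distance to the normal-class centroid
    pull = np.mean(np.sum((z_tgt - c_normal) ** 2, axis=1))
    # deviation: hinge on the distance to the anomaly-class centroid
    dist_anom = np.sqrt(np.sum((z_tgt - c_anomaly) ** 2, axis=1))
    push = np.mean(np.maximum(0.0, margin - dist_anom) ** 2)
    return pull + push
```

A loss of this shape is minimised when target representations sit near the source-normal cluster and far from the source-anomaly cluster, which is the qualitative behaviour the abstract describes.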
Unsupervised time-series anomaly detection is instrumental in flagging potential faults of target systems in various domains. Current state-of-the-art time-series anomaly detectors mainly focus on devising advanced neural network structures and new reconstruction/prediction learning objectives to learn data normality (normal patterns and behaviours) as accurately as possible. However, these one-class learning methods can be deceived by unknown anomalies in the training data (i.e., anomaly contamination). Further, their normality learning lacks knowledge about the anomalies of interest. Consequently, they often learn a biased, inaccurate normality boundary. This paper proposes a novel one-class learning approach, named calibrated one-class classification, to tackle this problem. Our one-class classifier is calibrated in two ways: (1) by adaptively penalising uncertain predictions, which helps eliminate the impact of anomaly contamination while emphasising the predictions that the one-class model is confident in, and (2) by discriminating normal samples from native anomaly examples, which are generated to simulate genuine time-series anomalous behaviours on the basis of the original data. These two calibrations result in contamination-tolerant, anomaly-informed one-class learning, yielding significantly improved normality modelling. Extensive experiments on six real-world datasets show that our model substantially outperforms twelve state-of-the-art competitors and obtains a 6%-31% F1-score improvement. The source code is available at https://github.com/xuhongzuo/couta.
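The "native anomaly example" idea, i.e. synthesising anomalous behaviour from normal data, can be sketched as below. The two perturbation modes (an out-of-range spike and a scaled segment) and all names are illustrative assumptions, not the paper's actual generation procedure:

```python
import numpy as np

def make_native_anomaly(window, rng, mode="point"):
    """Generate a synthetic ('native') anomaly example from a normal
    time-series window (illustrative assumptions, not the actual method):
    - 'point':   replace one random step with an out-of-range spike;
    - 'pattern': scale a random contiguous segment to distort its shape."""
    x = window.copy()
    n = len(x)
    if mode == "point":
        i = rng.integers(0, n)
        spread = x.max() - x.min() + 1e-8
        x[i] = x.max() + rng.uniform(0.5, 2.0) * spread
    else:  # 'pattern'
        s = rng.integers(0, n - 2)
        e = rng.integers(s + 1, n)
        x[s:e] = x[s:e] * rng.uniform(2.0, 3.0)
    return x
```

A one-class model can then be trained to separate original windows from such perturbed ones, which is the second calibration described in the abstract.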
The isolation forest (iForest) has emerged as arguably the most popular anomaly detector in recent years. It iteratively performs axis-parallel partitions of the data space in a tree structure to isolate deviated data objects from the rest, with the isolation difficulty of an object defined as its anomaly score. iForest shows effective performance on popular benchmark datasets, but its axis-parallel linear data partitioning is ineffective at handling hard anomalies in high-dimensional/non-linear data spaces and, even worse, it leads to a notorious algorithmic bias that assigns unexpectedly large anomaly scores to artefact regions. There are several extensions of iForest, but they still focus on linear data partitioning and fail to effectively isolate these hard anomalies. This paper introduces the deep isolation forest, a novel extension of iForest. Our approach offers a comprehensive isolation method that can arbitrarily partition the data on subspaces of any size, effectively avoiding the algorithmic bias of linear partitioning. Further, it requires only randomly initialised neural networks (i.e., no optimisation is needed in our method) to ensure the freedom of the partitions. In doing so, the desired randomness and diversity of both the random network-based representations and the random partition-based isolation can be fully exploited to significantly enhance isolation-ensemble-based anomaly detection. Moreover, our approach offers a data-type-agnostic anomaly detection solution: anomalies in different types of data are detected by simply plugging the corresponding randomly initialised neural networks into the feature mapping. Extensive empirical results on a large collection of real-world datasets show that our model achieves significant improvement over state-of-the-art isolation-based and non-isolation-based anomaly detection models.
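The two-stage idea of optimisation-free random representations followed by isolation can be sketched as follows. The one-layer tanh feature map and the toy single-path isolation routine are deliberate simplifications for exposition, not the method's actual implementation:

```python
import numpy as np

def random_features(X, dim_out, rng):
    """Optimisation-free representation: one randomly initialised layer
    whose weights are never trained, mapping data into a non-linear
    random feature space."""
    W = rng.normal(size=(X.shape[1], dim_out))
    b = rng.normal(size=dim_out)
    return np.tanh(X @ W + b)

def isolation_depth(Z, x, rng, max_depth=10):
    """Number of random axis-parallel splits needed to isolate x from Z;
    shorter paths mean easier isolation, i.e. more anomalous."""
    depth, data = 0, Z
    while depth < max_depth and len(data) > 1:
        j = rng.integers(data.shape[1])
        lo, hi = data[:, j].min(), data[:, j].max()
        if lo == hi:
            break
        t = rng.uniform(lo, hi)
        # keep only the side of the split that still contains x
        data = data[data[:, j] < t] if x[j] < t else data[data[:, j] >= t]
        depth += 1
    return depth

def anomaly_scores(X, dim_out=8, n_trees=50, seed=0):
    rng = np.random.default_rng(seed)
    Z = random_features(X, dim_out, rng)
    # negate the average depth so that higher score = more anomalous
    return np.array([-np.mean([isolation_depth(Z, z, rng) for _ in range(n_trees)])
                     for z in Z])
```

Because both the representation and the partitions are random, many such (feature map, tree) pairs can be ensembled, which is the source of diversity the abstract emphasises.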
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes, as compared with other graphs. One challenge in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs, i.e., graphs that are abnormal in their fine-grained (node-level) or holistic (graph-level) properties, respectively. To address this challenge, we introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal-pattern information via joint random distillation of graph and node representations. The random distillation is achieved by training one GNN to predict another GNN that has randomly initialised network weights. Extensive experiments on 16 real-world graph datasets from various domains show that our model significantly outperforms seven state-of-the-art models. Code and datasets are available at https://git.io/GLocalKD.
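The random-distillation idea can be sketched with a toy stand-in: a fixed, randomly initialised "teacher" produces targets, and a "student" (here a closed-form linear map replacing the trained GNN, an illustrative simplification) is fitted to reproduce them on normal data only. Points the student reproduces poorly score as anomalous:

```python
import numpy as np

def random_distillation_scores(X_train, X_test, dim=16, seed=0):
    """Toy random distillation: fit a student to a frozen random teacher
    on normal training data; score test points by distillation error.
    All names and the linear student are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X_train.shape[1], dim))   # frozen random teacher
    # student: least-squares fit to the teacher's outputs on normal data
    S, *_ = np.linalg.lstsq(X_train, np.tanh(X_train @ W), rcond=None)
    # anomaly score: per-point prediction (distillation) error
    return np.mean((X_test @ S - np.tanh(X_test @ W)) ** 2, axis=1)
```

The student matches the teacher well only in regions covered by normal training data, so out-of-distribution inputs yield large errors; the paper applies the same principle with GNNs at both node and graph level.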
State-of-the-art (SOTA) anomaly segmentation approaches for complex urban driving scenes explore pixel-wise classification uncertainty learned from outlier exposure, or external reconstruction models. However, previous uncertainty approaches that directly associate high uncertainty with anomalies can sometimes lead to incorrect anomaly predictions, and external reconstruction models tend to be too inefficient for real-time embedded self-driving systems. In this paper, we propose a new anomaly segmentation method, named pixel-wise energy-biased abstention learning (PEBAL), that explores pixel-wise abstention learning (AL) with a model that learns an adaptive pixel-level anomaly class, together with an energy-based model (EBM) that learns the inlier pixel distribution. More specifically, PEBAL is based on a non-trivial joint training of the EBM and AL, where the EBM is trained to output high energy for anomalous pixels (from outlier exposure), and AL is trained such that these high-energy pixels receive an adaptive low penalty for being included in the anomaly class. We extensively evaluate PEBAL against the SOTA and show that it achieves the best performance across four benchmarks. Code is available at https://github.com/tianyu0207/pebal.
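The EBM side of such methods rests on a per-pixel free energy computed from the segmentation logits. A minimal sketch (the logit layout, with class scores on the last axis, is an assumption; this is the standard free-energy score, not PEBAL's full joint objective):

```python
import numpy as np

def free_energy(logits):
    """Per-pixel free energy E(x) = -log(sum_k exp(logit_k)), computed
    with a numerically stable log-sum-exp. Higher energy indicates a
    more anomalous pixel; `logits` has class scores on its last axis."""
    m = logits.max(axis=-1, keepdims=True)
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))
```

A pixel the model is confident about (one dominant logit) gets low energy, while a pixel with uniformly low logits gets high energy; training then biases energy upward on exposed outlier pixels.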
Unsupervised anomaly detection (UAD), which requires only normal (healthy) training images, is an important tool for enabling medical image analysis (MIA) applications such as disease screening, because it is often difficult to collect and annotate abnormal (or diseased) images in MIA. However, relying heavily on normal images may cause model training to overfit the normal class. Self-supervised pre-training is an effective solution to this problem. Unfortunately, current self-supervised methods adapted from computer vision are sub-optimal for MIA applications, because they do not exploit MIA domain knowledge in designing pretext tasks or training processes. In this paper, we propose a new self-supervised pre-training method for UAD designed for MIA applications, named multi-centred strong augmentation via contrastive learning (MSACL). MSACL is based on a novel optimisation that contrasts normal and multiple classes of synthetic abnormal images, with each class enforced to form a tight and dense cluster in terms of Euclidean distance and cosine similarity, where the abnormal images are formed by simulating lesions of varying number, size, and appearance in the normal images. In our experiments, we show that MSACL pre-training improves the accuracy of SOTA UAD methods on colonoscopy, fundus screening, and Covid-19 chest X-ray datasets.
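The contrast between normal and synthetic-abnormal embeddings can be sketched with a simple cosine-similarity objective. This InfoNCE-style formulation, the temperature value, and all names are illustrative assumptions, not MSACL's actual multi-class objective:

```python
import numpy as np

def cosine(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def normal_vs_synthetic_loss(z_normal, z_synthetic, tau=0.5):
    """Toy contrastive objective: normal embeddings should be mutually
    similar (a tight cluster) while synthetic-anomaly embeddings should
    be dissimilar from them. Self-pairs (cosine exactly 1) are excluded
    from the positive term."""
    sim_pos = np.exp(cosine(z_normal, z_normal) / tau)
    pos = sim_pos.sum() - np.trace(sim_pos)          # drop self-pairs
    neg = np.exp(cosine(z_normal, z_synthetic) / tau).sum()
    return -np.log(pos / (pos + neg))
```

The loss is small when synthetic-anomaly embeddings point away from the normal cluster, encouraging exactly the tight, well-separated clusters described in the abstract.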
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
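Schematically, a weighted mean-percentile objective of this kind can be written as follows. The weight $\lambda$, risk level $\alpha$, and Wasserstein-ball notation below are illustrative; the paper's exact formulation may differ:

```latex
\max_{\pi}\ \inf_{\mathbb{P} \in \mathcal{B}_{\varepsilon}(\hat{\mathbb{P}})}
\left\{ \lambda \, \mathbb{E}_{\mathbb{P}}\!\left[ R^{\pi} \right]
  + (1-\lambda) \, \operatorname{VaR}_{\alpha}^{\mathbb{P}}\!\left[ R^{\pi} \right] \right\}
```

Here $\mathcal{B}_{\varepsilon}(\hat{\mathbb{P}})$ is a Wasserstein ball of radius $\varepsilon$ around the nominal reward distribution $\hat{\mathbb{P}}$, and $R^{\pi}$ is the return under policy $\pi$. Setting $\lambda = 1$ recovers a distributionally robust MDP and $\lambda = 0$ a percentile (chance-constrained) criterion, consistent with the special cases mentioned in the abstract.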
Forecasts by the European Centre for Medium-Range Weather Forecasts (ECMWF; EC for short) can provide a basis for the establishment of maritime-disaster warning systems, but they contain some systematic biases. The fifth-generation EC atmospheric reanalysis (ERA5) data have high accuracy, but are delayed by about 5 days. To overcome this issue, a spatiotemporal deep-learning method could be used for nonlinear mapping between EC and ERA5 data, which would improve the quality of EC wind forecast data in real time. In this study, we developed the Multi-Task-Double Encoder Trajectory Gated Recurrent Unit (MT-DETrajGRU) model, which uses an improved double-encoder forecaster architecture to model the spatiotemporal sequence of the U and V components of the wind field; we designed a multi-task learning loss function to correct wind speed and wind direction simultaneously using only one model. The study area was the western North Pacific (WNP), and real-time rolling bias corrections were made for 10-day wind-field forecasts released by the EC between December 2020 and November 2021, divided into four seasons. Compared with the original EC forecasts, after correction using the MT-DETrajGRU model, the wind-speed and wind-direction biases in the four seasons were reduced by 8-11% and 9-14%, respectively. In addition, the proposed method modelled the data uniformly under different weather conditions. The correction performance under normal and typhoon conditions was comparable, indicating that the data-driven model constructed here is robust and generalizable.
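A multi-task loss over the U/V wind components can be sketched as below: one term penalises wind-speed error and the other wind-direction error (via the angle between predicted and true wind vectors), so a single model corrects both at once. The equal weighting, the cosine-based direction term, and all names are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def multitask_wind_loss(u_pred, v_pred, u_true, v_true, w_speed=0.5):
    """Toy multi-task loss on wind U/V components: weighted sum of a
    wind-speed MSE term and a wind-direction term based on the cosine
    of the angle between predicted and true wind vectors."""
    speed_p = np.hypot(u_pred, v_pred)
    speed_t = np.hypot(u_true, v_true)
    speed_err = np.mean((speed_p - speed_t) ** 2)
    # direction error: 1 - cos(angle between wind vectors)
    dot = u_pred * u_true + v_pred * v_true
    cos_ang = dot / (speed_p * speed_t + 1e-8)
    dir_err = np.mean(1.0 - np.clip(cos_ang, -1.0, 1.0))
    return w_speed * speed_err + (1.0 - w_speed) * dir_err
```

Deriving speed and direction from the same U/V outputs is what lets one set of network weights serve both correction tasks.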
Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for even moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear, for example Integrated Gradients and SHAP, can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as identifying local model behaviour, spurious-feature identification, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks. In particular, we show that once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
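The "simple and direct approach of repeated model evaluations" can be sketched for a recourse-style end-task: for each feature, re-evaluate the model with that feature set to each alternative value and record which single-feature edits change the prediction. The function names and the discrete single-edit setup are illustrative assumptions, not the paper's exact protocol:

```python
def feature_flip_effects(model, x, values):
    """For each feature j, try every alternative value in values[j] and
    record the edits that change the model's prediction relative to the
    baseline input x. Answers the end-task by direct evaluation rather
    than by interpreting attribution scores."""
    base = model(x)
    effects = {}
    for j, vals in enumerate(values):
        for v in vals:
            if v == x[j]:
                continue  # skip the current value; it's not an edit
            x_edit = list(x)
            x_edit[j] = v
            if model(x_edit) != base:
                effects.setdefault(j, []).append(v)
    return effects
```

For small feature spaces this costs only a handful of forward passes and answers the end-task exactly, which is the point of the comparison in the abstract.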
Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples, and their reasoning capabilities are still quite limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key to this method is a modality conversion module, named DePlot, which translates the image of a plot or chart into a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the finetuned SOTA on human-written queries from the chart QA task.
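The downstream half of the pipeline, pairing a linearized table with a question in an LLM prompt, can be sketched in a few lines. The pipe-delimited serialisation and the prompt template below are illustrative assumptions, not DePlot's actual output format:

```python
def linearize_table(header, rows):
    """Serialise a chart's underlying data as a linearised, pipe-delimited
    table, the kind of text a plot-to-table module could emit
    (illustrative format, not DePlot's specification)."""
    lines = [" | ".join(header)]
    for row in rows:
        lines.append(" | ".join(str(c) for c in row))
    return "\n".join(lines)

def build_qa_prompt(table_text, question):
    """Pair the linearised table with a human-written question so a
    pretrained LLM can answer it by reasoning over text alone."""
    return ("Read the table and answer the question.\n\n"
            f"Table:\n{table_text}\n\n"
            f"Question: {question}\nAnswer:")
```

Because the reasoning is delegated to the LLM, swapping the chart image for this text is the only modality-specific step, which is what makes the plug-and-play usage described above possible.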