In the past few years, many explanation methods based on perturbing the input data have been introduced to improve our understanding of the decisions made by black-box models. The aim of this work is to introduce a novel perturbation scheme so that more faithful and robust explanations can be obtained. Our study focuses on the impact of the perturbation direction on the data topology. We show that perturbing along directions orthogonal to the input manifold better preserves the data topology, both in a worst-case analysis based on the discrete Gromov-Hausdorff distance and in an average-case analysis via persistent homology. From these results, we introduce the EMaP algorithm, which implements the orthogonal perturbation scheme. Our experiments show that EMaP not only improves explainers' performance, but also helps them withstand recent attacks against perturbation-based methods.
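The core geometric idea above, perturbing orthogonally to the data manifold, can be illustrated with a minimal sketch (not the authors' EMaP implementation): estimate the local tangent space of the manifold via PCA on a point's nearest neighbors, then perturb within its orthogonal complement. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def orthogonal_perturbation(X, x, tangent_dim, k=10, eps=0.1, seed=0):
    """Perturb x along directions orthogonal to the data manifold, whose
    tangent space is estimated by PCA on the k nearest neighbors of x in X."""
    rng = np.random.default_rng(seed)
    nbrs = X[np.argsort(np.linalg.norm(X - x, axis=1))[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # Leading right singular vectors span the estimated tangent space;
    # the remaining ones span its orthogonal complement.
    _, _, Vt = np.linalg.svd(centered, full_matrices=True)
    normal_basis = Vt[tangent_dim:]
    direction = rng.standard_normal(normal_basis.shape[0]) @ normal_basis
    direction /= np.linalg.norm(direction)
    return x + eps * direction
```

For data lying on a line in 3D, the perturbation produced this way is (numerically) orthogonal to the line, so small perturbations stay off the manifold without sliding along it.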
Local explainability methods, those which seek to generate an explanation for each prediction, are increasingly common due to practitioners' need to rationalize their model outputs. However, comparing local explainability methods is difficult, since each produces outputs in various scales and dimensions. Furthermore, due to the stochastic nature of some explainability methods, different runs of a method may produce contradictory explanations for a given observation. In this paper, we propose a topology-based framework for extracting a simplified representation from a set of local explanations. We do so by first modeling the relationship between the explanation space and the model predictions as a scalar function. We then compute the topological skeleton of this function. This topological skeleton acts as a signature of such functions, which we use to compare different explanation methods. We demonstrate that our framework can not only reliably identify differences between explainability techniques, but also provide stable representations. We then show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimization, and can be broadly applied to most local explanation methods. We believe the practicality and versatility of our approach will help promote topology-based approaches as a tool for understanding and comparing explanation methods.
As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this paper, we demonstrate that post hoc explanation techniques that rely on input perturbations, such as LIME and SHAP, are not reliable. Specifically, we propose a novel scaffolding technique that effectively hides the biases of any given classifier by allowing an adversarial entity to craft an arbitrary desired explanation. Our approach can be used to scaffold any biased classifier in such a way that its predictions on the input data distribution still remain biased, but the post hoc explanations of the scaffolded classifier look innocuous. Using extensive evaluation with multiple real-world datasets (including COMPAS), we demonstrate how extremely biased (racist) classifiers crafted by our framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.
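The scaffolding attack exploits the fact that perturbation-based explainers query the model on off-manifold points. A toy sketch of the idea (not the paper's exact construction): route in-distribution inputs to the biased model and off-manifold probes, as detected by distance to the training data, to an innocuous one. The class name, threshold rule, and models below are illustrative assumptions.

```python
import numpy as np

class ScaffoldedClassifier:
    """Toy scaffolding sketch: behave like a biased model on in-distribution
    points, but like an innocuous model on the off-manifold samples that
    perturbation-based explainers tend to generate."""
    def __init__(self, X_train, biased_fn, innocuous_fn, threshold):
        self.X_train = X_train
        self.biased_fn = biased_fn
        self.innocuous_fn = innocuous_fn
        self.threshold = threshold

    def in_distribution(self, x):
        # Crude OOD test: distance to the nearest training point.
        return np.min(np.linalg.norm(self.X_train - x, axis=1)) < self.threshold

    def predict(self, x):
        return self.biased_fn(x) if self.in_distribution(x) else self.innocuous_fn(x)
```

An explainer that only ever sees the model's behavior on far-away perturbations would attribute its decisions to the innocuous model, even though real inputs are still classified by the biased one.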
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
Appropriately representing the elements of a database so that queries can be matched accurately is a core task of information retrieval; recently, this has been achieved by embedding the graph structure of the database into a manifold in a hierarchy-preserving manner, using a variety of metrics. Persistent homology, a tool commonly used in topological data analysis, can rigorously characterize a database in terms of both its hierarchy and its connectivity structure. Computing persistent homology on a variety of embedded datasets reveals that some commonly used embeddings fail to preserve connectivity. We show which embeddings successfully retain the database topology, and we capture this effect by introducing two dilation-invariant comparative measures, which in particular address the problem of metric distortion on manifolds. We provide an algorithm for their computation that substantially reduces the time complexity of existing methods. We use these measures to perform the first instance of topology-based information retrieval and demonstrate its improved performance over the standard bottleneck distance for persistent homology. We showcase our approach on databases of different data varieties, including text, videos, and medical images.
Post-hoc explanation methods have become increasingly depended upon for understanding black-box classifiers in high-stakes applications, precipitating a need for reliable explanations. While numerous explanation methods have been proposed, recent works have shown that many existing methods can be inconsistent or unstable. In addition, high-performing classifiers are often highly nonlinear and can exhibit complex behavior around the decision boundary, leading to brittle or misleading local explanations. Therefore, there is an impending need to quantify the uncertainty of such explanation methods in order to understand when explanations are trustworthy. We introduce a novel uncertainty quantification method parameterized by a Gaussian Process model, which combines the uncertainty approximation of existing methods with a novel geodesic-based similarity which captures the complexity of the target black-box decision boundary. The proposed framework is highly flexible; it can be used with any black-box classifier and feature attribution method to amortize uncertainty estimates for explanations. We show theoretically that our proposed geodesic-based kernel similarity increases with the complexity of the decision boundary. Empirical results on multiple tabular and image datasets show that our decision boundary-aware uncertainty estimate improves understanding of explanations as compared to existing methods.
Monumental advancements in artificial intelligence (AI) have lured the interest of doctors, lenders, judges, and other professionals. While these high-stakes decision-makers are optimistic about the technology, those familiar with AI systems are wary about the lack of transparency of its decision-making processes. Perturbation-based post hoc explainers offer a model agnostic means of interpreting these systems while only requiring query-level access. However, recent work demonstrates that these explainers can be fooled adversarially. This discovery has adverse implications for auditors, regulators, and other sentinels. With this in mind, several natural questions arise - how can we audit these black box systems? And how can we ascertain that the auditee is complying with the audit in good faith? In this work, we rigorously formalize this problem and devise a defense against adversarial attacks on perturbation-based explainers. We propose algorithms for the detection (CAD-Detect) and defense (CAD-Defend) of these attacks, which are aided by our novel conditional anomaly detection approach, KNN-CAD. We demonstrate that our approach successfully detects whether a black box system adversarially conceals its decision-making process and mitigates the adversarial attack on real-world data for the prevalent explainers, LIME and SHAP.
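The detection side of this defense rests on a familiar observation: the probes generated by perturbation-based explainers fall off the data manifold. A simplified stand-in for the paper's KNN-CAD detector (the function name and scoring rule are assumptions for illustration) scores queries by their distance to the reference data:

```python
import numpy as np

def knn_anomaly_scores(X_ref, Q, k=5):
    """Score each query in Q by its mean distance to its k nearest reference
    points; large scores flag off-manifold (likely perturbed) queries."""
    scores = []
    for q in Q:
        d = np.sort(np.linalg.norm(X_ref - q, axis=1))[:k]
        scores.append(d.mean())
    return np.array(scores)
```

An auditor monitoring a black box's query stream could threshold such scores to decide whether the auditee is serving genuine predictions or detecting and deflecting the audit's probes.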
Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
A critical problem in post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, SHAP, Occlusion, Vanilla Gradients, Gradients x Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods which demonstrates that no single method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
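The unifying claim, that these eight methods all perform local function approximation and differ only in neighbourhood and loss, can be made concrete with a minimal sketch: sample a neighbourhood around the input, weight samples by proximity, and fit a weighted linear surrogate. The particular Gaussian neighbourhood and squared loss below are one illustrative choice among the many the framework covers, not any single method's exact definition.

```python
import numpy as np

def local_linear_explanation(f, x0, n_samples=500, sigma=0.5, seed=0):
    """Explain f near x0 by local function approximation: sample a Gaussian
    neighbourhood, weight samples by proximity to x0, and fit a weighted
    linear model; the coefficients serve as feature attributions."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n_samples, x0.size))
    y = np.array([f(x) for x in X])
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    A = np.hstack([X - x0, np.ones((n_samples, 1))])  # slopes + intercept
    coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A,
                               np.sqrt(w) * y, rcond=None)
    return coef[:-1]  # drop the intercept
```

On a function that is already linear, any such choice recovers the true coefficients; the no-free-lunch result concerns which choices stay faithful when the black box is nonlinear.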
There has recently been intense activity on embedding very high-dimensional, nonlinear data structures, much of it in the data science and machine learning literature. We survey this activity in four parts. In the first part, we cover nonlinear methods such as principal curves, multidimensional scaling, locally linear methods, ISOMAP, graph-based methods, diffusion maps, kernel-based methods, and random projections. The second part concerns topological embedding methods, in particular mapping topological properties into persistence diagrams, and the Mapper algorithm. Another type of data set experiencing tremendous growth is very high-dimensional network data. The task considered in the third part is how to embed such data into vector spaces of moderate dimension, so as to make the data amenable to traditional techniques such as clustering and classification. Arguably, this is where the contrast between algorithmic machine learning methods and statistical modeling (so-called stochastic blockmodeling) is sharpest; in the paper, we discuss the pros and cons of both approaches. The final part of the survey deals with embedding into $\mathbb{R}^2$, i.e., visualization. Three methods are presented, based on the first, second, and third parts respectively: t-SNE, UMAP, and LargeVis. The methods are illustrated and compared on two simulated data sets: one consisting of a triplet of noisy ranunculoid curves, the other of networks of increasing complexity generated with stochastic blockmodels and with two types of nodes.
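Of the methods covered in the first part, random projection is the simplest to state: multiply the data by a scaled Gaussian matrix, and the Johnson-Lindenstrauss lemma guarantees that pairwise distances are approximately preserved. A minimal sketch (function name and scaling convention are one common choice, assumed here for illustration):

```python
import numpy as np

def random_projection(X, out_dim, seed=0):
    """Embed rows of X into out_dim dimensions with a Gaussian random
    matrix, scaled so pairwise distances are approximately preserved
    (Johnson-Lindenstrauss)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], out_dim)) / np.sqrt(out_dim)
    return X @ R
```

Unlike the adaptive methods in the survey, the projection is data-independent, which is both its main weakness (it ignores structure) and its main strength (it is fast and comes with distortion guarantees).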
In this paper, we present a thorough theoretical analysis of LIME in the case of tabular data. We prove that, in the large-sample limit, the interpretable coefficients provided by Tabular LIME can be computed explicitly as a function of the algorithm's parameters and certain expectations involving the black-box model. When the function to explain has some nice algebraic structure (linear, multiplicative, or depending sparsely on a subset of the coordinates), our analysis provides interesting insights into the explanations provided by LIME. These can be applied to a range of machine learning models, including Gaussian kernels and CART random forests. As an example, for linear functions we show that LIME has the desirable property of producing explanations proportional to the coefficients of the function to explain, and of ignoring coordinates that the function does not use. For partition-based regressors, on the other hand, we show that LIME produces undesirable artifacts that may yield misleading explanations.
Explainable artificial intelligence is proposed to provide explanations for the reasoning performed by an Artificial Intelligence. There is no consensus on how to evaluate the quality of these explanations, since even the definition of an explanation itself is not clear in the literature. In particular, for the widely known Local Linear Explanations, there are qualitative proposals for the evaluation of explanations, although they suffer from theoretical inconsistencies. The case of images is even more problematic: a visual explanation may seem to explain a decision when what it really does is detect edges. There are a large number of metrics in the literature specialized in quantitatively measuring different qualitative aspects, so we should be able to develop metrics capable of measuring, in a robust and correct way, the desirable aspects of explanations. In this paper, we propose a procedure called REVEL to evaluate different aspects concerning the quality of explanations with a theoretically coherent development. This procedure offers several advances over the state of the art: it standardizes the concept of explanation and develops a series of metrics that allow not only comparisons between explanations but also absolute information regarding the explanation itself. The experiments have been carried out on four image datasets as benchmarks, where we show REVEL's descriptive and analytical power.
In this paper, we present the findings of various methodologies for measuring the similarity of synthetic data generated from tabular data samples. We particularly apply our research to the case where the synthetic data has many more samples than the real data. This task has a special complexity: validating the reliability of synthetically generated data with a much higher number of samples than the original. We evaluated the most commonly used global metrics found in the literature, and we introduced a novel approach based on the analysis of the data's topological signature. Topological data analysis has several advantages in addressing this latter challenge: the study of qualitative geometric information focuses on geometric properties while neglecting quantitative distance function values. This is especially useful with high-dimensional synthetic data where the sample size has been significantly increased, which is comparable to introducing new data points into the data space within the limits set by the original data. In large synthetic data spaces, points will be much more concentrated than in the original space, and their analysis becomes much more sensitive to both the metrics used and noise; instead, the concept of "closeness" between points is used for qualitative geometric information. Finally, we suggest an approach based on the data's eigenvectors for evaluating the level of noise in synthetic data. This approach can also be used to assess the similarity of original and synthetic data.
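The eigenvector-based idea at the end of the abstract can be sketched simply: compare the eigenvalue spectra of the covariance matrices of real and synthetic data, since similar spectra suggest the synthetic data preserves the principal directions and noise level of the original. The aggregation into a single score below is an illustrative assumption, not the paper's metric.

```python
import numpy as np

def spectrum_similarity(X_real, X_synth):
    """Compare real and synthetic tabular data via the (scale-normalized)
    eigenvalue spectra of their covariance matrices; 0 means identical
    normalized spectra, larger values mean greater dissimilarity."""
    def spectrum(X):
        ev = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
        return ev / ev.sum()  # normalize overall scale away
    return np.abs(spectrum(X_real) - spectrum(X_synth)).sum()
```

A synthetic set drawn from the same generative process as the real data should score much lower than structureless noise of the same dimensionality.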
Persistent homology (PH) is one of the most popular methods in topological data analysis. Even though PH has been used in many different types of applications, the reasons behind its success remain elusive; in particular, it is not known for which classes of problems it is most effective, or to what extent it can detect geometric or topological features. The goal of this work is to identify problems on which PH performs as well as or better than other methods in data analysis. We consider three fundamental shape analysis tasks: detecting the number of holes, curvature, and convexity from 2D and 3D point clouds sampled from shapes. Experiments demonstrate that PH is successful in these tasks, outperforming several baselines, including PointNet, an architecture inspired precisely by the properties of point clouds. In addition, we observe that PH remains effective with limited computational resources and limited training data, as well as on out-of-distribution test data, including various data transformations and noise.
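The paper's experiments use full persistent homology; as a minimal self-contained flavor, 0-dimensional persistence of a point cloud can be computed exactly from its Euclidean minimum spanning tree: every H0 class is born at scale 0 and dies at an MST edge length. The sketch below uses Prim's algorithm; it captures connected-component structure only, not the higher-dimensional holes the paper also studies.

```python
import numpy as np

def h0_persistence(X):
    """Death times of 0-dim persistence classes of the Rips filtration on
    point cloud X: exactly the edge lengths of the Euclidean minimum
    spanning tree (Prim's algorithm); all births are at 0."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()  # cheapest connection of each vertex to the tree
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        deaths.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return np.sort(np.array(deaths))

def n_components_at_scale(deaths, r):
    """Number of connected components of the Rips complex at scale r."""
    return int((deaths > r).sum()) + 1
```

For two well-separated clusters, the death times cleanly separate into small within-cluster merges and one large between-cluster merge, which is the signal PH-based features exploit.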
As explanations of black boxes are increasingly used to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work has shown that the explanations produced by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability. Furthermore, these methods are also computationally inefficient and require significant hyperparameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP, which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only allow us to make concrete inferences about their quality (e.g., there is a 95% chance that a feature importance lies within a given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations of a sample are needed and how to achieve faster convergence. This work makes the first attempt at addressing several critical issues of popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of the proposed framework.
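The Bayesian idea can be sketched with a conjugate Gaussian linear model fit on a local neighbourhood: the posterior over the surrogate's coefficients yields a mean attribution plus a credible interval per feature. The hyperparameters and the particular conjugate model below are illustrative assumptions, not the paper's exact BayesLIME/BayesSHAP construction.

```python
import numpy as np

def bayesian_local_explanation(f, x0, n=300, sigma=0.3, noise=0.1,
                               prior=10.0, seed=0):
    """Fit a Bayesian linear surrogate to f around x0 and return posterior
    means plus 95% credible intervals for the feature importances."""
    rng = np.random.default_rng(seed)
    X = x0 + sigma * rng.standard_normal((n, x0.size))
    y = np.array([f(x) for x in X]) - f(x0)  # center out the intercept
    Xc = X - x0
    # Conjugate posterior N(mu, S) under a N(0, prior * I) coefficient prior
    # and Gaussian observation noise with std `noise`.
    S = np.linalg.inv(Xc.T @ Xc / noise**2 + np.eye(x0.size) / prior)
    mu = S @ Xc.T @ y / noise**2
    sd = np.sqrt(np.diag(S))
    return mu, mu - 1.96 * sd, mu + 1.96 * sd
```

The interval widths shrink as more perturbations are drawn, which is exactly the quantity the paper's analysis uses to decide how many perturbations suffice.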
Although Deep Neural Networks (DNNs) show impressive performance in perception and control tasks, several trustworthiness issues remain open. One of the most discussed topics is the existence of adversarial perturbations, which has opened an interesting research line on provable techniques capable of quantifying the robustness of a given input. In this regard, the Euclidean distance of the input from the classification boundary denotes a well-proven robustness measure, being the minimal affordable adversarial perturbation. Unfortunately, computing such a distance is highly complex due to the non-convex nature of NNs. Although several methods have been proposed to address this problem, to the best of our knowledge no provable results have been presented to estimate and bound the committed error. This paper addresses this issue by proposing two lightweight strategies to find the minimal adversarial perturbation. Differently from the state of the art, the proposed approaches allow formulating an error estimation theory of the approximate distance with respect to the theoretical one. Finally, extensive experiments are reported to evaluate the performance of the algorithms and support the theoretical findings. The obtained results show that the proposed strategies approximate the theoretical distance for samples close to the classification boundary, leading to provable robustness guarantees against any adversarial attack.
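One elementary building block for such estimates, shown here as a hedged sketch rather than either of the paper's strategies, is bisection along a fixed direction: the distance at which the predicted label flips is an upper bound on the minimal adversarial perturbation.

```python
import numpy as np

def boundary_distance_along(f, x, direction, hi=10.0, tol=1e-6):
    """Bisection for the distance from x to the decision boundary of binary
    classifier f along a fixed direction. Yields an upper bound on the
    minimal adversarial perturbation (exact only if the chosen direction
    happens to be the closest one)."""
    d = direction / np.linalg.norm(direction)
    label = f(x)
    if f(x + hi * d) == label:
        return np.inf  # no label flip within reach along this direction
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(x + mid * d) == label:
            lo = mid
        else:
            hi = mid
    return hi
```

Taking the minimum of such bounds over many directions tightens the estimate; the contribution of the paper is to bound the remaining error provably, which this sketch does not attempt.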
Machine learning methods are getting increasingly better at making predictions, but at the same time they are also becoming more complex and less transparent. As a result, explainers are often relied upon to provide interpretability for these black-box prediction models. As crucial diagnostic tools, it is important that these explainers themselves are reliable. In this paper we focus on one particular aspect of reliability, namely that an explainer should give similar explanations for similar data inputs. We formalize this notion by introducing and defining explainer astuteness, analogous to the astuteness of classifiers. Our formalism is inspired by the concept of probabilistic Lipschitzness, which captures the probability of local smoothness of a function. For a variety of explainers (e.g., SHAP, RISE, CXPlain), we provide lower-bound guarantees on their astuteness given the Lipschitzness of the prediction function. These theoretical results imply that locally smooth prediction functions lend themselves to locally robust explanations. We evaluate these results empirically on simulated as well as real datasets.
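Astuteness is a probability, so it can be estimated empirically by Monte Carlo: sample nearby input pairs and check how often their explanations differ by at most a Lipschitz factor times the input distance. The sampling scheme and function name below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def empirical_astuteness(explainer, X, r=0.5, L=1.0, n_pairs=200, seed=0):
    """Monte-Carlo estimate of explainer astuteness: the probability that
    two inputs within radius r of each other receive explanations whose
    distance is at most L times their input distance."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_pairs):
        x = X[rng.integers(len(X))]
        delta = rng.standard_normal(x.size)
        delta *= rng.uniform(0, r) / np.linalg.norm(delta)
        x2 = x + delta
        de = np.linalg.norm(explainer(x) - explainer(x2))
        hits += de <= L * np.linalg.norm(delta) + 1e-12
    return hits / n_pairs
```

A perfectly constant explainer, such as the gradient of a linear model, attains astuteness 1 for any L, which matches the intuition that smooth predictors yield robust explanations.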
While several types of post hoc explanation methods (e.g., feature attribution methods) have been proposed in recent literature, there is little work on systematically benchmarking these methods in an efficient and transparent manner. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods. OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, (ii) open-source implementations of 22 quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, and (iii) the first-ever public XAI leaderboards for benchmarking explanations. OpenXAI is easily extensible, as users can readily evaluate custom explanation methods and incorporate them into our leaderboards. Overall, OpenXAI provides an automated end-to-end pipeline that not only simplifies and standardizes the evaluation of post hoc explanation methods, but also promotes transparency and reproducibility in benchmarking these methods. The OpenXAI datasets and data loaders, implementations of state-of-the-art explanation methods and evaluation metrics, as well as the leaderboards, are publicly available at https://open-xai.github.io/.
We consider a new topological featurization of $d$-dimensional images, obtained by convolving images with various filters before computing persistence. Viewing a convolution filter as a motif within an image, the persistence diagram of the resulting convolution describes the way the motif is distributed throughout the image. This pipeline, which we call convolutional persistence, extends the capacity of topology to observe patterns in image data. Indeed, we prove that (generically speaking) for any two images one can find some filter for which they produce different persistence diagrams, so that the collection of all possible convolutional persistence diagrams for a given image is an injective invariant. This is proven by showing convolutional persistence to be a special case of another topological invariant, the Persistent Homology Transform. Other advantages of convolutional persistence are improved stability and robustness to noise, greater flexibility for data-dependent vectorizations, and reduced computational complexity for convolutions with large stride vectors. In addition, we present a suite of experiments showing that convolutions greatly improve the predictive power of persistence, even when one uses random filters and vectorizes the resulting diagrams by recording only their total persistences.
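The two-stage pipeline, convolve first, then compute persistence of the resulting image, can be sketched end to end in a few dozen lines for the 0-dimensional sublevel-set case. This is a minimal self-contained illustration with assumed function names; the paper works with general $d$-dimensional images and full persistence, not just H0.

```python
import numpy as np

def conv2d(img, filt):
    """Valid-mode 2D cross-correlation of an image with a filter."""
    H, W = img.shape
    h, w = filt.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * filt)
    return out

def sublevel_h0(img):
    """0-dim persistence pairs (birth, death) of the sublevel-set filtration
    of a 2D image under 4-connectivity; the global minimum never dies."""
    H, W = img.shape
    order = np.argsort(img, axis=None)
    parent = -np.ones(H * W, dtype=int)  # -1 == pixel not yet in filtration
    birth = np.full(H * W, np.inf)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    pairs = []
    for p in order:
        parent[p] = p
        birth[p] = img.flat[p]
        _, j = divmod(p, W)
        for q in (p - W, p + W, p - 1, p + 1):
            if q == p - 1 and j == 0:
                continue
            if q == p + 1 and j == W - 1:
                continue
            if 0 <= q < H * W and parent[q] != -1:
                ra, rb = find(p), find(q)
                if ra != rb:
                    # Elder rule: the younger component dies at this value.
                    young, old = (ra, rb) if birth[ra] > birth[rb] else (rb, ra)
                    pairs.append((birth[young], img.flat[p]))
                    parent[young] = old
    for r in {find(p) for p in range(H * W)}:
        pairs.append((birth[r], np.inf))
    return sorted(pairs)
```

Convolutional persistence of an image with a given filter is then simply `sublevel_h0(conv2d(img, filt))`; varying the filter varies which motifs the diagram is sensitive to.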
Defying the classical wisdom of statistical learning theory, modern deep neural networks generalize well even though they typically contain millions of parameters. Recently, it has been shown that the trajectories of iterative optimization algorithms can possess fractal structures, and that their generalization error can be formally linked to the complexity of such fractals. This complexity is measured by the fractal's intrinsic dimension, a quantity usually much smaller than the number of parameters in the network. Even though this perspective provides an explanation for why overparametrized networks do not overfit, computing the intrinsic dimension (e.g., for monitoring generalization during training) is a notoriously difficult task, where existing methods typically fail even in moderate ambient dimensions. In this study, we consider this problem from the lens of topological data analysis (TDA) and develop a generic computational tool built on rigorous mathematical foundations. By making a novel connection between learning theory and TDA, we first show that the generalization error can be equivalently bounded in terms of a notion called the "persistent homology dimension" (PHD), where, compared with prior work, our approach does not require any additional geometric or statistical assumptions on the training dynamics. Then, by utilizing recently established theoretical results and TDA tools, we develop an efficient algorithm to estimate PHD at the scale of modern deep neural networks, and further provide visualization tools to help understand generalization in deep learning. Our experiments show that the proposed approach can efficiently compute a network's intrinsic dimension in a variety of settings, and that this dimension is predictive of the generalization error.
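The 0-dimensional PH dimension admits a compact estimator: the total 0-dim persistence of an $n$-point subsample (which equals the total edge length of its Euclidean minimum spanning tree) scales like $n^{1-1/d}$, so fitting that power law across subsample sizes and solving for $d$ yields a dimension estimate. The sketch below is a simplified estimator in the spirit of the paper's PHD, not its exact algorithm; sizes and the fitting scheme are assumptions.

```python
import numpy as np

def mst_total_length(X):
    """Total edge length of the Euclidean MST of X (equals the total 0-dim
    persistence of the Rips filtration), via Prim's algorithm."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = D[0].copy()
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, D[j])
    return total

def ph_dimension(X, sizes=(100, 200, 400, 800), seed=0):
    """Estimate intrinsic dimension by fitting the power law
    E^0(n) ~ n^(1 - 1/d) of total MST length against subsample size n."""
    rng = np.random.default_rng(seed)
    logs_n, logs_e = [], []
    for n in sizes:
        idx = rng.choice(len(X), size=n, replace=False)
        logs_n.append(np.log(n))
        logs_e.append(np.log(mst_total_length(X[idx])))
    slope = np.polyfit(logs_n, logs_e, 1)[0]
    return 1.0 / (1.0 - slope)
```

Applied to a uniform sample of the unit square, the estimate hovers around 2, illustrating why this quantity can be far smaller than the ambient parameter count of a network's trajectory.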