As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this paper, we demonstrate that post hoc explanation techniques that rely on input perturbations, such as LIME and SHAP, are not reliable. Specifically, we propose a novel scaffolding technique that effectively hides the biases of any given classifier by allowing an adversarial entity to craft an arbitrary desired explanation. Our approach can be used to scaffold any biased classifier in such a way that its predictions on the input data distribution still remain biased, but the post hoc explanations of the scaffolded classifier look innocuous. Using extensive evaluation with multiple real-world datasets (including COMPAS), we demonstrate how extremely biased (racist) classifiers crafted by our framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.
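The scaffolding construction lends itself to a compact sketch: a wrapper serves the biased model on in-distribution inputs and an innocuous model on the off-manifold samples that perturbation-based explainers generate. The sketch below is a hedged reading of the abstract, not the authors' code; the OOD detector, its training perturbations, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ScaffoldedClassifier:
    def __init__(self, biased_model, innocuous_model, real_X):
        self.biased_model = biased_model        # e.g., decides on a sensitive attribute
        self.innocuous_model = innocuous_model  # decides on harmless features only
        # Detector distinguishing real rows from LIME/SHAP-style perturbation samples.
        fake_X = real_X + np.random.default_rng(0).normal(scale=1.0, size=real_X.shape)
        X = np.vstack([real_X, fake_X])
        y = np.concatenate([np.zeros(len(real_X)), np.ones(len(fake_X))])
        self.ood_detector = RandomForestClassifier(n_estimators=100).fit(X, y)

    def predict(self, X):
        out = self.biased_model.predict(X)
        perturbed = self.ood_detector.predict(X).astype(bool)
        if perturbed.any():                      # explainer queries get the innocuous answer
            out[perturbed] = self.innocuous_model.predict(X[perturbed])
        return out
```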
Monumental advancements in artificial intelligence (AI) have lured the interest of doctors, lenders, judges, and other professionals. While these high-stakes decision-makers are optimistic about the technology, those familiar with AI systems are wary about the lack of transparency of its decision-making processes. Perturbation-based post hoc explainers offer a model agnostic means of interpreting these systems while only requiring query-level access. However, recent work demonstrates that these explainers can be fooled adversarially. This discovery has adverse implications for auditors, regulators, and other sentinels. With this in mind, several natural questions arise - how can we audit these black box systems? And how can we ascertain that the auditee is complying with the audit in good faith? In this work, we rigorously formalize this problem and devise a defense against adversarial attacks on perturbation-based explainers. We propose algorithms for the detection (CAD-Detect) and defense (CAD-Defend) of these attacks, which are aided by our novel conditional anomaly detection approach, KNN-CAD. We demonstrate that our approach successfully detects whether a black box system adversarially conceals its decision-making process and mitigates the adversarial attack on real-world data for the prevalent explainers, LIME and SHAP.
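The abstract gives no algorithmic detail, so the following is only one plausible sketch of a kNN-style conditional anomaly check in the spirit of CAD-Detect/KNN-CAD (not the paper's method): compare the black box's outputs on explainer-style perturbations against the outputs of their nearest in-distribution neighbors; a large gap suggests the model answers off-manifold queries differently, i.e., possible scaffolding.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def scaffolding_score(black_box_predict, real_X, n_perturb=500, k=10, scale=1.0, seed=0):
    """Hypothetical detector score: large values mean the black box treats
    perturbation-style queries very differently from nearby real points."""
    rng = np.random.default_rng(seed)
    base = real_X[rng.integers(0, len(real_X), size=n_perturb)]
    perturbed = base + rng.normal(scale=scale, size=base.shape)
    _, neigh_idx = NearestNeighbors(n_neighbors=k).fit(real_X).kneighbors(perturbed)
    p_perturbed = black_box_predict(perturbed)                               # shape (n_perturb,)
    p_neighbors = black_box_predict(real_X[neigh_idx.ravel()]).reshape(n_perturb, k)
    return float(np.mean(np.abs(p_perturbed[:, None] - p_neighbors)))
```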
While several types of post hoc explanation methods (e.g., feature attribution methods) have been proposed in recent literature, there has been little work on systematically benchmarking these methods in an efficient and transparent manner. Here, we introduce OpenXAI, a comprehensive and extensible open-source framework for evaluating and benchmarking post hoc explanation methods. OpenXAI comprises the following key components: (i) a flexible synthetic data generator and a collection of diverse real-world datasets, pre-trained models, and state-of-the-art feature attribution methods, (ii) open-source implementations of 22 quantitative metrics for evaluating the faithfulness, stability (robustness), and fairness of explanation methods, and (iii) the first-ever public XAI leaderboards for benchmarking explanations. OpenXAI is easily extensible, as users can readily evaluate custom explanation methods and incorporate them into our leaderboards. Overall, OpenXAI provides an automated end-to-end pipeline that not only simplifies and standardizes the evaluation of post hoc explanation methods, but also promotes transparency and reproducibility in benchmarking them. The OpenXAI datasets and data loaders, implementations of state-of-the-art explanation methods and evaluation metrics, and leaderboards are publicly available at https://open-xai.github.io/.
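For orientation, the snippet below shows the flavour of metric OpenXAI standardizes: a simple prediction-gap-style faithfulness check. It is an illustrative stand-in, not OpenXAI's API or one of its 22 metrics; the real implementations live at the URL above.

```python
import numpy as np

def prediction_gap_unimportant(model_predict, x, attributions, top_k=3, n_samples=100, scale=0.1, seed=0):
    """Toy faithfulness check: if the explanation is faithful, perturbing the
    features it marks as unimportant should barely move the prediction."""
    rng = np.random.default_rng(seed)
    unimportant = np.argsort(np.abs(attributions))[:-top_k]   # everything but the top-k features
    base = model_predict(x[None, :])[0]
    gaps = []
    for _ in range(n_samples):
        x_pert = x.copy()
        x_pert[unimportant] += rng.normal(scale=scale, size=len(unimportant))
        gaps.append(abs(model_predict(x_pert[None, :])[0] - base))
    return float(np.mean(gaps))
```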
As explanations of black boxes are increasingly used to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work has shown that explanations generated by state-of-the-art techniques are inconsistent and unstable, and provide very little insight into their correctness and reliability. In addition, these methods are computationally inefficient and require significant hyperparameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP, which output credible intervals for the feature importances that capture the associated uncertainty. The resulting explanations not only allow us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within a given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample them for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of the proposed framework.
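A minimal sketch of the general construction (not the authors' BayesLIME/BayesSHAP code): fit a Bayesian linear surrogate to kernel-weighted perturbations of the input, then read credible intervals for the feature importances off the posterior. scikit-learn's BayesianRidge is used here as a convenient stand-in for the paper's Bayesian model.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

def bayesian_local_explanation(model_predict, x, n_samples=1000, scale=0.5, kernel_width=0.75, seed=0):
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=scale, size=(n_samples, len(x)))      # LIME-style perturbations
    y = model_predict(Z)                                           # black-box output, e.g. P(y=1)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)  # proximity kernel weights
    reg = BayesianRidge().fit(Z, y, sample_weight=w)
    mean = reg.coef_                                   # posterior mean importances
    std = np.sqrt(np.diag(reg.sigma_))                 # posterior std of each coefficient
    return mean, mean - 1.96 * std, mean + 1.96 * std  # ~95% credible interval per feature
```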
As post hoc explanation methods are increasingly leveraged to explain complex models in high-stakes settings, it becomes critical to ensure that the quality of the resulting explanations is consistently high across all population subgroups, including minority groups. For instance, explanations associated with instances belonging to one gender subgroup (e.g., female) should not be of lower quality than those associated with other genders. However, there is little research that assesses whether such group-based disparities exist in the quality of the explanations output by state-of-the-art explanation methods. In this work, we address this gap by initiating the study of identifying group-based disparities in explanation quality. To this end, we first outline the key properties that constitute explanation quality and where disparities in these properties are particularly problematic. We then leverage these properties to propose a novel evaluation framework that can quantitatively measure disparities in the quality of explanations produced by state-of-the-art methods. Using this framework, we carry out a rigorous empirical analysis to understand whether group-based disparities in explanation quality arise. Our results indicate that such disparities are more likely to occur when the models being explained are complex and highly non-linear. In addition, we observe that certain post hoc explanation methods (e.g., Integrated Gradients, SHAP) are more likely to exhibit such disparities. To the best of our knowledge, this work is the first to highlight and study the problem of disparities in explanation quality. In doing so, our work sheds light on previously unexplored ways in which explanation methods may introduce unfairness in real-world decision making.
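One plausible instantiation of such a disparity measurement (hypothetical, not the paper's framework): compute a per-instance explanation-quality score (here, the local fidelity of a linear explanation returned by an `explain_fn` of your choice) and compare its average across subgroups.

```python
import numpy as np

def fidelity_disparity(explain_fn, model_predict, X, group_mask):
    """Difference in mean local fidelity between two subgroups (group_mask vs. the rest).
    explain_fn(x) is assumed to return (intercept, coefficients) of a local surrogate."""
    def fidelity(x):
        intercept, coefs = explain_fn(x)
        return 1.0 - abs(model_predict(x[None, :])[0] - (intercept + coefs @ x))
    scores = np.array([fidelity(x) for x in X])
    return float(scores[group_mask].mean() - scores[~group_mask].mean())
```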
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
Explainable Artificial Intelligence (XAI) is a promising solution to improve the transparency of machine learning (ML) pipelines. We systematize the growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 5 different objectives within an ML pipeline, namely 1) XAI-enabled decision support, 2) applied XAI for security tasks, 3) model verification via XAI, 4) explanation verification and robustness, and 5) offensive use of explanations. We further categorize the literature w.r.t. the targeted security domain. Our analysis of the literature indicates that many XAI applications are designed with little understanding of how they might be integrated into analyst workflows; user studies for explanation evaluation are conducted in only 14% of the cases. The literature also rarely disentangles the roles of the various stakeholders. In particular, the role of the model designer is minimized within the security literature. To this end, we present an illustrative use case accentuating the role of model designers. We demonstrate cases where XAI can help with model verification and cases where it may lead to erroneous conclusions. The systematization and the use case enable us to challenge several assumptions and present open problems that can help shape the future of XAI within cybersecurity.
Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions. An intriguing class of explanations is through counterfactuals, hypothetical examples that show people how to obtain a different prediction. We posit that effective counterfactual explanations should satisfy two properties: feasibility of the counterfactual actions given user context and constraints, and diversity among the counterfactuals presented. To this end, we propose a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes. To evaluate the actionability of counterfactuals, we provide metrics that enable comparison of counterfactual-based methods to other local explanation methods. We further address necessary tradeoffs and point to causal implications in optimizing for counterfactuals. Our experiments on four real-world datasets show that our framework can generate a set of counterfactuals that are diverse and well approximate local decision boundaries, outperforming prior approaches to generating diverse counterfactuals. We provide an implementation of the framework at https://github.com/microsoft/DiCE.
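The diversity term can be made concrete with a short sketch following the determinantal-point-process formulation the abstract mentions: the diversity of a candidate set is the determinant of a pairwise similarity kernel (the kernel choice below is illustrative). The full method, including the feasibility constraints, is in the linked repository.

```python
import numpy as np

def dpp_diversity(counterfactuals):
    """Diversity of a counterfactual set as det(K) with K_ij = 1 / (1 + dist(c_i, c_j));
    a larger determinant means the counterfactuals are more mutually distinct."""
    C = np.asarray(counterfactuals, dtype=float)
    dists = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)
    return float(np.linalg.det(1.0 / (1.0 + dists)))
```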
Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely impacted by algorithmic decisions. As they are deployed in critical applications (e.g., law enforcement, financial lending), it becomes important to ensure that we clearly understand the vulnerabilities of these methods and find ways to address them. However, there is little understanding of the vulnerabilities and shortcomings of counterfactual explanations. In this work, we introduce the first framework that describes the vulnerabilities of counterfactual explanations and shows how they can be manipulated. More specifically, we show that counterfactual explanations may converge to drastically different counterfactuals under a small perturbation, indicating that they are not robust. Leveraging this insight, we introduce a novel objective to train seemingly fair models where counterfactual explanations find much lower cost recourse under a slight perturbation. We describe how these models can unfairly provide low-cost recourse for specific subgroups in the data while appearing fair to auditors. We perform experiments on loan and violent crime prediction datasets where certain subgroups achieve up to 20x lower cost recourse under the perturbation. These results raise concerns regarding the dependability of current counterfactual explanation techniques, which we hope will inspire investigations into robust counterfactual explanations.
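The non-robustness being exploited is easy to probe with a generic gradient-based counterfactual search (a sketch, not the paper's attack or training objective): run the same search from x and from x + δ for a tiny δ and compare the recourse costs.

```python
import numpy as np

def gradient_counterfactual(grad_fn, predict_proba, x, target=0.5, lr=0.1, steps=500):
    """Generic search: move x along the gradient of P(y=1) until the prediction flips;
    returns the counterfactual and its L1 recourse cost."""
    cf = x.copy()
    for _ in range(steps):
        if predict_proba(cf[None, :])[0] >= target:
            break
        cf = cf + lr * grad_fn(cf)
    return cf, float(np.abs(cf - x).sum())

# Instability probe: a large gap between cost_a and cost_b is the behaviour the paper exploits.
# _, cost_a = gradient_counterfactual(grad_fn, predict_proba, x)
# _, cost_b = gradient_counterfactual(grad_fn, predict_proba, x + delta)   # tiny delta
```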
A wide variety of fairness metrics and explainable artificial intelligence (XAI) approaches have been proposed in the literature to identify bias in machine learning models that are used in critical real-world settings. However, merely reporting a model's bias, or generating explanations using existing XAI techniques, is insufficient to locate and eventually mitigate the sources of bias. In this work, we introduce Gopher, a system that produces compact and interpretable explanations of bias or unexpected model behavior by identifying coherent subsets of the training data that are root causes of this behavior. Specifically, we introduce the concept of causal responsibility, which quantifies the extent to which intervening on the training data by removing or updating subsets of it can resolve the bias. Building on this concept, we develop an efficient approach for generating the top patterns that explain model bias, which leverages techniques from the ML community to approximate causal responsibility and uses pruning rules to manage the large search space of patterns. Our experimental evaluation demonstrates the effectiveness of Gopher in generating interpretable explanations for identifying and debugging sources of bias.
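A brute-force reading of causal responsibility (Gopher itself approximates this and prunes the pattern search rather than retraining): how much a bias metric drops when the model is retrained without a candidate subset of the training data. Demographic parity is used below only as an example metric.

```python
import numpy as np
from sklearn.base import clone

def demographic_parity_gap(model, X, sensitive):
    preds = model.predict(X)
    return abs(preds[sensitive == 1].mean() - preds[sensitive == 0].mean())

def causal_responsibility(estimator, X_train, y_train, subset_mask, X_test, s_test):
    """Bias reduction attributable to removing one training-data subset (brute force)."""
    full = clone(estimator).fit(X_train, y_train)
    without = clone(estimator).fit(X_train[~subset_mask], y_train[~subset_mask])
    return demographic_parity_gap(full, X_test, s_test) - demographic_parity_gap(without, X_test, s_test)
```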
There exist several methods that aim to address the crucial task of understanding the behaviour of AI/ML models. Arguably, the most popular among them are local explanations that focus on investigating model behaviour for individual instances. Several methods have been proposed for local analysis, but relatively less effort has gone into understanding whether the explanations are robust and accurately reflect the behaviour of the underlying models. In this work, we present a survey of the works that analyse the robustness of two classes of local explanations (feature importance and counterfactual explanations) that are popularly used in analysing AI/ML models in finance. The survey aims to unify existing definitions of robustness, introduce a taxonomy to classify different robustness approaches, and discuss some interesting results. Finally, the survey introduces some pointers for extending current robustness analysis approaches so as to identify reliable explainability methods.
Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
In recent years, numerous explanation methods based on perturbations of the input data have been introduced to improve our understanding of decisions made by black-box models. The goal of this work is to introduce a novel perturbation scheme so that more faithful and robust explanations can be obtained. Our study focuses on the impact of perturbation directions on the data topology. We show that perturbations along directions orthogonal to the input manifold better preserve the data topology, both in a worst-case analysis of the discrete Gromov-Hausdorff distance and in an average-case analysis via persistent homology. From these results, we introduce the EMaP algorithm, which realizes the orthogonal perturbation scheme. Our experiments show that EMaP not only improves the explainers' performance, but also helps them overcome recent attacks against perturbation-based methods.
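A simplified stand-in for the orthogonal perturbation scheme (EMaP's actual construction and its topological guarantees are more involved): estimate the local tangent space of the data manifold with PCA over nearest neighbours and keep only the noise components orthogonal to it.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def orthogonal_perturbations(X, x, n_samples=100, k=20, d_manifold=5, scale=0.1, seed=0):
    """Perturb x only along directions orthogonal to a local PCA estimate of the
    manifold's tangent space (a simplified sketch of the idea above)."""
    rng = np.random.default_rng(seed)
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(x[None, :])
    neighbors = X[idx[0]] - X[idx[0]].mean(axis=0)
    _, _, Vt = np.linalg.svd(neighbors, full_matrices=True)
    tangent = Vt[:d_manifold]                              # rows span the tangent estimate
    noise = rng.normal(scale=scale, size=(n_samples, X.shape[1]))
    noise -= noise @ tangent.T @ tangent                   # strip tangent components
    return x + noise
```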
The application of Artificial Intelligence (AI) and Machine Learning (ML) to cybersecurity challenges has gained traction in industry and academia, partially as a result of widespread malware attacks on critical systems such as cloud infrastructures and government institutions. Intrusion Detection Systems (IDS), which use some form of AI, have received wide adoption due to their ability to handle vast amounts of data with high prediction accuracy. These systems are hosted in the organizational Cyber Security Operation Center (CSOC) as a defense tool to monitor and detect malicious network flows that would otherwise impact confidentiality, integrity, and availability (CIA). CSOC analysts rely on these systems to make decisions about the detected threats. However, IDSs designed using Deep Learning (DL) techniques are often treated as black box models and do not provide a justification for their predictions. This creates a barrier for CSOC analysts, as they are unable to improve their decisions based on the model's predictions. One solution to this problem is to design explainable IDS (X-IDS). This survey reviews the state of the art in explainable AI (XAI) for IDS, its current challenges, and discusses how these challenges carry over to the design of an X-IDS. In particular, we discuss black box and white box approaches comprehensively. We also present the tradeoff between these approaches in terms of their performance and ability to produce explanations. Furthermore, we propose a generic architecture that considers human-in-the-loop, which can be used as a guideline when designing an X-IDS. Research recommendations are given from three critical viewpoints: the need to define explainability for an IDS, the need to create explanations tailored to various stakeholders, and the need to design metrics to evaluate explanations.
A critical problem in post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, SHAP, Occlusion, Vanilla Gradients, Gradients x Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods which demonstrates that no single method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
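The shared template is short enough to write out: sample a neighbourhood around x, weight the samples by a locality kernel, and fit a linear surrogate to the black box with weighted least squares. The eight methods differ only in which neighbourhood distribution and loss they plug into this template; the Gaussian neighbourhood and exponential kernel below are just one choice, not the paper's formal construction.

```python
import numpy as np

def local_function_approximation(model_predict, x, n_samples=1000, scale=0.5, kernel_width=0.75, seed=0):
    """Weighted least-squares fit of a linear surrogate g(z) ~= f(z) around x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=scale, size=(n_samples, len(x)))      # neighbourhood distribution
    y = model_predict(Z)
    w = np.sqrt(np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2))
    A = np.hstack([Z, np.ones((n_samples, 1))])                    # linear model with intercept
    coefs, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return coefs[:-1], coefs[-1]                                   # attributions, intercept
```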
Nowadays, Artificial Intelligence (AI) has become a fundamental component of clinical and remote healthcare applications, but even the best-performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques are designed to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. Notably, XAI has not attracted the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time-series data, respectively, and XAI has not been analysed on these data types to the same extent, whereas computer vision and Natural Language Processing (NLP) are the reference applications. To provide an overview of the XAI methods most suited to tabular and time-series data in the healthcare domain, this paper reviews the literature of the last 5 years, illustrating the types of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centered quality assessment as key features for ensuring effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.
Detecting latent structure within a dataset is a crucial step in performing dataset analysis. However, existing state-of-the-art techniques for subclass discovery are limited: either they are restricted to detecting a very small number of outliers, or they lack the statistical power to deal with complex data such as images or audio. This paper proposes a solution to this subclass discovery problem: by leveraging instance explanation methods, an existing classifier can be extended to detect latent classes via differences in the classifier's internal decisions. This works not only with simple classification techniques but also with deep neural networks, allowing for a powerful and flexible approach to detecting latent structure within datasets. Effectively, this represents a projection of the dataset into the classifier's "explanation space," and preliminary results show that this technique outperforms the baseline for the detection of latent classes even with limited processing. The paper also includes a pipeline for automatically analysing classifiers, as well as a web application for interactively exploring the results of this technique.
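The "projection into explanation space" can be sketched in a few lines (a simplification of the paper's pipeline): compute a per-instance attribution vector with any instance explanation method, then cluster those vectors to surface latent subclasses that the label space hides.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_subclasses(explain_fn, X, n_subclasses=4, seed=0):
    """explain_fn(x) returns an attribution vector (e.g., SHAP values or gradients);
    clustering in that space groups instances the classifier decides on differently."""
    E = np.vstack([explain_fn(x) for x in X])              # (n_instances, n_features)
    return KMeans(n_clusters=n_subclasses, n_init=10, random_state=seed).fit_predict(E)
```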
This paper examines two distinct but related questions concerning explainable AI (XAI) practices. Machine learning (ML) is increasingly important in financial services, such as pre-approval, credit underwriting, investments, and various front-end and back-end activities. Machine learning can automatically detect non-linearities and interactions in training data, facilitating faster and more accurate credit decisions. However, machine learning models are opaque and hard to explain, which are key elements needed for establishing a reliable technology. The study compares various machine learning models, including single classifiers (logistic regression, decision trees, LDA, QDA), heterogeneous ensembles (AdaBoost, Random Forest), and sequential neural networks. The results indicate that the ensemble classifiers and neural networks outperform the single classifiers. In addition, two advanced post hoc model-agnostic explainability techniques, LIME and SHAP, are utilized to assess ML-based credit scoring models, using an open-access dataset provided by the US-based P2P lending platform Lending Club. For this study, we also use machine learning algorithms to develop new investment models and explore portfolio strategies that can maximize profitability while minimizing risk.
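As a brief illustration of the post hoc step, a hedged sketch using the commonly documented shap interface on synthetic stand-in data rather than the Lending Club dataset (it does not reproduce the study's models or features):

```python
import numpy as np
import shap                                     # pip install shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # stand-ins for e.g. income, DTI, FICO, utilization, term
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)           # explainer for tree ensembles
shap_values = explainer.shap_values(X[:100])    # per-feature contributions for 100 applicants
# (the return shape depends on the shap version: per-class arrays or a 3-D array)
```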
In the last years many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, it delineates explicitly or implicitly its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help the researcher find the proposals most useful for their own work. The proposed classification of approaches to open black box models should also be useful for putting the many open research questions in perspective.
Many methods have been developed to understand complex predictive models, and high expectations are placed on post hoc model explainability. It turns out that such explanations are neither robust nor trustworthy, and they can be fooled. This paper presents techniques for attacking Partial Dependence (plots, profiles, PDP), which are among the most popular methods for explaining any predictive model trained on tabular data. We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications where auditability has become a must-have trait supporting black-box machine learning. The fooling is performed via poisoning the data to bend and shift the explanations in the desired direction using genetic and gradient algorithms. We believe this to be the first work to use a genetic algorithm for manipulating explanations, and it is transferable as it generalizes both ways: in a model-agnostic and an explanation-agnostic manner.
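Why PD profiles are attackable is visible from how they are computed: an average over the data of the model's prediction with one feature clamped, so relocating a handful of poisoned rows (and retraining) moves the whole curve. The sketch below shows the PD computation and the fitness a genetic or gradient attack could minimize; the authors' actual algorithms are more elaborate.

```python
import numpy as np

def partial_dependence(model_predict, X, feature, grid):
    """PD profile: average prediction over X with `feature` clamped to each grid value."""
    pd_vals = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        pd_vals.append(model_predict(X_mod).mean())
    return np.array(pd_vals)

def poisoning_fitness(fit_and_predict, X_poisoned, y, feature, grid, target_pd):
    """Objective a poisoning attack would minimize: distance between the PD profile of a
    model retrained on poisoned data and an attacker-chosen target profile."""
    predict = fit_and_predict(X_poisoned, y)      # retrain on poisoned data, return a predict fn
    return float(np.linalg.norm(partial_dependence(predict, X_poisoned, feature, grid) - target_pd))
```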