We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms, Sensitivity and Implementation Invariance, that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.
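To make the "few calls to the standard gradient operator" concrete, here is a minimal sketch of the published formula, assuming a PyTorch `model` that maps a batch to class logits and plain, unbatched `x` and `baseline` tensors; it is an illustration, not the authors' reference code.

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=64):
    """Riemann-sum approximation of IG along the straight line
    from `baseline` to `x` (right-endpoint rule, alpha in (0, 1])."""
    alphas = torch.linspace(0.0, 1.0, steps + 1)[1:]
    alphas = alphas.view(-1, *([1] * x.dim()))       # broadcast over features
    path = (baseline + alphas * (x - baseline)).requires_grad_(True)
    score = model(path)[:, target].sum()             # one batched forward pass
    grads = torch.autograd.grad(score, path)[0]      # one backward pass
    return (x - baseline) * grads.mean(dim=0)
```

By the completeness property, the returned attributions sum approximately to F(x) - F(baseline), with the gap shrinking as `steps` grows.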
As the efficacy of deep learning (DL) grows, so does concern over the poor interpretability of DL models. Attribution methods address interpretability by quantifying the importance of input features to model predictions. Among the various methods, Integrated Gradients (IG) stands out through the claim that other methods fail to satisfy desirable axioms, while IG and methods like it uniquely satisfy them. This paper examines fundamental aspects of IG and its applications/extensions: 1) We identify key differences between the function space to which IG applies and the function spaces of the supporting literature, differences that call previous claims of IG's uniqueness into question. We show that uniqueness can be established by introducing an additional axiom, \textit{non-decreasing positivity}. 2) We address questions of input sensitivity by identifying the classes of functions for which IG is/is not Lipschitz in the attributed input. 3) We show that the axioms satisfied by single-baseline methods have analogous properties for methods with probability-distribution baselines. 4) We introduce a computationally efficient method for identifying internal neurons that contribute to specified regions of an IG attribution map. Finally, we present experimental results validating this method.
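For reference, the attribution map whose uniqueness and Lipschitz continuity are at issue above is the standard Integrated Gradients path integral from the original paper:

```latex
\[
  \mathrm{IG}_i(x) \;=\; (x_i - x'_i) \int_0^1
  \frac{\partial F\bigl(x' + \alpha (x - x')\bigr)}{\partial x_i}\, d\alpha
\]
% F: the network, x: the input, x': the baseline. Completeness,
% \sum_i \mathrm{IG}_i(x) = F(x) - F(x'), follows from the gradient
% theorem for line integrals.
```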
End-to-end neural NLP architectures are notoriously hard to understand, which has spurred many efforts toward model explainability in recent years. A fundamental principle of model explanation is faithfulness: an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness and its significance for explainability. We then introduce recent advances in faithful explanation by grouping methods into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated through its representative studies, strengths, and shortcomings. Finally, we discuss all the above methods in terms of their common virtues and limitations, and reflect on directions for future work on faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field and a foundation for further exploration. For users hoping to better understand their own models, it can serve as an introductory handbook that helps choose the most suitable explanation method.
Alongside the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that make it possible to interpret complex non-linear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. While such explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and classification tasks, establish novel theoretical insights and analyses for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture non-linear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) have emerged as a means to explain CNN decision-making strategies. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and to give insight into their relative strengths and weaknesses, in order to help guide best practices. The XAI methods considered are first applied to an idealized attribution benchmark, in which the ground truth of the network's explanation is known a priori, to help objectively assess their performance. Secondly, we apply XAI to a climate-related prediction setting, namely to explain a CNN that has been trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, and ignorance of zero input) that had previously been overlooked in our field and that, if not taken into careful consideration, may lead to a distorted picture of the CNN decision strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help toward a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
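The idea of an attribution benchmark with known ground truth can be illustrated with a toy additive function, where each feature's true contribution is available in closed form; the function and the gradient-times-input candidate below are hypothetical stand-ins for the paper's benchmark networks and XAI methods.

```python
import torch

# Toy additive "network" with a closed-form ground truth: for
# F(x) = sum_i f_i(x_i) and a zero baseline, feature i's true
# contribution at x is f_i(x_i).
def F(x):
    return torch.sin(x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 2]

x = torch.randn(8, 3)
truth = torch.stack([torch.sin(x[:, 0]), x[:, 1] ** 2, 0.5 * x[:, 2]], dim=1)

x.requires_grad_(True)
grads = torch.autograd.grad(F(x).sum(), x)[0]
attr = grads * x.detach()                     # candidate attribution method

# Objective score: per-example correlation with the known ground truth.
for a, t in zip(attr, truth):
    print(torch.corrcoef(torch.stack([a, t]))[0, 1].item())
```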
Mitigating the dependence of deep neural networks (DNNs) on spurious correlations present in the training dataset is a rapidly emerging and important topic in deep learning. Recent approaches include priors on the feature attributions of a DNN that are incorporated into the training process to reduce the dependence on unwanted features. Until now, however, obtaining high-quality attributions that satisfy the desired axioms has come at the cost of the computation time needed to compute them. This has in turn led to either long training times or ineffective attribution priors. In this work, we break this trade-off by considering a class of efficiently axiomatically attributable DNNs for which an axiomatic attribution can be computed with only a single forward/backward pass. We formally prove that nonnegatively homogeneous DNNs, here termed $\mathcal{X}$-DNNs, are efficiently axiomatically attributable, and that they can be effortlessly constructed from a wide range of regular DNNs by simply removing the bias term of each layer. Various experiments demonstrate the advantages of $\mathcal{X}$-DNNs, which beat state-of-the-art generic attribution methods on regular DNNs for training with attribution priors.
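A minimal sketch of the construction, assuming a plain feed-forward ReLU architecture: with every bias term removed the network is nonnegatively homogeneous, so its gradient is constant along the ray from the zero baseline to the input and gradient-times-input coincides with Integrated Gradients in a single pass. This illustrates the idea only, not the paper's full $\mathcal{X}$-DNN recipe.

```python
import torch
import torch.nn as nn

# Dropping every bias term makes a ReLU network nonnegatively homogeneous,
# F(a * x) = a * F(x) for a >= 0, so the gradient is constant along the
# ray from the zero baseline and IG collapses to gradient-times-input.
xdnn = nn.Sequential(
    nn.Linear(16, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 1, bias=False),
)

x = torch.randn(1, 16, requires_grad=True)
grad = torch.autograd.grad(xdnn(x).sum(), x)[0]
attr = x * grad        # exact IG w.r.t. the zero baseline, one pass
print(attr.detach())
```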
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods prove practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
With the rise of deep neural networks, the challenge of explaining these networks' predictions has become increasingly recognized. While many methods for explaining the decisions of deep neural networks exist, there is currently no consensus on how to evaluate them. On the other hand, robustness is a popular topic in deep learning research; however, until recently it has hardly been discussed in connection with interpretability. In this tutorial, we first present gradient-based interpretability methods. These techniques use gradient signals to assign the burden of a decision to input features. Afterwards, we discuss how gradient-based methods can be evaluated for robustness and the role that adversarial robustness plays in producing meaningful explanations. We also discuss the limitations of gradient-based methods. Finally, we present best practices and attributes that should be examined before choosing an explainability method. We conclude with directions for future research in areas at the convergence of robustness and explainability.
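As a concrete illustration of the two themes above, the sketch below computes a vanilla gradient saliency map and probes its stability under small input noise; `model` (a batch-to-logits classifier) is an assumed interface, and the probe is a generic sanity check rather than the tutorial's specific protocol.

```python
import torch
import torch.nn.functional as F

def saliency(model, x, target):
    """Vanilla gradient saliency: d(logit of target) / d(input)."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]
    return torch.autograd.grad(score, x)[0].detach()

def stability(model, x, target, eps=1e-2):
    """Cosine similarity of saliency maps for x and a noisy copy of x;
    values near 1 suggest an explanation robust to small perturbations."""
    s0 = saliency(model, x, target).flatten()
    s1 = saliency(model, x + eps * torch.randn_like(x), target).flatten()
    return F.cosine_similarity(s0, s1, dim=0)
```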
The increasing availability of electronic health record (EHR) data and advances in deep learning (DL) technology have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognized value of deep learning in healthcare, impediments to further adoption in real-world healthcare settings remain due to the black-box nature of DL. There is therefore an emerging need for interpretable DL, which allows end users to evaluate model decisions so as to know whether to accept or reject predictions and recommendations before taking action. In this review, we focus on the interpretability of DL models in healthcare. We first introduce methods of deep-learning interpretability as a methodological reference for future researchers and clinical practitioners in this field. Beyond the details of these methods, we also discuss the advantages and disadvantages of each and the scenarios each of them suits, so that interested readers know how to compare and choose among them. Moreover, we discuss how these methods, originally developed to solve general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven technologies. Overall, we hope this survey can help researchers and practitioners in both artificial intelligence (AI) and the clinical field understand what methods are available for enhancing the interpretability of their DL models and choose the best one accordingly.
The emerging field of explainable artificial intelligence (XAI) aims to bring transparency to today's powerful but opaque deep learning models. While local XAI methods explain individual predictions in the form of attribution maps, thereby identifying where important features occur (but not providing information about what they represent), global explanation techniques visualize what concepts a model has generally learned to encode. Both approaches thus provide only partial insights and leave the burden of interpreting the model's reasoning to the user. Only a few contemporary techniques aim to combine the principles behind both local and global XAI to obtain more informative explanations. Those methods, however, are often limited to specific model architectures or impose additional requirements on training regimes or data and label availability, which renders post-hoc application to arbitrarily pre-trained models practically impossible. In this work, we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus allows answering both the "where" and "what" questions without additional constraints. We further introduce the principle of Relevance Maximization for finding representative examples based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization lead to more human-interpretable explanations and provide deep insights into the model's representations and reasoning through concept atlases, concept composition analyses, and quantitative investigations of concept subspaces and their role in fine-grained decision making.
Recent years have been characterized by an upsurge in opaque automatic decision support systems such as deep neural networks (DNNs). Although they offer great generalization and prediction skill, their functioning does not allow obtaining detailed explanations of their behaviour. As opaque machine learning models are increasingly being used to make important predictions in critical environments, the danger is to create and use decisions that are not justifiable or legitimate. Therefore, there is a general agreement on the importance of endowing machine learning models with explainability. Explainable artificial intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is meant to be the go-to handbook for any audience with a computer science background aiming to get intuitive insights into machine learning models, accompanied by straightforward, quick, and intuitive explanations. This article aims to fill the lack of a compelling XAI guide by applying XAI techniques to particular day-to-day models, datasets, and use cases. Figure 1 acts as a flowchart/map for the reader and should help them find the ideal method to use according to their type of data. In each chapter, the reader will find a description of the proposed method as well as an example of its use on biomedical applications, together with Python notebooks that can be easily modified to apply to specific applications.
Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for even moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear--for example, Integrated Gradients and SHAP--can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as identifying local model behaviour, spurious feature identification, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks. In particular, we show that once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
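The "simple and direct approach of repeated model evaluations" can be as plain as the following sketch: answer the end-task question "what happens if feature i is set to a baseline?" by evaluating the model twice, rather than inferring it from an attribution map. The toy model is illustrative only.

```python
import numpy as np

def direct_effect(f, x, i, baseline_value=0.0):
    """Answer 'what happens if feature i is removed?' with two model
    evaluations instead of an attribution map."""
    x = np.asarray(x, dtype=float)
    x_ablated = x.copy()
    x_ablated[i] = baseline_value
    return f(x) - f(x_ablated)

# Toy model: the probe recovers each feature's ablation effect exactly.
f = lambda x: x[0] * x[1] + 2.0 * x[2]
print([direct_effect(f, [1.0, 3.0, -1.0], i) for i in range(3)])
```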
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
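To fix ideas, here is a hedged sketch of a generic attributional attack of the kind such surveys cover: perturb the input so the gradient saliency map changes while the target logit stays nearly fixed. The loss weighting and `model` interface are assumptions; note that for piecewise-linear ReLU networks the saliency term has zero gradient almost everywhere, so smooth activations (e.g., softplus) are typically substituted before attacking.

```python
import torch
import torch.nn.functional as F

def attribution_attack(model, x, target, eps=0.05, steps=50, lr=1e-2):
    """Find a small perturbation that distorts the gradient saliency map
    while keeping the target logit (hence the prediction) nearly fixed."""
    x = x.detach()
    xr = x.clone().requires_grad_(True)
    out0 = model(xr)[0, target]
    s0 = torch.autograd.grad(out0, xr)[0].detach().flatten()

    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        out = model(x + delta)[0, target]
        # create_graph=True so the saliency itself can be differentiated.
        s = torch.autograd.grad(out, delta, create_graph=True)[0].flatten()
        loss = F.cosine_similarity(s, s0, dim=0) \
             + 10.0 * (out - out0.detach()) ** 2   # assumed logit penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                # stay in a small L-inf ball
    return (x + delta).detach()
```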
Recently there has been a push to make machine learning models more interpretable so that their performance can be trusted. Although successful, these efforts have mostly focused on deep learning methods, while fundamental optimization methods in machine learning, such as linear programs (LPs), have been left out. Even though LPs can be considered white-box or clear-box models, they are not easy to understand in terms of the relationship between inputs and outputs. Since a linear program only provides the optimal solution to an optimization problem, further explanations are often helpful. In this work, we extend attribution methods for explaining neural networks to linear programs. These methods explain a model by providing relevance scores for the model inputs, showing the influence of each input on the output. Alongside using classical gradient-based attribution methods, we also propose a way to adapt perturbation-based attribution methods to LPs. Our evaluations on several different linear and integer problems show that attribution methods can generate useful explanations for linear programs. However, we also demonstrate that directly using neural attribution methods may come with some drawbacks, as the properties of these methods on neural networks do not necessarily transfer to linear programs. The methods can also struggle when a linear program has more than one optimal solution, since a solver simply returns one possible solution. We hope our results can serve as a good starting point for further research in this direction.
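A perturbation-based attribution for an LP can be sketched directly with `scipy.optimize.linprog`: score each constraint bound by the finite-difference change it induces in the optimal objective value (closely related to the LP's dual values). The toy problem below is illustrative, not one from the paper's evaluation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP (assumed for illustration): minimize c @ x s.t. A_ub @ x <= b_ub.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0], [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

def optimum(b):
    return linprog(c, A_ub=A_ub, b_ub=b, bounds=[(0, None)] * 2).fun

# Perturbation-based relevance of each constraint bound: the finite-
# difference sensitivity of the optimal value.
base, h = optimum(b_ub), 1e-4
for i in range(len(b_ub)):
    b = b_ub.copy()
    b[i] += h
    print(f"relevance of b_ub[{i}]: {(optimum(b) - base) / h:.3f}")
```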
In the meantime, a wide variety of terminologies, motivations, approaches, and evaluation criteria have been developed within the research field of explainable artificial intelligence (XAI). With the number of XAI methods growing vastly, researchers as well as practitioners need a means to grasp the breadth of the topic, compare methods, and select the right XAI method for a given use case based on the traits required by the specific context. In the literature, one can find many taxonomies of XAI methods at varying levels of detail and depth. While they often have different focal points, they also exhibit many points of overlap. This paper unifies these efforts and provides a taxonomy of XAI methods that is complete with respect to the notions present in current research. In a structured literature analysis and meta-study, we identified and reviewed more than 50 of the most cited and most recent surveys of XAI methods, metrics, and method traits. Having condensed the surveyed material, we merged the articles' terminologies and concepts into a unified structured taxonomy. The concepts therein are illustrated by more than 50 example methods in total, which we categorize accordingly. The taxonomy may serve beginners, researchers, and practitioners alike as a reference and wide-ranging overview of XAI method traits and aspects, and hence provides foundations for targeted, use-case-oriented, and context-sensitive future research.
Attribution methods have emerged as a popular approach to explaining model predictions based on the relevance of input features. Although feature-importance rankings can provide insight into how a model arrives at a prediction from raw inputs, they do not give a well-defined notion of the key features a model uses for its prediction. In this paper, we propose a new method, called WheaCha, for explaining the predictions of code models. Although WheaCha employs the same mechanism of tracing model predictions back to input features, it differs from all existing attribution methods in a crucial way. Specifically, WheaCha separates an input program into "wheat" (i.e., the defining features that are the reason the model predicts the label it predicts) and the remaining "chaff" for any prediction of a learned code model. We realize WheaCha in a tool, HuoYan, and use it to explain four prominent code models: code2vec, seq-GNN, GGNN, and CodeBERT. The results show that (1) HuoYan is efficient, taking on average twenty seconds to compute the wheat for an input program end-to-end (i.e., including model prediction time); (2) the wheat all models use for their predictions consists of simple syntactic or even lexical properties (i.e., identifier names); (3) based on the wheat, we present a novel approach to explaining the predictions of code models through the lens of training data.
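A loose sketch of the wheat/chaff intuition (not HuoYan's actual algorithm): greedily remove input tokens that leave the predicted label unchanged, so the surviving tokens approximate the "wheat". The `predict` interface and the toy model are hypothetical.

```python
# `predict` stands in for a trained code model mapping a token list to a label.
def find_wheat(tokens, predict):
    label = predict(tokens)
    wheat = list(tokens)
    i = 0
    while i < len(wheat):
        trial = wheat[:i] + wheat[i + 1:]
        if trial and predict(trial) == label:
            wheat = trial          # token i was chaff: the label survives
        else:
            i += 1                 # token i is wheat: keep it
    return wheat

# Toy "model" keyed on a lexical feature (an identifier name):
predict = lambda ts: "sort" if "swap" in ts else "other"
print(find_wheat(["def", "f", "(", "a", ")", ":", "swap", "a"], predict))
```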
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models that are unavoidably opaque and perceived as black-box methods may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Besides, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements -- especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions with transparency. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be directly applied to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and provide a comprehensive overview of model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
Saliency methods compute heat maps that highlight portions of an input that were most {\em important} for the label assigned to it by a deep net. Evaluations of saliency methods convert this heat map into a new {\em masked input} by retaining the $k$ highest-ranked pixels of the original input and replacing the rest with \textquotedblleft uninformative\textquotedblright\ pixels, and checking if the net's output is mostly unchanged. This is usually seen as an {\em explanation} of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by logic concepts of {\em completeness \& soundness}, it observes that the above type of evaluation focuses on completeness of the explanation, but ignores soundness. New evaluation metrics are introduced to capture both notions, while staying in an {\em intrinsic} framework -- i.e., using the dataset and the net, but no separately trained nets, human evaluations, etc. A simple saliency method is described that matches or outperforms prior methods in the evaluations. Experiments also suggest new intrinsic justifications, based on soundness, for popular heuristic tricks such as TV regularization and upsampling.
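The masked-input evaluation the paper scrutinizes is easy to state in code: keep the top-ranked pixels, overwrite the rest with an "uninformative" fill value, and compare the net's outputs. A minimal sketch, assuming `x` and `heat` share a shape and `model` takes a batch:

```python
import torch

def masked_input_eval(model, x, heat, k_frac=0.1, fill=0.0):
    """Keep the k_frac highest-ranked pixels of `heat`, replace the rest
    with an 'uninformative' fill value, and return both outputs so the
    change in the net's prediction can be inspected."""
    flat = heat.flatten()
    k = max(1, int(k_frac * flat.numel()))
    mask = torch.zeros_like(flat)
    mask[flat.topk(k).indices] = 1.0
    mask = mask.view_as(x)
    x_masked = x * mask + fill * (1.0 - mask)
    with torch.no_grad():
        return model(x.unsqueeze(0)), model(x_masked.unsqueeze(0))
```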
Feature attributions based on the Shapley value are popular for explaining machine learning models. However, their estimation is complex from both theoretical and computational standpoints. We disentangle this complexity into two factors: (1) the approach to removing feature information, and (2) the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature-removal approaches, we describe the multiple types of Shapley value feature attributions and the methods to calculate each one. Then, based on the tractable estimation strategies, we characterize two distinct families of approaches: model-agnostic and model-specific approximations. For model-agnostic approximations, we benchmark a wide class of estimation methods and tie them to alternative yet equivalent characterizations of the Shapley value. For model-specific approximations, we clarify the assumptions crucial to each method's tractability for linear, tree, and deep models. Finally, we identify gaps in the literature and promising future research directions.
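A model-agnostic baseline from the family such surveys benchmark is the Monte Carlo permutation estimator, shown here with baseline-replacement as the feature-removal approach; a sketch under those assumptions, not any one paper's reference implementation.

```python
import numpy as np

def shapley_mc(f, x, baseline, n_perm=200, seed=0):
    """Monte Carlo permutation estimate of Shapley value attributions,
    removing features by resetting them to a baseline value."""
    rng = np.random.default_rng(seed)
    d, phi = len(x), np.zeros(len(x))
    for _ in range(n_perm):
        z = np.array(baseline, dtype=float)
        prev = f(z)
        for i in rng.permutation(d):
            z[i] = x[i]                    # add feature i to the coalition
            cur = f(z)
            phi[i] += cur - prev           # its marginal contribution
            prev = cur
    return phi / n_perm

# Toy check: attributions sum to f(x) - f(baseline) (the efficiency axiom).
f = lambda z: z[0] * z[1] + z[2]
print(shapley_mc(f, np.array([1.0, 2.0, 3.0]), np.zeros(3)))
```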