As modern, complex neural networks keep breaking records and solving harder problems, their predictions become less and less intelligible. This current lack of interpretability often undermines the deployment of accurate machine learning tools in sensitive settings. In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some of the limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation. Under certain distributional assumptions, such as those common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity. We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a medical imaging scenario and a general computer vision problem, showing that h-Shap outperforms the state of the art in both accuracy and runtime. Code and experiments are publicly available.
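To make the hierarchical idea concrete, here is a minimal, hypothetical sketch (not the authors' h-Shap implementation) of quadtree-style attribution: mask one image quadrant at a time and recurse only into quadrants whose removal changes the prediction, so that irrelevant regions are pruned early.

```python
import numpy as np

def hierarchical_relevance(x, f, region=None, min_size=2, tol=0.0):
    """Conceptual sketch of hierarchical attribution: mask one quadrant at a
    time and keep recursing only into quadrants whose removal changes the
    prediction, so large irrelevant areas are discarded early."""
    if region is None:
        region = (0, 0, x.shape[0], x.shape[1])
    r0, c0, h, w = region
    baseline = f(x)
    relevant = []
    for dr, dc in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        rr, cc = r0 + dr * h // 2, c0 + dc * w // 2
        hh, ww = h // 2, w // 2
        x_masked = x.copy()
        x_masked[rr:rr + hh, cc:cc + ww] = 0.0       # remove the quadrant
        drop = baseline - f(x_masked)                # contribution proxy
        if drop > tol:
            if hh <= min_size or ww <= min_size:
                relevant.append((rr, cc, hh, ww, drop))
            else:
                relevant += hierarchical_relevance(x, f, (rr, cc, hh, ww), min_size, tol)
    return relevant

# Toy "classifier": the score is the mean brightness of the image.
f = lambda img: float(img.mean())
x = np.zeros((16, 16)); x[2:4, 10:12] = 1.0          # one small salient patch
print(hierarchical_relevance(x, f))                  # locates the patch region
```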
Machine learning models, and in particular artificial neural networks, are increasingly used to inform decision making in high-stakes scenarios across a variety of fields, from financial services to public safety and healthcare. While neural networks have achieved remarkable performance in many settings, their complex nature raises concerns about their reliability, trustworthiness, and fairness in real-world scenarios. As a result, several a-posteriori explanation methods have been proposed to highlight the features that influence a model's prediction. Notably, the Shapley value, a game-theoretic quantity that satisfies several desirable properties, has gained popularity in the machine learning explainability literature. More traditionally, however, feature importance in statistical learning has been formalized via conditional independence, and a standard way to test for it is the Conditional Randomization Test (CRT). So far, these two perspectives on interpretability and feature importance have been considered distinct and separate. In this work, we show that Shapley-based explanation methods and conditional independence testing for feature importance are closely related. More precisely, we prove that evaluating a Shapley coefficient amounts to performing a specific set of conditional independence tests, implemented by a procedure similar to the CRT but for a different null hypothesis. Furthermore, the obtained game-theoretic values upper bound the $p$-values of such tests. As a result, we grant large Shapley coefficients a precise statistical sense of importance, with control of the Type I error.
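For reference, the Shapley coefficient in question is the standard game-theoretic quantity: given a value function $v$ defined on subsets of the $d$ features (in model explanation, typically the expected prediction when only the features in $S$ are known), feature $j$ receives

$$\phi_j(v) \;=\; \sum_{S \subseteq [d] \setminus \{j\}} \frac{|S|!\,(d-|S|-1)!}{d!}\,\bigl[v(S \cup \{j\}) - v(S)\bigr].$$

The notation is the usual one and is not specific to this paper.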
Modern machine learning pipelines, in particular those based on deep learning (DL) models, require large amounts of labeled data. For classification problems, the most common learning paradigm consists of presenting labeled examples during training, thus providing strong supervision on what constitutes positive and negative samples. This constitutes a major obstacle for the development of DL models in radiology--in particular for cross-sectional imaging (e.g., computed tomography [CT] scans)--where labels must come from manual annotations by expert radiologists at the image or slice-level. These differ from examination-level annotations, which are coarser but cheaper, and could be extracted from radiology reports using natural language processing techniques. This work studies the question of what kind of labels should be collected for the problem of intracranial hemorrhage detection in brain CT. We investigate whether image-level annotations should be preferred to examination-level ones. By framing this task as a multiple instance learning problem, and employing modern attention-based DL architectures, we analyze the degree to which different levels of supervision improve detection performance. We find that strong supervision (i.e., learning with local image-level annotations) and weak supervision (i.e., learning with only global examination-level labels) achieve comparable performance in examination-level hemorrhage detection (the task of selecting the images in an examination that show signs of hemorrhage) as well as in image-level hemorrhage detection (highlighting those signs within the selected images). Furthermore, we study this behavior as a function of the number of labels available during training. Our results suggest that local labels may not be necessary at all for these tasks, drastically reducing the time and cost involved in collecting and curating datasets.
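As a rough illustration of the attention-based multiple instance learning architectures referenced here (a minimal sketch assuming PyTorch; layer sizes and names are illustrative, not the paper's model), an examination-level prediction can be obtained by attention pooling over per-slice embeddings, with the attention weights pointing at suspicious slices:

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Minimal attention-based MIL pooling: each slice (instance) gets an
    attention weight, and the examination-level (bag-level) prediction is made
    from the weighted sum of slice embeddings."""
    def __init__(self, embed_dim=128, attn_dim=64):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        self.classifier = nn.Linear(embed_dim, 1)

    def forward(self, instance_embeddings):                  # (num_slices, embed_dim)
        scores = self.attention(instance_embeddings)          # (num_slices, 1)
        weights = torch.softmax(scores, dim=0)                # attention over slices
        bag_embedding = (weights * instance_embeddings).sum(dim=0)
        bag_logit = self.classifier(bag_embedding)            # examination-level logit
        return bag_logit, weights.squeeze(-1)                 # weights flag suspicious slices

# Usage: embeddings for a 30-slice CT examination from some slice encoder.
pool = AttentionMILPooling()
logit, slice_weights = pool(torch.randn(30, 128))
```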
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both theoretical and computational standpoints. We disentangle this complexity into two factors: (1) the approach used to remove feature information, and (2) the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature removal approaches, we describe the multiple types of Shapley value feature attributions and the methods to calculate each one. Then, based on the tractable estimation strategies, we characterize two distinct families of approaches: model-agnostic and model-specific approximations. For the model-agnostic approximations, we benchmark a wide class of estimation methods and tie them to alternative yet equivalent characterizations of the Shapley value. For the model-specific approximations, we clarify the assumptions crucial to each method's tractability for linear, tree, and deep models. Finally, we identify gaps in the literature and promising future research directions.
Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another. We examine the literature and find that many methods are based on a shared principle of explaining by removing, which essentially measures the impact of removing sets of features from a model. These methods vary in several respects, so we develop a framework for removal-based explanations that characterizes each method along three dimensions: 1) how the method removes features, 2) what model behavior the method explains, and 3) how the method summarizes each feature's influence. Our framework unifies 26 existing methods, including several of the most widely used approaches (SHAP, LIME, Meaningful Perturbations, permutation tests). Exposing the fundamental similarities between these methods empowers users to reason about which tool to use and suggests promising directions for ongoing research on model explainability.
Explaining deep convolutional neural networks has recently attracted attention because it helps to understand the networks' internal operations and why they make certain decisions. Saliency maps, which emphasize the salient regions most relevant to the network's decision, are one of the most common ways to visualize and analyze deep networks in the computer vision community. However, because existing methods rely on unproven proposals for weighting activation maps and fail to account for the relationships between pixels, the saliency maps they generate lack a solid theoretical foundation and do not represent the true information in the image. In this paper, we develop a novel post-hoc visual explanation method based on class activation mapping, called Shap-CAM. Unlike previous gradient-based methods, Shap-CAM removes the dependence on gradients by obtaining the importance of each pixel through Shapley values. We demonstrate that Shap-CAM achieves better visual performance and fairness in explaining the decision-making process. Our method outperforms previous methods on recognition and localization tasks.
SHAP is a popular method for measuring variable importance in machine learning models. In this paper, we study the algorithm used to estimate SHAP scores and show that it is a transformation of the functional ANOVA decomposition. We use this connection to show that the challenges in SHAP approximation relate largely to the choice of the feature distribution and to the number of the $2^p$ ANOVA terms that are estimated. We argue that the connection between machine learning explainability and sensitivity analysis is illuminating in this case, but that the immediate practical consequences are not obvious, since the two fields face different sets of constraints. Machine learning explainability concerns models that are inexpensive to evaluate but often have hundreds, if not thousands, of features; sensitivity analysis typically deals with models from physics or engineering that may be very time-consuming to run but operate on a comparatively small input space.
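For context, the functional ANOVA decomposition referred to here writes a model of $p$ features as a sum of $2^p$ mutually orthogonal terms, one per feature subset:

$$f(x) \;=\; f_0 + \sum_{i=1}^{p} f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \cdots + f_{1,\dots,p}(x_1,\dots,x_p),$$

which is why the number of terms to estimate, and the distribution under which they are defined, drive the difficulty of SHAP approximation.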
Shapley values are ubiquitous in interpretable Machine Learning due to their strong theoretical background and efficient implementation in the SHAP library. Computing these values previously induced an exponential cost with respect to the number of input features of an opaque model. Now, with efficient implementations such as Interventional TreeSHAP, this exponential burden is alleviated assuming one is explaining ensembles of decision trees. Although Interventional TreeSHAP has risen in popularity, it still lacks a formal proof of how/why it works. We provide such proof with the aim of not only increasing the transparency of the algorithm but also to encourage further development of these ideas. Notably, our proof for Interventional TreeSHAP is easily adapted to Shapley-Taylor indices and one-hot-encoded features.
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
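Concretely, the additive feature attribution class identified in this work explains a single prediction with a linear function of simplified binary inputs $z' \in \{0,1\}^M$ indicating which of the $M$ features are present:

$$g(z') \;=\; \phi_0 + \sum_{i=1}^{M} \phi_i z'_i,$$

and the paper's uniqueness result singles out the Shapley values as the only attributions $\phi_i$ in this class that satisfy the desired properties.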
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods are shown to be practically useful and to compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models are able to provide very good results in terms of accuracy even in challenging applications, it is difficult to interpret them. Aiming at providing some interpretability for such models, one of the most famous methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome of an instance of interest. As the SHAP values calculation needs previous computations on all possible coalitions of attributes, its computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. First, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of $k$-additive games from game theory, which helps reduce the computational effort when estimating the SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
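For reference, standard Kernel SHAP recovers the Shapley values as the solution of a weighted least-squares problem over coalitions $z' \in \{0,1\}^M$, with the Shapley kernel weights

$$\pi_{x}(z') \;=\; \frac{M-1}{\binom{M}{|z'|}\,|z'|\,(M-|z'|)},$$

where $|z'|$ is the number of features present in the coalition; approximation enters because only a sample of the $2^M$ possible coalitions is evaluated in practice.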
PredDiff is a model-agnostic, local attribution method that is firmly rooted in probability theory. Its simple intuition is to measure prediction changes while marginalizing features. In this work, we clarify the properties of PredDiff and its close connection to Shapley values. We stress important differences between classification and regression, which require specific treatment within both formalisms. We extend PredDiff by introducing a new, well-founded measure of interaction effects between arbitrary feature subsets. The study of interaction effects represents an inevitable step towards a comprehensive understanding of black-box models and is particularly important for scientific applications. Equipped with our novel interaction measure, PredDiff is a promising model-agnostic approach for obtaining reliable, numerically inexpensive, and theoretically sound attributions.
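Schematically (the notation here is illustrative rather than the paper's), the intuition of measuring prediction changes while marginalizing a feature subset $S$ can be written, for a regression model $f$, as

$$\bar m_S(x) \;=\; f(x) \;-\; \mathbb{E}_{\tilde x_S \sim p(x_S \mid x_{\setminus S})}\bigl[f(\tilde x_S,\, x_{\setminus S})\bigr],$$

i.e., the prediction at $x$ minus the expected prediction when the features in $S$ are resampled conditionally on the remaining ones.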
Saliency methods compute heat maps that highlight portions of an input that were most "important" for the label assigned to it by a deep net. Evaluations of saliency methods convert this heat map into a new "masked input" by retaining the $k$ highest-ranked pixels of the original input and replacing the rest with "uninformative" pixels, and checking if the net's output is mostly unchanged. This is usually seen as an "explanation" of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by logic concepts of "completeness & soundness", it observes that the above type of evaluation focuses on completeness of the explanation, but ignores soundness. New evaluation metrics are introduced to capture both notions, while staying in an "intrinsic" framework -- i.e., using the dataset and the net, but no separately trained nets, human evaluations, etc. A simple saliency method is described that matches or outperforms prior methods in the evaluations. Experiments also suggest new intrinsic justifications, based on soundness, for popular heuristic tricks such as TV regularization and upsampling.
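A minimal sketch of the evaluation protocol described above (hypothetical code, with a toy model standing in for the deep net): keep the $k$ highest-ranked pixels, fill the rest with an uninformative value, and check whether the output changes.

```python
import numpy as np

def masked_input_check(x, saliency, model, k, fill_value=0.0):
    """Keep only the k highest-ranked pixels according to the saliency map,
    replace the rest with an 'uninformative' value, and compare outputs."""
    flat = saliency.flatten()
    keep = np.argsort(flat)[-k:]                     # indices of the top-k pixels
    mask = np.zeros_like(flat, dtype=bool)
    mask[keep] = True
    x_masked = np.where(mask.reshape(x.shape), x, fill_value)
    return model(x), model(x_masked)                 # ideally nearly identical

# Toy example: the "model" just averages the upper-left quadrant.
model = lambda img: float(img[:8, :8].mean())
x = np.random.rand(16, 16)
saliency = np.zeros((16, 16)); saliency[:8, :8] = 1.0   # upper-left marked important
print(masked_input_check(x, saliency, model, k=64))
```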
The increasing availability of electronic health record (EHR) data and advances in deep learning (DL) techniques have sparked a surge of research interest in developing DL-based clinical decision support systems for diagnosis, prognosis, and treatment. Despite the recognized value of deep learning in healthcare, obstacles to its further adoption in real clinical settings remain due to the black-box nature of DL. There is therefore an emerging need for interpretable DL, which allows end users to evaluate model decisions in order to know whether to accept or reject predictions and recommendations before taking action. In this review, we focus on the interpretability of DL models in healthcare. We first introduce methods for explaining deep learning, as a methodological reference for future researchers and clinical practitioners in this field. Beyond the details of these methods, we also discuss their advantages and disadvantages and the scenarios each of them is suited to, so that interested readers know how to compare and choose among them. Furthermore, we discuss how these methods, originally developed for general-domain problems, have been adapted and applied to healthcare problems and how they can help physicians better understand these data-driven techniques. Overall, we hope this survey helps researchers and practitioners in both the artificial intelligence (AI) and clinical fields understand what methods are available for enhancing the interpretability of their DL models and choose the most suitable one accordingly.
End-to-end neural NLP architectures are notoriously difficult to understand, which has given rise to numerous efforts towards model explainability in recent years. An essential principle of model explanation is faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness, as well as its significance for explainability. We then introduce recent advances in faithful explanation by grouping approaches into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, advantages, and shortcomings. Finally, we discuss all of the above methods in terms of their common virtues and limitations, and reflect on future directions towards faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field, laying the foundation for further exploration. For users hoping to better understand their own models, this survey serves as an introductory manual that helps with choosing the most suitable explanation method.
In addition to the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that make it possible to interpret complex non-linear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. While such Explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and for classification tasks, establish novel theoretical insights and analyses for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
In this paper, we provide an exhaustive theoretical analysis of LIME in the case of tabular data. We prove that, in the large-sample limit, the interpretable coefficients provided by Tabular LIME can be computed explicitly as a function of the algorithm's parameters and of certain expectations related to the black-box model. When the function to explain has some nice algebraic structure (linear, multiplicative, or depending sparsely on a subset of the coordinates), our analysis provides interesting insights into the explanations produced by LIME. These results can be applied to a range of machine learning models, including Gaussian kernels or CART random forests. For linear functions, for example, we show that LIME has the desirable property of providing explanations proportional to the coefficients of the function to explain and of ignoring coordinates that the function does not use. For partition-based regressors, on the other hand, we show that LIME produces undesirable artifacts that may yield misleading explanations.
Machine learning (ML) is increasingly used to inform high-stakes decisions. As complex ML models (e.g., deep neural networks) are often considered black boxes, a wealth of procedures has been developed to shed light on their inner workings and on how their predictions come about, defining the field of "explainable AI" (XAI). Saliency methods rank input features according to some measure of "importance". Since a formal definition of feature importance is lacking, such methods are difficult to validate. It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables). To avoid misinterpretations caused by such behavior, we propose the actual presence of such an association as a necessary condition and objective preliminary definition of feature importance. We carefully crafted a ground-truth dataset in which all statistical dependencies are well-defined and linear, serving as a benchmark to study the problem of suppressor variables. We evaluate common explanation methods, including LRP, DTD, PatternNet, PatternAttribution, LIME, Anchors, SHAP, and permutation-based methods, against our objective definition. We show that most of these methods are unable to distinguish important features from suppressors in this setting.
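The suppressor-variable phenomenon is easy to reproduce in a two-feature toy setting (this is only an illustration of the concept, not the paper's benchmark dataset): a feature with no statistical association to the target still receives a large weight because it cancels noise shared with an informative feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
signal = rng.normal(size=n)            # the only quantity the target depends on
distractor = rng.normal(size=n)        # noise, statistically unrelated to the target

x1 = signal + distractor               # measured feature: signal plus shared noise
x2 = distractor                        # suppressor: no association with y on its own
y = signal                             # target

# x2 alone carries no information about y ...
print(np.corrcoef(x2, y)[0, 1])        # ~0
# ... yet a linear model assigns it a large (negative) weight,
# because it helps remove the shared noise from x1.
X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)                               # approximately [1, -1]
```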
Random forests have been widely used for their ability to provide so-called importance measures, which give insight at a global (per-dataset) level into the relevance of input variables for predicting a given output. On the other hand, methods based on Shapley values have been introduced to refine the analysis of feature relevance in tree-based models down to the local (per-instance) level. In this context, we first show that the global Mean Decrease of Impurity (MDI) variable importance scores correspond to Shapley values under certain conditions. We then derive a local MDI importance measure of variable relevance, which has a very natural connection with the global MDI measure and can be related to a new notion of local feature relevance. We further relate local MDI importances to Shapley values and discuss them in the light of related measures from the literature. These measures are illustrated through experiments on several classification and regression problems.
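For context, the global MDI importance discussed here is the standard impurity-based score averaged over the $N_T$ trees of the forest,

$$\mathrm{Imp}(X_j) \;=\; \frac{1}{N_T}\sum_{T}\;\sum_{t \in T:\, v(s_t)=j} p(t)\,\Delta i(s_t, t),$$

where the inner sum runs over the internal nodes $t$ that split on $X_j$, $p(t)$ is the fraction of samples reaching $t$, and $\Delta i(s_t, t)$ is the impurity decrease achieved by the split; the paper's contribution is to connect this global score to per-instance Shapley values and to a local counterpart.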
In the context of human-in-the-loop machine learning applications, such as decision support systems, interpretability methods should provide actionable insights without making users wait. In this paper, we propose Accelerated Model-agnostic Explanations (AcME), an interpretability method that quickly provides feature-importance scores at both the global and the local level. AcME can be applied a posteriori to any regression or classification model. AcME not only computes feature rankings, but also offers a what-if analysis tool to assess how changes in feature values would affect the model's predictions. We evaluate the proposed approach on synthetic and real-world datasets, also in comparison with SHapley Additive exPlanations (SHAP), the method from which we drew inspiration and which is currently one of the state-of-the-art model-agnostic interpretability approaches. We obtain comparable results in terms of the quality of the produced explanations while drastically reducing computation time and providing consistent visualizations for global and local interpretations. To foster research in this field and for the sake of reproducibility, we also provide a repository with the code used for the experiments.