The interpretability of black-box machine learning models is crucial, in particular when they are deployed in critical applications such as medicine or autonomous vehicles. Existing approaches produce explanations for the predictions of a model; however, how to assess the quality and reliability of such explanations remains an open question. In this paper we take a step further in order to provide the practitioner with tools to judge the trustworthiness of an explanation. To this end, we produce estimates of the uncertainty of a given explanation by measuring the ordinal consensus among a diverse set of surrogate explainers. While we encourage diversity through the use of ensemble techniques, we propose and analyze metrics to aggregate the information contained within the set of explainers through a rating scheme. We empirically illustrate the properties of this approach with experiments on ensembles of state-of-the-art convolutional neural networks. Furthermore, through tailored visualizations, we show concrete examples of situations in which the uncertainty estimates offer actionable insights to the user beyond those arising from standard surrogate explainers.
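One simple way to make the ordinal-consensus idea concrete is sketched below, assuming each explainer is represented only by its per-feature importance vector and that agreement is scored with pairwise Kendall's tau over the induced rankings; the rating scheme analyzed in the paper may aggregate differently.

from itertools import combinations
import numpy as np
from scipy.stats import kendalltau

def ordinal_consensus(importances):
    # importances: array of shape (n_explainers, n_features), one attribution
    # vector per surrogate explainer in the ensemble.
    imp = np.abs(np.asarray(importances))
    taus = []
    for i, j in combinations(range(len(imp)), 2):
        tau, _ = kendalltau(imp[i], imp[j])
        taus.append(tau)
    return float(np.mean(taus))  # 1.0 = full rank agreement, near 0 = no consensus

# Hypothetical example: three explainers over five features; the third disagrees.
expl = [[0.9, 0.1, 0.4, 0.0, 0.2],
        [0.8, 0.2, 0.5, 0.1, 0.1],
        [0.1, 0.9, 0.2, 0.4, 0.3]]
print(ordinal_consensus(expl))  # a low score flags an explanation not to be trusted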
As black-box explanations are increasingly being employed to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability. In addition, these methods are also computationally inefficient and require significant hyperparameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within a given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of the proposed framework.
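As a rough illustration of the kind of output such a framework provides, the sketch below fits a Bayesian linear surrogate (scikit-learn's BayesianRidge) to LIME-style perturbations and reads approximate 95% intervals off the posterior over coefficients; this is an assumption-laden stand-in, not the BayesLIME/BayesSHAP estimators themselves.

import numpy as np
from sklearn.linear_model import BayesianRidge

def bayesian_local_explanation(f, x, scale=0.1, n_samples=500, seed=0):
    # f: assumed vectorized black-box returning one scalar output per row;
    # x: 1-D instance to explain.
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))   # perturbations
    y = f(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))   # proximity weights
    surrogate = BayesianRidge().fit(Z - x, y, sample_weight=w)
    mean = surrogate.coef_
    std = np.sqrt(np.diag(surrogate.sigma_))                       # posterior std of coefficients
    return mean, mean - 1.96 * std, mean + 1.96 * std              # estimate and ~95% interval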
Many problems in computer vision have recently been tackled using models whose predictions cannot be easily interpreted, most commonly deep neural networks. Surrogate explainers are a popular post-hoc interpretability method used to further understand how a model arrives at a particular prediction. By training a simple, more interpretable model to locally approximate the decision boundary of a non-interpretable system, we can estimate the relative importance of the input features for a prediction. Focusing on images, surrogate explainers such as LIME generate a local neighbourhood around a query image by sampling in an interpretable domain. However, these interpretable domains have traditionally been derived exclusively from the intrinsic features of the query image, without taking into account the manifold of the data the model was exposed to during training (or, more generally, the manifold of real images). This leads to suboptimal surrogates trained on potentially low-probability images. We address this limitation by aligning the local neighbourhood on which the surrogate is trained with the original training data distribution, even when this distribution is not accessible. We propose two approaches to do so, namely (1) altering the method for sampling the local neighbourhood and (2) using perceptual metrics to convey some of the properties of the distribution of natural images.
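One way to read approach (2) is as re-weighting the sampled neighbours by a perceptual distance to the query rather than a raw pixel distance; the sketch below is a minimal illustration in which perceptual_distance is a hypothetical placeholder for whatever metric is chosen, not the paper's specific construction.

import numpy as np

def perceptual_weights(query, neighbours, perceptual_distance, sigma=0.25):
    # perceptual_distance: hypothetical placeholder for the chosen perceptual metric.
    d = np.array([perceptual_distance(query, z) for z in neighbours])
    return np.exp(-(d ** 2) / sigma ** 2)   # weights for the surrogate's training set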
The benefit of locality is one of the major premises of LIME, one of the most prominent methods for explaining black-box machine learning models. This emphasis relies on the postulate that the more locally we look around an instance, the simpler the black-box model becomes, and the more accurately we can mimic it with a linear surrogate. As logical as this seems, our findings suggest that, with the current design of LIME, the surrogate model may degenerate when the explanation is too local, i.e., when the bandwidth parameter $\sigma$ tends to zero. Based on this observation, the contribution of this paper is twofold. Firstly, we study the impact of the bandwidth and of the training vicinity on the faithfulness and semantics of LIME explanations. Secondly, based on our findings, we propose s-LIME, an extension of LIME that reconciles fidelity and locality.
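For reference, $\sigma$ enters LIME through the exponential kernel $\pi_x(z) = \exp(-d(x,z)^2 / \sigma^2)$ that weights the sampled neighbours. The short sketch below shows how shrinking $\sigma$ collapses the effective number of weighted samples, which is the degenerate regime discussed above (the synthetic neighbourhood here is only for illustration).

import numpy as np

def lime_weights(x, Z, sigma):
    d2 = np.sum((Z - x) ** 2, axis=1)
    return np.exp(-d2 / sigma ** 2)             # pi_x(z) = exp(-d(x, z)^2 / sigma^2)

rng = np.random.default_rng(0)
x = np.zeros(5)
Z = x + rng.normal(0.0, 1.0, size=(1000, 5))    # synthetic neighbourhood
for sigma in (5.0, 1.0, 0.1, 0.01):
    w = lime_weights(x, Z, sigma)
    n_eff = w.sum() ** 2 / np.sum(w ** 2)       # effective sample size
    print(f"sigma={sigma:5.2f}  effective samples ~ {n_eff:7.1f}")
# As sigma tends to 0, almost all weight collapses onto a handful of samples and
# the weighted least-squares surrogate becomes ill-conditioned.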
Understanding why a model makes certain predictions is crucial when adapting it for real world decision making. LIME is a popular model-agnostic feature attribution method for the tasks of classification and regression. However, the task of learning to rank in information retrieval is more complex in comparison with either classification or regression. In this work, we extend LIME to propose Rank-LIME, a model-agnostic, local, post-hoc linear feature attribution method for the task of learning to rank that generates explanations for ranked lists. We employ novel correlation-based perturbations, differentiable ranking loss functions and introduce new metrics to evaluate ranking based additive feature attribution models. We compare Rank-LIME with a variety of competing systems, with models trained on the MS MARCO datasets and observe that Rank-LIME outperforms existing explanation algorithms in terms of Model Fidelity and Explain-NDCG. With this we propose one of the first algorithms to generate additive feature attributions for explaining ranked lists.
In the context of human-in-the-loop machine learning applications, such as decision support systems, interpretability approaches should provide actionable insights without making the users wait. In this paper, we propose Accelerated Model-agnostic Explanations (AcME), an interpretability approach that quickly provides feature importance scores both at the global and at the local level. AcME can be applied post hoc to any regression or classification model. Not only does AcME compute feature rankings, it also provides a what-if analysis tool to assess how changes in feature values would affect the model's predictions. We evaluated the proposed approach on synthetic and real-world datasets, also in comparison with SHapley Additive exPlanations (SHAP), the approach from which we drew inspiration and which is currently one of the state-of-the-art model-agnostic interpretability approaches. We achieved comparable results in terms of the quality of the produced explanations while reducing the computational time dramatically and providing consistent visualizations for global and local interpretations. To foster research in this field and for the sake of reproducibility, we also provide a repository with the code used for the experiments.
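The what-if component can be pictured as a quantile scan of a single feature while the others are held fixed; the sketch below is an illustrative reading of that idea (the quantile grid and the spread-based score are assumptions, not the AcME implementation).

import numpy as np

def what_if(predict, x, X_train, feature, quantiles=np.linspace(0.05, 0.95, 10)):
    # Vary one feature over quantiles of its empirical distribution while the
    # other features stay fixed at the values of the instance x.
    grid = np.quantile(X_train[:, feature], quantiles)
    variants = np.tile(x, (len(grid), 1))
    variants[:, feature] = grid
    preds = predict(variants)
    return grid, preds, preds.max() - preds.min()   # spread as a crude local importance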
Post-hoc explanation methods have become increasingly depended upon for understanding black-box classifiers in high-stakes applications, precipitating a need for reliable explanations. While numerous explanation methods have been proposed, recent works have shown that many existing methods can be inconsistent or unstable. In addition, high-performing classifiers are often highly nonlinear and can exhibit complex behavior around the decision boundary, leading to brittle or misleading local explanations. Therefore, there is an impending need to quantify the uncertainty of such explanation methods in order to understand when explanations are trustworthy. We introduce a novel uncertainty quantification method parameterized by a Gaussian Process model, which combines the uncertainty approximation of existing methods with a novel geodesic-based similarity which captures the complexity of the target black-box decision boundary. The proposed framework is highly flexible; it can be used with any black-box classifier and feature attribution method to amortize uncertainty estimates for explanations. We show theoretically that our proposed geodesic-based kernel similarity increases with the complexity of the decision boundary. Empirical results on multiple tabular and image datasets show that our decision boundary-aware uncertainty estimate improves understanding of explanations as compared to existing methods.
Compared to classical statistical learning methods, machine and deep learning survival models demonstrate similar or even improved time-to-event prediction capabilities, but are too complex to be interpreted by humans. Several model-agnostic explanation methods exist to overcome this issue; however, none of them directly explains the survival function prediction. In this paper, we introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models. It is based on SHapley Additive exPlanations, which has solid theoretical foundations and is widely adopted among machine learning practitioners. The proposed methods aim to enhance precision diagnostics and support domain experts in making decisions. Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and that its aggregation is a determinant of the importance of variables for a prediction. SurvSHAP(t) is model-agnostic and can be applied to all models with a functional output. We provide an accessible implementation of time-dependent explanations in Python at http://github.com/mi2datalab/survshap.
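A sketch of the kind of aggregation mentioned above: given time-dependent attributions for each variable on a time grid, one can integrate their absolute values over time to obtain a single importance per variable (the trapezoidal rule here is an assumption for illustration, not necessarily SurvSHAP(t)'s aggregation rule).

import numpy as np

def aggregate_time_dependent(times, phi):
    # times: (T,) time grid; phi: (T, n_features) attributions of the predicted
    # survival function at each time point. Trapezoidal integration of |phi|.
    dt = np.diff(times)
    mid = 0.5 * (np.abs(phi[1:]) + np.abs(phi[:-1]))
    return (mid * dt[:, None]).sum(axis=0)          # one importance per variable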
Some recent works observed the instability of post-hoc explanations when input side perturbations are applied to the model. This raises the interest and concern in the stability of post-hoc explanations. However, the remaining question is: is the instability caused by the neural network model or the post-hoc explanation method? This work explores the potential source that leads to unstable post-hoc explanations. To separate the influence from the model, we propose a simple output probability perturbation method. Compared to prior input side perturbation methods, the output probability perturbation method can circumvent the neural model's potential effect on the explanations and allow the analysis on the explanation method. We evaluate the proposed method with three widely-used post-hoc explanation methods (LIME (Ribeiro et al., 2016), Kernel Shapley (Lundberg and Lee, 2017a), and Sample Shapley (Strumbelj and Kononenko, 2010)). The results demonstrate that the post-hoc methods are stable, barely producing discrepant explanations under output probability perturbations. The observation suggests that neural network models may be the primary source of fragile explanations.
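A minimal sketch of an output probability perturbation, under the assumption that Gaussian noise is added to the predicted probability vector and renormalised before the explainer sees it; the paper's exact perturbation scheme may differ.

import numpy as np

def perturb_output_probs(predict_proba, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    def noisy_predict_proba(X):
        p = predict_proba(X)
        p = np.clip(p + rng.normal(0.0, sigma, size=p.shape), 1e-6, None)
        return p / p.sum(axis=1, keepdims=True)     # renormalise to a distribution
    return noisy_predict_proba

# Usage idea: run the explainer once with model.predict_proba and once with
# perturb_output_probs(model.predict_proba), then compare the two explanations.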
We evaluate two popular local explainability techniques, LIME and SHAP, on a movie recommendation task. We discover that the two methods behave differently depending on the sparsity of the dataset: LIME does better than SHAP in dense segments of the dataset, whereas SHAP does better in sparse segments. We trace this difference to the different bias-variance characteristics of the estimators underlying LIME and SHAP. We find that, compared to LIME, SHAP exhibits lower variance in sparse segments of the data. We attribute this lower variance to the completeness-constraint property present in SHAP and missing in LIME. This constraint acts as a regularizer and therefore increases the bias of the SHAP estimator but decreases its variance, leading to a favourable bias-variance trade-off, especially in high-sparsity data settings. With this insight, we introduce the same constraint into LIME and formulate a novel local explainability framework called Completeness-Constrained LIME (CLIMB) that is superior to LIME and much faster than SHAP.
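The completeness constraint can be imposed on a LIME-style fit directly: require the attributions to sum to $f(x) - \mathbb{E}[f]$ and solve the resulting equality-constrained weighted least-squares problem. A minimal sketch of that idea follows (it is not the CLIMB implementation).

import numpy as np

def completeness_constrained_lime(Z, y, w, total):
    # Z: perturbation design matrix; y: black-box outputs on the perturbations;
    # w: proximity weights; total: f(x) - E[f], the completeness target.
    d = Z.shape[1]
    A = Z.T @ (w[:, None] * Z)
    b = Z.T @ (w * y)
    K = np.zeros((d + 1, d + 1))                    # KKT system of the constrained fit
    K[:d, :d] = A
    K[:d, d] = 1.0
    K[d, :d] = 1.0
    rhs = np.append(b, total)
    return np.linalg.lstsq(K, rhs, rcond=None)[0][:d]   # attributions summing to `total`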
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
As machine learning black boxes are increasingly being deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are being leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this paper, we demonstrate that post hoc explanations techniques that rely on input perturbations, such as LIME and SHAP, are not reliable. Specifically, we propose a novel scaffolding technique that effectively hides the biases of any given classifier by allowing an adversarial entity to craft an arbitrary desired explanation. Our approach can be used to scaffold any biased classifier in such a way that its predictions on the input data distribution still remain biased, but the post hoc explanations of the scaffolded classifier look innocuous. Using extensive evaluation with multiple real world datasets (including COMPAS), we demonstrate how extremely biased (racist) classifiers crafted by our framework can easily fool popular explanation techniques such as LIME and SHAP into generating innocuous explanations which do not reflect the underlying biases.
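The scaffolding can be pictured as a wrapper that serves the biased model on points that look like real data and an innocuous model on the out-of-distribution perturbations that LIME and SHAP generate. In the sketch below, an isolation forest stands in for the paper's learned out-of-distribution detector, so this is an illustration of the idea rather than the authors' construction.

import numpy as np
from sklearn.ensemble import IsolationForest

class ScaffoldedClassifier:
    # Serve the biased model on in-distribution points and an innocuous model on
    # points flagged as perturbation-like; the isolation forest is an illustrative
    # stand-in for a learned off-manifold detector.
    def __init__(self, biased_model, innocuous_model, X_real):
        self.biased = biased_model
        self.innocuous = innocuous_model
        self.ood = IsolationForest(random_state=0).fit(X_real)

    def predict(self, X):
        in_dist = self.ood.predict(X) == 1          # +1 means "looks like real data"
        return np.where(in_dist, self.biased.predict(X), self.innocuous.predict(X))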
Explainable artificial intelligence is proposed to provide explanations for the reasoning performed by an artificial intelligence. There is no consensus on how to evaluate the quality of these explanations, since even the definition of an explanation itself is not clear in the literature. In particular, for the widely known Local Linear Explanations, there are qualitative proposals for evaluating explanations, although they suffer from theoretical inconsistencies. The case of images is even more problematic, where a visual explanation may seem to explain a decision when what it really does is detect edges. The literature contains a large number of metrics specialized in quantitatively measuring different qualitative aspects, so we should be able to develop metrics capable of measuring the desirable aspects of explanations in a robust and correct way. In this paper, we propose a procedure called REVEL to evaluate different aspects concerning the quality of explanations with a theoretically coherent development. This procedure advances the state of the art in several ways: it standardizes the concept of an explanation and develops a series of metrics that not only allow explanations to be compared with one another but also provide absolute information regarding the explanation itself. The experiments have been carried out on four image datasets used as benchmarks, where we show REVEL's descriptive and analytical power.
Understanding the results of deep neural networks is an essential step towards wider acceptance of deep learning algorithms. Many approaches address the issue of interpreting artificial neural networks, but they often provide divergent explanations. Moreover, different hyperparameter choices for an explanation method can lead to conflicting interpretations. In this paper, we propose a technique for aggregating the feature attributions of different explanation algorithms using Restricted Boltzmann Machines (RBMs) to achieve a more reliable and robust interpretation of deep neural networks. Several challenging experiments on real-world datasets show that the proposed RBM method outperforms popular feature attribution methods and basic ensemble techniques.
Local explainability methods, which seek to generate an explanation for every prediction, are increasingly prevalent due to the need for practitioners to rationalize their model outputs. However, comparing local explainability methods is difficult since they each generate outputs in various scales and dimensions. Furthermore, due to the stochastic nature of some explainability methods, different runs of a method may produce contradictory explanations for a given observation. In this paper, we propose a topology-based framework to extract a simplified representation from a set of local explanations. We do so by first modeling the relationship between the explanation space and the model predictions as a scalar function. Then, we compute the topological skeleton of this function. This topological skeleton acts as a signature for such functions, which we use to compare different explanation methods. We demonstrate that our framework can not only reliably identify differences between explainability techniques but also provide stable representations. We then show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimizations, and can be broadly applied to most local explanation methods. We believe the practicality and versatility of our approach will help promote topology-based approaches as a tool for understanding and comparing explanation methods.
The use of a model, even an effective one, must be accompanied by an understanding at all levels of the process that transforms the data (upstream and downstream). Thus, there is a growing need to define the relationship between an individual's data and the choices an algorithm can make based on its analysis (for example, the recommendation of a product or a promotional offer, or an insurance rate representative of the risk). Model users must ensure that models do not discriminate and that their results can be explained. This paper introduces the importance of model interpretation and addresses the notion of model transparency. In an insurance context, it specifically illustrates how some of these tools can be used to enforce control over the actuarial models that can nowadays leverage machine learning. On a simple example of loss frequency estimation in car insurance, we show the interest of some interpretability methods for adapting explanations to the target audience.
Explainable machine learning provides insight into which factors drive a certain prediction of a black-box system and whether it should be trusted for high-stakes decisions or large-scale deployment. Existing methods mainly focus on selecting explanatory input features, following either locally additive or instance-wise approaches. Additive models use heuristically sampled perturbations to learn instance-specific explainers sequentially; the process is therefore inefficient and susceptible to poorly conditioned samples. Meanwhile, instance-wise techniques directly learn local sampling distributions and can leverage global information from other inputs. However, they can only interpret single-class predictions and suffer from inconsistency across different settings due to a strict reliance on a pre-defined number of features. This work exploits the strengths of both approaches and proposes a global framework for learning local explanations simultaneously for multiple target classes. We also propose an adaptive inference strategy to determine the optimal number of features for a specific instance. Our model explainer significantly outperforms additive and instance-wise counterparts in faithfulness while achieving high levels of brevity across various datasets and black-box model architectures.
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
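A minimal sketch of the local-surrogate idea for tabular inputs: sample perturbations around the instance, weight them by proximity, and fit a sparse linear model whose coefficients act as the explanation. The official LIME library additionally works in an interpretable representation (superpixels, token presence, binned features), which is omitted here.

import numpy as np
from sklearn.linear_model import Lasso

def lime_tabular(f, x, scale=0.5, n_samples=1000, alpha=0.01, seed=0):
    # f: assumed vectorized black-box prediction function; x: 1-D instance.
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))   # local perturbations
    y = f(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))   # proximity weights
    surrogate = Lasso(alpha=alpha).fit(Z - x, y, sample_weight=w)  # sparse local surrogate
    return surrogate.coef_                                         # feature attributions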
Understanding black-box machine learning models is crucial for their widespread adoption. Learning globally interpretable models is one approach, but achieving high performance with them is challenging. An alternative approach is to explain individual predictions with locally interpretable models. For locally interpretable modeling, various methods have been proposed and indeed are commonly used, but they suffer from low fidelity, i.e., their explanations do not approximate the predictions well. In this paper, our goal is to push the state of the art in high-fidelity locally interpretable modeling. We propose a novel framework, Locally Interpretable Modeling using Instance-wise Subsampling (LIMIS). LIMIS utilizes a policy gradient to select a small number of instances and distills the black-box model into a low-capacity locally interpretable model using those selected instances. Training is guided by a reward obtained directly from measuring the fidelity of the locally interpretable models. We show on multiple tabular datasets that LIMIS nearly matches the prediction accuracy of black-box models, significantly outperforming state-of-the-art locally interpretable models in terms of fidelity and prediction accuracy.
A critical problem in post hoc explainability is the lack of a common foundational goal among methods. For example, some methods are motivated by function approximation, some by game theoretic notions, and some by obtaining clean visualizations. This fragmentation of goals causes not only an inconsistent conceptual understanding of explanations but also the practical challenge of not knowing which method to use when. In this work, we begin to address these challenges by unifying eight popular post hoc explanation methods (LIME, C-LIME, SHAP, Occlusion, Vanilla Gradients, Gradients x Input, SmoothGrad, and Integrated Gradients). We show that these methods all perform local function approximation of the black-box model, differing only in the neighbourhood and loss function used to perform the approximation. This unification enables us to (1) state a no free lunch theorem for explanation methods which demonstrates that no single method can perform optimally across all neighbourhoods, and (2) provide a guiding principle to choose among methods based on faithfulness to the black-box model. We empirically validate these theoretical results using various real-world datasets, model classes, and prediction tasks. By bringing diverse explanation methods into a common framework, this work (1) advances the conceptual understanding of these methods, revealing their shared local function approximation objective, properties, and relation to one another, and (2) guides the use of these methods in practice, providing a principled approach to choose among methods and paving the way for the creation of new ones.
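The shared objective identified above can be written compactly, with notation assumed here rather than copied from the paper: every covered method returns the interpretable surrogate $g^{*} = \arg\min_{g \in \mathcal{G}} \, \mathbb{E}_{\xi \sim \mathcal{Z}_x}\left[\ell\left(f(\xi), g(\xi)\right)\right]$, where $f$ is the black-box model, $\mathcal{G}$ is a family of interpretable (e.g., linear) functions, $\mathcal{Z}_x$ is the neighbourhood distribution around the input $x$, and $\ell$ is the loss; the individual methods then correspond to particular choices of $\mathcal{Z}_x$ and $\ell$.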