Existing methods in explainable AI and interpretable ML cannot explain the change in the value of an output variable for a statistical unit when that change is driven both by changes in the input values and by changes in the "mechanism" (the function that transforms inputs into outputs). We propose two methods based on counterfactuals for explaining such unit-level changes at various input granularities, using the concept of Shapley values from game theory. These methods satisfy two key axioms that are desirable for any method of attributing unit-level changes. Through simulations, we study the reliability and scalability of the proposed methods. We obtain sensible results from a case study identifying the drivers of changes in the individual incomes of people in the US.
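For reference, the Shapley value underlying such attribution methods assigns input $i$ a weighted average of its marginal contributions over all coalitions $S$ of the other inputs:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).$$

One hedged reading for unit-level change attribution, illustrative rather than the paper's exact construction, is to let $v(S)$ be the unit's output when the inputs in $S$ take their new values and the rest keep their old ones; the scores then sum to the observed change, which is the efficiency axiom.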
Given an unexpected change in the output metric of a large-scale system, it is important to answer why the change occurred: which inputs caused the change in the metric? A key component of such an attribution question is estimating counterfactuals: the (hypothetical) change in the system metric due to a specified change in a single input. However, it is difficult to model the output metric directly, owing to inherent stochasticity and complex interactions between system parts. We exploit the computational structure of the system to decompose the modeling task into sub-parts, such that each sub-part corresponds to a more stable mechanism that can be modeled accurately over time. Using the system's structure also helps to cast the metric as a computation over a structural causal model (SCM), providing a principled way to estimate counterfactuals. Specifically, we propose a method for estimating counterfactuals using time-series predictive models and construct an attribution score, CF-Shapley, that is consistent with desirable axioms for attributing an observed change in the output metric. Unlike past work on causal Shapley values, our proposed method can attribute a single observed change in the output (rather than a population-level effect), and therefore provides more accurate attribution scores when evaluated on simulated datasets. As a real-world application, we analyze a query-ad matching system with the goal of attributing observed changes in a metric of ad matching density. The attribution scores explain how query volume and ad demand from different query categories affect ad matching density, leading to actionable insights and uncovering the role of external events (such as "Cheetah Day") in driving matching density.
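A minimal sketch of attributing a single observed change with Shapley values, assuming direct access to the system function `f` (the paper instead estimates each coalition value counterfactually with time-series forecasting models; the toy function and values below are illustrative):

```python
import itertools
import math

def change_attribution(f, x_old, x_new):
    """Exact Shapley attribution of the change f(x_new) - f(x_old).

    v(S) is the output with the inputs in S switched to their new
    values, so the scores sum to the total observed change (efficiency).
    """
    n = len(x_old)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            for S in itertools.combinations(others, k):
                base = [x_new[j] if j in S else x_old[j] for j in range(n)]
                with_i = base.copy()
                with_i[i] = x_new[i]
                phi[i] += weight * (f(with_i) - f(base))
    return phi

# Toy metric with an interaction, so single-input deltas would mislead.
f = lambda x: x[0] * x[1] + x[2]
scores = change_attribution(f, x_old=[1.0, 2.0, 3.0], x_new=[2.0, 3.0, 1.0])
print(scores, sum(scores))   # scores sum to f(x_new) - f(x_old) = 2.0
```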
We propose a new notion of causal contribution which describes the "intrinsic" part of the contribution of a node on a target node in a DAG. We show that in some scenarios the existing causal quantification methods fail to fully capture this notion. By recursively writing each node as a function of the upstream noise terms, we separate the intrinsic information added by each node from the information it obtained from its ancestors. To interpret intrinsic information as a causal contribution, we consider "structure-preserving interventions", which randomize each node in a way that mimics the usual dependence on the parents and does not perturb the observed joint distribution. To obtain a measure that does not depend on an arbitrary ordering of the nodes, we propose Shapley-based symmetrization. We describe our contribution analysis for variance and entropy, but contributions for other target metrics can be defined analogously.
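A minimal sketch of the noise-recursion idea on an assumed linear SCM (the node names, coefficients, and variance target are illustrative; in this additive Gaussian case the decomposition is order-independent, so the Shapley symmetrization is trivial):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Linear SCM: X1 -> X2 -> Y and X1 -> Y, with independent noise terms.
N1, N2, N3 = rng.normal(size=(3, n))
X1 = N1
X2 = 0.8 * X1 + N2
Y = 0.5 * X1 + 1.2 * X2 + N3

# Recursively substituting parents expresses Y purely in noise terms:
#   Y = (0.5 + 1.2*0.8) * N1 + 1.2 * N2 + 1.0 * N3
# With independent unit-variance noises, Var(Y) then splits into
# per-node "intrinsic" parts.
coef = {"X1": 0.5 + 1.2 * 0.8, "X2": 1.2, "Y": 1.0}
intrinsic = {node: c ** 2 for node, c in coef.items()}  # Var(N_i) = 1

print("Var(Y) empirical:", Y.var())
print("sum of intrinsic parts:", sum(intrinsic.values()))
print(intrinsic)
```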
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both theoretical and computational standpoints. We disentangle this complexity into two factors: (1) the approach used to remove feature information, and (2) the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature removal approaches, we describe the multiple types of Shapley value feature attributions and the methods to calculate each one. Then, based on the tractable estimation strategies, we characterize two distinct families of approaches: model-agnostic and model-specific approximations. For the model-agnostic approximations, we benchmark a wide class of estimation approaches and tie them to alternative yet equivalent characterizations of the Shapley value. For the model-specific approximations, we clarify the assumptions crucial to each method's tractability for linear, tree, and deep models. Finally, we identify gaps in the literature and promising future research directions.
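To make factor (1) concrete, here is a sketch of two common coalition value functions $v(S)$ that differ only in how removed features are handled; `value_marginal` breaks the dependence between retained and removed features, while `value_conditional` crudely approximates the conditional expectation with a kernel weighting (the bandwidth and weighting scheme are assumptions for illustration):

```python
import numpy as np

def value_marginal(model, x, S, background):
    """v(S) with marginal ('interventional') removal: features outside S
    are drawn from background rows, breaking their dependence on x."""
    X = background.copy()
    X[:, S] = x[S]                       # pin the retained features
    return model(X).mean()

def value_conditional(model, x, S, background, bandwidth=0.5):
    """v(S) with (crudely approximated) conditional removal: background
    rows are weighted by similarity to x on the retained features S."""
    X = background.copy()
    X[:, S] = x[S]
    d = ((background[:, S] - x[S]) ** 2).sum(axis=1)
    w = np.exp(-d / (2 * bandwidth ** 2))
    return (w @ model(X)) / w.sum()

# With correlated features the two choices disagree:
rng = np.random.default_rng(1)
bg = rng.multivariate_normal([0, 0], [[1, 0.9], [0.9, 1]], size=5000)
model = lambda X: X[:, 0] + X[:, 1]
x = np.array([1.0, 1.0])
print(value_marginal(model, x, [0], bg))     # ~1.0: x2 marginalized to 0
print(value_conditional(model, x, [0], bg))  # ~1.9: x2 tracks x1
```

On correlated data the two value functions diverge, which is one reason the surveyed algorithms can return different attributions for the same model.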
We introduce the XPER (eXplainable PERformance) methodology to measure the specific contribution of the input features to the predictive or economic performance of a model. Our methodology offers several advantages. First, it is both model-agnostic and performance metric-agnostic. Second, XPER is theoretically founded as it is based on Shapley values. Third, the interpretation of the benchmark, which is inherent in any Shapley value decomposition, is meaningful in our context. Fourth, XPER is not plagued by model specification error, as it does not require re-estimating the model. Fifth, it can be implemented either at the model level or at the individual level. In an application based on auto loans, we find that performance can be explained by a surprisingly small number of features. XPER decompositions are rather stable across metrics, yet some feature contributions switch sign across metrics. Our analysis also shows that explaining model forecasts and model performance are two distinct tasks.
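A hedged sketch of decomposing a performance metric with Shapley values, using permutation sampling and mean-imputation to "remove" features (an illustration of the general recipe, not the actual XPER estimator):

```python
import numpy as np

def perf_shapley(predict, metric, X, y, n_perm=200, seed=0):
    """Permutation-sampled Shapley decomposition of a performance metric.
    Features outside the coalition are mean-imputed ('removed'); v(S) is
    the metric of the resulting predictions."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    mu = X.mean(axis=0)

    def v(S):
        Xs = np.tile(mu, (n, 1))
        Xs[:, list(S)] = X[:, list(S)]
        return metric(y, predict(Xs))

    phi = np.zeros(p)
    for _ in range(n_perm):
        order = rng.permutation(p)
        S, prev = set(), v(set())
        for j in order:
            S.add(j)
            cur = v(S)
            phi[j] += cur - prev
            prev = cur
    return phi / n_perm   # sums to v(all features) - v(no features)

# Example: decompose the R^2 of a least-squares fit across features.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=500)
beta, *_ = np.linalg.lstsq(np.c_[np.ones(500), X], y, rcond=None)
predict = lambda Z: np.c_[np.ones(len(Z)), Z] @ beta
r2 = lambda y, f: 1 - ((y - f) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(perf_shapley(predict, r2, X, y))   # feature 0 should dominate
```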
The most popular methods for measuring variable importance in black-box prediction algorithms make use of synthetic inputs that combine predictor values from multiple subjects. Such inputs can be unlikely, physically impossible, or even logically impossible. As a result, predictions for such cases can be based on data very unlike the data the black box was trained on. We argue that users cannot trust an explanation of a prediction algorithm's decision when the explanation uses such values. Instead, we advocate a method called Cohort Shapley that is grounded in economic game theory and, unlike most other game-theoretic methods, uses only actually observed data to quantify variable importance. Cohort Shapley works by narrowing the cohort of subjects judged to be similar to a target subject on one or more features. A feature is important if using it to narrow the cohort makes a large difference to the cohort mean. We illustrate the method on an algorithmic fairness problem where it is essential to attribute importance to protected variables that the model was not trained on. For every subject and every predictor variable, we can compute the importance of that predictor to the subject's predicted response or to their actual response. These values can be aggregated, for example over all Black subjects, and we propose a Bayesian bootstrap to quantify uncertainty in both individual and aggregate Shapley values.
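A minimal sketch of the cohort idea: $v(S)$ is the mean outcome over observed subjects similar to the target on every feature in $S$, so no synthetic inputs are ever formed (the tolerance-based similarity rule below is an assumption; the paper's similarity notion may differ):

```python
import numpy as np
from itertools import combinations
from math import factorial

def cohort_shapley(values, X, t, tol=0.5):
    """Cohort Shapley sketch for target subject t, by exact enumeration:
    v(S) is the mean of `values` over observed subjects similar to
    subject t on every feature in S."""
    n, p = X.shape
    similar = np.abs(X - X[t]) <= tol            # n x p similarity mask

    def v(S):
        mask = similar[:, list(S)].all(axis=1) if S else np.ones(n, bool)
        return values[mask].mean()

    phi = np.zeros(p)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        for k in range(p):
            w = factorial(k) * factorial(p - k - 1) / factorial(p)
            for S in combinations(others, k):
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi   # sums to subject t's cohort value minus the grand mean

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
vals = np.array([0., 1., 1., 2.])
print(cohort_shapley(vals, X, t=3))   # [0.5, 0.5]
```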
Several recent works have focused on the connection between machine learning and causality. Reasoning backwards from the grounding of mental models in causal models, we strengthen these initial works by showing that XAI essentially requires that machine learning learn causal relations consistent with the task at hand. Recognizing that human mental models (HMMs) are naturally represented by Pearlian structural causal models (SCMs), we make two key observations by constructing an example metric space of linear SCMs: first, a notion of "true" data can be made sensible under an SCM, and second, an aggregation of human-derived SCMs may point towards the "true" SCM. Following the implications of these insights, we argue, as a third observation, that explanations derived from HMMs must imply explainability within the SCM framework. Guided by this intuition, we present an original derivation from these first principles that reveals a human-readable explanation scheme consistent with a given SCM, justifying the name structural causal interpretations (SCIs). Further, we analyze these SCIs and their mathematical properties both theoretically and empirically. We prove that any existing graph induction method (GIM) is in fact interpretable in the SCI sense. Our first experiment (E1) assesses the quality of such GIM-based SCIs. In (E2), we observe evidence for our conjecture of improved sample efficiency for SCI-based learning. For (E3), we conduct a user study (n = 22) and observe a superiority of human-derived SCIs over GIM-based ones, confirming our initial hypothesis.
Decision-making systems based on AI and machine learning have been used in a wide variety of real-world settings, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future where autonomous systems will drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society's most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are made by humans, and they remain (or may be amplified) when decisions are made by machines with little transparency, accountability, and fairness. In this paper, we introduce the framework of Causal Fairness Analysis with the intent of filling this gap, i.e., understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to tie the quantification of the disparities present in the observed data to the collection of underlying, and often unobserved, causal mechanisms that generate the disparity in the first place, a challenge we call the Fundamental Problem of Causal Fairness Analysis (FPCFA). To solve the FPCFA, we study the problem of decomposing empirical measures of disparity and fairness, attributing such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationships between different criteria found in the literature. Finally, we study which minimal causal assumptions are needed for performing causal fairness analysis and propose a Fairness Cookbook, which allows data scientists to assess the existence of disparate impact and disparate treatment.
This paper proposes a novel approach to explain the predictions made by data-driven methods. Since such predictions rely heavily on the data used for training, explanations that convey information about how the training data affects the predictions are useful. The paper proposes a novel approach to quantify how different data-clusters of the training data affect a prediction. The quantification is based on Shapley values, a concept which originates from coalitional game theory, developed to fairly distribute the payout among a set of cooperating players. A player's Shapley value is a measure of that player's contribution. Shapley values are often used to quantify feature importance, i.e., how features affect a prediction. This paper extends this to cluster importance, letting clusters of the training data act as players in a game where the predictions are the payouts. The novel methodology proposed in this paper lets us explore and investigate how different clusters of the training data affect the predictions made by any black-box model, allowing new aspects of the reasoning and inner workings of a prediction model to be conveyed to the users. The methodology is fundamentally different from existing explanation methods, providing insight which would not be available otherwise, and should complement existing explanation methods, including explanations based on feature importance.
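A sketch of the cluster-as-player idea under stated assumptions: $v(S)$ is the prediction for the query point from a least-squares model refit on only the clusters in $S$, with the global mean as the empty-coalition baseline (both the refitting strategy and the baseline are illustrative choices, not necessarily the paper's):

```python
import numpy as np
from itertools import combinations
from math import factorial

def cluster_shapley(X, y, labels, x_query):
    """Cluster-importance sketch: each training cluster is a player, and
    v(S) is the prediction for x_query from a least-squares model refit
    on only the clusters in S."""
    clusters = np.unique(labels)
    m = len(clusters)

    def v(S):
        if not S:
            return y.mean()                      # empty-coalition baseline
        mask = np.isin(labels, clusters[list(S)])
        A = np.c_[np.ones(mask.sum()), X[mask]]
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        return np.r_[1.0, x_query] @ coef

    phi = np.zeros(m)
    for i in range(m):
        others = [j for j in range(m) if j != i]
        for k in range(m):
            w = factorial(k) * factorial(m - k - 1) / factorial(m)
            for S in combinations(others, k):
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi   # sums to v(all clusters) - y.mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)
labels = rng.integers(0, 3, size=300)            # stand-in for k-means ids
print(cluster_shapley(X, y, labels, np.array([1.0, -1.0])))
```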
In addition to the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that enable the interpretation of complex non-linear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. While such explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences of XAI for regression and classification tasks, establish novel theoretical insights and analysis for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
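A minimal sketch of checking counterfactual fairness by the abduction-action-prediction recipe on an assumed toy linear SCM (the structure, coefficients, and predictor are illustrative, not the paper's law-school model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy linear SCM: protected attribute A, covariate X <- A + noise.
A = rng.integers(0, 2, n).astype(float)
U_X = rng.normal(size=n)
X = 1.5 * A + U_X

predict = lambda a, x: 0.2 * a + 0.7 * x        # an (unfair) predictor

# Abduction: recover the noise consistent with each observed unit.
u_x = X - 1.5 * A
# Action + prediction: flip A, regenerate descendants from the same noise.
A_cf = 1 - A
X_cf = 1.5 * A_cf + u_x

gap = predict(A_cf, X_cf) - predict(A, X)
print("mean |factual - counterfactual| prediction gap:", np.abs(gap).mean())
# Counterfactual fairness requires this gap to be (near) zero for every
# unit; here it is (0.2 + 0.7*1.5)*(A_cf - A), so the predictor is unfair.
```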
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
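For concreteness, the paper's additive feature attribution class and the Shapley-value solution it proves unique (here $z' \in \{0,1\}^M$ flags which simplified features are present, $F$ is the full feature set, and $f_S$ denotes the model's expected output given the features in $S$):

$$g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad \phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,\bigl[f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)\bigr].$$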
This paper studies the problem of estimating the contributions of features to a prediction made by a machine learning model for a specific instance, as well as the overall contributions of features to the model. The causal effect of a feature (variable) on the predicted outcome reflects the contribution of the feature to the prediction. A challenge is that, without a known causal graph, most existing causal effects cannot be estimated from data. In this paper, we define an explanatory causal effect based on a hypothetical ideal experiment. This definition brings several benefits to model-agnostic explanations. First, explanations are transparent and have causal meaning. Second, the estimation of explanatory causal effects can be data-driven. Third, the causal effects provide both a local explanation for a specific prediction and a global explanation showing the overall importance of a feature in a prediction model. We further propose a method for explanation based on the explanatory causal effects, along with an approach for combining variables. We demonstrate the definition and the methods with experiments on some real-world datasets.
Interpretability provides a means for humans to verify aspects of machine learning (ML) models and empower human+ML teaming in situations where the task cannot be fully automated. Different contexts require explanations with different properties. For example, the kind of explanation required to determine if an early cardiac arrest warning system is ready to be integrated into a care setting is very different from the type of explanation required for a loan applicant to help determine the actions they might need to take to make their application successful. Unfortunately, there is a lack of standardization when it comes to properties of explanations: different papers may use the same term to mean different quantities, and different terms to mean the same quantity. This lack of a standardized terminology and categorization of the properties of ML explanations prevents us from both rigorously comparing interpretable machine learning methods and identifying what properties are needed in what contexts. In this work, we survey properties defined in interpretable machine learning papers, synthesize them based on what they actually measure, and describe the trade-offs between different formulations of these properties. In doing so, we enable more informed selection of task-appropriate formulations of explanation properties as well as standardization for future work in interpretable machine learning.
Even when effective, the use of a model must be accompanied by an understanding at all levels of the transformations of the data (upstream and downstream). Thus, there is a growing need to define the relationships between individual data and the choices an algorithm could make based on its analysis (e.g., the recommendation of a product or a promotional offer, or an insurance rate representing a risk). Model users must ensure that models do not discriminate and that it is also possible to explain their results. This paper introduces the importance of model interpretation and addresses the notion of model transparency. Within an insurance context, it specifically illustrates how some of these tools can be used to enforce control over actuarial models, which today can take advantage of machine learning. On a simple example of loss frequency estimation in car insurance, we show the interest of some interpretability methods for adapting explanations to the target audience.
SHAP is a popular method for measuring variable importance in machine learning models. In this paper, we study the algorithm used to estimate SHAP scores and show that it is a transformation of the functional ANOVA decomposition. We use this connection to show that the challenges in SHAP approximations largely relate to the choice of a feature distribution and the number of $2^p$ ANOVA terms estimated. We argue that the connection between machine learning explainability and sensitivity analysis is illuminating in this case, but that the immediate practical consequences are not obvious, since the two fields face a different set of constraints. Machine learning explainability concerns models that are inexpensive to evaluate but often have hundreds, if not thousands, of features. Sensitivity analysis typically deals with models from physics or engineering that may be very time-consuming to run, but operate on a comparatively small space of inputs.
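One way to state the connection, sketched here for independent inputs and the marginal value function (a standard identity in this literature, not necessarily the paper's exact formulation): if $f$ admits the functional ANOVA expansion into mean-zero, mutually orthogonal terms, the SHAP value of feature $i$ collects every ANOVA term containing $i$, split evenly among that term's $|u|$ participants:

$$f(x) = \sum_{u \subseteq \{1,\dots,p\}} f_u(x_u), \qquad \phi_i(x) = \sum_{u \ni i} \frac{f_u(x_u)}{|u|}.$$

Estimating a quantity built from up to $2^p$ such terms is what strains the approximation for models with hundreds of features.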
The Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less natural in certain machine learning settings, leading to unintuitive model explanations. In particular, the Shapley value uses the same weight for all marginal contributions — i.e., it gives the same importance when a large number of other features are given as when a small number of other features are given. This property can be problematic if larger feature sets are more informative than smaller feature sets. Our work performs a rigorous analysis of the potential limitations of Shapley feature attribution. We identify settings where the Shapley value is mathematically suboptimal, assigning larger attributions to less influential features. Motivated by this observation, we propose WeightedSHAP, which generalizes the Shapley value and learns which marginal contributions to focus on directly from data. On several real-world datasets, we demonstrate that the influential features identified by WeightedSHAP are better able to recapitulate the model's predictions than the features identified by the Shapley value.
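The generalization has the shape of a semivalue: keep the marginal contributions but let the weight depend only on coalition size, with the Shapley value recovered by one particular choice (WeightedSHAP learns the weights from data; this display is the general template rather than its exact parameterization):

$$\phi_i^{w} = \sum_{S \subseteq N \setminus \{i\}} w_{|S|}\,\bigl(v(S \cup \{i\}) - v(S)\bigr), \qquad \text{Shapley: } w_{|S|} = \frac{|S|!\,(n-|S|-1)!}{n!}.$$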
This review presents empirical researchers with recent advances in causal inference, and stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data. Special emphasis is placed on the assumptions that underlie all causal inferences, the languages used in formulating those assumptions, the conditional nature of all causal and counterfactual claims, and the methods that have been developed for the assessment of such claims. These advances are illustrated using a general theory of causation based on the Structural Causal Model (SCM) described in Pearl (2000a), which subsumes and unifies other approaches to causation, and provides a coherent mathematical foundation for the analysis of causes and counterfactuals. In particular, the paper surveys the development of mathematical tools for inferring (from a combination of data and assumptions) answers to three types of causal queries: (1) queries about the effects of potential interventions (also called "causal effects" or "policy evaluation"), (2) queries about probabilities of counterfactuals (including assessment of "regret," "attribution" or "causes of effects"), and (3) queries about direct and indirect effects (also known as "mediation"). Finally, the paper defines the formal and conceptual relationships between the structural and potential-outcome frameworks and presents tools for a symbiotic analysis that uses the strong features of both.
Machine and deep learning survival models demonstrate similar or even improved event-prediction capabilities compared to classical statistical learning methods, yet they are too complex to be interpreted by humans. Several model-agnostic explanations are available to overcome this issue; however, none of them directly explains the survival function prediction. In this paper, we introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models. It is based on SHapley Additive exPlanations, which has solid theoretical foundations and broad adoption among machine learning practitioners. The proposed method aims to enhance precision diagnostics and support domain experts in making decisions. Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and that its aggregation determines the importance of variables for a prediction better than SurvLIME. SurvSHAP(t) is model-agnostic and can be applied to all models with functional output. We provide an accessible implementation of time-dependent explanations in Python at http://github.com/mi2datalab/survshap.
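A toy sketch of a time-dependent attribution $\phi_i(t)$ for a two-feature survival function, with removed features reset to baselines (the parametric model and the removal scheme are assumptions for illustration; SurvSHAP(t) itself wraps arbitrary black-box survival models):

```python
import numpy as np

# Toy exponential survival model standing in for a black box.
def surv_fn(x1, x2, t):
    return np.exp(-np.exp(0.8 * x1 + 0.3 * x2) * t)

def survshap_like(x1, x2, b1, b2, times):
    """Time-dependent Shapley values phi_i(t) for two features, with
    removed features reset to baselines (b1, b2)."""
    v_none = surv_fn(b1, b2, times)
    v_1    = surv_fn(x1, b2, times)
    v_2    = surv_fn(b1, x2, times)
    v_both = surv_fn(x1, x2, times)
    phi1 = 0.5 * ((v_1 - v_none) + (v_both - v_2))  # average over both orders
    phi2 = 0.5 * ((v_2 - v_none) + (v_both - v_1))
    return phi1, phi2                               # phi1 + phi2 = v_both - v_none

times = np.linspace(0.1, 5.0, 25)
phi1, phi2 = survshap_like(1.0, 1.0, 0.0, 0.0, times)
# Aggregating |phi_i(t)| over time ranks variables by overall importance.
print(np.abs(phi1).mean(), np.abs(phi2).mean())
```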
With the widespread use of sophisticated machine learning models in sensitive applications, understanding their decision-making has become an important task. Models trained on tabular data have witnessed significant progress in explanations of their underlying decision-making processes, which involve a small number of discrete features. However, applying these methods to high-dimensional inputs such as images is not a trivial task. Images are composed of pixels at an atomic level and do not carry any interpretability by themselves. In this work, we seek to use annotated high-level interpretable features of images to provide explanations. We leverage the Shapley value framework from game theory, which has garnered wide acceptance in XAI problems. By developing a pipeline to generate counterfactuals and subsequently using it to estimate Shapley values, we obtain contrastive and interpretable explanations with strong axiomatic guarantees.