We tackle the problem of computing counterfactual explanations -- minimal changes to the features that flip an undesirable model prediction. We propose a solution to this problem for linear Support Vector Machine (SVM) models. Moreover, we introduce a way to account for weighted actions that allow more change in certain features than in others. In particular, we show how to find counterfactual explanations with the purpose of increasing model interpretability. These explanations are valid, change only actionable features, are close to the data distribution, are sparse, and take correlations between features into account. We cast this as a mixed integer programming optimization problem. Additionally, we introduce two novel scale-invariant cost functions for assessing the quality of counterfactual explanations and use them to evaluate the quality of our approach on a real medical dataset. Finally, we build a support vector machine model to predict whether law students will pass the Bar exam using protected features, and use our algorithms to uncover the inherent biases of the SVM.
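A minimal sketch of the geometric core of this idea for a linear model follows; it is not the paper's mixed integer program (which additionally enforces actionability, sparsity, weighted actions, and feature correlations), just the unconstrained special case that the closest prediction-flipping point for a linear SVM lies along the weight vector. Function and variable names are illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

def flip_linear_svm(clf, x, margin=1e-3):
    """Smallest unconstrained L2 change to x that flips sign(w.x + b).

    Illustrative special case only; the paper's MIP also restricts which
    features may move and by how much.
    """
    w = clf.coef_.ravel()
    b = float(clf.intercept_[0])
    score = w @ x + b
    # Step along w so the new score sits just past the hyperplane
    # on the opposite side of the current prediction.
    step = -(score + np.sign(score) * margin) / (w @ w)
    return x + step * w

# Usage sketch on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LinearSVC().fit(X, y)

x0 = X[0]
x_cf = flip_linear_svm(clf, x0)
print(clf.predict([x0]), clf.predict([x_cf]))  # predictions differ
```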
This study investigates the impact of the machine learning model on the generation of counterfactual explanations by benchmarking three different types of models: a decision tree (fully transparent, interpretable, white-box model), a random forest (semi-interpretable, grey-box model), and a neural network (fully opaque black-box model). We tested the counterfactual generation process with four algorithms (DiCE, WatcherCF, prototype, and GrowingSpheresCF) on five different datasets (COMPAS, Adult, German Credit, Diabetes, and Breast Cancer). Our findings indicate that: (1) different machine learning models have no impact on the generation of counterfactual explanations; (2) counterfactual algorithms based solely on proximity loss functions are not actionable and do not provide meaningful explanations; (3) one cannot obtain meaningful evaluation results without guaranteeing plausibility in the counterfactual generation process; evaluating algorithms that do not account for plausibility in their internal mechanisms against the current state-of-the-art metrics will lead to biased and unreliable conclusions; and (4) qualitative analysis (in addition to quantitative analysis) is strongly recommended to ensure a robust analysis of counterfactual explanations and the potential identification of biases.
Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions. An intriguing class of explanations is through counterfactuals, hypothetical examples that show people how to obtain a different prediction. We posit that effective counterfactual explanations should satisfy two properties: feasibility of the counterfactual actions given user context and constraints, and diversity among the counterfactuals presented. To this end, we propose a framework for generating and evaluating a diverse set of counterfactual explanations based on determinantal point processes. To evaluate the actionability of counterfactuals, we provide metrics that enable comparison of counterfactual-based methods to other local explanation methods. We further address necessary tradeoffs and point to causal implications in optimizing for counterfactuals. Our experiments on four real-world datasets show that our framework can generate a set of counterfactuals that are diverse and well approximate local decision boundaries, outperforming prior approaches to generating diverse counterfactuals. We provide an implementation of the framework at https://github.com/microsoft/DiCE.
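Since the framework is released as the dice_ml package at the linked repository, a usage sketch along the following lines is possible. The dataset file, column names, and hyperparameter values are placeholder assumptions, and the exact API surface may differ across dice_ml versions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import dice_ml

# Placeholder tabular data; assume all feature columns are numeric and
# 'income' is the binary outcome being explained.
df = pd.read_csv("adult_subset.csv")  # hypothetical file
target = "income"
clf = RandomForestClassifier().fit(df.drop(columns=[target]), df[target])

d = dice_ml.Data(dataframe=df,
                 continuous_features=["age", "hours_per_week"],  # assumed columns
                 outcome_name=target)
m = dice_ml.Model(model=clf, backend="sklearn")
exp = dice_ml.Dice(d, m, method="random")  # "genetic" and "kdtree" also exist

query = df.drop(columns=[target]).iloc[[0]]
cfs = exp.generate_counterfactuals(query, total_CFs=4, desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```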
Explainable Artificial Intelligence (XAI) is a set of techniques for understanding both the technical and non-technical aspects of Artificial Intelligence (AI) systems. XAI is crucial to help satisfy the increasingly important demand for trustworthy AI, characterized by fundamental properties such as respect for human autonomy, prevention of harm, transparency, and accountability. Counterfactual explanations aim to provide end users with the set of features (and their corresponding values) that need to be changed in order to achieve a desired outcome. Current approaches rarely take into account the feasibility of the actions needed to achieve the proposed explanations, and in particular they fall short of considering the causal impact of such actions. In this paper, we present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations that capture the underlying causal relations designed from the data, while at the same time providing feasible recommendations to reach the proposed profile. Moreover, our approach has the advantage that it can be set on top of existing counterfactual generator algorithms, thus minimizing the complexity of imposing additional causal constraints. We demonstrate the effectiveness of our approach with a set of different experiments using synthetic and real datasets (including a proprietary dataset from the financial domain).
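The abstract's key idea, acting on the latent (exogenous) variables of a causal model rather than editing features directly, can be illustrated with a toy two-variable structural causal model. The equations, variable names, and numbers below are illustrative assumptions, not the CEILS construction itself.

```python
import numpy as np

# Toy SCM: income = 0.8 * education + u_income, with exogenous noise terms u.
# Intervening on u_education (the user's own action) and recomputing through
# the structural equations moves correlated downstream features consistently,
# instead of editing each observed feature in isolation.

def scm_forward(u_edu, u_inc):
    education = u_edu
    income = 0.8 * education + u_inc
    return np.array([education, income])

def latent_counterfactual(x, delta_u_edu):
    """Abduction-action-prediction for the toy SCM above."""
    education, income = x
    u_edu = education                    # abduction: recover latent noise
    u_inc = income - 0.8 * education
    return scm_forward(u_edu + delta_u_edu, u_inc)  # action + prediction

x = np.array([12.0, 30.0])               # (years of education, income in k$)
print(latent_counterfactual(x, delta_u_edu=2.0))  # income rises with education
```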
Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely affected by algorithmic decisions. As they are deployed in critical applications (e.g., law enforcement, financial lending), it becomes important to ensure that we clearly understand the vulnerabilities of these methods and find ways to address them. However, there is little understanding of the vulnerabilities and shortcomings of counterfactual explanations. In this work, we introduce the first framework that describes the vulnerabilities of counterfactual explanations and shows how they can be manipulated. More specifically, we show that counterfactual explanations may converge to drastically different counterfactuals under small input perturbations, indicating that they are not robust. Leveraging this insight, we introduce a novel objective for training seemingly fair models whose counterfactual explanations find much lower-cost recourse under a slight perturbation. We describe how these models can provide low-cost recourse for specific subgroups in the data while appearing fair to auditors. We perform experiments on loan and violent crime prediction datasets where certain subgroups achieve up to 20x lower cost recourse under the perturbation. These results raise concerns regarding the dependability of current counterfactual explanation techniques, and we hope to inspire investigations into robust counterfactual explanations.
Black box machine learning models are currently being used for high-stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: it is to design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

Introduction: There has been an increasing trend in healthcare and criminal justice to leverage machine learning (ML) for high-stakes prediction applications that deeply impact human lives. Many of the ML models are black boxes that do not explain their predictions in a way that humans can understand. The lack of transparency and accountability of predictive models can have (and has already had) severe consequences; there have been cases of people incorrectly denied parole [1], poor bail decisions leading to the release of dangerous criminals, ML-based pollution models stating that highly polluted air was safe to breathe [2], and generally poor use of limited valuable resources in criminal justice, medicine, energy reliability, finance, and in other domains [3]. Rather than trying to create models that are inherently interpretable, there has been a recent explosion of work on "Explainable ML," where a second (post hoc) model is created to explain the first black box model. This is problematic. Explanations are often not reliable, and can be misleading, as we discuss below. If we instead use models that are inherently interpretable, they provide their own explanations, which are faithful to what the model actually computes. In what follows, we discuss the problems with Explainable ML, followed by the challenges in Interpretable ML. This document is mainly relevant to high-stakes decision making and troubleshooting models, which are the main two reasons one might require an interpretable or explainable model. Interpretability is a domain-specific notion [4,5,6,7], so there cannot be an all-purpose definition. Usually, however, an interpretable machine learning model is constrained in model form so that it is either useful to someone, or obeys structural knowledge of the domain, such as monotonicity [e.g., 8], causality, structural (generative) constraints, additivity [9], or physical constraints that come from domain knowledge.
Counterfactual explanations (CEs) are a powerful means for understanding how decisions made by algorithms can be changed. Researchers have proposed a number of desiderata that CEs should meet to be practically useful, such as requiring minimal effort to enact, or complying with causal models. We consider a further aspect to improve the usability of CEs: robustness to adverse perturbations, which may naturally happen due to unfortunate circumstances. Since CEs typically prescribe a sparse form of intervention (i.e., only a subset of the features should be changed), we study the effect of addressing robustness separately for the features that are recommended to be changed and those that are not. Our definitions are workable in that they can be incorporated as penalty terms in the loss functions used for discovering CEs. To experiment with robustness, we create and release code in which five datasets (commonly used in the field of fair and explainable machine learning) have been enriched with feature-specific annotations that can be used to sample meaningful perturbations. Our experiments show that CEs are often not robust and, if adverse perturbations take place (even if not worst-case ones), the interventions they prescribe may require a much larger cost than anticipated, or even become impossible. However, accounting for robustness in the search process, which can be done rather easily, allows discovering robust CEs systematically. Robust CEs make the additional interventions needed to counter perturbations much less costly than non-robust CEs. We also find that robustness is easier to achieve for the features to change, posing an important point of consideration for the choice of which counterfactual explanation is best for the user. Our code is available at: https://github.com/marcovirgolin/robust-counterfactuals.
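The abstract says the robustness definitions can be incorporated as penalty terms in the CE search loss. One way to sketch that idea (illustrative only, not the authors' exact definitions) is to penalize candidate counterfactuals whose prediction flips back under sampled perturbations of the features.

```python
import numpy as np

def robustness_penalty(model_predict, x_cf, desired_class, feature_scales,
                       n_samples=100, rng=None):
    """Fraction of sampled perturbations of x_cf that lose the desired class.

    Illustrative stand-in for a robustness term in a CE search loss;
    `feature_scales` plays the role of the feature-specific annotations
    mentioned in the abstract (how much each feature may plausibly shift).
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(scale=feature_scales, size=(n_samples, len(x_cf)))
    preds = model_predict(x_cf + noise)
    return float(np.mean(preds != desired_class))

# Inside a CE search, the total loss could then look like:
#   loss = distance(x, x_cf) + lambda_r * robustness_penalty(...)
```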
Counterfactual explanations are a popular type of explanation for making the outcomes of a decision making system transparent to the user. Counterfactual explanations tell the user what to do in order to change the outcome of the system in a desirable way. However, it was recently discovered that the recommended actions can differ significantly in their complexity between protected groups of individuals. Providing one group with more difficult recommendations puts it at a disadvantage compared to other groups. In this work we propose a model-agnostic method for computing counterfactual explanations whose complexity does not differ significantly between protected groups.
Nowadays, Artificial Intelligence (AI) has become a fundamental component of clinical and remote healthcare applications, but the best-performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques are defined to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. Notably, XAI has not attracted the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time-series data, respectively, yet XAI is not commonly analyzed for these data types, while computer vision and Natural Language Processing (NLP) are the reference applications. To provide an overview of the XAI methods most suitable for tabular and time-series data in the healthcare domain, this paper reviews the literature of the last 5 years, illustrating the types of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardized quality evaluation, and human-centered quality assessment as key features to ensure effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.
As machine learning and deep learning models have become highly prevalent in a multitude of domains, the main reservation about adopting them in decision-making processes is their black-box nature. The Explainable Artificial Intelligence (XAI) paradigm has gained a lot of momentum thanks to its ability to reduce model opacity. XAI methods not only increase stakeholders' trust in the decision process but also help developers ensure its fairness. Recent efforts have been devoted to creating transparent models and post-hoc explanations. However, fewer methods have been developed for time-series data, and even fewer for multivariate datasets. In this work, we take advantage of the inherent interpretability of shapelets to develop a model-agnostic counterfactual explanation algorithm for multivariate time series (MTS). Counterfactuals can have a tremendous impact on making black-box models actionable by indicating which changes must be performed on the input to change the final decision. We test our approach on a real-life solar flare prediction dataset and show that it produces high-quality counterfactuals. Moreover, a comparison with the only other MTS counterfactual generation algorithm shows that, in addition to being visually interpretable, our explanations are superior in terms of proximity, sparsity, and plausibility.
Dimensionality reduction is a popular preprocessing method and a widely used tool in data mining. Transparency, usually achieved by means of explanations, is nowadays a widely accepted and crucial requirement of machine-learning-based systems such as classifiers and recommender systems. However, the transparency of dimensionality reduction and other data mining tools has not received much consideration yet, although it is crucial for understanding their behavior: in particular, practitioners may want to understand why a specific sample was mapped to a specific location. In order to (locally) understand the behavior of a given dimensionality reduction method, we introduce the abstract concept of contrastive explanations for dimensionality reduction, and apply a realization of this concept to the specific application of explaining two-dimensional data visualizations.
The problem of identifying algorithmic recourse for people affected by machine learning model decisions has recently received much attention. Some recent works model user-incurred costs, which are directly associated with user satisfaction, but they assume a single global cost function shared by all users. This is an unrealistic assumption when users have dissimilar preferences about which features they are willing to act upon and different costs associated with changing those features. In this work, we formalize the notion of user-specific cost functions and introduce a new method for identifying actionable recourse for users. By default, we assume that users' cost functions are hidden from the recourse method, although our framework allows users to partially or completely specify their preferences or cost function. We propose an objective function, Expected Minimum Cost (EMC), based on two key ideas: (1) when presenting a set of options to a user, it is vital that there be at least one low-cost solution the user could adopt; and (2) when we do not know the user's true cost function, we can approximately optimize for user satisfaction by first sampling plausible cost functions and then finding a set that achieves a good cost for the user in expectation. We optimize EMC with a novel discrete optimization algorithm, Cost-Optimized Local Search (COLS), which is guaranteed to improve the quality of the recourse set over iterations. Experimental evaluation on popular real-world datasets with simulated user costs shows that our method outperforms strong baseline methods by up to 25.89 percentage points. Using standard fairness metrics, we also show that our method can provide fairer solutions across demographic groups than comparable methods, and we verify that our method is robust to misspecification of the cost function distribution.
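The Expected Minimum Cost objective can be sketched directly from its description: sample plausible user cost functions, take the cheapest option in the candidate recourse set for each sample, and average. The sampling interface and the weighted-L1 cost form below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def expected_minimum_cost(x, recourse_set, sample_cost_weights,
                          n_samples=1000, rng=None):
    """Monte Carlo estimate of EMC for a candidate set of recourses.

    sample_cost_weights(rng) should return one plausible per-feature weight
    vector, i.e., one draw of a user cost function (illustrative interface).
    """
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        w = sample_cost_weights(rng)                               # one sampled user
        costs = [np.sum(w * np.abs(r - x)) for r in recourse_set]  # weighted L1 effort
        total += min(costs)                                        # user picks cheapest option
    return total / n_samples

# A local search in the spirit of COLS would swap candidates in and out of
# recourse_set and keep changes that lower this estimate.
```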
As algorithmic predictions increasingly impact humans, model interpretability has become an important issue in machine learning (ML). Explanations not only help users understand why an ML model makes certain predictions, but also how those predictions can be changed. In this thesis, we examine the interpretability of ML models from three vantage points: algorithms, users, and pedagogy, and contribute several novel solutions to the interpretability problem.
In the last years many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, delineates explicitly or implicitly its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to open black box models should also be useful for putting the many open research questions in perspective.
In this paper, we present a new explainability formalism designed to shed light on how each input variable of a test set impacts the predictions of machine learning models. Hence, we propose a group explainability formalism for trained machine learning decision rules, based on their response to the variability of the input variable distributions. In order to emphasize the impact of each input variable, this formalism uses an information-theoretic framework that quantifies the influence of all input-output observations based on entropic projections. This is thus the first unified and model-agnostic formalism enabling data scientists to interpret the dependence between the input variables, their impact on the prediction errors, and their influence on the output predictions. Convergence rates of the entropic projections are provided in the large-sample case. Most importantly, we prove that computing an explanation in our framework has a low algorithmic complexity, making it scalable to real-life large datasets. We illustrate our strategy by explaining complex decision rules learned using XGBoost, Random Forest, or deep neural network classifiers on various datasets such as Adult Income, MNIST, CelebA, Boston Housing, and Iris, as well as synthetic ones. Finally, we clarify the differences with explainability strategies based on single observations, such as LIME and SHAP. Results can be reproduced with the freely distributed Python toolbox https://gems-ai.aniti.fr/.
The advances in Artificial Intelligence are creating new opportunities to improve lives of people around the world, from business to healthcare, from lifestyle to education. For example, some systems profile users using their demographic and behavioral characteristics to make certain domain-specific predictions. Often, such predictions impact the life of the user directly or indirectly (e.g., loan disbursement, determining insurance coverage, shortlisting applications, etc.). As a result, the concerns over such AI-enabled systems are also increasing. To address these concerns, such systems are mandated to be responsible, i.e., transparent, fair, and explainable to developers and end-users. In this paper, we present ComplAI, a unique framework to enable, observe, analyze and quantify explainability, robustness, performance, fairness, and model behavior in drift scenarios, and to provide a single Trust Factor that evaluates different supervised Machine Learning models not just on their ability to make correct predictions but from an overall responsibility perspective. The framework helps users to (a) connect their models and enable explanations, (b) assess and visualize different aspects of the model, such as robustness, drift susceptibility, and fairness, and (c) compare different models (from different model families or obtained through different hyperparameter settings) from an overall perspective, thereby facilitating actionable recourse for improvement of the models. It is model agnostic and works with different supervised machine learning scenarios (i.e., Binary Classification, Multi-class Classification, and Regression) and frameworks. It can be seamlessly integrated with any ML life-cycle framework. Thus, this already deployed framework aims to unify critical aspects of Responsible AI systems for regulating the development process of such real systems.
Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score to each input feature of a model. In the actionable recourse setting, where the goal of an explanation is to improve outcomes for model consumers, it is often unclear how feature attributions should be correctly used. With this work, we aim to strengthen and clarify the link between actionable recourse and feature attributions. Concretely, we propose a variant of SHAP, Counterfactual SHAP (CoSHAP), which uses counterfactual generation techniques to produce a background dataset for use within the marginal (a.k.a. interventional) Shapley value framework. We motivate the need for careful consideration of the background dataset when using Shapley values for feature attributions in the actionable recourse setting, alongside a requirement of monotonicity, with numerous synthetic examples. Furthermore, we demonstrate the efficacy of CoSHAP by proposing and justifying a quantitative score for feature attributions, counterfactual-ability, and show that, as measured by this metric, CoSHAP is superior to existing methods when evaluated on public datasets using monotone tree ensembles.
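Because the shap package's marginal (interventional) explainers accept a user-supplied background dataset, the core move described in the abstract can be sketched as swapping the usual background sample for a set of counterfactuals. The counterfactual generator is abstracted into a placeholder function; this is not the authors' implementation.

```python
import numpy as np
import shap  # SHAP library; interventional values depend on a background dataset

def coshap_values(model, x, generate_counterfactuals):
    """Sketch of CoSHAP's core idea: use counterfactuals as the background
    when computing marginal (interventional) Shapley values.

    `generate_counterfactuals(x)` is a placeholder for any CE generator that
    returns an (n, d) array of counterfactuals for the instance x.
    """
    background = generate_counterfactuals(x)          # counterfactual background
    explainer = shap.KernelExplainer(model.predict_proba, background)
    return explainer.shap_values(np.atleast_2d(x))

# For tree ensembles (as in the abstract's experiments), an analogous call is
#   shap.TreeExplainer(model, data=background, feature_perturbation="interventional")
```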
As machine learning (ML) models are increasingly being deployed in high-stakes applications, policymakers have suggested tighter data protection regulations (e.g., GDPR, CCPA). One key principle is the "right to be forgotten," which gives users the right to have their data deleted. Another key principle is the right to actionable explanations, also known as algorithmic recourse, which allows users to reverse unfavorable decisions. To date, it is unknown whether these two principles can be operationalized simultaneously. Therefore, we introduce and study the problem of recourse invalidation in the context of data deletion requests. More specifically, we theoretically and empirically analyze the behavior of popular state-of-the-art algorithms and demonstrate that the recourses generated by these algorithms are likely to be invalidated if a small number of data deletion requests (e.g., 1 or 2) warrant updates of the predictive model. For the setting of linear models and overparameterized neural networks, studied through the lens of the neural tangent kernel (NTK), we suggest a framework to identify a minimal subset of critical training points which, when removed, maximize the fraction of invalidated recourses. Using our framework, we empirically show that the removal of as few as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms. Thus, our work raises fundamental questions about the compatibility of the "right to be forgotten" with the "right to actionable explanations."
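A minimal empirical sanity check of the phenomenon is sketched below: refit a linear model after deleting a few training rows and count how many previously valid recourses the updated model rejects. The data, the toy recourse rule, and the random (rather than NTK-based, optimally targeted) choice of which points to delete are all illustrative assumptions, so the measured invalidation rate will typically be far below the paper's reported worst case.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + 0.3 * rng.normal(size=300) > 0).astype(int)

model = LogisticRegression().fit(X, y)
denied = X[model.predict(X) == 0]

# Toy recourse: move each denied point along the weight vector until its
# decision score reaches +0.1, i.e., just inside the favorable class.
w, b = model.coef_.ravel(), model.intercept_[0]
recourses = denied - ((denied @ w + b - 0.1) / (w @ w))[:, None] * w
assert model.predict(recourses).all()  # all recourses valid for the original model

# Delete a handful of training points (random here, not the critical subset
# the paper's framework would select) and retrain.
keep = np.ones(len(X), bool)
keep[rng.choice(len(X), size=2, replace=False)] = False
model_del = LogisticRegression().fit(X[keep], y[keep])

invalidated = (model_del.predict(recourses) == 0).mean()
print(f"fraction of recourses invalidated after deleting 2 points: {invalidated:.2%}")
```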
As AI systems demonstrate increasingly strong predictive performance, their adoption has grown across many domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often not desirable due to safety, ethical, and legal concerns, while fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must embrace empirical approaches to form a foundational understanding of how humans interact and collaborate with AI to make decisions. To invite and help structure research efforts towards a science of understanding and improving human-AI decision making, we survey recent literature of empirical human-subject studies on this topic. We summarize the study design choices made in over 100 papers in three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in current practices of the field, and make a list of recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision making, so that researchers can make rigorous choices in study design and the research community can build on each other's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together and mutually shape the empirical science and computational technologies of human-AI decision making.