In attempts to "explain" the predictions of machine learning models, researchers have proposed hundreds of techniques for attributing predictions to features deemed important. While these attributions are often claimed to hold the potential to improve human "understanding" of the models, surprisingly little work explicitly evaluates progress towards this aspiration. In this paper, we conduct a crowdsourcing study in which participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews. They are challenged both to simulate the model on fresh reviews and to edit reviews with the goal of lowering the probability of the originally predicted class; successful manipulations lead to adversarial examples. During the training (but not the testing) phase, input spans are highlighted to communicate salience. Through our evaluation, we observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase than a no-explanation control. For the BERT-based classifier, popular local explanations do not improve participants' ability to reduce the model's confidence over the no-explanation case. Remarkably, when the explanation for the BERT model is given by the (global) attributions of a linear model trained to imitate the BERT model, people can effectively manipulate the model.
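A minimal sketch of the linear bag-of-words setting described above (toy data and function names are my own, not the study's materials): the classifier's per-token coefficients act as the explanation, and ranking a review's tokens by how strongly they push toward the predicted class is the kind of signal a participant could exploit to edit the review and lower the model's confidence.

```python
# Minimal sketch (not the study's code): a linear bag-of-words review classifier
# whose per-token coefficients play the role of the highlighted explanation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for genuine (1) vs. fake (0) hotel reviews.
reviews = ["the room was clean and the staff were friendly",
           "amazing amazing stay best hotel ever truly perfect",
           "quiet location comfortable bed decent breakfast",
           "best best hotel amazing experience perfect perfect"]
labels = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
clf = LogisticRegression().fit(X, labels)

def influential_tokens(review, top_k=3):
    """Rank the review's tokens by how strongly they push the model toward its
    predicted class; editing these is how one could try to lower that class's
    probability (and, if successful, produce an adversarial example)."""
    pred = clf.predict(vectorizer.transform([review]))[0]
    sign = 1.0 if pred == 1 else -1.0
    coefs = dict(zip(vectorizer.get_feature_names_out(), clf.coef_[0]))
    tokens = [t for t in review.split() if t in coefs]
    return sorted(set(tokens), key=lambda t: sign * coefs[t], reverse=True)[:top_k]

print(influential_tokens("amazing hotel perfect room and friendly staff"))
```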
While many methods purport to explain predictions by highlighting salient features, what goals these explanations serve and how they should be evaluated are often left unstated. In this work, we introduce a framework that quantifies the value of an explanation via the accuracy gain it confers on a student model trained to simulate a teacher model. Crucially, the explanations are available to the student during training, but are not available at test time. Compared to prior proposals, our approach is less easily gamed and enables principled, automatic, model-agnostic evaluation of attributions. Using our framework, we compare numerous attribution methods for text classification and question answering, and observe quantitative differences that are consistent (to a moderate to high degree) across different student model architectures and learning strategies.
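A minimal sketch of this evaluation protocol under assumed simplifications: a linear teacher, and a deliberately simple regularization-based stand-in for however attributions are injected during student training. The score is the student's simulation accuracy on unexplained test data, relative to a no-explanation control.

```python
# Minimal sketch (assumed setup, not the paper's code) of the teacher-student
# protocol: attributions are used only at training time, and their value is the
# simulation-accuracy gain over a control student trained without them.
import numpy as np

def train_student(X, y, attribution=None, lam=1.0, lr=0.1, steps=500):
    """Logistic-regression student trained to simulate the teacher's labels y.
    If an attribution vector is given, a train-time penalty pulls the student's
    weights toward it (a crude stand-in for richer injection mechanisms);
    nothing extra is used at test time."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y)
        if attribution is not None:
            grad = grad + lam * (w - attribution)
        w -= lr * grad
    return w

def simulation_accuracy(w, X, y):
    return (((X @ w) > 0).astype(int) == y).mean()

# Toy teacher: a fixed linear rule; its weights double as an ideal attribution.
rng = np.random.default_rng(1)
X_tr, X_te = rng.normal(size=(200, 10)), rng.normal(size=(500, 10))
w_teacher = rng.normal(size=10)
y_tr, y_te = (X_tr @ w_teacher > 0).astype(int), (X_te @ w_teacher > 0).astype(int)

w_explained = train_student(X_tr, y_tr, attribution=w_teacher)
w_control = train_student(X_tr, y_tr, attribution=None)
print("accuracy gain conferred by the attribution:",
      simulation_accuracy(w_explained, X_te, y_te) - simulation_accuracy(w_control, X_te, y_te))
```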
As AI systems demonstrate increasingly strong predictive performance, their adoption has grown across many domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often undesirable because of safety, ethical, and legal concerns, while fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision making must adopt empirical approaches to form a foundational understanding of how humans interact and collaborate with AI to make decisions. To invite and help structure research efforts towards understanding and improving human-AI decision making, we survey the recent literature of empirical human-subject studies on this topic. We summarize the study design choices made in over 100 papers along three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in current practices of the field, and make a list of recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision making, so that researchers can make rigorous choices in study design, and the research community can build on one another's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together, mutually shaping the empirical science and computational technologies for human-AI decision making.
A growing body of research runs human-subject evaluations to study whether providing users with explanations of machine learning models can help them with practical, real-world use cases. However, running user studies is challenging and costly, so each study typically evaluates only a limited number of different settings; for example, studies often evaluate only a few arbitrarily selected explanation methods. To address these challenges and aid user study design, we introduce Use-Case-Grounded Simulated Evaluations (SimEvals). A SimEval involves training an algorithmic agent that takes as input the information content (such as model explanations) that would be presented to each participant in a human-subject study, and predicts the answer to the use case of interest. The algorithmic agent's test-set accuracy provides a measure of how predictive the information content is for the downstream use case. We run a comprehensive evaluation on three real-world use cases (forward simulation, model debugging, and counterfactual reasoning) to demonstrate that SimEvals can effectively identify which explanation methods will help humans on each use case. These results provide evidence that SimEvals can be used to efficiently screen an important set of user study design decisions, such as selecting which explanations should be presented to users, before running a potentially costly user study.
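A sketch of the SimEval idea under illustrative assumptions (synthetic information content and an off-the-shelf agent, not the authors' setup): the agent is trained on whatever content a study participant would see, and its held-out accuracy is used to screen candidate explanation methods before committing to a user study.

```python
# SimEval-style sketch (illustrative only): an algorithmic agent is trained on
# the information content a participant would see, and its held-out accuracy
# screens which explanation method is worth evaluating with humans.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def simeval(information_content, use_case_labels, seed=0):
    """information_content: one row per simulated study instance, e.g. a model
    prediction concatenated with an explanation vector.
    use_case_labels: the ground-truth answer for the use case of interest
    (e.g. "is the model relying on a spurious feature?" in a debugging study)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        information_content, use_case_labels, test_size=0.3, random_state=seed)
    agent = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
    return agent.score(X_te, y_te)   # predictiveness of this content for the use case

# Example: compare two candidate explanation methods before running a study.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
useful_expl = labels[:, None] + rng.normal(scale=0.5, size=(500, 8))   # carries signal
noisy_expl = rng.normal(size=(500, 8))                                 # carries none
print("method A:", simeval(useful_expl, labels))
print("method B:", simeval(noisy_expl, labels))
```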
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 85 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, fairness, usability, and human-AI team performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
Debugging machine learning models is hard because the bug usually involves the training data and the learning process. This becomes even harder when we have no clue about how the model actually works. In this survey, we review papers that exploit explanations to enable humans to provide feedback and debug NLP models. We call this problem explanation-based human debugging (EBHD). In particular, we categorize and discuss existing work along three dimensions of EBHD (the bug context, the workflow, and the experimental setting), compile findings on how EBHD components affect feedback providers, and highlight open problems that could be future research directions.
As machine learning is increasingly applied to high-impact, high-risk domains, many new methods have been proposed to make AI models more human-interpretable. Despite the recent growth of interpretability work, there is a lack of systematic evaluation of the proposed techniques. In this work, we propose HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework for diverse interpretability methods in computer vision; to the best of our knowledge, this is the first work of its kind. We argue that human studies should be the gold standard for properly evaluating how interpretable a method is to human users. While human studies are often avoided due to challenges associated with cost, study design, and cross-method comparison, we describe how our framework mitigates these issues and conduct IRB-approved studies of four methods that represent the diversity of interpretability work: GradCAM, BagNet, ProtoPNet, and ProtoTree. Our results show that explanations (regardless of whether they are actually correct) engender human trust, yet are not distinct enough for users to distinguish between correct and incorrect predictions. Finally, we also release our framework to enable future studies and to encourage more human-centered approaches to interpretability.
A multitude of explainability methods and associated theoretical evaluation scores have been proposed. However, it remains unclear (1) how useful these methods are in real-world scenarios and (2) how well the theoretical measures predict how useful the methods actually are for humans. To fill this gap, we conducted human psychophysics experiments at scale to evaluate the ability of human participants (n = 1,150) to leverage representative attribution methods to learn to predict the decisions of different image classifiers. Our results show that the theoretical measures used to score explainability methods poorly reflect the practical usefulness of individual attribution methods in real-world scenarios. Furthermore, the degree to which individual attribution methods help human participants predict classifiers' decisions varies widely across categorization tasks and datasets. Overall, our results highlight fundamental challenges for the field, suggesting a critical need to develop better explanation methods and to deploy human-centered evaluation approaches. We make the code of our framework available to ease the systematic evaluation of novel explainability methods.
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains and proposed legal compliance. While over 100 counterfactual methods exist, claiming to generate plausible explanations akin to those preferred by people, few have actually been tested on users ($\sim7\%$). So, the psychological validity of these counterfactual algorithms for effective XAI for image data is not established. This issue is addressed here using a novel methodology that (i) gathers ground truth human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated ground-truth explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not "minimally edit" images when generating counterfactual explanations. Instead, they make larger, "meaningful" edits that better approximate prototypes in the counterfactual class.
Expert decision makers are starting to rely on data-driven automated agents to assist them with various tasks. For this collaboration to perform properly, the human decision maker must have a mental model of when to rely on the agent and when not to. In this work, we aim to ensure that human decision makers learn a valid mental model of the agent's strengths and weaknesses. To accomplish this goal, we propose an exemplar-based teaching strategy in which the human solves the task with the help of the agent and tries to formulate a set of guidelines about when to defer and when not to. We present a novel parameterization of the human's mental model of the AI that applies a nearest-neighbor rule in local regions surrounding the teaching examples. Using this model, we derive a near-optimal strategy for selecting a representative teaching set. We validate the benefits of our teaching strategy on a multi-hop question answering task with crowd workers, and find that when workers draw the right lessons from the teaching stage, their task performance improves. We also validate our method on a set of synthetic experiments.
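A minimal sketch of the mental-model parameterization, with hypothetical class and argument names; the near-optimal teaching-set selection procedure described in the abstract is omitted here.

```python
# Minimal sketch (assumptions noted in the lead-in): the human's mental model of
# when to rely on the AI is parameterized by a nearest-neighbor rule applied
# only inside a local region around each teaching example.
import numpy as np

class NearestNeighborMentalModel:
    def __init__(self, teaching_X, agent_was_correct, radius=1.0):
        self.teaching_X = np.asarray(teaching_X, dtype=float)
        self.agent_was_correct = np.asarray(agent_was_correct, dtype=bool)
        self.radius = radius

    def defer_to_agent(self, x):
        """Defer iff the nearest teaching example is close enough and the agent
        was correct on it; otherwise fall back to not deferring."""
        dists = np.linalg.norm(self.teaching_X - np.asarray(x, dtype=float), axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] > self.radius:
            return False                      # outside every local region
        return bool(self.agent_was_correct[nearest])

# Toy usage: one teaching example where the agent succeeded, one where it
# failed; queries near each example inherit the corresponding lesson.
mm = NearestNeighborMentalModel([[0.0, 0.0], [5.0, 5.0]], [True, False], radius=2.0)
print(mm.defer_to_agent([0.3, -0.2]))   # True: near a success example
print(mm.defer_to_agent([5.2, 4.9]))    # False: near a failure example
```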
End-to-end neural NLP architectures are notoriously difficult to understand, which has motivated numerous efforts toward model explainability in recent years. An essential principle of model explanation is faithfulness: an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness and its significance for explainability. We then introduce recent advances in faithful explanation by grouping methods into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, strengths, and weaknesses. Finally, we discuss all of the above methods in terms of their common virtues and limitations, and reflect on directions for future work toward faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field and a foundation for further exploration. For users who wish to better understand their own models, this survey serves as an introductory manual that helps with choosing the most suitable explanation method.
Feedback in creativity support tools can help crowd workers improve their ideations. However, current feedback methods require human assessment from facilitators or peers, which does not scale to large crowds. We propose Interpretable Directed Diversity to automatically predict quality and diversity scores for ideations and to provide AI explanations (attribution, contrastive attribution, and counterfactual suggestions) as feedback on why an ideation scored low and how to obtain higher scores. These explanations provide multi-faceted feedback as users iteratively improve their ideations. We conducted formative and controlled user studies to understand the usage and usefulness of explanations for improving ideation diversity and quality. Users appreciated that the explanation feedback helped focus their efforts and provided directions for improvement. As a result, explanations improved diversity compared to no feedback or feedback with predictions only. Our approach thus opens up opportunities for explainable AI to provide explainable, rich feedback for iterative crowd ideation and creativity support tools.
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
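A compact sketch of the core idea from this abstract, not the released lime package: perturb a text input by dropping tokens, weight each perturbation by its proximity to the original, and fit a weighted linear surrogate to the black box's probabilities (Ridge here stands in for the sparse linear model; the submodular-pick step for choosing representative examples is not shown).

```python
# LIME-style sketch: fit an interpretable linear model locally around one
# prediction of an arbitrary black-box text classifier.
import numpy as np
from sklearn.linear_model import Ridge

def lime_text(predict_proba, tokens, n_samples=500, kernel_width=0.75, seed=0):
    """predict_proba: black-box function mapping a string to P(class of interest).
    Returns (token, weight) pairs approximating the model near this input."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(n_samples, len(tokens)))       # which tokens to keep
    masks[0] = 1                                                    # include the original text
    texts = [" ".join(t for t, keep in zip(tokens, m) if keep) for m in masks]
    y = np.array([predict_proba(t) for t in texts])
    distance = 1.0 - masks.mean(axis=1)                             # fraction of tokens removed
    weights = np.exp(-(distance ** 2) / kernel_width ** 2)          # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(masks, y, sample_weight=weights)
    return sorted(zip(tokens, surrogate.coef_), key=lambda p: -abs(p[1]))
```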
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI, even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, demonstrating empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find that costs such as task difficulty (Study 1), explanation difficulty (Study 2, 3), and benefits such as monetary compensation (Study 4) affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
Modern machine learning models are opaque, and as a result there is a burgeoning academic subfield on methods that explain these models' behavior. However, what is the precise goal of providing such explanations, and how can we demonstrate that explanations achieve this goal? Some research argues that explanations should help teach a student (either human or machine) to simulate the model being explained, and that the quality of explanations can be measured by the simulation accuracy of students on unexplained examples. In this work, leveraging meta-learning techniques, we extend this idea to improve the quality of the explanations themselves, specifically by optimizing explanations such that student models more effectively learn to simulate the original model. We train models on three natural language processing and computer vision tasks, and find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods. Through human annotations and a user study, we further find that these learned explanations more closely align with how humans would explain the required decisions in these tasks. Our code is available at https://github.com/coderpat/learning-scaffold
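A simplified sketch of the bi-level idea, not the code at the repository above: the inner loop is collapsed to a single differentiable SGD step, the teacher and student are linear, and explanations are injected by reweighting inputs. All of these are my simplifications of the meta-learning setup the abstract describes.

```python
# Bi-level sketch: meta-learn an explainer so that a student trained with its
# explanations simulates the teacher better on unexplained inputs.
import torch
import torch.nn.functional as F

d = 16
teacher = torch.nn.Linear(d, 2)                      # frozen black box to be simulated
explainer = torch.nn.Linear(d, d)                    # meta-learned: emits per-feature saliency
outer_opt = torch.optim.Adam(explainer.parameters(), lr=1e-2)

for step in range(200):
    x = torch.randn(64, d)
    with torch.no_grad():
        teacher_labels = teacher(x).argmax(dim=1)

    # Inner loop (one differentiable SGD step): a fresh linear student is
    # fit on explanation-weighted inputs to match the teacher's labels.
    w_student = torch.zeros(d, 2, requires_grad=True)
    saliency = torch.sigmoid(explainer(x))
    inner_loss = F.cross_entropy((x * saliency) @ w_student, teacher_labels)
    grad_w, = torch.autograd.grad(inner_loss, w_student, create_graph=True)
    w_adapted = w_student - 0.5 * grad_w

    # Outer objective: the adapted student must simulate the teacher on new
    # inputs without explanations; its loss is backpropagated to the explainer.
    x_val = torch.randn(64, d)
    with torch.no_grad():
        val_labels = teacher(x_val).argmax(dim=1)
    outer_loss = F.cross_entropy(x_val @ w_adapted, val_labels)

    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```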
In many real-world settings, successful human-AI collaboration requires humans to productively integrate complementary sources of information into AI-informed decisions. In practice, however, human decision makers often lack an understanding of what information an AI model has access to relative to themselves. There are few available guidelines on how to effectively communicate unobservables: features that may influence the outcome but that are unavailable to the model. In this work, we conducted an online experiment to understand whether and how explicitly communicating potentially relevant unobservables influences how people integrate model outputs and unobservables when making predictions. Our findings indicate that prompts about unobservables can change how humans integrate model outputs and unobservables, but do not necessarily improve performance. Moreover, the effect of these prompts can vary depending on the decision maker's prior domain expertise. We conclude by discussing implications for future research and for the design of AI-based decision support tools.
Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks. However, because most models are now powered by opaque deep neural networks, it can be difficult for developers to understand why a model reaches a conclusion and how to act on its prediction. Motivated by this problem, this paper explores counterfactual explanations for models of source code. Such counterfactual explanations constitute minimal changes to the source code under which the model "changes its mind". We integrate counterfactual explanation generation with real-world models of source code. We describe the considerations that affect the ability to find realistic and plausible counterfactual explanations, as well as the usefulness of such explanations to the users of the model. In a series of experiments, we investigate the efficacy of our approach on three different models, each based on a BERT-like architecture operating over source code.
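A generic greedy-search sketch of the "minimal change that flips the prediction" notion above; the candidate-proposal step (e.g. replacements suggested by a language model over code) and the paper's actual generation procedure are abstracted away behind hypothetical arguments.

```python
# Generic sketch (not the paper's algorithm): greedily search for a small
# token-level edit to a code snippet under which the classifier "changes its mind".
def greedy_counterfactual(predict_proba, tokens, candidates_per_position):
    """predict_proba: maps a token list to the probability of the originally
    predicted class. candidates_per_position: position -> replacement tokens
    (e.g. proposals from some perturbation model). Greedily commits, position by
    position, the replacement that most lowers that probability, stopping once
    the prediction flips."""
    edited, edits = list(tokens), []
    for pos, candidates in candidates_per_position.items():
        best = min(candidates, key=lambda c: predict_proba(edited[:pos] + [c] + edited[pos + 1:]))
        if predict_proba(edited[:pos] + [best] + edited[pos + 1:]) < predict_proba(edited):
            edited[pos] = best
            edits.append((pos, best))
        if predict_proba(edited) < 0.5:
            return edited, edits           # counterfactual found: the model flipped
    return None, edits                     # no counterfactual within the edit budget
```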
Self-tracking can improve people's awareness of their unhealthy behaviors and provide insights toward behavior change. Prior work has explored how self-trackers reflect on their logged data, but it remains unclear how much they learn from the tracking feedback and which information is more useful. Indeed, the feedback can still be overwhelming, and making it concise can improve learning by increasing focus and reducing the interpretation burden. To simplify feedback, we propose a self-tracking feedback saliency framework to define what specific information to provide as feedback, why those details matter, and how to present them (manually elicited or automatically provided). We collected survey and meal-image data from a field study on mobile food tracking and implemented SalienTrack, a machine learning model that predicts whether a user will learn from a tracked event. Using explainable AI (XAI) techniques, SalienTrack identifies which features of the event are most salient, explains why they lead to a positive learning outcome, and prioritizes how the feedback should be presented based on attribution scores. We present use cases and conducted a formative study to show the usability and usefulness of SalienTrack. We discuss the implications of saliency in self-tracking and how adding model explainability expands the opportunities for improving feedback experiences.
We investigate whether three types of post hoc model explanations--feature attribution, concept activation, and training point ranking--are effective for detecting a model's reliance on spurious signals in the training data. Specifically, we consider the scenario where the spurious signal to be detected is unknown, at test-time, to the user of the explanation method. We design an empirical methodology that uses semi-synthetic datasets along with pre-specified spurious artifacts to obtain models that verifiably rely on these spurious training signals. We then provide a suite of metrics that assess an explanation method's reliability for spurious signal detection under various conditions. We find that the post hoc explanation methods tested are ineffective when the spurious artifact is unknown at test-time especially for non-visible artifacts like a background blur. Further, we find that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals even when the model being explained does not rely on spurious artifacts. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model's reliance on spurious signals.
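An illustrative sketch of the semi-synthetic construction described above, transplanted to a tabular toy problem rather than the paper's vision models and metrics: inject a known artifact that is tightly correlated with the label, fit a model that comes to rely on it, and ask whether an attribution method surfaces that artifact.

```python
# Sketch of the semi-synthetic evaluation pattern: a pre-specified spurious
# feature is injected so the trained model verifiably relies on it, and the
# attribution method is judged by whether it flags that feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)    # true signal: feature 0
X_spurious = X.copy()
X_spurious[:, -1] = y + 0.1 * rng.normal(size=n)             # artifact tied to the label

model = LogisticRegression(max_iter=1000).fit(X_spurious, y)

# A simple attribution (|coefficient| as a global importance proxy); a reliable
# method should rank the injected artifact (the last feature) near the top.
importance = np.abs(model.coef_[0])
ranking = np.argsort(-importance)
print("rank of spurious feature:", int(np.where(ranking == d - 1)[0][0]) + 1)
```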
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FaccT), and unbiasedness. Recently, techniques in Explainable Artificial Intelligence (XAI) are attracting considerable attention, and have tremendously helped Machine Learning (ML) engineers in understanding AI models. However, at the same time, we started to witness the emerging need beyond XAI among AI communities; based on the insights learned from XAI, how can we better empower ML engineers in steering their DNNs so that the model's reasonableness and performance can be improved as intended? This article provides a timely and extensive literature overview of the field Explanation-Guided Learning (EGL), a domain of techniques that steer the DNNs' reasoning process by adding regularization, supervision, or intervention on model explanations. In doing so, we first provide a formal definition of EGL and its general learning paradigm. Secondly, an overview of the key factors for EGL evaluation, as well as summarization and categorization of existing evaluation procedures and metrics for EGL are provided. Finally, the current and potential future application areas and directions of EGL are discussed, and an extensive experimental study is presented aiming at providing comprehensive comparative studies among existing EGL models in various popular application domains, such as Computer Vision (CV) and Natural Language Processing (NLP) domains.
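A minimal sketch of one common EGL instantiation, a "right for the right reasons"-style gradient penalty; the function name and mask convention are assumptions for illustration, not taken from the survey. The task loss is augmented with supervision on the model's input-gradient explanation.

```python
# EGL-style sketch: add a supervision term on the model's explanation (here,
# input gradients) alongside the ordinary task loss.
import torch

def egl_loss(model, x, y, explanation_mask, lam=1.0):
    """explanation_mask: 1 where a feature is known to be irrelevant; the
    regularizer penalizes input-gradient mass on those features, one simple
    instance of the regularization-based EGL techniques surveyed above."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = torch.nn.functional.cross_entropy(logits, y)
    grads, = torch.autograd.grad(task_loss, x, create_graph=True)
    explanation_loss = (explanation_mask * grads ** 2).sum()
    return task_loss + lam * explanation_loss
```

In a training loop this would simply replace the plain cross-entropy call; the survey also covers supervision and intervention variants beyond this regularization form.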