Explaining the decisions of an Artificial Intelligence (AI) model is increasingly critical in many real-world, high-stakes applications. Hundreds of papers have either proposed new feature attribution methods or discussed or harnessed these tools in their work. However, despite humans being the target end-users, most attribution methods were only evaluated on proxy, automatic evaluation metrics (Zhang et al., 2018; Zhou et al., 2016; Petsiuk et al., 2018). In this paper, we conduct the first user study to measure the effectiveness of attribution maps in assisting humans on ImageNet classification and Stanford Dogs fine-grained classification, and on images that are either natural or adversarial (i.e., contain adversarial perturbations). Overall, feature attribution is no more effective than showing humans the nearest training-set examples. On the harder task of fine-grained dog classification, presenting attribution maps to humans does not help, but instead hurts the performance of human-AI teams compared to AI alone. Importantly, we found automatic attribution-map evaluation measures to correlate poorly with actual human-AI team performance. Our findings encourage the community to rigorously test their methods on downstream human-in-the-loop applications and to rethink existing evaluation metrics.
In many high-stakes applications, Artificial Intelligence (AI) predictions are increasingly important, even necessary, while humans remain the final decision-makers. In this work, we propose two novel architectures for self-interpretable image classifiers that first explain, and then predict (as opposed to post-hoc explanations) by harnessing the visual correspondences between a query image and exemplars. Our models consistently improve (by 1 to 4 points) on out-of-distribution (OOD) datasets while performing marginally worse (by 1 to 2 points) than ResNet-50 and a $k$-nearest-neighbor (kNN) classifier on in-distribution tests. Via large-scale human studies on ImageNet and CUB, our correspondence-based explanations are found to be more useful to users than kNN explanations. Our explanations help users more accurately reject the AI's wrong decisions than all other tested methods. Interestingly, we show for the first time that it is possible to achieve complementary human-AI team accuracy (i.e., higher than either AI-alone or human-alone) on the ImageNet and CUB image classification tasks.
As machine learning is increasingly applied to high-impact, high-risk domains, many new methods have been proposed with the goal of making AI models more human-interpretable. Despite the recent growth of interpretability work, there is a lack of systematic evaluation of the proposed techniques. In this work, we introduce HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework for diverse interpretability methods in computer vision; to the best of our knowledge, this is the first work of its kind. We argue that human studies should be the gold standard in properly evaluating how interpretable a method is to human users. While human studies are often avoided due to challenges associated with cost, study design, and cross-method comparison, we describe how our framework mitigates these issues, and we conduct IRB-approved studies of four methods that represent the diversity of interpretability work: GradCAM, BagNet, ProtoPNet, and ProtoTree. Our results suggest that explanations (regardless of whether they are actually correct) engender human trust, yet are not distinct enough for users to distinguish between correct and incorrect predictions. Lastly, we open-source our framework to enable future studies and encourage more human-centered approaches to interpretability.
A multitude of explainability methods and associated theoretical evaluation scores have been proposed. However, it remains unclear: (1) how useful these methods are in real-world scenarios, and (2) how well theoretical measures predict the usefulness of these methods when actually used by a human. To fill this gap, we conducted psychophysics experiments at scale to evaluate the ability of human participants (n = 1,150) to leverage representative attribution methods to learn to predict the decisions of different image classifiers. Our results demonstrate that theoretical measures used to score explainability methods poorly reflect the practical usefulness of individual attribution methods in real-world scenarios. Furthermore, the degree to which individual attribution methods helped human participants predict classifiers' decisions varied widely across classification tasks and datasets. Overall, our results highlight fundamental challenges for the field, suggesting a critical need to develop better explainability methods and to deploy human-centered evaluation approaches. We make the code of our framework available to ease the systematic evaluation of novel explainability methods.
We propose a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach, Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say 'dog' in a classification network or a sequence of words in a captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization.
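The core Grad-CAM computation described in the abstract (pool the target-class gradients into per-channel weights, take a weighted sum of the final conv layer's activation maps, then apply a ReLU) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' reference implementation; the function name and input conventions are assumptions:

```python
import numpy as np

def grad_cam_map(activations, gradients):
    """Combine final-conv-layer activations with the gradients of a target
    class score to form a coarse Grad-CAM localization map.

    activations, gradients: arrays of shape (C, H, W), holding the feature
    maps of the final convolutional layer and the gradients of the target
    concept's score with respect to those maps.
    """
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(1, 2))                          # shape (C,)
    # Weighted sum of the activation maps, then ReLU to keep only features
    # with a positive influence on the target concept.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for overlaying on the input image.
    peak = cam.max()
    return cam / peak if peak > 0 else cam
```

In a real pipeline the activations and gradients would be captured with framework hooks during a forward and backward pass, and the resulting map upsampled to the input resolution.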
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground-truth annotation.
Expert decision-makers are starting to rely on data-driven automated agents to assist them with various tasks. For this collaboration to perform properly, the human decision-maker must have a mental model of when and when not to rely on the agent. In this work, we aim to ensure that human decision-makers learn a valid mental model of the agent's strengths and weaknesses. To accomplish this, we propose an exemplar-based teaching strategy in which humans solve tasks with the agent's help and try to formulate a set of guidelines of when and when not to defer. We propose a novel parameterization of the human's mental model of the AI that applies a nearest-neighbor rule in local regions surrounding the teaching examples. Using this model, we derive a near-optimal strategy for selecting a representative teaching set. We validate the benefits of our teaching strategy on a multi-hop question-answering task with crowd workers, and find that when workers draw the right lessons from the teaching stage, their task performance improves. We also validate our method on a set of synthetic experiments.
Explainable Artificial Intelligence (XAI) has in recent years become a well-suited framework for generating human-understandable explanations of 'black-box' models. In this paper, we present a novel XAI visual explanation algorithm called the Similarity Difference and Uniqueness (SIDU) method, which can effectively localize the entire object regions responsible for a prediction. The robustness and effectiveness of the SIDU algorithm are analyzed through various computational and human-subject experiments. In particular, the SIDU algorithm is assessed using three different types of evaluations (application-grounded, human-grounded, and functionally-grounded) to demonstrate its superior performance. The robustness of SIDU is further studied in the presence of adversarial attacks on 'black-box' models to better understand its performance. Our code is available at: https://github.com/satyamahesh84/SIDU_XAI_CODE.
We propose CX-ToM, short for counterfactual explanations with theory-of-mind, a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN). In contrast to current XAI methods for generating explanations, we pose explanation as an iterative communication process, i.e., a dialog, between the machine and the human user. More concretely, our CX-ToM framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user. To do this, we use Theory of Mind (ToM), which helps us explicitly model the human's intention, the machine's mind as inferred by the human, and the human's mind as inferred by the machine. Moreover, most state-of-the-art XAI frameworks provide attention-based (or heatmap-based) explanations. In our work, we show that these attention-based explanations are not sufficient for increasing human trust in the underlying CNN model. In CX-ToM, we instead use counterfactual explanations called fault-lines, defined as follows: given an input image I for which a CNN classification model M predicts class c_pred, a fault-line identifies the minimal semantic-level features (e.g., the stripes on a zebra, the ears of a dog), referred to as explainable concepts, that need to be added to or deleted from I in order to change M's classification of I to another specified class c_alt. We argue that, due to the iterative, conceptual, and counterfactual nature of CX-ToM explanations, our framework is practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, demonstrating that CX-ToM significantly outperforms state-of-the-art explainable AI models.
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FaccT), and unbiasedness. Recently, techniques in Explainable Artificial Intelligence (XAI) are attracting considerable attention, and have tremendously helped Machine Learning (ML) engineers in understanding AI models. However, at the same time, we started to witness the emerging need beyond XAI among AI communities; based on the insights learned from XAI, how can we better empower ML engineers in steering their DNNs so that the model's reasonableness and performance can be improved as intended? This article provides a timely and extensive literature overview of the field Explanation-Guided Learning (EGL), a domain of techniques that steer the DNNs' reasoning process by adding regularization, supervision, or intervention on model explanations. In doing so, we first provide a formal definition of EGL and its general learning paradigm. Secondly, an overview of the key factors for EGL evaluation, as well as summarization and categorization of existing evaluation procedures and metrics for EGL are provided. Finally, the current and potential future application areas and directions of EGL are discussed, and an extensive experimental study is presented aiming at providing comprehensive comparative studies among existing EGL models in various popular application domains, such as Computer Vision (CV) and Natural Language Processing (NLP) domains.
Explainable artificial intelligence (XAI) is essential for enabling clinical users to get informed decision support from AI and comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure the explanation technique is both technically sound and clinically useful, but specific support is lacking to achieve this goal. To bridge the research gap, we propose the Clinical XAI Guidelines that consist of five criteria a clinical XAI needs to be optimized for. The guidelines recommend choosing an explanation form based on Guideline 1 (G1) Understandability and G2 Clinical relevance. For the chosen explanation form, its specific XAI technique should be optimized for G3 Truthfulness, G4 Informative plausibility, and G5 Computational efficiency. Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly. Sixteen commonly-used heatmap XAI techniques were evaluated and found to be insufficient for clinical use due to their failure in G3 and G4. Our evaluation demonstrated the use of Clinical XAI Guidelines to support the design and evaluation of clinically viable XAI.
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 85 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, fairness, usability, and human-AI team performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
Recent work has shown the potential benefit of selective prediction systems that can learn to defer to a human when the AI's predictions are unreliable, particularly for improving the reliability of AI systems in high-stakes applications like healthcare. However, most prior work assumes that human behavior remains unchanged when humans solve a prediction task as part of a human-AI team rather than on their own. We show that this is not the case by performing experiments to quantify human-AI interaction in the context of selective prediction. In particular, we study the impact of communicating different types of information to humans about the AI system's decision to defer. Using real-world conservation data and a selective prediction system that improves expected accuracy over that of the human or AI system working alone, we show that this messaging has a significant impact on the accuracy of human judgments. Our results study two components of the messaging strategy: (1) whether humans are informed about the AI system's prediction, and (2) whether they are informed about the selective prediction system's decision to defer. By manipulating these messaging components, we show that human performance can be significantly boosted by informing the human of the decision to defer while not revealing the AI's prediction. We therefore show that it is vital to consider how the decision to defer is communicated to the human when designing selective prediction systems, and that the composite accuracy of the human-AI team must be carefully evaluated using a human-in-the-loop framework.
As AI systems demonstrate increasingly strong predictive performance, their adoption has grown in many domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often undesirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision-making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision-making must embrace empirical approaches to form a foundational understanding of how humans interact and collaborate with AI to make decisions. To invite and help structure research efforts toward understanding and improving human-AI decision-making, we survey recent literature on empirical human-subject studies of this topic. We summarize the study design choices made in over 100 papers along three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in the field's current practices, and make recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research space of human-AI decision-making, so that researchers can make rigorous choices in study design and the research community can build on each other's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together and mutually shape the empirical science and computational technologies of human-AI decision-making.
Concept-based interpretability methods aim to explain a deep neural network model's predictions using a predefined set of semantic concepts. These methods evaluate a trained model on a new, 'probe' dataset and correlate the model's predictions with the visual concepts labeled in that dataset. Despite their popularity, they suffer from limitations that are not well understood and articulated in the literature. In this work, we analyze three commonly overlooked factors in concept-based explanations. First, the choice of the probe dataset has a profound impact on the generated explanations. Our analysis reveals that different probe datasets may lead to very different explanations, and suggests that the explanations do not generalize outside the probe dataset. Second, we find that concepts in the probe dataset are often less salient and harder to learn than the classes they claim to explain, calling the correctness of the explanations into question. We advocate that only visually salient concepts be used in concept-based explanations. Finally, while existing methods use hundreds or even thousands of concepts, our human studies reveal a much stricter upper bound of 32 concepts or fewer, beyond which the explanations are much less practically useful. We make suggestions for the future development and analysis of concept-based interpretability methods. Code for our analysis and user interface can be found at \url{https://github.com/princetonvisualai/OverlookedFactors}.
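The general recipe this abstract critiques (correlating a model's predictions on a probe dataset with that dataset's concept annotations) can be sketched as follows. This is an illustrative simplification with assumed names and binary labels; actual concept-based methods differ in detail:

```python
import numpy as np

def concept_scores(class_preds, concept_labels):
    """Score each concept by how well it correlates with a class's predictions.

    class_preds: (N,) array of 0/1 model predictions for one class on the
    probe dataset; concept_labels: (N, K) 0/1 matrix of concept annotations.
    Returns a length-K array of Pearson correlations, one per concept.
    """
    return np.array([
        np.corrcoef(class_preds, concept_labels[:, k])[0, 1]
        for k in range(concept_labels.shape[1])
    ])
```

The paper's first finding then corresponds to the observation that these scores can change substantially when `concept_labels` comes from a different probe dataset.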
Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between instances or raw features, which remain difficult to interpret because they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI perceptual processing framework and the RexNet model for relatable explainable AI with contrastive saliency, counterfactual synthesis, and contrastive cues. We investigated the application of vocal emotion recognition, implementing a modular multi-task deep neural network to predict the emotion of speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful, and were further enhanced by semantic cues, but saliency explanations were not. This work provides insights into providing and evaluating relatable, contrastive explainable AI for perception applications.
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI, even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, demonstrating empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find that costs such as task difficulty (Study 1), explanation difficulty (Study 2, 3), and benefits such as monetary compensation (Study 4) affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
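The cost-benefit framework above reduces to a simple comparison of net utilities. The sketch below is one illustrative reading of that formalization (the function and parameter names are assumptions, not the paper's notation):

```python
def chooses_to_engage(task_benefit, task_cost, reliance_benefit, reliance_cost):
    """Strategic-choice sketch: a person engages with the task (e.g., verifies
    the AI's prediction via its explanation) only when the net utility of
    engaging exceeds the net utility of simply relying on the AI."""
    return (task_benefit - task_cost) > (reliance_benefit - reliance_cost)
```

Under this reading, the paper's manipulations (task difficulty, explanation difficulty, monetary compensation) shift these cost and benefit terms, which in turn predicts when overreliance rises or falls.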
Collaborative human-AI decision-making efforts aim to achieve team performance beyond that of humans or AI alone. However, many factors affect the success of human-AI teams, including a user's domain expertise, their mental model of the AI system, their trust in its recommendations, and more. This work examines users' interactions with three simulated algorithmic models, all with similar accuracy but tuned differently in terms of their true-positive and true-negative rates. Our study examined user performance in a non-trivial blood-vessel labeling task in which participants indicated whether a given vessel was flowing or stalled. Our results show that while an AI assistant's recommendations can aid user decision-making, factors such as users' baseline performance relative to the AI and the complementary tuning of the AI's error types significantly affect overall team performance. Novice users improved, but not to the level of the AI's accuracy. Highly skilled users were generally able to discern when to follow the AI's recommendations and typically maintained or improved their performance. Participants whose baseline performance was similar to the AI's varied the most in their response to AI recommendations. In addition, we found that users' perception of the AI's performance relative to their own also significantly affected whether their accuracy improved when given AI recommendations. This work provides insights into the complexity of factors involved in human-AI collaboration and offers recommendations on how to develop human-centered AI algorithms that complement users in decision-making tasks.
This paper introduces the Confidence Optimization (CO) score to directly measure the contribution of a heatmap/saliency map to a model's classification performance. Common heatmap-generation methods used in the explainable AI (XAI) community are tested through what we call augmentative explanation (AX). We find a surprising gap in the CO scores of these heatmap methods. This gap may serve as a novel indicator of the correctness of deep neural network (DNN) predictions. We further introduce the generative AX (GAX) method to produce saliency maps that attain high CO scores. Using GAX, we also qualitatively demonstrate shortcomings of DNN architectures.
We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% -15% on CIFAR-10 and 11% -14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.