Explainable AI (XAI) is a promising means of supporting human-AI collaboration for high-stakes visual detection tasks, such as damage detection from satellite imagery, where fully automated approaches are unlikely to be perfectly safe and reliable. However, most existing XAI techniques are not informed by the specific, task-dependent explanation needs of humans. Thus, we take a first step toward understanding what explanations humans require in damage detection tasks. We conducted an online crowdsourced study to understand how people explain their own assessments when evaluating the severity of building damage based on satellite imagery. Through this study with 60 crowd workers, we identify six major strategies that humans use to explain their visual damage assessments. We present the implications of our findings for the design of XAI methods for such visual detection contexts, and discuss opportunities for future research.
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel-level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics indicating solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than original xView2 solutions. The best-performing xView2 networks' performances dropped noticeably in our new limited/local damage detection task. The common causes of failure observed are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. Thus, we release our augmented version of the dataset with additional object-level scoring metrics (https://gitlab.kitware.com/dennis.melamed/xfbd) to test independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models which improve independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully-solved problem, and we invite others to use and build on our dataset augmentations and metrics.
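To make the object-level evaluation idea concrete, here is a minimal sketch of what per-building damage scoring could look like: each building is collapsed to one ground-truth and one predicted damage label, and a macro F1 is computed over damage classes. The function name, class names, and the macro-F1 choice are illustrative assumptions, not the official xFBD metric definition.

```python
def building_level_f1(gt_labels, pred_labels,
                      classes=("no-damage", "minor", "major", "destroyed")):
    """Macro F1 over per-building damage labels.

    gt_labels / pred_labels: dicts mapping building_id -> damage class.
    Buildings missing from pred_labels count as misses for their true class.
    """
    f1s = []
    for c in classes:
        tp = sum(1 for b, g in gt_labels.items() if g == c and pred_labels.get(b) == c)
        fp = sum(1 for b, p in pred_labels.items() if p == c and gt_labels.get(b) != c)
        fn = sum(1 for b, g in gt_labels.items() if g == c and pred_labels.get(b) != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# Example: three buildings, one misclassified at the object level
gt = {"b1": "no-damage", "b2": "major", "b3": "destroyed"}
pred = {"b1": "no-damage", "b2": "minor", "b3": "destroyed"}
print(building_level_f1(gt, pred))  # 0.5
```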
Self-tracking can improve people's awareness of their unhealthy behaviors and provide insights toward behavior change. Prior work has explored how self-trackers reflect on their logged data, but it remains unclear how much they learn from tracking feedback and which information is most useful. Indeed, feedback can be overwhelming, and making it concise can improve learning by increasing focus and reducing interpretation burden. To streamline feedback, we propose a self-tracking feedback saliency framework that defines which specific information to provide as feedback, why those details matter, and how to present them (manually elicited or automatically delivered). We collected survey and meal-image data from a field study on mobile food tracking and implemented SalienTrack, a machine learning model that predicts whether users will learn from a tracking event. Using explainable AI (XAI) techniques, SalienTrack identifies which features of an event are most salient, explains why they lead to positive learning outcomes, and prioritizes how to present feedback based on attribution scores. We present use cases and a formative study demonstrating the usability and usefulness of SalienTrack. We discuss the implications of saliency in self-tracking and how adding model explainability expands opportunities to improve the feedback experience.
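As a rough illustration of the attribution-based feedback prioritization this abstract describes, the sketch below trains a toy linear classifier on hypothetical meal-tracking features and ranks features by a simple per-instance attribution (coefficient times deviation from the training mean). The feature names, model choice, and attribution rule are assumptions for illustration, not SalienTrack's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features of a meal-tracking event (hypothetical names)
feature_names = ["calories", "sugar_g", "fiber_g", "logged_late_night"]
X = np.array([[650, 40, 2, 1],
              [400, 10, 8, 0],
              [900, 60, 1, 1],
              [350, 5, 10, 0]], dtype=float)
y = np.array([0, 1, 0, 1])  # 1 = user reported learning something from the feedback

model = LogisticRegression(max_iter=1000).fit(X, y)

def rank_salient_features(x):
    """Per-instance linear attribution: coefficient * deviation from the training mean.
    Features with the largest absolute attribution would be surfaced first as feedback."""
    attributions = model.coef_[0] * (x - X.mean(axis=0))
    order = np.argsort(-np.abs(attributions))
    return [(feature_names[i], float(attributions[i])) for i in order]

print(rank_salient_features(X[0]))
```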
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI, even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, demonstrating empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find that costs such as task difficulty (Study 1), explanation difficulty (Study 2, 3), and benefits such as monetary compensation (Study 4) affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
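A minimal way to write down the kind of cost-benefit comparison the paper describes (the symbols and the additive form are assumptions, not the authors' exact formalization):

```latex
% A person engages with the AI's explanation and verifies the prediction only when
% the expected value of engaging exceeds that of simply relying on the AI:
\[
  \underbrace{p_{\mathrm{self}}\, B - C_{\mathrm{engage}}}_{\text{verify the AI, using the explanation}}
  \;>\;
  \underbrace{p_{\mathrm{AI}}\, B - C_{\mathrm{rely}}}_{\text{accept the AI's answer}}
\]
% B: benefit of a correct final answer (e.g., monetary bonus);
% p_self, p_AI: probability of being correct when verifying vs. relying;
% C_engage, C_rely: cognitive/time costs of the two strategies.
% On this view, explanations reduce overreliance only when they lower C_engage
% enough to flip the inequality.
```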
Concept induction, which is based on formal logical reasoning in description logics, has been used in ontology engineering to create ontology (TBox) axioms from base data (ABox) graphs. In this paper, we show that it can also be used to explain data differentials, for example in the context of explainable AI (XAI), and we show that it can in fact do so in a way that is meaningful to a human observer. Our approach leverages a large class hierarchy, curated from the Wikipedia category hierarchy, as background knowledge.
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 85 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, fairness, usability, and human-AI team performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
Feedback in creativity support tools can help crowd workers improve their ideation. However, current feedback methods require human assessment from facilitators or peers, which does not scale to large crowds. We propose Explainable Directed Diversity, which automatically predicts quality and diversity scores for ideas and provides AI explanations - attribution, contrastive attribution, and counterfactual suggestions - as feedback on why an idea scored low and how to achieve a higher score. As users iteratively refine their ideas, these explanations provide multi-faceted feedback. We conducted formative and controlled user studies to understand the usage and usefulness of the explanations for improving ideation diversity and quality. Users appreciated that the explanation feedback helped focus their efforts and provided directions for improvement. As a result, explanations improved diversity compared to no feedback or feedback with predictions only. Our approach thus opens opportunities for explainable AI to deliver explainable, rich feedback for iterative crowd ideation and creativity support tools.
This study proposes a new method for quantifying damage to the built environment using a deep learning workflow. Using an automated crawler, a database of 10,000 aerial images with a spatial resolution of 2 m per pixel was obtained from Google Earth, covering 50 epicenters worldwide before and after natural disasters. The study uses the Seg-Net algorithm, one of the most popular and versatile CNN architectures for image segmentation, to perform semantic segmentation of the built environment in the satellite imagery at both instances (pre- and post-disaster). The Seg-Net algorithm achieved 92% segmentation accuracy. After segmentation, we compared the difference between the two instances as a percent change. This change coefficient represents a numerical index of damage that quantifies the overall damage to the built environment in an urban area. Such an index could allow governments to estimate the number of affected households and, perhaps, the extent of housing damage.
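The change coefficient described above amounts to a percent difference between the built-environment area segmented before and after the disaster. Below is a minimal sketch of that computation on binary segmentation masks (the function name and exact pixel-count formulation are assumptions, not the paper's precise definition):

```python
import numpy as np

def damage_index(pre_mask, post_mask):
    """Percent change in built-environment pixels between pre- and post-disaster
    segmentation masks (boolean arrays where True = 'built'). A positive value
    means built-up area was lost after the disaster."""
    pre_built = np.count_nonzero(pre_mask)
    post_built = np.count_nonzero(post_mask)
    if pre_built == 0:
        return 0.0
    return 100.0 * (pre_built - post_built) / pre_built

# Toy 4x4 masks: 8 built pixels before, 5 after -> 37.5% loss
pre = np.zeros((4, 4), dtype=bool)
pre[:2, :] = True
post = pre.copy()
post[0, :3] = False
print(damage_index(pre, post))  # 37.5
```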
Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable AI, both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology, however, have pointed out that people regularly use CFs to express causal relationships. Most AI systems are only able to capture associations or correlations in data, so interpreting them as causal would not be justified. In this paper, we present two experiments (total N = 364) exploring the effects of CF explanations of an AI system's predictions on lay people's causal beliefs about the real world. In Experiment 1, we found that providing CF explanations of an AI system's predictions does indeed (unjustifiably) affect people's causal beliefs regarding factors/features the AI uses and that people are more likely to view them as causal factors in the real world. Inspired by the literature on misinformation and health warning messaging, Experiment 2 tested whether we can correct for the unjustified change in causal beliefs. We found that pointing out that AI systems capture correlations and not necessarily causal relationships can attenuate the effects of CF explanations on people's causal beliefs.
Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between instances or raw features, which remain difficult to interpret because they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing framework and the RexNet model for relatable explainable AI with contrastive saliency, counterfactual synthesis, and contrastive cues. We investigate the application of vocal emotion recognition, implementing a modular multi-task deep neural network to predict the emotion of spoken utterances. From think-aloud and controlled studies, we found that counterfactual explanations were useful and were further enhanced by semantic cues, but not by saliency explanations. This work provides insights into providing and evaluating relatable, contrastive explainable AI for perceptual applications.
Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a critical facet of most high-stakes human decision making (e.g., understanding how a trainee doctor differs from an experienced consultant). Accordingly, this paper reports a novel user study (N=96) on how people's expertise in a domain affects their understanding of post-hoc explanations by example for a deep-learning black-box classifier. The results show that people's understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions (e.g., response times, perceptions of correctness and helpfulness), when the image-based domain considered is familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada MNIST). The wider implications of these new findings for XAI strategies are discussed.
Deepfakes are computationally-created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020 a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (StyleGAN2:FFHQ) from nondeepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group, and 3 assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked if each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85% and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat.
Recent advances in machine learning have led to growing interest in explainable AI (XAI), which enables humans to gain insight into a machine learning model's decision-making. Despite this interest, the utility of XAI techniques has not yet been characterized in human-machine teaming. Importantly, XAI offers the promise of enhancing team situational awareness (SA) and the development of shared mental models, key characteristics of effective human-machine teams. Rapidly developing such mental models is especially important in ad hoc human-machine teams, where agents have no prior knowledge of others' decision-making strategies. In this paper, we present two novel human-subject experiments quantifying the benefits of deploying XAI techniques in human-machine teaming scenarios. First, we show that XAI techniques can support SA ($p<0.05$). Second, we examine how different SA levels, induced via abstractions of the collaborative AI's policy, affect ad hoc human-machine team performance. Importantly, we find that the benefits of XAI are not universal, as there is a strong dependence on the composition of the human-machine team. Novices benefit from the increased SA provided by XAI ($p<0.05$) but are susceptible to cognitive overhead ($p<0.05$). On the other hand, expert performance degrades with XAI-based support ($p<0.05$), indicating that the cost of attending to the XAI outweighs the benefits obtained from the additional information provided to enhance SA. Our results demonstrate that researchers must deliberately design and deploy the right XAI techniques in the right scenarios by carefully considering human-machine team composition and how XAI methods augment SA.
In the aftermath of natural disasters such as hurricanes, millions of people need emergency assistance. To allocate resources optimally, human planners need to accurately analyze data that can flow in from multiple sources. This motivates the development of multimodal machine learning frameworks that can integrate multiple data sources and use them effectively. To date, the research community has focused primarily on unimodal reasoning to provide fine-grained assessments of damage. Moreover, previous work has relied mostly on post-disaster imagery, which may take days to become available. In this work, we propose a multimodal framework (GaLeNet) for assessing the severity of damage by complementing pre-disaster satellite imagery with weather data and the trajectory of the hurricane. Through extensive experiments on data from two hurricanes, we demonstrate (i) the merits of multimodal approaches compared to unimodal ones, and (ii) the effectiveness of GaLeNet at fusing various modalities. Furthermore, we show that in the absence of post-disaster imagery, GaLeNet can leverage pre-disaster imagery, preventing substantial delays in decision-making.
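For intuition, a late-fusion baseline in the spirit of what this abstract describes might combine an image encoder for pre-disaster imagery with a small MLP for tabular weather/trajectory features before a damage-severity head. The sketch below is an assumption-laden illustration (dimensions, layers, and names are invented), not GaLeNet's actual architecture:

```python
import torch
import torch.nn as nn

class LateFusionDamageModel(nn.Module):
    """Illustrative late-fusion baseline: an image branch for pre-disaster imagery
    plus an MLP for tabular weather/trajectory features, concatenated before a
    damage-severity classification head."""
    def __init__(self, img_feat_dim=512, weather_dim=16, n_classes=4):
        super().__init__()
        self.img_encoder = nn.Sequential(   # stand-in for a pretrained CNN backbone
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, img_feat_dim), nn.ReLU())
        self.weather_encoder = nn.Sequential(
            nn.Linear(weather_dim, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(img_feat_dim + 64, n_classes)

    def forward(self, image, weather):
        z = torch.cat([self.img_encoder(image), self.weather_encoder(weather)], dim=-1)
        return self.head(z)

model = LateFusionDamageModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])
```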
The goal of explainable artificial intelligence (XAI) is to produce explanations that humans can interpret, yet there is no computationally precise theory of how humans interpret AI-generated explanations. The lack of such a theory means that XAI must be validated empirically on a case-by-case basis, which prevents systematic theory-building in the field. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise predictions of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanation they themselves would have given. The comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classification with saliency-map explanations shows that our theory quantitatively matches participants' predictions of the AI.
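For reference, Shepard's universal law of generalization, which the theory uses to formalize this comparison, states that generalization decays exponentially with distance in psychological similarity space (this is the standard cognitive-science form; the paper's exact notation may differ):

```latex
\[
  g(s_1, s_2) = e^{-c\, d(s_1, s_2)}
\]
% g(s_1, s_2): probability that a response to stimulus s_1 generalizes to s_2;
% d(s_1, s_2): distance between the stimuli in psychological similarity space;
% c > 0: sensitivity parameter. In the proposed theory, the explainee's prediction
% of the AI is weighted by the similarity between the AI's saliency map and the
% explanation the person would have given themselves.
```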
Recent years have seen a surge of interest in the field of explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies - researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision support. Algorithmic research on XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners in making technical choices. We argue that one way to close this gap is to develop evaluation methods that account for different user requirements in these usage contexts. Toward this goal, we introduce a perspective of contextualized XAI evaluation by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conducted two survey studies, one with XAI topic experts and another with crowd workers. Our results urge responsible AI research to adopt usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI across different usage contexts.
The deployment of socially intelligent agents (SIAs) in learning environments has proven to offer several advantages across different application domains. Social agent authoring tools allow scenario designers to create tailored experiences with a high degree of control over SIAs' behaviour; on the flip side, this comes at a cost, as the complexity of the scenarios and of their authoring can become overwhelming. In this paper, we introduce the concept of explainable social agent authoring tools, with the aim of analyzing whether an authoring tool for social agents is understandable and interpretable. To this end, we examined whether the authoring tool FAtiMA-Toolkit is understandable and whether its authoring steps are interpretable from the author's perspective. We conducted two user studies to quantitatively assess the interpretability, comprehensibility, and transparency of FAtiMA-Toolkit from the perspective of scenario designers. One of the key findings is that FAtiMA-Toolkit's conceptual model is generally understandable, but its emotion-based concepts are not as easy to understand and use. Although there are some positive aspects regarding the explainability of FAtiMA-Toolkit, progress is still needed to achieve a fully explainable social agent authoring tool. We provide a set of key concepts and possible solutions that can guide developers in building such tools.
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems due to their psychological validity, flexibility across problem domains and proposed legal compliance. While over 100 counterfactual methods exist, claiming to generate plausible explanations akin to those preferred by people, few have actually been tested on users ($\sim7\%$). So, the psychological validity of these counterfactual algorithms for effective XAI for image data is not established. This issue is addressed here using a novel methodology that (i) gathers ground truth human-generated counterfactual explanations for misclassified images, in two user studies and, then, (ii) compares these human-generated ground-truth explanations to computationally-generated explanations for the same misclassifications. Results indicate that humans do not "minimally edit" images when generating counterfactual explanations. Instead, they make larger, "meaningful" edits that better approximate prototypes in the counterfactual class.
Neural networks, the currently dominant technique in artificial intelligence and machine learning, are based on inductive statistical learning. Today's neural networks are information processing systems devoid of understanding and reasoning capabilities, and consequently they cannot explain the decisions they promote in a form that is meaningful to humans. In this work, we revisit and use fundamental philosophy of science as an analytical lens, with the goal of revealing what can, and more importantly what cannot, be expected from methods that aim to explain the decisions promoted by a neural network. Through a case study, we investigate a selection of explainability methods applied to two mundane domains, animals and headgear. Our study demonstrates that the usefulness of these methods depends on human domain knowledge and our capacity to understand, generalize, and reason. The explainability methods can be useful when the goal is to gain further insights into the strengths and weaknesses of a trained neural network. If, instead, the aim is to use these explainability methods to promote actionable decisions or to build trust in ML models, they need to be less ambiguous than they are today. From our study, we conclude that the benchmarking of explainability methods is a central quest toward trustworthy artificial intelligence and machine learning.
Automatic summarization methods are efficient but can suffer from low quality. In contrast, manual summarization is expensive but produces higher quality. Can humans and AI collaborate to improve summarization performance? In similar text generation tasks (e.g., machine translation), human-AI collaboration in the form of "post-editing" AI-generated text reduces human workload and improves the quality of AI output. Therefore, we explored whether post-editing offers advantages for text summarization. Specifically, we conducted an experiment with 72 participants, comparing post-editing of provided summaries with manual summarization in terms of summary quality, human efficiency, and user experience on formal (XSum news) and informal (Reddit posts) texts. This study sheds valuable insight into when post-editing is useful for text summarization: it helped in some cases (e.g., when participants lacked domain knowledge) but not in others (e.g., when the provided summaries included inaccurate information). Participants' differing editing strategies and needs for assistance offer implications for future human-AI summarization systems.