People are not very good at detecting lies, which may explain why they refrain from accusing others of lying, given the social costs attached to false accusations, both for the accuser and the accused. Here we consider how this social balance might be disrupted by the availability of lie-detection algorithms powered by Artificial Intelligence. Will people elect to use lie-detection algorithms that perform better than humans, and if so, will they show less restraint in their accusations? We built a machine learning classifier whose accuracy (67%) was significantly better than human accuracy (50%) in a lie-detection task and conducted an incentivized lie-detection experiment in which we measured participants' propensity to use the algorithm, as well as the impact of that use on accusation rates. We find that the few people (33%) who elect to use the algorithm drastically increase their accusation rates (from 25% in the baseline condition up to 86% when the algorithm flags a statement as a lie). They make more false accusations (an 18-percentage-point increase), but at the same time, the probability of a lie remaining undetected is much lower in this group (a 36-percentage-point decrease). We consider individual motivations for using lie-detection algorithms and the social implications of these algorithms.
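The abstract does not specify how the classifier was built; purely as a toy illustration of the kind of text classifier such a study might use, here is a minimal bag-of-words Naive Bayes lie detector in plain Python (all function names and the training data are invented for the sketch):

```python
import math
from collections import Counter

def train_nb(statements, labels):
    """Train a word-count Naive Bayes model on (statement, label) pairs.
    Labels are 'lie' or 'truth'. Toy illustration only."""
    counts = {"lie": Counter(), "truth": Counter()}
    priors = Counter(labels)
    for text, label in zip(statements, labels):
        counts[label].update(text.lower().split())
    totals = {c: sum(counts[c].values()) for c in counts}
    vocab = set(counts["lie"]) | set(counts["truth"])
    return counts, totals, priors, vocab

def classify(model, text):
    """Return the most probable label under the trained model."""
    counts, totals, priors, vocab = model
    n = sum(priors.values())
    scores = {}
    for c in counts:
        score = math.log(priors[c] / n)
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a class
            score += math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)
```

A classifier like this outputs a binary flag per statement, which is the signal participants in the experiment could elect to consult before accusing.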
Deepfakes are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, making them a topic of interest to many aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, accuracy across images ranged quite evenly between 85% and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as an urgent call to action to address this threat.
Human-AI collaboration for decision-making strives to achieve team performance that exceeds that of humans or AI alone. However, many factors affect the success of human-AI teams, including a user's domain expertise, mental model of the AI system, trust in its recommendations, and more. This work examines users' interaction with three simulated algorithmic models, all with similar accuracy but tuned differently on their true positive and true negative rates. Our study examines user performance in a non-trivial blood vessel labeling task, in which participants indicate whether a given vessel is flowing or stalled. Our results show that while AI-assistant recommendations can aid user decision-making, factors such as users' baseline performance relative to the AI and the complementary tuning of AI error types significantly affect overall team performance. Novice users improved, but did not reach the AI's accuracy. Highly skilled users were generally able to discern when they should follow the AI's recommendations and typically maintained or improved their performance. Performers at a similar accuracy level to the AI were the most variable with respect to the AI's recommendations. Furthermore, we find that users' perception of the AI's performance relative to their own also significantly affects whether their accuracy improves when given AI recommendations. This work provides insights into the complexity of factors related to human-AI collaboration and offers recommendations on how to develop human-centered AI algorithms that complement users in decision-making tasks.
Incivility remains a major challenge for online discussion platforms, to such an extent that even conversations between well-intentioned users can often derail into uncivil behavior. Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users. In this work we propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in and actively guides them as they are drafting their replies to avoid further escalation. As a proof of concept for this paradigm, we design an algorithmic tool that provides such proactive information directly to users, and conduct a user study in a popular discussion platform. Through a mixed methods approach combining surveys with a randomized controlled experiment, we uncover qualitative and quantitative insights regarding how the participants utilize and react to this information. Most participants report finding this proactive paradigm valuable, noting that it helps them to identify tension that they may have otherwise missed and prompts them to further reflect on their own replies and to revise them. These effects are corroborated by a comparison of how the participants draft their reply when our tool warns them that their conversation is at risk of derailing into uncivil behavior versus in a control condition where the tool is disabled. These preliminary findings highlight the potential of this user-centered paradigm and point to concrete directions for future implementations.
Despite AI's superhuman performance in various domains, humans are often unwilling to adopt AI systems. The lack of explainability in many modern AI techniques is believed to hurt their adoption, as users may not trust systems whose decision processes they do not understand. We investigate this claim through a novel experiment in which we use an interactive prediction task to analyze the impact of explainability and outcome feedback on trust in AI and on human performance in AI-assisted prediction tasks. We find that explainability led to no robust improvements in trust, whereas outcome feedback had a significantly larger and more reliable effect. However, both factors had only modest effects on participants' task performance. Our findings suggest that (1) factors receiving substantial attention, such as explainability, may be less effective at increasing trust than other factors, such as outcome feedback, and (2) augmenting human performance via AI systems may not be a simple matter of increasing trust in AI, as increased trust is not always associated with equally sizable improvements in performance. These findings invite the research community to focus not only on methods for generating explanations, but also on techniques for ensuring that explanations affect trust and performance in practice.
Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems produce smart replies, autocompletes, and translations. AI-generated language is often not identified as such but passes as human language, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern AI-generated language in one of its most personal and consequential forms: self-presentation. In six experiments, participants (N = 4,650) tried to identify self-presentations generated by state-of-the-art language models. Across professional, hospitality, and romantic settings, we find that humans are unable to identify AI-generated self-presentations. Combining qualitative analyses with language feature engineering, we find that human judgments of AI-generated language are handicapped by intuitive but flawed heuristics, such as associating first-person pronouns, authentic words, or family topics with humanity. We show that these heuristics make human judgment of generated language predictable and manipulable, allowing AI systems to produce language perceived as more human than human. We conclude by discussing solutions, such as AI accents or fair disclosure policies, to reduce the deceptive potential of generated language and limit the subversion of human intuition.
Artificial intelligence (AI) systems are increasingly used to provide advice to facilitate human decision-making. While substantial work has explored how to optimize AI systems to produce accurate and fair advice, and how to present algorithmic advice to human decision-makers, in this work we pose a different basic question: when should algorithms provide advice? Motivated by limitations of the current practice of constantly providing algorithmic advice, we propose the design of AI systems that interact with human users in a two-sided manner. Our AI system learns a policy for providing advice from past human decisions. Then, for new cases, the learned policy leverages human input to identify cases in which algorithmic advice would be useful, as well as cases in which humans are better off deciding alone. We evaluate our approach through a large-scale experiment on pretrial release decisions using data from the US criminal justice system. In our experiment, participants were asked to assess the risk that defendants, if released, would violate their release terms, and were subject to different advising approaches. The results show that, compared to fixed, non-interactive advising approaches, our interactive advising approach provides advice when it is needed and significantly improves human decision-making. Our approach has the added advantages of facilitating human learning, preserving the complementary strengths of human decision-makers, and eliciting more positive responses to the advice.
Recent advances in technology for hyper-realistic visual effects have raised concerns that deepfake videos of political speeches will soon be visually indistinguishable from authentic video recordings. Conventional wisdom in communication research predicts that people will fall for fake news more often when the same version of a story is presented as video rather than as text. Here, we evaluate how 41,822 participants distinguished real political speeches from fabrications in an experiment in which speeches were randomly presented in permutations of text, audio, and video. We find that access to the audio and visual communication modalities improved participants' accuracy. Here, human judgment relied more on how something was said (the audio-visual cues) than on the content of the speech itself. However, we find that reflective reasoning moderates the degree to which participants consider visual information: lower performance on the Cognitive Reflection Test was associated with an over-reliance on what was said.
We review the literature on models that attempt to explain human behavior in social interactions described by normal-form games with monetary payoffs. We start by covering social and moral preferences. We then focus on the growing body of research showing that people react to the language used to describe actions, especially when it activates moral concerns. Finally, we argue that behavioral economics is in the midst of a paradigm shift toward language-based preferences, which will require the exploration of new models and experimental setups.
Prior work has identified a resilient phenomenon that threatens the performance of human-AI decision-making teams: overreliance, when people agree with an AI, even when it is incorrect. Surprisingly, overreliance does not reduce when the AI produces explanations for its predictions, compared to only providing predictions. Some have argued that overreliance results from cognitive biases or uncalibrated trust, attributing overreliance to an inevitability of human cognition. By contrast, our paper argues that people strategically choose whether or not to engage with an AI explanation, demonstrating empirically that there are scenarios where AI explanations reduce overreliance. To achieve this, we formalize this strategic choice in a cost-benefit framework, where the costs and benefits of engaging with the task are weighed against the costs and benefits of relying on the AI. We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze. Through 5 studies (N = 731), we find that costs such as task difficulty (Study 1), explanation difficulty (Study 2, 3), and benefits such as monetary compensation (Study 4) affect overreliance. Finally, Study 5 adapts the Cognitive Effort Discounting paradigm to quantify the utility of different explanations, providing further support for our framework. Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
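The cost-benefit framework above can be made concrete with a small decision rule. This is a hedged sketch, not the paper's actual formalization: the function name, parameters, and the simplifying assumption that verifying the explanation always recovers the correct answer are all invented for illustration.

```python
def should_engage(p_ai_correct, benefit_correct, cost_wrong, cost_verify):
    """Decide whether to engage with the AI's explanation or rely blindly.

    Relying blindly:  expected utility = p * benefit - (1 - p) * cost_wrong.
    Verifying:        benefit_correct - cost_verify  (illustrative
                      simplification: verification always yields the
                      correct answer).
    Returns True when engaging with the explanation has higher
    expected utility than blind reliance (i.e., less overreliance).
    """
    eu_rely = p_ai_correct * benefit_correct - (1 - p_ai_correct) * cost_wrong
    eu_verify = benefit_correct - cost_verify
    return eu_verify > eu_rely
```

Under this rule, overreliance is rational whenever verification is expensive relative to the AI's error rate, which is consistent with the studies' finding that easier explanations and higher compensation reduce overreliance.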
Humans are constantly influenced by the behaviors and opinions of others. Crucially, social influence among humans is shaped by reciprocity: we follow the advice of people who have been taking our opinions into account more. In the current work, we investigate whether reciprocal social influence can emerge when interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then publicly modify them after observing the partner's judgment. Results show that endowing the robot with the ability to express and modulate its level of susceptibility to the human's judgments is a double-edged sword. On the one hand, participants lost confidence in the robot's competence when the robot followed their advice. On the other hand, participants were reluctant to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence that support human-robot collaboration.
Intelligent agents have great potential as facilitators of group conversation among older adults. However, little is known about how to design agents for this purpose and user group, especially in terms of agent embodiment. To this end, we conducted a mixed methods study of older adults' reactions to voice and body in a group conversation facilitation agent. Two agent forms with the same underlying artificial intelligence (AI) and voice system were compared: a humanoid robot and a voice assistant. One preliminary study (total n=24) and one experimental study comparing voice and body morphologies (n=36) were conducted with older adults and an experienced human facilitator. Findings revealed that the artificiality of the agent, regardless of its form, was beneficial for the socially uncomfortable task of conversation facilitation. Even so, talkative personality types had a poorer experience with the "bodied" robot version. Design implications and supplementary reactions, especially to agent voice, are also discussed.
Taking advice from others requires confidence in their competence. This is important for interaction with peers, but also for collaboration with social robots and artificial agents. Nonetheless, we do not always have access to information about others' competence or performance. In these uncertain environments, do our prior beliefs about the nature and the competence of our interacting partners modulate our willingness to rely on their judgments? In a joint perceptual decision making task, participants made perceptual judgments and observed the simulated estimates of either a human participant, a social humanoid robot or a computer. Then they could modify their estimates based on this feedback. Results show participants' belief about the nature of their partner biased their compliance with its judgments: participants were more influenced by the social robot than human and computer partners. This difference emerged strongly at the very beginning of the task and decreased with repeated exposure to empirical feedback on the partner's responses, disclosing the role of prior beliefs in social influence under uncertainty. Furthermore, the results of our functional task suggest an important difference between human-human and human-robot interaction in the absence of overt socially relevant signal from the partner: the former is modulated by social normative mechanisms, whereas the latter is guided by purely informational mechanisms linked to the perceived competence of the partner.
The most prominent tasks in emotion analysis are assigning emotions to texts and understanding how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations made by people who consider an event relevant. These include evaluations of whether an event is novel, whether the person considers themselves responsible, whether it aligns with their own goals, and many others. Such appraisals explain which emotions develop from an event: for instance, a novel situation elicits surprise, and a situation with uncertain consequences might elicit fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the aim of understanding whether annotators can reliably reconstruct appraisal concepts, whether these can be predicted by text classifiers, and whether appraisal concepts help identify emotion categories. To achieve this, we compiled a corpus by asking people to write texts describing events that triggered particular emotions and to disclose their appraisals. We then asked readers to reconstruct the emotions and appraisals from the texts. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human baseline against which to judge a model's performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
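The link from appraisals to emotion categories can be pictured as a nearest-prototype lookup. This is only a schematic sketch of the idea: the appraisal dimensions, prototype values, and emotion set are made up for illustration and are not the corpus's actual annotation scheme.

```python
# Illustrative appraisal vectors over three invented dimensions:
# (novelty, goal_conduciveness, own_responsibility).
PROTOTYPES = {
    "surprise": (1.0, 0.5, 0.0),
    "fear":     (0.8, 0.0, 0.0),
    "pride":    (0.2, 1.0, 1.0),
    "guilt":    (0.1, 0.0, 1.0),
}

def nearest_emotion(appraisal):
    """Map an appraisal vector to the closest emotion prototype
    by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda e: dist(PROTOTYPES[e], appraisal))
```

In the study's actual pipeline, appraisal values would be predicted from text first; the sketch only shows why recovered appraisals can disambiguate emotion categories.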
Artificial intelligence algorithms are increasingly adopted by public organizations as decision aids, promising to overcome the biases of human decision-makers. At the same time, they may introduce new biases into human-algorithm interactions. Drawing on the psychology and public administration literatures, we investigate two key biases: over-reliance on algorithmic advice even in the face of warning signals from other sources (automation bias), and selective adoption of algorithmic advice when it matches stereotypes (selective adherence). We assess these biases through three experimental studies conducted in the Netherlands, and we discuss the implications of our findings for public-sector decision-making in the age of automation. Overall, our studies point to the potential negative effects of automation of the administrative state on already vulnerable and disadvantaged citizens.
As AI systems demonstrate increasingly strong predictive performance, their adoption has grown across many domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often undesirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision-making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision-making must embrace empirical approaches to build a foundational understanding of how humans interact and collaborate with AI to make decisions. To invite and help structure research efforts towards understanding and improving human-AI decision-making, we survey the recent literature of empirical human-subject studies on this topic. We summarize the study design choices made in over 100 papers along three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in the field's current practices, and list recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision-making, so that researchers can make rigorous choices in study design and the research community can build on each other's work and produce generalizable scientific knowledge. We also hope this survey will serve as a bridge for the HCI and AI communities to work together, mutually shaping the empirical science and computational technologies of human-AI decision-making.
The spread of digital disinformation (a.k.a. "fake news") is arguably one of the most significant threats on the Internet, capable of causing individual and societal harm at large scales. Susceptibility to fake news attacks hinges on whether Internet users perceive a fake news article or snippet to be legitimate after reading it. In this paper, we seek an in-depth understanding of users' susceptibility to text-centric fake news attacks via a neuro-cognitive methodology. We investigate the neural underpinnings relevant to fake versus real news through EEG. We run an experiment with human users to thoroughly investigate users' perception and cognitive processing of fake and real news, and we analyze the neural activity associated with the fake/real news detection task for different categories of news articles. Our results show that there may be no statistically significant or automatically inferable differences in the way the human brain processes fake versus real news, whereas marked differences are observed when people are exposed to (real or fake) news versus a resting state, and even between some different categories of fake news. This neuro-cognitive finding may help explain users' susceptibility to fake news attacks, as also confirmed by our behavioral analysis. In other words, fake news articles appear nearly indistinguishable from real news articles in both the behavioral and neural domains. Our work serves to dissect the fundamental neural phenomena underlying fake news attacks and explains users' susceptibility to these attacks through the limits of human biology. We believe this could be a significant insight for researchers and practitioners.
Echo chambers on social media are a significant problem that can elicit a number of negative consequences, most recently affecting the response to COVID-19. Echo chambers promote conspiracy theories about the virus and are found to be linked to vaccine hesitancy, lower compliance with mask mandates, and the practice of social distancing. Moreover, the problem of echo chambers is connected to other related issues such as political polarization and the spread of misinformation. An echo chamber is defined as a network of users in which users interact only with opinions that support their pre-existing beliefs and opinions, while excluding and discrediting other viewpoints. This survey aims to examine the echo chamber phenomenon on social media from a social computing perspective and to provide a blueprint for possible solutions. We survey the related literature to understand the attributes of echo chambers and how they affect the individual and society at large. In addition, we present the mechanisms, both algorithmic and psychological, that lead to the formation of echo chambers. These mechanisms manifest in two forms: (1) the bias of social media recommender systems and (2) internal biases such as confirmation bias and homophily. While mitigating internal biases is immensely challenging, there has been considerable effort toward mitigating the bias of recommender systems. These recommender systems exploit our own biases to personalize content recommendations and keep us engaged so that we watch more ads. We therefore further investigate different computational approaches to echo chamber detection and prevention, mainly based on recommender systems.
Explanations have been framed as an essential feature for better and fairer human-AI decision-making. In the context of fairness, this has not been appropriately studied, as prior work has mostly evaluated explanations based on their effects on people's perceptions. We argue, however, that for explanations to promote fairer decisions, they must enable humans to discern correct from wrong AI recommendations. To validate our conceptual argument, we conduct an empirical study examining the relationship between explanations, fairness perceptions, and reliance behavior. Our findings show that explanations influence people's fairness perceptions, which in turn affect reliance. However, we observe that low fairness perceptions lead to more overrides of AI recommendations, regardless of whether they are correct or wrong. This (i) raises doubts about the usefulness of existing explanations for enhancing distributive fairness, and (ii) makes an important case for why perceptions must not be confused with a proxy for appropriate reliance.
The widespread adoption of business analytics (BA) has brought financial gains and increased efficiencies. However, these advances have simultaneously raised growing concerns about legal and ethical challenges when BA informs decisions with unfair impacts. In response to these concerns, the emerging study of algorithmic fairness deals with algorithmic outputs that may result in disparate outcomes or other forms of injustice for subgroups of the population, especially those that have been historically marginalized. Fairness is relevant on the basis of legal compliance, social responsibility, and utility; if not adequately and systematically addressed, unfair BA systems can lead to societal harms and may also threaten an organization's own survival, competitiveness, and overall performance. This paper offers a forward-looking, BA-focused review of algorithmic fairness. We first review state-of-the-art research on the sources and measures of bias, as well as bias mitigation algorithms. We then provide a detailed discussion of the fairness-utility relationship, emphasizing that the frequent assumption of a trade-off between these two constructs is often mistaken or short-sighted. Finally, we chart a path forward by identifying opportunities for business scholars to address impactful, open challenges that are key to the effective and responsible deployment of BA.