Using chatbots to deliver recommendations is increasingly popular. The design of recommendation chatbots has primarily taken an information-centric approach, focusing on the recommended content per se. Limited attention has been paid to how social connection and relational strategies, such as self-disclosure from a chatbot, may influence users' perception and acceptance of the recommendation. In this work, we designed, implemented, and evaluated a social chatbot capable of performing three different levels of self-disclosure: factual information (low), cognitive opinions (medium), and emotions (high). In the evaluation, we recruited 372 participants to converse with the chatbot on two topics: movies and COVID-19 experiences. In each topic, the chatbot made small talk and offered recommendations relevant to the topic. Participants were randomly assigned to four experimental conditions in which the chatbot used factual, cognitive, emotional, or adaptive strategies to perform self-disclosures. By training a text classifier to identify users' level of self-disclosure in real time, the adaptive chatbot could dynamically match its self-disclosure to the level of disclosure exhibited by the users. Our results show that users reciprocate with higher-level self-disclosure when a recommendation chatbot consistently displays emotions throughout the conversation. The chatbot's emotional disclosure also led to increased enjoyment of the interaction and more positive interpersonal perceptions of the bot, fostering a stronger human-chatbot relationship and thus increasing recommendation effectiveness, including a higher tendency to accept the recommendation. We discuss the insights obtained and their implications for future design.
The last decade has shown growing interest in robots as wellbeing coaches. However, cohesive and comprehensive guidelines for the design of robots as coaches to promote mental wellbeing have not yet been proposed. This paper details design and ethical recommendations based on a qualitative meta-analysis, drawing on a grounded theory approach, of three different user-centered studies involving robotic wellbeing coaches, namely: (1) a participatory design study conducted with 11 participants, consisting of both potential users who had taken part in a brief solution-focused practice study with a human coach and coaches from different disciplines; (2) semi-structured individual interview data from 20 participants who took part in a positive psychology intervention study with the robotic wellbeing coach Pepper; and (3) a participatory design study conducted with 3 participants of the positive psychology study as well as 2 relevant wellbeing coaches. After conducting a thematic analysis and a qualitative meta-analysis, we collated the collected data into convergent and divergent themes and distilled from these results a set of design guidelines and ethical considerations. Our findings can inform researchers and practitioners about the key aspects to consider when designing robotic mental wellbeing coaches.
Intelligent agents have great potential as facilitators of group conversation among older adults. However, little is known about how to design agents for this purpose and user group, especially in terms of agent embodiment. To this end, we conducted a mixed methods study of older adults' reactions to voice and body in a group conversation facilitation agent. Two agent forms with the same underlying artificial intelligence (AI) and voice system were compared: a humanoid robot and a voice assistant. One preliminary study (total n=24) and one experimental study comparing voice and body morphologies (n=36) were conducted with older adults and an experienced human facilitator. Findings revealed that the artificiality of the agent, regardless of its form, was beneficial for the socially uncomfortable task of conversation facilitation. Even so, talkative personality types had a poorer experience with the "bodied" robot version. Design implications and supplementary reactions, especially to agent voice, are also discussed.
Empathy is a vital factor that contributes to mutual understanding and joint problem-solving. In recent years, a growing number of studies have recognized the benefits of empathy and started to incorporate empathy in conversational systems. We refer to this topic as empathetic conversational systems. To identify the critical gaps and future opportunities in this topic, this paper examines this rapidly growing field using five review dimensions: (i) conceptual empathy models and frameworks, (ii) adopted empathy-related concepts, (iii) datasets and algorithmic techniques developed, (iv) evaluation strategies, and (v) state-of-the-art approaches. The findings show that most studies have centered on the use of the EMPATHETICDIALOGUES dataset, and that the text-based modality dominates research in this field. Studies have mainly focused on extracting features from the messages of the users and the conversational systems, with minimal emphasis on user modeling and profiling. Notably, studies that have incorporated emotion causes, external knowledge, and affect matching in their response generation models have obtained significantly better results. For implementation in diverse real-world settings, we recommend that future studies address key gaps in detecting and authenticating emotions at the entity level, handling multimodal inputs, displaying more nuanced empathetic behaviors, and encompassing additional dialogue system features.
The most prominent tasks in emotion analysis are assigning emotions to text and understanding how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, even without an emotion name being explicitly mentioned. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations made by people who consider an event relevant. These include evaluating whether an event is novel, whether the person considers themselves responsible, whether the event is aligned with their own goals, and many others. Such appraisals explain which emotions develop based on an event; for example, a novel situation may induce surprise, or one with uncertain consequences may induce fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether appraisal concepts can be reliably reconstructed by annotators, whether they can be predicted by text classifiers, and whether appraisal concepts help to identify emotion categories. To achieve this, we compiled a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. Then, we asked readers to reconstruct the emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human baseline as a performance measure for judging models. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
Incivility remains a major challenge for online discussion platforms, to such an extent that even conversations between well-intentioned users can often derail into uncivil behavior. Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users. In this work we propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in and actively guides them as they are drafting their replies to avoid further escalation. As a proof of concept for this paradigm, we design an algorithmic tool that provides such proactive information directly to users, and conduct a user study in a popular discussion platform. Through a mixed methods approach combining surveys with a randomized controlled experiment, we uncover qualitative and quantitative insights regarding how the participants utilize and react to this information. Most participants report finding this proactive paradigm valuable, noting that it helps them to identify tension that they may have otherwise missed and prompts them to further reflect on their own replies and to revise them. These effects are corroborated by a comparison of how the participants draft their reply when our tool warns them that their conversation is at risk of derailing into uncivil behavior versus in a control condition where the tool is disabled. These preliminary findings highlight the potential of this user-centered paradigm and point to concrete directions for future implementations.
This paper provides a detailed overview of a case study applying continual learning (CL) to single-session human-robot interaction (HRI) sessions (avg. 31 +- 10 minutes), in which a robotic mental wellbeing coach delivered positive psychology (PP) exercises to participants (n = 20). We present the results of a thematic analysis (TA) of data from brief semi-structured interviews conducted with participants after the interaction sessions, together with an analysis of statistical results demonstrating how participants' personality influenced the way they perceived the robot and its interactions.
Collaborative human-AI decision-making aims to achieve team performance beyond that of humans or AI alone. However, many factors affect the success of human-AI teams, including users' domain expertise, their mental models of the AI system, their trust in recommendations, and more. This work examines users' interaction with three simulated algorithmic models, all with similar accuracy but tuned differently in terms of their true positive and true negative rates. Our study examines user performance in a non-trivial blood vessel labeling task, in which participants indicate whether a given blood vessel is flowing or stalled. Our results show that while an AI assistant's recommendations can aid user decision-making, factors such as users' baseline performance relative to the AI and the complementary tuning of the AI's error types significantly affect overall team performance. Novice users improved, but not to the AI's level of accuracy. Highly proficient users were generally able to discern when the AI's recommendations should be followed, and generally maintained or improved their performance. Users whose accuracy was similar to the AI's were the most variable in terms of benefiting from AI recommendations. In addition, we found that users' perception of the AI's performance relative to their own also significantly affected whether their accuracy improved when AI recommendations were given. This work provides insights into the complexity of factors involved in human-AI collaboration and offers guidance on how human-centered AI algorithms can be developed to complement users in decision-making tasks.
The growing demand for mental health support highlights the importance of conversational agents as human supporters both globally and in China. Such agents can increase availability and reduce the relative cost of mental health support. The support provided can be divided into two main types: cognitive and emotional support. Existing work on this topic mainly focuses on agents that adopt cognitive behavioral therapy (CBT) principles. Such agents operate based on predefined templates and exercises to provide cognitive support. However, research on using such agents for emotional support is limited. In addition, most existing agents operate in English, highlighting the importance of conducting such research in China. In this study, we analyze the effectiveness of Emohaa in reducing symptoms of mental distress. Emohaa is a conversational agent that provides cognitive support through CBT-based exercises and guided conversations. It also supports users emotionally by enabling them to vent about the emotional issues they wish to discuss. The study included 134 participants divided into three groups: Emohaa (CBT-based), Emohaa (Full), and a control group. Experimental results show that, compared with the control group, participants who used Emohaa experienced significantly greater improvements in symptoms of mental distress. We also found that adding the emotional support agent had a complementary effect on this improvement, mainly for depression and insomnia. Based on the obtained results and participants' satisfaction with the platform, we conclude that Emohaa is a practical and effective tool for reducing mental distress.
Every organization needs to improve its products, services, and processes. In this context, engaging with customers and understanding their journey is essential. Organizations have leveraged various techniques and technologies to support customer engagement, from call centres to chatbots and virtual agents. Recently, these systems have used Machine Learning (ML) and Natural Language Processing (NLP) to analyze large volumes of customer feedback and engagement data. The goal is to understand customers in context and provide meaningful answers across various channels. Despite multiple advances in Conversational Artificial Intelligence (AI) and Recommender Systems (RS), it is still challenging to understand the intent behind customer questions during the customer journey. To address this challenge, in this paper, we study and analyze recent work in Conversational Recommender Systems (CRS) in general and, more specifically, in chatbot-based CRS. We introduce a pipeline to contextualize the input utterances in conversations. We then take the next step towards leveraging reverse feature engineering to link the contextualized input and the learning model to support intent recognition. Since performance evaluation depends on the choice of ML model, we use transformer-based models to evaluate the proposed approach on a labelled dialogue dataset (MSDialogue) of question-answering interactions between information seekers and answer providers.
Emotions can provide a natural communication modality to complement the existing multimodal capabilities of social robots, such as text and speech, in many domains. We conducted three online studies with 112, 223, and 151 participants to investigate the benefits of using emotions as a communication modality for search and rescue (SAR) robots. In the first experiment, we investigated the feasibility of conveying information related to SAR situations through robot emotions, resulting in a mapping from SAR situations to emotions. The second study used Affect Control Theory as an alternative method for deriving such mappings. This method is more flexible, e.g., it allows adaptation to different emotion sets and different robots. In the third experiment, we created emotional expressions for an appearance-constrained outdoor field research robot using LEDs as an expressive channel. Using these emotional expressions in a variety of simulated SAR situations, we evaluated their impact on participants, who took on the role of rescuers. Our results and proposed methods provide (a) insights into how emotions can help convey information in a SAR context and (b) evidence for the effectiveness of adding emotions as a communication modality in (simulated) SAR communication environments.
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We train our model using reinforcement learning from human feedback, with additions that help human raters judge the agent's behavior. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown allows us to collect more targeted human judgments of agent behavior and enables more efficient rule-conditioned reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgments over model statements. For factual questions, the evidence provided by Sparrow supports its responses 78% of the time. Sparrow is preferred over baselines more often while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations in the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and it reviews relevant papers from philosophy, cognitive psychology/science, and social psychology that study these topics. It draws out some important findings and discusses ways in which these can be infused into work on explainable artificial intelligence.
Autoregressive language models, which use deep learning to produce human-like text, have become increasingly prevalent. Such models power popular virtual assistants in domains such as smart health, finance, and autonomous driving. While the parameters of these large language models are improving, concerns persist that these models might not serve all subgroups in society equally. Despite growing interdisciplinary discussion of AI fairness, systematic metrics are lacking for assessing what fairness means in dialogue systems and how to engage different populations in the assessment loop. Drawing on theories of deliberative democracy and science and technology studies, this paper proposes an analytical framework for unpacking the meaning of fairness in human-AI dialogue. Using this framework, we conducted an auditing study to examine how GPT-3 responds to different subpopulations on crucial scientific and social topics: climate change and the Black Lives Matter (BLM) movement. Our corpus consists of over 20,000 rounds of dialogue between GPT-3 and 3290 individuals who vary in gender, race and ethnicity, education level, English as a first language, and opinions toward the issues. We found a substantively worse user experience with GPT-3 among the opinion and education minority subpopulations; however, these two groups achieved the largest knowledge gains, changing their attitudes toward supporting BLM and climate change efforts after the chat. We traced these divides in user experience to conversational differences and found that GPT-3 used more negative expressions when responding to the education and opinion minority groups, compared with its responses to the majority groups. We discuss the implications of our findings for deliberative conversational AI systems that center diversity, equity, and inclusion.
In times of public crisis, information seeking is critical to people's self-care and wellbeing. Extensive research has investigated empirical understandings and technical solutions to facilitate information seeking by citizens at home in affected regions. However, limited knowledge has been established on supporting international migrants who need to cope with a crisis in their host countries. The current paper presents an interview study with two groups of Chinese migrants living in Japan and the United States (n = 14). Participants reflected on their information-seeking experiences during the COVID pandemic. The reflection was supplemented by two weeks of self-tracking, during which participants maintained records of their COVID-related information-seeking practices. Our data indicated that participants often took linguistic detours, visiting Mandarin resources for information about the outbreak in their host countries. They also made strategic use of Mandarin information to perform selective reading, cross-checking, and contextualized interpretation of COVID-related information in Japanese or English. Although these practices enhanced the effectiveness of participants' COVID-related information gathering and sensemaking, they sometimes put participants at a disadvantage in ways that they did not always recognize. Moreover, participants lacked awareness of, or preference for, migrant-oriented information released by the public authorities of their host countries, despite its availability. Building on these findings, we discuss solutions to improve international migrants' COVID-related information seeking in non-native language and cultural environments. We advocate inclusive crisis infrastructures that engage people with varying levels of local language fluency, information literacy, and experience in utilizing public services.
As AI systems demonstrate increasingly strong predictive performance, their adoption has grown in many domains. However, in high-stakes domains such as criminal justice and healthcare, full automation is often not desirable due to safety, ethical, and legal concerns, yet fully manual approaches can be inaccurate and time-consuming. As a result, there is growing interest in the research community in augmenting human decision-making with AI assistance. Besides developing AI technologies for this purpose, the emerging field of human-AI decision-making must embrace empirical approaches to form a foundational understanding of how humans interact and collaborate with AI to make decisions. To invite and help structure research efforts towards understanding and improving human-AI decision-making, we survey recent literature on empirical human-subject studies of this topic. We summarize the study design choices made in over 100 papers along three important aspects: (1) decision tasks, (2) AI models and AI assistance elements, and (3) evaluation metrics. For each aspect, we summarize current trends, discuss gaps in the field's current practices, and list recommendations for future research. Our survey highlights the need to develop common frameworks to account for the design and research spaces of human-AI decision-making, so that researchers can make rigorous choices in study design and the research community can build on one another's work to produce generalizable scientific knowledge. We also hope this survey will serve as a bridge between the HCI and AI communities, allowing them to work together to mutually shape the empirical science and computational technologies of human-AI decision-making.
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 85 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, fairness, usability, and human-AI team performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
We present a storytelling robot, controlled via the ACT-R cognitive architecture, that is able to adopt different persuasive techniques and ethical stances while conversing about topics concerning COVID-19. The main contribution of the paper is the proposal of a needs-driven model that guides and evaluates, during the dialogue, the use (if any) of persuasive techniques available in the agent's procedural memory. The persuasive techniques tested in this model range from storytelling to framing techniques and rhetoric-based arguments. To the best of our knowledge, this represents the first attempt to build a persuasive agent able to integrate a mix of explicitly grounded cognitive assumptions about dialogue management, storytelling and persuasive techniques, as well as ethical attitudes. The paper presents the results of an exploratory evaluation of the system with 63 participants.
Self-tracking can improve people's awareness of their unhealthy behaviors and provide insights towards behavior change. Prior work has explored how self-trackers reflect on their logged data, but it remains unclear how much they learn from the tracking feedback and which information is more useful. Indeed, the feedback can still be overwhelming, and making it concise can improve learning by increasing focus and reducing interpretation burden. To simplify feedback, we propose a self-tracking feedback saliency framework to define what specific information to provide in feedback, why those details matter, and how to present them (manually elicited or automatically delivered). We collected survey and meal-image data from a field study on mobile food tracking and implemented SalienTrack, a machine learning model that predicts whether a user will learn from a tracking event. Using explainable AI (XAI) techniques, SalienTrack identifies which features of the event are most salient, explains why they lead to positive learning outcomes, and prioritizes how to present the feedback based on attribution scores. We demonstrate use cases and conducted a formative study to show the usability and usefulness of SalienTrack. We discuss the implications of saliency in self-tracking and how adding model explainability expands the opportunities for improving feedback experiences.
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.