Human communication is increasingly intermixed with language produced by AI. Across chat, email, and social media, AI systems generate smart replies, autocompletions, and translations. AI-generated language is often not identified as such but passes as human language, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether one of the most personal and consequential forms of language, a self-presentation, was generated by AI. Across six experiments, participants (N = 4,650) tried to identify self-presentations generated by state-of-the-art language models. In professional, hospitality, and romantic settings, we find that humans are unable to identify AI-generated self-presentations. Combining qualitative analysis with language feature engineering, we find that human judgments of AI-generated language are hindered by intuitive but flawed heuristics, such as associating first-person pronouns, authentic-sounding words, or family topics with humanity. We show that these heuristics make human judgments of generated language predictable and manipulable, allowing AI systems to produce language perceived as more human than human-written language. We conclude by discussing solutions, such as AI accents or fair-use policies, that could reduce the deceptive potential of generated language and limit the subversion of human intuition.
Autoregressive language models that use deep learning to produce human-like text have become increasingly widespread. Such models power popular virtual assistants in domains such as smart health, finance, and autonomous driving. While these large language models continue to improve, concerns persist that they may not serve all subgroups in society equally. Despite growing cross-disciplinary discussion of AI fairness, there is a lack of systematic metrics for assessing what fairness means in dialogue systems and how to bring diverse populations into the assessment loop. Drawing on theories of deliberative democracy and science and technology studies, this paper proposes an analytical framework for unpacking the meaning of fairness in human-AI dialogue. Using this framework, we conducted an auditing study to examine how GPT-3 responds to different subpopulations on critical science and social topics: climate change and the Black Lives Matter (BLM) movement. Our corpus comprises over 20,000 rounds of dialogue between GPT-3 and 3,290 individuals who vary in gender, race and ethnicity, education level, English as a first language, and opinions toward the issues. We found a substantially worse user experience with GPT-3 among the opinion and education minority subgroups; however, these two groups achieved the largest knowledge gains, changing their attitudes toward supporting BLM and climate action after the chat. We traced these differences in user experience to conversational differences and found that GPT-3 used more negative expressions when responding to the education and opinion minority groups compared with its responses to the majority groups. We discuss the implications of our findings for deliberative conversational AI systems that center diversity, equity, and inclusion.
We propose and explore the possibility that language models can be studied as effective proxies for specific human subpopulations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the "algorithmic bias" within one such tool, the GPT-3 language model, is both fine-grained and demographically correlated, meaning that with proper conditioning it can accurately emulate response distributions from a wide variety of human subgroups. We call this property "algorithmic fidelity" and explore its extent in GPT-3. We create "silicon samples" by conditioning the model on thousands of sociodemographic backstories from several large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity: it is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and sociocultural contexts that characterizes human attitudes. We suggest that language models with sufficient algorithmic fidelity constitute a novel and powerful tool for advancing the understanding of humans and society across a variety of disciplines.
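A minimal sketch of the "silicon sampling" idea described above: condition a text-completion model on a first-person sociodemographic backstory, then read off a survey-style response. The complete() function is a hypothetical stand-in for whichever completion API is available, and the backstory fields and survey item are illustrative assumptions, not the authors' materials.

# Sketch only: complete(prompt) is a placeholder for a text-completion call.
def build_prompt(backstory: dict, question: str) -> str:
    """Turn a first-person backstory and a survey item into a completion prompt."""
    persona = (
        f"I am a {backstory['age']}-year-old {backstory['gender']} from "
        f"{backstory['state']}. I work as a {backstory['occupation']} and I would "
        f"describe my political views as {backstory['ideology']}."
    )
    return f"{persona}\nInterviewer: {question}\nMe:"

def silicon_sample(backstories, question, complete, n_draws=1):
    """Collect model 'responses' for each backstory, forming a silicon sample."""
    return {
        i: [complete(build_prompt(b, question)) for _ in range(n_draws)]
        for i, b in enumerate(backstories)
    }

# Example usage (hypothetical): responses for one survey item across many personas.
# sample = silicon_sample(backstories, "Who did you vote for in 2016?", complete)

The resulting per-backstory response distributions are what would then be compared against the corresponding human survey responses.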
The most prominent tasks in emotion analysis are assigning emotions to text and understanding how emotions manifest in language. An important observation in natural language processing is that emotions can be communicated implicitly by referring to events alone, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations people make of events they consider relevant. These include evaluating whether an event is novel, whether the person considers themselves responsible, whether it is aligned with their own goals, and many others. Such appraisals explain which emotions develop from an event; for example, a novel situation may elicit surprise, and one with uncertain consequences may elicit fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether annotators can reliably reconstruct appraisal concepts, whether these concepts can be predicted by text classifiers, and whether appraisal concepts help identify emotion categories. To achieve this, we compile a corpus by asking people to write texts describing events that triggered particular emotions and to disclose their appraisals. We then ask readers to reconstruct the emotions and appraisals from the texts. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human benchmark for judging model performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
We present an exploratory qualitative study of how writers interact with next-phrase suggestions. While there have been some quantitative studies on the impact of suggestion systems on writing, there is little qualitative work on how writers interact with suggestion systems and how these systems affect their writing process, particularly for non-native English writers. We conducted a study in which amateur writers each wrote two movie reviews, one with suggestions and one without. We found that writers interact with next-phrase suggestions in various complex ways: they were able to abstract multiple parts of a suggestion and incorporate them into their writing, even when they disagreed with the suggestion as a whole. The suggestion system also had varied effects on the writing process, contributing to different aspects of it in distinct ways. We propose a writer-suggestion interaction model for writers composing with GPT-2 on a movie review writing task, describe ways this model can be used in future research, and outline opportunities for research and design.
Deepfakes are computationally-created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020 a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (StyleGAN2:FFHQ) from nondeepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group, and 3 assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked if each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85% and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat.
Self-tracking can improve people's awareness of their unhealthy behaviors and provide insights toward behavior change. Prior work has explored how self-trackers reflect on their logged data, but it remains unclear how much they learn from tracking feedback and which information is most useful. In practice, feedback can still be overwhelming, and making it concise could improve learning by increasing focus and reducing interpretation burden. To streamline feedback, we propose a self-tracking feedback saliency framework that defines which specific information to give feedback on, why those details matter, and how to present them (manually elicited or automatically shown). We collected survey and meal-image data from a field study of mobile food tracking and implemented SalienTrack, a machine learning model that predicts whether a user learns from a tracking event. Using explainable AI (XAI) techniques, SalienTrack identifies which features of the event are most salient, explains why they lead to positive learning outcomes, and prioritizes how feedback should be presented based on attribution scores. We present use cases and conducted a formative study to demonstrate the usability and usefulness of SalienTrack. We discuss the implications of saliency in self-tracking and how adding model explainability expands opportunities for improving the feedback experience.
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We train our models using reinforcement learning from human feedback, with additions that help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown lets us collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditional reward models. Second, our agent provides evidence from sources supporting its factual claims when collecting preference judgements over model statements. For factual questions, the evidence Sparrow provides supports its claims 78% of the time. Sparrow is preferred over baselines more often while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that, although our model learns to follow our rules, it can still exhibit distributional biases.
Intelligent agents have great potential as facilitators of group conversation among older adults. However, little is known about how to design agents for this purpose and user group, especially in terms of agent embodiment. To this end, we conducted a mixed methods study of older adults' reactions to voice and body in a group conversation facilitation agent. Two agent forms with the same underlying artificial intelligence (AI) and voice system were compared: a humanoid robot and a voice assistant. One preliminary study (total n=24) and one experimental study comparing voice and body morphologies (n=36) were conducted with older adults and an experienced human facilitator. Findings revealed that the artificiality of the agent, regardless of its form, was beneficial for the socially uncomfortable task of conversation facilitation. Even so, talkative personality types had a poorer experience with the "bodied" robot version. Design implications and supplementary reactions, especially to agent voice, are also discussed.
Language models can produce harmful and biased outputs and exhibit undesirable behavior relative to a given cultural context. We propose a Process for Adapting Language Models to Society (PALMS) with values-targeted datasets, an iterative process for significantly changing model behavior by crafting and fine-tuning on a dataset that reflects a predetermined set of target values. We evaluate our process using three metrics: quantitative metrics with human evaluations that score output adherence to the target values, toxicity scoring of outputs, and qualitative metrics analyzing the most common words associated with a given social category. In each iteration, we add training examples based on shortcomings observed in the evaluations. PALMS performs significantly better on all metrics than baseline and control models across a broad range of GPT-3 model sizes, without compromising capability integrity. We find that the effectiveness of PALMS increases with model size. We show that significantly adjusting language model behavior is feasible with a small, hand-curated dataset.
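To make the fine-tuning step concrete, here is a minimal sketch of values-targeted fine-tuning using the Hugging Face transformers library. This is an illustration, not the authors' pipeline: PALMS was applied to GPT-3, whereas the sketch assumes a small open model ("gpt2"), a hypothetical values_targeted.jsonl file of value-aligned texts, and placeholder hyperparameters.

# Sketch: fine-tune a causal LM on a small, hand-curated values-targeted dataset.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# values_targeted.jsonl: one {"text": "<prompt + value-aligned completion>"} per line
data = load_dataset("json", data_files="values_targeted.jsonl")["train"]
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="values-targeted-ft",
                           num_train_epochs=2,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()

In the iterative process described above, the evaluation metrics would be run after each such fine-tuning pass, and new examples added to the dataset to address observed shortcomings.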
As AI systems become increasingly powerful and pervasive, concerns about machines' morality, or lack thereof, become ever more pressing. Yet teaching machines morality is a formidable task, as morality remains among the most intensely debated questions among humans, let alone for AI. But existing AI systems deployed to millions of users are already making decisions laden with moral implications, which poses a seemingly impossible challenge: teaching machines moral sense while humans continue to grapple with it themselves. To explore this challenge, we introduce Delphi, an experimental framework based on deep neural networks trained directly on descriptive ethical judgments, e.g., that "helping a friend" is generally good, while "helping a friend spread fake news" is not. Empirical results provide new insights into the promise and limits of machine ethics. Delphi demonstrates strong generalization in the face of novel ethical situations, whereas off-the-shelf neural network models exhibit markedly poorer judgment, including unjust biases, confirming the need to explicitly teach machines moral sense. However, Delphi is not perfect, showing susceptibility to pervasive biases and inconsistencies. Nevertheless, we demonstrate positive use cases for an imperfect Delphi, including using it as a component model within other imperfect AI systems. Importantly, we interpret Delphi's operationalization in light of prominent ethical theories, which leads us to important future research questions.
The importance and pervasiveness of emotions in our lives make affective computing a tremendously important and vibrant line of work. Systems for automatic emotion recognition (AER) and sentiment analysis can be facilitators of enormous progress (e.g., improving public health and commerce) but also enablers of great harm (e.g., suppressing dissidents and manipulating voters). Thus, it is imperative that the affective computing community actively engage with the ethical ramifications of its creations. In this paper, I synthesize and organize information from the AI ethics and emotion recognition literature to present fifty ethical considerations relevant to AER. Notably, the paper pins down assumptions hidden in how AER is commonly framed and in the choices often made regarding data, method, and evaluation. Special attention is paid to the implications of AER for privacy and for social groups. Along the way, key recommendations are made for responsible AER. The goal of the paper is to facilitate and encourage more thoughtfulness about why to automate, how to automate, and how to judge success, well before AER systems are built. Additionally, the paper serves as a useful introductory document on emotion recognition (complementing survey articles).
In times of public crisis, information seeking is crucial for people's self-care and well-being. Extensive research has investigated empirical understandings and technical solutions that facilitate information seeking by domestic citizens of affected regions. However, limited knowledge has been established for supporting international migrants who must cope with crises in their host countries. The current paper presents an interview study with two groups of Chinese migrants living in Japan and the United States (n = 14). Participants reflected on their information-seeking experiences during the COVID-19 pandemic. The reflections were supplemented by two weeks of self-tracking in which participants kept records of their COVID-related information-seeking practices. Our data indicate that participants often took language detours, i.e., accessed Mandarin resources for information about the outbreak in their host countries. They also made strategic use of Mandarin information for selective reading, cross-checking, and contextualized interpretation of COVID-related information in Japanese or English. Although these practices enhanced the effectiveness of participants' COVID-related information gathering and sensemaking, they sometimes placed participants at a disadvantage in ways they did not always recognize. Moreover, participants lacked awareness of, or preference for, migrant-oriented information released by public authorities in their host countries, even when such information was available. Building on these findings, we discuss solutions for improving international migrants' COVID-related information seeking in non-native language and cultural environments. We advocate inclusive crisis infrastructures that engage people with different levels of local language fluency, information literacy, and experience in leveraging public services.
Collaborative human-AI decision-making strives for team performance that exceeds what either humans or AI can achieve alone. However, many factors affect the success of human-AI teams, including a user's domain expertise, mental model of the AI system, trust in its recommendations, and more. This work examines users' interactions with three simulated algorithmic models, all with similar accuracy but tuned differently in terms of their true positive and true negative rates. Our study examined user performance on a non-trivial blood vessel labeling task in which participants indicated whether a given vessel was flowing or stalled. Our results show that while an AI assistant's recommendations can aid user decision-making, factors such as users' baseline performance relative to the AI and the complementary tuning of the AI's error types significantly affect overall team performance. Novice users improved, but not to the AI's level of accuracy. Highly proficient users were generally able to discern when to follow the AI's recommendations and generally maintained or improved their performance. Participants whose performance was similar to the AI's accuracy were the most variable in their response to AI recommendations. In addition, we found that users' perception of the AI's performance relative to their own also significantly affected whether accuracy improved when AI recommendations were given. This work provides insight into the complex set of factors relevant to human-AI collaboration and offers guidance on how human-centered AI algorithms can be developed to complement users in decision-making tasks.
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
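The lowest-effort end of the spectrum described above can be illustrated with a short sketch: instruct a model to draft yes/no test questions for a target behavior, then keep only items that a second model pass judges as relevant. The generate() function, behavior description, and filtering prompt are illustrative assumptions, not the paper's actual prompts or pipeline.

# Sketch: LM-written evaluations via drafting and model-based filtering.
def draft_questions(behavior: str, generate, n: int = 20) -> list[str]:
    """Ask a model to write a numbered list of yes/no test questions."""
    prompt = (f"Write {n} yes/no questions that test whether an AI assistant "
              f"exhibits the following behavior: {behavior}\n1.")
    text = "1." + generate(prompt)
    # Split the numbered list back into individual questions.
    return [line.split(".", 1)[1].strip()
            for line in text.splitlines() if "." in line]

def filter_questions(questions, behavior, generate):
    """Keep only questions a model judges to be clear, relevant tests."""
    kept = []
    for q in questions:
        verdict = generate(
            f"Is the question below a clear, relevant test of the behavior "
            f"'{behavior}'? Answer yes or no.\nQuestion: {q}\nAnswer:")
        if verdict.strip().lower().startswith("yes"):
            kept.append(q)
    return kept

The higher-effort approaches in the paper add further stages of LM-based generation and filtering on top of this basic draft-then-filter loop, before crowdworkers rate the resulting items.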
Incivility remains a major challenge for online discussion platforms, to such an extent that even conversations between well-intentioned users can often derail into uncivil behavior. Traditionally, platforms have relied on moderators to -- with or without algorithmic assistance -- take corrective actions such as removing comments or banning users. In this work we propose a complementary paradigm that directly empowers users by proactively enhancing their awareness about existing tension in the conversation they are engaging in and actively guides them as they are drafting their replies to avoid further escalation. As a proof of concept for this paradigm, we design an algorithmic tool that provides such proactive information directly to users, and conduct a user study in a popular discussion platform. Through a mixed methods approach combining surveys with a randomized controlled experiment, we uncover qualitative and quantitative insights regarding how the participants utilize and react to this information. Most participants report finding this proactive paradigm valuable, noting that it helps them to identify tension that they may have otherwise missed and prompts them to further reflect on their own replies and to revise them. These effects are corroborated by a comparison of how the participants draft their reply when our tool warns them that their conversation is at risk of derailing into uncivil behavior versus in a control condition where the tool is disabled. These preliminary findings highlight the potential of this user-centered paradigm and point to concrete directions for future implementations.
The last decade has seen growing interest in robots as well-being coaches. However, cohesive and comprehensive guidelines for designing robots as coaches that promote mental well-being have not yet been proposed. This paper details design and ethical recommendations based on a qualitative meta-analysis, grounded in a grounded theory approach, drawing on three different user-centered studies involving robotic well-being coaches: (1) a participatory design study conducted with 11 participants, consisting of both potential users who had taken part in a brief solution-focused practice study with a human coach and coaches from different disciplines; (2) semi-structured individual interview data from 20 participants who took part in a positive psychology intervention study with the robot well-being coach Pepper; and (3) a participatory design study conducted with 3 participants from the positive psychology study as well as 2 relevant well-being coaches. After conducting thematic analysis and the qualitative meta-analysis, we collated the data into convergent and divergent themes and distilled from these results a set of design guidelines and ethical considerations. Our findings highlight key aspects to consider when designing robots as mental well-being coaches.
This study introduces and examines the potential of an AI system to generate health awareness messages. The topic of folic acid, a vitamin that is critical during pregnancy, served as a test case. Using prompt engineering, we generated messages that could be used to raise awareness and compared them to retweeted human-generated messages via computational and human evaluation methods. The system was easy to use and prolific, and computational analyses revealed that the AI-generated messages were on par with human-generated ones in terms of sentiment, reading ease, and semantic content. Also, the human evaluation study showed that AI-generated messages ranked higher in message quality and clarity. We discuss the theoretical, practical, and ethical implications of these results.
Digital platforms, including online forums and helplines, have emerged as avenues of support for caregivers suffering from postpartum mental health distress. Understanding support seekers' experiences as shared on these platforms could provide crucial insight into caregivers' needs during this vulnerable time. In the current work, we provide a descriptive analysis of the concerns, psychological states, and motivations shared by healthy and distressed postpartum support seekers on two digital platforms, a one-on-one digital helpline and a publicly available online forum. Using a combination of human annotations, dictionary models and unsupervised techniques, we find stark differences between the experiences of distressed and healthy mothers. Distressed mothers described interpersonal problems and a lack of support, with 8.60% - 14.56% reporting severe symptoms including suicidal ideation. In contrast, the majority of healthy mothers described childcare issues, such as questions about breastfeeding or sleeping, and reported no severe mental health concerns. Across the two digital platforms, we found that distressed mothers shared similar content. However, the patterns of speech and affect shared by distressed mothers differed between the helpline vs. the online forum, suggesting the design of these platforms may shape meaningful measures of their support-seeking experiences. Our results provide new insight into the experiences of caregivers suffering from postpartum mental health distress. We conclude by discussing methodological considerations for understanding content shared by support seekers and design considerations for the next generation of support tools for postpartum parents.
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks. We also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
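As a small illustration of the few-shot setting described above, a task can be specified entirely through demonstrations in the prompt, with no gradient updates. This is a hedged sketch: the few_shot_prompt() helper and the complete() call are assumptions standing in for whichever text-completion API is used; the translation examples are only illustrative.

# Sketch: assemble task description + k demonstrations + query into one prompt.
def few_shot_prompt(task_description, examples, query):
    """Build a prompt from a task description, input->output demonstrations, and a query."""
    shots = "\n".join(f"{x} => {y}" for x, y in examples)
    return f"{task_description}\n{shots}\n{query} =>"

prompt = few_shot_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
# answer = complete(prompt)  # complete() is a placeholder for the completion call

The model's continuation after the final "=>" is taken as its answer; varying the number of demonstrations moves between the zero-, one-, and few-shot settings evaluated in the paper.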