In this paper we examine the concept of complexity as it applies to generative and evolutionary art and design. Complexity has many different, discipline-specific definitions, such as the complexity of physical systems (entropy), algorithmic measures of information complexity, and the study of "complex systems". We apply a series of different complexity measures to three different evolutionary art datasets and look at the correlations between complexity and the artists' individual aesthetic judgements (in the case of two datasets), or the physically measured complexity of generated 3D forms. Our results show that the degree of correlation differs for each dataset and measure, suggesting there is no overall "best" measure. However, specific measures do perform well on individual datasets, indicating that careful selection can increase the value of using such measures. We then assess the value of complexity measures for viewers through a large-scale survey of perceptions of complexity and aesthetics. We conclude by discussing the value of direct measures in generative and evolutionary art, bringing in recent findings from neuroimaging and psychology, which point to the many extrinsic factors in human aesthetic judgement that lie beyond the measurable properties of the object being judged.
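As a concrete illustration of the algorithmic, information-theoretic family of measures mentioned above, image complexity is often approximated by compressibility. The sketch below is a generic example of this idea, not necessarily one of the specific measures evaluated in the paper.

```python
import zlib
import numpy as np

def compression_complexity(pixels: np.ndarray) -> float:
    """Approximate image complexity by zlib compressibility: close to 0 for
    highly regular images, around 1 for noise-like, incompressible ones."""
    raw = np.asarray(pixels, dtype=np.uint8).tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

rng = np.random.default_rng(0)
flat = np.full((256, 256), 128, dtype=np.uint8)            # minimal complexity
noise = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # maximal complexity
print(compression_complexity(flat), compression_complexity(noise))
```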
Deepfakes are computationally-created entities that falsely represent reality. They can take image, video, and audio modalities, and pose a threat to many areas of systems and societies, comprising a topic of interest to various aspects of cybersecurity and cybersafety. In 2020 a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to identify image deepfakes of human faces (StyleGAN2:FFHQ) from nondeepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group, and 3 assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked if each image was AI-generated or not, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance and none of the interventions significantly improved this. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, this accuracy across images ranged quite evenly between 85% and 30%, with an accuracy of below 50% for one in every five images. We interpret the findings as suggesting that there is a need for an urgent call to action to address this threat.
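The per-image analysis described above amounts to grouping responses by stimulus and contrasting accuracy with reported confidence. A minimal sketch of such an aggregation, using hypothetical column names rather than the study's actual data format:

```python
import pandas as pd

# Hypothetical response log: one row per (participant, image) judgement.
responses = pd.DataFrame({
    "image_id":    ["img01", "img01", "img02", "img02"],
    "is_deepfake": [True, True, False, False],   # ground truth
    "said_fake":   [True, False, False, True],   # participant answer
    "confidence":  [4, 5, 5, 3],                 # e.g. a 1-5 scale
})

responses["correct"] = responses["said_fake"] == responses["is_deepfake"]
per_image = responses.groupby("image_id").agg(
    accuracy=("correct", "mean"),
    mean_confidence=("confidence", "mean"),
)
print(per_image)
print("Share of images below chance:", (per_image["accuracy"] < 0.5).mean())
```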
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our model, with additions that help human raters judge the agent's behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditioned reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, the evidence Sparrow provides supports its responses 78% of the time. Sparrow is preferred over the baselines while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
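As a rough illustration of how a rule-based reward signal can be combined with a preference reward, consider the generic sketch below; this is not DeepMind's actual Sparrow architecture, and the per-rule head sits on top of a placeholder dialogue embedding rather than a real language model.

```python
import torch
import torch.nn as nn

class RuleRewardHead(nn.Module):
    """Per-rule violation classifier on top of a dialogue embedding
    (in practice the embedding would come from a pretrained LM)."""
    def __init__(self, hidden_dim: int, num_rules: int):
        super().__init__()
        self.rule_heads = nn.Linear(hidden_dim, num_rules)

    def forward(self, dialogue_embedding: torch.Tensor) -> torch.Tensor:
        # Probability that each rule is violated by the dialogue.
        return torch.sigmoid(self.rule_heads(dialogue_embedding))

def combined_reward(preference_score: torch.Tensor,
                    rule_violation_probs: torch.Tensor,
                    penalty: float = 1.0) -> torch.Tensor:
    """Reward for reranking or RL: preference minus rule penalties."""
    return preference_score - penalty * rule_violation_probs.sum(dim=-1)

# Toy usage with random embeddings standing in for LM features.
head = RuleRewardHead(hidden_dim=16, num_rules=5)
emb = torch.randn(2, 16)
print(combined_reward(torch.tensor([0.7, 0.2]), head(emb)))
```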
Collaborative human-AI decision-making aims for team performance that exceeds that of humans or AI alone. However, many factors affect the success of human-AI teams, including the user's domain expertise, mental models of the AI system, trust in its recommendations, and more. This work examines users' interaction with three simulated algorithmic models, all with similar accuracy but tuned differently in terms of their true positive and true negative rates. Our study examines user performance on a non-trivial blood vessel labeling task, in which participants indicate whether a given vessel is flowing or stalled. Our results show that while the AI assistant's recommendations can aid user decision-making, factors such as users' baseline performance relative to the AI and the complementary tuning of AI error types significantly affect overall team performance. Novice users improved, but not to the level of the AI's accuracy. Highly skilled users were generally able to recognise when to follow the AI's recommendations and typically maintained or improved their performance. Participants whose accuracy was similar to that of the AI varied the most in how much the AI's recommendations helped. In addition, we found that users' perception of the AI's performance relative to their own also had a significant impact on whether accuracy improved when AI recommendations were given. This work provides insight into the complexity of factors related to human-AI collaboration and offers guidance on how to develop human-centered AI algorithms that complement users in decision-making tasks.
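One way to construct simulated assistants with similar overall accuracy but different error profiles is to fix the true positive and true negative rates directly. The following sketch (assuming a 50/50 class balance) illustrates the idea only; it is not the study's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ai_labels(truth: np.ndarray, tpr: float, tnr: float) -> np.ndarray:
    """Simulate an AI assistant that flags true positives at rate `tpr`
    and correctly rejects negatives at rate `tnr`."""
    hit_pos = rng.random(truth.size) < tpr
    hit_neg = rng.random(truth.size) < tnr
    return np.where(truth == 1, hit_pos.astype(int), (~hit_neg).astype(int))

truth = rng.integers(0, 2, size=10_000)
# With balanced classes, all three settings have ~80% overall accuracy.
for tpr, tnr in [(0.9, 0.7), (0.8, 0.8), (0.7, 0.9)]:
    ai = simulate_ai_labels(truth, tpr, tnr)
    print(f"TPR={tpr:.1f} TNR={tnr:.1f} -> accuracy={(ai == truth).mean():.3f}")
```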
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that non-interactive performance does not always result in better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
Given the broad capabilities of large language models, it should be possible to work towards a general-purpose, text-based assistant that is aligned with human values, meaning that it is helpful, honest, and harmless. As an initial foray in this direction, we study simple baseline techniques and evaluations, such as prompting. We find that the benefits of modest interventions increase with model scale, generalize to a variety of alignment evaluations, and do not compromise the performance of large models. Next, we investigate scaling trends for several training objectives relevant to alignment, comparing imitation learning, binary discrimination, and ranked preference modeling. We find that ranked preference modeling performs much better than imitation learning and often scales more favorably with model size. In contrast, binary discrimination typically performs and scales very similarly to imitation learning. Finally, we study a "preference model pre-training" stage of training, whose goal is to improve sample efficiency when fine-tuning on human preferences.
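For concreteness, the two discriminative objectives compared above can be sketched as a pairwise ranked preference loss versus an independent binary discrimination loss. This is a generic formulation, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def ranked_preference_loss(score_chosen: torch.Tensor,
                           score_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry style) loss: push the preferred sample's
    score above the rejected one."""
    return -F.logsigmoid(score_chosen - score_rejected).mean()

def binary_discrimination_loss(score_good: torch.Tensor,
                               score_bad: torch.Tensor) -> torch.Tensor:
    """Classify each sample independently as good (1) or bad (0)."""
    logits = torch.cat([score_good, score_bad])
    labels = torch.cat([torch.ones_like(score_good), torch.zeros_like(score_bad)])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Toy scores from a hypothetical preference model.
good, bad = torch.randn(8), torch.randn(8)
print(ranked_preference_loss(good, bad), binary_discrimination_loss(good, bad))
```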
Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past. Afterward, through a review of more than 150 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to specific research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent, and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.
Interaction and cooperation with humans are overarching aspirations of artificial intelligence (AI) research. Recent studies demonstrate that AI agents trained with deep reinforcement learning are capable of collaborating with humans. These studies primarily evaluate human compatibility through "objective" metrics such as task performance, obscuring potential variation in the levels of trust and subjective preference that different agents garner. To better understand the factors shaping subjective preferences in human-agent cooperation, we train deep reinforcement learning agents in Coins, a two-player social dilemma. We recruit participants for a human-agent cooperation study and measure their impressions of the agents they encounter. Participants' perceptions of warmth and competence predict their stated preferences for different agents, above and beyond objective performance metrics. Drawing inspiration from social science and biology research, we subsequently implement a new "partner choice" framework to elicit revealed preferences: after playing an episode with an agent, participants are asked whether they would like to play the next round with the same agent or to play alone. As with stated preferences, social perception better predicts participants' revealed preferences than does objective performance. Given these results, we recommend human-agent interaction researchers routinely incorporate the measurement of social perception and subjective preferences into their studies.
In this paper, we present an approach for predicting trust links between peers in social media, one that is grounded in the artificial intelligence area of multiagent trust modeling. In particular, we propose a data-driven multi-faceted trust modeling framework that incorporates many distinct features for a comprehensive analysis. We focus on demonstrating how clustering of similar users enables a critical new functionality: supporting more personalized, and thus more accurate, predictions for users. Illustrated for the task of trust-aware item recommendation, we evaluate the proposed framework in the context of a large Yelp dataset. We then discuss how improved detection of trusted relationships in social media can help to support online users against misbehaviour and the spread of rumours, in an environment where social networking has recently surged. We conclude with a reflection on a particularly vulnerable user base, older adults, to illustrate the value of reasoning about user groups, along with some future directions expected to integrate known preferences with insights gained through data analysis.
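A minimal sketch of the "cluster similar users, then predict trust links per cluster" idea follows; it uses entirely synthetic placeholder features and labels rather than the Yelp data, and is not the paper's actual model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder per-user behaviour features (e.g. rating counts, review
# similarity) and randomly generated (truster, trustee) pairs with labels.
user_features = rng.random((500, 6))
pair_index = rng.integers(0, 500, size=(2000, 2))
trust_label = rng.integers(0, 2, size=2000)

# Cluster similar users, then learn one trust-link predictor per cluster of
# the truster, giving more personalised predictions than a single global model.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_features)
models = {}
for c in range(5):
    mask = clusters[pair_index[:, 0]] == c
    X = np.hstack([user_features[pair_index[mask, 0]],
                   user_features[pair_index[mask, 1]]])
    models[c] = LogisticRegression(max_iter=1000).fit(X, trust_label[mask])

print({c: int((clusters[pair_index[:, 0]] == c).sum()) for c in models})
```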
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models are both a necessity and a challenge. In this paper, we explore how HCI and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 85 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, fairness, usability, and human-AI team performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
Among the most prominent tasks in emotion analysis are assigning emotions to texts and understanding how emotions manifest in language. An important observation in natural language processing is that emotions can be communicated implicitly by referring to events alone, without an emotion name being mentioned explicitly. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations made by people who consider an event relevant to them. These include evaluations of whether an event is novel, whether the person considers themselves responsible, whether it is aligned with their own goals, and many others. Such appraisals explain which emotions develop from an event: for example, a novel situation may elicit surprise, and one with uncertain consequences may elicit fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goals of understanding whether annotators can reliably reconstruct appraisal concepts, whether they can be predicted by text classifiers, and whether appraisal concepts help to identify emotion categories. To achieve this, we compile a corpus by asking people to write textual descriptions of events that triggered particular emotions and to disclose their appraisals. Then we ask readers to reconstruct the emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text and provides a human baseline for judging model performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
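Predicting appraisal variables from event descriptions can be framed as multi-label text classification. The sketch below uses invented toy examples and illustrative appraisal dimensions, not the paper's corpus or models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical event descriptions and binary appraisal annotations
# (columns: novelty, own_responsibility, goal_conduciveness).
texts = [
    "My landlord raised the rent without warning.",
    "I finally passed the driving test on my third try.",
    "A stranger returned my lost wallet.",
    "I forgot my best friend's birthday.",
]
appraisals = [
    [1, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
]

# One linear classifier per appraisal dimension over TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, appraisals)
print(model.predict(["I unexpectedly won a prize at work."]))
```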
Over the past decade, computer vision, the branch of artificial intelligence aimed at understanding the visual world, has evolved from simply recognizing objects in images to describing pictures, answering questions about images, manipulating robots around physical spaces, and even generating novel visual content. As these tasks and applications have modernized, they have come to rely on ever more data for model training or evaluation. In this chapter, we show how novel interaction strategies can enable new forms of data collection and evaluation for computer vision. First, we present a crowdsourcing interface that speeds up paid data collection by an order of magnitude, feeding the data-hungry nature of modern vision models. Second, we explore methods for increasing volunteer contributions using automated social interventions. Third, we develop a system to ensure that human evaluation of generative vision models is reliable, affordable, and grounded in psychophysics theory. We conclude with future opportunities for human-computer interaction to aid computer vision.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
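The core idea behind a spectral relevance analysis, clustering per-sample relevance heatmaps to surface distinct decision strategies, can be sketched as follows. This is a simplified illustration with random placeholder heatmaps, not the authors' exact SpRAy pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Placeholder stack of per-sample relevance heatmaps (e.g. from an
# attribution method such as LRP), shape (n_samples, H, W).
rng = np.random.default_rng(0)
relevance_maps = rng.random((200, 32, 32))

# Flatten each heatmap and group samples with similar relevance structure.
features = relevance_maps.reshape(len(relevance_maps), -1)
clusters = SpectralClustering(
    n_clusters=4, affinity="nearest_neighbors", random_state=0
).fit_predict(features)

# Inspecting typical images per cluster would reveal distinct strategies,
# e.g. a cluster of predictions driven by a watermark artefact.
print(np.bincount(clusters))
```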
Computer vision (CV) has achieved remarkable results, outperforming humans in several tasks. Nonetheless, it can lead to significant discrimination if not handled properly, as CV systems depend heavily on the data they are fed with and can learn and amplify biases present in such data. Thus, the problem of understanding and discovering bias is of paramount importance. However, there is no comprehensive survey on bias in visual datasets. Hence, the aims of this work are: i) to describe the biases that can manifest in visual datasets; ii) to review the literature on methods for bias discovery and quantification in visual datasets; iii) to discuss existing attempts to collect bias-aware visual datasets. A key conclusion of our study is that the problem of bias discovery and quantification in visual datasets remains open, and there is room for improvement both in terms of methods and in the range of biases that can be addressed. Moreover, there is no such thing as a bias-free dataset, so scientists and practitioners must become aware of the biases in their datasets and make them explicit. To this end, we propose a checklist for spotting different types of bias during the visual dataset collection process.
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Scale-invariance is an open problem in many computer vision subfields. For example, object labels should remain constant across scales, yet model predictions diverge in many cases. This problem gets harder for tasks where the ground-truth labels change with the presentation scale. In image quality assessment (IQA), downsampling attenuates impairments, e.g., blurs or compression artifacts, which can positively affect the impression evoked in subjective studies. To accurately predict perceptual image quality, cross-resolution IQA methods must therefore account for resolution-dependent errors induced by model inadequacies as well as for the perceptual label shifts in the ground truth. We present the first study of its kind that disentangles and examines the two issues separately via KonX, a novel, carefully crafted cross-resolution IQA database. This paper contributes the following: 1. Through KonX, we provide empirical evidence of label shifts caused by changes in the presentation resolution. 2. We show that objective IQA methods have a scale bias, which reduces their predictive performance. 3. We propose a multi-scale and multi-column DNN architecture that improves performance over previous state-of-the-art IQA models for this task, including recent transformers. We thus both raise and address a novel research problem in image quality assessment.
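A minimal sketch of a multi-scale, multi-column regressor for image quality follows, illustrating the general architectural idea rather than the authors' actual KonX model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleIQA(nn.Module):
    """One small convolutional column per input resolution, fused into a
    single predicted quality score (illustrative only)."""
    def __init__(self, scales=(1.0, 0.5, 0.25), feat_dim=32):
        super().__init__()
        self.scales = scales
        self.columns = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            for _ in scales
        ])
        self.head = nn.Linear(feat_dim * len(scales), 1)

    def forward(self, x):
        feats = []
        for scale, column in zip(self.scales, self.columns):
            xs = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode="bilinear", align_corners=False)
            feats.append(column(xs).flatten(1))
        return self.head(torch.cat(feats, dim=1))  # predicted quality score

model = MultiScaleIQA()
print(model(torch.randn(2, 3, 256, 256)).shape)  # -> torch.Size([2, 1])
```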
We propose and explore the possibility that language models can be studied as effective proxies for specific human subpopulations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the "algorithmic bias" within one such tool, the GPT-3 language model, is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property "algorithmic fidelity" and explore its extent in GPT-3. We create "silicon samples" by conditioning the model on thousands of sociodemographic backstories from participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity: it is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and sociocultural context that characterizes human attitudes. We suggest that language models with sufficient algorithmic fidelity constitute a novel and powerful tool to advance the understanding of humans and society across a variety of disciplines.
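Conditioning on a sociodemographic backstory comes down to prompt construction. The sketch below uses invented field names and wording purely for illustration; it is not the paper's actual backstory template.

```python
def backstory_prompt(profile: dict, question: str) -> str:
    """Build a first-person conditioning prompt from a sociodemographic
    profile; the completion from a large LM is then treated as one
    'silicon' respondent. Field names here are illustrative."""
    lines = [
        f"I am {profile['age']} years old.",
        f"I am a {profile['gender']} and I live in {profile['state']}.",
        f"Politically, I consider myself {profile['ideology']}.",
        "",
        f"Interviewer: {question}",
        "Me:",
    ]
    return "\n".join(lines)

profile = {"age": 44, "gender": "woman", "state": "Ohio",
           "ideology": "a moderate conservative"}
print(backstory_prompt(profile, "Who did you vote for in 2016, and why?"))
# The prompt would be sent to a language model (e.g. GPT-3) once per sampled
# profile, and the completions aggregated like survey responses.
```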
This paper presents the implementation of the Aesthetic Bot, an automated Twitter account that posts images of small game maps that are either user-made or generated by an evolutionary system. The bot then prompts users to vote, via a poll posted in the image's thread, for the most aesthetically pleasing map. This creates a rating system that allows direct interaction with the bot in a way that integrates seamlessly into users' regularly updated Twitter content feed. At the end of each round of voting, the bot learns from the vote distribution for each map to mimic user preferences for design and visual aesthetics, in order to generate maps that will win future voting match-ups. We discuss the ongoing results and emergent behaviours that have occurred since the bot began posting generated game maps and engaging with Twitter users.
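One simple way to turn head-to-head poll outcomes into a ranking signal is an Elo-style update. The bot's actual learning procedure trains on the vote distributions themselves, so the following is only an illustrative sketch.

```python
def elo_update(rating_a: float, rating_b: float,
               votes_a: int, votes_b: int, k: float = 32.0):
    """Update two maps' aesthetic ratings from a poll between them.
    The observed score is the share of votes that map A received."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = votes_a / (votes_a + votes_b)
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Map A wins a 30-10 poll against an equally rated map B.
print(elo_update(1500.0, 1500.0, votes_a=30, votes_b=10))
```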
A growing number of oversight boards and regulatory bodies seek to monitor and govern algorithms that make decisions about people's lives. Prior work has explored how people believe algorithmic decisions should be made, but there is little understanding of how personal factors, such as sociodemographics or direct experience with a decision-making scenario, shape their ethical views. We take a step towards filling this gap by exploring how people's perceptions of one aspect of procedural algorithmic fairness (the fairness of using particular features in an algorithmic decision) relate to their (i) demographics (age, education, gender, race, political views) and (ii) personal experience with the algorithmic decision-making scenario. We find that political views and personal experience with the algorithmic decision-making context significantly influence perceptions of the fairness of using different features for bail decisions. Drawing on our results, we discuss the implications for stakeholder engagement and algorithmic oversight, including the need to consider multiple dimensions of diversity when composing oversight and regulatory bodies.