While vision-and-language models perform well on tasks such as visual question answering, they struggle with basic human commonsense reasoning skills. In this work we introduce WinoGAViL: an online game for collecting vision-and-language associations (e.g., werewolves to a full moon), used as a dynamic benchmark to evaluate state-of-the-art models. Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player has to identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We use the game to collect 3.5K instances, finding that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) scores 52%, succeeding mostly where the cue is visually salient. Our analysis, along with the feedback we collect from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. We release the dataset, code, and interactive game, aiming to allow future data collection that can be used to develop models with better association abilities.
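For reference, the Jaccard index behind the >90% human-intuitiveness figure is simply set intersection over union between the images a player selects and the gold association set. A minimal sketch (the function name and example sets are illustrative, not from the paper's released code):

```python
def jaccard_index(predicted: set, gold: set) -> float:
    """Intersection over union of two sets; 1.0 means a perfect match."""
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

# Example: a player selects 3 of the 4 gold images plus 1 extra candidate.
print(jaccard_index({"img1", "img2", "img3", "img5"},
                    {"img1", "img2", "img3", "img4"}))  # 0.6
```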
A core process in human cognition is analogical mapping: the ability to identify a similar relational structure between different situations. We introduce a novel task, Visual Analogies of Situation Recognition, adapting the classical word-analogy task into the visual domain. Given a triplet of images, the task is to select an image candidate B' that completes the analogy (A to A' is like B to what?). Unlike previous work on visual analogy that focused on simple image transformations, we tackle complex analogies requiring understanding of scenes. We leverage situation recognition annotations and the CLIP model to generate a large set of 500k candidate analogies. Crowdsourced annotations for a sample of the data indicate that humans agree with the dataset label ~80% of the time (chance level 25%). Furthermore, we use human annotations to create a gold-standard dataset of 3,820 validated analogies. Our experiments demonstrate that state-of-the-art models do well when distractors are chosen randomly (~86%), but struggle with carefully chosen distractors (~53%, compared to 90% human accuracy). We hope our dataset will encourage the development of new analogy-making models. Website: https://vasr-dataset.github.io/
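One classical way to operationalize "A is to A' as B is to B'" in an embedding space is vector arithmetic over image embeddings. The sketch below illustrates only that general idea; VASR's actual candidate generation additionally relies on situation-recognition annotations, and the embeddings here are random stand-ins for CLIP features:

```python
# Analogy completion via embedding arithmetic: B' ~ B + (A' - A).
import torch

def complete_analogy(emb_a, emb_a_prime, emb_b, candidate_embs):
    """Return the index of the candidate closest to B + (A' - A)."""
    target = emb_b + (emb_a_prime - emb_a)
    target = target / target.norm()
    candidates = candidate_embs / candidate_embs.norm(dim=-1, keepdim=True)
    return int((candidates @ target).argmax())

embs = torch.randn(3, 512)   # stand-ins for image embeddings of A, A', B
cands = torch.randn(4, 512)  # stand-ins for the four image candidates
print(complete_analogy(embs[0], embs[1], embs[2], cands))
```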
Humans have a remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost can't help but draw probable inferences based on our everyday experience and knowledge about the world. For example, if we see a "20 mph" sign alongside a road, we might assume the street sits in a residential area (rather than on a highway), even if no houses are pictured. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents. We adopt a free-viewing paradigm: participants first observe and identify salient clues within images (e.g., objects, actions), and then, given a clue, provide a plausible inference about the scene. In total, we collect 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning. We evaluate the capacity of models to: i) retrieve relevant inferences from a large candidate corpus; ii) localize evidence for inferences via bounding boxes; and iii) compare plausible inferences to match human judgments on a newly collected diagnostic corpus of 19K Likert-scale judgments. While we find that fine-tuning CLIP-RN50x64 with a multitask objective outperforms strong baselines, significant headroom remains between model performance and human agreement. Data, models, and the leaderboard are available at http://visualabduction.com/
For AI to be safely deployed in real-world settings such as hospitals, schools, and workplaces, it must be able to robustly reason about the physical world. Fundamental to this reasoning is physical common sense: understanding the physical properties and affordances of available objects, how they can be manipulated, and how they interact with other objects. Physical commonsense reasoning is fundamentally a multisensory task, since physical properties are manifested through multiple modalities, two of which are vision and acoustics. Our paper takes a step toward real-world physical commonsense reasoning by contributing PACS: the first audiovisual benchmark annotated for physical commonsense attributes. PACS contains 13,400 question-answer pairs, involving 1,377 unique physical commonsense questions and 1,526 videos. Our dataset provides new opportunities to advance research on physical reasoning by bringing audio in as a core component of this multimodal problem. Using PACS, we evaluate multiple state-of-the-art models on our new challenging task. While some models show promising results (70% accuracy), they all fall short of human performance (95% accuracy). We conclude by demonstrating the importance of multimodal reasoning and outlining possible avenues for future research.
We challenge AI models to "demonstrate understanding" of the sophisticated multimodal humor of The New Yorker Caption Contest. Specifically, we develop three carefully circumscribed tasks that require grasping the potentially complex and unexpected relationships between image and caption, as well as the similarly complex and unexpected allusions to the wide variety of human experience; these are the hallmarks of a New Yorker-caliber cartoon. We investigate vision-and-language models that take the cartoon pixels and caption directly as input, as well as language-only models for which we circumvent image processing by providing textual descriptions of the image. Even when we provide rich, multifaceted annotations for the cartoon images, we identify performance gaps between high-quality machine learning models (e.g., a fine-tuned 175B-parameter language model) and humans. We publicly release our corpora, including annotations describing the image's locations/entities, what is unusual about the scene, and an explanation of the joke.
We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly annotated dataset that, with >1k distinct stimuli, is orders of magnitude larger and more diverse than prior resources. It is both visually and linguistically richer, moving beyond whole shape descriptions to include segmentation maps and part labels. We use this resource to evaluate the abstract visual reasoning capacities of recent multi-modal models. We observe that pre-trained weights demonstrate limited abstract reasoning, which dramatically improves with fine-tuning. We also observe that explicitly describing parts aids abstract reasoning for both humans and models, especially when jointly encoding the linguistic and visual inputs. KiloGram is available at https://lil.nlp.cornell.edu/kilogram .
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (∼45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (∼65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
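The distractor-selection idea behind Adversarial Matching can be pictured as an assignment problem: recycle other questions' correct answers as wrong choices, preferring answers that are relevant to a question but unlike its own gold answer. A simplified sketch, with random score matrices standing in for the trained relevance and similarity models the paper uses:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adversarial_match(relevance, similarity, lam=0.5):
    """relevance[i, j]: how well gold answer j fits question i.
    similarity[i, j]: how close answer j is to question i's own gold answer.
    Returns one distractor answer index per question."""
    cost = -(relevance - lam * similarity)   # maximize relevance, penalize similarity
    n = cost.shape[0]
    cost[np.eye(n, dtype=bool)] = 1e9        # forbid reusing a question's own answer
    _, cols = linear_sum_assignment(cost)
    return cols                              # cols[i] = answer assigned to question i

rng = np.random.default_rng(0)
n = 5
print(adversarial_match(rng.random((n, n)), rng.random((n, n))))
```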
Large datasets of paired images and text have become increasingly popular for learning generic representations for vision and vision-and-language tasks. Such datasets have been built by querying search engines or collecting HTML alt-text; since web data is noisy, they require complex filtering pipelines to maintain quality. We explore alternate data sources for collecting high-quality data with minimal filtering. We introduce RedCaps, a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. We collect data from a manually curated set of subreddits, which give coarse image labels and allow us to steer the dataset composition without labeling individual instances. We show that captioning models trained on RedCaps produce rich and varied captions preferred by humans, and learn visual representations that transfer to many downstream tasks.
Image-text matching (ITM) is a common task for evaluating vision-and-language (VL) models. However, existing ITM benchmarks have a significant limitation: they contain many missing correspondences, originating from the data construction process itself. For example, a caption is matched with only one image even though it could also match other, similar images, and vice versa. To correct these massive false negatives, we construct the Extended COCO Validation (ECCV) Caption dataset by supplying the missing associations with machine and human annotators. We employ five state-of-the-art ITM models with diverse properties in our annotation process. Compared to the original MS-COCO, our dataset provides ×3.6 more positive image-to-caption associations and ×8.5 more caption-to-image associations. We also propose using a rank-based metric, mAP@R, rather than the popular Recall@K (R@K). We re-evaluate 25 existing VL models on existing and proposed benchmarks. We find that existing benchmarks, such as COCO 1K R@K, COCO 5K R@K, and CxC R@1, are highly correlated with each other, while the rankings change when we shift to ECCV mAP@R. Finally, we delve into the effect of the bias introduced by the choice of machine annotators. Source code and dataset are available at https://github.com/naver-ai/eccv-caption
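The contrast between the two metrics is easy to see on a single query's ranked retrieval list: Recall@K only asks whether any positive appears in the top K, while mAP@R (with R the number of ground-truth positives for the query) also credits how many positives rank highly. A minimal single-query sketch, not the authors' evaluation code:

```python
def recall_at_k(ranked_relevance, k):
    """1.0 if any relevant item appears in the top k, else 0.0."""
    return float(any(ranked_relevance[:k]))

def map_at_r(ranked_relevance, r):
    """Mean over the first R ranks of precision@i, counting hits only."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance[:r], start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / r if r else 0.0

ranked = [1, 0, 1, 1, 0]       # relevance of the top-5 retrieved items
print(recall_at_k(ranked, 1))  # 1.0: the top hit is relevant
print(map_at_r(ranked, 3))     # ~0.56: rewards ranking all positives highly
```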
The Winograd Schema Challenge, a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge, was proposed by Hector Levesque in 2011. By 2019, a number of AI systems based on large pre-trained transformer language models, fine-tuned on these kinds of problems, achieved better than 90% accuracy. In this paper, we review the history of the Winograd Schema Challenge and assess its significance.
Current visual question answering (VQA) tasks mainly consider answering human-annotated questions about natural images. However, aside from natural images, abstract diagrams with semantic richness remain understudied in visual understanding and reasoning research. In this work, we introduce a new challenge of Icon Question Answering (IconQA), with the goal of answering questions in icon-image contexts. We release IconQA, a large-scale dataset consisting of 107,439 questions and three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. The IconQA dataset is inspired by real-world diagram word problems that highlight the importance of abstract diagram understanding and comprehensive cognitive reasoning. Thus, IconQA requires not only perception skills such as object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric reasoning, commonsense reasoning, and arithmetic reasoning. To help potential IconQA models learn semantic representations of icon images, we further release an icon dataset, Icon645, which contains 645,687 colored icons across 377 classes. We conduct extensive user studies and blind experiments, and reproduce a wide range of advanced VQA methods to benchmark the IconQA task. In addition, we develop a strong IconQA baseline, Patch-TRM, which applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset. IconQA and Icon645 are available at https://iconqa.github.io.
The most prominent tasks in emotion analysis are to assign emotions to texts and to understand how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluation made by people living through an event they consider relevant. These include assessments of whether an event is novel, whether the person considers themselves responsible, whether it is in line with their own goals, and many others. Such appraisals explain which emotions develop from an event; for example, a novel situation can induce surprise, and one with uncertain consequences can evoke fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether appraisal concepts can be reliably reconstructed by annotators, whether they can be predicted by text classifiers, and whether appraisal concepts help to identify emotion categories. To achieve this, we compile a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. We then ask readers to reconstruct the emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text, and provides a human baseline against which to judge model performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the categorization of emotions in text.
When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background. To investigate question answering with prior knowledge, we present COMMONSENSEQA: a challenging new dataset for commonsense question answering. To capture common sense beyond associations, we extract from CONCEPTNET (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept. Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts. This encourages workers to create questions with complex semantics that often require prior knowledge. We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines. Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.
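The seeding step can be pictured as grouping knowledge-graph triples by (source, relation) and keeping groups with several targets, one of which becomes the answer and the rest distractors. A sketch with made-up ConceptNet-style triples (the triples and the threshold of three are illustrative):

```python
from collections import defaultdict

# Illustrative stand-ins, not actual ConceptNet data.
triples = [
    ("river", "AtLocation", "valley"),
    ("river", "AtLocation", "bridge"),
    ("river", "AtLocation", "canyon"),
    ("fork", "UsedFor", "eating"),
]

groups = defaultdict(list)
for source, relation, target in triples:
    groups[(source, relation)].append(target)

# Keep (source, relation) pairs with >= 3 targets as seeds for
# crowd-authored questions that discriminate between the targets.
seeds = {k: v for k, v in groups.items() if len(v) >= 3}
print(seeds)  # {('river', 'AtLocation'): ['valley', 'bridge', 'canyon']}
```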
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our models, with two new additions to help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditioned reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, the evidence Sparrow provides supports its claims 78% of the time. Sparrow is preferred over baselines more often while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that although our model learns to follow our rules, it can still exhibit distributional biases.
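At the core of learning from such human preference judgements is a preference-based reward model. The sketch below shows a generic Bradley-Terry-style preference loss of the kind commonly used in RLHF pipelines; it is not Sparrow's implementation, and the rule-conditioning described above is omitted:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected): pushes the reward of the
    rater-preferred response above that of the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy per-example rewards for two preference pairs.
r_good = torch.tensor([1.2, 0.3])
r_bad = torch.tensor([0.1, 0.5])
print(preference_loss(r_good, r_bad))
```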
In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks. The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.
Recent advances in multimodal training use textual descriptions to significantly enhance machine understanding of images and videos. Yet it remains unclear to what extent language can fully capture sensory experiences across different modalities. A well-established approach for characterizing sensory experiences relies on similarity judgments: the degree to which people perceive two distinct stimuli as similar. We explore the relation between human similarity judgments and language in a series of large-scale behavioral studies (n = 1,823 participants) across three modalities (images, audio, and video) and two types of text descriptors: simple word tags and free-text captions. In doing so, we introduce a novel adaptive pipeline for tag mining that is both efficient and domain-general. We show that our prediction pipelines based on text descriptors exhibit excellent performance, comparing them against 611 baseline models based on vision-, audio-, and video-processing architectures. We further show that the degree to which text descriptors and models predict human similarity varies across and within modalities. Taken together, these studies illustrate the value of integrating machine learning and cognitive science approaches to better understand the similarities and differences between human and machine representations. We present an interactive visualization at https://words-are-all-you-need.s3.amazonaws.com/index.html for exploring the similarity between stimuli as experienced by humans and the different methods reported in this paper.
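The evaluation logic of such a text-descriptor pipeline can be sketched compactly: embed each stimulus's caption, take pairwise cosine similarities, and correlate them with human judgments. The embeddings and judgments below are random stand-ins, so this shows the mechanics only, not the paper's pipeline:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
caption_embs = rng.normal(size=(10, 64))  # one text embedding per stimulus
human_sims = rng.random(45)               # one human judgment per pair

normed = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
cosine = normed @ normed.T
iu = np.triu_indices(10, k=1)             # the 45 unordered stimulus pairs
predicted_sims = cosine[iu]

# Rank correlation between predicted and human similarity.
print(spearmanr(predicted_sims, human_sims).correlation)
```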
To enable building and testing models for long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Current models perform poorly on this task (55.4%) and significantly lag behind human performance (93.5%).
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
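For concreteness, zero-shot classification with the released model follows the usage pattern in the repository linked above (the image path and class prompts here are placeholders):

```python
# pip install git+https://github.com/openai/CLIP.git
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
prompts = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    # Score the image against each natural-language class description.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(dict(zip(prompts, probs[0])))
```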
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
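The consensus-style accuracy used for open-ended VQA counts an answer as fully correct if enough of the ten collected human answers agree with it. A simplified sketch (the official evaluation additionally normalizes answer strings and averages over subsets of the human answers):

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """An answer is fully correct if at least 3 of the 10 humans gave it."""
    matches = sum(a == predicted for a in human_answers)
    return min(matches / 3.0, 1.0)

print(vqa_accuracy("red", ["red"] * 4 + ["maroon"] * 6))  # 1.0
print(vqa_accuracy("maroon", ["red"] * 9 + ["maroon"]))   # ~0.33
```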
As text generated by large language models proliferates, it becomes vital to understand how humans engage with such text, and whether or not they are able to detect when the text they are reading did not originate with a human writer. Prior work on human detection of generated text focuses on the case where an entire passage is either human-written or machine-generated. In this paper, we study a more realistic setting where text begins as human-written and transitions to being generated by state-of-the-art neural language models. We show that, while annotators often struggle at this task, there is substantial variance in annotator skill and that given proper incentives, annotators can improve at this task over time. Furthermore, we conduct a detailed comparison study and analyze how a variety of variables (model size, decoding strategy, fine-tuning, prompt genre, etc.) affect human detection performance. Finally, we collect error annotations from our participants and use them to show that certain textual genres influence models to make different types of errors and that certain sentence-level features correlate highly with annotator selection. We release the RoFT dataset: a collection of over 21,000 human annotations paired with error classifications to encourage future work in human detection and evaluation of generated text.