Image-Text Matching (ITM) is a common task for evaluating vision-and-language (VL) models. However, existing ITM benchmarks have a significant limitation: they contain many missing correspondences, originating from the data construction process itself. For example, a caption is matched to only one image even though it could also describe other similar images, and vice versa. To correct the massive false negatives, we construct the Extended COCO Validation (ECCV) Caption dataset by supplying the missing associations with machine and human annotators. We employ five state-of-the-art ITM models with diverse properties for our annotation process. Compared to the original MS-COCO, our dataset provides ×3.6 positive image-to-caption associations and ×8.5 caption-to-image associations. We also propose to use a ranking-based metric, mAP@R, rather than the popular Recall@K (R@K). We re-evaluate 25 existing VL models on existing and proposed benchmarks. Our findings are that existing benchmarks, such as COCO 1K R@K, COCO 5K R@K, and CxC R@1, are highly correlated with each other, while the rankings change when we shift to ECCV mAP@R. Lastly, we delve into the effect of the bias introduced by the choice of machine annotators. Source code and dataset are available at https://github.com/naver-ai/eccv-caption
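For readers less familiar with the two metrics, the contrast is easy to see in code. Below is a minimal NumPy sketch (the function names and the toy relevance matrix are illustrative, not taken from the released eccv-caption code) comparing Recall@K, which only asks whether any positive appears in the top K, with mAP@R, which scores all R positives of a query:

```python
import numpy as np

def recall_at_k(ranked_rel, k):
    """ranked_rel: (num_queries, num_gallery) 0/1 relevance, already sorted by model score.
    A query counts as a hit if any positive appears in its top-k results."""
    return float(np.mean(ranked_rel[:, :k].any(axis=1)))

def map_at_r(ranked_rel):
    """mAP@R: for a query with R positives, average precision over the top-R ranks."""
    scores = []
    for rel in ranked_rel:
        r = int(rel.sum())
        if r == 0:
            continue
        top = rel[:r]
        precisions = np.cumsum(top) / (np.arange(r) + 1)
        scores.append(float((precisions * top).sum() / r))
    return float(np.mean(scores))

# Toy query with 3 positives ranked at positions 1, 4, and 5 out of 6.
rel = np.array([[1, 0, 0, 1, 1, 0]])
print(recall_at_k(rel, 1))  # 1.0  -- R@1 is already saturated
print(map_at_r(rel))        # ~0.33 -- mAP@R still penalizes the lower-ranked positives
```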
Humans have a remarkable capacity for reasoning abductively and hypothesizing about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost cannot help but draw probable inferences based on our everyday experience and knowledge about the world. For example, if we see a "20 MPH" sign alongside a road, we may assume the street is in a residential area (rather than on a highway), even if no houses are visible. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image content. We adopt a free-viewing paradigm: participants first observe and identify salient clues within an image (e.g., objects, actions), and then, given those clues, provide a plausible inference about the scene. In total, we collect 363K (clue, inference) pairs, forming the first abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning. We evaluate the capacity of models to: i) retrieve relevant inferences from a large candidate corpus; ii) localize the evidence for an inference via bounding boxes; and iii) compare plausible inferences to match human judgments on a newly collected diagnostic corpus of 19K Likert-scale judgments. While we find that a fine-tuned CLIP RN50x64 with a multi-task objective outperforms strong baselines, significant headroom remains between model performance and human agreement. Data, models, and the leaderboard are available at http://visualabduction.com/
Outstanding image-text retrieval models depend on high-quality labeled data. While the builders of existing image-text retrieval datasets strive to ensure that each caption matches its linked image, they cannot prevent a caption from also fitting other images. We observe that such many-to-many matching is very common in widely used retrieval datasets, where one caption can describe up to 178 images. These mismatches not only confuse the model during training but also weaken evaluation accuracy. Inspired by visual and textual entailment tasks, we propose a multi-modal entailment classifier to determine whether a sentence is entailed by an image plus its linked captions. We then revise image-text retrieval datasets by adding these entailed captions as additional labels of an image, and develop a universal variable learning rate strategy to teach retrieval models to distinguish the entailed captions from other negative samples. In experiments, we manually annotate an entailment-corrected image-text retrieval dataset for evaluation. The results show that the proposed entailment classifier achieves about 78% accuracy and consistently improves the performance of image-text retrieval baselines.
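The paper's exact training recipe is not reproduced here, but the intuition of "do not treat entailed captions as negatives" can be sketched as follows, under the assumption of an InfoNCE-style image-to-text loss in which captions flagged by the entailment classifier are simply masked out of the denominator (all names are illustrative):

```python
import numpy as np

def contrastive_loss_excluding_entailed(sim, positive_idx, entailed, temperature=0.05):
    """Rough sketch: standard image-to-text contrastive loss, except captions flagged
    as entailed by the image (likely false negatives) are dropped from the denominator.

    sim:          (batch, num_captions) similarity scores from the retrieval model
    positive_idx: (batch,) index of each image's originally linked caption
    entailed:     (batch, num_captions) bool, True where the entailment classifier
                  judged the caption to be entailed by the image
    """
    batch = np.arange(len(sim))
    logits = sim / temperature
    mask = entailed.copy()
    mask[batch, positive_idx] = False            # never mask the linked caption itself
    logits = np.where(mask, -np.inf, logits)     # masked captions contribute exp(-inf) = 0
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_prob[batch, positive_idx].mean())
```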
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
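The pre-training task described above reduces to a symmetric contrastive objective over a batch of matched (image, text) pairs. A minimal NumPy sketch of that objective, following the pseudocode published in the paper rather than OpenAI's released implementation:

```python
import numpy as np

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of matched (image, text) pairs.
    image_emb, text_emb: (batch, dim) L2-normalized embeddings; row i of each matches."""
    logits = image_emb @ text_emb.T / temperature   # (batch, batch) pairwise similarities
    labels = np.arange(len(logits))                 # the diagonal holds the true pairs

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(y)), y].mean()

    # Average of image-to-text and text-to-image classification losses.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```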
The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.
While vision-and-language models perform well on tasks such as visual question answering, they struggle with basic human commonsense reasoning skills. In this work, we introduce WinoGAViL: an online game for collecting vision-and-language associations (e.g., werewolves to a full moon), used as a dynamic benchmark to evaluate state-of-the-art models. Inspired by the popular card game Codenames, a spymaster gives a textual cue related to several visual candidates, and another player must identify them. Human players are rewarded for creating associations that are challenging for a rival AI model but still solvable by other human players. We use the game to collect 3.5K instances and find that they are intuitive for humans (>90% Jaccard index) but challenging for state-of-the-art AI models, where the best model (ViLT) scores 52%, succeeding mostly where the visual signal is salient. Our analysis, along with feedback collected from players, indicates that the collected associations require diverse reasoning skills, including general knowledge, common sense, abstraction, and more. We release the dataset, code, and interactive game, aiming to enable future data collection that can be used to develop models with better association abilities.
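The >90% Jaccard index quoted above is the standard set-overlap score between the candidates a player (or model) selects for a cue and the gold association set; a tiny illustrative sketch (the image identifiers are made up):

```python
def jaccard(predicted, gold):
    """Set overlap used to score an association: |A ∩ B| / |A ∪ B|."""
    predicted, gold = set(predicted), set(gold)
    return len(predicted & gold) / len(predicted | gold)

# E.g., cue "werewolf": the model picks 3 images, 2 of which are in the gold set of 3.
print(jaccard({"img_full_moon", "img_wolf", "img_cat"},
              {"img_full_moon", "img_wolf", "img_howl"}))  # 0.5
```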
The prevailing framework for matching multimodal inputs is based on a two-stage process: 1) detecting proposals with an object detector and 2) matching text queries with proposals. Existing two-stage solutions mostly focus on the matching step. In this paper, we argue that these methods overlook an obvious \emph{mismatch} between the roles of proposals in the two stages: they generate proposals solely based on the detection confidence (i.e., query-agnostic), hoping that the proposals contain all instances mentioned in the text query (i.e., query-aware). Due to this mismatch, chances are that proposals relevant to the text query are suppressed during the filtering process, which in turn bounds the matching performance. To this end, we propose VL-NMS, which is the first method to yield query-aware proposals at the first stage. VL-NMS regards all mentioned instances as critical objects, and introduces a lightweight module to predict a score for aligning each proposal with a critical object. These scores can guide the NMS operation to filter out proposals irrelevant to the text query, increasing the recall of critical objects, resulting in a significantly improved matching performance. Since VL-NMS is agnostic to the matching step, it can be easily integrated into any state-of-the-art two-stage matching methods. We validate the effectiveness of VL-NMS on two multimodal matching tasks, namely referring expression grounding and image-text matching. Extensive ablation studies on several baselines and benchmarks consistently demonstrate the superiority of VL-NMS.
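A simplified sketch of the query-aware idea, not the authors' implementation: blend the detector confidence with a per-proposal query-relevance score (which VL-NMS predicts with a lightweight module) before running standard greedy NMS, so that proposals mentioning entities in the text query are less likely to be suppressed. The blending weight alpha and the IoU threshold below are illustrative:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-8)

def query_aware_nms(boxes, det_scores, query_scores, iou_thr=0.5, alpha=0.5):
    """Keep proposals greedily by a blended score instead of detection confidence alone.
    query_scores would come from a lightweight module aligning each proposal with the
    critical objects mentioned in the text query (the VL-NMS idea, simplified)."""
    scores = (1 - alpha) * det_scores + alpha * query_scores
    order = np.argsort(-scores)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thr]
    return keep
```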
We address the problem of retrieving images with both a sketch and a text query. We present TASK-former (Text And SKetch transformer), an end-to-end trainable model that accepts a text description and a sketch as input. We argue that the two input modalities complement each other in a way that neither can easily achieve on its own. TASK-former follows a late-fusion dual-encoder approach, similar to CLIP, which allows for efficient and scalable retrieval since the retrieval set can be indexed independently of the query. We empirically demonstrate that using an input sketch (even a poorly drawn one) in addition to text considerably increases retrieval recall compared to traditional text-based image retrieval. To evaluate our approach, we collect 5,000 hand-drawn sketches for images in the test set of the COCO dataset. The collected sketches are available at https://janesjanes.github.io/tsbir/.
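Late fusion is what makes such retrieval scalable: the gallery is embedded and indexed once, offline, and only the query side is computed at search time. A schematic NumPy sketch under the assumption that the text and sketch embeddings are fused by a simple normalized sum (the actual TASK-former fusion is learned; encoder calls are replaced by placeholders):

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# 1) Offline: embed and index the whole retrieval set once.
gallery_emb = normalize(np.random.randn(10_000, 512))   # stand-in for image_encoder(images)

# 2) Online: embed the query's text and sketch, fuse them, and rank by cosine similarity.
def retrieve(text_emb, sketch_emb, k=10):
    query = normalize(normalize(text_emb) + normalize(sketch_emb))  # simplified late fusion
    scores = gallery_emb @ query                                    # (num_images,)
    return np.argsort(-scores)[:k]
```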
We challenge AI models to "demonstrate understanding" of the sophisticated multimodal humor of The New Yorker Caption Contest. Specifically, we develop three carefully circumscribed tasks that require grasping the potentially complex and unexpected relationships between image and caption, along with similarly complex and unexpected allusions to the wide variety of human experience; these are the hallmarks of a New Yorker-caliber cartoon. We investigate both vision-and-language models that take the cartoon pixels and caption directly as input, and language-only models for which we circumvent image processing by providing textual descriptions of the image. Even with the rich multifaceted annotations we provide for the cartoon images, we identify performance gaps between high-quality machine learning models (e.g., a fine-tuned 175B-parameter language model) and humans. We publicly release our corpora, including annotations describing the image's locations/entities, what is unusual about the scene, and an explanation of the joke.
Recent visuolinguistic pre-trained models show promising progress on various end tasks such as image retrieval and video captioning. Yet, they fail miserably on the recently proposed Winoground dataset, which challenges models to match paired images and English captions, with items constructed to overlap lexically but differ in meaning (e.g., "there is a mug in some grass" vs. "there is some grass in a mug"). By annotating the dataset using new fine-grained tags, we show that solving the Winoground task requires not just compositional language understanding, but a host of other abilities like commonsense reasoning or locating small, out-of-focus objects in low-resolution images. In this paper, we identify the dataset's main challenges through a suite of experiments on related tasks (probing task, image retrieval task), data augmentation, and manual inspection of the dataset. Our analysis suggests that a main challenge in visuolinguistic models may lie in fusing visual and textual representations, rather than in compositional language understanding. We release our annotation and code at https://github.com/ajd12342/why-winoground-hard .
Visual question answering is an important task in natural language and vision understanding. However, in most public visual question answering datasets such as VQA and CLEVR, the questions are human-generated and specific to a given image, such as "What color are her eyes?". Such human-generated crowdsourced questions are relatively simple and sometimes biased toward certain entities or attributes. In this paper, we introduce a new image-based question answering dataset, ChiQA. It contains real queries issued by internet users, each combined with several related open-domain images. The system should determine whether an image can answer the question. Unlike previous VQA datasets, the questions are real-world, image-independent queries that are more varied and unbiased. Compared with prior image-retrieval or image-caption datasets, ChiQA measures not only relatedness but also answerability, which requires more fine-grained vision-and-language reasoning. ChiQA contains over 40K questions and over 200K question-image pairs. A three-level 2/1/0 label is assigned to each pair, indicating a perfect answer, a partial answer, or an irrelevant image. Data analysis shows that ChiQA requires deep understanding of both language and vision, including grounding, comparison, and reading. We evaluate several state-of-the-art visual-language models such as ALBEF, showing that there is still substantial room for improvement on ChiQA.
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Over the last decade, computer vision, the branch of artificial intelligence aimed at understanding the visual world, has moved from simply recognizing objects in images to describing pictures, answering questions about images, aiding robots in maneuvering around physical spaces, and even generating novel visual content. As these tasks and applications have modernized, so has the reliance on ever more data, whether for model training or for evaluation. In this chapter, we show that novel interaction strategies can enable new forms of data collection and evaluation for computer vision. First, we present a crowdsourcing interface that speeds up paid data collection by an order of magnitude, feeding the data-hungry nature of modern vision models. Second, we explore methods to increase volunteer contributions using automated social interventions. Third, we develop a system to ensure that human evaluation of generative vision models is reliable, affordable, and grounded in psychophysics theory. We conclude with future opportunities for human-computer interaction to aid computer vision.
With the increasing accessibility of the web and online encyclopedias, the amount of data to be managed keeps growing. In Wikipedia, for example, there are millions of pages written in multiple languages. These pages contain images that often lack textual context, remaining conceptually floating and therefore hard to find and manage. In this work, we present the system we designed for participating in the Wikipedia Image-Caption Matching challenge on Kaggle, whose objective is to use data associated with images (URLs and visual data) to find the correct caption among a large pool of available ones. A system able to perform this task would improve the accessibility and completeness of multimedia content on large online encyclopedias. Specifically, we propose a cascade of two models, both powered by recent Transformer architectures, able to efficiently infer a relevance score between the query image data and the captions. We verify through extensive experiments that the proposed two-model approach is an effective way to handle a large pool of images and captions while keeping the overall computational complexity at inference time bounded. Our approach achieves remarkable results, obtaining a normalized Discounted Cumulative Gain (nDCG) value of 0.53 on the private leaderboard of the Kaggle challenge.
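For reference, the nDCG value quoted above discounts each relevant caption by the logarithm of its rank and normalizes by the score of an ideal ordering. A small sketch of the common log2-discount form (illustrative only, not the challenge's official scorer):

```python
import numpy as np

def ndcg(relevances, k=None):
    """relevances: graded relevance of the retrieved items, given in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    if rel.sum() == 0:
        return 0.0
    discounts = 1.0 / np.log2(np.arange(2, len(rel) + 2))   # 1/log2(rank + 1)
    dcg = (rel * discounts).sum()
    ideal = (np.sort(rel)[::-1] * discounts).sum()          # best possible ordering
    return float(dcg / ideal)

print(ndcg([0, 1, 0, 0, 0]))  # correct caption ranked 2nd out of 5 -> ~0.63
```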
We propose VALSE (Vision And Language Structured Evaluation), a novel benchmark designed to test general-purpose pretrained vision-and-language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena. VALSE offers a suite of six tests covering a variety of linguistic constructs. Solving these requires models to ground linguistic phenomena in the visual modality, allowing more fine-grained evaluation than has been possible to date. We build VALSE using methods that support the construction of valid foils, and report results from evaluating five widely used V&L models. Our experiments suggest that current models have considerable difficulty addressing most phenomena. Hence, we expect VALSE to serve as an important benchmark for measuring the future progress of pretrained V&L models from a linguistic perspective, complementing the canonical task-centred V&L evaluations.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy.
This paper presents a detailed study of improving visual representations for vision language (VL) tasks and develops an improved object detection model to provide object-centric representations of images. Compared to the most widely used bottom-up and top-down model [2], the new model is bigger, better-designed for VL tasks, and pre-trained on much larger training corpora that combine multiple public annotated object detection datasets. Therefore, it can generate representations of a richer collection of visual objects and concepts. While previous VL research focuses mainly on improving the vision-language fusion model and leaves the object detection model improvement untouched, we show that visual features matter significantly in VL models. In our experiments we feed the visual features generated by the new object detection model into a Transformer-based VL fusion model OSCAR [21], and utilize an improved approach OSCAR+ to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks. Our results show that the new visual features significantly improve the performance across all VL tasks, creating new state-of-the-art results on seven public benchmarks. Code, models and pre-extracted features are released at https://github.com/pzzhang/VinVL.
Our goal in this work is to study a more realistic setting in which we perform weakly supervised multi-modal instance-level product retrieval for fine-grained product categories. We first contribute the Product1M dataset and define two practical instance-level retrieval tasks to enable evaluation for price comparison and personalized recommendation. For both instance-level tasks, accurately pinpointing the product targets mentioned in the visual-linguistic data and effectively reducing the influence of irrelevant content is quite challenging. To address this, we train a more effective cross-modal pretraining model that adaptively incorporates key concept information from the multi-modal data by exploiting an entity graph, whose nodes and edges represent entities and their similarity relations, respectively. Specifically, a novel Entity-Graph Enhanced Cross-Modal Pretraining (EGE-CMP) model is proposed for instance-level commodity retrieval, which explicitly injects entity knowledge in both node-based and subgraph-based ways into a self-supervised hybrid-stream transformer. This reduces the confusion between different object contents and effectively guides the network to focus on entities with real semantics. Experimental results verify the efficacy and generalizability of our EGE-CMP, which outperforms several SOTA cross-modal baselines such as CLIP, UNITER, and CAPTURE.
This paper focuses on analyzing and improving the commonsense ability of recent popular vision-language (VL) models. Despite the great success, we observe that existing VL-models still lack commonsense knowledge/reasoning ability (e.g., "Lemons are sour"), which is a vital component towards artificial general intelligence. Through our analysis, we find one important reason is that existing large-scale VL datasets do not contain much commonsense knowledge, which motivates us to improve the commonsense of VL-models from the data perspective. Rather than collecting a new VL training dataset, we propose a more scalable strategy, i.e., "Data Augmentation with kNowledge graph linearization for CommonsensE capability" (DANCE). It can be viewed as one type of data augmentation technique, which can inject commonsense knowledge into existing VL datasets on the fly during training. More specifically, we leverage the commonsense knowledge graph (e.g., ConceptNet) and create variants of text description in VL datasets via bidirectional sub-graph sequentialization. For better commonsense evaluation, we further propose the first retrieval-based commonsense diagnostic benchmark. By conducting extensive experiments on some representative VL-models, we demonstrate that our DANCE technique is able to significantly improve the commonsense ability while maintaining the performance on vanilla retrieval tasks. The code and data are available at https://github.com/pleaseconnectwifi/DANCE
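Very roughly, the augmentation turns commonsense triples whose head entity appears in a caption into extra text descriptions, reading each edge in both directions. A toy sketch under that assumption (the triples, templates, and helper name are illustrative, not the released DANCE pipeline):

```python
# Toy triples in (head, relation, tail) form, as found in a commonsense KG like ConceptNet.
TRIPLES = [
    ("lemon", "HasProperty", "sour"),
    ("umbrella", "UsedFor", "keeping dry in the rain"),
]

# Simple surface templates for reading an edge in both directions.
TEMPLATES = {
    "HasProperty": ("{h} is {t}", "{t} is a property of {h}"),
    "UsedFor":     ("{h} is used for {t}", "{t} is something {h} is used for"),
}

def commonsense_variants(caption):
    """Attach linearized commonsense statements to a caption when its text mentions
    the head entity of a triple -- a simplified stand-in for DANCE-style augmentation."""
    variants = []
    for head, rel, tail in TRIPLES:
        if head in caption.lower():
            forward, backward = TEMPLATES[rel]
            variants.append(caption + " " + forward.format(h=head, t=tail).capitalize() + ".")
            variants.append(caption + " " + backward.format(h=head, t=tail).capitalize() + ".")
    return variants

print(commonsense_variants("A lemon sits on a wooden table."))
```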
The large datasets underlying much of current machine learning raise serious issues concerning inappropriate content, such as offensive, insulting, or threatening material, or material that might otherwise cause anxiety. This calls for increased dataset documentation, e.g., using datasheets, which among other topics encourage reflection on the composition of a dataset. So far, this documentation has been done manually and is therefore tedious and error-prone, especially for large image datasets. Here we ask the arguably "circular" question of whether machines can help us reflect on inappropriate content, answering Question 16 in datasheets. To this end, we propose to use the information stored in pre-trained transformer models to assist in the documentation process. Specifically, prompt-tuning based on a dataset of socio-moral values steers CLIP to identify potentially inappropriate content, thereby reducing human labor. We then document the inappropriate images found using word clouds, based on captions generated with a vision-language model. The documentation of two popular large-scale computer vision datasets, ImageNet and OpenImages, produced this way suggests that machines can indeed help dataset creators answer Question 16 on inappropriate image content.
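Schematically, the screening step amounts to zero-shot comparison of each image embedding against two sets of prompt embeddings, flagging images that sit closer to the "potentially inappropriate" prompts. The sketch below only illustrates that flagging logic with placeholder embeddings; the actual work uses prompt-tuning on a socio-moral values dataset rather than fixed prompts:

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def flag_images(image_embs, ok_prompt_embs, flag_prompt_embs, margin=0.0):
    """Flag an image when it is, on average, closer to the 'potentially inappropriate'
    prompts than to the benign ones. All embeddings are assumed to come from the same
    pre-trained vision-language encoder (e.g., CLIP) and are L2-normalized here."""
    image_embs = normalize(image_embs)
    ok_sim = image_embs @ normalize(ok_prompt_embs).T     # (num_images, num_ok_prompts)
    bad_sim = image_embs @ normalize(flag_prompt_embs).T  # (num_images, num_flag_prompts)
    return bad_sim.mean(axis=1) - ok_sim.mean(axis=1) > margin  # bool per image
```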