There is considerable interest in the task of automatically generating image captions. However, evaluation is challenging. Existing automatic evaluation metrics are primarily sensitive to n-gram overlap, which is neither necessary nor sufficient for the task of simulating human judgment. We hypothesize that semantic propositional content is an important component of human caption evaluation, and propose a new automated caption evaluation metric defined over scene graphs coined SPICE. Extensive evaluations across a range of models and datasets indicate that SPICE captures human judgments over model-generated captions better than other automatic metrics (e.g., system-level correlation of 0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and 0.53 for METEOR). Furthermore, SPICE can answer questions such as which caption-generator best understands colors? and can caption-generators count?
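As a rough illustration of the scene-graph idea (not the published SPICE implementation, which derives tuples from a dependency parse and also matches WordNet synonyms), the metric reduces to an F-score over semantic proposition tuples extracted from the candidate and reference captions. The hand-written tuples below are hypothetical examples.

```python
# Minimal sketch of a SPICE-style score: an F1 over semantic proposition tuples.
# The tuples here are supplied by hand; the real metric derives them from a
# scene-graph parse of each caption (objects, attributes, relations).

def spice_like_f1(candidate_tuples, reference_tuples):
    """F-score over sets of (object,), (object, attribute), (subj, rel, obj) tuples."""
    cand, ref = set(candidate_tuples), set(reference_tuples)
    if not cand or not ref:
        return 0.0
    matched = len(cand & ref)        # exact tuple matches only in this sketch
    precision = matched / len(cand)
    recall = matched / len(ref)
    return 0.0 if matched == 0 else 2 * precision * recall / (precision + recall)

# Example: "a young girl standing on a tennis court"
candidate = [("girl",), ("girl", "young"), ("girl", "standing"),
             ("court",), ("court", "tennis"), ("girl", "on-top-of", "court")]
reference = [("girl",), ("girl", "young"),
             ("court",), ("court", "tennis"), ("girl", "on-top-of", "court")]
print(spice_like_f1(candidate, reference))  # ~0.909
```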
Automatically describing an image with a sentence is a long-standing challenge in computer vision and natural language processing. Due to recent progress in object detection, attribute classification, action recognition, etc., there is renewed interest in this area. However, evaluating the quality of descriptions has proven to be challenging. We propose a novel paradigm for evaluating image descriptions that uses human consensus. This paradigm consists of three main parts: a new triplet-based method of collecting human annotations to measure consensus, a new automated metric (CIDEr) that captures consensus, and two new datasets: PASCAL-50S and ABSTRACT-50S that contain 50 sentences describing each image. Our simple metric captures human judgment of consensus better than existing metrics across sentences generated by various sources. We also evaluate five state-of-the-art image description approaches using this new protocol and provide a benchmark for future comparisons. A version of CIDEr named CIDEr-D is available as a part of MS COCO evaluation server to enable systematic evaluation and benchmarking.
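A minimal sketch of the consensus idea behind CIDEr: represent each caption as a TF-IDF-weighted n-gram vector and average the cosine similarity of the candidate against all references. This omits details of the published metric (stemming, the length penalty and clipping used in CIDEr-D, and the combination over n = 1..4), so treat it as an illustration rather than the official formula; `doc_freq` is assumed to hold corpus-level document frequencies.

```python
import math
from collections import Counter

def ngrams(caption, n=1):
    toks = caption.lower().split()
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

def tfidf_vector(caption, doc_freq, num_images, n=1):
    # doc_freq[g]: number of images whose reference captions contain n-gram g.
    counts = ngrams(caption, n)
    total = sum(counts.values()) or 1
    return {g: (c / total) * math.log(num_images / (1.0 + doc_freq.get(g, 0.0)))
            for g, c in counts.items()}

def cosine(u, v):
    dot = sum(u[g] * v.get(g, 0.0) for g in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cider_like(candidate, references, doc_freq, num_images, n=1):
    # Consensus score: average similarity of the candidate to every reference.
    cand_vec = tfidf_vector(candidate, doc_freq, num_images, n)
    ref_vecs = [tfidf_vector(r, doc_freq, num_images, n) for r in references]
    return sum(cosine(cand_vec, rv) for rv in ref_vecs) / len(ref_vecs)
```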
The open-ended nature of visual captioning makes it a challenging area for evaluation. Most proposed models rely on specialized training to improve human correlation, resulting in limited adoption, generalizability, and explainability. We introduce "typicality", a new evaluation formulation rooted in information theory, which is uniquely suited for problems lacking a definite ground truth. Typicality serves as our framework for developing a novel semantic comparison, SPARCS, as well as reference-free fluency evaluation metrics. Over the course of our analysis, two separate dimensions of fluency naturally emerge: style, captured by the metric SPURTS, and grammar, captured in the form of grammatical-outlier penalties. Through extensive experiments and ablation studies on benchmark datasets, we show how these decomposed dimensions of semantics and fluency provide greater system-level insight into captioner differences. Our proposed metrics, along with their combination, SMURF, achieve state-of-the-art correlation with human judgment when compared with other rule-based evaluation metrics.
Image captioning is an active research task that aims to describe the content of an image in terms of the objects in the scene and their relationships. Two important research areas are brought together to address it: computer vision and natural language processing. In image captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or how poorly) a method performs. In recent years, it has been observed that classical n-gram-based metrics are insufficient to capture the semantics and the critical meaning needed to describe the content of an image. To measure how well the current set of metrics does this, in this manuscript we evaluate several image captioning metrics and compare them against each other using the well-known MS COCO dataset. For this purpose, we design two scenarios: 1) a set of artificially constructed captions, and 2) a comparison of several state-of-the-art image captioning methods. We attempt to answer the following questions: Do the current metrics help to produce high-quality captions? How do the existing metrics compare with one another? What do the metrics really measure?
In this paper, we construct two automatic evaluation metrics for assessing the association between machine-generated captions and the ground-truth styles: overtyle and StyleDe.
We establish a rubric-based human evaluation protocol for image captioning models. Our scoring rubrics and their definitions are carefully developed based on machine- and human-generated captions on the MSCOCO dataset. Each caption is evaluated along two main dimensions in a tradeoff (precision and recall) as well as other aspects that measure text quality (fluency, conciseness, and inclusive language). Our evaluations demonstrate several critical problems with current evaluation practice. Human-generated captions show substantially higher quality than machine-generated ones, especially in coverage of salient information (i.e., recall), while all automatic metrics say the opposite. Our rubric-based results indicate that CLIPScore, a recent metric that uses image features, correlates better with human judgments because it is more sensitive to recall. We hope this work will promote a more transparent evaluation protocol for image captioning and its automatic metrics.
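A small data-structure sketch of how such a rubric might be recorded per caption. The field names follow the dimensions named in the abstract, while the numeric scale and the way the penalties are aggregated are assumptions, not the protocol's published rules.

```python
from dataclasses import dataclass

@dataclass
class CaptionRubric:
    """One human judgment of a caption (dimensions from the abstract; scales assumed)."""
    precision: float                           # how much of the caption is correct for the image
    recall: float                              # how much salient image content is covered
    fluency_penalty: float = 0.0               # deduction for disfluent language
    conciseness_penalty: float = 0.0           # deduction for redundant wording
    inclusive_language_penalty: float = 0.0    # deduction for non-inclusive wording

    def total(self) -> float:
        # Assumed aggregation: precision/recall credit minus text-quality deductions.
        return (self.precision + self.recall
                - self.fluency_penalty - self.conciseness_penalty
                - self.inclusive_language_penalty)

print(CaptionRubric(precision=4.0, recall=3.5, fluency_penalty=0.5).total())  # 7.0
```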
Current metrics for video captioning are mostly based on text-level comparison between reference and candidate captions. However, they have some insurmountable drawbacks: for example, they cannot handle videos without references, and they may lead to biased evaluation due to the one-to-many nature of video-to-text mapping and the neglect of visual relevance. From the viewpoint of human evaluators, a high-quality caption should be consistent with the provided video, but not necessarily similar to the references in wording or semantics. Inspired by human evaluation, we propose EMScore (Embedding Matching-based score), a novel reference-free metric for video captioning that directly measures the similarity between a video and a candidate caption. Benefiting from recent developments in large-scale pre-trained models, we exploit a well pre-trained vision-language model to extract visual and linguistic embeddings for computing EMScore. Specifically, EMScore combines matching scores at both the coarse-grained (video and caption) and fine-grained (frames and words) levels, taking into account both a global understanding and the detailed characteristics of the video. Furthermore, considering the potential information gain, EMScore can be flexibly extended to the condition where human-labeled references are available. Last but not least, we collect the VATEX-EVAL and ActivityNet-FOIL datasets to systematically evaluate existing metrics. VATEX-EVAL experiments demonstrate that EMScore has higher human correlation and lower reference dependency. ActivityNet-FOIL experiments verify that EMScore can effectively identify "hallucinating" captions. The datasets will be released to facilitate the development of video captioning metrics. The code is available at: https://github.com/shiyaya/emcore.
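An illustrative sketch of the two matching granularities described above, operating on pre-computed (here, random stand-in) embeddings rather than an actual vision-language model: the coarse score compares a pooled video embedding with a pooled caption embedding, and the fine score does greedy frame-word matching. The published metric uses CLIP-style encoders and combines precision and recall terms differently; this only shows the general shape, and the equal weighting of the two levels is an assumption.

```python
import numpy as np

def _normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def emscore_like(frame_embs, word_embs):
    """frame_embs: (F, d) frame embeddings; word_embs: (W, d) token embeddings."""
    frames, words = _normalize(frame_embs), _normalize(word_embs)

    # Coarse-grained: pooled video representation vs. pooled caption representation.
    video_vec = _normalize(frames.mean(axis=0))
    caption_vec = _normalize(words.mean(axis=0))
    coarse = float(video_vec @ caption_vec)

    # Fine-grained: greedy matching between frames and words, in both directions.
    sim = frames @ words.T                      # (F, W) cosine similarities
    fine_precision = sim.max(axis=0).mean()     # each word matched to its best frame
    fine_recall = sim.max(axis=1).mean()        # each frame matched to its best word
    fine = float(2 * fine_precision * fine_recall / (fine_precision + fine_recall))

    return 0.5 * (coarse + fine)                # assumed equal weighting of the two levels

# Toy usage with random "embeddings".
rng = np.random.default_rng(0)
print(emscore_like(rng.normal(size=(8, 16)), rng.normal(size=(5, 16))))
```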
Traditional automated metrics for evaluating conditional natural language generation use pairwise comparisons between a single generated text and the best-matching gold-standard ground-truth text. When multiple ground truths are available, scores are aggregated using an average or max operation across references. While this approach works when diversity in the ground-truth data (i.e., dispersion of the distribution of conditional texts) can be attributed to noise, as in automatic speech recognition, it does not allow for robust evaluation in cases where the diversity in the ground truths represents signal for the model. In this work we argue that existing metrics are not suited to domains such as visual description or summarization, where the ground truths are semantically diverse and the diversity among those captions captures useful additional information about the context. We propose a new paradigm for multi-candidate evaluation of conditional language generation models, together with a new family of metrics that compare the distributions of reference and model-generated caption sets using small sample sets of each. We demonstrate the utility of our approach via a case study in visual description: we show that existing models optimize for single-description quality over diversity, and we gain insights into how sampling methods and temperature affect description quality and diversity.
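One simple instance of set-to-set comparison in this spirit (not the metric family proposed in the work above): embed a small sample of model captions and a small sample of references, and compare the two empirical distributions, here with the (biased, V-statistic) energy distance between the embedding clouds. The random vectors stand in for caption embeddings from some assumed encoder.

```python
import numpy as np

def pairwise_dists(a, b):
    # Euclidean distances between every row of a and every row of b.
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def energy_distance(model_embs, ref_embs):
    """Distance between two embedding clouds; 0 means identical distributions."""
    cross = pairwise_dists(model_embs, ref_embs).mean()
    within_m = pairwise_dists(model_embs, model_embs).mean()
    within_r = pairwise_dists(ref_embs, ref_embs).mean()
    return 2 * cross - within_m - within_r

# Toy usage: 5 sampled model captions vs. 5 references, as random "embeddings".
rng = np.random.default_rng(0)
print(energy_distance(rng.normal(size=(5, 16)), rng.normal(loc=0.3, size=(5, 16))))
```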
Distinctive image captioning (DIC), which generates distinctive captions describing the unique details of a target image, has drawn increasing attention over the past few years. Recent DIC work proposes to generate distinctive captions by comparing the target image with a set of semantically similar reference images, i.e., reference-based DIC (Ref-DIC). Its goal is to make the generated captions distinguish the target image from the reference images. Unfortunately, the reference images used in existing Ref-DIC works are easy to distinguish: they resemble the target image only at the scene level and share few common objects, so a Ref-DIC model can trivially generate distinctive captions even without considering the reference images. To ensure that Ref-DIC models truly perceive the unique objects (or attributes) in the target image, we first propose two new Ref-DIC benchmarks. Specifically, we design a two-stage matching mechanism that strictly controls the similarity between the target and reference images at the object/attribute level (rather than the scene level). Second, to generate distinctive captions, we develop a strong Transformer-based Ref-DIC baseline, dubbed TransDIC. It not only extracts visual features from the target image, but also encodes the differences between objects in the target and reference images. Finally, for more trustworthy benchmarking, we propose a new evaluation metric for Ref-DIC, named DisCIDEr, which evaluates both the accuracy and the distinctiveness of the generated captions. Experimental results show that our TransDIC can generate distinctive captions. Moreover, it outperforms several state-of-the-art models on the two new benchmarks under different metrics.
Novel object captioning (NOC) aims to describe images containing objects without observing their ground-truth captions during training. Due to the absence of caption annotations, captioning models cannot be directly optimized via sequence-to-sequence training or CIDEr optimization. As a result, we present Paraphrasing-to-Captioning (P2C), a two-stage learning framework for NOC that heuristically optimizes the output captions via paraphrasing. With P2C, the captioning model first learns paraphrasing from a language model pre-trained on a text-only corpus, expanding the word bank to improve linguistic fluency. To further enforce that the output captions sufficiently describe the visual content of the input image, we perform self-paraphrasing on the captioning model with introduced fidelity and adequacy objectives. Since no ground-truth captions are available for novel object images during training, our P2C leverages cross-modality (image-text) association modules to ensure that the above caption properties are properly preserved. In the experiments, we not only show that our P2C achieves state-of-the-art performance on the nocaps and COCO caption datasets, but also verify the effectiveness and flexibility of our learning framework by replacing the language and cross-modality association models for NOC. Implementation details and code are available in the supplementary materials.
Connecting vision and language plays an essential role in generative intelligence. For this reason, large research efforts have been devoted to image captioning, i.e., describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. Over these years, both components have evolved considerably through the exploitation of object regions and attributes, the introduction of multimodal connections, fully attentive approaches, and BERT-like early-fusion strategies. However, despite the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims to provide a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The ultimate goal of this work is to serve as a tool for understanding the existing literature and highlighting future directions for a research area where computer vision and natural language processing can find an optimal synergy.
Controllable image captioning models generate human-like image descriptions, enabling some kind of control over the generated captions. This paper focuses on controlling the caption length, i.e. a short and concise description or a long and detailed one. Since existing image captioning datasets contain mostly short captions, generating long captions is challenging. To address the shortage of long training examples, we propose to enrich the dataset with varying-length self-generated captions. These, however, might be of varying quality and are thus unsuitable for conventional training. We introduce a novel training strategy that selects the data points to be used at different times during the training. Our method dramatically improves the length-control abilities, while exhibiting SoTA performance in terms of caption quality. Our approach is general and is shown to be applicable also to paragraph generation.
We propose BERTSCORE, an automatic evaluation metric for text generation. Analogously to common metrics, BERTSCORE computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTSCORE correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTSCORE is more robust to challenging examples when compared to existing metrics.
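A condensed sketch of the matching step on top of pre-computed contextual embeddings (the released bertscore package additionally applies optional importance weighting and baseline rescaling, which are omitted here); the random vectors in the usage line stand in for real BERT token embeddings.

```python
import numpy as np

def bertscore_like(cand_embs, ref_embs):
    """cand_embs: (Tc, d), ref_embs: (Tr, d) contextual token embeddings."""
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    r = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    sim = c @ r.T                       # (Tc, Tr) cosine similarities
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference token
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate token
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy usage with random stand-in embeddings.
rng = np.random.default_rng(0)
print(bertscore_like(rng.normal(size=(6, 32)), rng.normal(size=(8, 32))))
```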
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Text-based image captioning (TextCap) requires understanding the visual content and reading the text in an image simultaneously in order to generate a natural language description. Although this task can teach machines to understand complex human environments, given that text is ubiquitous in our daily surroundings, it poses additional challenges beyond normal captioning. A text-based image intuitively contains abundant and complex multimodal relational content; that is, image details can be described diversely from multiple views rather than with a single caption. Naturally, we could introduce additional paired training data to reflect the diversity of image descriptions, but this process of annotating text-aware image-caption pairs is labor-intensive and time-consuming. Based on the above insight, we investigate how to generate diverse captions that focus on different image parts using an unpaired training paradigm. We propose the Multimodal relational Graph adversarial inference (MAGIC) framework for diverse and unpaired TextCap. This framework can adaptively construct multiple multimodal relational graphs of an image and model the complex relationships among the graphs to represent descriptive diversity. Moreover, a cascaded generative adversarial network is developed on top of the modeled graphs to infer unpaired captions at both the image-sentence feature-alignment and linguistic-coherence levels. We validate the effectiveness of MAGIC in generating diverse captions from different relational information items of an image. Experimental results show that MAGIC can produce very promising captions without using any image-caption training pairs.
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, leading to improved development in downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation. However, it is also apparent that deep learning based generation is prone to hallucinate unintended text, which degrades the system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented in measuring and mitigating hallucinated texts, but these have never been reviewed in a comprehensive manner before. In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions; and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, machine translation, and visual-language generation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated texts in NLG.
We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ∼0.25M images, ∼0.76M questions, and ∼10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).
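For reference, a sketch of the consensus-style accuracy commonly reported for this dataset, which collects ten human answers per question. The official evaluation code additionally normalizes answer strings and averages over subsets of annotators, which is omitted here.

```python
def vqa_accuracy(predicted, human_answers):
    """predicted: answer string; human_answers: the 10 crowd answers for the question."""
    matches = sum(1 for a in human_answers if a == predicted)
    # An answer counts as fully correct if at least 3 of the 10 annotators gave it.
    return min(matches / 3.0, 1.0)

print(vqa_accuracy("yes", ["yes"] * 7 + ["no"] * 3))  # 1.0
print(vqa_accuracy("no",  ["yes"] * 7 + ["no"] * 3))  # 1.0
print(vqa_accuracy("two", ["2"] * 9 + ["two"]))       # ~0.33 (no string normalization here)
```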
Automatically generating a natural language description of an image has attracted interest recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.
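A toy sketch of the attention step described above: given the current RNN hidden state, compute weights over detected concept embeddings and fuse the weighted sum back into the state. This is an illustrative bilinear attention in plain NumPy, not the authors' exact input/output attention models; the interaction matrix W is a randomly initialized placeholder for a learned parameter.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def semantic_attention_step(hidden, concept_embs, W):
    """hidden: (d,) RNN state; concept_embs: (K, d) embeddings of detected concepts;
    W: (d, d) bilinear interaction matrix (assumed to be learned; random here)."""
    scores = concept_embs @ (W @ hidden)   # (K,) relevance of each concept to the state
    weights = softmax(scores)              # attention distribution over concepts
    context = weights @ concept_embs       # (d,) fused semantic context vector
    return hidden + context, weights       # fed back into the recurrent computation

rng = np.random.default_rng(0)
d, K = 16, 5
h, C, W = rng.normal(size=d), rng.normal(size=(K, d)), rng.normal(size=(d, d)) * 0.1
new_h, attn = semantic_attention_step(h, C, W)
print(attn.round(3), new_h.shape)
```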
People say "a picture is worth a thousand words." How, then, can we get such rich information out of an image? We argue that by using visual clues to bridge large vision foundation models and language models, we can do so without any extra cross-modal training. Thanks to the strong zero-shot capability of foundation models, we first construct a rich semantic representation of the image (e.g., image tags, object attributes/locations, captions) as a structured textual prompt, called visual clues, using a vision foundation model. Based on the visual clues, we use a large language model to produce a series of comprehensive descriptions of the visual content, which are then verified again by the vision model to select the candidate that best aligns with the image. We evaluate the quality of the generated descriptions with quantitative and qualitative measures. The results demonstrate the effectiveness of such a structured semantic representation.
We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset - performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods incorporating many state of the art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally we introduce a new objective performance measure for image captioning.
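A minimal sketch of the non-parametric idea: embed the query image, find its nearest neighbors in the captioned collection, and transfer their captions. The feature extractor and the collection are assumed to exist; the real system uses global scene descriptors (such as GIST and color) together with content-based re-ranking, which this sketch does not implement.

```python
import numpy as np

def transfer_caption(query_feat, db_feats, db_captions, k=3):
    """Return the captions of the k images in the collection closest to the query.
    query_feat: (d,) descriptor; db_feats: (N, d); db_captions: list of N strings."""
    dists = np.linalg.norm(db_feats - query_feat[None, :], axis=1)
    nearest = np.argsort(dists)[:k]
    return [db_captions[i] for i in nearest]

# Toy usage with random descriptors standing in for GIST/color features.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 32))
captions = [f"caption for image {i}" for i in range(100)]
print(transfer_caption(rng.normal(size=32), db, captions, k=2))
```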