Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a "baseline" to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) are avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation server establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.
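As an illustration of the idea (not the authors' code), a minimal PyTorch sketch of the self-critical policy-gradient loss might look as follows; the tensor shapes and the use of CIDEr as the reward are assumptions:

```python
import torch

def scst_loss(log_probs, sample_reward, greedy_reward):
    """Self-critical policy-gradient loss (sketch).

    log_probs:     (batch, T) log-probabilities of the sampled caption tokens
    sample_reward: (batch,) e.g. CIDEr of the sampled captions
    greedy_reward: (batch,) e.g. CIDEr of the greedy test-time decodes
    """
    # The greedy decode acts as the baseline: samples that beat their own
    # greedy decode get a positive advantage, the rest are suppressed.
    advantage = (sample_reward - greedy_reward).unsqueeze(1)  # (batch, 1)
    return -(advantage.detach() * log_probs).sum(dim=1).mean()
```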
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
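A minimal PyTorch sketch of the top-down step, assuming Faster R-CNN region features of shape (batch, k, feat_dim) and a decoder state as the query; layer names and dimensions are illustrative only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownAttention(nn.Module):
    """Weight bottom-up region features with a top-down query (sketch)."""

    def __init__(self, feat_dim, query_dim, hidden_dim):
        super().__init__()
        self.proj_feat = nn.Linear(feat_dim, hidden_dim)
        self.proj_query = nn.Linear(query_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions, query):
        # regions: (batch, k, feat_dim), query: (batch, query_dim)
        joint = torch.tanh(self.proj_feat(regions) + self.proj_query(query).unsqueeze(1))
        weights = F.softmax(self.score(joint).squeeze(-1), dim=-1)   # (batch, k)
        return (weights.unsqueeze(-1) * regions).sum(dim=1)          # weighted average
```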
Image captioning models are typically trained on human-annotated ground-truth captions, which can produce accurate but generic captions. To improve the distinctiveness of captioning models, we first propose a series of metrics that use the large-scale vision-language pre-trained model CLIP to evaluate the distinctiveness of captions. We then propose a simple and effective training strategy that trains the model by making comparisons within groups of similar images. We conduct extensive experiments on a variety of existing models to demonstrate the wide applicability of our strategy and the agreement of the metric-based results with human evaluation. By comparing the performance of our best model with existing state-of-the-art models, we claim that our model achieves a new state of the art for the distinctiveness objective.
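A hypothetical sketch of a CLIP-based distinctiveness score in the spirit of the abstract (not the paper's exact metric), using the public openai/CLIP package; the margin formulation and function name are assumptions:

```python
import torch
import clip  # https://github.com/openai/CLIP

def distinctiveness_score(caption, target_image, similar_images, model, device="cpu"):
    """Margin between the caption's CLIP similarity to its own image and its
    average similarity to a group of similar images (hypothetical metric).

    target_image / similar_images: preprocessed image tensors of shape (3, H, W).
    """
    with torch.no_grad():
        text = model.encode_text(clip.tokenize([caption]).to(device)).float()
        imgs = model.encode_image(torch.stack([target_image] + similar_images).to(device)).float()
        text = text / text.norm(dim=-1, keepdim=True)
        imgs = imgs / imgs.norm(dim=-1, keepdim=True)
        sims = (imgs @ text.T).squeeze(-1)        # cosine similarity per image
    return (sims[0] - sims[1:].mean()).item()
```

Here `model` would come from `model, preprocess = clip.load("ViT-B/32")`, with images run through `preprocess` beforehand.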
Automatically generating a description of an image in natural language is called image captioning. It is an active research topic that lies at the intersection of two major fields in artificial intelligence: computer vision and natural language processing. Image captioning is one of the important challenges in image understanding, since it requires not only recognizing the salient objects in an image but also their attributes and the ways they interact. The system must then generate a syntactically and semantically correct caption that describes the image content in natural language. Given the significant progress of deep learning models and their ability to effectively encode large numbers of images and generate correct sentences, several neural-based captioning approaches have recently been proposed, each trying to achieve better accuracy and caption quality. This paper presents an encoder-decoder image captioning system in which the encoder extracts the spatial and global features of each region of the image using ResNet-101 as the backbone. This stage is followed by a refining model that uses an attention-on-attention mechanism to extract the visual features of the target image objects and then determine their interactions. The decoder consists of an attention-based recurrent module and a reflective attention module, which collaboratively apply attention to the visual and textual features to enhance the decoder's ability to model long-term sequential dependencies. Extensive experiments on two benchmark datasets (MSCOCO and Flickr30k) show the effectiveness of the proposed approach and the high quality of the generated captions.
Automated audio captioning is a cross-modal translation task for describing the content of audio clips with natural language sentences. This task has attracted increasing attention and substantial progress has been made in recent years. Captions generated by existing models are generally faithful to the content of audio clips, however, these machine-generated captions are often deterministic (e.g., generating a fixed caption for a given audio clip), simple (e.g., using common words and simple grammar), and generic (e.g., generating the same caption for similar audio clips). When people are asked to describe the content of an audio clip, different people tend to focus on different sound events and describe an audio clip diversely from various aspects using distinct words and grammar. We believe that an audio captioning system should have the ability to generate diverse captions, either for a fixed audio clip, or across similar audio clips. To this end, we propose an adversarial training framework based on a conditional generative adversarial network (C-GAN) to improve diversity of audio captioning systems. A caption generator and two hybrid discriminators compete and are learned jointly, where the caption generator can be any standard encoder-decoder captioning model used to generate captions, and the hybrid discriminators assess the generated captions from different criteria, such as their naturalness and semantics. We conduct experiments on the Clotho dataset. The results show that our proposed model can generate captions with better diversity as compared to state-of-the-art methods.
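A rough sketch of how the generator and the two hybrid discriminators in such a conditional GAN setup could be scored; the plain binary cross-entropy objectives and equal weighting are assumptions, not the paper's exact losses:

```python
import torch
import torch.nn.functional as F

def cgan_losses(gen_nat, gen_sem, real_nat, real_sem):
    """Sketch of the adversarial objectives with two hybrid discriminators.

    gen_*:  discriminator logits for generated captions
    real_*: discriminator logits for human captions
    *_nat:  "naturalness" discriminator, *_sem: audio-caption semantic match.
    """
    ones, zeros = torch.ones_like(gen_nat), torch.zeros_like(gen_nat)
    # Discriminators: push human captions toward 1, generated captions toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(real_nat, torch.ones_like(real_nat))
              + F.binary_cross_entropy_with_logits(gen_nat, zeros)
              + F.binary_cross_entropy_with_logits(real_sem, torch.ones_like(real_sem))
              + F.binary_cross_entropy_with_logits(gen_sem, zeros))
    # Generator: fool both discriminators.
    g_loss = (F.binary_cross_entropy_with_logits(gen_nat, ones)
              + F.binary_cross_entropy_with_logits(gen_sem, ones))
    return d_loss, g_loss
```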
Connecting vision and language plays an essential role in generative intelligence. For this reason, a large body of research has been devoted to image captioning, i.e., describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. Over these years, both components have evolved considerably through the exploitation of object regions and attributes, the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, despite the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims to provide a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most influential technical innovations in architectures and training strategies. Moreover, many variants of the problem and their open challenges are discussed. The ultimate goal of this work is to serve as a tool for understanding the existing literature and highlighting future directions for a research area where computer vision and natural language processing can find an optimal synergy.
Transformer-based architectures represent the state of the art in sequence modeling tasks like machine translation and language understanding. Their applicability to multi-modal contexts like image captioning, however, is still largely under-explored. With the aim of filling this gap, we present M², a Meshed Transformer with Memory for Image Captioning. The architecture improves both the image encoding and the language generation steps: it learns a multi-level representation of the relationships between image regions integrating learned a priori knowledge, and uses a mesh-like connectivity at decoding stage to exploit low- and high-level features. Experimentally, we investigate the performance of the M² Transformer and different fully-attentive models in comparison with recurrent ones. When tested on COCO, our proposal achieves a new state of the art in single-model and ensemble configurations on the "Karpathy" test split and on the online test server. We also assess its performance when describing objects unseen in the training set. Trained models and code for reproducing the experiments are publicly available.
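One way to read the memory-augmented encoder attention, sketched in PyTorch: learned memory slots are appended to the keys and values of region-level self-attention, so the encoder can attend to a priori knowledge that is not present in the current image. The slot count and dimensions below are assumptions:

```python
import torch
import torch.nn as nn

class MemoryAugmentedAttention(nn.Module):
    """Self-attention over image regions with learned memory slots (sketch)."""

    def __init__(self, d_model, n_heads, n_memory=40):
        super().__init__()
        self.mem_k = nn.Parameter(torch.randn(1, n_memory, d_model) / d_model ** 0.5)
        self.mem_v = nn.Parameter(torch.randn(1, n_memory, d_model) / d_model ** 0.5)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, regions):
        # regions: (batch, k, d_model) encoded region features
        b = regions.size(0)
        keys = torch.cat([regions, self.mem_k.expand(b, -1, -1)], dim=1)
        values = torch.cat([regions, self.mem_v.expand(b, -1, -1)], dim=1)
        out, _ = self.attn(regions, keys, values)
        return out
```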
Automated audio captioning is a cross-modal translation task that aims to generate natural language descriptions for given audio clips. This task has received increasing attention in recent years with the release of freely available datasets, and the problem has been addressed predominantly with deep learning techniques. Numerous approaches have been proposed, such as investigating different neural network architectures, exploiting auxiliary information such as keywords or sentence information to guide caption generation, and employing different training strategies, which have greatly facilitated the development of this field. In this paper, we present a comprehensive review of the published contributions to automated audio captioning, from a variety of existing approaches to evaluation metrics and datasets. We also discuss open challenges and envisage possible future research directions.
Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.
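A minimal PyTorch sketch of such a likelihood-trained encoder-decoder captioner (image features in, LSTM decoder out); the architectural details here are assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ShowAndTell(nn.Module):
    """CNN-feature encoder + LSTM decoder trained by maximum likelihood (sketch)."""

    def __init__(self, feat_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, captions):
        # image_feats: (batch, feat_dim), captions: (batch, T) token ids
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        hidden, _ = self.lstm(self.embed(captions[:, :-1]), (h0, c0))
        logits = self.out(hidden)                     # predict the next token
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), captions[:, 1:].reshape(-1))
```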
Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.
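A sketch of the scheduled-sampling choice at a single decoding step; the annealing schedule for `epsilon` and the tensor types are assumptions:

```python
import torch

def scheduled_sampling_inputs(gold_prev, model_prev, epsilon):
    """Pick the previous token to feed the decoder at this step (sketch).

    gold_prev:  (batch,) ground-truth previous tokens
    model_prev: (batch,) tokens generated by the model at the previous step
    epsilon:    probability of using the ground truth; annealed from 1 toward 0
                over training (the curriculum).
    """
    use_gold = torch.rand_like(gold_prev, dtype=torch.float) < epsilon
    return torch.where(use_gold, gold_prev, model_prev)
```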
Image captioning transforms complex visual information into abstract natural language, which can help computers understand the world quickly. However, due to the complexity of real environments, the task requires identifying key objects and their relationships and then generating natural language from them. The whole process involves a visual understanding module and a language generation module, which makes designing deep neural networks for it more challenging than for other tasks. Neural architecture search (NAS) has shown its importance in various image recognition tasks, and RNNs play an important role in image captioning. We introduce AutoCaption, an automatic method for better designing the decoder module of an image captioning model, in which NAS is used to automatically design a decoder module called AutoRNN. We design AutoRNN efficiently and automatically using a reinforcement learning method based on shared parameters. The search space of AutoCaption includes both the connections between layers and the operations within layers, which allows AutoRNN to express more architectures; in particular, standard RNNs are equivalent to a subset of this search space. Experiments on the MSCOCO dataset show that our AutoCaption model achieves better performance than traditional hand-designed methods.
Image captioning is a current research task for describing image content using the objects in a scene and their relationships. To tackle this task, two important research areas are used: artificial vision and natural language processing. In image captioning, as in any computational intelligence task, performance metrics are crucial for knowing how well (or badly) a method performs. In recent years, it has been observed that classical n-gram-based metrics are not sufficient to capture the semantics and the critical meaning needed to describe the content of an image. In order to assess the more recent set of metrics, in this manuscript we present an evaluation of several image captioning metrics and a comparison among them using the well-known MS COCO dataset. For this, we designed two scenarios: 1) a set of artificially constructed captions, and 2) a comparison of some state-of-the-art image captioning methods. We try to answer the following questions: Do the current metrics help to produce high-quality captions? How do the actual metrics compare to each other? What do the metrics really measure?
Video-to-Text (VTT) is the task of automatically generating descriptions for short audio-visual video clips, which can, for instance, support visually impaired people in understanding the scenes of YouTube videos. Transformer architectures have shown great performance in machine translation and image captioning, yet a straightforward and reproducible application to VTT has been lacking. Moreover, different strategies and proposals for video description, including exploiting the accompanying audio with fully self-attentional networks, have not been studied comprehensively. We therefore explore promising approaches from image captioning and video processing by building a straightforward transformer architecture and applying it to VTT. In addition, we present a new way of synchronizing audio and video features in transformers, which we call Fractional Positional Encoding (FPE). We run multiple experiments on the VATEX dataset to determine a configuration that is applicable to unseen datasets and helps describe short video clips in natural language, improving CIDEr and BLEU-4 scores by 37.13 and 12.83 points compared with a vanilla transformer network and reaching the best results on the MSR-VTT and MSVD datasets. Furthermore, FPE helps to increase the CIDEr score by 8.6%.
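A speculative sketch of a fractional positional encoding, assuming the idea is to rescale positions so that two modalities of different lengths (e.g., audio and video frames) share a common time axis; the paper's exact formulation may differ:

```python
import torch

def fractional_positional_encoding(seq_len, d_model, ref_len):
    """Sinusoidal encoding evaluated at fractional positions (sketch).

    Positions are rescaled by ref_len / seq_len so a sequence of seq_len steps
    spans the same range as a reference sequence of ref_len steps.
    Assumes an even d_model.
    """
    pos = torch.arange(seq_len, dtype=torch.float32) * (ref_len / seq_len)
    dim = torch.arange(0, d_model, 2, dtype=torch.float32)
    angle = pos.unsqueeze(1) / torch.pow(10000.0, dim / d_model)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe
```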
Existing approaches to image captioning usually generate sentences word by word from left to right and are limited to local context, including the given image and the history of generated words. Many studies have aimed to exploit global information during decoding, for example through iterative refinement. However, how to effectively and efficiently incorporate future context remains under-explored. To answer this question, inspired by non-autoregressive image captioning (NAIC), which can exploit relations on both sides through a modified mask operation, we aim to graft this advance onto conventional autoregressive image captioning (AIC) models while maintaining inference efficiency, without extra time cost. Specifically, the AIC and NAIC models are first combined under a shared visual encoder, forcing the visual encoder to contain sufficient and valid future context; the AIC model is then encouraged to capture the causal dynamics that the NAIC model exchanges across layers on its unconfident words, following a teacher-student paradigm optimized with a distribution calibration training objective. Empirical evidence shows that our proposed approach clearly surpasses state-of-the-art baselines in both automatic metrics and human evaluation on the MS COCO benchmark. The source code is available at: https://github.com/feizc/future-caption.
Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
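A sketch contrasting the deterministic ("soft") and stochastic ("hard") context vectors described above; shapes and the single-sample hard variant are assumptions:

```python
import torch

def context_vector(features, alpha, hard=False):
    """Compute the attention context for one decoding step (sketch).

    features: (k, d) annotation vectors from the convolutional encoder
    alpha:    (k,) attention weights produced by the attention network
    """
    if hard:
        # Stochastic "hard" attention: sample one location; trained in the
        # paper by maximizing a variational lower bound.
        idx = torch.multinomial(alpha, 1)
        return features[idx.squeeze(0)]
    # Deterministic "soft" attention: expectation under alpha, trainable
    # end-to-end with standard backpropagation.
    return (alpha.unsqueeze(1) * features).sum(dim=0)
```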
Attention mechanisms are widely used in current encoder/decoder frameworks of image captioning, where a weighted average on encoded vectors is generated at each time step to guide the caption decoding process. However, the decoder has little idea of whether or how well the attended vector and the given attention query are related, which could lead the decoder to produce misleading results. In this paper, we propose an "Attention on Attention" (AoA) module, which extends the conventional attention mechanisms to determine the relevance between attention results and queries. AoA first generates an "information vector" and an "attention gate" using the attention result and the current context, then adds another attention by applying element-wise multiplication to them and finally obtains the "attended information", the expected useful knowledge. We apply AoA to both the encoder and the decoder of our image captioning model, which we name AoA Network (AoANet). Experiments show that AoANet outperforms all previously published methods and achieves a new state-of-the-art performance of 129.8 CIDEr-D score on the MS COCO "Karpathy" offline test split and 129.6 CIDEr-D (C40) score on the official online testing server. Code is available at https://github.com/husthuaan/AoANet.
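A minimal PyTorch sketch of the AoA module as described: an information vector and a sigmoid attention gate are computed from the attention result and the query and combined element-wise; concatenation followed by linear layers is used here as an equivalent parameterization of the two projections:

```python
import torch
import torch.nn as nn

class AttentionOnAttention(nn.Module):
    """Attention-on-Attention module (sketch)."""

    def __init__(self, d_model):
        super().__init__()
        self.info = nn.Linear(2 * d_model, d_model)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, attn_result, query):
        # attn_result, query: (batch, d_model)
        x = torch.cat([attn_result, query], dim=-1)
        i = self.info(x)                  # "information vector"
        g = torch.sigmoid(self.gate(x))   # "attention gate"
        return g * i                      # "attended information"
```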
Discriminativeness is a desirable feature of image captions: captions should describe the characteristic details of input images. However, recent high-performing captioning models, which are trained with reinforcement learning (RL), tend to generate overly generic captions despite their high performance in various other criteria. First, we investigate the cause of the unexpectedly low discriminativeness and show that RL has a deeply rooted side effect of limiting the output words to high-frequency words. The limited vocabulary is a severe bottleneck for discriminativeness as it is difficult for a model to describe the details beyond its vocabulary. Then, based on this identification of the bottleneck, we drastically recast discriminative image captioning as a much simpler task of encouraging low-frequency word generation. Hinted by long-tail classification and debiasing methods, we propose methods that easily switch off-the-shelf RL models to discriminativeness-aware models with only a single-epoch fine-tuning on the part of the parameters. Extensive experiments demonstrate that our methods significantly enhance the discriminativeness of off-the-shelf RL models and even outperform previous discriminativeness-aware methods with much smaller computational costs. Detailed analysis and human evaluation also verify that our methods boost the discriminativeness without sacrificing the overall quality of captions.
Program synthesis or code generation aims to generate a program that satisfies a problem specification. Recent approaches using large-scale pretrained language models (LMs) have shown promising results, yet they have some critical limitations. In particular, they often follow a standard supervised fine-tuning procedure, training code generation models only from pairs of natural-language problem descriptions and ground-truth programs. Such a paradigm largely ignores some important but potentially useful signals in the problem specification, such as unit tests, and therefore often leads to poor performance when solving complex unseen coding tasks. To address these limitations, we propose CodeRL, a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning (RL). Specifically, during training we treat the code-generating LM as an actor network and introduce a critic network that is trained to predict the functional correctness of generated programs and provide dense feedback signals to the actor. During inference, we introduce a new generation procedure with a critical sampling strategy that allows the model to automatically regenerate programs based on feedback from example unit tests and critic scores. For the model backbone, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives, larger model sizes, and better pretraining data. Our method not only achieves new SOTA results on the challenging APPS benchmark, but also shows strong zero-shot transfer capability, with new SOTA results on the simpler MBPP benchmark.
The Controllable Image Captioning (CIC) task aims to generate captions conditioned on designated control signals. Several structure-related control signals have been proposed to control the semantic structure of sentences, such as sentence length and part-of-speech tag sequences. However, because accuracy-based rewards mainly focus on content rather than semantic structure, existing reinforcement training methods are not applicable to structure-related CIC models. The lack of reinforcement training leads to exposure bias and an inconsistency between the optimization objective and the evaluation metric. In this paper, we propose a novel reinforcement training method for structure-related control signals: Self-Annotated Training (SAT), which improves both the accuracy and the controllability of CIC models. In SAT, a Recursive Annotation Mechanism (RAM) is designed to force the input control signal to match the actual output sentence. In addition, we propose an extra alignment reward to fine-tune the CIC models trained with SAT, which further improves the controllability of the models. Extensive experiments with different structure-related control signals and different baseline models on the MSCOCO benchmark demonstrate the effectiveness and generalizability of our method.
Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.