In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g., snow, sky, lawn) and the corresponding words in sentences makes it possible to capture the fine-grained interplay between vision and language and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture a limited number of semantic alignments, which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments, using both image regions and words in a sentence as context, and to infer image-text similarity. Our approach achieves state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relative in text retrieval from an image query, and 18.2% relative in image retrieval with a text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relative and image retrieval by 16.6% relative (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.
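As a rough illustration of the idea, the sketch below computes a text-to-image stacked-attention similarity in plain numpy: each word attends over region features, and the per-word relevance scores are pooled with LogSumExp. The temperatures, feature sizes, and pooling choice are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def l2norm(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def image_text_similarity(regions, words, lambda_softmax=9.0, lambda_lse=6.0):
    """regions: (k, d) region features; words: (n, d) word features."""
    v, e = l2norm(regions), l2norm(words)
    s = e @ v.T                                   # (n, k) word-region cosine similarities
    attn = np.exp(lambda_softmax * s)
    attn /= attn.sum(axis=1, keepdims=True)       # each word attends over the regions
    attended = attn @ v                           # (n, d) attended region context per word
    r = np.sum(l2norm(attended) * e, axis=1)      # relevance of each word to its context
    return np.log(np.exp(lambda_lse * r).sum()) / lambda_lse   # LogSumExp pooling over words

sim = image_text_similarity(np.random.randn(36, 1024), np.random.randn(12, 1024))
```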
Image-text retrieval (ITR) is a challenging task in the field of multimodal information processing due to the semantic gap between different modalities. In recent years, researchers have made great progress in exploring accurate alignment between image and text. However, existing works mainly focus on the fine-grained alignment between image regions and sentence fragments, which ignores the guiding significance of contextual background information. In fact, integrating local fine-grained information with global contextual background information can provide more semantic cues for retrieval. In this paper, we propose a novel Hierarchical Graph Alignment Network (HGAN) for image-text retrieval. First, to capture comprehensive multimodal features, we construct feature graphs for the image and text modalities respectively. Then, a multi-granularity shared space is established with a designed Multi-granularity Feature Aggregation and Rearrangement (MFAR) module, which enhances the semantic correspondence between local and global information and obtains more accurate feature representations for the image and text modalities. Finally, the ultimate image and text features are further refined through three-level similarity functions to achieve hierarchical alignment. To validate the proposed model, we perform extensive experiments on the MS-COCO and Flickr30K datasets. Experimental results show that the proposed HGAN outperforms the state-of-the-art methods on both datasets, which demonstrates the effectiveness and superiority of our model.
In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different subregions of the image by paying attention to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It shows, for the first time, that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.
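The following is a minimal, hypothetical numpy sketch of word-level attention over image subregions, the mechanism the abstract describes at a high level; the projection matrix, dimensions, and function names are made up for illustration and are not AttnGAN's actual layers.

```python
import numpy as np

def word_context(word_feats, region_feats, proj):
    """word_feats: (T, d_w); region_feats: (N, d_h); proj: (d_w, d_h), an assumed projection."""
    words = word_feats @ proj                     # map words into the image feature space
    scores = region_feats @ words.T               # (N, T) relevance of each word to each subregion
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)       # attention over words for each subregion
    return attn @ words                           # (N, d_h) word-context vector per subregion

ctx = word_context(np.random.randn(18, 256), np.random.randn(64, 128), np.random.randn(256, 128))
```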
Learning the fine-grained interplay between vision and language enables a more accurate understanding in vision-language tasks. However, it remains challenging to extract key image regions according to the texts for semantic alignment. Most existing works are either limited by the text-agnostic and redundant regions obtained with frozen detectors, or fail to scale further due to their heavy reliance on scarce grounding (gold) data to pre-train detectors. To solve these problems, we propose the Self-Locator Aided Network (SLAN) for cross-modal understanding tasks without any extra gold data. SLAN consists of a region filter and a region adaptor to localize regions of interest conditioned on different texts. By aggregating cross-modal information, the region filter selects key regions and the region adaptor updates their coordinates with text guidance. With detailed region-word alignments, SLAN can be easily generalized to many downstream tasks. It achieves fairly competitive results on five cross-modal understanding tasks (e.g., 85.7% and 69.2% on COCO image-to-text and text-to-image retrieval, surpassing previous SOTA methods). SLAN also demonstrates strong zero-shot and fine-tuned transferability to two localization tasks.
Two loss functions are widely used for vision-language retrieval, namely the triplet loss and the contrastive learning loss, both of which essentially minimize the difference between the similarities of negative pairs and positive pairs. More specifically, the triplet loss with hard negative mining (Triplet-HN), which is widely used in existing retrieval models, easily falls into local minima during training. On the other hand, the vision-language contrastive learning loss (VLC), which is widely used in vision-language pre-training, has been shown to achieve significant performance gains on vision-language retrieval, but the performance of fine-tuning with VLC on small datasets is not satisfactory. This paper proposes a unified loss of pair similarity optimization for vision-language retrieval, providing a powerful tool for understanding existing loss functions. Our unified loss includes the hard-sample mining strategy of VLC and introduces the margin used by the triplet loss for better similarity separation. It is shown that both Triplet-HN and VLC are special forms of our unified loss. Compared with Triplet-HN, our unified loss has a faster convergence rate. Compared with VLC, our unified loss is more discriminative and generalizes better in downstream fine-tuning tasks. Experiments on image-text and video-text retrieval benchmarks show that our unified loss can significantly improve the performance of state-of-the-art retrieval models.
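A minimal numpy sketch of the general family described above: a softmax-over-negatives objective with both a temperature and a margin. The exact functional form, margin, and temperature here are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def unified_retrieval_loss(sim, margin=0.2, tau=20.0):
    """sim: (B, B) image-text similarity matrix with matched pairs on the diagonal."""
    pos = np.diag(sim)
    diff_i2t = sim - pos[:, None] + margin        # negatives vs. positive for each image row
    diff_t2i = sim - pos[None, :] + margin        # negatives vs. positive for each text column
    np.fill_diagonal(diff_i2t, -np.inf)           # exclude the positive pair itself
    np.fill_diagonal(diff_t2i, -np.inf)
    # Smooth maximum over negatives: a large tau approaches a hinge on the hardest
    # negative (Triplet-HN-like); margin = 0 resembles a temperature-scaled contrastive loss.
    i2t = np.log1p(np.exp(tau * diff_i2t).sum(axis=1)) / tau
    t2i = np.log1p(np.exp(tau * diff_t2i).sum(axis=0)) / tau
    return (i2t + t2i).mean()

loss = unified_retrieval_loss(np.random.rand(32, 32))
```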
We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.
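A possible numpy sketch of this kind of alignment objective: an image-sentence score that sums each word's best-matching region similarity, plugged into a bidirectional hinge ranking loss. The thresholding, margin, and dimensions are assumptions, not the paper's exact formulation.

```python
import numpy as np

def alignment_score(regions, words):
    """Image-sentence score: each word keeps its best-matching region similarity."""
    return np.maximum(regions @ words.T, 0).max(axis=0).sum()

def ranking_loss(image_sets, word_sets, margin=1.0):
    """image_sets[k]: (k_regions, d) arrays; word_sets[l]: (n_words, d) arrays."""
    B = len(image_sets)
    S = np.array([[alignment_score(image_sets[k], word_sets[l]) for l in range(B)]
                  for k in range(B)])
    pos = np.diag(S)
    loss_img = np.maximum(0, margin + S - pos[:, None])   # rank sentences for each image
    loss_txt = np.maximum(0, margin + S - pos[None, :])   # rank images for each sentence
    np.fill_diagonal(loss_img, 0)
    np.fill_diagonal(loss_txt, 0)
    return (loss_img.sum() + loss_txt.sum()) / B

imgs = [np.random.randn(36, 512) for _ in range(4)]
caps = [np.random.randn(np.random.randint(8, 15), 512) for _ in range(4)]
loss = ranking_loss(imgs, caps)
```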
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodal inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering.
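To make the conditional-masking idea concrete, here is a hedged numpy sketch in which only one modality is corrupted at a time while the other stays fully observed; the mask rate, the MASK_ID placeholder, and the zeroing of region features are illustrative assumptions rather than UNITER's actual preprocessing.

```python
import numpy as np

MASK_ID = 103  # hypothetical placeholder id for a masked text token

def conditional_mask(text_ids, region_feats, mask_text, rate=0.15, rng=np.random):
    """Corrupt exactly one modality; the other is kept fully observed."""
    text_ids, region_feats = text_ids.copy(), region_feats.copy()
    if mask_text:                                   # masked language modeling step
        m = rng.rand(len(text_ids)) < rate
        text_ids[m] = MASK_ID
    else:                                           # masked region modeling step
        m = rng.rand(len(region_feats)) < rate
        region_feats[m] = 0.0                       # zero out the selected region features
    return text_ids, region_feats, m

ids, feats, mask = conditional_mask(np.arange(20), np.random.randn(36, 2048), mask_text=True)
```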
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, we apply the same approach to VQA and obtain first place in the 2017 VQA Challenge.
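A minimal numpy sketch of top-down attention over bottom-up region features: a task-specific context vector scores each region and the normalized weights produce an attended image feature. The scoring function and dimensions are stand-ins, not the paper's network.

```python
import numpy as np

def top_down_attend(region_feats, context, W_v, W_c, w_a):
    """region_feats: (k, d_v) bottom-up features; context: (d_c,) task representation."""
    scores = np.tanh(region_feats @ W_v + context @ W_c) @ w_a   # (k,) relevance per region
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                     # normalized attention weights
    return weights @ region_feats                                # attended image feature

k, d_v, d_c, d_h = 36, 2048, 512, 256
v_hat = top_down_attend(np.random.randn(k, d_v), np.random.randn(d_c),
                        np.random.randn(d_v, d_h), np.random.randn(d_c, d_h),
                        np.random.randn(d_h))
```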
Large-scale vision-language pre-training has shown impressive advances on a wide range of downstream tasks. Existing methods mainly model cross-modal alignment through the similarity of global representations of images and texts, or through advanced cross-modal attention over image and text features. However, they cannot explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, since only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aligned vision-language pre-training framework that learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently compute the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on image-text retrieval benchmarks. Without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a new direction of learning fine-grained semantics from large-scale raw image-text pairs.
Image-text matching is playing a leading role in tasks that involve the joint understanding of vision and language. In the literature, this task is often used as a pre-training objective for architectures able to jointly process images and texts. However, it also has a direct downstream application: cross-modal retrieval, which consists of finding images related to a given query text, or vice versa. Solving this task is critical for cross-modal search engines. Many recent methods have proposed effective solutions to the image-text matching problem, mostly using recent large vision-language (VL) Transformer networks. However, these models are often computationally expensive, especially at inference time. This prevents their adoption in large-scale cross-modal retrieval scenarios, where results should be provided to the user almost instantaneously. In this paper, we propose to fill the gap between effectiveness and efficiency by proposing an ALign And DIstill Network (ALADIN). ALADIN first produces high-effectiveness scores by aligning images and texts at a fine-grained level. Then, it learns a shared embedding space, in which efficient kNN search can be performed, by distilling the relevance scores obtained from the fine-grained alignments. We achieve remarkable results on MS-COCO, showing that our method can compete with state-of-the-art VL Transformers while being almost 90 times faster. The code to reproduce our results is available at https://github.com/mesnico/aladin.
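A very small numpy sketch of the distillation idea: cheap dot-product scores between global embeddings are regressed toward the fine-grained alignment scores so that retrieval can fall back on efficient kNN search. The plain MSE target is an assumption, not ALADIN's exact objective.

```python
import numpy as np

def distillation_loss(img_emb, txt_emb, teacher_scores):
    """img_emb, txt_emb: (B, d) global embeddings; teacher_scores: (B, B) fine-grained scores."""
    student = img_emb @ txt_emb.T          # cheap dot-product scores usable for kNN search
    return np.mean((student - teacher_scores) ** 2)

loss = distillation_loss(np.random.randn(8, 256), np.random.randn(8, 256), np.random.rand(8, 8))
```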
Image-text retrieval (ITR) is challenging in bridging the visual and lingual modalities. Contrastive learning has been adopted by most prior arts. Beyond the limited number of negative image-text pairs, the capability of constrained contrastive learning is further limited by manually weighted negative pairs and by the unawareness of external knowledge. In this paper, we propose a novel Coupled Diversity-Sensitive Momentum Contrastive Learning (CODER) approach to improve cross-modal representations. First, a novel diversity-sensitive contrastive learning (DCL) architecture is invented. We introduce dynamic dictionaries for both modalities to enlarge the scale of image-text pairs, and diversity sensitivity is achieved by adaptive negative-pair weighting. Furthermore, CODER is designed with two branches. One learns instance-level embeddings from images/texts and also generates pseudo online clustering labels for its input images/texts based on their embeddings. Meanwhile, the other branch learns to query from a commonsense knowledge graph to form concept-level descriptors for both modalities. Afterwards, both branches leverage DCL to align the cross-modal embedding spaces, while an additional pseudo clustering label prediction loss is used to promote concept-level representation learning in the second branch. Extensive experiments on two popular benchmarks, i.e., MS-COCO and Flickr30K, validate that CODER remarkably outperforms state-of-the-art approaches.
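As a rough sketch of the dynamic-dictionary ingredient mentioned above, the snippet below keeps a FIFO queue of normalized embeddings that enlarges the pool of negatives seen by a contrastive objective; the queue size and update rule are illustrative assumptions, not the exact CODER design.

```python
import numpy as np

class EmbeddingQueue:
    """A FIFO dictionary of L2-normalized embeddings serving as extra negatives."""
    def __init__(self, dim, size=4096):
        self.queue = np.random.randn(size, dim)
        self.queue /= np.linalg.norm(self.queue, axis=1, keepdims=True)
        self.ptr = 0

    def enqueue(self, batch):
        """Overwrite the oldest entries with the newest batch of embeddings."""
        batch = batch / np.linalg.norm(batch, axis=1, keepdims=True)
        idx = (self.ptr + np.arange(len(batch))) % len(self.queue)
        self.queue[idx] = batch
        self.ptr = (self.ptr + len(batch)) % len(self.queue)

    def negatives(self, query):
        """Similarities of a query embedding against every queued negative."""
        return self.queue @ query

q = EmbeddingQueue(dim=256)
q.enqueue(np.random.randn(32, 256))
neg_sims = q.negatives(np.random.randn(256))
```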
Most existing methods in vision-language retrieval either compare the two modalities by their global feature vectors, which misses sufficient information and lacks interpretability, detect objects in images or videos and align the text with fine-grained elements, which relies on complex model designs, or model cross-attention interactions between visual and textual tokens, which suffer from low efficiency. To address these limitations, some recent works simply aggregate token-wise similarities to achieve fine-grained alignment, but they lack intuitive explanations and ignore the relationship between token-level features and global representations with high-level semantics. In this work, we rethink fine-grained cross-modal alignment and devise a new model-agnostic formulation for it. We also demystify recent popular works and subsume them into our scheme. Furthermore, inspired by optimal transport theory, we introduce TokenFlow, an instantiation of the proposed scheme. By modifying only the similarity function, the performance of our method is comparable to SOTA algorithms with heavy model designs on major video-text retrieval benchmarks. Visualizations further indicate that TokenFlow successfully leverages fine-grained information and achieves better interpretability.
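A small numpy sketch of token-level similarity aggregation of the kind this scheme covers: each text token keeps its best-matching visual token and the per-token scores are averaged into an instance-level similarity. The max-then-mean weighting is an assumed instantiation, not necessarily TokenFlow's.

```python
import numpy as np

def token_level_similarity(visual_tokens, text_tokens):
    """visual_tokens: (m, d); text_tokens: (n, d)."""
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    t = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    sim = t @ v.T                        # (n, m) token-to-token cosine similarities
    return sim.max(axis=1).mean()        # best visual match per text token, then average

score = token_level_similarity(np.random.randn(49, 512), np.random.randn(16, 512))
```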
In this work, we propose an open-vocabulary object detection method that, based on image-caption pairs, learns to detect novel object classes along with a given set of known classes. It is a two-stage training approach: the first stage uses a location-guided image-caption matching technique to learn class labels for both novel and known classes in a weakly supervised manner, and the second stage specializes the model for the object detection task using known class annotations. We show that a simple language model fits better than a large contextualized language model for detecting novel objects. Moreover, we introduce a consistency-regularization technique to better exploit the image-caption pair information. Our method compares favorably to existing open-vocabulary detection approaches while being data-efficient. Source code is available at https://github.com/lmb-freiburg/locov.
The attention mechanism has been used as an important component across vision-and-language (VL) tasks to bridge the semantic gap between visual and textual features. While attention has been widely used in VL tasks, the capability of different attention alignment calculations in bridging the semantic gap between visual and textual clues has not been examined. In this study, we conduct a comprehensive analysis of the role of attention alignment by looking into the attention score calculation methods and examining how well they actually represent the visual regions and the importance of textual tokens for the global assessment. We also analyze which conditions make the attention score calculation mechanisms more (or less) interpretable, and how they may affect model performance on three different VL tasks, including visual question answering, text-to-image generation, and text-and-image matching (sentence and image retrieval). Our analysis is the first of its kind and provides insight into the importance of each attention alignment score calculation when applied at the training phase of VL tasks, which is commonly neglected in attention-based cross-modal models and/or pre-trained models.
Text-based person search is a challenging task that aims to search for pedestrian images with the same identity in an image gallery given a query text description. In recent years, text-based person search has made good progress, and state-of-the-art methods achieve superior performance by learning local fine-grained correspondences between images and texts. However, existing methods explicitly extract image parts and text phrases from images and texts through hand-crafted splits or external tools, and then conduct complex cross-modal local matching. Moreover, existing methods rarely consider the information inequality between modalities caused by image-specific information. In this paper, we propose an efficient joint Information and Semantic Alignment Network (ISANet) for text-based person search. Specifically, we first design an image-specific information suppression module, which suppresses the image background and environmental factors by relation-guided localization and channel attention filtering, respectively. This design can effectively alleviate the information-inequality problem and realize information alignment between images and texts. Secondly, we propose an implicit local alignment module to adaptively aggregate image and text features into a set of modality-shared semantic topic centers, and implicitly learn local fine-grained correspondences between images and texts without additional supervision information and complex cross-modal interactions. Moreover, global alignment is introduced as a supplement to the local perspective. Extensive experiments on multiple databases demonstrate the effectiveness and superiority of the proposed ISANet.
Previous vision-language pre-training models mainly construct multi-modal inputs with tokens and objects (pixels) and then perform cross-modality interaction between them. We argue that inputs of only tokens and objects limit high-level semantic alignment such as phrase-to-region grounding. Meanwhile, multi-level alignments are inherently consistent and able to facilitate representation learning synergistically. Therefore, in this paper, we propose to learn multi-level semantic alignment for vision-language pre-training (MVPTR). In MVPTR, we follow the nested structure of both modalities to introduce concepts as high-level semantics. To ease the learning from multi-modal, multi-level inputs, our framework is split into two stages: the first stage focuses on intra-modality multi-level representation learning, and the second enforces cross-modality interactions via both coarse-grained and fine-grained semantic alignment tasks. In addition to the commonly used image-text matching and masked language modeling tasks, we introduce a masked concept recovery task in the first stage to enhance concept representation learning, and two further tasks in the second stage to explicitly encourage multi-level alignments across modalities. Our code is available at https://github.com/junction4nako/mvp_pytorch.
We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the two modalities' representations. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state of the art on recall-at-one and pointing-hand accuracy.
The prevailing framework for matching multimodal inputs is based on a two-stage process: 1) detecting proposals with an object detector and 2) matching text queries with proposals. Existing two-stage solutions mostly focus on the matching step. In this paper, we argue that these methods overlook an obvious mismatch between the roles of proposals in the two stages: they generate proposals solely based on the detection confidence (i.e., query-agnostic), hoping that the proposals contain all instances mentioned in the text query (i.e., query-aware). Due to this mismatch, chances are that proposals relevant to the text query are suppressed during the filtering process, which in turn bounds the matching performance. To this end, we propose VL-NMS, which is the first method to yield query-aware proposals at the first stage. VL-NMS regards all mentioned instances as critical objects, and introduces a lightweight module to predict a score for aligning each proposal with a critical object. These scores can guide the NMS operation to filter out proposals irrelevant to the text query, increasing the recall of critical objects, resulting in a significantly improved matching performance. Since VL-NMS is agnostic to the matching step, it can be easily integrated into any state-of-the-art two-stage matching methods. We validate the effectiveness of VL-NMS on two multimodal matching tasks, namely referring expression grounding and image-text matching. Extensive ablation studies on several baselines and benchmarks consistently demonstrate the superiority of VL-NMS.
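The sketch below illustrates the general idea in numpy: proposals are re-scored with a query-relevance term before standard NMS, so boxes irrelevant to the text query are more likely to be suppressed. The fusion rule (a simple product) and the IoU threshold are assumptions for illustration, not VL-NMS's actual module.

```python
import numpy as np

def query_aware_nms(boxes, det_scores, rel_scores, iou_thr=0.5):
    """boxes: (k, 4) in xyxy format; det_scores, rel_scores: (k,) values in [0, 1]."""
    scores = det_scores * rel_scores             # fuse detection confidence with query relevance
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # IoU of the current best box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_rest - inter + 1e-8)
        order = rest[iou <= iou_thr]
    return keep
```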
Recently, cross-modal pre-training has become a hotspot because of its wide application in various downstream research areas, including retrieval, captioning, question answering and so on. However, existing methods adopt one-stream pre-training models to explore a united vision-language representation for cross-modal retrieval, which easily suffers from computational explosion. Moreover, although conventional two-stream structures are quite efficient, they still lack vital cross-modal interactions, resulting in low performance. Motivated by these challenges, we put forward a Contrastive Cross-Modal Knowledge Sharing Pre-training (COOKIE) approach to learn joint text-image representations. Structurally, COOKIE adopts the traditional two-stream structure because of its acceptable time consumption. To overcome the inherent defects of the two-stream structure mentioned above, we elaborately design two effective modules. Specifically, the first module is a weight-sharing transformer built on top of the visual and textual encoders, aiming to semantically align text and image. This design makes the visual and textual paths focus on the same semantics. The other is three specially designed contrastive learning objectives, aiming to share knowledge between the different models. The shared cross-modal knowledge greatly advances the study of unimodal representations, thus promoting single-modal retrieval tasks. Extensive experimental results on multi-modal matching research, including cross-modal retrieval, text matching, and image retrieval, reveal the superiority of our pre-training model in both computational efficiency and statistical metrics.
Food is important to human daily life. In this paper, we are interested in learning structural representations for lengthy recipes, which can benefit the recipe generation and food cross-modal retrieval tasks. Different from common visual data, the food images here contain mixed ingredients and the target recipes are lengthy paragraphs, for which we have no annotations of structure information. To address these limitations, we propose a novel method to unsupervisedly learn the sentence-level tree structures of cooking recipes. Our approach brings together several novel ideas in a systematic framework: (1) exploiting an unsupervised learning approach to obtain sentence-level tree structure labels before training; (2) generating the target recipe's tree from an image, supervised by the tree structure labels learned in (1); and (3) integrating the learned tree structures into the recipe generation and food cross-modal retrieval processes. Our proposed model can produce good-quality sentence-level tree structures and coherent recipes. We achieve state-of-the-art recipe generation and food cross-modal retrieval performance on the benchmark Recipe1M dataset.