Vision-language (V+L) pretraining models have achieved great success in supporting multimedia applications by understanding the alignment between images and text. While existing vision-language pretraining models primarily focus on understanding objects in images or entities in text, they often ignore event-level alignment and its argument structure. In this work, we propose a contrastive learning framework to enforce vision-language pretraining models to comprehend events and their associated argument (participant) roles. To achieve this, we leverage text information extraction techniques to obtain event structural knowledge, and utilize multiple prompt functions to contrast difficult negative descriptions by manipulating event structures. We also design an event graph alignment loss based on optimal transport to capture event argument structures. In addition, we collect a large event-rich dataset (106,875 images) for pretraining, which provides a more challenging image retrieval benchmark for assessing the understanding of complicated and lengthy sentences. Experiments show that our zero-shot CLIP-Event outperforms the state-of-the-art supervised model in argument extraction on Multimedia Event Extraction, achieving a 5% absolute F-score gain in event extraction, as well as significant improvements on a variety of downstream tasks under zero-shot settings.
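As a rough illustration of the contrastive objective described above (not the authors' implementation), the sketch below scores an image against its correct event description and against structure-manipulated hard negatives; the encoders are abstracted away and all embeddings are random stand-ins.

```python
# A minimal sketch of a contrastive objective that treats structure-manipulated
# captions (e.g. with swapped argument roles) as hard negatives. Encoders are
# abstracted away: we operate on pre-computed embeddings.
import torch
import torch.nn.functional as F

def event_contrastive_loss(img_emb, pos_txt_emb, neg_txt_emb, temperature=0.07):
    """img_emb: (B, D) image embeddings
    pos_txt_emb: (B, D) embeddings of the correct event descriptions
    neg_txt_emb: (B, K, D) embeddings of K manipulated (hard negative) descriptions"""
    img_emb = F.normalize(img_emb, dim=-1)
    pos_txt_emb = F.normalize(pos_txt_emb, dim=-1)
    neg_txt_emb = F.normalize(neg_txt_emb, dim=-1)

    pos_logit = (img_emb * pos_txt_emb).sum(-1, keepdim=True) / temperature      # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", img_emb, neg_txt_emb) / temperature  # (B, K)

    logits = torch.cat([pos_logit, neg_logits], dim=1)      # (B, 1+K)
    target = torch.zeros(logits.size(0), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, target)

# Toy usage with random embeddings standing in for encoder outputs.
loss = event_contrastive_loss(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 3, 512))
print(loss.item())
```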
Recent years have witnessed a resurgence of knowledge engineering, featured by the fast growth of knowledge graphs. However, most existing knowledge graphs are represented with pure symbols, which hurts a machine's capability to understand the real world. The multi-modalization of knowledge graphs is an inevitable key step towards the realization of human-level machine intelligence. The results of this endeavor are Multi-modal Knowledge Graphs (MMKGs). In this survey of MMKGs constructed from texts and images, we first give definitions of MMKGs, followed by preliminaries on multi-modal tasks and techniques. We then systematically review the challenges, progress, and opportunities in the construction and application of MMKGs, with detailed analyses of the strengths and weaknesses of different solutions. We conclude this survey with open research problems relevant to MMKGs.
Video event extraction aims to detect salient events from a video and identify the arguments for each event as well as their semantic roles. Existing methods focus on capturing the overall visual scene of each frame, ignoring fine-grained argument-level information. Inspired by the definition of events as changes of states, we propose a novel framework to detect video events by tracking the changes in the visual states of all involved arguments, which are expected to provide the most informative evidence for the extraction of video events. In order to capture the visual state changes of arguments, we decompose them into changes in pixels within objects, displacements of objects, and interactions among multiple arguments. We further propose Object State Embedding, Object Motion-aware Embedding and Argument Interaction Embedding to encode and track these changes respectively. Experiments on various video event extraction tasks demonstrate significant improvements compared to state-of-the-art models. In particular, on verb classification, we achieve 3.49% absolute gains (19.53% relative gains) in F1@5 on Video Situation Recognition.
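A speculative sketch of how per-argument state changes might be encoded, assuming tracked ROI features and boxes are already available; the `ArgumentStateTracker` module, its layer sizes, and the GRU aggregation are illustrative choices, not the paper's architecture.

```python
# A rough sketch (not the authors' code) of encoding an argument's visual state
# changes from its tracked per-frame features and bounding boxes.
import torch
import torch.nn as nn

class ArgumentStateTracker(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512):
        super().__init__()
        self.state_proj = nn.Linear(feat_dim, hidden)   # appearance change inside the object
        self.motion_proj = nn.Linear(4, hidden)         # displacement of the object's box
        self.rnn = nn.GRU(hidden * 2, hidden, batch_first=True)

    def forward(self, roi_feats, boxes):
        """roi_feats: (T, feat_dim) per-frame features of one tracked argument
        boxes: (T, 4) per-frame boxes in (x1, y1, x2, y2)"""
        state_delta = roi_feats[1:] - roi_feats[:-1]   # change in pixels within the object
        motion_delta = boxes[1:] - boxes[:-1]          # displacement between frames
        x = torch.cat([self.state_proj(state_delta),
                       self.motion_proj(motion_delta)], dim=-1)
        _, h = self.rnn(x.unsqueeze(0))                # aggregate changes over time
        return h.squeeze(0).squeeze(0)                 # one embedding per argument

tracker = ArgumentStateTracker()
emb = tracker(torch.randn(8, 2048), torch.rand(8, 4))
print(emb.shape)
```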
Visual Entity Linking (VEL) is a task to link regions of images with their corresponding entities in Knowledge Bases (KBs), which is beneficial for many computer vision tasks such as image retrieval, image captioning, and visual question answering. However, existing VEL tasks either rely on textual data to complement multi-modal linking or only link objects to general entities, and thus fail to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input only consists of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to corresponding named entities in KBs. Since each entity often contains rich visual and textual information in KBs, we propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset, named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for each sub-task, and conduct experiments to verify the quality of the proposed dataset and the effectiveness of the baseline methods. We envision this work to be helpful for soliciting more work on VNEL in the future. The codes and datasets are publicly available at https://github.com/ict-bigdatalab/VNEL.
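A hedged baseline sketch for the V2TEL sub-task, using the Hugging Face CLIP implementation: a cropped visual mention is linked to the KB entity whose textual description scores highest in CLIP's joint space. The entity descriptions and the blank crop are toy placeholders, not part of the WIKIPerson dataset.

```python
# Link a cropped visual mention to the KB entity with the most similar textual
# description under a frozen CLIP model (a simple V2TEL-style baseline sketch).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def link_mention(mention_crop: Image.Image, entity_descriptions: list[str]) -> int:
    inputs = processor(text=entity_descriptions, images=mention_crop,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image  # (1, num_entities)
    return int(sims.argmax())                    # index of the best-matching KB entity

# Example: a cropped face region vs. two candidate person entries from a KB.
crop = Image.new("RGB", (224, 224))
print(link_mention(crop, ["Barack Obama, 44th U.S. president.",
                          "Michelle Obama, American attorney and author."]))
```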
Our goal in this research is to study a more realistic setting in which we can perform weakly supervised multi-modal instance-level product retrieval for fine-grained product categories. We first contribute the Product1M dataset and define two practical instance-level retrieval tasks to enable evaluation for price comparison and personalized recommendation. For both instance-level tasks, it is highly challenging to accurately pinpoint the product targets mentioned in visual-linguistic data and to effectively reduce the influence of irrelevant content. To address this, we train a more effective cross-modal pretraining model that can adaptively incorporate key concept information from multi-modal data by using an entity graph, whose nodes and edges represent entities and similarities, respectively. Specifically, a novel Entity-Graph Enhanced Cross-Modal Pretraining (EGE-CMP) model is proposed for instance-level commodity retrieval, which explicitly injects entity knowledge in both node-based and subgraph-based ways. The self-supervised hybrid-stream transformer reduces the confusion between different object contents, thereby effectively guiding the network to focus on entities with real semantics. Experimental results well verify the efficacy and generalizability of our EGE-CMP, which outperforms several SOTA cross-modal baselines such as CLIP, UNITER and CAPTURE.
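A minimal sketch of the kind of entity graph described above, where nodes are entity phrases and edges connect entities whose embeddings are sufficiently similar; the similarity threshold and random embeddings are illustrative only.

```python
# Build a toy entity graph: nodes are entity phrases, edges connect entities whose
# (assumed pre-computed) embeddings exceed a cosine-similarity threshold.
import torch
import torch.nn.functional as F

def build_entity_graph(entity_embs: torch.Tensor, threshold: float = 0.6):
    """entity_embs: (N, D), one embedding per extracted entity phrase.
    Returns an (N, N) adjacency matrix with self-loops removed."""
    normed = F.normalize(entity_embs, dim=-1)
    sims = normed @ normed.T
    adj = (sims > threshold).float()
    adj.fill_diagonal_(0)
    return adj

adj = build_entity_graph(torch.randn(6, 256))
print(adj)
```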
People say, "a picture is worth a thousand words." How, then, can we obtain rich information from an image? We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can do so without any extra cross-modal training. Thanks to the strong zero-shot capability of foundation models, we first construct a rich semantic representation of the image (e.g., image tags, object attributes/locations, captions) as a structured textual prompt, called visual clues, using the vision foundation models. Based on the visual clues, we use a large language model to produce a series of comprehensive descriptions for the visual content, which are then verified by the vision model again to select the candidate that best fits the image. We evaluate the quality of the generated descriptions with quantitative and qualitative measurements. The results demonstrate the effectiveness of such a structured semantic representation.
We introduce a zero-shot video captioning method that employs two frozen networks: the GPT-2 language model and the CLIP image-text matching model. The matching score is used to steer the language model toward generating a sentence whose average matching score over a subset of the video frames is high. Unlike zero-shot image captioning methods, our work considers the entire sentence at once. This is achieved by optimizing part of the prompt from scratch during the generation process, by modifying the representations of all other tokens in the prompt, and by repeating the process iteratively, gradually improving the specificity and comprehensiveness of the generated sentence. Our experiments show that the generated captions are coherent and display a broad range of real-world knowledge. Our code is available at: https://github.com/yoadtew/zero-shot-video-to-text
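The sketch below is a much simpler stand-in for the guidance procedure described above (it selects among sampled candidates rather than optimizing the prompt): GPT-2 proposes sentences and frozen CLIP picks the one with the highest average matching score over the frames.

```python
# Candidate-selection stand-in for CLIP-guided video captioning: sample sentences
# from GPT-2, score each against the video frames with CLIP, return the best one.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")

def caption_video(frames: list, prompt: str = "A video of", n_candidates: int = 8) -> str:
    ids = gpt2_tok(prompt, return_tensors="pt").input_ids
    out = gpt2.generate(ids, do_sample=True, max_new_tokens=15,
                        num_return_sequences=n_candidates,
                        pad_token_id=gpt2_tok.eos_token_id)
    candidates = [gpt2_tok.decode(o, skip_special_tokens=True) for o in out]

    inputs = clip_proc(text=candidates, images=frames, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        sims = clip(**inputs).logits_per_text          # (n_candidates, n_frames)
    return candidates[int(sims.mean(dim=1).argmax())]  # best average match

# Toy usage with blank frames standing in for decoded video frames.
frames = [Image.new("RGB", (224, 224)) for _ in range(4)]
print(caption_video(frames))
```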
A fundamental characteristic common to both human vision and natural language is their compositional nature. Yet, despite the performance gains contributed by large vision and language pretraining, we find that - across 6 architectures trained with 4 algorithms on massive datasets - they exhibit little compositionality. To arrive at this conclusion, we introduce a new compositionality evaluation benchmark CREPE which measures two important aspects of compositionality identified by cognitive science literature: systematicity and productivity. To measure systematicity, CREPE consists of three test datasets. The three test sets are designed to test models trained on three of the popular training datasets: CC-12M, YFCC-15M, and LAION-400M. They contain 385K, 385K, and 373K image-text pairs and 237K, 210K, and 178K hard negative captions. To test productivity, CREPE contains 17K image-text pairs with nine different complexities plus 246K hard negative captions with atomic, swapping, and negation foils. The datasets are generated by repurposing the Visual Genome scene graphs and region descriptions and applying handcrafted templates and GPT-3. For systematicity, we find that model performance decreases consistently when novel compositions dominate the retrieval set, with Recall@1 dropping by up to 8%. For productivity, models' retrieval success decays as complexity increases, frequently nearing random chance at high complexity. These results hold regardless of model and training dataset size.
Large-scale vision-language pretraining has shown impressive advances on various downstream tasks. Existing methods mainly model cross-modal alignment via the similarity of the global representations of images and texts, or via advanced cross-modal attention over image and text features. However, they fail to explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, as only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aligned vision-language pretraining framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently compute the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art results on image-text retrieval benchmarks. Without any object-level human annotations or fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a new direction of learning fine-grained semantics from large-scale raw image-text pairs.
Vision-Language Pretraining (VLP) and foundation models have been the go-to recipe for achieving SoTA performance on general benchmarks. However, leveraging these powerful techniques for more complex vision-language tasks, such as cooking applications, with more structured input data, remains little investigated. In this work, we propose to leverage these techniques for structured-text based computational cuisine tasks. Our strategy, dubbed VLPCook (Structured Vision-Language Pretraining for Computational Cooking), first transforms existing image-text pairs into image and structured-text pairs. This allows us to pretrain our VLPCook model using VLP objectives adapted to the structured data of the resulting datasets, and then finetune it on downstream computational cooking tasks. During finetuning, we also enrich the visual encoder, leveraging pretrained foundation models (e.g. CLIP) to provide local and global textual context. VLPCook outperforms the current SoTA by a significant margin (+3.3 Recall@1 absolute improvement) on the task of Cross-Modal Food Retrieval on the large Recipe1M dataset. Finally, we conduct further experiments on VLP to validate its importance, especially on the Recipe1M+ dataset. The code will be made publicly available.
Recent self-supervised methods use large-scale image-text datasets to learn powerful representations that transfer to many tasks without finetuning. These methods often assume a one-to-one correspondence between an image and its (short) caption. However, many tasks require reasoning about multiple images and long text narratives, such as describing news articles with visual summaries. We therefore explore a novel setting whose goal is to learn a self-supervised visual-language representation that is robust to varying text lengths and numbers of images. In addition, unlike prior work that assumed captions, we assume that images bear only a loose, illustrative correspondence to the text. To explore this problem, we introduce a large-scale multimodal dataset containing 31M articles, 22M images, and 1M videos. We show that state-of-the-art image-text alignment methods are not robust to longer narratives with multiple images. Finally, we introduce an intuitive baseline that outperforms these methods by 10% on zero-shot image-set retrieval on the GoodNews dataset.
Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are frequently accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This inspires us to generate associated captions from offline videos to help with existing text-video retrieval methods. To do so, we propose to use the zero-shot video captioner with knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given the captions, one question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present a novel framework Cap4Video, which makes use of captions from three aspects: i) Input data: The video and captions can form new video-caption pairs as data augmentation for training. ii) Feature interaction: We perform feature interaction between video and caption to yield enhanced video representations. iii) Output score: The Query-Caption matching branch can be complementary to the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, our Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
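A minimal sketch of the output-level fusion (aspect iii): the final retrieval score combines the query-video similarity with an auxiliary query-caption similarity; the weighting factor `alpha` is illustrative, not taken from the paper.

```python
# Fuse the Query-Video and Query-Caption matching branches into one retrieval score.
import torch
import torch.nn.functional as F

def fused_retrieval_scores(query_emb, video_embs, caption_embs, alpha=0.5):
    """query_emb: (D,); video_embs, caption_embs: (N, D) for N candidate videos."""
    q = F.normalize(query_emb, dim=-1)
    qv = F.normalize(video_embs, dim=-1) @ q     # query-video branch
    qc = F.normalize(caption_embs, dim=-1) @ q   # query-caption branch (from generated captions)
    return qv + alpha * qc                       # rank candidates by the fused score

scores = fused_retrieval_scores(torch.randn(512), torch.randn(100, 512), torch.randn(100, 512))
print(scores.topk(5).indices)  # indices of the top-5 retrieved videos
```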
We propose CLIP-Lite, an information-efficient method for visual representation learning via feature alignment with textual annotations. Compared with the previously proposed CLIP model, CLIP-Lite requires only one negative image-text sample pair while optimizing its contrastive learning objective. We achieve this by exploiting an information-efficient lower bound for maximizing the mutual information between the two input modalities. This allows CLIP-Lite to be trained with significantly reduced data and batch sizes while obtaining better performance than CLIP. We evaluate CLIP-Lite by pretraining on the COCO-Captions dataset and testing transfer to other datasets. CLIP-Lite obtains a +15.4% absolute gain in mAP on Pascal VOC classification and a +22.1% top-1 accuracy gain on ImageNet, while being comparable to or better than other, more complex, text-supervised models. CLIP-Lite also outperforms CLIP on image and text retrieval, zero-shot classification, and visual grounding. Finally, by performing explicit image-text alignment during representation learning, we show that CLIP-Lite can leverage language semantics to encourage unbiased visual representations that can be used in downstream tasks.
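A sketch of the kind of information-theoretic objective this builds on: a Jensen-Shannon-style lower bound on mutual information (as used in Deep InfoMax) that needs only one negative pair per positive; the bilinear critic and batch-shift negatives are illustrative assumptions, not CLIP-Lite's exact formulation.

```python
# Jensen-Shannon style mutual-information lower bound with a single negative per positive.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JSDInfoMax(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * 0.01)  # bilinear critic T(x, y) = x^T W y

    def forward(self, img_emb, txt_emb):
        """img_emb, txt_emb: (B, D) paired embeddings; negatives come from shifting the batch."""
        pos = torch.einsum("bd,de,be->b", img_emb, self.W, txt_emb)
        neg = torch.einsum("bd,de,be->b", img_emb, self.W, txt_emb.roll(1, dims=0))
        # Maximizing E_p[-softplus(-T)] - E_n[softplus(T)] lower-bounds mutual information.
        bound = (-F.softplus(-pos)).mean() - F.softplus(neg).mean()
        return -bound  # negate so it can be minimized as a loss

loss = JSDInfoMax()(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```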
Previous vision-language pretraining models mainly construct multi-modal inputs with tokens and objects (pixels) and then perform cross-modality interaction between them. We argue that inputs of only tokens and objects limit high-level semantic alignment such as phrase-to-region grounding. Meanwhile, multi-level alignments are inherently consistent and able to facilitate representation learning synergistically. Therefore, in this paper, we propose to learn Multi-level semantic alignment for Vision-language PreTRaining (MVPTR). In MVPTR, we follow the nested structure of both modalities to introduce concepts as high-level semantics. To ease the learning from multi-modal multi-level inputs, our framework is split into two stages: the first stage focuses on intra-modality multi-level representation learning, while the second stage enforces cross-modality interactions via coarse-grained and fine-grained semantic alignment tasks. In addition to the commonly used image-text matching and masked language modeling tasks, we introduce a masked concept recovering task in the first stage to enhance concept representation learning, and two further tasks in the second stage to explicitly encourage multi-level alignment across modalities. Our code is available at https://github.com/junction4nako/mvp_pytorch.
Vision-Language Pre-Training (VLP) has shown promising capabilities to align image and text pairs, facilitating a broad variety of cross-modal learning tasks. However, we observe that VLP models often lack the visual grounding/localization capability which is critical for many downstream tasks such as visual reasoning. In this work, we propose a novel Position-guided Text Prompt (PTP) paradigm to enhance the visual grounding ability of cross-modal models trained with VLP. Specifically, in the VLP phase, PTP divides the image into N×N blocks and identifies the objects in each block through the object detector widely used in VLP. It then reformulates the visual grounding task into a fill-in-the-blank problem given a PTP, by encouraging the model to predict the objects in the given blocks or to regress the blocks of a given object, e.g. filling "P" or "O" in a PTP such as "The block P has a O". This mechanism improves the visual grounding capability of VLP models and thus helps them better handle various downstream tasks. By introducing PTP into several state-of-the-art VLP frameworks, we observe consistently significant improvements across representative cross-modal learning model architectures and several benchmarks, e.g. zero-shot Flickr30K Retrieval (+4.8 in average recall@1) for the ViLT baseline, and COCO Captioning (+5.3 in CIDEr) for the SOTA BLIP baseline. Moreover, PTP achieves results comparable to object-detector-based methods with much faster inference, since PTP discards its object detector at inference time while the latter cannot. Our code and pre-trained weights will be released at https://github.com/sail-sg/ptp.
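A small sketch of how such position-guided prompts could be assembled from any detector's output: divide the image into an N×N grid, map each detected object to the block containing its box center, and fill the template quoted above.

```python
# Build "The block P has a O" prompts from an N x N grid over the image and a list
# of detected objects; any object detector's (label, box) output can be plugged in.
def make_ptp_prompts(detections, image_w, image_h, n=3):
    """detections: list of (label, (x1, y1, x2, y2)) boxes in pixel coordinates."""
    prompts = []
    for label, (x1, y1, x2, y2) in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2          # box center
        col = min(int(cx / image_w * n), n - 1)
        row = min(int(cy / image_h * n), n - 1)
        block_id = row * n + col                       # block index P
        prompts.append(f"The block {block_id} has a {label}.")
    return prompts

print(make_ptp_prompts([("dog", (10, 200, 120, 330)), ("ball", (300, 310, 340, 350))],
                       image_w=400, image_h=400))
```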
Scene graph generation (SGG) is a fundamental task that aims to detect the visual relationships between objects in an image. Prevailing SGG methods require all object classes to be given in the training set. Such a closed setting limits the practical application of SGG. In this paper, we introduce open-vocabulary scene graph generation (OV-SGG), a novel, realistic and challenging setting in which a model is trained on a set of base object classes but needs to infer relations for unseen target object classes. To this end, we propose a two-step approach that first pre-trains on large amounts of coarse-grained region-caption data and then leverages two prompt-based techniques to adapt the pre-trained model without updating its parameters. Moreover, our method can support inference over completely unseen object classes, which existing methods cannot handle. In extensive experiments on three benchmark datasets (Visual Genome, GQA, and Open Images), our method significantly outperforms recent strong SGG methods in the OV-SGG setting as well as in conventional closed-set SGG.
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks (visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval) by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models, achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.
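A condensed sketch, in the spirit of the two-stream design described above, of a co-attentional layer in which each stream queries the other stream's keys and values; the layer sizes and normalization placement are illustrative, not ViLBERT's exact configuration.

```python
# One co-attentional transformer layer: vision queries text, text queries vision,
# each followed by its own feed-forward block with residual connections.
import torch
import torch.nn as nn

class CoAttentionLayer(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.v_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.t_ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])

    def forward(self, vis, txt):
        # queries from one stream, keys/values from the other
        v = self.norm[0](vis + self.v_attn(vis, txt, txt)[0])
        t = self.norm[1](txt + self.t_attn(txt, vis, vis)[0])
        v = self.norm[2](v + self.v_ffn(v))
        t = self.norm[3](t + self.t_ffn(t))
        return v, t

v, t = CoAttentionLayer()(torch.randn(2, 36, 768), torch.randn(2, 20, 768))
print(v.shape, t.shape)  # region and token streams keep their own lengths
```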
In the past few years, the emergence of pre-trained models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial work has shown that they benefit downstream tasks and avoid training new models from scratch. So, can such pre-trained models be applied to multi-modal tasks? Researchers have explored this question and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. We then give a detailed overview of specific VLP models. Finally, we discuss the new frontiers in VLP. To the best of our knowledge, this is the first survey on VLP. We hope this survey can shed light on future research in the VLP field.
A fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods possess only single-cognitive abilities. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained on huge amounts of multimodal data, which can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model via self-supervised learning on semantically correlated data crawled from the Internet, and show that promising results can be obtained on a wide range of downstream tasks. In particular, using the developed model-dissection tools, we demonstrate that our foundation model now possesses strong imagination ability. We believe that our work takes a transformative stride towards AGI, from our common practice of "weak or narrow AI" to "strong or broad AI".
Recent text-to-image matching models apply contrastive learning to large corpora of images and sentences. While such models can provide a strong score for matching and subsequent zero-shot tasks, they are not capable of generating a caption for a given image. In this work, we repurpose such models to generate descriptive text for an image at inference time, without any further training or tuning steps. This is done by combining the visual-semantic model with a large language model, benefiting from the knowledge in both web-scale models. The resulting captions are much less restrictive than those obtained by supervised captioning methods. Moreover, as a zero-shot learning method, it is extremely flexible, and we demonstrate its ability to perform image arithmetic, in which the inputs can be either images or text and the output is a sentence. This enables novel high-level vision capabilities such as comparing two images or solving visual analogy tests.
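A toy sketch of the image-arithmetic idea using frozen CLIP embeddings: combine image features with plus/minus operations and pick, from a set of candidate sentences, the one that best describes the result (the actual method couples the visual-semantic model with a language model to generate the sentence rather than select it).

```python
# Image arithmetic in CLIP's embedding space: score candidate sentences against
# the combined embedding emb(a) - emb(b) + emb(c).
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_arithmetic(img_a, img_b, img_c, candidate_texts):
    """Returns the candidate sentence best matching emb(a) - emb(b) + emb(c)."""
    imgs = proc(images=[img_a, img_b, img_c], return_tensors="pt")
    txts = proc(text=candidate_texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        iv = F.normalize(model.get_image_features(**imgs), dim=-1)
        tv = F.normalize(model.get_text_features(**txts), dim=-1)
    target = F.normalize(iv[0] - iv[1] + iv[2], dim=-1)
    return candidate_texts[int((tv @ target).argmax())]

# Toy usage with blank placeholder images.
blank = Image.new("RGB", (224, 224))
print(image_arithmetic(blank, blank, blank, ["a king", "a queen", "a dog"]))
```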