We introduce a zero-shot video captioning method that employs two frozen networks: the GPT-2 language model and the CLIP image-text matching model. The matching score is used to steer the language model toward generating a sentence that has a high average matching score with a subset of the video frames. Unlike zero-shot image captioning methods, our work considers the entire sentence at once. This is achieved by optimizing part of the prompt from scratch during the generation process, by modifying the representations of all other tokens in the prompt, and by repeating the process iteratively, gradually improving the specificity and comprehensiveness of the generated sentence. Our experiments show that the generated captions are coherent and display a broad range of real-world knowledge. Our code is available at: https://github.com/yoadtew/zero-shot-video-to-text
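The guidance idea above can be sketched as a decoding loop in which a frozen language model proposes tokens and a frame-matching score re-ranks them. The following is a minimal stdlib-only illustration; the vocabulary, the stand-in LM scores, and the set-membership "matching" function are all invented for this sketch, standing in for real GPT-2 logits and CLIP similarities.

```python
import math

# Toy sketch of matching-guided decoding: greedy search over
# log p_LM(token | prefix) + alpha * match(token, frames).
VOCAB = ["a", "dog", "cat", "runs", "sleeps", "<eos>"]

def lm_logprob(prefix, token):
    """Stand-in for a frozen LM: mildly prefers common continuations."""
    prefs = {"a": ["dog", "cat"], "dog": ["runs", "sleeps"], "cat": ["sleeps"]}
    last = prefix[-1] if prefix else None
    return 0.0 if last in prefs and token in prefs[last] else -1.0

def match_logprob(token, frame_tags):
    """Stand-in for a matching score averaged over video frames."""
    hits = sum(1 for tags in frame_tags if token in tags)
    return math.log(1e-6 + hits / max(len(frame_tags), 1))

def guided_decode(frame_tags, alpha=0.5, max_len=4):
    """Greedily pick the token maximizing the combined score."""
    out = ["a"]
    while len(out) < max_len:
        best = max(
            (t for t in VOCAB if t != "<eos>"),
            key=lambda t: lm_logprob(out, t) + alpha * match_logprob(t, frame_tags),
        )
        out.append(best)
        if best in ("runs", "sleeps"):  # toy stopping rule
            break
    return " ".join(out)

caption = guided_decode([{"dog", "runs"}, {"dog"}, {"dog", "runs"}])
print(caption)
```

Note that the actual method optimizes continuous prompt representations over the whole sentence rather than re-ranking tokens one by one; the sketch only conveys how a matching score can bias a frozen LM.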
Recent text-to-image matching models apply contrastive learning to large corpora of images and sentences. While such models can provide a powerful score for matching and subsequent zero-shot tasks, they are not capable of generating a caption given an image. In this work, we repurpose such models to generate descriptive text for a given image at inference time, without any further training or tuning step. This is done by combining the visual-semantic model with a large language model, benefiting from the knowledge in both web-scale models. The resulting captions are much less restrictive than those obtained by supervised captioning methods. Moreover, as a zero-shot learning method, it is extremely flexible, and we demonstrate its ability to perform image arithmetic, in which the inputs can be either images or text and the output is a sentence. This enables novel high-level vision capabilities such as comparing two images or solving visual analogy tests.
Image captioning is a fundamental task in vision-language understanding, where the model predicts a textual, informative caption for a given input image. In this paper, we present a simple approach to address this task. We use a CLIP encoding as a prefix to the caption, by employing a simple mapping network, and then fine-tune a language model to generate the image captions. The recently proposed CLIP model contains rich semantic features which were trained with textual context, making it well suited for vision-language perception. Our key idea is that together with a pre-trained language model (GPT-2), we obtain a wide understanding of both visual and textual data. Hence, our approach requires only rather quick training to produce a competent captioning model. Without additional annotations or pre-training, it efficiently generates meaningful captions for large-scale and diverse datasets. Surprisingly, our method works well even when only the mapping network is trained, while both CLIP and the language model remain frozen, allowing a lighter architecture with fewer trainable parameters. Through quantitative evaluation, we demonstrate our model achieves results comparable to state-of-the-art methods on the challenging Conceptual Captions and nocaps datasets, while being simpler, faster, and lighter. Our code is available at https://github.com/rmokady/clip_prefix_caption.
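The prefix idea above can be sketched with a small mapping network that turns an image embedding into a few "prefix" vectors to be prepended to the language model's token embeddings. The dimensions, the single linear layer, and the random stand-in embedding below are illustrative assumptions, not the paper's exact architecture.

```python
import random

random.seed(0)

D_CLIP, D_LM, K = 8, 4, 3  # toy sizes: image-embedding dim, LM embedding dim, prefix length

# Mapping network: one linear layer projecting D_CLIP -> K * D_LM.
W = [[random.uniform(-0.1, 0.1) for _ in range(D_CLIP)] for _ in range(K * D_LM)]

def map_to_prefix(clip_embedding):
    """Project the image embedding to K prefix vectors of size D_LM each."""
    flat = [sum(w * x for w, x in zip(row, clip_embedding)) for row in W]
    return [flat[i * D_LM:(i + 1) * D_LM] for i in range(K)]

clip_embedding = [random.uniform(-1, 1) for _ in range(D_CLIP)]  # stand-in for CLIP output
prefix = map_to_prefix(clip_embedding)
# The LM would then consume: prefix vectors + caption token embeddings.
print(len(prefix), len(prefix[0]))
```

In the lightest variant described above, only the weights of this mapping network would be trained, while the image encoder and the language model stay frozen.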
Many high-level skills that are required for computer vision tasks, such as parsing questions, comparing and contrasting semantics, and writing descriptions, are also required in other domains such as natural language processing. In this paper, we ask whether this makes it possible to learn those skills from text data and then use them to complete vision tasks without ever training on visual training data. Key to our approach is exploiting the joint embedding space of contrastively trained vision and language encoders. In practice, there can be systematic differences between embedding spaces for different modalities in contrastive models, and we analyze how these differences affect our approach and study a variety of strategies to mitigate this concern. We produce models using only text training data on three tasks: image captioning, visual entailment and visual question answering, and evaluate them on standard benchmarks using images. We find that this kind of transfer is possible and results in only a small drop in performance relative to models trained on images. We also showcase a variety of stylistic image captioning models that were trained using no image data and no human-curated language data, but instead text data from books, the web, or language models.
Connecting vision and language plays an essential role in generative intelligence. For this reason, large research efforts have been devoted to image captioning, i.e. describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. During these years, both components have considerably evolved through the exploitation of object regions and attributes, the introduction of multi-modal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, regardless of the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims at providing a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The final goal of this work is to serve as a tool for understanding the existing literature and highlighting the future directions for a research area where Computer Vision and Natural Language Processing can find an optimal synergy.
Large Vision & Language models pretrained on web-scale data provide representations that are invaluable for numerous V&L problems. However, it is unclear how they can be used for reasoning about user-specific visual concepts in unstructured language. This problem arises in multiple domains, from personalized image retrieval to personalized interaction with smart devices. We introduce a new learning setup called Personalized Vision & Language (PerVL), with two new benchmark datasets for retrieving and segmenting user-specific "personalized" concepts "in the wild". In PerVL, one should learn the personalized concepts (1) independently of the downstream task, (2) in a way that allows a pretrained model to reason about them with free language, and (3) without requiring personalized negative examples. We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts. The model can then reason about them by simply using them in a sentence. We demonstrate that our approach learns personalized visual concepts from a few examples and can effectively apply them in image retrieval and semantic segmentation using rich textual queries.
A major goal of multimodal research is to improve machine understanding of images and text. Tasks include image captioning, text-to-image generation, and vision-language representation learning. So far, research has focused on the relationships between images and text. For example, captioning models attempt to understand the semantics of images which are then transformed into text. An important question is: which annotation reflects best a deep understanding of image content? Similarly, given a text, what is the best image that can present the semantics of the text? In this work, we argue that the best text or caption for a given image is the text which would generate the image which is the most similar to that image. Likewise, the best image for a given text is the image that results in the caption which is best aligned with the original text. To this end, we propose a unified framework that includes both a text-to-image generative model and an image-to-text generative model. Extensive experiments validate our approach.
Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are frequently accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This inspires us to generate associated captions from offline videos to help with existing text-video retrieval methods. To do so, we propose to use the zero-shot video captioner with knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given the captions, one question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present a novel framework Cap4Video, which makes use of captions from three aspects: i) Input data: The video and captions can form new video-caption pairs as data augmentation for training. ii) Feature interaction: We perform feature interaction between video and caption to yield enhanced video representations. iii) Output score: The Query-Caption matching branch can be complementary to the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, our Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
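The third use of captions above (the output score) amounts to fusing two matching branches at retrieval time. A minimal sketch, with invented similarity values and an assumed equal-weight fusion:

```python
# Fuse Query-Video and Query-Caption matching scores for text-video
# retrieval, in the spirit of the complementary branches described above.
def fused_score(query_video_sim, query_caption_sim, w=0.5):
    """Weighted sum of the two matching branches (w is a design choice)."""
    return (1 - w) * query_video_sim + w * query_caption_sim

# Rank two candidate videos for a single text query (toy numbers).
candidates = {
    "video_a": fused_score(0.62, 0.70),  # weaker video match, strong caption match
    "video_b": fused_score(0.64, 0.40),  # strong video match, weak caption match
}
best = max(candidates, key=candidates.get)
print(best)
```

Here the auxiliary caption branch flips the ranking in favor of `video_a`, illustrating how the two scores can be complementary.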
This work is on training generative action/video recognition models whose output is a free-form, action-specific caption describing the video (rather than an action class label). A generative approach has practical advantages, such as producing more fine-grained and human-readable output, and being naturally open-world. To this end, we propose to adapt a pre-trained generative Vision & Language (V&L) foundation model for video/action recognition. While there have recently been a few attempts to adapt V&L models trained with contrastive learning (e.g. CLIP) to this setting, to the best of our knowledge we propose the very first method that sets out to accomplish this goal for a generative model. We first show that direct fine-tuning of a generative model to produce action classes suffers from severe overfitting. To alleviate this, we introduce REST, a training framework consisting of two key components: (a) an unsupervised method for adapting the generative model to action/video by means of pseudo-caption generation and self-training, i.e. without using any action-specific labels; and (b) a CLIP-based retrieval approach for discovering a diverse set of pseudo-captions for each video to train the model. Importantly, we show that both components are necessary to obtain high accuracy. We evaluate REST on the problem of zero-shot action recognition, where we show that our approach is very competitive compared to contrastive learning-based methods. Code will be made available.
Recent advances in text-to-image synthesis have led to large pretrained transformers with excellent capabilities for generating visualizations from a given text. However, these models are ill-suited for specialized tasks like story visualization, which requires an agent to produce a sequence of images given a corresponding sequence of captions, forming a narrative. Moreover, we find that the story visualization task fails to accommodate generalization to unseen plots and characters in new narratives. Hence, we first propose the task of story continuation, where the generated visual story is conditioned on a source image, allowing for better generalization to narratives with new characters. Then, we enhance or "retro-fit" the pretrained text-to-image synthesis models with task-specific modules for (a) sequential image generation and (b) copying relevant elements from an initial frame. We then explore full-model finetuning, as well as prompt-based tuning for parameter-efficient adaptation, of the pretrained model. We evaluate our approach, StoryDALL-E, on two existing datasets, PororoSV and FlintstonesSV, and introduce a new dataset, DiDeMoSV, collected from a video-captioning dataset. We also develop a GAN-based model, StoryGANc, for story continuation, and compare it with the StoryDALL-E model to demonstrate the advantages of our approach. We show that our retro-fitting approach outperforms GAN-based models for story continuation and facilitates copying of visual elements from the source image, thereby improving continuity in the generated visual story. Finally, our analysis suggests that pretrained transformers struggle to comprehend narratives containing several characters. Overall, our work demonstrates that pretrained text-to-image synthesis models can be adapted for complex and low-resource tasks like story continuation.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
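The zero-shot transfer described above is commonly realized by embedding one text prompt per class (e.g. "a photo of a {class}") and picking the class whose text embedding is most similar to the image embedding. A minimal stdlib sketch with tiny hand-made vectors standing in for real encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy text embeddings for three class prompts (stand-ins for text-encoder outputs).
text_embeddings = {
    "a photo of a dog": [0.9, 0.1, 0.0],
    "a photo of a cat": [0.1, 0.9, 0.0],
    "a photo of a car": [0.0, 0.1, 0.9],
}
image_embedding = [0.8, 0.2, 0.1]  # stand-in for the image-encoder output

# Zero-shot classification: argmax over prompt similarities.
prediction = max(text_embeddings, key=lambda p: cosine(image_embedding, text_embeddings[p]))
print(prediction)
```

In a real system both embedding sets come from the jointly trained encoders, and the same argmax-over-similarities rule performs classification with no dataset-specific training.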
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, both in zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on EGTEA classification and 5.9% Epic-Kitchens-100 multi-instance retrieval benchmarks. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior on increasing pre-training data and model size.
Generating a short story from an image is arduous. Unlike image captioning, story generation from an image poses multiple challenges: preserving story coherence, appropriately assessing the quality of the story, steering the generated story toward a certain style, and addressing the scarcity of image-story pair reference datasets, which limits supervision during training. In this work, we introduce Plug-and-Play Story Teller (PPST) and improve image-to-story generation by: 1) alleviating the data-scarcity problem by incorporating large pre-trained models, namely CLIP and GPT-2, to facilitate fluent image-to-text generation with minimal supervision, and 2) enabling more style-relevant generation by incorporating stylistic adapters to control the story generation. We conduct image-to-story generation experiments with PPST in non-styled, romance-styled, and action-styled settings, and compare our generated stories with those of previous work along three aspects, i.e., story coherence, image-story relevance, and style fitness, using both automatic and human evaluation. The results show that PPST improves story coherence and achieves better image-story relevance, but has yet to fully capture style.
Despite the remarkable progress of image captioning, existing captioners typically lack the controllable capability to generate desired image captions, e.g., describing the image in a rough or detailed manner, in a factual or emotional view, etc. In this paper, we show that a unified model is qualified to perform well in diverse domains and freely switch among multiple styles. Such a controllable capability is achieved by embedding the prompt learning into the image captioning framework. To be specific, we design a set of prompts to fine-tune the pre-trained image captioner. These prompts allow the model to absorb stylized data from different domains for joint training, without performance degradation in each domain. Furthermore, we optimize the prompts with learnable vectors in the continuous word embedding space, avoiding the heuristic prompt engineering and meanwhile exhibiting superior performance. In the inference stage, our model is able to generate desired stylized captions by choosing the corresponding prompts. Extensive experiments verify the controllable capability of the proposed method. Notably, we achieve outstanding performance on two diverse image captioning benchmarks including COCO Karpathy split and TextCaps using a unified model.
We present the Pathways Autoregressive Text-to-Image (Parti) model, which generates high-fidelity photorealistic images and supports content-rich synthesis involving complex compositions and world knowledge. Parti treats text-to-image generation as a sequence-to-sequence modeling problem, akin to machine translation, with sequences of image tokens as the target outputs rather than text tokens in another language. This strategy can naturally tap into the rich body of prior work on large language models, which have seen continued advances in capabilities and performance through scaling data and model sizes. Our approach is simple: first, Parti uses a Transformer-based image tokenizer, ViT-VQGAN, to encode images as sequences of discrete tokens. Second, we achieve consistent quality improvements by scaling the encoder-decoder Transformer model up to 20B parameters, with a new state-of-the-art zero-shot FID score of 7.23 and a finetuned FID score of 3.22 on MS-COCO. Our detailed analysis on Localized Narratives as well as PartiPrompts (P2), a new holistic benchmark of over 1600 English prompts, demonstrates the effectiveness of Parti across a wide variety of categories and difficulty aspects. We also explore and highlight limitations of our models in order to define and exemplify key areas of focus for further improvement. See https://parti.research.google/ for high-resolution images.
Current metrics for video captioning are mostly based on text-level comparison between reference and candidate captions. However, they have some insuperable drawbacks: e.g., they cannot handle videos without references, and they may result in biased evaluation due to the one-to-many nature of video-to-text and their neglect of visual relevance. From a human evaluator's viewpoint, a high-quality caption should be consistent with the provided video, but not necessarily similar to the reference, either literally or semantically. Inspired by human evaluation, we propose EMScore (Embedding Matching-based score), a novel reference-free metric for video captioning that directly measures the similarity between the video and a candidate caption. Benefiting from the recent development of large-scale pre-trained models, we exploit a well pre-trained vision-language model to extract the visual and linguistic embeddings used to compute EMScore. Specifically, EMScore combines matching scores at both the coarse-grained (video and caption) and fine-grained (frames and words) levels, taking into account both the overall understanding and the detailed characteristics of the video. Furthermore, considering the potential information gain, EMScore can be flexibly extended to settings where human-labeled references are available. Last but not least, we collect the VATEX-EVAL and ActivityNet-FOIL datasets to systematically evaluate existing metrics. VATEX-EVAL experiments demonstrate that EMScore has higher human correlation and lower reference dependency. ActivityNet-FOIL experiments verify that EMScore can effectively identify "hallucinating" captions. The datasets will be released to facilitate the development of video captioning metrics. The code is available at: https://github.com/shiyaya/emcore.
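The coarse-plus-fine combination above can be sketched as follows. The equal weighting and the greedy word-to-frame matching are simplifying assumptions of this sketch, and all similarity numbers are toy values standing in for cosine similarities of real vision-language embeddings.

```python
def fine_grained(word_frame_sims):
    """Average over words of each word's best-matching frame similarity."""
    return sum(max(sims) for sims in word_frame_sims) / len(word_frame_sims)

def emscore_like(video_caption_sim, word_frame_sims):
    """Combine coarse (video-caption) and fine (word-frame) scores equally."""
    return 0.5 * video_caption_sim + 0.5 * fine_grained(word_frame_sims)

# 3 words x 2 frames of toy word-frame similarities, plus a coarse score.
score = emscore_like(0.6, [[0.5, 0.7], [0.8, 0.2], [0.4, 0.6]])
print(round(score, 3))
```

Because no reference caption appears anywhere in the computation, a metric of this shape is reference-free by construction.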
Exploring large-scale pretrained foundation models is of significant interest in computer vision because these models can be quickly transferred to many downstream tasks. This paper presents Contrastive Captioner (CoCa), a minimalist design to pretrain an image-text encoder-decoder foundation model jointly with a contrastive loss and a captioning loss, thereby subsuming model capabilities from contrastive approaches like CLIP and generative methods like SimVLM. In contrast to standard encoder-decoder transformers where all decoder layers attend to encoder outputs, CoCa omits cross-attention in the first half of the decoder layers to encode unimodal text representations, and cascades the remaining decoder layers, which cross-attend to the image encoder, for multimodal image-text representations. We apply a contrastive loss between the unimodal image and text embeddings, in addition to a captioning loss on the multimodal decoder outputs, which predict text tokens autoregressively. By sharing the same computational graph, the two training objectives are computed efficiently with minimal overhead. CoCa is pretrained end-to-end and from scratch on both web-scale alt-text data and annotated images by treating all labels simply as text, seamlessly unifying natural language supervision for representation learning. Empirically, CoCa achieves state-of-the-art performance with zero-shot transfer or minimal task-specific adaptation on a broad range of downstream tasks, spanning visual recognition (ImageNet, Kinetics-400/600/700, Moments-in-Time), crossmodal retrieval (MSCOCO, Flickr30K, MSR-VTT), multimodal understanding (VQA, SNLI-VE, NLVR2), and image captioning (MSCOCO, NoCaps). Notably, on ImageNet classification, CoCa obtains 86.3% zero-shot top-1 accuracy, 90.6% with a frozen encoder and a learned classification head, and a new state-of-the-art 91.0% top-1 accuracy on ImageNet with a finetuned encoder.
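The two training objectives above can be sketched as a single scalar loss: an InfoNCE-style contrastive term over matched image/text pairs plus a token-level cross-entropy captioning term. All inputs below are toy numbers; a real implementation operates on batched encoder/decoder outputs, and the exact loss weighting is a training hyperparameter, not fixed here by the source.

```python
import math

def contrastive_loss(sim_matrix):
    """InfoNCE over rows: each image should match its own caption (diagonal)."""
    loss = 0.0
    for i, row in enumerate(sim_matrix):
        denom = sum(math.exp(s) for s in row)
        loss += -math.log(math.exp(row[i]) / denom)
    return loss / len(sim_matrix)

def captioning_loss(token_probs):
    """Cross-entropy: negative mean log-probability of the reference tokens."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

sim = [[0.9, 0.1], [0.2, 0.8]]  # toy 2x2 image-text similarity matrix
probs = [0.7, 0.6, 0.9]         # toy per-token probabilities of the reference caption
total = contrastive_loss(sim) + captioning_loss(probs)
print(round(total, 3))
```

Both terms decrease as the diagonal similarities and the reference-token probabilities grow, which is what joint contrastive-plus-captioning pretraining optimizes.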
While captioning models have obtained compelling results in describing natural images, they still do not cover the entire long-tail distribution of real-world concepts. In this paper, we address the task of generating human-like descriptions with in-the-wild concepts by training on web-scale, automatically collected datasets. To this end, we propose a model which can exploit noisy image-caption pairs while maintaining the descriptive style of traditional human-annotated datasets like COCO. Our model separates content from style through the use of keywords and stylistic tokens, employs a single objective of prompted language modeling, and is simpler than other recent proposals. Experimentally, our model consistently outperforms existing methods in terms of caption quality and capability of describing long-tail concepts, also in zero-shot settings. According to the CIDEr metric, we obtain a new state of the art on both COCO and nocaps when using external data.
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favorite toy? Here we present a simple approach that allows such creative freedom. Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model. These "words" can be composed into natural language sentences, guiding personalized creation in an intuitive way. Notably, we find evidence that a single word embedding is sufficient for capturing unique and varied concepts. We compare our approach to a wide range of baselines and demonstrate that it can more faithfully portray the concepts across a range of applications and tasks. Our code, data, and new words will be available at: https://textual-inversion.github.io
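The core mechanism above, tuning a single new "word" vector while everything else stays frozen, can be sketched with a toy objective. The real method optimizes the embedding through a frozen diffusion model's denoising loss; here we substitute a simple squared-distance objective toward stand-in concept-image embeddings, purely for illustration.

```python
# Toy embedding optimization: move one new token embedding toward the
# mean of a few (invented) concept-image embeddings by gradient descent.
concept_images = [[1.0, 0.0, 0.5], [0.8, 0.2, 0.7], [1.2, -0.2, 0.6]]  # stand-ins
word = [0.0, 0.0, 0.0]  # the new token's embedding, initialized at zero
lr = 0.1

for _ in range(200):  # gradient descent on mean squared distance
    grads = [
        sum(2 * (word[d] - img[d]) for img in concept_images) / len(concept_images)
        for d in range(len(word))
    ]
    word = [w - lr * g for w, g in zip(word, grads)]

mean = [sum(img[d] for img in concept_images) / len(concept_images) for d in range(3)]
print([round(w, 2) for w in word])
```

The optimized vector converges to the mean of the concept embeddings; in the actual system the learned vector instead lands wherever the frozen generator's loss places it, and can then be used inside ordinary sentences.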
Image captioning is one of the straightforward tasks that can take advantage of large-scale web-crawled data, which provides rich knowledge about the visual world for a captioning model. However, since web-crawled data contains image-text pairs that are aligned at different levels, the inherent noises (e.g., misaligned pairs) make it difficult to learn a precise captioning model. While a filtering strategy can effectively remove noisy data, it leads to a decrease in learnable knowledge and sometimes brings about a new problem of data deficiency. To take the best of both worlds, we propose a noise-aware learning framework, which learns rich knowledge from the whole web-crawled data while being less affected by the noises. This is achieved by the proposed quality controllable model, which is learned using alignment levels of the image-text pairs as an additional control signal during training. The alignment-conditioned training allows the model to generate high-quality, well-aligned captions by simply setting the control signal to the desired alignment level at inference time. Through in-depth analysis, we show that our controllable captioning model is effective in handling noise. In addition, with two tasks of zero-shot captioning and text-to-image retrieval using generated captions (i.e., self-retrieval), we also demonstrate our model can produce high-quality captions in terms of descriptiveness and distinctiveness. Code is available at \url{https://github.com/kakaobrain/noc}.