Can we teach a robot to recognize and make predictions for activities that it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the \emph{Tasty Videos Dataset V2}, a collection of 4022 recipes for zero-shot learning, recognition and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our method for generalization, given limited video data for training models.
Most natural videos contain numerous events. For example, in a video of a "man playing a piano", the video might also contain "another man dancing" or "a crowd clapping". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.
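As a rough illustration of the proposal step (not the paper's actual module, which builds on an existing proposal network), the sketch below enumerates multi-scale sliding windows over per-frame event scores and keeps the highest-scoring windows as candidate event segments; the scales, stride ratio, and random scores are placeholder assumptions.

```python
# Illustrative sketch (not the paper's proposal module): enumerate multi-scale
# temporal windows over a video and score each one with a mean "event-ness"
# score, keeping the top-k as candidate event segments to caption.
import numpy as np

def candidate_segments(frame_scores, scales=(16, 64, 256, 1024), stride_ratio=0.5, top_k=10):
    """frame_scores: 1-D array with one event-likelihood score per frame (assumed given)."""
    n = len(frame_scores)
    proposals = []
    for width in scales:
        if width > n:
            continue
        stride = max(1, int(width * stride_ratio))
        for start in range(0, n - width + 1, stride):
            end = start + width
            score = float(np.mean(frame_scores[start:end]))
            proposals.append((score, start, end))
    proposals.sort(reverse=True)            # highest mean score first
    return proposals[:top_k]                # (score, start_frame, end_frame)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.random(3000)               # stand-in for per-frame event scores
    for s, a, b in candidate_segments(scores):
        print(f"frames [{a}, {b})  mean score {s:.3f}")
```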
Connecting vision and language plays an essential role in generative intelligence. For this reason, a large body of research has been devoted to image captioning, i.e., describing images with syntactically and semantically meaningful sentences. Starting from 2015, the task has generally been addressed with pipelines composed of a visual encoder and a language model for text generation. Over these years, both components have evolved considerably through the exploitation of object regions and attributes, the introduction of multimodal connections, fully-attentive approaches, and BERT-like early-fusion strategies. However, despite the impressive results, research in image captioning has not yet reached a conclusive answer. This work aims at providing a comprehensive overview of image captioning approaches, from visual encoding and text generation to training strategies, datasets, and evaluation metrics. In this respect, we quantitatively compare many relevant state-of-the-art approaches to identify the most impactful technical innovations in architectures and training strategies. Moreover, many variants of the problem and its open challenges are discussed. The ultimate goal of this work is to serve as a tool for understanding the existing literature and highlighting future directions for a research area where Computer Vision and Natural Language Processing can find an optimal synergy.
The potential for agents, whether embodied or software, to learn by observing other agents performing procedures involving objects and actions is rich. Current research on automatic procedure learning heavily relies on action labels or video subtitles, even during the evaluation phase, which makes them infeasible in real-world scenarios. This leads to our question: can the human-consensus structure of a procedure be learned from a large set of long, unconstrained videos (e.g., instructional videos from YouTube) with only visual evidence? To answer this question, we introduce the problem of procedure segmentation: to segment a video procedure into category-independent procedure segments. Given that no large-scale dataset is available for this problem, we collect a large-scale procedure segmentation dataset with procedure segments temporally localized and described; we use cooking videos and name the dataset YouCook2. We propose a segment-level recurrent network for generating procedure segments by modeling the dependencies across segments. The generated segments can be used as pre-processing for other tasks, such as dense video captioning and event parsing. We show in our experiments that the proposed model outperforms competitive baselines in procedure segmentation.
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, both in zero-shot and finetuned setups. Most notably, LaViLa obtains absolute gains of 10.1% on the EGTEA classification benchmark and 5.9% on the Epic-Kitchens-100 multi-instance retrieval benchmark. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior on increasing pre-training data and model size.
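The contrastive video-text training referred to above can be illustrated with a minimal symmetric InfoNCE objective over paired clip and narration embeddings; this is a generic sketch with random placeholder features and an assumed temperature, not LaViLa's actual encoders or hyperparameters.

```python
# Minimal sketch of a symmetric contrastive (InfoNCE) objective between clip and
# narration embeddings, as one might use to learn a joint video-text space.
# The features here are random placeholders, not outputs of LaViLa's networks.
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(video_emb, text_emb, temperature=0.07):
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                   # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2t = F.cross_entropy(logits, targets)      # match each clip to its narration
    loss_t2v = F.cross_entropy(logits.T, targets)    # and each narration to its clip
    return 0.5 * (loss_v2t + loss_t2v)

# Toy usage with random features standing in for encoder outputs.
video_feats = torch.randn(8, 512)   # e.g. pooled clip features
text_feats = torch.randn(8, 512)    # e.g. narration features from a text encoder
print(clip_style_contrastive_loss(video_feats, text_feats).item())
```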
Our experience of the world is multimodal: we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.
In this paper, we consider the problem of classifying complex, multi-step activities (e.g., cooking different recipes, making different home improvements, creating various forms of arts and crafts) from long videos spanning up to several minutes. Accurately categorizing these activities requires not only recognizing the individual steps that compose the task but also capturing their temporal dependencies. This problem is dramatically different from traditional action classification, where models are typically optimized on videos spanning only a few seconds that are manually trimmed to contain simple atomic actions. While step annotations could enable training models to recognize the individual steps of procedural activities, existing large-scale datasets in this area do not include such segment labels due to the prohibitive cost of manually annotating temporal boundaries in long videos. To address this issue, we propose to automatically identify steps in instructional videos by leveraging the distant supervision of a textual knowledge base (WikiHow) that contains detailed descriptions of the steps needed to carry out a wide variety of complex activities. Our method uses a language model to match automatically transcribed speech from the video to step descriptions in the knowledge base. We demonstrate that video models trained to recognize these automatically labeled steps (without manual supervision) yield representations that achieve superior generalization performance on four downstream tasks: recognition of procedural activities, step classification, step forecasting, and egocentric video classification.
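A minimal sketch of the distant-supervision idea, matching ASR snippets to knowledge-base step descriptions by sentence-embedding similarity, might look as follows; the `all-MiniLM-L6-v2` encoder, the example texts, and the 0.4 threshold are illustrative assumptions, not the language model or matching rule used in the paper.

```python
# Sketch of distant supervision by text matching: embed ASR snippets and
# knowledge-base step descriptions, then label each snippet with its most
# similar step. The embedding model and threshold are illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed; any sentence encoder works

wikihow_steps = [
    "Crack the eggs into a bowl and whisk them.",
    "Heat butter in a non-stick pan over medium heat.",
    "Pour the eggs into the pan and stir gently until set.",
]
asr_snippets = [
    "so now I'm just gonna beat these eggs really well",
    "let the butter melt before anything goes in",
]

step_emb = model.encode(wikihow_steps, convert_to_tensor=True)
asr_emb = model.encode(asr_snippets, convert_to_tensor=True)
sims = util.cos_sim(asr_emb, step_emb)            # (num_snippets, num_steps)

for i, snippet in enumerate(asr_snippets):
    j = int(sims[i].argmax())
    if float(sims[i, j]) > 0.4:                   # keep only confident matches
        print(f"{snippet!r} -> step {j}: {wikihow_steps[j]}")
```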
Automated audio captioning is a cross-modal translation task that aims to generate natural language descriptions for given audio clips. This task has received increasing attention in recent years with the release of freely available datasets, and the problem has been addressed predominantly with deep learning techniques. Numerous approaches have been proposed, such as investigating different neural network architectures, exploiting auxiliary information such as keywords or sentence information to guide caption generation, and adopting different training strategies, which have greatly facilitated the development of this field. In this paper, we present a comprehensive review of the published contributions to automated audio captioning, from a variety of existing approaches to evaluation metrics and datasets. We also discuss open challenges and envision possible future research directions.
Self-supervised learning has become increasingly important to leverage the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use VideoBERT in numerous tasks, including action classification and video captioning. We show that it can be applied directly to open-vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, we outperform the state-of-the-art on video captioning, and quantitative results verify that the model learns high-level semantic features.
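The vector quantization of video data mentioned above can be illustrated by clustering clip features with k-means and mapping each clip to its nearest centroid id; the codebook size, the vocabulary offset, and the random features below are placeholder assumptions rather than VideoBERT's actual setup.

```python
# Sketch of turning continuous clip features into discrete "visual tokens" via
# k-means, the kind of vector quantization a VideoBERT-style model builds on.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
clip_features = rng.normal(size=(5000, 1024))     # stand-in for pre-extracted clip features

codebook = KMeans(n_clusters=256, n_init=10, random_state=0).fit(clip_features)

def visual_tokens(features, offset=30522):
    """Map each clip feature to a centroid id, shifted past an assumed text vocabulary size."""
    return codebook.predict(features) + offset

video = rng.normal(size=(12, 1024))               # one video = 12 clips
print(visual_tokens(video))                        # discrete tokens to interleave with ASR text tokens
```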
Food is important to human daily life. In this paper, we are interested in learning structural representations of lengthy recipes, which can benefit recipe generation and food cross-modal retrieval tasks. Unlike common visual data, the food images here contain mixed ingredients, and the target recipes are lengthy paragraphs for which we have no annotations of structural information. To address these limitations, we propose a novel method to learn sentence-level tree structures of cooking recipes in an unsupervised manner. Our approach brings together several novel ideas in a systematic framework: (1) exploiting an unsupervised learning approach to obtain sentence-level tree structure labels before training; (2) generating trees of target recipes from images under the supervision of the tree structure labels learned in (1); and (3) integrating the learned tree structures into the recipe generation and food cross-modal retrieval procedures. Our proposed model can produce high-quality sentence-level tree structures and coherent recipes. We achieve state-of-the-art recipe generation and food cross-modal retrieval performance on the benchmark Recipe1M dataset.
To generate proper captions for videos, the inference needs to identify relevant concepts and attend to the spatial relationships between them as well as to the temporal development in the clip. Our end-to-end encoder-decoder video captioning framework incorporates two transformer-based architectures: an adapted transformer for a single joint spatio-temporal analysis of the video, and a self-attention-based decoder for advanced text generation. Furthermore, we introduce an adaptive frame selection scheme that reduces the number of required incoming frames while maintaining the relevant content when training both transformers. Additionally, we estimate semantic concepts relevant for video captioning by aggregating all ground-truth captions per sample. Our approach achieves state-of-the-art results on MSVD, as well as on the large-scale MSR-VTT and VATEX benchmark datasets, across multiple Natural Language Generation (NLG) metrics. Additional evaluations of diversity scores highlight the expressiveness and diversity of the structure of our generated captions.
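The abstract does not spell out the adaptive frame selection scheme; a simplified stand-in is to keep a frame only when its features differ sufficiently from the last kept frame, as sketched below with an arbitrary threshold and random placeholder features.

```python
# Simplified stand-in for adaptive frame selection: keep a frame only when its
# feature vector differs enough from the last kept frame, reducing redundant
# input while preserving visually changing content. The threshold is arbitrary.
import torch
import torch.nn.functional as F

def select_frames(frame_feats, max_frames=32, min_distance=0.15):
    keep = [0]
    for i in range(1, frame_feats.size(0)):
        dist = 1.0 - F.cosine_similarity(frame_feats[i], frame_feats[keep[-1]], dim=0)
        if dist > min_distance:
            keep.append(i)
        if len(keep) == max_frames:
            break
    return torch.tensor(keep)

feats = torch.randn(300, 768)                  # per-frame features from any backbone
print(select_frames(feats))                    # indices of frames passed to the captioner
```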
Recently, several approaches have explored the detection and classification of objects in videos to perform zero-shot action recognition (ZSAR) with remarkable results. In these approaches, class-object relationships are used to associate visual patterns with semantic side information, because these relationships also tend to appear in texts and are therefore reflected in the latent representations of word-vector methods. Inspired by these approaches, and by the ability of video captioning to describe events not only with a set of objects but with contextual information, we propose a method in which video captioning models, called observers, provide different and complementary descriptive sentences. We demonstrate that, in ZSAR, representing videos with descriptive sentences instead of deep features is viable and naturally alleviates the domain adaptation problem, as we reach state-of-the-art (SOTA) performance on the UCF101 dataset and competitive performance on HMDB51 without using their training sets. We also show that word vectors are unsuitable for building the semantic embedding space of our descriptions. We therefore propose to represent the classes with sentences extracted from documents retrieved with internet search engines, without any human evaluation of the descriptions. Finally, we build a shared semantic space with a BERT-based embedder pre-trained on multiple text datasets. We show that this pre-training is essential for bridging the semantic gap. For both types of information, visual and semantic, projection onto this space is straightforward because they are sentences, enabling classification with a nearest-neighbour rule in this shared space. Our code is available at https://github.com/valterlej/zsarcap.
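A minimal sketch of the nearest-neighbour rule in a shared sentence space: embed several observer captions and the class description sentences with a sentence encoder, average the caption embeddings, and assign the closest class. The encoder name and the example texts are illustrative assumptions, not the embedder trained in the paper.

```python
# Sketch of nearest-neighbour zero-shot classification in a sentence-embedding
# space: average the embeddings of several "observer" captions for a video and
# assign the class whose description sentence is closest.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-mpnet-base-v2")   # assumed stand-in for a BERT-based embedder

class_sentences = {
    "archery": "A person draws a bow and shoots an arrow at a target.",
    "playing piano": "Someone sits at a piano and plays a melody with both hands.",
}
observer_captions = [
    "a man is holding a bow and aiming at a target in a field",
    "the archer releases the arrow towards the target",
]

class_names = list(class_sentences)
class_emb = encoder.encode(list(class_sentences.values()), convert_to_tensor=True)
video_emb = encoder.encode(observer_captions, convert_to_tensor=True).mean(dim=0, keepdim=True)

scores = util.cos_sim(video_emb, class_emb)[0]
print("predicted class:", class_names[int(scores.argmax())])
```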
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
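A zero-shot classification sketch following the usage documented in the linked repository might look as follows; the prompts, the image path, and the ViT-B/32 checkpoint choice are illustrative, and the exact API should be checked against the repository.

```python
# Zero-shot classification sketch in the style of the linked CLIP repository:
# embed an image and a set of natural-language prompts, then pick the prompt
# with the highest cosine similarity. Labels and image path are illustrative.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a dog", "a photo of a cat", "a photo of a piano"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # any local image
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("prediction:", labels[int(probs.argmax())])
```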
Dense video captioning (DVC) aims at generating multi-sentence descriptions that elucidate the multiple events in a video, which is challenging and demands visual consistency, discourse coherence, and linguistic diversity. Existing methods mainly generate captions for individual video segments, lacking adaptation to the global visual context and progressive alignment between the rapidly evolving visual content and the textual descriptions, which leads to redundant and spliced descriptions. In this paper, we introduce the concept of information flow to model the progressive information across the video sequence and its captions. By designing a cross-modal information flow alignment mechanism, the visual and textual information flows are captured and aligned, which endows the captioning process with richer context and dynamics on event/topic evolution. Building on the cross-modal information flow alignment module, we further propose the DVCFlow framework, which consists of a global-local visual encoder that captures both global features and local features for each video segment, and a pre-trained caption generator that produces the captions. Extensive experiments on the popular ActivityNet Captions and YouCookII datasets show that our method significantly outperforms competitive baselines and generates more human-like text according to subjective and objective tests.
The remarkable success of deep learning in various domains relies on the availability of large-scale annotated datasets. However, obtaining annotations is expensive and requires great effort, which is especially challenging for videos. Moreover, the use of human-generated annotations leads to models with biased learning and poor domain generalization and robustness. As an alternative, self-supervised learning provides a way for representation learning which does not require annotations and has shown promise in both image and video domains. Different from the image domain, learning video representations is more challenging due to the temporal dimension, bringing in motion and other environmental dynamics. This also provides opportunities for video-exclusive ideas that advance self-supervised learning in the video and multimodal domain. In this survey, we provide a review of existing approaches on self-supervised learning focusing on the video domain. We summarize these methods into four different categories based on their learning objectives: 1) pretext tasks, 2) generative learning, 3) contrastive learning, and 4) cross-modal agreement. We further introduce the commonly used datasets, downstream evaluation tasks, insights into the limitations of existing works, and the potential future directions in this area.
The recent and increasing interest in video-language research has driven the development of large-scale datasets that enable data-intensive machine learning techniques. In comparison, limited effort has been devoted to assessing the suitability of these datasets for the video-language grounding task. Recent works have begun to uncover significant limitations in these datasets, suggesting that state-of-the-art techniques commonly overfit to hidden dataset biases. In this work, we present MAD (Movie Audio Descriptions), a novel benchmark that departs from the paradigm of augmenting existing video datasets with text annotations and focuses on crawling and aligning the available audio descriptions of mainstream movies. MAD contains over 384,000 natural language sentences grounded in over 1,200 hours of video and exhibits a significant reduction in the currently diagnosed biases of video-language grounding datasets. MAD's collection strategy enables a novel and more challenging version of video-language grounding, where short temporal moments (typically seconds long) must be accurately grounded in diverse long-form videos that can last up to three hours.
This paper offers a comprehensive review of the research on Natural Language Generation (NLG) over the past two decades, especially in relation to deep learning methods for data-to-text and text-to-text generation, as well as new applications of NLG technology. The survey aims to (a) give an up-to-date synthesis of deep learning research on the core NLG tasks and the architectures adopted in the field; (b) detail the various NLG tasks and datasets, and draw attention to the challenges in NLG evaluation, focusing on different evaluation methods and their relationships; and (c) highlight some future emphases and relatively recent research questions that arise from the increasing synergies between NLG and other areas of artificial intelligence, such as computer vision, text, and computational creativity.
Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or "temporally deep", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are "doubly deep" in that they can be compositional in spatial and temporal "layers". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they can directly map variable-length inputs (e.g., video frames) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.
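A minimal recurrent-convolutional pipeline in the spirit of these models encodes each frame with a small CNN, feeds the per-frame features to an LSTM, and classifies from the final hidden state; the layer sizes below are placeholders, not the architectures used in the paper.

```python
# Minimal recurrent-convolutional sketch: a small CNN encodes each frame, an
# LSTM models the temporal dynamics, and a linear head predicts an activity
# class per video. Sizes and layers are placeholders.
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                       # x: (B*T, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class RecurrentConvNet(nn.Module):
    def __init__(self, num_classes=10, feat_dim=256, hidden=512):
        super().__init__()
        self.cnn = FrameCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # classify from the last time step

video = torch.randn(2, 16, 3, 112, 112)         # two clips of 16 frames each
print(RecurrentConvNet()(video).shape)          # torch.Size([2, 10])
```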
A generic video summary is an abridged version of a video that conveys the whole story and features its most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing a summary by using natural language to specify what is important to them. Furthermore, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, which are typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). In particular, we achieve substantial improvements in the transfer setting, demonstrating the strong generalization ability of our approach.
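A simplified language-guided frame scorer along these lines fuses per-frame features with a query (or dense-caption) embedding, contextualizes them with a transformer encoder, and outputs per-frame importance scores whose top-k frames form the summary; the dimensions and fusion choice below are assumptions, not CLIP-It's actual architecture.

```python
# Simplified language-guided frame scorer: per-frame features are fused with a
# query/caption embedding, contextualized by a transformer encoder, and mapped
# to importance scores; top-scoring frames form the summary.
import torch
import torch.nn as nn

class QueryGuidedScorer(nn.Module):
    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.score = nn.Linear(dim, 1)

    def forward(self, frame_feats, query_feat):          # (B, T, D), (B, D)
        q = query_feat.unsqueeze(1).expand_as(frame_feats)
        x = self.fuse(torch.cat([frame_feats, q], dim=-1))
        x = self.encoder(x)
        return self.score(x).squeeze(-1)                  # (B, T) importance scores

frames = torch.randn(1, 120, 512)        # frame features from any backbone
query = torch.randn(1, 512)              # embedding of a user query or dense caption
scores = QueryGuidedScorer()(frames, query)
summary_frames = scores.topk(k=10, dim=1).indices        # indices of the 10 highest-scoring frames
print(summary_frames)
```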
The goal of building dialogue agents that can converse with humans naturally has been a long-standing dream of researchers since the early days of artificial intelligence. The well-known Turing Test proposed to judge the ultimate validity of an artificial intelligence agent by the indistinguishability of its dialogues from humans'. It should come as no surprise that human-level dialogue systems are very challenging to build. But while early efforts on rule-based systems found limited success, the emergence of deep learning has enabled great advances on this topic. In this thesis, we focus on methods that address the numerous issues that sustain the gap between artificial conversational agents and human-level interlocutors. These methods were proposed and experimented with in ways inspired by general state-of-the-art AI methodologies, but they also target the characteristics specific to dialogue systems.