We consider the question: what can be learnt by looking at and listening to a large number of unlabelled videos? There is a valuable, but so far untapped, source of information contained in the video itself: the correspondence between the visual and the audio streams. We introduce a novel "Audio-Visual Correspondence" learning task that makes use of this. Training visual and audio networks from scratch, without any additional supervision other than the raw unconstrained videos themselves, is shown to successfully solve this task and, more interestingly, to result in good visual and audio representations. These features set the new state-of-the-art on two sound classification benchmarks, and perform on par with the state-of-the-art self-supervised approaches on ImageNet classification. We also demonstrate that the network is able to localize objects in both modalities, as well as perform fine-grained recognition tasks.
In this paper, our objectives are, first, networks that can embed audio and visual inputs into a common space that is suitable for cross-modal retrieval; and second, a network that can localize the object that sounds in an image, given the audio signal. We achieve both these objectives by training from unlabelled video, using only audio-visual correspondence (AVC) as the objective function. This is a form of cross-modal self-supervision from video. To this end, we design new network architectures that can be trained for cross-modal retrieval and for localizing the sound source in an image, by using the AVC task. We make the following contributions: (i) show that audio and visual embeddings can be learnt that enable both within-mode (e.g. audio-to-audio) and between-mode retrieval; (ii) explore various architectures for the AVC task, including those for the visual stream that ingest a single image, or multiple images, or a single image and multi-frame optical flow; (iii) show that the semantic object that sounds within an image can be localized (using only the sound, no motion or flow information); and (iv) give a cautionary tale on how to avoid undesirable shortcuts in the data preparation.
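For concreteness, a minimal sketch of an AVC-style setup (PyTorch; all layer sizes, module names, and input resolutions below are illustrative assumptions, not the papers' exact architectures): a vision subnetwork embeds a single frame, an audio subnetwork embeds a log-mel spectrogram of roughly one second of audio, and a small fusion head classifies the pair as corresponding or not.

```python
import torch
import torch.nn as nn

class AVCNet(nn.Module):
    """Toy audio-visual correspondence (AVC) model: predicts whether an image
    and an audio spectrogram come from the same video clip."""
    def __init__(self, dim=128):
        super().__init__()
        # Illustrative vision subnetwork (input: 3 x 224 x 224 frames).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        # Illustrative audio subnetwork (input: 1 x 64 x 100 log-mel spectrograms).
        self.audio = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        # Fusion head: concatenate the two embeddings, output correspond / not.
        self.head = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, image, spec):
        v = self.vision(image)
        a = self.audio(spec)
        return self.head(torch.cat([v, a], dim=1))

# Positive pairs: frame + audio from the same video; negatives: mismatched pairs.
model = AVCNet()
logits = model(torch.randn(8, 3, 224, 224), torch.randn(8, 1, 64, 100))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
```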
There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.
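A minimal sketch of a synchronization objective in this spirit (PyTorch; the margin, embedding size, and curriculum switch are illustrative assumptions, not the paper's exact loss): aligned audio-video pairs are pulled together, "easy" negatives taken from other videos are pushed apart from the start, and "hard" negatives taken from the same video at a temporal offset can be introduced later in training.

```python
import torch
import torch.nn.functional as F

def sync_contrastive_loss(v_pos, a_pos, a_easy_neg, a_hard_neg, margin=1.0, use_hard=False):
    """Distance-based contrastive loss for audio-video synchronization.
    v_pos/a_pos: embeddings of temporally aligned video/audio snippets.
    a_easy_neg: audio from a different video; a_hard_neg: audio from the same
    video but temporally shifted. All tensors are (batch, dim), L2-normalized.
    A curriculum can start with use_hard=False and enable it later."""
    def dist(x, y):
        return (x - y).pow(2).sum(dim=1)                     # squared Euclidean distance

    loss = dist(v_pos, a_pos)                                # pull aligned pairs together
    loss = loss + F.relu(margin - dist(v_pos, a_easy_neg))   # push easy negatives apart
    if use_hard:
        loss = loss + F.relu(margin - dist(v_pos, a_hard_neg))  # harder: same video, shifted audio
    return loss.mean()

v = F.normalize(torch.randn(16, 128), dim=1)
a, a_en, a_hn = (F.normalize(torch.randn(16, 128), dim=1) for _ in range(3))
print(sync_contrastive_loss(v, a, a_en, a_hn, use_hard=True).item())
```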
In this paper, we consider the problem of audio-visual synchronisation applied to videos "in the wild" (i.e., general classes beyond speech). As a new task, we identify and curate a test set with high audio-visual correlation, namely VGG-Sound Sync. We compare a number of transformer-based architectural variants specifically designed to model audio and visual signals of arbitrary length, while significantly reducing memory requirements during training. We further conduct an in-depth analysis of the curated dataset and define an evaluation metric for open-domain audio-visual synchronisation. We apply our method to the standard lip-reading speech benchmarks, LRS2 and LRS3, with ablations on various aspects. Finally, we set the first benchmark for general audio-visual synchronisation with over 160 diverse classes on the new VGG-Sound Sync video dataset. In all cases, our proposed model outperforms the previous state-of-the-art by a significant margin.
The thud of a bouncing ball, the onset of speech as lips open: when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on/off-screen audio source separation, e.g. removing the off-screen translator's voice from a foreign official's speech. Code, models, and video results are available on our webpage: http://andrewowens.com/multisensory.
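A small sketch of how aligned and misaligned training pairs can be constructed for such an alignment-prediction objective (assuming the audio track is available as a waveform at a known sample rate; the clip length and minimum shift below are illustrative, not the paper's settings):

```python
import random
import numpy as np

def sample_audio_for_clip(waveform, sr, clip_start_sec, clip_sec=4.0, min_shift_sec=1.0):
    """Given a video clip starting at clip_start_sec, return (audio, label):
    label 1 if the audio snippet is taken at the same time as the clip,
    label 0 if it is shifted by at least min_shift_sec within the same video."""
    clip_len = int(clip_sec * sr)
    aligned_start = int(clip_start_sec * sr)
    if random.random() < 0.5:
        return waveform[aligned_start:aligned_start + clip_len], 1     # aligned pair
    while True:  # rejection-sample a sufficiently shifted start point
        other = random.randint(0, len(waveform) - clip_len)
        if abs(other - aligned_start) >= int(min_shift_sec * sr):
            return waveform[other:other + clip_len], 0                 # misaligned pair

waveform = np.random.randn(10 * 16000)                                 # 10 s of mono audio at 16 kHz
audio, label = sample_audio_for_clip(waveform, sr=16000, clip_start_sec=2.0)
```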
Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network: a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, Audioset and ESC-50 when compared to previous self-supervised work. Our models are publicly available [1, 2, 3].
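The abstract does not spell out the deflation procedure, but a minimal sketch of one way to deflate a 3D-convolutional video backbone so that it can ingest a single image is to collapse each kernel over its temporal dimension (batch-norm recalibration and other details omitted; assumes PyTorch):

```python
import torch
import torch.nn as nn

def deflate_conv3d(conv3d: nn.Conv3d) -> nn.Conv2d:
    """Collapse a Conv3d into a Conv2d by summing its kernel weights over time.
    On a single image this approximates the 3D network's response to that image
    tiled along the temporal axis (boundary padding effects ignored)."""
    conv2d = nn.Conv2d(
        conv3d.in_channels, conv3d.out_channels,
        kernel_size=conv3d.kernel_size[1:], stride=conv3d.stride[1:],
        padding=conv3d.padding[1:], bias=conv3d.bias is not None)
    with torch.no_grad():
        conv2d.weight.copy_(conv3d.weight.sum(dim=2))     # sum over the temporal dimension
        if conv3d.bias is not None:
            conv2d.bias.copy_(conv3d.bias)
    return conv2d

video_conv = nn.Conv3d(3, 64, kernel_size=(3, 7, 7), stride=(1, 2, 2), padding=(1, 3, 3))
image_conv = deflate_conv3d(video_conv)
out = image_conv(torch.randn(1, 3, 224, 224))             # now works on a single frame
```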
We tackle the problem of learning object detectors without supervision. Unlike weakly supervised object detection, we do not assume image-level class labels. Instead, we extract a supervisory signal from audio-visual data, using the audio component to "teach" the object detector. While this problem is related to sound source localization, it is considerably harder because the detector must classify objects by type, enumerate each instance of an object, and do so even when the object is silent. We tackle this problem by first designing a self-supervised framework with a contrastive objective that jointly learns to classify and localize objects. Then, without using any supervision, we simply use these self-supervised labels and boxes to train an image-based object detector. With this, we outperform previous unsupervised and weakly supervised detectors on the tasks of object detection and sound source localization. We also show that we can align this detector to ground-truth classes with as little as one label per pseudo-class, and demonstrate how our method can learn to detect generic objects beyond instruments, such as airplanes and cats.
Audio-visual scenes are pervasive in our daily life. It is commonplace for humans to localize different sounding objects, but it is very challenging for machines to achieve class-aware sounding object localization without category annotations, i.e., to localize sounding objects and recognize their categories. To address this problem, we propose a two-stage, step-by-step learning framework that localizes and recognizes sounding objects in complex audio-visual scenarios using only the correspondence between audio and vision. First, we propose to determine the sounding area via coarse-grained audio-visual correspondence in the single-source case. Then, visual features in the sounding area are leveraged as candidate object representations to establish a category-representation object dictionary for expressive visual character extraction. We generate class-aware object localization maps in cocktail-party scenarios and use audio-visual correspondence to suppress silent areas by referring to this dictionary. Finally, we employ category-level audio-visual consistency as supervision to achieve fine-grained alignment between the audio and sounding object distributions. Experiments on both realistic and synthesized videos show that our model is superior at localizing and recognizing objects, as well as filtering out silent ones. We also transfer the learned audio-visual network to the unsupervised object detection task, obtaining reasonable performance.
We present a new benchmark dataset, Sapsucker Woods 60 (SSW60), for advancing research on audio-visual fine-grained categorization. While our community has made great strides in fine-grained visual categorization on images, the counterparts in audio and video fine-grained categorization remain relatively unexplored. To encourage advances in this space, we have carefully constructed the SSW60 dataset to enable researchers to experiment with classifying the same set of categories in three different modalities: images, audio, and video. The dataset covers 60 species of birds and is composed of existing datasets along with brand-new, expert-curated audio and video datasets. We thoroughly benchmark audio-visual classification performance and modality fusion experiments using state-of-the-art transformer-based methods. Our findings show that audio-visual fusion methods perform better on the video classification task than methods that use only image-based or audio-based features. We also present interesting modality-transfer experiments, enabled by the unique construction of SSW60 encompassing three different modalities. We hope the SSW60 dataset and accompanying baselines spur research in this fascinating area.
We present an approach for recommending a music track for a given video, and vice versa, based on both their temporal alignment and their correspondence at an artistic level. We propose a self-supervised approach that learns this correspondence directly from data, without any need for human annotations. To capture the high-level concepts required to solve the task, we propose modeling the long-term temporal context of both the video and the music signals, using Transformer networks for each modality. Experiments show that this approach strongly outperforms alternatives that do not exploit the temporal context. The combination of our contributions improves retrieval accuracy by up to 10x over the prior state of the art. This strong improvement allows us to introduce a wide range of analyses and applications. For instance, we can condition music retrieval based on visually defined attributes.
Visual and audio modalities are highly correlated, yet they contain different information. Their strong correlation makes it possible to predict the semantics of one from the other with good accuracy. Their intrinsic differences make cross-modal prediction a potentially more rewarding pretext task for self-supervised learning of video and audio representations compared to within-modality learning. Based on this intuition, we propose Cross-Modal Deep Clustering (XDC), a novel self-supervised method that leverages unsupervised clustering in one modality (e.g., audio) as a supervisory signal for the other modality (e.g., video). This cross-modal supervision helps XDC utilize the semantic correlation and the differences between the two modalities. Our experiments show that XDC outperforms single-modality clustering and other multi-modal variants. XDC achieves state-of-the-art accuracy among self-supervised methods on multiple video and audio benchmarks. Most importantly, our video model pretrained on large-scale unlabeled data significantly outperforms the same model pretrained with full-supervision on ImageNet and Kinetics for action recognition on HMDB51 and UCF101. To the best of our knowledge, XDC is the first self-supervised learning method that outperforms large-scale fully-supervised pretraining for action recognition on the same architecture.
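A schematic sketch of one round of this cross-modal pseudo-labelling idea (assuming scikit-learn's KMeans and PyTorch; the encoders, augmentations, mini-batching, and the alternating schedule between modalities are all simplified away): audio features are clustered, and the cluster assignments become classification targets for the video encoder. The symmetric round, clustering video features to supervise the audio encoder, is analogous.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class TinyVideoEncoder(nn.Module):
    """Placeholder video encoder standing in for a real 3D backbone."""
    out_dim = 128
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 32 * 32, self.out_dim))
    def forward(self, x):
        return self.net(x)

def xdc_round(video_encoder, video_clips, audio_feats, n_clusters=64, steps=100, lr=1e-3):
    """One schematic XDC round: cluster audio features with k-means, then train
    the video encoder to predict each clip's audio cluster as a pseudo-label."""
    pseudo_labels = torch.as_tensor(
        KMeans(n_clusters=n_clusters, n_init=10).fit_predict(audio_feats), dtype=torch.long)
    head = nn.Linear(video_encoder.out_dim, n_clusters)
    opt = torch.optim.SGD(list(video_encoder.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(steps):
        logits = head(video_encoder(video_clips))
        loss = nn.functional.cross_entropy(logits, pseudo_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

clips = torch.randn(256, 3, 8, 32, 32)           # toy video clips
audio_feats = torch.randn(256, 512).numpy()      # toy audio features from the other stream
xdc_round(TinyVideoEncoder(), clips, audio_feats, n_clusters=8, steps=10)
```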
We propose to explore a new problem called audio-visual segmentation (AVS), in which the goal is to output a pixel-level map of the object(s) producing sound at the time of the image frame. To facilitate this research, we construct the first audio-visual segmentation benchmark (AVSBench), providing pixel-wise annotations for the sounding objects in audible videos. Two settings are studied with this benchmark: 1) semi-supervised audio-visual segmentation with a single sound source and 2) fully supervised audio-visual segmentation with multiple sound sources. To deal with the AVS problem, we propose a novel method that uses a temporal pixel-wise audio-visual interaction module to inject audio semantics as guidance for the visual segmentation process. We also design a regularization loss to encourage audio-visual mapping during training. Quantitative and qualitative experiments on AVSBench compare our approach with several existing methods from related tasks, showing that the proposed method is promising for building a bridge between audio and pixel-wise visual semantics. Code is available at https://github.com/opennlplab/avsbench.
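The paper's temporal pixel-wise audio-visual interaction module is considerably richer, but the basic idea of injecting an audio embedding into a visual segmentation head can be sketched as follows (PyTorch; all dimensions and the fusion scheme are illustrative assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class AudioGuidedMaskHead(nn.Module):
    """Minimal audio-conditioned segmentation head: an audio embedding is
    broadcast over the spatial grid, fused with the visual features, and a
    1x1 convolution predicts a per-pixel 'sounding object' logit."""
    def __init__(self, vis_dim=256, aud_dim=128):
        super().__init__()
        self.proj = nn.Linear(aud_dim, vis_dim)                 # map audio into visual feature space
        self.fuse = nn.Conv2d(2 * vis_dim, vis_dim, kernel_size=3, padding=1)
        self.mask = nn.Conv2d(vis_dim, 1, kernel_size=1)

    def forward(self, vis_feat, aud_emb):
        # vis_feat: (B, C, H, W) visual feature map; aud_emb: (B, A) audio embedding.
        b, c, h, w = vis_feat.shape
        a = self.proj(aud_emb).view(b, c, 1, 1).expand(b, c, h, w)
        fused = torch.relu(self.fuse(torch.cat([vis_feat, a], dim=1)))
        return self.mask(fused)                                  # (B, 1, H, W) mask logits

head = AudioGuidedMaskHead()
logits = head(torch.randn(2, 256, 56, 56), torch.randn(2, 128))
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.rand(2, 1, 56, 56))
```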
Binaural audio provides human listeners with an immersive spatial sound experience, but most existing videos lack binaural recordings. We propose an audio spatialization method that draws on the visual information in a video to convert its monaural (single-channel) audio into binaural audio. Whereas existing approaches leverage visual features extracted directly from video frames, our approach explicitly disentangles the geometric cues present in the visual stream to guide the learning process. In particular, we develop a multi-task framework that learns geometry-aware features for binaural audio generation by accounting for the underlying room impulse response, the positions of the sound sources, and the consistency of the sound's geometry over time. Furthermore, we introduce a new large-scale video dataset with realistic binaural audio for real-world scanned environments. On two datasets, we demonstrate the efficacy of our method, which achieves state-of-the-art results.
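This paper's multi-task, geometry-aware formulation is more involved, but a common way to set up visually guided mono-to-binaural conversion, predicting the STFT of the left-minus-right difference signal from the mono mixture plus a visual embedding, can be sketched as follows (PyTorch; all sizes are illustrative and this is not the paper's exact model):

```python
import torch
import torch.nn as nn

class Mono2Binaural(nn.Module):
    """Toy visually guided spatialization: given the mono STFT (real/imag as two
    channels) and a visual embedding, predict the STFT of the left-minus-right
    difference signal. Left/right channels can then be reconstructed as
    (mono + diff) / 2 and (mono - diff) / 2."""
    def __init__(self, vis_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, 64)
        self.net = nn.Sequential(
            nn.Conv2d(2 + 64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1))          # real/imag of the predicted difference STFT

    def forward(self, mono_spec, vis_emb):
        # mono_spec: (B, 2, F, T) mono STFT; vis_emb: (B, vis_dim) visual embedding.
        b, _, f, t = mono_spec.shape
        v = self.vis_proj(vis_emb).view(b, -1, 1, 1).expand(b, 64, f, t)
        return self.net(torch.cat([mono_spec, v], dim=1))

model = Mono2Binaural()
diff_spec = model(torch.randn(4, 2, 257, 64), torch.randn(4, 512))
```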
We present a self-supervised framework for learning audio-visual representations. Our framework introduces a novel concept: in addition to learning intra-modal and standard "synchronous" cross-modal relations, CrissCross also learns "asynchronous" cross-modal relations. We show that by relaxing the temporal synchronicity between the audio and visual modalities, the network learns strong time-invariant representations. Our experiments show that strong augmentations of both the audio and visual modalities, together with relaxed cross-modal temporal synchronicity, optimize performance. To pretrain our proposed framework, we use datasets of different sizes: Kinetics-Sound, Kinetics-400, and AudioSet. The learned representations are evaluated on a number of downstream tasks, namely action recognition, sound classification, and retrieval. CrissCross shows state-of-the-art performance on action recognition (UCF101 and HMDB51) and sound classification (ESC-50). The code and pretrained models will be made publicly available.
Machine learning is transforming the video editing industry. Recent advances in computer vision have leveled up video editing tasks such as intelligent reframing, rotoscoping, color grading, and applying digital makeup. However, most solutions have focused on video manipulation and VFX. This work introduces the Anatomy of Video Editing, a dataset and benchmark, to foster research in AI-assisted video editing. Our benchmark suite focuses on video editing tasks beyond visual effects, such as automatic footage organization and assisted video assembly. To enable research on these fronts, we annotate more than 1.5 million tags on 196,176 shots sampled from movie scenes. We establish competitive baseline methods and detailed analyses for each task. We hope our work sparks innovative research towards the underexplored area of AI-assisted video editing.
We introduce the visual acoustic matching task, in which an audio clip is transformed to sound as though it was recorded in a target environment. Given an image of the target environment and a waveform of the source audio, the goal is to re-synthesize the audio to match the target room acoustics suggested by the visible geometry and materials. To address this novel task, we propose a cross-modal transformer model that uses audio-visual attention to inject visual properties into the audio and generate realistic audio output. In addition, we design a self-supervised training objective that can learn acoustic matching from in-the-wild Web videos, despite their lack of acoustically mismatched audio. We demonstrate that our approach successfully translates human speech to a variety of real-world environments depicted in images, outperforming both traditional acoustic matching and more heavily supervised baselines.
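A minimal sketch of the kind of audio-visual attention described here (using PyTorch's nn.MultiheadAttention; the dimensions are illustrative and the full model, including waveform generation, is far more involved): audio frame tokens act as queries and attend over image patch tokens, so visual room cues can be injected into the audio representation before re-synthesis.

```python
import torch
import torch.nn as nn

class AudioVisualAttention(nn.Module):
    """Audio tokens (queries) attend over image patch tokens (keys/values),
    injecting visual room cues into the audio stream."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, audio_tokens, image_tokens):
        # audio_tokens: (B, T, D) audio frames; image_tokens: (B, N, D) image patches.
        attended, _ = self.attn(audio_tokens, image_tokens, image_tokens)
        x = self.norm1(audio_tokens + attended)        # residual + norm
        return self.norm2(x + self.ff(x))

block = AudioVisualAttention()
out = block(torch.randn(2, 200, 256), torch.randn(2, 196, 256))
```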
Annotating musical beats is a long and tedious process. To combat this problem, we propose a new self-supervised learning pretext task for beat tracking and downbeat estimation. This task makes use of Spleeter, an audio source separation model, to separate a song's drums from the rest of its signal. The first set of signals is used as positives (and, by extension, negatives) for contrastive pre-training, while the drum signals serve as anchors. When fully convolutional and recurrent models are trained with this pretext task, an onset function is learned. In some cases, this function was found to map to periodic elements of a song. We find that when the beat-tracking training set is very small (fewer than 10 examples), the pre-trained models outperform randomly initialized ones. When this is not the case, pre-training yields a learning speed-up that leads the model to overfit the training set. More generally, this work defines new perspectives in the field of musical self-supervised learning. It is notably one of the first works to use audio source separation as a fundamental component of self-supervision.
The aim of this work is to investigate the impact of cross-modal self-supervised pre-training on speech reconstruction (video-to-audio) by leveraging the natural co-occurrence of audio and visual streams in videos. We propose LipSound2, which consists of an encoder-decoder architecture and a location-aware attention mechanism that maps face image sequences directly to mel-spectrograms, without requiring any human annotations. The proposed LipSound2 model is first pre-trained on ~2400 hours of multilingual (e.g., English and German) audio-visual data (VoxCeleb2). To verify the generalizability of the proposed method, we then fine-tune the pre-trained model on domain-specific datasets (GRID, TCD-TIMIT), achieving significant improvements in speech quality and intelligibility over previous approaches in both speaker-dependent and speaker-independent settings. In addition to English, we also conduct Chinese speech reconstruction on the CMLR dataset to verify the impact on transferability. Finally, we train a cascaded lip-reading (video-to-text) system by fine-tuning the generated audio on a pre-trained speech recognition system, achieving state-of-the-art performance on both English and Chinese benchmark datasets.
Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in the time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).
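As a point of reference for the single-frame comparison above, a late-fusion baseline can be sketched as follows (PyTorch; the per-frame backbone below is a placeholder, not the paper's network): a 2D classifier scores each sampled frame independently, and the clip-level prediction is the average of the per-frame class probabilities.

```python
import torch
import torch.nn as nn

def late_fusion_predict(frame_model: nn.Module, clip: torch.Tensor) -> torch.Tensor:
    """Late fusion over time: run a per-frame classifier on every sampled frame
    of a clip (T, 3, H, W) and average the class probabilities."""
    with torch.no_grad():
        logits = frame_model(clip)                     # (T, num_classes)
        return logits.softmax(dim=1).mean(dim=0)       # (num_classes,) averaged over frames

frame_model = nn.Sequential(                            # placeholder per-frame classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 487))
clip = torch.randn(10, 3, 170, 170)                     # 10 frames sampled from one video
probs = late_fusion_predict(frame_model, clip)
```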
In this paper, we address the problem of lip-voice synchronization in videos containing human faces and voices. Our approach is based on determining whether the lip movements and the voice in a video are synchronized, according to their audio-visual correspondence score. We propose an audio-visual cross-modal transformer-based model that outperforms several baseline models on the audio-visual synchronization task on the standard lip-reading speech benchmark dataset LRS2. While existing methods focus mainly on lip synchronization in speech videos, we also consider the special case of the singing voice. The singing voice is a more challenging use case for synchronization due to sustained vowel sounds. We also investigate the relevance of lip synchronization models trained on speech datasets in the context of the singing voice. Finally, we use the frozen visual features learned by our lip synchronization model in the singing voice separation task to outperform a baseline audio-visual model trained end-to-end. Demos, source code, and pre-trained models are available at https://ipcv.github.io/vocalist/.
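Once a model can score how well a short video window corresponds to a short audio window, the audio-visual offset can be estimated by sliding one stream against the other and taking the best-scoring shift. A schematic sketch follows (the scoring model itself, e.g. a cross-modal transformer, is abstracted as score_fn; the window and offset ranges are illustrative assumptions):

```python
import torch

def estimate_av_offset(score_fn, video_feats, audio_feats, max_offset=15, win=5):
    """Estimate the audio-visual offset (in frames) by sliding the audio stream
    against the video stream and picking the shift with the highest mean
    correspondence score. Both inputs are (T, D) per-frame features;
    score_fn(v_win, a_win) returns a scalar score for two aligned windows."""
    t = min(len(video_feats), len(audio_feats))
    best = (float("-inf"), 0)
    for shift in range(-max_offset, max_offset + 1):
        scores = []
        for start in range(max(0, -shift), t - win - max(0, shift), win):
            v = video_feats[start:start + win]
            a = audio_feats[start + shift:start + shift + win]
            scores.append(score_fn(v, a))
        if scores:
            best = max(best, (sum(scores) / len(scores), shift))
    return best[1]

# Toy scorer: cosine similarity between mean-pooled window features.
def toy_score(v, a):
    return torch.nn.functional.cosine_similarity(v.mean(0), a.mean(0), dim=0).item()

offset = estimate_av_offset(toy_score, torch.randn(100, 64), torch.randn(100, 64))
```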