We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks. Code will be available soon.
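To make the three objectives concrete, here is a minimal, hypothetical sketch of how they could be combined into one training loss; the tensor names, shapes, and equal weighting are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of MAViL's three objectives on pre-computed token
# features; the tensor names and shapes are assumptions, not the authors' API.
import torch
import torch.nn.functional as F

def info_nce(a, b, t=0.07):
    """Symmetric InfoNCE between L2-normalized embeddings of shape (B, D)."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / t
    tgt = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, tgt) + F.cross_entropy(logits.t(), tgt))

def mavil_loss(a_pred, a_patch, a_mask,      # audio reconstruction, targets, mask (B, N)
               v_pred, v_patch, v_mask,      # video reconstruction, targets, mask (B, M)
               a_emb, v_emb,                 # pooled audio / video embeddings (B, D)
               fused_pred, fused_teacher):   # student prediction vs. teacher's joint features
    # (1) Reconstruct raw patches, scoring only the masked positions (MAE-style).
    rec = (F.mse_loss(a_pred, a_patch, reduction="none").mean(-1) * a_mask).sum() / a_mask.sum()
    rec = rec + (F.mse_loss(v_pred, v_patch, reduction="none").mean(-1) * v_mask).sum() / v_mask.sum()
    # (2) Inter-modal contrastive term (intra-modal views would be handled the same way).
    con = info_nce(a_emb, v_emb)
    # (3) Self-training: regress the joint audio-video features of a teacher
    #     trained with objectives (1) and (2).
    distill = F.smooth_l1_loss(fused_pred, fused_teacher.detach())
    return rec + con + distill
```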
We present Fast Language-Image Pre-training (FLIP), a simple and more efficient method for training CLIP. Our method randomly masks out and removes a large portion of image patches during training. Masking allows us to learn from more image-text pairs given the same wall-clock time and contrast more samples per iteration with similar memory footprint. It leads to a favorable trade-off between accuracy and training time. In our experiments on 400 million image-text pairs, FLIP improves both accuracy and speed over the no-masking baseline. On a large diversity of downstream tasks, FLIP dominantly outperforms the CLIP counterparts trained on the same data. Facilitated by the speedup, we explore the scaling behavior of increasing the model size, data size, or training length, and report encouraging results and comparisons. We hope that our work will foster future research on scaling vision-language learning.
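As an illustration of the idea, the following hedged sketch drops a random subset of patch tokens before the image encoder and then applies a standard CLIP-style contrastive loss; the helper names (`patchify`, the encoders) and the 50% ratio are assumptions, not the paper's code.

```python
# Hypothetical FLIP-style step: drop a random subset of image patches before the
# image encoder, then apply the usual CLIP contrastive loss. Names are assumptions.
import torch
import torch.nn.functional as F

def random_keep(patch_tokens, keep_ratio=0.5):
    """Keep a random `keep_ratio` fraction of patch tokens per image (B, N, D)."""
    B, N, _ = patch_tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    idx = torch.rand(B, N, device=patch_tokens.device).argsort(dim=1)[:, :n_keep]
    return patch_tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, patch_tokens.size(-1)))

def flip_step(images, texts, image_encoder, text_encoder, patchify, t=0.07):
    tokens = patchify(images)                      # (B, N, D) patch embeddings (assumed helper)
    tokens = random_keep(tokens, keep_ratio=0.5)   # fewer tokens => less compute and memory
    img = F.normalize(image_encoder(tokens), dim=-1)
    txt = F.normalize(text_encoder(texts), dim=-1)
    logits = img @ txt.t() / t
    tgt = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, tgt) + F.cross_entropy(logits.t(), tgt))
```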
As an important area of computer vision, object tracking has formed two separate communities that respectively study single object tracking (SOT) and multiple object tracking (MOT). However, because the two tasks differ in their training datasets and in the objects they track, current methods in one tracking scenario are not easily adapted to the other. Although UniTrack \cite{wang2021Diverent} shows that a shared appearance model with multiple heads can handle individual tracking tasks, it cannot exploit large-scale tracking datasets for training and performs poorly on single object tracking. In this work, we propose the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm. A track transformer is developed in our UTT to track the target in both SOT and MOT, exploiting the correlation between target features and tracking-frame features to localize the target. We demonstrate that both SOT and MOT tasks can be solved within this framework, and that the model can be trained end to end by simultaneously optimizing the SOT and MOT objectives on the datasets of the individual tasks. Extensive experiments are conducted on several benchmarks with a unified model trained on both SOT and MOT datasets. Code will be available at https://github.com/flowerfan/trackron.
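The correlation step can be illustrated with a generic sketch: a target embedding is correlated with the tracking-frame feature map to produce a response map whose peak gives a coarse location. This is not the UTT track transformer itself, only a toy version of the localization-by-correlation idea.

```python
# Illustrative correlation head: the target embedding is correlated with the
# tracking-frame feature map to localize the target. Shapes are assumptions.
import torch
import torch.nn.functional as F

def correlate(target_emb, frame_feats):
    """target_emb: (B, D); frame_feats: (B, D, H, W) -> response map (B, H, W)."""
    t = F.normalize(target_emb, dim=1)
    f = F.normalize(frame_feats, dim=1)
    return torch.einsum("bd,bdhw->bhw", t, f)

# Usage: the argmax of the response map gives a coarse location that a
# box-regression head could then refine.
resp = correlate(torch.randn(2, 256), torch.randn(2, 256, 32, 32))
peak = resp.flatten(1).argmax(dim=1)   # flattened (y, x) index per sample
y, x = peak // 32, peak % 32
```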
While today's video recognition systems parse snapshots or short clips accurately, they cannot connect the dots and reason across a longer range of time yet. Most existing video architectures can only process <5 seconds of a video without hitting the computation or memory bottlenecks. In this paper, we propose a new strategy to overcome this challenge. Instead of trying to process more frames at once like most existing methods, we propose to process videos in an online fashion and cache "memory" at each iteration. Through the memory, the model can reference prior context for long-term modeling, with only a marginal cost. Based on this idea, we build MeMViT, a Memory-augmented Multiscale Vision Transformer, that has a temporal support 30x longer than existing models with only 4.5% more compute; traditional methods need >3,000% more compute to do the same. On a wide range of settings, the increased temporal support enabled by MeMViT brings large gains in recognition accuracy consistently. MeMViT obtains state-of-the-art results on the AVA, EPIC-Kitchens-100 action classification, and action anticipation datasets. Code and models are available at https://github.com/facebookresearch/memvit.
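A minimal sketch of the caching idea, assuming a single-head attention layer over token features: keys and values from earlier clips are stored detached and prepended to the current clip's keys and values, so long-term context is referenced without backpropagating through time. Shapes, the cache size, and the class name are illustrative, not the MeMViT implementation.

```python
# Minimal "cache memory at each iteration" sketch; names and sizes are assumptions.
import torch

class MemoryAttention(torch.nn.Module):
    def __init__(self, dim, max_mem_tokens=1024):
        super().__init__()
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.max_mem = max_mem_tokens
        self.register_buffer("mem_k", torch.zeros(0, dim), persistent=False)
        self.register_buffer("mem_v", torch.zeros(0, dim), persistent=False)

    def forward(self, x):                       # x: (N, D) tokens of the current clip
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        k_all = torch.cat([self.mem_k, k], dim=0)
        v_all = torch.cat([self.mem_v, v], dim=0)
        attn = (q @ k_all.t() / k.size(-1) ** 0.5).softmax(dim=-1)
        out = attn @ v_all
        # Cache this clip's keys/values for future iterations; detaching keeps the
        # extra cost marginal because no gradients flow into past clips.
        self.mem_k = torch.cat([self.mem_k, k.detach()], dim=0)[-self.max_mem:]
        self.mem_v = torch.cat([self.mem_v, v.detach()], dim=0)[-self.max_mem:]
        return out
```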
We present Masked Feature Prediction (MaskFeat) for self-supervised pre-training of video models. Our approach first randomly masks out a portion of the input sequence and then predicts the features of the masked regions. We study five different types of features and find that Histograms of Oriented Gradients (HOG), a hand-crafted feature descriptor, works particularly well in terms of both performance and efficiency. We observe that the local contrast normalization in HOG is essential for good results, which is in line with earlier work that used HOG for visual recognition. Our approach can learn rich visual knowledge and drive large-scale Transformer-based models. Without using extra model weights or supervision, MaskFeat pre-trained on unlabeled videos achieves unprecedented results of 86.7% with MViT-L on Kinetics-400, 88.3% on Kinetics-600, 80.4% on Kinetics-700, 38.8 mAP on AVA, and 75.0% on SSv2. MaskFeat further generalizes to image input, which can be interpreted as a video with a single frame, and obtains competitive results on ImageNet.
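A hedged sketch of the objective described above: a fraction of patch tokens is masked and the model regresses the HOG descriptor of each masked patch, with the loss computed only over masked positions. The HOG targets are assumed to be precomputed per patch (for example with skimage.feature.hog); shapes and the masking ratio are illustrative.

```python
# MaskFeat-style loss sketch; HOG targets are assumed precomputed per patch.
import torch
import torch.nn.functional as F

def maskfeat_loss(pred, hog_targets, mask):
    """
    pred:        (B, N, C) model predictions for every patch token
    hog_targets: (B, N, C) HOG descriptor of each patch (with local contrast normalization)
    mask:        (B, N)    1 for masked patches, 0 for visible ones
    """
    per_patch = F.mse_loss(pred, hog_targets, reduction="none").mean(-1)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)

def random_mask(B, N, ratio=0.4, device="cpu"):
    """Pick `ratio` of the N patches to mask in each of the B samples."""
    noise = torch.rand(B, N, device=device)
    idx = noise.argsort(dim=1)[:, : int(N * ratio)]
    mask = torch.zeros(B, N, device=device)
    return mask.scatter(1, idx, 1.0)
```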
In this paper, we study Multiscale Vision Transformers (MViT) as a unified architecture for image and video classification as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture in five sizes and evaluate it on ImageNet classification, COCO detection, and Kinetics video recognition, where it outperforms prior work. We further compare MViT's pooling attention to window attention mechanisms, where it outperforms the latter in accuracy/compute. Without bells and whistles, MViT achieves state-of-the-art performance in three domains: 88.8% accuracy on ImageNet classification, 56.1 box AP on COCO object detection, and 86.1% on Kinetics-400 video classification. Code and models will be publicly available.
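The residual pooling connection can be illustrated with a small sketch of pooling attention: queries, keys, and values are pooled to a coarser token grid, and the pooled queries are added back to the attention output. The decomposed relative position terms are omitted, and the pooling strides are assumptions rather than the paper's configuration.

```python
# Illustrative pooling attention with a residual pooling connection; not the paper's code.
import torch
import torch.nn.functional as F

def pooling_attention(x, wq, wk, wv, pool):
    """x: (B, N, D) tokens on a spatial grid; `pool` reduces the token count."""
    q = pool(wq(x))                       # pooled queries -> output resolution shrinks
    k, v = pool(wk(x)), pool(wv(x))       # pooled keys/values cut attention cost
    attn = (q @ k.transpose(1, 2) / q.size(-1) ** 0.5).softmax(dim=-1)
    return attn @ v + q                   # "+ q" is the residual pooling connection

# Toy usage with an average-pooling operator over the token axis.
B, N, D = 2, 64, 96
lin = lambda: torch.nn.Linear(D, D)
pool = lambda t: F.avg_pool1d(t.transpose(1, 2), kernel_size=4, stride=4).transpose(1, 2)
out = pooling_attention(torch.randn(B, N, D), lin(), lin(), lin(), pool)   # (B, 16, D)
```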
We introduce PyTorchVideo, an open-source deep learning library that provides a rich set of modular, efficient, and reproducible components for a variety of video understanding tasks, including classification, detection, self-supervised learning, and low-level processing. The library covers a full stack of video understanding tools, including multimodal data loading, transformations, and models that reproduce state-of-the-art performance. PyTorchVideo further supports hardware acceleration, enabling real-time inference on mobile devices. The library is based on PyTorch and can be used by any training framework, for example PyTorchLightning, PySlowFast, or Classy Vision. PyTorchVideo is available at https://pytorchvideo.org/
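A small usage sketch, assuming the torch.hub entry point shown in the library's public tutorials; the model name, clip shape, and pretrained weights are assumptions to check against https://pytorchvideo.org/ rather than guaranteed API.

```python
# Hypothetical usage: load a video model via torch.hub and classify a random clip.
import torch

# Model name and hub repo taken from the library's tutorials; verify against the docs.
model = torch.hub.load("facebookresearch/pytorchvideo", "slow_r50", pretrained=True)
model = model.eval()

clip = torch.randn(1, 3, 8, 256, 256)    # (batch, channels, frames, height, width)
with torch.no_grad():
    logits = model(clip)                  # Kinetics-400 class scores
print(logits.shape)                       # expected: torch.Size([1, 400])
```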
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models. Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages hierarchically expand the channel capacity while reducing the spatial resolution. This creates a multiscale pyramid of features, with early layers operating at high spatial resolution to model simple low-level visual information, and deeper layers operating at spatially coarse resolution on complex, high-dimensional features. We evaluate this fundamental architectural prior for modeling the dense nature of visual signals on a variety of video recognition tasks, where it outperforms concurrent vision transformers that rely on large-scale external pre-training and are 5-10× more costly in computation and parameters. We further remove the temporal dimension and apply our model to image classification, where it outperforms prior work on vision transformers. Code is available at: https://github.com/facebookresearch/SlowFast.
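The stage schedule can be illustrated schematically: channel capacity grows while the spatial token grid shrinks from stage to stage. The widths and grid sizes below are illustrative, not the published MViT configuration.

```python
# Schematic multiscale stage schedule; the exact widths/grids are assumptions.
stages = [
    # (channels, spatial grid)   early: high resolution, few channels
    (96,  (56, 56)),
    (192, (28, 28)),
    (384, (14, 14)),
    (768, (7, 7)),               # late: coarse resolution, high-dimensional features
]
for i, (c, (h, w)) in enumerate(stages, start=1):
    print(f"stage {i}: {h}x{w} tokens, {c} channels -> {h * w * c} activations per frame")
```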
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning [29] as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.
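A condensed sketch in the spirit of the paper's pseudocode: a momentum-updated key encoder and a queue of past keys serve as the dictionary for an InfoNCE loss. The momentum, temperature, and queue size are illustrative hyperparameters.

```python
# Condensed MoCo-style training step; hyperparameters and names are illustrative.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):
    """Key encoder is a slow moving average of the query encoder."""
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)

def moco_step(x_q, x_k, f_q, f_k, queue, t=0.07):
    q = F.normalize(f_q(x_q), dim=1)                 # queries (B, D), gradients flow
    with torch.no_grad():
        momentum_update(f_q, f_k)
        k = F.normalize(f_k(x_k), dim=1)             # keys (B, D), no gradients
    l_pos = (q * k).sum(dim=1, keepdim=True)         # (B, 1) positive logits
    l_neg = q @ queue.t()                            # (B, K) negatives from the queue
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    loss = F.cross_entropy(logits, labels)
    queue = torch.cat([k, queue], dim=0)[: queue.size(0)]   # enqueue new keys, drop oldest
    return loss, queue
```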
To understand the world, we humans constantly need to relate the present to the past and put events in context. In this paper, we enable existing video models to do the same. We propose a long-term feature bank (supportive information extracted over the entire span of a video) to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds. Our experiments demonstrate that augmenting 3D convolutional networks with a long-term feature bank yields state-of-the-art results on three challenging video datasets: AVA, EPIC-Kitchens, and Charades. Code is available at https://github.com/facebookresearch/video-long-term-feature-banks
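A generic cross-attention sketch of the feature-bank idea: features of the current short clip attend over a bank of features precomputed for the whole video, and the attended long-term context is concatenated back. This stands in for, but is not, the paper's exact feature bank operator.

```python
# Generic feature-bank operator sketch; shapes and the fusion choice are assumptions.
import torch

def feature_bank_attend(clip_feats, bank):
    """clip_feats: (B, N, D) features of the current 2-5s clip;
       bank:       (B, M, D) features extracted over the entire video."""
    attn = (clip_feats @ bank.transpose(1, 2) / clip_feats.size(-1) ** 0.5).softmax(dim=-1)
    context = attn @ bank                              # (B, N, D) long-term context
    return torch.cat([clip_feats, context], dim=-1)    # fuse short- and long-term information

out = feature_bank_attend(torch.randn(2, 16, 256), torch.randn(2, 120, 256))   # (2, 16, 512)
```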