While today's video recognition systems parse snapshots or short clips accurately, they cannot connect the dots and reason across a longer range of time yet. Most existing video architectures can only process <5 seconds of a video without hitting the computation or memory bottlenecks. In this paper, we propose a new strategy to overcome this challenge. Instead of trying to process more frames at once like most existing methods, we propose to process videos in an online fashion and cache "memory" at each iteration. Through the memory, the model can reference prior context for long-term modeling, with only a marginal cost. Based on this idea, we build MeMViT, a Memory-augmented Multiscale Vision Transformer, that has a temporal support 30x longer than existing models with only 4.5% more compute; traditional methods need >3,000% more compute to do the same. On a wide range of settings, the increased temporal support enabled by MeMViT brings large gains in recognition accuracy consistently. MeMViT obtains state-of-the-art results on the AVA, EPIC-Kitchens-100 action classification, and action anticipation datasets. Code and models are available at https://github.com/facebookresearch/memvit.
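A minimal sketch of the memory-caching idea described above, assuming a PyTorch setting: keys and values from previously processed clips are kept (detached) and prepended at each attention step, so the current clip can reference prior context at small extra cost. The `MemoryAttention` class and its cache handling are illustrative names, not MeMViT's actual implementation.

```python
import torch
import torch.nn as nn


class MemoryAttention(nn.Module):
    """Single-head attention whose keys/values are extended with a cache of
    (detached) keys/values from previously processed clips."""

    def __init__(self, dim: int, mem_size: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.mem_size = mem_size          # how many past clips to keep
        self.mem_k, self.mem_v = [], []   # "memory" cached across iterations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) for the *current* clip only
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        if self.mem_k:                    # prepend cached context (no grad)
            k = torch.cat(self.mem_k + [k], dim=1)
            v = torch.cat(self.mem_v + [v], dim=1)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = self.proj(attn @ v)
        # cache current keys/values, detached so no gradient flows into the past
        self.mem_k = (self.mem_k + [k[:, -x.shape[1]:].detach()])[-self.mem_size:]
        self.mem_v = (self.mem_v + [v[:, -x.shape[1]:].detach()])[-self.mem_size:]
        return out


layer = MemoryAttention(dim=96)
for clip in torch.randn(8, 2, 16, 96):    # 8 consecutive clips of 16 tokens each
    features = layer(clip)                # each step also sees up to 4 past clips
```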
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models. Multiscale Transformers have several channel-resolution scale stages. Starting from the input resolution and a small channel dimension, the stages hierarchically expand the channel capacity while reducing the spatial resolution. This creates a multiscale pyramid of features with early layers operating at high spatial resolution to model simple low-level visual information, and deeper layers at spatially coarse, but complex, high-dimensional features. We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks where it outperforms concurrent vision transformers that rely on large scale external pre-training and are 5-10× more costly in computation and parameters. We further remove the temporal dimension and apply our model for image classification where it outperforms prior work on vision transformers. Code is available at: https://github.com/facebookresearch/SlowFast.
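A minimal sketch of the pooling-attention idea behind the multiscale stages, assuming a PyTorch setting: keys and values are pooled to a coarser resolution so that deeper stages work on fewer, higher-dimensional tokens. The use of average pooling over a flattened token axis is a simplification for illustration, not the paper's exact operator.

```python
import torch
import torch.nn as nn


class PoolingAttention(nn.Module):
    """Attention in which keys/values are pooled to a coarser resolution."""

    def __init__(self, dim: int, kv_stride: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.pool = nn.AvgPool1d(kernel_size=kv_stride, stride=kv_stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # pool keys/values along the token axis -> coarser "resolution"
        k = self.pool(k.transpose(1, 2)).transpose(1, 2)
        v = self.pool(v.transpose(1, 2)).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return self.proj(attn @ v)


x = torch.randn(2, 64, 96)                # 64 tokens at the current stage
print(PoolingAttention(96)(x).shape)      # torch.Size([2, 64, 96])
```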
To understand the world, we humans constantly need to relate the present to the past, and put events in context. In this paper, we enable existing video models to do the same. We propose a long-term feature bank (supportive information extracted over the entire span of a video) to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds. Our experiments demonstrate that augmenting 3D convolutional networks with a long-term feature bank yields state-of-the-art results on three challenging video datasets: AVA, EPIC-Kitchens, and Charades. Code is available online at https://github.com/facebookresearch/video-long-term-feature-banks.
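A minimal sketch of a feature-bank operator in the spirit described above, assuming a PyTorch setting: features of a short clip cross-attend to features precomputed over the whole video and are augmented with the attended context. The class and variable names are illustrative, not the paper's released implementation.

```python
import torch
import torch.nn as nn


class FeatureBankOperator(nn.Module):
    """Cross-attention from short-term clip features to a long-term bank."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, clip_feats: torch.Tensor, bank: torch.Tensor) -> torch.Tensor:
        # clip_feats: (batch, n_clip, dim); bank: (batch, n_bank, dim),
        # where the bank holds features precomputed over the entire video.
        q, k, v = self.q(clip_feats), self.k(bank), self.v(bank)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        context = attn @ v
        return clip_feats + context        # augment the short-term features


clip = torch.randn(2, 10, 256)             # features of a short (seconds-long) clip
bank = torch.randn(2, 120, 256)            # features covering the whole video
augmented = FeatureBankOperator(256)(clip, bank)
```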
In this paper, we study Multiscale Vision Transformers (MViT) as a unified architecture for image and video classification, as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture in five sizes and evaluate it for ImageNet classification, COCO detection and Kinetics video recognition, where it outperforms prior work. We further compare MViT's pooling attention to window attention mechanisms, and find that it outperforms the latter in accuracy/compute. Without bells and whistles, MViT has state-of-the-art performance in three domains: 88.8% accuracy on ImageNet classification, 56.1 box AP on COCO object detection, and 86.1% on Kinetics-400 video classification. Code and models will be made publicly available.
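A minimal sketch of the residual pooling connection mentioned above, assuming a PyTorch setting: queries, keys, and values are pooled, and the pooled query is added back to the attention output. Average pooling over a flattened token axis and the chosen strides are simplifications; this is an illustration, not the released MViTv2 code.

```python
import torch
import torch.nn as nn


class ResidualPoolingAttention(nn.Module):
    def __init__(self, dim: int, q_stride: int = 2, kv_stride: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.pool_q = nn.AvgPool1d(q_stride, q_stride)
        self.pool_kv = nn.AvgPool1d(kv_stride, kv_stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = self.pool_q(q.transpose(1, 2)).transpose(1, 2)
        k = self.pool_kv(k.transpose(1, 2)).transpose(1, 2)
        v = self.pool_kv(v.transpose(1, 2)).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        # residual pooling connection: add the pooled query back
        return self.proj(attn @ v + q)


x = torch.randn(2, 64, 96)
print(ResidualPoolingAttention(96)(x).shape)   # torch.Size([2, 32, 96])
```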
We present pure-transformer based models for video classification, drawing upon the recent success of such models in image classification. Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers. In order to handle the long sequences of tokens encountered in video, we propose several efficient variants of our model that factorise the spatial and temporal dimensions of the input. Although transformer-based models are known to be effective only when large training datasets are available, we show how we can effectively regularise the model during training and leverage pre-trained image models to be able to train on comparatively small datasets. We conduct thorough ablation studies and achieve state-of-the-art results on multiple video classification benchmarks, including Kinetics 400 and 600, Epic Kitchens and Something-Something, outperforming prior methods based on deep 3D convolutional networks. To facilitate further research, we release code at https://github.com/google-research/scenic/tree/main/scenic/projects/vivit
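A minimal sketch of extracting spatio-temporal tokens with a 3D convolution and encoding them with a factorised spatial-then-temporal transformer, assuming a PyTorch setting. The tubelet size, depths, and dimensions are illustrative choices, not the released ViViT configuration.

```python
import torch
import torch.nn as nn


class FactorisedVideoEncoder(nn.Module):
    def __init__(self, dim: int = 192, tubelet=(2, 16, 16)):
        super().__init__()
        self.to_tokens = nn.Conv3d(3, dim, kernel_size=tubelet, stride=tubelet)
        spatial_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.spatial = nn.TransformerEncoder(spatial_layer, num_layers=2)
        self.temporal = nn.TransformerEncoder(temporal_layer, num_layers=2)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 3, frames, height, width)
        tok = self.to_tokens(video)                    # (B, dim, T, H, W) tubelets
        b, d, t, h, w = tok.shape
        tok = tok.permute(0, 2, 3, 4, 1)               # (B, T, H, W, dim)
        spatial_in = tok.reshape(b * t, h * w, d)      # per-frame spatial attention
        spatial_out = self.spatial(spatial_in).mean(dim=1)      # (B*T, dim)
        temporal_in = spatial_out.reshape(b, t, d)     # temporal attention over frames
        return self.temporal(temporal_in).mean(dim=1)  # (B, dim) video embedding


video = torch.randn(2, 3, 8, 64, 64)
print(FactorisedVideoEncoder()(video).shape)           # torch.Size([2, 192])
```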
Streaming video recognition reasons about objects and their actions in every frame of a video. A good streaming recognition model captures both the long-term dynamics and the short-term changes of a video. Unfortunately, in most existing methods the computational complexity grows linearly or quadratically with the length of the considered dynamics. This issue is particularly pronounced in transformer-based architectures. To address this problem, we reformulate the cross-attention in a video transformer through the lens of kernels and apply two kinds of temporal smoothing kernels: a box kernel or a Laplace kernel. The resulting streaming attention carries computation over from frame to frame and only requires a constant-time update per frame. Based on this idea, we build TeSTra, a Temporal Smoothing Transformer, which takes arbitrarily long inputs with constant caching and computation overhead. Specifically, it runs 6x faster than an equivalent sliding-window based transformer with 2,048 frames in a streaming setting. Moreover, thanks to the increased temporal span, TeSTra achieves state-of-the-art results on THUMOS'14 and EPIC-Kitchens-100, two standard benchmarks for online action detection and action anticipation. A real-time version of TeSTra outperforms all prior methods on the THUMOS'14 dataset.
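A heavily simplified sketch of the constant-time streaming idea described above, assuming a PyTorch setting: with a Laplace (exponential-decay) kernel, the long-range context can be kept as a running summary that costs O(1) per incoming frame. This only illustrates the recurrence; it is not the paper's full kernel-based cross-attention reformulation, and `StreamingContext` is an invented name.

```python
import torch
import torch.nn as nn


class StreamingContext(nn.Module):
    def __init__(self, dim: int, decay: float = 0.9):
        super().__init__()
        self.decay = decay                 # Laplace kernel == exponential decay
        self.fuse = nn.Linear(2 * dim, dim)
        self.register_buffer("memory", torch.zeros(dim))

    @torch.no_grad()
    def update(self, frame_feat: torch.Tensor) -> None:
        # O(1) per frame: old context is re-weighted, the new frame is added
        self.memory = self.decay * self.memory + (1.0 - self.decay) * frame_feat

    def forward(self, frame_feat: torch.Tensor) -> torch.Tensor:
        fused = self.fuse(torch.cat([frame_feat, self.memory], dim=-1))
        self.update(frame_feat)
        return fused


ctx = StreamingContext(dim=128)
stream = torch.randn(2048, 128)            # a long stream of per-frame features
outputs = [ctx(f) for f in stream]         # constant work per frame
```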
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.
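A minimal sketch of divided attention, assuming a PyTorch setting: within a block, attention is applied first over time (the same patch position across frames) and then over space (patches within a frame). The block below is illustrative only, not the released TimeSformer code.

```python
import torch
import torch.nn as nn


class DividedSpaceTimeBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, patches, dim)
        b, t, p, d = x.shape
        # temporal attention: each spatial location attends across frames
        xt = self.norm1(x.permute(0, 2, 1, 3).reshape(b * p, t, d))
        x = x + self.temporal(xt, xt, xt)[0].reshape(b, p, t, d).permute(0, 2, 1, 3)
        # spatial attention: each frame's patches attend to one another
        xs = self.norm2(x.reshape(b * t, p, d))
        x = x + self.spatial(xs, xs, xs)[0].reshape(b, t, p, d)
        return x


x = torch.randn(2, 8, 196, 128)              # 8 frames, 14x14 patches, dim 128
print(DividedSpaceTimeBlock(128)(x).shape)   # torch.Size([2, 8, 196, 128])
```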
Video understanding requires reasoning at multiple spatio-temporal resolutions, from short fine-grained motions to events taking place over longer durations. Although transformer architectures have recently advanced the state of the art, they do not explicitly model different spatio-temporal resolutions. To this end, we present Multiview Transformers for Video Recognition (MTV). Our model consists of separate encoders that represent different views of the input video, with lateral connections to fuse information across views. We present thorough ablation studies of our model and show that MTV consistently outperforms single-view counterparts in terms of accuracy and computational cost across a range of model sizes. Furthermore, we achieve state-of-the-art results on five standard datasets and improve even further with large-scale pre-training. We will release code and pretrained checkpoints.
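A minimal two-view sketch of the idea described above, assuming a PyTorch setting: separate encoders process tokens extracted at different spatio-temporal granularities, and a lateral cross-attention fuses information across views. The tubelet sizes, depths, and fusion direction are illustrative choices, not the released MTV configuration.

```python
import torch
import torch.nn as nn


class TwoViewEncoder(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        # view 1: short (fine) tubelets, view 2: long (coarse) tubelets
        self.tokenize_fine = nn.Conv3d(3, dim, (2, 16, 16), (2, 16, 16))
        self.tokenize_coarse = nn.Conv3d(3, dim, (8, 16, 16), (8, 16, 16))

        def make_encoder():
            layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=2)

        self.enc_fine, self.enc_coarse = make_encoder(), make_encoder()
        self.lateral = nn.MultiheadAttention(dim, 4, batch_first=True)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 3, frames, height, width)
        fine = self.tokenize_fine(video).flatten(2).transpose(1, 2)
        coarse = self.tokenize_coarse(video).flatten(2).transpose(1, 2)
        fine, coarse = self.enc_fine(fine), self.enc_coarse(coarse)
        # lateral connection: the coarse view queries the fine view
        coarse = coarse + self.lateral(coarse, fine, fine)[0]
        return torch.cat([fine.mean(1), coarse.mean(1)], dim=-1)


video = torch.randn(2, 3, 16, 64, 64)
print(TwoViewEncoder()(video).shape)        # torch.Size([2, 256])
```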
We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA. Code has been made available at: https://github.com/facebookresearch/SlowFast.
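A minimal sketch of the two-pathway idea, assuming a PyTorch setting: a Slow pathway sees few frames with many channels, a Fast pathway sees many frames with few channels, and a time-strided lateral connection fuses Fast into Slow. Layer counts and channel widths are illustrative; this is not the released SlowFast code.

```python
import torch
import torch.nn as nn


class TinySlowFast(nn.Module):
    def __init__(self, slow_ch: int = 64, fast_ch: int = 8, alpha: int = 8):
        super().__init__()
        self.alpha = alpha                                  # Fast/Slow frame-rate ratio
        self.slow = nn.Conv3d(3, slow_ch, (1, 7, 7), (1, 2, 2), (0, 3, 3))
        self.fast = nn.Conv3d(3, fast_ch, (5, 7, 7), (1, 2, 2), (2, 3, 3))
        # lateral connection: time-strided conv maps Fast features to Slow's rate
        self.lateral = nn.Conv3d(fast_ch, slow_ch, (alpha, 1, 1), (alpha, 1, 1))
        self.head = nn.Linear(slow_ch + fast_ch, 400)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 3, frames, height, width); Slow samples 1/alpha of the frames
        slow_in = video[:, :, :: self.alpha]
        s, f = self.slow(slow_in), self.fast(video)
        s = s + self.lateral(f)                             # fuse Fast -> Slow
        pooled = torch.cat([s.mean((2, 3, 4)), f.mean((2, 3, 4))], dim=1)
        return self.head(pooled)


video = torch.randn(2, 3, 32, 64, 64)
print(TinySlowFast()(video).shape)                          # torch.Size([2, 400])
```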
Most modern approaches to temporal action localization divide the problem into two parts: (i) short-term feature extraction and (ii) long-range temporal boundary localization. Due to the high GPU memory cost of processing long untrimmed videos, many methods sacrifice the representational power of the short-term feature extractor by freezing the backbone or using a small spatial video resolution. This issue becomes even worse with recent video transformer models, many of which have quadratic memory complexity. To address these issues, we propose TALLFormer, a memory-efficient and end-to-end trainable Temporal Action Localization transformer with Long-term memory. Our long-term memory mechanism eliminates the need to process hundreds of redundant video frames during each training iteration, thus significantly reducing GPU memory consumption and training time. These efficiency savings allow us (i) to use a powerful video transformer feature extractor without freezing the backbone or reducing the spatial video resolution, while (ii) maintaining long-range temporal boundary localization capability. With only RGB frames as input and no external action recognition classifier, TALLFormer outperforms the previous state of the art by a large margin, achieving an average mAP of 59.1% on THUMOS14 and 35.6% on ActivityNet-1.3. The code is publicly available at https://github.com/klauscc/tallformer.
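A minimal sketch of the long-memory training idea described above, assuming a PyTorch setting: only a small random subset of clip features is recomputed by the backbone each iteration (with gradients), while the rest are read from a cached memory that is then refreshed. The `backbone` stand-in and all names are hypothetical; this is not the released TALLFormer code.

```python
import torch
import torch.nn as nn


class LongMemoryFeatures(nn.Module):
    def __init__(self, backbone: nn.Module, num_clips: int, dim: int, k: int = 8):
        super().__init__()
        self.backbone = backbone          # expensive short-clip feature extractor
        self.k = k                        # clips recomputed (with gradients) per step
        self.register_buffer("memory", torch.zeros(num_clips, dim))

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (num_clips, channels, frames, height, width) for one long video
        idx = torch.randperm(clips.shape[0])[: self.k]
        fresh = self.backbone(clips[idx])                 # gradients flow here only
        feats = self.memory.clone()                       # cached long-term features
        feats[idx] = fresh
        with torch.no_grad():                             # refresh the cache
            self.memory[idx] = fresh.detach()
        return feats                                      # (num_clips, dim)


backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(256))   # stand-in extractor
extractor = LongMemoryFeatures(backbone, num_clips=64, dim=256)
video_clips = torch.randn(64, 3, 8, 32, 32)
features = extractor(video_clips)                             # (64, 256) for a TAL head
```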
Anticipating future actions based on video observations is an important task in video understanding, which would be useful for precautionary systems that require response time to react before an event occurs. Since the input in action anticipation consists only of pre-action frames, models do not have enough information about the target action; moreover, similar pre-action frames may lead to different futures. Consequently, any solution using existing action recognition models can only be suboptimal. Recently, researchers have proposed using a longer video context to remedy the insufficient information in pre-action intervals, as well as self-attention to query past relevant moments to address the anticipation problem. However, the indirect use of video input features as the query might be inefficient, as it only serves as a proxy for the anticipation goal. To this end, we propose an inductive attention model, which transparently uses the prior prediction as the query to derive the anticipation result by induction from past experience. Our method naturally considers the uncertainty of multiple futures via the many-to-many association. On large-scale egocentric video datasets, our model not only shows consistently better performance than the state of the art using the same backbone and is competitive with methods that employ a stronger backbone, but also achieves superior efficiency with fewer model parameters.
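A minimal sketch of the "prior prediction as query" idea described above, assuming a PyTorch setting: the model's previous class-probability estimate is embedded and used as the attention query over past features, and the attended context produces the next anticipation output. The class and loop structure are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn


class InductiveAnticipation(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.query_embed = nn.Linear(num_classes, dim)    # lift the prior prediction
        self.attend = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.classify = nn.Linear(dim, num_classes)

    def forward(self, past_feats: torch.Tensor, prior_pred: torch.Tensor):
        # past_feats: (batch, time, dim); prior_pred: (batch, num_classes)
        q = self.query_embed(prior_pred).unsqueeze(1)     # (batch, 1, dim) query
        context, _ = self.attend(q, past_feats, past_feats)
        return self.classify(context.squeeze(1)).softmax(dim=-1)


model = InductiveAnticipation(dim=128, num_classes=97)
past = torch.randn(2, 32, 128)                            # observed pre-action features
prior = torch.full((2, 97), 1.0 / 97)                     # start from a uniform prior
for _ in range(3):                                        # refine by induction
    prior = model(past, prior)
```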
Recently, video transformers have achieved great success in video understanding, exceeding CNN performance; yet existing video transformer models do not explicitly model objects, although objects are essential for recognizing actions. In this work, we present Object-Region Video Transformers (ORViT), an object-centric approach that extends video transformer layers with a block that directly incorporates object representations. The key idea is to fuse object-centric representations starting from early layers and propagate them into the transformer layers, thus affecting the spatio-temporal representations throughout the network. Our ORViT block consists of two object-level streams: appearance and dynamics. In the appearance stream, an "Object-Region Attention" module applies self-attention over the patches and object regions. In this way, visual object regions interact with the uniform patch tokens and enrich them with contextualized object information. We further model object dynamics via a separate "Object-Dynamics Module", which captures trajectory interactions, and show how to integrate the two streams. We evaluate our model on four tasks and five datasets: compositional and few-shot action recognition on Something-Else, spatio-temporal action detection on AVA, and standard action recognition on Something-Something-v2, Diving48 and EPIC-Kitchens-100. We show strong performance improvements across all tasks and datasets considered, demonstrating the value of a model that incorporates object representations into a transformer architecture. For code and pretrained models, visit the project page at https://roeiherz.github.io/orvit/
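A minimal sketch of appending object-region tokens to patch tokens, assuming a PyTorch + torchvision setting: per-frame features are ROI-pooled at the given object boxes, turned into extra tokens, and attended together with the patch tokens. The example assumes the same number of boxes per image and is illustrative only, not the released ORViT code.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class ObjectRegionAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_feats: torch.Tensor, boxes: list) -> torch.Tensor:
        # patch_feats: (batch, dim, H, W) for one frame
        # boxes: list of (n, 4) tensors, same n per image for this sketch
        b, d, h, w = patch_feats.shape
        obj = roi_align(patch_feats, boxes, output_size=1, spatial_scale=1.0)
        obj = obj.flatten(1).reshape(b, -1, d)             # (batch, objects, dim)
        patches = patch_feats.flatten(2).transpose(1, 2)   # (batch, H*W, dim)
        tokens = self.norm(torch.cat([patches, obj], dim=1))
        out, _ = self.attn(tokens, tokens, tokens)
        return out[:, : h * w]                             # enriched patch tokens


feats = torch.randn(2, 128, 14, 14)
boxes = [torch.tensor([[1.0, 1.0, 6.0, 6.0], [3.0, 2.0, 10.0, 9.0]])] * 2
print(ObjectRegionAttention(128)(feats, boxes).shape)      # torch.Size([2, 196, 128])
```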
This paper presents X3D, a family of efficient video networks that progressively expand a tiny 2D image classification architecture along multiple network axes, in space, time, width and depth. Inspired by feature selection methods in machine learning, a simple stepwise network expansion approach is employed that expands a single axis in each step, such that a good accuracy-to-complexity trade-off is achieved. To expand X3D to a specific target complexity, we perform progressive forward expansion followed by backward contraction. X3D achieves state-of-the-art performance while requiring 4.8× and 5.5× fewer multiply-adds and parameters for similar accuracy as previous work. Our most surprising finding is that networks with high spatiotemporal resolution can perform well, while being extremely light in terms of network width and parameters. We report competitive accuracy at unprecedented efficiency on video classification and detection benchmarks. Code will be available at: https://github.com/facebookresearch/SlowFast.
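A minimal sketch of the stepwise forward-expansion procedure described above (the backward contraction is omitted), written in Python under stated assumptions: `train_and_eval` and `complexity` are hypothetical stand-ins for training a candidate briefly and measuring its multiply-adds, and the expansion factor is arbitrary. This is not the authors' search code.

```python
import random

AXES = ["frames", "frame_rate", "resolution", "width", "depth", "bottleneck_width"]


def train_and_eval(config: dict) -> float:
    """Hypothetical: train a model built from `config` briefly, return accuracy."""
    return random.random()                      # placeholder score


def complexity(config: dict) -> float:
    """Hypothetical: multiply-adds of a model built from `config`."""
    return float(sum(config.values()))          # placeholder proxy


def expand(config: dict, axis: str, factor: float = 1.5) -> dict:
    grown = dict(config)
    grown[axis] = grown[axis] * factor          # grow a single axis per step
    return grown


def progressive_expansion(config: dict, target_complexity: float) -> dict:
    while complexity(config) < target_complexity:
        # try expanding each axis in isolation, keep the best-scoring candidate
        candidates = [expand(config, axis) for axis in AXES]
        config = max(candidates, key=train_and_eval)
    return config


tiny = {"frames": 1, "frame_rate": 1, "resolution": 112,
        "width": 24, "depth": 10, "bottleneck_width": 2.25}
print(progressive_expansion(tiny, target_complexity=300))
```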
Transformer models have shown great success handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated when dealing with the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with lower computational complexity.
We present Masked Feature Prediction (MaskFeat) for self-supervised pre-training of video models. Our approach first randomly masks out a portion of the input sequence and then predicts the features of the masked regions. We study five different types of features and find Histograms of Oriented Gradients (HOG), a hand-crafted feature descriptor, to work particularly well in terms of both performance and efficiency. We observe that the local contrast normalization in HOG is essential for good results, which is in line with earlier work that used HOG for visual recognition. Our approach can learn rich visual knowledge and drive large-scale transformer-based models. Without using extra model weights or supervision, MaskFeat pre-trained on unlabeled videos achieves unprecedented results: 86.7% with MViT-L on Kinetics-400, 88.3% on Kinetics-600, 80.4% on Kinetics-700, 38.8 mAP on AVA, and 75.0% on SSv2. MaskFeat further generalizes to image input, which can be interpreted as a video with a single frame, and obtains competitive results on ImageNet.
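A minimal sketch of masked feature prediction, assuming a PyTorch setting: masked patch tokens are replaced with a learned mask embedding, a small transformer encodes all tokens, and a linear head regresses a simplified HOG-like target (per-patch gradient-orientation histograms) at the masked positions. The target below is a rough stand-in for the paper's HOG features (no block normalization), and the model sizes and masking ratio are illustrative.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def hoglike_targets(gray, patch=16, bins=9):
    # gray: (B, 1, H, W) frames; returns (B, num_patches, bins) histograms
    kx = torch.tensor([[[[-1.0, 0.0, 1.0]]]])
    gx = F.conv2d(gray, kx, padding=(0, 1))
    gy = F.conv2d(gray, kx.transpose(2, 3), padding=(1, 0))
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    ang = torch.atan2(gy, gx) % math.pi                      # unsigned orientation
    idx = (ang / math.pi * bins).long().clamp(max=bins - 1)
    b, _, h, w = gray.shape
    hist = torch.zeros(b, bins, h, w).scatter_add_(1, idx, mag)
    hist = F.avg_pool2d(hist, patch) * patch * patch          # sum per patch
    return hist.flatten(2).transpose(1, 2)


class MaskFeatSketch(nn.Module):
    def __init__(self, dim=128, patch=16, bins=9):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(1, dim, patch, patch)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, bins)

    def forward(self, gray, mask_ratio=0.4):
        tokens = self.embed(gray).flatten(2).transpose(1, 2)  # (B, N, dim)
        b, n, _ = tokens.shape
        mask = torch.rand(b, n) < mask_ratio                  # True = masked
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        pred = self.head(self.encoder(tokens))                # predict features
        target = hoglike_targets(gray, self.patch)
        return F.mse_loss(pred[mask], target[mask])           # loss on masked only


frames = torch.rand(2, 1, 64, 64)                             # grayscale frames
loss = MaskFeatSketch()(frames)
loss.backward()
```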
Most modern video recognition models are designed to operate on short video clips (e.g., 5-10 seconds in length). Because of this, it is challenging to apply such models to long movie understanding tasks, which typically require sophisticated long-range temporal reasoning. Recently introduced video transformers partially address this issue by using long-range temporal self-attention. However, due to the quadratic cost of self-attention, such models are often expensive and impractical. Instead, we propose ViS4mer, an efficient long-range video model that combines the strengths of self-attention and the recently introduced structured state-space sequence (S4) layer. Our model uses a standard transformer encoder for short-range spatio-temporal feature extraction and a multi-scale temporal S4 decoder for subsequent long-range temporal reasoning. By progressively reducing the spatio-temporal feature resolution and channel dimension at each decoder layer, ViS4mer learns complex long-range spatio-temporal dependencies in a video. Furthermore, ViS4mer is 2.63x faster and requires 8x less GPU memory than the corresponding pure self-attention-based model. In addition, ViS4mer achieves state-of-the-art results on 6 of the 9 long-form movie video classification tasks in the Long Video Understanding (LVU) benchmark. We also show that our approach generalizes successfully to other domains, achieving competitive results on the Breakfast and COIN procedural activity datasets. The code is publicly available at: https://github.com/md-mohaiminul/vis4mer.
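A minimal sketch of the encoder/decoder split described above, assuming a PyTorch setting. The `DiagonalSSM` below is a heavily simplified diagonal linear state-space layer standing in for S4 (it is not S4), and pooling between layers mirrors the multiscale-decoder idea; window sizes, depths, and dimensions are illustrative, not the released ViS4mer model.

```python
import torch
import torch.nn as nn


class DiagonalSSM(nn.Module):
    """A heavily simplified diagonal linear state-space layer (not S4 itself)."""

    def __init__(self, dim: int, state: int = 16):
        super().__init__()
        self.a = nn.Parameter(torch.rand(dim, state) * -0.5)   # log decay rates
        self.b = nn.Parameter(torch.randn(dim, state) * 0.1)
        self.c = nn.Parameter(torch.randn(dim, state) * 0.1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, length, dim); recurrent scan over the sequence
        decay = torch.exp(self.a)                               # per-channel decays
        x = torch.zeros(u.shape[0], u.shape[2], self.a.shape[1])
        ys = []
        for t in range(u.shape[1]):
            x = decay * x + self.b * u[:, t].unsqueeze(-1)      # state update
            ys.append((self.c * x).sum(-1))                     # readout
        return torch.stack(ys, dim=1)


class LongVideoModel(nn.Module):
    def __init__(self, dim: int = 128, num_classes: int = 10):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.short_range = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.long_range = nn.ModuleList([DiagonalSSM(dim) for _ in range(3)])
        self.pool = nn.AvgPool1d(2, 2)                          # halve length per stage
        self.head = nn.Linear(dim, num_classes)

    def forward(self, frame_feats: torch.Tensor, window: int = 16) -> torch.Tensor:
        b, t, d = frame_feats.shape
        # short-range: attention within non-overlapping windows of frames
        x = self.short_range(frame_feats.reshape(b * t // window, window, d))
        x = x.reshape(b, t, d)
        for ssm in self.long_range:                             # long-range, multiscale
            x = ssm(x)
            x = self.pool(x.transpose(1, 2)).transpose(1, 2)
        return self.head(x.mean(dim=1))


feats = torch.randn(2, 256, 128)                                # a long video
print(LongVideoModel()(feats).shape)                            # torch.Size([2, 10])
```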
While transformers have shown great potential for video recognition thanks to their strong ability to capture long-range dependencies, they often suffer from the high computational cost induced by self-attention over the huge number of 3D tokens in a video. In this paper, we propose a new transformer architecture, termed DualFormer, which can perform space-time attention for video recognition efficiently and effectively. Specifically, our DualFormer stratifies the full space-time attention into a dual cascade: it first learns fine-grained local space-time interactions among nearby 3D tokens, and then captures coarse-grained global dependencies between the query tokens and a coarse-grained global pyramid context. Unlike existing methods that apply space-time factorization or restrict attention computation to local windows for efficiency, our local-global stratification strategy captures both short- and long-range spatio-temporal dependencies well, while greatly reducing the number of keys and values in the attention computation to boost efficiency. Experimental results show the superiority of DualFormer over existing methods on five video benchmarks. In particular, DualFormer sets a new state of the art of 82.9%/85.2% on Kinetics-400/600 with around 1,000G inference FLOPs, which is at least 3.2x fewer than existing methods with similar performance.
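A minimal sketch of the local-then-global cascade described above (1D tokens for brevity), assuming a PyTorch setting: fine-grained attention is computed inside non-overlapping local windows, then every token attends to a coarse, pooled set of global tokens. The window size and pooling stride are illustrative, not the released DualFormer configuration.

```python
import torch
import torch.nn as nn


class LocalGlobalBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4, window: int = 16, global_stride: int = 8):
        super().__init__()
        self.window = window
        self.local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.AvgPool1d(global_stride, global_stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), tokens divisible by `window`
        b, n, d = x.shape
        # 1) fine-grained attention inside each local window
        loc = x.reshape(b * n // self.window, self.window, d)
        x = x + self.local(loc, loc, loc)[0].reshape(b, n, d)
        # 2) coarse-grained attention to a pooled "pyramid" of global tokens
        g = self.pool(x.transpose(1, 2)).transpose(1, 2)
        return x + self.global_attn(x, g, g)[0]


x = torch.randn(2, 128, 96)
print(LocalGlobalBlock(96)(x).shape)            # torch.Size([2, 128, 96])
```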
We propose TubeR: a simple solution for spatio-temporal video action detection. Unlike existing methods that rely on an offline actor detector or hand-designed actor-position hypotheses, we propose to directly detect an action tubelet in a video by performing action localization and recognition simultaneously from a single representation. TubeR learns a set of tubelet queries and utilizes a tubelet-attention module to model the dynamic spatio-temporal nature of a video clip, which effectively strengthens model capacity compared to using actor-position hypotheses in the spatio-temporal space. For videos containing transitional states or scene changes, we propose a context-aware classification head that exploits short-term and long-term context to strengthen action classification, and an action-switch regression head for detecting the precise temporal extent of an action. TubeR directly produces action tubelets of variable length and maintains good results even for long video clips. TubeR outperforms the previous state of the art on the commonly used action detection datasets AVA, UCF101-24 and JHMDB51-21.
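A minimal sketch of the tubelet-query idea described above, assuming a PyTorch setting: a set of learned queries is decoded against video features, and each query predicts one action class plus a box per frame (an action tubelet). The detection-transformer-style decoder, head shapes, and sizes are illustrative, not the released TubeR model.

```python
import torch
import torch.nn as nn


class TubeletDetector(nn.Module):
    def __init__(self, dim=128, num_queries=15, num_frames=8, num_classes=80):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.class_head = nn.Linear(dim, num_classes + 1)     # +1 for "no action"
        self.box_head = nn.Linear(dim, num_frames * 4)        # one box per frame

    def forward(self, video_feats: torch.Tensor):
        # video_feats: (batch, tokens, dim) from any spatio-temporal backbone
        q = self.queries.unsqueeze(0).expand(video_feats.shape[0], -1, -1)
        h = self.decoder(q, video_feats)                      # tubelet-specific features
        classes = self.class_head(h)                          # (B, queries, classes+1)
        boxes = self.box_head(h).sigmoid()                    # (B, queries, frames*4)
        return classes, boxes.reshape(*boxes.shape[:2], -1, 4)


feats = torch.randn(2, 392, 128)
classes, boxes = TubeletDetector()(feats)
print(classes.shape, boxes.shape)   # torch.Size([2, 15, 81]) torch.Size([2, 15, 8, 4])
```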
Since their introduction in 2020, Vision Transformers (ViT) have been steadily breaking records on many vision tasks and are often described as an all-purpose replacement for ConvNets, yet they are not friendly to embedded devices. In addition, recent studies show that standard ConvNets, if redesigned and trained appropriately, can compete with ViT in terms of accuracy and scalability. In this paper, we adopt the modernized structure of ConvNets to design a new backbone for action recognition. In particular, our main target is to serve industrial product deployment, such as FPGA boards that only support standard operations. Therefore, our network consists solely of 2D convolutions, without any 3D convolutions, long-range attention plug-ins, or transformer blocks. When trained with far fewer epochs (5x-10x), our backbone surpasses methods based on (2+1)D and 3D convolutions, and achieves results comparable to ViT on two benchmark datasets.
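A minimal sketch of an action backbone built only from standard 2D convolutions, in the spirit described above and assuming a PyTorch setting: frames are processed by shared 2D blocks, and temporal information is folded in by simple pooling. It is an illustration of the "2D ops only" constraint, not the paper's architecture.

```python
import torch
import torch.nn as nn


class Conv2DOnlyBackbone(nn.Module):
    def __init__(self, num_classes: int = 400, width: int = 64):
        super().__init__()
        self.per_frame = nn.Sequential(                       # deployable 2D ops only
            nn.Conv2d(3, width, 7, stride=2, padding=3), nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.BatchNorm2d(width), nn.ReLU(),
        )
        self.head = nn.Linear(width, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 3, frames, height, width)
        b, c, t, h, w = video.shape
        frames = video.transpose(1, 2).reshape(b * t, c, h, w)
        feats = self.per_frame(frames).mean(dim=(2, 3))       # spatial pooling
        feats = feats.reshape(b, t, -1).mean(dim=1)           # temporal pooling
        return self.head(feats)


video = torch.randn(2, 3, 16, 112, 112)
print(Conv2DOnlyBackbone()(video).shape)                      # torch.Size([2, 400])
```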
We present a simple approach that turns a ViT encoder into an efficient video model which works seamlessly with both image and video inputs. By sparsely sampling the inputs, the model can perform training and inference on both modalities. The model is easily scalable and can be adapted to large-scale pre-trained ViTs without requiring full fine-tuning. The model achieves state-of-the-art results, and the code will be open-sourced.
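A minimal sketch in the spirit of the approach described above, assuming a PyTorch setting: frames are sparsely sampled, encoded one by one with a frozen pre-trained image encoder, and a small trainable temporal head aggregates them, so an image is just the single-frame case. The `encoder` stand-in and all module names are hypothetical, not the paper's model.

```python
import torch
import torch.nn as nn


class SparseVideoViT(nn.Module):
    def __init__(self, image_encoder: nn.Module, dim: int, num_classes: int):
        super().__init__()
        self.encoder = image_encoder
        for p in self.encoder.parameters():                   # no full fine-tuning
            p.requires_grad = False
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video: torch.Tensor, num_samples: int = 4) -> torch.Tensor:
        # video: (batch, 3, frames, H, W); an image is just the frames == 1 case
        t = video.shape[2]
        idx = torch.linspace(0, t - 1, steps=min(num_samples, t)).long()
        frames = video[:, :, idx]                             # sparse frame sampling
        b, c, s, h, w = frames.shape
        feats = self.encoder(frames.transpose(1, 2).reshape(b * s, c, h, w))
        feats = self.temporal(feats.reshape(b, s, -1))        # lightweight temporal part
        return self.head(feats.mean(dim=1))


# stand-in image encoder producing a 192-dim embedding per frame
encoder = nn.Sequential(nn.Conv2d(3, 192, 16, 16), nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = SparseVideoViT(encoder, dim=192, num_classes=400)
print(model(torch.randn(2, 3, 32, 64, 64)).shape)             # torch.Size([2, 400])
```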