Temporal action detection (TAD) with end-to-end training often suffers from a huge demand for computing resources due to the long video duration. In this work, we propose an efficient temporal action detector (ETAD) that can be trained directly from video frames with extremely low GPU memory consumption. Our main idea is to minimize and balance the heavy computation among features and gradients in each training iteration. We propose to sequentially forward the snippet frames through the video encoder, and backward only a small but necessary portion of gradients to update the encoder. To further alleviate the computational redundancy in training, we propose to dynamically sample only a small subset of proposals during training. Moreover, various sampling strategies and ratios are studied for both the encoder and the detector. ETAD achieves state-of-the-art performance on TAD benchmarks with remarkable efficiency. On ActivityNet-1.3, ETAD reaches 38.25% average mAP after 18 hours of end-to-end training, with only 1.3 GB of memory consumption per video. Our code will be publicly released.
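As a rough illustration of the memory-saving recipe described above, the following PyTorch-style sketch forwards snippets one by one and retains gradients for only a randomly sampled fraction of them; the `encoder` module, tensor shapes, and the 10% sampling ratio are placeholders rather than ETAD's actual configuration.

```python
import torch

def encode_video(encoder, snippets, backprop_ratio=0.1):
    """Encode a long video snippet-by-snippet, keeping gradients for only a
    small random subset of snippets (illustrative sketch).

    snippets: list of tensors, each of shape (C, T, H, W) for one snippet.
    encoder:  any module mapping a (1, C, T, H, W) snippet to a (1, D) feature.
    """
    n = len(snippets)
    # Randomly choose which snippets will receive gradients this iteration.
    keep = set(torch.randperm(n)[: max(1, int(n * backprop_ratio))].tolist())

    feats = []
    for i, snip in enumerate(snippets):
        if i in keep:
            feats.append(encoder(snip.unsqueeze(0)))      # activations stored for backward
        else:
            with torch.no_grad():                          # no activations stored
                feats.append(encoder(snip.unsqueeze(0)))
    return torch.cat(feats, dim=0)                         # (n, D) features for the detector
```

Snippets processed under `torch.no_grad()` still contribute features to the detector but store no activations, which is what keeps the per-video memory footprint low.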
Most modern approaches to temporal action localization divide this problem into two parts: (i) short-term feature extraction and (ii) long-range temporal boundary localization. Due to the high GPU memory cost of processing long untrimmed videos, many methods sacrifice the representational power of the short-term feature extractor by freezing the backbone or using a small spatial video resolution. This issue becomes even worse with recent video transformer models, many of which have quadratic memory complexity. To address these issues, we propose TallFormer, a memory-efficient and end-to-end trainable temporal action localization transformer with long-term memory. Our long-term memory mechanism eliminates the need to process hundreds of redundant video frames during each training iteration, thus significantly reducing GPU memory consumption and training time. These efficiency savings allow us (i) to use a powerful video transformer feature extractor without freezing the backbone or reducing the spatial video resolution, while (ii) also maintaining long-range temporal boundary localization capability. With only RGB frames as input and no external action recognition classifier, TallFormer outperforms previous state-of-the-art methods by a large margin, achieving an average mAP of 59.1% on THUMOS14 and 35.6% on ActivityNet-1.3. The code is publicly available at https://github.com/klauscc/tallformer.
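A minimal sketch of the long-term-memory idea, assuming a per-video feature bank that has been pre-filled once and is refreshed whenever a clip is re-encoded; the class layout and update rule below are illustrative, not TallFormer's exact mechanism.

```python
import torch

class FeatureMemory:
    """Per-video cache of clip features (illustrative; not the paper's exact design).
    Assumes each video's bank was pre-filled, e.g. by an initial full pass."""
    def __init__(self):
        self.bank = {}                                   # video_id -> (num_clips, D) tensor

    def read(self, video_id):
        return self.bank[video_id]

    def write(self, video_id, clip_idx, feat):
        self.bank[video_id][clip_idx] = feat.detach()    # cache without gradients


def forward_with_memory(backbone, memory, video_id, clips, sample_idx):
    """Run the heavy backbone only on `sample_idx` clips; reuse cached features for the rest."""
    cached = memory.read(video_id)
    out = []
    for i in range(len(clips)):
        if i in sample_idx:
            f = backbone(clips[i].unsqueeze(0)).squeeze(0)   # fresh, gradient-tracked feature
            memory.write(video_id, i, f)                     # refresh the cache
            out.append(f)
        else:
            out.append(cached[i])                            # stale but cheap
    return torch.stack(out)                                  # (num_clips, D) for the TAD head
```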
Temporal action detection (TAD) is extensively studied in the video understanding community, generally following the object detection pipeline in images. However, complex designs are not uncommon in TAD, such as two-stream feature extraction, multi-stage training, complex temporal modeling, and global context fusion. In this paper, we do not aim to introduce any novel technique for TAD. Instead, we study a simple, straightforward, yet must-know baseline given the current status of complex design and low detection efficiency in TAD. In our simple baseline (termed BasicTAD), we decompose the TAD pipeline into several essential components: data sampling, backbone design, neck construction, and detection head. We extensively investigate the existing techniques in each component for this baseline and, more importantly, perform end-to-end training over the entire pipeline thanks to the simplicity of the design. As a result, this simple BasicTAD yields an astounding real-time RGB-only baseline very close to state-of-the-art methods with two-stream inputs. In addition, we further improve BasicTAD by preserving more temporal and spatial information in the network representation (termed PlusTAD). Empirical results demonstrate that our PlusTAD is very efficient and significantly outperforms previous methods on the THUMOS14 and FineAction datasets. Meanwhile, we also perform in-depth visualization and error analysis on our proposed method and try to provide more insights into the TAD problem. Our approach can serve as a strong baseline for future TAD research. The code and model will be released at https://github.com/MCG-NJU/BasicTAD.
Temporal action detection (TAD) aims to determine the semantic labels and boundaries of every action instance in an untrimmed video. Previous methods tackled this task with complicated pipelines. In this paper, we propose an end-to-end Temporal Action Detection Transformer (TadTR) with a simple set-prediction pipeline. Given a set of learnable embeddings called action queries, TadTR adaptively extracts temporal context from the video for each query and directly predicts action instances. To adapt the Transformer to TAD, we propose three improvements to enhance its locality awareness. The core is a temporal deformable attention module that selectively attends to a sparse set of key snippets in the video. A segment refinement mechanism and an actionness regression head are designed to refine the boundaries and confidence of the predicted instances, respectively. TadTR requires a lower computational cost than previous detectors while preserving remarkable performance. As a self-contained detector, it achieves state-of-the-art performance on THUMOS14 (56.7% mAP) and HACS Segments (32.09% mAP). Combined with an extra action classifier, it obtains 36.75% mAP on ActivityNet-1.3. Our code is available at https://github.com/xlliu7/tadtr.
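The set-prediction pipeline can be sketched as follows; for brevity, a standard `nn.TransformerDecoder` stands in for TadTR's temporal deformable attention, and the dimensions, query count, and head layout are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class SetPredictionTADHead(nn.Module):
    """DETR-style set prediction for TAD (sketch). A vanilla transformer decoder
    replaces TadTR's temporal deformable attention; sizes are illustrative."""
    def __init__(self, dim=256, num_queries=40, num_classes=20):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)           # learnable action queries
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.cls_head = nn.Linear(dim, num_classes + 1)         # +1 for "no action"
        self.seg_head = nn.Linear(dim, 2)                       # (center, width) in [0, 1]

    def forward(self, snippet_feats):                           # (B, T, dim) encoded video
        B = snippet_feats.size(0)
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)  # (B, Q, dim)
        h = self.decoder(q, snippet_feats)                      # queries attend to the video
        return self.cls_head(h), self.seg_head(h).sigmoid()     # class logits, segments
```

Each query directly outputs one candidate instance, so no anchors, proposal generation, or NMS-heavy post-processing is required in this simplified view.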
Temporal action localization plays an important role in video analysis, aiming to localize and classify actions in untrimmed videos. Previous methods usually predict actions on a feature space of a single temporal scale. However, the temporal features at low-level scales lack sufficient semantics for action classification, while high-level scales cannot provide rich details of the action boundaries. To address this issue, we propose to predict actions on a feature space of multiple temporal scales. Specifically, we use refined feature pyramids of different scales to pass semantics from high-level scales to low-level scales. In addition, to establish the long temporal scale of the whole video, we use a spatial-temporal transformer encoder to capture the long-range dependencies among video frames. Then, the refined features with long-range dependencies are fed into a classifier for coarse action prediction. Finally, to further improve the prediction accuracy, we propose a frame-level self-attention module to refine the classification and boundaries of each action instance. Extensive experiments show that the proposed method outperforms state-of-the-art approaches on the THUMOS14 dataset and achieves comparable performance on the ActivityNet1.3 dataset. Compared with A2Net (TIP20, Avg{0.3:0.7}), Sub-Action (CSVT2022, Avg{0.1:0.5}), and AFSD (CVPR21, Avg{0.3:0.7}), the proposed method improves the performance by 12.6%, 17.4%, and 2.2%, respectively.
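The high-to-low semantic propagation resembles a standard top-down feature pyramid applied along the temporal axis. Below is a generic 1D sketch under that assumption; the channel sizes and the simple add-then-smooth fusion rule are placeholders, not the paper's exact refinement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalFPN(nn.Module):
    """Generic top-down 1D feature pyramid: semantics from coarse (high-level)
    scales are upsampled and fused into finer scales (illustrative sketch)."""
    def __init__(self, in_dims=(256, 512, 1024), out_dim=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv1d(d, out_dim, 1) for d in in_dims)
        self.smooth = nn.ModuleList(nn.Conv1d(out_dim, out_dim, 3, padding=1) for _ in in_dims)

    def forward(self, feats):                  # list of (B, C_i, T_i), ordered fine -> coarse
        lat = [l(f) for l, f in zip(self.lateral, feats)]
        out = [lat[-1]]                        # start from the coarsest level
        for i in range(len(lat) - 2, -1, -1):  # propagate semantics downward
            up = F.interpolate(out[0], size=lat[i].shape[-1],
                               mode="linear", align_corners=False)
            out.insert(0, lat[i] + up)
        return [s(o) for s, o in zip(self.smooth, out)]   # refined pyramid, fine -> coarse
```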
Existing temporal action detection (TAD) methods rely on generating an overwhelmingly large number of proposals per video. This leads to complex model designs due to proposal generation and/or per-proposal action instance evaluation, and consequently high computational cost. In this work, we propose, for the first time, a proposal-free temporal action detection model with global segmentation masks (TAGS). Our core idea is to learn a global segmentation mask of each action instance jointly over the full video length. The TAGS model differs significantly from conventional proposal-based methods: by focusing on global temporal representation learning, it directly detects local start and end points of actions without any proposals. Further, by modeling TAD holistically rather than locally at the individual proposal level, TAGS needs a simpler model architecture with lower computational cost. Extensive experiments show that despite its simpler design, TAGS outperforms existing TAD methods and achieves new state-of-the-art performance on two benchmarks. Importantly, it is about 20x faster to train and more efficient at inference. Our PyTorch implementation of TAGS is available at https://github.com/sauradip/tags.
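Reading action instances off a global segmentation mask can be done by thresholding per-class snippet scores and taking contiguous runs, as in the hypothetical decoder below; the threshold and the mean-score confidence are illustrative choices, not TAGS' exact post-processing.

```python
import numpy as np

def instances_from_mask(mask, threshold=0.5):
    """Decode action instances from a global segmentation mask (sketch).

    mask: (num_classes, T) array of per-snippet foreground scores.
    Returns a list of (class_id, start_idx, end_idx, score) tuples.
    """
    instances = []
    for c in range(mask.shape[0]):
        fg = mask[c] >= threshold
        t = 0
        while t < len(fg):
            if fg[t]:
                start = t
                while t < len(fg) and fg[t]:
                    t += 1
                score = float(mask[c, start:t].mean())
                instances.append((c, start, t, score))   # one contiguous run = one instance
            else:
                t += 1
    return instances
```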
Data redundancy is ubiquitous in the inputs and intermediate results of deep neural networks (DNNs). It offers many significant opportunities for improving DNN performance and efficiency and has been explored in a large body of work. These studies are scattered across many venues over several years. The targets they focus on range from images to videos and text, and the techniques they use to detect and exploit data redundancy also differ in many aspects. There has been no systematic examination and summary of these efforts, making it difficult for researchers to gain a comprehensive view of prior work, the state of the art, the differences and shared principles, and the areas and directions yet to be explored. This article tries to fill that void. It surveys hundreds of papers on the topic, introduces a novel taxonomy to put the various techniques into a single categorization framework, offers a comprehensive description of the main methods used for exploiting data redundancy to improve multiple kinds of DNNs, and points out a set of research opportunities for future exploration.
The task of action detection aims at inferring both the action category and the start and end moments of each action instance in a long, untrimmed video. Although vision transformers have driven recent advances in video understanding, designing an efficient architecture for action detection is non-trivial because of the prohibitively long video clips involved. To this end, we present an efficient hierarchical spatio-temporal pyramid transformer (STPT) for action detection, building upon the fact that the early self-attention layers in transformers still focus on local patterns. Specifically, we propose to use local window attention to encode rich local spatio-temporal representations in the early stages, while applying global attention modules to capture long-range spatio-temporal dependencies in the later stages. In this way, our STPT can encode both locality and dependency with largely reduced redundancy, achieving a promising trade-off between accuracy and efficiency. For example, with only RGB input, the proposed STPT achieves 53.6% mAP on THUMOS14, surpassing the I3D+AFSD RGB model by over 10%, and performs favorably against methods that use extra optical-flow features while requiring 31% fewer GFLOPs, serving as an effective and efficient end-to-end transformer framework for action detection.
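The early-stage local window attention can be illustrated with a 1D temporal simplification: tokens are grouped into non-overlapping windows and self-attention is computed within each window only. The sketch below assumes the sequence length is divisible by the window size; STPT's actual windows are spatio-temporal and its layer design differs.

```python
import torch
import torch.nn as nn

class LocalWindowAttention1D(nn.Module):
    """Self-attention restricted to non-overlapping temporal windows (sketch).
    A 1D simplification of the local spatio-temporal windows used in early stages."""
    def __init__(self, dim=256, heads=8, window=16):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, T, dim), T divisible by self.window
        B, T, D = x.shape
        w = self.window
        xw = x.reshape(B * T // w, w, D)       # each window attends only within itself
        out, _ = self.attn(xw, xw, xw)
        return out.reshape(B, T, D)
```

Restricting attention to windows keeps the cost linear in T for the early, high-resolution stages, which is the source of the efficiency gain described above.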
Temporal action proposal generation is an important yet challenging problem, since temporal proposals with rich action content are indispensable for analysing real-world videos with long duration and a high proportion of irrelevant content. This problem requires methods that not only generate proposals with precise temporal boundaries, but also retrieve proposals that cover ground-truth action instances with high recall and high overlap using relatively few proposals. To address these difficulties, we introduce an effective proposal generation method, named Boundary-Sensitive Network (BSN), which adopts a "local to global" fashion. Locally, BSN first locates temporal boundaries with high probabilities, then directly combines these boundaries into proposals. Globally, with the Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating the confidence of whether a proposal contains an action within its region. We conduct experiments on two challenging datasets, ActivityNet-1.3 and THUMOS14, where BSN outperforms other state-of-the-art temporal action proposal generation methods with high recall and high temporal precision. Finally, further experiments demonstrate that by combining existing action classifiers, our method significantly improves the state-of-the-art temporal action detection performance.
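A simplified version of the local-to-global procedure is sketched below: locally confident start/end snippets are paired into candidate proposals, and each proposal is then scored globally. Here confidence is approximated by mean actionness inside the proposal, whereas BSN evaluates a learned Boundary-Sensitive Proposal feature; the thresholds and scoring are illustrative only.

```python
import numpy as np

def generate_proposals(start_prob, end_prob, action_prob, thr=0.5, max_len=100):
    """Local-to-global proposal generation in the spirit of BSN (simplified sketch).

    start_prob, end_prob, action_prob: (T,) per-snippet probabilities.
    Confidence is approximated by mean actionness inside the proposal;
    BSN instead learns it from a Boundary-Sensitive Proposal feature.
    """
    starts = np.where(start_prob >= thr)[0]           # locally confident start boundaries
    ends = np.where(end_prob >= thr)[0]               # locally confident end boundaries
    proposals = []
    for s in starts:
        for e in ends:
            if s < e <= s + max_len:                  # combine boundaries into a proposal
                conf = float(action_prob[s:e + 1].mean())
                proposals.append((int(s), int(e), conf))
    proposals.sort(key=lambda p: p[2], reverse=True)  # retrieve by confidence
    return proposals
```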
Temporal action localization has long been studied in computer vision. Existing state-of-the-art action localization methods divide each video into multiple action units (i.e., proposals in two-stage methods and segments in one-stage methods) and then perform action recognition or regression on each of them individually, without explicitly exploiting their relations during learning. In this paper, we claim that the relations between action units play an important role in action localization, and that a more powerful action detector should not only capture the local content of each action unit but also allow a wider field of view with the context related to it. To this end, we propose a general graph convolutional module (GCM) that can be easily plugged into existing action localization methods, including both two-stage and one-stage paradigms. Specifically, we first construct a graph in which each action unit is represented as a node and the relation between two action units as an edge. Here, we use two types of relations: one for capturing the temporal connections between different action units, and the other for characterizing their semantic relationship. Particularly for the temporal connections in two-stage methods, we further explore two different kinds of edges, one connecting overlapping action units and the other connecting surrounding but disjoint units. On the constructed graph, we apply graph convolutional networks (GCNs) to model the relations among different action units, which is able to learn more informative representations to enhance action localization. Experimental results show that our GCM consistently improves the performance of existing action localization methods, including two-stage methods (e.g., CBR and R-C3D) and one-stage methods (e.g., D-SSAD), verifying the generality and effectiveness of our GCM.
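The general recipe (action units as nodes, overlap/surrounding relations as edges, then graph convolution) can be sketched as follows; the IoU and distance thresholds and the single plain GCN layer are illustrative, not GCM's exact edge definitions or architecture.

```python
import torch

def temporal_iou(a, b):
    """IoU between 1D segments a = (s1, e1) and b = (s2, e2)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def build_adjacency(segments, iou_thr=0.3, dist_thr=1.0):
    """Connect overlapping units and surrounding-but-disjoint units (sketch)."""
    n = len(segments)
    adj = torch.eye(n)                                  # self-loops
    for i in range(n):
        for j in range(i + 1, n):
            iou = temporal_iou(segments[i], segments[j])
            gap = max(segments[i][0], segments[j][0]) - min(segments[i][1], segments[j][1])
            if iou >= iou_thr or (iou == 0.0 and gap <= dist_thr):
                adj[i, j] = adj[j, i] = 1.0
    return adj

def gcn_layer(x, adj, weight):
    """One plain GCN layer: degree-normalized aggregation followed by a linear map."""
    deg = adj.sum(dim=1, keepdim=True)
    return torch.relu((adj / deg) @ x @ weight)         # (N, D_out) updated node features
```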
Detecting actions in untrimmed videos is an important yet challenging task. In this paper, we present the structured segment network (SSN), a novel framework which models the temporal structure of each action instance via a structured temporal pyramid. On top of the pyramid, we further introduce a decomposed discriminative model comprising two classifiers, respectively for classifying actions and determining completeness. This allows the framework to effectively distinguish positive proposals from background or incomplete ones, thus leading to both accurate recognition and localization. These components are integrated into a unified network that can be efficiently trained in an end-to-end fashion. Additionally, a simple yet effective temporal action proposal scheme, dubbed temporal actionness grouping (TAG) is devised to generate high quality action proposals. On two challenging benchmarks, THUMOS14 and ActivityNet, our method remarkably outperforms previous state-of-the-art methods, demonstrating superior accuracy and strong adaptivity in handling actions with various temporal structures.
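The decomposed discriminative model can be summarized as scoring each proposal with an activity classifier on the proposal content and a class-wise completeness classifier on the structured pyramid features; combining the two by a product, as below, is a simplification of SSN's scoring, and the background-at-index-0 convention is an assumption.

```python
import torch

def score_proposals(proposal_feats, pyramid_feats, activity_cls, completeness_cls):
    """Decomposed proposal scoring in the spirit of SSN (simplified sketch).

    proposal_feats: (N, D)  features of the proposal span, for the activity classifier.
    pyramid_feats:  (N, D') structured-pyramid features over the augmented span,
                    for the class-wise completeness classifier.
    """
    p_act = torch.softmax(activity_cls(proposal_feats), dim=-1)   # (N, K+1), index 0 = background
    p_comp = torch.sigmoid(completeness_cls(pyramid_feats))       # (N, K) completeness per class
    return p_act[:, 1:] * p_comp   # high only when classified as an action AND judged complete
```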
Self-attention based Transformer models have shown impressive results for image classification and object detection, and have recently been used for video understanding. Inspired by this success, we investigate the application of Transformer networks for temporal action localization in videos. To this end, we present ActionFormer, a simple yet powerful model that identifies actions in time and recognizes their categories in a single shot, without using action proposals or relying on pre-defined anchor windows. ActionFormer combines a multiscale feature representation with local self-attention, and uses a lightweight decoder to classify every moment in time and estimate the corresponding action boundaries. We show that this orchestrated design yields major improvements over prior works. Without bells and whistles, ActionFormer achieves 71.0% mAP at tIoU=0.5 on THUMOS14, outperforming the best prior model by 14.1 absolute percentage points. Further, ActionFormer demonstrates strong results on ActivityNet 1.3 (36.6% average mAP) and EPIC-Kitchens 100 (+13.5% average mAP over prior works). Our code is available at http://github.com/happyharrycn/actionformer_release.
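The per-moment prediction can be sketched as a small anchor-free head on one pyramid level: every temporal position is classified and regresses its distances to the action's start and end. Layer sizes and the decoding below are illustrative assumptions, not ActionFormer's exact head.

```python
import torch
import torch.nn as nn

class AnchorFreeTADHead(nn.Module):
    """Per-moment classification plus boundary-distance regression (sketch).
    Layer sizes and decoding are illustrative, not ActionFormer's exact head."""
    def __init__(self, dim=256, num_classes=20):
        super().__init__()
        self.cls = nn.Conv1d(dim, num_classes, 3, padding=1)
        self.reg = nn.Conv1d(dim, 2, 3, padding=1)          # distances to (start, end)

    def forward(self, feats, stride=1):
        """feats: (B, dim, T) features of one pyramid level with the given stride."""
        cls_logits = self.cls(feats)                         # (B, K, T): action score per moment
        dist = torch.relu(self.reg(feats)) * stride          # (B, 2, T): offsets in frame units
        t = torch.arange(feats.size(-1), device=feats.device, dtype=feats.dtype) * stride
        start = t.view(1, 1, -1) - dist[:, 0:1]              # moment minus left offset
        end = t.view(1, 1, -1) + dist[:, 1:2]                # moment plus right offset
        return cls_logits, torch.cat([start, end], dim=1)    # (B, K, T) and (B, 2, T)
```

Applying the same head across pyramid levels with different strides covers short and long actions without any anchors.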
Temporal action localization (TAL) is the task of identifying a set of actions in a video, which involves localizing the start and end frames and classifying each action instance. Existing methods address this task by using pre-defined anchor windows or heuristic bottom-up boundary-matching strategies, which are a major bottleneck in inference time. In addition, a main challenge is the inability to capture long-range actions due to a lack of global contextual information. In this paper, we present an anchor-free framework, termed HTNet, which predicts a set of <start time, end time, class> triplets from a video based on a Transformer architecture. After predicting coarse boundaries, we refine them through a background feature sampling (BFS) module and a hierarchical Transformer, which enables our model to aggregate global contextual information and effectively exploit the semantic relations inherent in a video. We demonstrate how our method localizes accurate action instances and achieves state-of-the-art performance on two TAL benchmark datasets: THUMOS14 and ActivityNet 1.3.
Temporal action proposal generation is a challenging and promising task which aims to locate temporal regions in real-world videos where actions or events may occur. Current bottom-up proposal generation methods can generate proposals with precise boundaries, but cannot efficiently generate sufficiently reliable confidence scores for retrieving proposals. To address these difficulties, we introduce the Boundary-Matching (BM) mechanism to evaluate the confidence scores of densely distributed proposals, which denotes a proposal as a matching pair of starting and ending boundaries and combines all densely distributed BM pairs into the BM confidence map. Based on the BM mechanism, we propose an effective, efficient and end-to-end proposal generation method, named Boundary-Matching Network (BMN), which generates proposals with precise temporal boundaries as well as reliable confidence scores simultaneously. The two branches of BMN are jointly trained in a unified framework. We conduct experiments on two challenging datasets, THUMOS-14 and ActivityNet-1.3, where BMN shows significant performance improvement with remarkable efficiency and generalizability. Further, combined with an existing action classifier, BMN can achieve state-of-the-art temporal action detection performance.
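A simplified construction of the BM confidence map is sketched below: for every (duration, start) pair, snippet features inside the candidate segment are pooled and scored. BMN's actual boundary-matching layer uses learned sampling weights and a convolutional evaluator, so the average pooling, MLP head, and nested loops here are purely illustrative.

```python
import torch
import torch.nn as nn

class SimpleBMMap(nn.Module):
    """Dense (duration x start) confidence map in the spirit of BMN (sketch).
    BMN's boundary-matching layer uses learned sampling masks; this version
    simply average-pools features over each candidate segment."""
    def __init__(self, dim=256, max_duration=64):
        super().__init__()
        self.max_duration = max_duration
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, feats):                              # feats: (B, T, dim)
        B, T, _ = feats.shape
        conf = feats.new_zeros(B, self.max_duration, T)
        # Nested loops kept for clarity; real implementations vectorize this.
        for d in range(1, self.max_duration + 1):
            for s in range(T - d + 1):
                seg = feats[:, s:s + d].mean(dim=1)        # (B, dim) pooled segment feature
                conf[:, d - 1, s] = self.score(seg).squeeze(-1).sigmoid()
        return conf                                        # conf[b, d-1, s]: proposal [s, s+d)
```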
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance as good as previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong Deformable DETR baseline by a significant margin (2%-4% mAP) on the ImageNet VID dataset, and TransVOD yields performance comparable to the state of the art on this benchmark. Then, we present two improved versions of TransVOD: TransVOD++ and TransVOD Lite. The former fuses object-level information into the object queries via dynamic convolution, while the latter models entire video clips as the output to speed up inference. We give a detailed analysis of all three models in the experiment part. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU. Code and models will be available for further research.
It has been found that temporal action proposal generation, which aims to discover temporal action instances with their start and end boundaries in untrimmed videos, can largely benefit from proper exploitation of temporal and semantic context. The latest efforts are devoted to considering temporal context and similarity-based semantic context through self-attention modules. However, they still suffer from cluttered background information and limited contextual feature learning. In this paper, we propose a novel Pyramid Region-based Slot Attention (PRSlot) module to address these issues. Instead of using similarity computation, our PRSlot module directly learns the local relations in an encoder manner and generates the representation of a local region based on attention over the input features, termed a slot. Specifically, at the input segment level, the PRSlot module takes the target segment as the query and its surrounding region as the key, and then generates a slot representation for each query-key pair by aggregating the local summarized context with a parallel pyramid strategy. Based on the PRSlot module, we present a novel pyramid region-based slot attention network, termed PRSA-Net, to learn a unified visual representation with rich temporal and semantic context for better proposal generation. Extensive experiments are conducted on two widely adopted benchmarks, THUMOS14 and ActivityNet-1.3. Our PRSA-Net outperforms other state-of-the-art methods. In particular, we improve AR@100 from the previous best of 50.67% to 56.12% for proposal generation, and raise the mAP under 0.5 tIoU from 51.9% to 58.7% for action detection on THUMOS14. The code is available at https://github.com/handhand123/prsa-net.
Recent studies have made great progress in video matting by extending the success of trimap-based image matting to the video domain. In this paper, we push this task toward a more practical setting and propose a One-Trimap Video Matting network (OTVM) that performs video matting robustly using only one user-annotated trimap. A key of OTVM is the joint modeling of trimap propagation and alpha prediction. Starting from baseline trimap propagation and alpha prediction networks, our OTVM combines the two networks with an alpha-trimap refinement module to facilitate information flow. We also present an end-to-end training strategy to take full advantage of the joint model. Our joint modeling greatly improves the temporal stability of trimap propagation compared with previous decoupled methods. We evaluate our model on two latest video matting benchmarks, Deep Video Matting and VideoMatting108, and outperform the state-of-the-art by significant margins (MSE improvements of 56.4% and 56.7%, respectively). The source code and model are available online: https://github.com/hongje/otvm.
Temporal action detection aims to localize the boundaries of actions in videos. Current methods based on boundary matching enumerate and calculate all possible boundary matchings to generate proposals. However, these methods neglect long-range context aggregation in boundary prediction. At the same time, due to the similar semantics of adjacent matchings, local semantic aggregation of densely generated matchings cannot improve semantic richness and discrimination. In this paper, we propose an end-to-end proposal generation method named Dual Context Aggregation Network (DCAN) to aggregate context on two levels, namely the boundary level and the proposal level, for generating high-quality action proposals, thereby improving the performance of temporal action detection. Specifically, we design Multi-Path Temporal Context Aggregation (MTCA) to achieve smooth context aggregation at the boundary level and precise evaluation of boundaries. For matching evaluation, Coarse-to-fine Matching (CFM) is designed to aggregate context at the proposal level and refine the matching map from coarse to fine. We conduct extensive experiments on ActivityNet v1.3 and THUMOS-14. DCAN obtains 35.39% average mAP on ActivityNet v1.3 and reaches 54.14% mAP on THUMOS-14, demonstrating that DCAN can generate high-quality proposals and achieve state-of-the-art performance. We release the code at https://github.com/cg1177/dcan.
The main challenge of temporal action localization is to retrieve subtle human actions from various co-occurring ingredients, such as context and background, in untrimmed videos. Although prior approaches have achieved substantial progress by devising advanced action detectors, they still suffer from these co-occurring ingredients, which often dominate the actual action content in videos. In this paper, we explore two orthogonal but complementary aspects of a video snippet, namely the action features and the co-occurrence features. In particular, we develop a novel auxiliary task that decouples these two kinds of features within a video snippet and recombines them to generate a new feature representation with more salient action information for accurate action localization. We refer to our method as refactoring: it first explicitly decomposes the action content and regularizes its co-occurring features, and then synthesizes a new action-dominated video representation. Extensive experimental results and ablation studies on THUMOS14 and ActivityNet v1.3 demonstrate that our new representation, combined with a simple action detector, can significantly improve action localization performance.
Existing Temporal Action Detection (TAD) methods typically take a pre-processing step that converts an input varying-length video into a fixed-length snippet representation sequence before temporal boundary estimation and action classification. This pre-processing step temporally downsamples the video, reducing the inference resolution and hampering detection performance at the original temporal resolution. In essence, this is due to a temporal quantization error introduced during resolution downsampling and recovery. This can negatively impact TAD performance, but is largely ignored by existing methods. To address this problem, in this work we introduce a novel model-agnostic post-processing method that requires no model redesign or retraining. Specifically, we model the start and end points of action instances with a Gaussian distribution, enabling temporal boundary inference at a sub-snippet level. We further introduce an efficient Taylor-expansion based approximation, dubbed Gaussian Approximated Post-processing (GAP). Extensive experiments demonstrate that our GAP consistently improves a wide variety of pre-trained off-the-shelf TAD models on the challenging ActivityNet (+0.2%-0.7% average mAP) and THUMOS (+0.2%-0.5% average mAP) benchmarks. Such performance gains are already significant and highly comparable to those achieved by novel model designs. GAP can also be integrated with model training for further performance gains. Importantly, GAP enables lower temporal resolutions for more efficient inference, facilitating low-resource applications. The code will be available at https://github.com/sauradip/GAP
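The sub-snippet refinement can be illustrated with the standard second-order Taylor expansion of the log of an (assumed Gaussian-shaped) boundary distribution around its discrete argmax; the smoothing and exact formulation used in GAP may differ from this sketch.

```python
import numpy as np

def refine_boundary(prob, eps=1e-8):
    """Sub-snippet boundary localization from a discrete boundary distribution (sketch).

    If the boundary probability over snippets is approximately Gaussian, a
    second-order Taylor expansion of D = log p around the discrete argmax m
    gives the continuous mode:  m - D'(m) / D''(m).
    """
    m = int(np.argmax(prob))
    if m == 0 or m == len(prob) - 1:
        return float(m)                           # cannot take finite differences at the ends
    logp = np.log(np.maximum(prob, eps))
    d1 = 0.5 * (logp[m + 1] - logp[m - 1])        # first derivative (central difference)
    d2 = logp[m + 1] - 2.0 * logp[m] + logp[m - 1]   # second derivative
    if d2 >= 0:                                   # not a proper local maximum of log p
        return float(m)
    return float(m - d1 / d2)                     # refined boundary at sub-snippet precision
```

Because the refinement only post-processes predicted boundary distributions, it can be applied to any off-the-shelf TAD model without retraining, which is the model-agnostic property claimed above.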