Tracking objects over long videos effectively means solving a spectrum of problems, from short-term association for un-occluded objects to long-term association for objects that are occluded and then reappear in the scene. Methods tackling these two tasks are often disjoint and crafted for specific scenarios, and top-performing approaches are often a mix of techniques, which yields engineering-heavy solutions that lack generality. In this work, we question the need for hybrid approaches and introduce SUSHI, a unified and scalable multi-object tracker. Our approach processes long clips by splitting them into a hierarchy of subclips, which enables high scalability. We leverage graph neural networks to process all levels of the hierarchy, which makes our model unified across temporal scales and highly general. As a result, we obtain significant improvements over state-of-the-art on four diverse datasets. Our code and models will be made available.
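As a rough illustration of the hierarchical idea, the sketch below splits a clip into progressively longer subclips and applies one association model at every level; `build_hierarchy`, `associate`, and all parameters are hypothetical stand-ins, not the authors' actual code.

```python
from typing import Callable, List


def build_hierarchy(frames: List[int], leaf_len: int = 4,
                    branching: int = 2) -> List[List[List[int]]]:
    """Split a long clip into levels of progressively longer subclips."""
    clips = [frames[i:i + leaf_len] for i in range(0, len(frames), leaf_len)]
    levels = [clips]
    while len(clips) > 1:
        # Merge `branching` neighbouring subclips into one longer clip.
        clips = [sum(clips[i:i + branching], [])
                 for i in range(0, len(clips), branching)]
        levels.append(clips)
    return levels


def hierarchical_track(frames: List[int], detections, associate: Callable):
    """Run the same association model at every level of the hierarchy."""
    tracklets = detections  # level 0: individual detections
    for clips in build_hierarchy(frames):
        # Each subclip links the shorter tracklets it contains into longer
        # ones; short- and long-term association share one unified model.
        tracklets = [associate(clip, tracklets) for clip in clips]
    return tracklets
```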
Graphs offer a natural way to formulate Multiple Object Tracking (MOT) and Multiple Object Tracking and Segmentation (MOTS) within the tracking-by-detection paradigm. However, they also introduce a major challenge for learning methods, since defining models that can operate on such a structured domain is non-trivial. In this work, we exploit the classical network flow formulation of MOT to define a fully differentiable framework based on Message Passing Networks (MPNs). By operating directly on the graph domain, our method can reason globally over the entire set of detections and exploit contextual features. It then jointly predicts both the final solution to the data association problem and segmentation masks for all objects in the scene, exploiting the synergies between the two tasks. We achieve state-of-the-art results for both tracking and segmentation on several publicly available datasets. Our code is available at github.com/ocetintas/mpntrackseg.
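To make the message-passing idea concrete, here is a minimal sketch of one update step over a detection graph, in the spirit of MPN-based trackers; the module layout, dimensions, and sum aggregation are illustrative assumptions. After several such steps, a classifier on the edge embeddings would predict which edges are active, i.e., solve the data association.

```python
import torch
import torch.nn as nn


class MessagePassingStep(nn.Module):
    """One message-passing update over a tracking graph (illustrative)."""

    def __init__(self, node_dim: int, edge_dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU())
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())

    def forward(self, h_v, h_e, edges):
        # h_v: (N, node_dim) detection embeddings; h_e: (E, edge_dim) edge
        # embeddings; edges: (E, 2) indices of (earlier, later) detections.
        src, dst = edges[:, 0], edges[:, 1]
        # Edge update: fuse both endpoint embeddings with the edge state.
        h_e = self.edge_mlp(torch.cat([h_v[src], h_v[dst], h_e], dim=-1))
        # Node update: sum incoming edge messages, then transform.
        agg = torch.zeros(h_v.size(0), h_e.size(1), device=h_v.device)
        agg = agg.index_add(0, dst, h_e)
        h_v = self.node_mlp(torch.cat([h_v, agg], dim=-1))
        return h_v, h_e
```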
Existing Multiple Object Tracking (MOT) methods design complex architectures for better tracking performance. However, without a proper organization of the input information, they still fail to track robustly and suffer from frequent identity switches. In this paper, we propose two novel methods, together with a simple online Message Passing Network (MPN), to address these limitations. First, we explore different integration methods for the graph node and edge embeddings and put forward a new IoU (Intersection over Union) guided function, which improves long-term tracking and handles identity switches. Second, we introduce a hierarchical sampling strategy to construct sparser graphs, which allows the training to focus on more difficult samples. Experimental results demonstrate that a simple online MPN with these two contributions can perform better than many state-of-the-art methods. In addition, our association method generalizes well and can also improve the results of private-detection-based methods.
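The abstract does not spell out the IoU-guided function, so the sketch below only illustrates one plausible form: gating the edge embedding by the geometric overlap of the two boxes. The fusion rule and all names are assumptions.

```python
import torch


def box_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Element-wise IoU of two (N, 4) box tensors in (x1, y1, x2, y2) format."""
    lt = torch.max(a[:, :2], b[:, :2])
    rb = torch.min(a[:, 2:], b[:, 2:])
    inter = (rb - lt).clamp(min=0).prod(dim=1)
    area_a = (a[:, 2:] - a[:, :2]).prod(dim=1)
    area_b = (b[:, 2:] - b[:, :2]).prod(dim=1)
    return inter / (area_a + area_b - inter + 1e-8)


def iou_guided_edge(h_e, h_src, h_dst, boxes_src, boxes_dst):
    # Gate the appearance-driven edge embedding by geometric overlap: pairs
    # with consistent geometry trust the edge state, distant pairs fall back
    # on the node embeddings (helpful across long occlusion gaps).
    # Assumes node_dim == edge_dim for this illustration.
    alpha = box_iou(boxes_src, boxes_dst).unsqueeze(-1)
    return alpha * h_e + (1.0 - alpha) * 0.5 * (h_src + h_dst)
```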
Most (3D) multi-object tracking methods rely on appearance cues for data association. In contrast, we investigate how far we can get by encoding only geometric relationships between objects in 3D space as cues for data-driven data association. We encode 3D detections as nodes in a graph, where spatial and temporal pairwise relationships between objects are encoded via localized polar coordinates on the graph edges. This representation makes our geometric relations invariant to global transformations and smooth trajectory changes, especially under non-holonomic motion. It allows our graph neural network to learn to effectively encode temporal and spatial interactions and to fully leverage contextual and motion cues, obtaining the final scene interpretation by posing data association as edge classification. We establish a new state of the art on the nuScenes dataset and, more importantly, show that our method generalizes across different locations (Boston, Singapore, Karlsruhe) and datasets (nuScenes and KITTI).
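A minimal sketch of what a local polar edge encoding could look like for one detection pair; the exact feature set used by the paper is an assumption based on the abstract.

```python
import math


def polar_edge_feature(src: dict, dst: dict, dt: float) -> list:
    """src/dst: detections with 2D BEV position `xy` and heading `yaw`."""
    dx = dst["xy"][0] - src["xy"][0]
    dy = dst["xy"][1] - src["xy"][1]
    r = math.hypot(dx, dy)                      # relative distance
    # Angle of the displacement expressed in the source object's local frame,
    # which makes the feature invariant to global rotation and translation.
    theta = math.atan2(dy, dx) - src["yaw"]
    dyaw = dst["yaw"] - src["yaw"]              # relative heading change
    return [r, math.cos(theta), math.sin(theta),
            math.cos(dyaw), math.sin(dyaw), dt]
```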
Tracking-by-detection (TbD) has long been the dominant paradigm in multi-object tracking: objects are first detected and then associated across video frames. For the association, most models rely on motion and appearance cues. While still relying on those cues, recent approaches (e.g., attention-based ones) show an increasing demand for training data and overall complex frameworks. We claim that 1) strong cues can be obtained from little training data if certain key design choices are applied, and 2) given such strong cues, standard Hungarian-matching-based association is enough to obtain impressive results. Our main insight is to identify the key components that allow a standard re-identification network to excel at appearance-based tracking. We extensively analyze its failure cases and show that a combination of our appearance features with a simple motion model leads to strong tracking results. Our model achieves state-of-the-art performance on the MOT17 and MOT20 datasets, outperforming previous state-of-the-art trackers by up to 5.4pp in IDF1 and 4.4pp in HOTA. We will release the code and models upon acceptance of the paper.
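The claim that strong cues plus Hungarian matching suffice can be sketched as follows; the cost weighting, normalization, and threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist


def associate(track_feats, det_feats, track_ctrs_pred, det_ctrs,
              img_diag, w_app=0.7, max_cost=0.7):
    """track_feats/det_feats: (T, F)/(D, F) appearance embeddings;
    track_ctrs_pred/det_ctrs: (T, 2)/(D, 2) predicted vs. observed centres."""
    app_cost = cdist(track_feats, det_feats, metric="cosine") / 2.0  # -> [0, 1]
    mot_cost = cdist(track_ctrs_pred, det_ctrs) / img_diag           # -> ~[0, 1]
    cost = w_app * app_cost + (1.0 - w_app) * mot_cost
    rows, cols = linear_sum_assignment(cost)
    # Reject matches that are too expensive; those tracks stay unmatched.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```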
The problem of tracking multiple objects in a video sequence poses several challenging tasks. For tracking-by-detection, these include object re-identification, motion prediction, and dealing with occlusions. We present a tracker (without bells and whistles) that accomplishes tracking without specifically targeting any of these tasks; in particular, we perform no training or optimization on tracking data. To this end, we exploit the bounding box regression of an object detector to predict an object's position in the next frame, thereby converting a detector into a Tracktor. We demonstrate the potential of Tracktor and provide a new state of the art on three multi-object tracking benchmarks by extending it with a straightforward re-identification and camera motion compensation. We then analyze the performance and failure cases of several state-of-the-art tracking methods in comparison to our Tracktor. Surprisingly, none of the dedicated tracking methods are considerably better at dealing with complex tracking scenarios, namely small and occluded objects or missing detections, whereas our approach tackles most of the easy tracking scenarios. Therefore, we motivate our approach as a new tracking paradigm and point out promising future research directions. Overall, Tracktor yields superior tracking performance to any current tracking method, and our analysis exposes remaining, unsolved tracking challenges to inspire future research.
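A hedged sketch of the core Tracktor loop: the detector's regression head carries each track's box into the next frame, and leftover detections spawn new identities. The `detector.regress`/`detector.detect` interface is a hypothetical stand-in for a Faster R-CNN-style detector.

```python
from itertools import count

_next_id = count()


def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-8)


def tracktor_step(detector, frame, tracks, score_thresh=0.5, nms_iou=0.5):
    # 1) Feed last-frame track boxes through the detector's regression head,
    #    which aligns each box to the object's position in the new frame.
    boxes, scores = detector.regress(frame, [t["box"] for t in tracks])
    alive = []
    for t, box, s in zip(tracks, boxes, scores):
        if s >= score_thresh:            # confident regression: track survives
            t["box"] = box
            alive.append(t)
    # 2) Detections not covered by a surviving track spawn new identities.
    for det_box in detector.detect(frame):
        if all(iou(det_box, t["box"]) < nms_iou for t in alive):
            alive.append({"id": next(_next_id), "box": det_box})
    return alive
```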
The tracking-by-detection paradigm has become the dominant method for multi-object tracking: objects are detected in each frame and then associated across frames. However, its sequential frame-wise matching fundamentally suffers from intermediate interruptions in a video, such as object occlusions, fast camera movements, and abrupt light changes. Moreover, it typically overlooks temporal information beyond the two frames being matched. In this paper, we investigate an alternative that treats object association as clip-wise matching. Our new perspective views a single long video sequence as multiple short clips, and tracking is then performed both within and between the clips. The benefits of this new approach are twofold. First, our method is robust to tracking error accumulation and propagation, as the video chunking allows interrupted frames to be bypassed, and the short clip tracking avoids conventional, error-prone long-term track memory management. Second, multi-frame information is aggregated during the clip-wise matching, resulting in more accurate long-range track association than current frame-wise matching. Building on the state-of-the-art tracking-by-detection tracker QDTrack, we showcase how tracking performance improves with our new formulation. We evaluate our proposals on two tracking benchmarks, TAO and MOT17, which have complementary characteristics and challenges.
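A minimal sketch of the clip-wise formulation, assuming hypothetical `track_clip` and `match_clips` helpers; it shows where within-clip tracking ends and between-clip matching over aggregated features begins.

```python
def chunk(frames, clip_len=10):
    """Split a long sequence into short, fixed-length clips."""
    return [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]


def clip_wise_track(frames, track_clip, match_clips):
    """track_clip: runs within-clip tracking; match_clips: links the tracklet
    sets of neighbouring clips using features aggregated over whole clips."""
    merged = []
    for clip in chunk(frames):
        tracklets = track_clip(clip)   # short, low-risk within-clip tracks
        # Between-clip matching compares multi-frame aggregates, so a few
        # interrupted frames inside a clip do not break long-range identity.
        merged = match_clips(merged, tracklets) if merged else tracklets
    return merged
```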
Multi-camera multi-object tracking is currently drawing attention in the computer vision field because of its superior performance in real-world applications such as video surveillance of crowded scenes or large spaces. In this work, we propose a mathematically elegant multi-camera multi-object tracking approach based on a spatial-temporal lifted multicut formulation. Our model utilizes state-of-the-art tracklets produced by single-camera trackers as proposals. Since these tracklets may contain ID-switch errors, we refine them through a novel pre-clustering obtained from 3D geometric projections. As a result, we derive a better tracking graph free of ID switches, with more precise affinity costs for the data association phase. Tracklets are then matched into multi-camera trajectories by solving a global lifted multicut formulation that incorporates short- and long-range temporal interactions between tracklets located within the same camera as well as across cameras. Experimental results on the WildTrack dataset are near perfect, and we outperform state-of-the-art trackers on Campus while staying on par on the PETS-09 dataset. We will make our implementation available upon acceptance of the paper.
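For reference, the standard lifted multicut objective that such a formulation builds on can be written as below; the paper's exact costs and constraint set may differ.

```latex
% Hedged sketch of a lifted multicut objective over a tracklet graph G=(V, E)
% with lifted edges E': binary labels y mark edges as cut (1) or joined (0).
\min_{y \in \{0,1\}^{E \cup E'}}
  \sum_{e \in E} c_e\, y_e \;+\; \sum_{e' \in E'} c_{e'}\, y_{e'}
% subject to: y restricted to E induces a decomposition of G (cycle
% inequalities), and a lifted edge e' is cut iff its endpoints lie in
% different components (path and cut inequalities). Regular edges connect
% temporally close tracklets; lifted edges add long-range and cross-camera
% costs without changing the set of feasible decompositions.
```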
We present a novel graph neural network (GNN) approach for cell tracking in high-throughput microscopy videos. By modeling the entire time-lapse sequence as a directed graph, where cell instances are represented by its nodes and their associations by its edges, we extract entire cell trajectories by finding maximal paths in the graph. This is accomplished through several key contributions incorporated into an end-to-end deep learning framework. We exploit a deep metric learning algorithm to extract cell feature vectors that distinguish between instances of different biological cells and assemble instances of the same cell. We introduce a new type of GNN block that enables mutual updates of node and edge feature vectors, thus facilitating the underlying message passing process. The extent of the message passing, determined by the number of GNN blocks, is crucial, as it enables information to 'flow' between nodes and edges far beyond their immediate neighbors in consecutive frames. Finally, we solve an edge classification problem and use the identified active edges to construct the cells' tracks and lineage trees. We demonstrate the strengths of the proposed cell tracking approach by applying it to 2D and 3D datasets of different cell types, imaging setups, and experimental conditions. We show that our framework outperforms current state-of-the-art methods on most of the evaluated datasets. The code is available at our repository: https://github.com/talbenha/cell-tracker-gnn.
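The last step (tracks from classified edges) can be sketched as follows: keep the edges predicted active and follow maximal paths through the graph, branching at cell divisions. The data layout is an assumption, not the repository's actual format.

```python
from collections import defaultdict


def build_tracks(node_ids, active_edges):
    """node_ids: iterable of detection ids; active_edges: (src, dst) pairs
    the edge classifier marked active, with src earlier in time than dst."""
    succ = defaultdict(list)
    has_pred = set()
    for s, d in active_edges:
        succ[s].append(d)
        has_pred.add(d)
    tracks = []
    roots = [n for n in node_ids if n not in has_pred]  # track/lineage starts
    stack = [[r] for r in roots]
    while stack:
        path = stack.pop()
        nxt = succ[path[-1]]
        if not nxt:
            tracks.append(path)            # maximal path = one complete track
        else:
            for d in nxt:                  # >1 successor: cell division branch
                stack.append(path + [d])
    return tracks
```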
Data association is a crucial component of any multiple object tracking (MOT) method that follows the tracking-by-detection paradigm. To generate complete trajectories, such methods employ a data association process that establishes assignments between detections and existing targets at each time step. Recent data association approaches pose the problem either as a multi-dimensional linear assignment task or as a network flow minimization problem, or solve it via multiple hypothesis tracking. However, during inference an optimization step computing the optimal assignments is required for every sequence frame, adding significant computational complexity to any given solution. To this end, in the context of this work we introduce the Transformer-based Assignment Decision Network (TADN), which tackles data association without requiring any explicit optimization during inference. In particular, TADN can directly infer assignment pairs between detections and active targets in a single forward pass of the network. We have integrated TADN into a rather simple MOT framework, designed a novel training strategy for efficient end-to-end training, and demonstrated the high potential of our approach for online visual tracking-by-detection MOT on two popular benchmarks, namely MOT17 and UA-DETRAC. Our proposed approach outperforms the state of the art in most evaluation metrics despite its simple nature as a tracker, which lacks significant auxiliary components such as occlusion handling or re-identification. The implementation of our method is publicly available at https://github.com/psaltaath/tadn-mot.
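A hedged sketch of the decode step that replaces the solver: given the network's assignment logits, one greedy pass suffices. The decoder producing the logits is omitted, and the row-wise argmax with a 'no match' column is an illustrative assumption.

```python
import torch


def decode_assignments(logits: torch.Tensor):
    """logits: (T, D+1) scores of T targets over D detections plus a final
    'no match' column; returns (target, detection) index pairs."""
    null = logits.size(1) - 1
    pairs, taken = [], set()
    for t, d in enumerate(logits.argmax(dim=1).tolist()):
        if d != null and d not in taken:   # greedy, no Hungarian/LP solver
            pairs.append((t, d))
            taken.add(d)
    return pairs
```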
To track the 3D locations and trajectories of the other traffic participants at any given time, modern autonomous vehicles are equipped with multiple cameras that cover the vehicle's full surroundings. Yet, camera-based 3D object tracking methods prioritize optimizing the single-camera setup and resort to post-hoc fusion in a multi-camera setup. In this paper, we propose a method for panoramic 3D object tracking, called CC-3DT, that associates and models object trajectories both temporally and across views, improving the overall tracking consistency. In particular, our method fuses 3D detections from multiple cameras before association, significantly reducing identity switches and improving motion modeling. Our experiments on large-scale driving datasets show that fusion before association leads to a large margin of improvement over post-hoc fusion. We set a new state of the art with a 12.6% improvement in average multi-object tracking accuracy (AMOTA) among all camera-based methods on the competitive nuScenes 3D tracking benchmark, outperforming previously published methods by 6.5% in AMOTA with the same 3D detector.
Understanding human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information for effectively modeling such interactions. In the last years, the computer vision community has significantly improved the performance of tracking algorithms across a wide variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used 'off-the-shelf' or whether more domain-specific investigations should be carried out. This paper aims to provide answers to such questions. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyzes the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis focuses on different aspects of the FPV setting, introduces new performance measures, and relates to FPV-specific tasks. The study is made possible by the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing this behavior and point out possible research directions. Despite their difficulties, we prove that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
The task of assigning a semantic class and a track identity to every pixel in a video is called video panoptic segmentation. Our work is the first to target this task in a real-world setting requiring dense interpretation in both the spatial and temporal domains. As ground truth for this task is difficult to obtain, however, existing datasets are either synthetically constructed or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark comprising two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test bed for studying long-term, pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric, Segmentation and Tracking Quality (STQ), which fairly balances the semantic and tracking aspects of this task and is better suited for evaluating sequences of arbitrary length. Finally, we provide several baselines to evaluate the status of existing methods on this new and challenging dataset. We have made our datasets, metric, benchmark server, and baselines publicly available, and hope this will inspire future research.
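For reference, the top-level form of the metric is the geometric mean of an association term and a segmentation term; the full per-term definitions are in the paper.

```latex
\mathrm{STQ} = \sqrt{\mathrm{AQ} \times \mathrm{SQ}}
% SQ: class-level mean IoU of the semantic prediction; AQ: pixel-level
% association quality of track identities over the full sequence. The
% geometric mean keeps either aspect from dominating, and neither term
% depends on a fixed temporal window, so arbitrarily long sequences can
% be evaluated.
```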
Transformer models have shown great success in handling long-range interactions, making them a promising tool for modeling video. However, they lack inductive biases and scale quadratically with input length. These limitations are further exacerbated by the high dimensionality introduced by the temporal dimension. While there are surveys analyzing the advances of Transformers for vision, none focus on an in-depth analysis of video-specific designs. In this survey, we analyze the main contributions and trends of works leveraging Transformers to model video. Specifically, we first delve into how videos are handled at the input level. Then, we study the architectural changes made to deal with video more efficiently, reduce redundancy, re-introduce useful inductive biases, and capture long-term temporal dynamics. In addition, we provide an overview of different training regimes and explore effective self-supervised learning strategies for video. Finally, we conduct a performance comparison on the most common benchmark for Video Transformers (i.e., action classification), finding them to outperform 3D ConvNets even with lower computational complexity.
Previous work on LiDAR-based 3D object detection has mainly focused on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting the temporal information in multiple frames, i.e., point cloud videos. We empirically categorize temporal information into short-term and long-term patterns. To encode the short-term data, we present a Grid Message Passing Network (GMPNet), which considers each grid (i.e., a group of points) as a node and constructs a k-NN graph with the neighboring grids. To update the features of a grid, GMPNet iteratively collects information from its neighbors, thus mining motion cues from the nearby frames. To further aggregate long-term frames, we propose an Attentive Spatiotemporal Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA) module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance the vanilla GRU to focus on small objects and better align moving objects. Our overall framework supports both online and offline video object detection in point clouds. We implement our algorithm on top of prevalent anchor-based and anchor-free detectors. Evaluation results on the challenging nuScenes benchmark show the superior performance of our method, which achieved first place on the leaderboard at the time of submission, without any bells and whistles.
3D multi-object tracking aims to uniquely and consistently identify all mobile entities. Despite the rich spatiotemporal information available in this setting, current 3D tracking methods primarily rely on abstracted information and limited history, e.g., single-frame object bounding boxes. In this work, we develop a holistic representation of traffic scenes that leverages both the spatial and the temporal information of the agents in the scene. Specifically, we reformulate tracking as a spatiotemporal problem by representing tracked objects as sequences of time-stamped points and bounding boxes over a long temporal history. At each timestamp, we improve the location and motion estimates of our tracked objects through learned refinement over the full sequence of object history. By jointly considering time and space, our representation naturally encodes fundamental physical priors such as object permanence and consistency across time. Our spatiotemporal tracking framework achieves state-of-the-art performance on the Waymo and nuScenes benchmarks.
Estimating the camera poses associated with a set of images typically relies on feature matches between the images. In contrast, we are the first to address this challenge by using objectness regions, rather than explicit semantic object detections, to guide the pose estimation problem. We propose the Pose Refiner Network (PoserNet), a lightweight graph neural network that refines approximate pairwise relative camera poses. PoserNet exploits associations between objectness regions (concisely expressed as bounding boxes) across multiple views to globally refine sparsely connected view graphs. We evaluate it on graphs of different sizes built from the 7-Scenes dataset and show how this process benefits optimization-based motion averaging algorithms, improving the median rotation error by 62 degrees with respect to the initial estimates obtained from the bounding boxes. Code and data are available at https://github.com/iit-pavis/posernet.
Recent multi-object tracking (MOT) systems leverage highly accurate object detectors; however, training such detectors requires large amounts of labeled data. Although such data is widely available for humans and vehicles, it is significantly more scarce for other animal species. We present Robust Confidence Tracking (RCT), an algorithm designed to maintain robust performance even when detection quality is poor. In contrast to prior methods that discard detection confidence information, RCT takes a fundamentally different approach, relying on the exact detection confidence values to initialize tracks, extend tracks, and filter tracks. In particular, RCT is able to minimize identity switches by making efficient use of low-confidence detections (together with a single object tracker) to keep continuous tracks of objects. To evaluate trackers in the presence of unreliable detections, we present a challenging real-world underwater fish tracking dataset, FISHTRAC. In an evaluation on FISHTRAC as well as the UA-DETRAC dataset, we find that RCT outperforms other algorithms when provided with imperfect detections, including state-of-the-art deep single- and multi-object trackers as well as more classical approaches. Specifically, RCT has the best average HOTA across methods that successfully return results for all sequences, and it has significantly fewer identity switches than the other methods.
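A hedged sketch of confidence-aware track management in the spirit of RCT: exact detection confidences drive initialization, extension, and filtering. The thresholds, the IoU matcher, and the single-object-tracker (SOT) fallback are illustrative assumptions, not the paper's implementation.

```python
def _iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-8)


def update_tracks(tracks, detections, sot, t_init=0.7, t_extend=0.2):
    used = set()
    for trk in tracks:
        # Extend with the best-overlapping detection, even a low-confidence one.
        cands = [(i, d) for i, d in enumerate(detections)
                 if i not in used and d["conf"] >= t_extend]
        if cands:
            i, det = max(cands,
                         key=lambda c: _iou(trk["boxes"][-1], c[1]["box"]))
            if _iou(trk["boxes"][-1], det["box"]) > 0.3:
                used.add(i)
                trk["boxes"].append(det["box"])
                trk["confs"].append(det["conf"])
                continue
        # No usable detection: a single-object tracker carries the identity
        # through detector dropouts instead of letting it switch.
        trk["boxes"].append(sot.predict(trk))
        trk["confs"].append(t_extend)
    for i, det in enumerate(detections):
        if i not in used and det["conf"] >= t_init:  # confident starts only
            tracks.append({"boxes": [det["box"]], "confs": [det["conf"]]})
    # Filter out tracks whose average confidence stays too low.
    return [t for t in tracks if sum(t["confs"]) / len(t["confs"]) >= t_extend]
```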
Deep learning techniques have led to remarkable breakthroughs in the field of generic object detection and have spawned many scene-understanding tasks in recent years. Scene graphs have been a focus of research because of their powerful semantic representation and their applications to scene understanding. Scene Graph Generation (SGG) refers to the task of automatically mapping an image into a semantically structured scene graph, which requires correctly labeling the detected objects and their relationships. Although this is a challenging task, the community has proposed many SGG approaches and achieved good results. In this paper, we provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. We review 138 representative works covering different input modalities and systematically summarize existing image-based SGG methods from the perspective of feature extraction and fusion. We attempt to connect and systematize the existing visual relationship detection methods in a comprehensive way, summarizing and interpreting the mechanisms and strategies of SGG. Finally, we conclude this survey with in-depth discussions of the problems that currently exist and of future research directions. This survey will help readers to better understand the current state of research and its ideas.
The recent trend in multiple object tracking (MOT) is to solve detection and tracking jointly, learning object detection and appearance features (or motion) simultaneously. Despite competitive performance, in crowded scenes joint detection and tracking usually fails to find accurate object associations due to missed or false detections. In this paper, we jointly model counting, detection, and re-identification in an end-to-end framework, named CountingMOT, tailored for crowded scenes. By imposing mutual object-count constraints between detection and counting, CountingMOT tries to find a balance between object detection and crowd density map estimation, which can help it recover missed detections or reject false ones. Our approach is an attempt to bridge the gap between object detection, counting, and re-identification. This is in contrast to prior MOT methods that either ignore the crowd density, and are thus prone to failure in crowded scenes, or depend on local correlations to build a graphical relationship for matching targets. The proposed tracker runs online and in real time, and achieves state-of-the-art results on the public benchmarks MOT16 (MOTA of 77.6%), MOT17 (MOTA of 78.0%), and MOT20 (MOTA of 70.2%).
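A minimal sketch of what a mutual object-count constraint could look like; the actual CountingMOT loss formulation may differ.

```python
import torch


def count_consistency_loss(det_scores: torch.Tensor,
                           density_map: torch.Tensor) -> torch.Tensor:
    """det_scores: (N,) per-detection confidences; density_map: (H, W)."""
    n_det = det_scores.sum()        # soft detection count
    n_crowd = density_map.sum()     # crowd count = integral of the density map
    # Penalize disagreement: pushes the detector to recover misses in dense
    # regions and the counter to suppress spurious density elsewhere.
    return torch.abs(n_det - n_crowd)
```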