The appearance of an object can be fleeting when it transforms. As eggs are broken or paper is torn, their color, shape and texture can change dramatically, preserving virtually nothing of the original except for the identity itself. Yet, this important phenomenon is largely absent from existing video object segmentation (VOS) benchmarks. In this work, we close the gap by collecting a new dataset for Video Object Segmentation under Transformations (VOST). It consists of more than 700 high-resolution videos, captured in diverse environments, which are 20 seconds long on average and densely labeled with instance masks. A careful, multi-step approach is adopted to ensure that these videos focus on complex object transformations, capturing their full temporal extent. We then extensively evaluate state-of-the-art VOS methods and make a number of important discoveries. In particular, we show that existing methods struggle when applied to this novel task and that their main limitation lies in over-reliance on static appearance cues. This motivates us to propose a few modifications to the top-performing baseline that improve its capabilities by better modeling spatio-temporal information. More broadly, we hope this work will stimulate discussion on learning more robust video object representations.
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which come with a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked, where we aim to obtain accurate pixel-level annotations of the peel, the onion pieces, the chopping board, the knife, the pot, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Alongside the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. Data, code and leaderboards: http://epic-kitchens.github.io/visor
The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information for effectively modeling such interactions. Over the past years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigations should be carried out. This paper aims to provide answers to such questions. We present the first systematic study of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible by the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite the difficulties, we show that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new, FPV-specific methodologies are investigated.
Video segmentation, i.e., grouping video frames into multiple segments or objects, plays a critical role in a broad range of practical applications, such as visual effect assistance in movies, scene understanding in autonomous driving, and virtual background creation in video conferencing, to name a few. Recently, with the renaissance of connectionism in computer vision, there has been an influx of deep learning based approaches dedicated to video segmentation that deliver compelling performance. In this survey, we comprehensively review the two basic lines of research in this area, namely generic object segmentation (of unknown categories) in videos and video semantic segmentation, by introducing their respective task settings, background concepts, perceived need, development history, and main challenges. We also provide a detailed overview of representative literature on both methods and datasets. In addition, we present quantitative performance comparisons of the reviewed methods on benchmark datasets. Finally, we point out a set of unsolved open issues in this field and suggest possible opportunities for further research.
This paper introduces a video dataset of spatiotemporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels, with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) the use of movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods and demonstrates better performance on the JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6% mAP, underscoring the need for new approaches to video understanding.
The task of assigning a semantic class and a track identity to every pixel in a video is called video panoptic segmentation. Our work is the first to target this task in a real-world setting requiring dense interpretation in both the spatial and temporal domains. As ground truth for this task is difficult to obtain, however, existing datasets are either synthetically constructed or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark encompassing two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test bed for studying long-term, pixel-precise segmentation and tracking under real-world conditions. We further propose a new evaluation metric, Segmentation and Tracking Quality (STQ), which fairly balances the semantic and tracking aspects of this task and is better suited to evaluating sequences of arbitrary length. Finally, we provide several baselines to assess the status of existing methods on this new, challenging dataset. We have made our datasets, metric, benchmark servers and baselines publicly available, and hope this will inspire future research.
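For intuition, the following small Python/NumPy sketch combines a per-class segmentation term with a per-track association term through a geometric mean, which is the general flavour of an STQ-style score. It is a simplification under assumed inputs, not the official STEP evaluation code, and names such as simplified_stq are hypothetical.

import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union > 0 else 0.0

def simplified_stq(pred_sem, gt_sem, pred_ids, gt_ids):
    """Toy STQ-flavoured score: geometric mean of a semantic term and a tracking term.
    pred_sem/gt_sem: (T, H, W) integer semantic maps; pred_ids/gt_ids: (T, H, W) track-id maps (0 = no track)."""
    classes = np.unique(gt_sem)
    sq = np.mean([iou(pred_sem == c, gt_sem == c) for c in classes])  # segmentation quality
    tracks = np.unique(gt_ids[gt_ids > 0])
    aq = np.mean([iou(pred_ids == t, gt_ids == t) for t in tracks]) if len(tracks) else 0.0  # association quality
    return float(np.sqrt(sq * aq))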
This paper proposes a self-supervised objective for learning representations that localize objects under occlusion, a property known as object permanence. A central question is the choice of learning signal in cases of total occlusion. Rather than directly supervising the locations of invisible objects, we propose a self-supervised objective that requires neither human annotation nor assumptions about object dynamics. We show that object permanence can emerge by optimizing for temporal coherence of memory: we fit a Markov walk along a space-time graph of memories, where the states at each time step are non-Markovian features from a sequence encoder. This leads to a memory representation that stores occluded objects and predicts their motion, localizing them better. The resulting model outperforms existing approaches on several datasets of increasing complexity and realism, despite requiring minimal supervision and hence being broadly applicable.
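The "Markov walk along a space-time graph of memories" can be pictured with a small cycle-consistency sketch: affinities between features at adjacent time steps define transition matrices, and a forward-then-backward walk is trained to return each node to itself. The PyTorch snippet below only illustrates that idea under assumed tensor shapes and temperature; it is not the authors' training code.

import torch
import torch.nn.functional as F

def walk_cycle_loss(feats, temperature=0.07):
    """feats: list of T tensors, each (N, D), holding N node/memory features per time step.
    Builds row-stochastic transition matrices between adjacent steps, walks forward in
    time and back again, and penalizes walks that do not return to their start node."""
    def transition(a, b):
        logits = torch.mm(F.normalize(a, dim=1), F.normalize(b, dim=1).t()) / temperature
        return F.softmax(logits, dim=1)

    n = feats[0].shape[0]
    walk = torch.eye(n, device=feats[0].device)
    for t in range(len(feats) - 1):                  # forward in time
        walk = walk @ transition(feats[t], feats[t + 1])
    for t in range(len(feats) - 1, 0, -1):           # and back to the start
        walk = walk @ transition(feats[t], feats[t - 1])

    targets = torch.arange(n, device=feats[0].device)
    return F.nll_loss(torch.log(walk + 1e-8), targets)

# e.g. walk_cycle_loss([torch.randn(16, 128) for _ in range(4)])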
Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impede the evolution of a field due to saturated algorithm performance and the lack of contemporary, high-quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high-quality, Full HD video sequences spanning multiple occurrences of common video object segmentation challenges such as occlusions, motion blur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground-truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover the strengths and weaknesses of current approaches, opening up promising directions for future work.
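The three complementary measures referred to above are, in spirit, a region term (Jaccard index), a contour term (boundary F-measure) and a temporal-stability term. The sketch below implements rough stand-ins for the first two on boolean NumPy masks using simple morphology; it is an approximation for illustration, not the official DAVIS evaluation code.

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def region_similarity(pred, gt):
    """Region term J: Jaccard index between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def boundary_f_measure(pred, gt, tol=2):
    """Approximate contour term F: precision/recall of boundary pixels within a tol-pixel band."""
    def boundary(mask):
        return np.logical_and(mask, ~binary_erosion(mask))
    bp, bg = boundary(pred), boundary(gt)
    precision = np.logical_and(bp, binary_dilation(bg, iterations=tol)).sum() / max(bp.sum(), 1)
    recall = np.logical_and(bg, binary_dilation(bp, iterations=tol)).sum() / max(bg.sum(), 1)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)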
The objective of this paper is a model that is able to discover, track and segment multiple moving objects in a video. We make four contributions. First, we introduce an object-centric segmentation model with a depth-ordered layered representation. This is implemented using a variant of the transformer architecture that ingests optical flow, where each query vector specifies an object and its layer for the entire video. The model can effectively discover multiple moving objects and handle mutual occlusions. Second, we introduce a scalable pipeline for generating synthetic training data with multiple objects, significantly reducing the requirement for labour-intensive annotation and supporting Sim2Real generalisation. Third, we show that the model is able to learn object permanence and temporal shape consistency, and can predict amodal segmentation masks. Fourth, we evaluate the model on standard video segmentation benchmarks, DAVIS, MoCA, SegTrack and FBMS-59, and achieve state-of-the-art unsupervised segmentation performance, even outperforming several supervised approaches. With test-time adaptation, we observe further performance gains.
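To make the "transformer ingesting optical flow, with one query per object/layer" idea concrete, here is a deliberately small PyTorch sketch in which each learned query produces a mask for every frame of a flow clip. All layer sizes, the per-frame (rather than per-video) query decoding, and the class name FlowQuerySegmenter are assumptions for illustration and do not reproduce the paper's architecture.

import torch
import torch.nn as nn

class FlowQuerySegmenter(nn.Module):
    """Toy query-based segmenter over optical flow: each learned query emits one
    object/layer mask logit map per frame of the clip."""
    def __init__(self, num_queries=4, dim=128):
        super().__init__()
        self.encoder = nn.Conv2d(2, dim, kernel_size=8, stride=8)   # 2-channel flow -> patch tokens
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)

    def forward(self, flow):                           # flow: (T, 2, H, W)
        feats = self.encoder(flow)                     # (T, D, H/8, W/8)
        T, D, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)      # (T, h*w, D)
        q = self.queries.unsqueeze(0).repeat(T, 1, 1)  # one set of queries per frame
        q = self.decoder(q, tokens)                    # (T, Q, D)
        masks = torch.einsum('tqd,tdhw->tqhw', q, feats)   # dot-product mask logits
        return masks.sigmoid()

# e.g. FlowQuerySegmenter()(torch.randn(3, 2, 64, 64)).shape == (3, 4, 8, 8)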
In this paper we present a new computer vision task, named video instance segmentation. The goal of this new task is simultaneous detection, segmentation and tracking of instances in videos. In other words, it is the first time that the image instance segmentation problem is extended to the video domain. To facilitate research on this new task, we propose a large-scale benchmark called YouTube-VIS, which consists of 2,883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks. In addition, we propose a novel algorithm called MaskTrack R-CNN for this task. Our new method introduces a tracking branch to Mask R-CNN to jointly perform the detection, segmentation and tracking tasks. Finally, we evaluate the proposed method and several strong baselines on our new dataset. Experimental results clearly demonstrate the advantages of the proposed algorithm and reveal insights for future improvement. We believe the video instance segmentation task will motivate the community along this line of research for video understanding.
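The tracking branch mentioned above essentially turns identity assignment into an embedding-matching problem between current detections and previously seen instances. A minimal sketch of such a matching step is given below; the similarity threshold, the greedy per-detection assignment and the function name assign_track_ids are assumptions for illustration, not the released MaskTrack R-CNN code.

import torch
import torch.nn.functional as F

def assign_track_ids(curr_emb, memory_emb, memory_ids, new_id_start, sim_thresh=0.5):
    """curr_emb: (N, D) tracking-head embeddings of current-frame detections;
    memory_emb: (M, D) embeddings of previously tracked instances with ids memory_ids.
    Each detection inherits the id of its most similar memory entry, or receives a fresh id."""
    if memory_emb.numel() == 0:
        return list(range(new_id_start, new_id_start + curr_emb.shape[0]))
    sims = F.normalize(curr_emb, dim=1) @ F.normalize(memory_emb, dim=1).t()  # (N, M) cosine similarities
    ids, next_id = [], new_id_start
    for row in sims:
        best = torch.argmax(row)
        if row[best] > sim_thresh:
            ids.append(int(memory_ids[best]))
        else:
            ids.append(next_id)
            next_id += 1
    return ids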
Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move. This information is useful to make inferences about 3D shape, physical properties and object interactions. While the problem of tracking arbitrary physical points on surfaces over longer video clips has received some attention, no dataset or benchmark for evaluation existed, until now. In this paper, we first formalize the problem, naming it tracking any point (TAP). We introduce a companion benchmark, TAP-Vid, which is composed of both real-world videos with accurate human annotations of point tracks, and synthetic videos with perfect ground-truth point tracks. Central to the construction of our benchmark is a novel semi-automatic crowdsourced pipeline which uses optical flow estimates to compensate for easier, short-term motion like camera shake, allowing annotators to focus on harder sections of video. We validate our pipeline on synthetic data and propose a simple end-to-end point tracking model TAP-Net, showing that it outperforms all prior methods on our benchmark when trained on synthetic data.
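The flow-assisted part of the annotation pipeline can be pictured as chaining per-frame optical flow to carry an annotated point forward, so that annotators only correct the residual drift. The sketch below propagates a single point through a list of dense flow fields; the nearest-pixel sampling and the function name propagate_point are simplifying assumptions, not the TAP-Vid tooling itself.

import numpy as np

def propagate_point(flows, x0, y0):
    """flows: list of (H, W, 2) dense flow fields mapping frame t to t+1;
    (x0, y0): annotated point location in frame 0. Returns the chained track obtained
    by sampling each flow field at the current (rounded, clipped) position."""
    H, W, _ = flows[0].shape
    x, y = float(x0), float(y0)
    track = [(x, y)]
    for flow in flows:
        xi = int(round(min(max(x, 0), W - 1)))
        yi = int(round(min(max(y, 0), H - 1)))
        dx, dy = flow[yi, xi]
        x, y = x + float(dx), y + float(dy)
        track.append((x, y))
    return track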
Can our video understanding systems perceive objects when heavy occlusion exists in a scene? To answer this question, we collect a large-scale dataset called OVIS for occluded video instance segmentation, that is, to simultaneously detect, segment and track instances in occluded scenes. OVIS consists of 296K high-quality masks from 25 semantic categories, where object occlusions usually occur. While our human vision system can understand those occluded instances through contextual reasoning and association, our experiments suggest that current video understanding systems cannot. On the OVIS dataset, the highest AP achieved by state-of-the-art algorithms is only 16.3, which reveals that we are still at a nascent stage in understanding objects, instances and videos in real-world scenarios. We also present a simple plug-and-play module that performs temporal feature calibration to complement missing object cues caused by occlusion. Built upon MaskTrack R-CNN and SipMask, we obtain remarkable AP improvements on the OVIS dataset. The OVIS dataset and project code are available at http://songbai.site/ovis.
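As a rough picture of what a plug-and-play temporal feature calibration module might do, the sketch below predicts per-pixel offsets that warp reference-frame features onto the current frame and then fuses the two, so that cues hidden by occlusion in the current frame can be borrowed from another. The offset-plus-grid_sample design and all layer sizes are assumptions for illustration; they are not the module proposed in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalFeatureCalibration(nn.Module):
    """Toy calibration block: predict a per-pixel offset, warp reference features onto
    the current frame with it, and fuse the aligned features with the current ones."""
    def __init__(self, channels=64):
        super().__init__()
        self.offset = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, curr, ref):                                    # both (B, C, H, W)
        B, C, H, W = curr.shape
        offset = self.offset(torch.cat([curr, ref], dim=1))          # (B, 2, H, W), in pixels
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
        base = torch.stack([xs, ys], dim=0).float().to(curr.device)  # (2, H, W), x then y
        grid = base.unsqueeze(0) + offset                            # sampling locations
        grid_x = grid[:, 0] / (W - 1) * 2 - 1                        # normalize to [-1, 1]
        grid_y = grid[:, 1] / (H - 1) * 2 - 1
        grid = torch.stack([grid_x, grid_y], dim=-1)                 # (B, H, W, 2)
        aligned = F.grid_sample(ref, grid, align_corners=True)
        return self.fuse(torch.cat([curr, aligned], dim=1))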
The visual world can be parsimoniously characterized in terms of distinct entities with sparse interactions. Discovering this compositional structure in dynamic visual scenes has proven challenging for end-to-end computer vision approaches unless explicit instance-level supervision is provided. Slot-based models that leverage motion cues have recently shown great promise in learning to represent, segment and track objects without direct supervision, but they still fail to scale to complex real-world multi-object videos. To bridge this gap, we take inspiration from human development and hypothesize that information about scene geometry, in the form of depth signals, can facilitate object-centric learning. We introduce SAVi++, an object-centric video model trained to predict depth signals from a slot-based video representation. By further leveraging best practices for model scaling, we are able to train SAVi++ to segment complex dynamic scenes recorded with moving cameras, containing both static and moving objects of diverse appearance on naturalistic backgrounds, without the need for segmentation supervision. Finally, we demonstrate that, by using sparse depth signals obtained from LiDAR, SAVi++ is able to learn emergent object segmentation and tracking from videos in the real-world Waymo Open dataset.
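One simple way to picture "predicting depth signals from a slot-based video representation" is a decoder that maps each slot to a depth map and an alpha mask and composites them into one prediction that a depth (e.g. LiDAR) loss can supervise. The MLP decoder, output resolution and class name SlotDepthDecoder below are assumptions for illustration, not the SAVi++ architecture.

import torch
import torch.nn as nn

class SlotDepthDecoder(nn.Module):
    """Toy decoder: each of K slots produces a depth map and an alpha logit map;
    a softmax over slots composites them into a single depth prediction."""
    def __init__(self, slot_dim=64, out_hw=(32, 32)):
        super().__init__()
        self.out_hw = out_hw
        self.mlp = nn.Sequential(
            nn.Linear(slot_dim, 256), nn.ReLU(),
            nn.Linear(256, out_hw[0] * out_hw[1] * 2),   # depth value + alpha logit per pixel
        )

    def forward(self, slots):                            # slots: (B, K, slot_dim)
        B, K, _ = slots.shape
        H, W = self.out_hw
        out = self.mlp(slots).view(B, K, 2, H, W)
        depth_k, alpha_logit = out[:, :, 0], out[:, :, 1]
        alpha = torch.softmax(alpha_logit, dim=1)        # per-pixel mixture over slots
        return (alpha * depth_k).sum(dim=1), alpha       # (B, H, W) depth, (B, K, H, W) masks

# A depth loss, e.g. nn.functional.mse_loss(pred_depth, target_depth), would only be
# evaluated at pixels where a (possibly sparse) depth signal is available.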
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
This paper presents a new large-scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high-quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large-scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions, such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data so that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest-performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large-scale video-based training data.
In this paper, we present the Multi-view Extended Videos with Identities (MEVID) dataset for large-scale video person re-identification (ReID) in the wild. To our knowledge, MEVID represents the most-varied video person ReID dataset, spanning an extensive indoor and outdoor environment across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, we label the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen in 33 camera views from the very large-scale MEVA person activities dataset. While other datasets have more unique identities, MEVID emphasizes a richer set of information about each individual, such as: 4 outfits/identity vs. 2 outfits/identity in CCVID, 33 viewpoints across 17 locations vs. 6 in 5 simulated locations for MTA, and 10 million frames vs. 3 million for LS-VID. Being based on the MEVA video dataset, we also inherit data that is intentionally demographically balanced to the continental United States. To accelerate the annotation process, we developed a semi-automatic annotation framework and GUI that combines state-of-the-art real-time models for object detection, pose estimation, person ReID, and multi-object tracking. We evaluate several state-of-the-art methods on MEVID challenge problems and comprehensively quantify their robustness in terms of changes of outfit, scale, and background location. Our quantitative analysis of the realistic, unique aspects of MEVID shows that there are significant remaining challenges in video person ReID and indicates important directions for future research.
Interactive object understanding, or what we can do to objects and how, is a long-standing goal of computer vision. In this paper, we tackle this problem by observing human hands in in-the-wild egocentric videos. We show that observing what human hands interact with and how can provide both the relevant data and the necessary supervision. Attending to hands readily localizes and stabilizes active objects for learning and reveals where interactions with objects take place. Analyzing the hands shows what we can do to objects and how. We apply these basic principles on the EPIC-KITCHENS dataset and successfully learn state-sensitive features as well as object affordances (regions of interaction and afforded grasps), purely by observing hands in egocentric videos.
In this paper we introduce SiamMask, a framework that performs both visual object tracking and video object segmentation in real time with the same simple method. We improve the offline training procedure of popular fully-convolutional Siamese approaches by augmenting their loss with a binary segmentation task. Once offline training is completed, SiamMask only requires a single bounding box for initialization and can simultaneously carry out visual object tracking and segmentation at high frame rates. Moreover, we show that the framework can be extended to handle multiple object tracking and segmentation by simply re-using the multi-task model in a cascaded fashion. Experimental results show that our approach has high processing efficiency, at around 55 frames per second. It yields real-time state-of-the-art results on visual object tracking benchmarks, while demonstrating competitive performance at high speed on video object segmentation benchmarks.
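The training change described above boils down to adding a per-pixel binary segmentation term to the usual Siamese tracking objective. A hedged sketch of such a combined loss is shown below; the loss weights and argument names are assumptions, not the values used in the paper.

import torch.nn.functional as F

def siamese_multitask_loss(score_logits, score_targets, box_pred, box_targets,
                           mask_logits, mask_targets, w_box=1.0, w_mask=1.0):
    """Combine the similarity-score classification loss, the box-regression loss and the
    added binary segmentation loss into a single training objective."""
    cls_loss = F.binary_cross_entropy_with_logits(score_logits, score_targets)
    box_loss = F.smooth_l1_loss(box_pred, box_targets)
    seg_loss = F.binary_cross_entropy_with_logits(mask_logits, mask_targets)
    return cls_loss + w_box * box_loss + w_mask * seg_loss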
While video instance segmentation (VIS) has seen rapid progress, current approaches struggle to predict high-quality masks with accurate boundary details. Moreover, the predicted segmentations often fluctuate over time, suggesting that temporal consistency cues are neglected or not fully utilized. In this paper, we set out to tackle these issues, with the aim of achieving highly detailed and more temporally stable mask predictions for VIS. We first propose the Video Mask Transfiner (VMT) method, capable of leveraging fine-grained high-resolution features thanks to a highly efficient video transformer structure. Our VMT detects and groups sparse error-prone spatio-temporal regions of each tracklet in the video segment, which are then refined using both local and instance-level cues. Second, we identify that the coarse boundary annotations of the popular YouTube-VIS dataset constitute a major limiting factor. Based on our VMT architecture, we therefore design an automated annotation refinement approach via iterative training and self-correction. To benchmark high-quality mask predictions for VIS, we introduce the HQ-YTVIS dataset, consisting of a manually re-annotated test set and our automatically refined training data. We compare VMT with the most recent state-of-the-art methods on HQ-YTVIS, as well as on the YouTube-VIS, OVIS and BDD100K MOTS benchmarks. Experimental results clearly demonstrate the efficacy and effectiveness of our method in segmenting complex and dynamic objects by capturing precise details.
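The "detect sparse error-prone spatio-temporal regions, then refine them" step can be pictured, very loosely, as picking the pixels where the current mask prediction is least confident and handing only those to a refinement stage. The snippet below selects the k most uncertain pixels per frame of a tracklet; the criterion and the function name select_uncertain_regions are assumptions for illustration, not the VMT detection mechanism.

import torch

def select_uncertain_regions(mask_probs, k=100):
    """mask_probs: (T, H, W) per-frame mask probabilities for one tracklet.
    Returns, for each frame, the flat indices of the k pixels whose probability is
    closest to 0.5 -- a stand-in for error-prone regions a refinement stage would revisit."""
    uncertainty = -(mask_probs - 0.5).abs()    # higher means more uncertain
    flat = uncertainty.flatten(1)              # (T, H*W)
    return flat.topk(k, dim=1).indices         # (T, k)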
Although deep learning methods have achieved advanced video object recognition performance in recent years, perceiving heavily occluded objects in videos remains a very challenging task. To promote the development of occlusion understanding, we collect a large-scale dataset called OVIS for video instance segmentation in occluded scenarios. OVIS consists of 296K high-quality masks and 901 occluded scenes. While our human vision system can perceive those occluded objects through contextual reasoning and association, our experiments suggest that current video understanding systems cannot. On the OVIS dataset, all baseline methods suffer a performance drop of around 80%, which demonstrates that there is still a long way to go in understanding obscured objects and videos in complex real-world scenarios. To facilitate research on new paradigms for video understanding systems, we launched a challenge based on the OVIS dataset, and the top-performing submitted algorithms have achieved much higher performance than our baselines. In this paper, we introduce the OVIS dataset and further dissect it by analyzing the results of the baselines and the submitted methods. The OVIS dataset and challenge information can be found at http://songbai.site/ovis.