We present a dataset that contains object annotations with unique object identities (IDs) for the High Efficiency Video Coding (HEVC) v1 Common Test Conditions (CTC) sequences. Ground-truth annotations for 13 sequences were prepared and released as a dataset called SFU-HW-Tracks-v1. For each video frame, the ground-truth annotations include object class ID, object ID, and bounding box position and dimensions. The dataset can be used to evaluate object tracking performance on uncompressed video sequences and to study the relationship between video compression and object tracking.
Manually analyzing spermatozoa is a tremendous task for biologists due to the many fast-moving spermatozoa, causing inconsistencies in the quality of the assessments. Therefore, computer-assisted sperm analysis (CASA) has become a popular solution. Despite this, more data is needed to train supervised machine learning approaches in order to improve accuracy and reliability. In this regard, we provide a dataset called VISEM-Tracking with 20 video recordings of 30s of spermatozoa with manually annotated bounding-box coordinates and a set of sperm characteristics analyzed by experts in the domain. VISEM-Tracking is an extension of the previously published VISEM dataset. In addition to the annotated data, we provide unlabeled video clips for easy-to-use access and analysis of the data. As part of this paper, we present baseline sperm detection performances using the YOLOv5 deep learning model trained on the VISEM-Tracking dataset. As a result, the dataset can be used to train complex deep-learning models to analyze spermatozoa. The dataset is publicly available at https://zenodo.org/record/7293726.
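The bounding-box annotations used to train detectors such as YOLOv5 are commonly stored in the YOLO text format (one object per line: class index plus normalized center coordinates and sizes). As a hypothetical sketch of converting such a label back to pixel corner coordinates for visualization — the function name and tuple layout are our own, not part of the VISEM-Tracking release:

```python
def yolo_to_pixels(box, img_w, img_h):
    """Convert a YOLO-format box (class, cx, cy, w, h, all normalized
    to [0, 1]) into pixel corner coordinates (x1, y1, x2, y2)."""
    cls, cx, cy, w, h = box
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return cls, x1, y1, x2, y2
```

For a 640x480 frame, a centered box `(0, 0.5, 0.5, 0.2, 0.2)` maps to roughly `(256, 192)`–`(384, 288)` in pixels.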
Panoptic image segmentation is the computer vision task of finding groups of pixels in an image and assigning semantic classes and object instance identifiers to them. Research in image segmentation has become increasingly popular due to its critical applications in robotics and autonomous driving. The research community thereby relies on publicly available benchmark datasets to advance the state of the art in computer vision. Due to the high cost of labeling images, however, there is a shortage of publicly available ground-truth labels suitable for panoptic segmentation. The high labeling cost also makes it challenging to extend existing datasets to the video domain and to multi-camera setups. We therefore present the Waymo Open Dataset: Panoramic Video Panoptic Segmentation dataset, a large-scale dataset that offers high-quality panoptic segmentation labels for autonomous driving. We generate our dataset from the publicly available Waymo Open Dataset, leveraging its diverse set of camera images. Our labels are consistent over time for video processing and consistent across multiple cameras mounted on the vehicles for full panoramic scene understanding. Specifically, we offer labels for 28 semantic categories and 2,860 temporal sequences captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images. To the best of our knowledge, this makes our dataset an order of magnitude larger than existing datasets that offer video panoptic segmentation labels. We further propose a new benchmark for panoramic video panoptic segmentation and establish a number of strong baselines based on the DeepLab family of models. We will make the benchmark and the code publicly available. Find the dataset at https://waymo.com/open.
In this paper we present a new computer vision task, named video instance segmentation. The goal of this new task is simultaneous detection, segmentation and tracking of instances in videos. In other words, it is the first time that the image instance segmentation problem is extended to the video domain. To facilitate research on this new task, we propose a large-scale benchmark called YouTube-VIS, which consists of 2,883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks. In addition, we propose a novel algorithm called MaskTrack R-CNN for this task. Our new method introduces a new tracking branch to Mask R-CNN to jointly perform the detection, segmentation and tracking tasks simultaneously. Finally, we evaluate the proposed method and several strong baselines on our new dataset. Experimental results clearly demonstrate the advantages of the proposed algorithm and reveal insight for future improvement. We believe the video instance segmentation task will motivate the community along the line of research for video understanding.
This paper addresses multiple object tracking (MOT), an important problem in computer vision that remains challenging due to many practical issues, especially occlusions. Indeed, we propose a new real-time Depth Perspective-aware Multiple Object Tracking (DP-MOT) approach to tackle the occlusion problem in MOT. A simple yet efficient Subject-Ordered Depth Estimation (SODE) is first proposed to automatically order the depth positions of detected subjects in a 2D scene in an unsupervised manner. Using the output of SODE, a new Active pseudo-3D Kalman filter, a simple but effective extension of the Kalman filter with dynamic control variables, is proposed to dynamically update the motion of objects. In addition, a new high-order association approach is proposed in the data association step to incorporate first-order and second-order relationships between the detected objects. The proposed approach consistently achieves state-of-the-art performance compared to recent MOT methods on standard MOT benchmarks.
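The pseudo-3D filter above is described as a Kalman filter extended with dynamic control variables. A minimal one-dimensional sketch of a Kalman filter with a control input illustrates the general predict/update structure such an extension builds on; this is not the authors' actual formulation, and all parameter names are illustrative:

```python
class Kalman1D:
    """Scalar Kalman filter with a control input u in the predict step."""
    def __init__(self, x=0.0, p=1.0, q=0.1, r=0.1):
        self.x, self.p = x, p   # state estimate and its variance
        self.q, self.r = q, r   # process and measurement noise

    def predict(self, u=0.0, a=1.0, b=1.0):
        # x <- a*x + b*u : the control term u is where a dynamic
        # control variable (e.g. a depth-dependent correction) enters
        self.x = a * self.x + b * u
        self.p = a * self.p * a + self.q
        return self.x

    def update(self, z):
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x = self.x + k * (z - self.x)
        self.p = (1.0 - k) * self.p
        return self.x
```

Standard tracking-by-detection pipelines run `predict` once per frame and `update` with each associated detection; the control input allows the motion model itself to be adjusted on the fly.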
Object motion and object appearance are commonly used information in multiple object tracking (MOT) applications, either for associating detections across frames in tracking-by-detection methods or for direct track prediction in joint-detection-and-tracking methods. However, not only are these two types of information often considered separately, but they also do not help make direct use of visual information from the current frame of interest. In this paper, we present PatchTrack, a Transformer-based joint-detection-and-tracking system that predicts tracks using patches of the current frame of interest. We use a Kalman filter to predict the locations of existing tracks in the current frame from the previous frame. Patches cropped from the predicted bounding boxes are sent to the Transformer decoder to infer new tracks. By utilizing both the object motion and object appearance information encoded in the patches, the proposed method pays more attention to where new tracks are more likely to occur. We show the effectiveness of PatchTrack on recent MOT benchmarks, including MOT16 (MOTA 73.71%, IDF1 65.77%) and MOT17 (MOTA 73.59%, IDF1 65.23%). The results are published on https://motchallenge.net/method/mot=4725&chl=10.
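The patch-cropping step described above — cutting the Kalman-predicted boxes out of the current frame before feeding them to the decoder — can be sketched on a toy frame stored as a nested list; the real system crops image tensors, and the names here are illustrative:

```python
def crop_patch(frame, box):
    """Crop a patch from a frame.

    frame: 2D list of pixel values (H rows x W columns).
    box:   (x1, y1, x2, y2) integer corners, x2/y2 exclusive.
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in frame[y1:y2]]
```

Each cropped patch carries both appearance (its pixels) and motion (where the predicted box placed it), which is the combination the method exploits.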
The increasing demand for meat products, combined with farm labor shortages, has resulted in a need to develop new real-time solutions to monitor animals effectively. Significant progress has been made in continuously locating individual pigs using tracking-by-detection methods. However, these methods fail to cover the entire floor of elongated pens at adequate resolution with a single fixed camera. We address this problem by using multiple cameras, arranged so that the fields of view of adjacent cameras overlap and together span the entire floor. Avoiding breaks in tracking requires inter-camera handover when a pig crosses from one camera's view into an adjacent camera's view. We identify adjacent cameras and shared pig locations on the floor using inter-view homography for handover. Our experiments involve two grow-finish pens of group-housed pigs and three RGB cameras. Our algorithm first detects pigs using a deep-learning-based object detection model (YOLO) and creates their local tracking IDs using a multi-object tracking algorithm (DeepSORT). We then use the inter-camera shared locations to match views and generate a global ID for each pig that is preserved throughout its track. To evaluate our approach, we provide five two-minute-long video sequences with fully annotated global identities. We tracked pigs in individual camera views with multi-object tracking accuracy and precision of 65.0% and 54.3%, respectively, and achieved a camera handover accuracy of 74.0%. We open-source our code and annotated dataset at https://github.com/aifarms/multi-camera-pig-tracking
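Once local tracks from adjacent cameras are expressed in shared floor coordinates (via the homography), handover reduces to matching nearby positions across views. A simplified greedy nearest-neighbour sketch of that matching step — the published method's actual association logic may differ, and all names and thresholds are illustrative:

```python
import math

def match_tracks(cam_a, cam_b, max_dist=0.5):
    """Greedily pair local track IDs across two overlapping cameras.

    cam_a, cam_b: dicts mapping local track ID -> (x, y) floor
    position in the shared (homography-projected) coordinate frame.
    Returns a list of (id_a, id_b) pairs closer than max_dist.
    """
    pairs, used = [], set()
    for id_a, pa in cam_a.items():
        best, best_d = None, max_dist
        for id_b, pb in cam_b.items():
            if id_b in used:
                continue
            d = math.dist(pa, pb)
            if d < best_d:
                best, best_d = id_b, d
        if best is not None:
            used.add(best)
            pairs.append((id_a, best))
    return pairs
```

Matched pairs then share one global ID, so a pig keeps its identity as it walks between camera views.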
How would you fairly evaluate two multi-object tracking algorithms (i.e. trackers), each one employing a different object detector? Detectors keep improving, thus trackers can make less effort to estimate object states over time. Is it then fair to compare a new tracker employing a new detector with another tracker using an old detector? In this paper, we propose a novel performance measure, named Tracking Effort Measure (TEM), to evaluate trackers that use different detectors. TEM estimates the improvement that the tracker does with respect to its input data (i.e. detections) at frame level (intra-frame complexity) and sequence level (inter-frame complexity). We evaluate TEM over well-known datasets, four trackers and eight detection sets. Results show that, unlike conventional tracking evaluation measures, TEM can quantify the effort done by the tracker with a reduced correlation on the input detections. Its implementation is publicly available online at https://github.com/vpulab/MOT-evaluation.
The COVID-19 pandemic has caused an unprecedented global public health crisis. Given its inherent nature, social distancing measures are proposed as the primary strategy to curb the spread of the pandemic. Identifying situations where these protocols are violated therefore has implications for curtailing the spread of the disease and promoting a sustainable lifestyle. This paper proposes a novel computer-vision-based system that analyzes CCTV footage to provide a threat-level assessment of COVID-19 spread. The system strives to capture the information content of CCTV footage spanning multiple frames in order to identify instances of social distancing protocol violations in individual frames, identification across space, and the recognition of group behaviors. This functionality is primarily achieved by utilizing a temporal-graph-based structure to represent the information in the CCTV footage, together with a strategy to holistically interpret the graph and quantify the threat level of a given scene. The individual components are tested and validated on a range of scenarios, and the complete system is validated against human expert opinion. The results reflect the dependence of the threat level on people, their physical proximity, interactions, protective clothing, and group dynamics. The system performs with 76% accuracy, making a deployable threat monitoring system in cities feasible to allow normalcy and sustainability in society.
Multi-camera tracking systems are gaining popularity in applications that demand high-quality tracking results, such as frictionless checkout, because monocular multi-object tracking (MOT) systems often fail in cluttered and crowded environments due to occlusion. Multiple highly overlapping cameras can significantly alleviate the problem by recovering partial 3D information. However, the cost of creating a high-quality multi-camera tracking dataset with diverse camera settings and backgrounds has limited dataset scale in this domain. In this paper, we provide a large-scale, densely labeled multi-camera tracking dataset covering five different environments, built with the help of an auto-annotation system. The system uses overlapping and calibrated depth and RGB cameras to build a high-performance 3D tracker that automatically generates 3D tracking results. The 3D tracking results are projected into each RGB camera view using the camera parameters to create 2D tracking results. We then manually check and correct the 3D tracking results to ensure label quality, which is much cheaper than fully manual annotation. We conducted extensive experiments using two real-time multi-camera trackers and a person re-identification (ReID) model with different settings. The dataset provides a more reliable benchmark for multi-camera, multi-object tracking systems in cluttered and crowded environments. Furthermore, our results demonstrate that adapting the trackers and the ReID model on this dataset significantly improves their performance. Our dataset will be publicly released upon acceptance of this work.
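The projection step above — mapping 3D tracker output into each RGB view using camera parameters — can be sketched with a minimal pinhole camera model for a point already expressed in a camera's coordinate frame. The released system also involves extrinsics and lens distortion, which are omitted here, and the parameter names are illustrative:

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates, Z forward) onto the
    image plane using pinhole intrinsics: focal lengths fx, fy and
    principal point (cx, cy). Returns pixel coordinates (u, v)."""
    X, Y, Z = point_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

Projecting each 3D track position this way per camera, then drawing a box around it, is what turns one 3D annotation into consistent 2D labels across all views.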
A typical pipeline for multi-object tracking (MOT) is to use a detector for object localization, followed by re-identification (re-ID) for object association. This pipeline is partially motivated by recent progress in both object detection and re-ID, and partially by biases in existing tracking datasets, where most objects tend to have distinguishing appearance and re-ID models are sufficient for establishing associations. In response to such bias, we would like to re-emphasize that methods for multi-object tracking should also work when object appearance is not sufficiently discriminative. To this end, we propose a large-scale dataset for multi-human tracking, in which humans have similar appearance, diverse motion, and extreme articulation. As the dataset mostly contains group dancing videos, we name it "DanceTrack". We expect DanceTrack to provide a better platform for developing MOT algorithms that rely less on visual discrimination and depend more on motion analysis. We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack compared with existing benchmarks. The dataset, project code and competition server are released at: https://github.com/danceTrack.
Multi-animal tracking (MAT), a multi-object tracking (MOT) problem, is crucial for animal motion and behavior analysis and has many important applications in fields such as biology, ecology and animal conservation. Despite its importance, MAT is largely under-explored compared to other MOT problems such as multi-human tracking, due to the scarcity of dedicated benchmarks. To address this problem, we introduce AnimalTrack, a dedicated benchmark for multi-animal tracking in the wild. Specifically, AnimalTrack consists of 58 sequences from a diverse selection of 10 common animal categories. On average, each sequence comprises 33 target objects for tracking. In order to ensure high quality, every frame in AnimalTrack is manually labeled with careful inspection and refinement. To the best of our knowledge, AnimalTrack is the first benchmark dedicated to multi-animal tracking. In addition, to understand how existing MOT algorithms perform on AnimalTrack and provide baselines for future comparison, we extensively evaluate 14 state-of-the-art representative trackers. The evaluation results demonstrate that, not surprisingly, most of these trackers degrade due to the differences between pedestrians and animals in various aspects (e.g., pose, motion, and appearance), and more effort is needed to improve multi-animal tracking. We hope that AnimalTrack, together with our evaluation and analysis, will foster further progress on multi-animal tracking. The dataset and evaluation as well as our analysis will be made available at https://hengfan2010.github.io/projects/AnimalTrack/.
Visual object analysis researchers are increasingly experimenting with video, because it is expected that motion cues should help with detection, recognition, and other analysis tasks. This paper presents the Cambridge-driving Labeled Video Database (CamVid) as the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes. The database addresses the need for experimental data to quantitatively evaluate emerging algorithms. While most videos are filmed with fixed-position CCTV-style cameras, our data was captured from the perspective of a driving automobile. The driving scenario increases the number and heterogeneity of the observed object classes. Over 10 min of high quality 30 Hz footage is being provided, with corresponding semantically labeled images at 1 Hz and in part, 15 Hz. The CamVid Database offers four contributions that are relevant to object analysis researchers. First, the per-pixel semantic segmentation of over 700 images was specified manually, and was then inspected and confirmed by a second person for accuracy. Second, the high-quality and large resolution color video images in the database represent valuable extended duration digitized footage to those interested in driving scenarios or ego-motion. Third, we filmed calibration sequences for the camera color response and intrinsics, and computed a 3D camera pose for each frame in the sequences. Finally, in support of expanding this or other databases, we present custom-made labeling software for assisting users who wish to paint precise class-labels for other images and videos. We evaluate the relevance of the database by measuring the performance of an algorithm from each of three distinct domains: multi-class object recognition, pedestrian detection, and label propagation.
Person re-identification (re-ID) aims at searching for a person of interest (the query) in a network of cameras. In the classic re-ID setting, the query is searched for in a gallery containing properly cropped images of entire bodies. Recently, the live re-ID setting was introduced to better represent the practical application context of re-ID. It consists in searching for the query in short videos containing whole scene frames. The initial live re-ID baseline used a pedestrian detector to build a large search gallery and a classic re-ID model to find the query in the gallery. However, the generated galleries were too large and contained low-quality images, which decreased live re-ID performance. Here, we present a new live re-ID approach called TRADE to generate smaller, higher-quality galleries. TRADE first uses a tracking algorithm to identify sequences of images of the same person in the gallery. Subsequently, an anomaly detection model is used to select a single good representative of each tracklet. TRADE is validated on the live re-ID version of the PRID-2011 dataset and shows significant improvements over the baseline.
Tracking has traditionally been the art of following interest points through space and time. This changed with the rise of powerful deep networks. Nowadays, tracking is dominated by pipelines that perform object detection followed by temporal association, also known as tracking-by-detection. We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. It achieves 67.8% MOTA on the MOT17 challenge at 22 FPS and 89.4% MOTA on the KITTI tracking benchmark at 15 FPS, setting a new state of the art on both datasets. CenterTrack is easily extended to monocular 3D tracking by regressing additional 3D attributes. Using monocular video input, it achieves 28.3% AMOTA@0.2 on the newly released nuScenes 3D tracking benchmark, substantially outperforming the monocular baseline on this benchmark while running at 28 FPS.
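CenterTrack's association step can be understood as a greedy matching: each current detection, displaced by its predicted offset, is matched to the closest unclaimed center from the previous frame. A simplified sketch of that greedy step — the data layout and threshold are illustrative, not the authors' implementation:

```python
import math

def associate(detections, offsets, prev_tracks, max_dist=50.0):
    """Greedy center-offset association.

    detections:  list of (x, y) object centers in the current frame.
    offsets:     per-detection predicted displacement back to the
                 previous frame.
    prev_tracks: dict track_id -> (x, y) center in the previous frame.
    Returns a dict mapping detection index -> matched track_id.
    """
    assignments, free = {}, dict(prev_tracks)
    for i, ((x, y), (dx, dy)) in enumerate(zip(detections, offsets)):
        px, py = x - dx, y - dy   # where this object was last frame
        best, best_d = None, max_dist
        for tid, (tx, ty) in free.items():
            d = math.hypot(px - tx, py - ty)
            if d < best_d:
                best, best_d = tid, d
        if best is not None:
            assignments[i] = best
            free.pop(best)      # each track claimed at most once
    return assignments
```

Unmatched detections spawn new tracks, which is what keeps the pipeline online and free of any look-ahead.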
The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed geographical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open.
The understanding of human-object interactions is fundamental in first-person vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. In the last years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigations should be carried out. This paper aims to provide an answer to such questions. We present the first systematic study of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, by introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite their difficulties, we prove that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We anticipate that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
This paper presents a new method called PolyTrack for fast multi-object tracking and segmentation using bounding polygons. PolyTrack detects objects by producing heatmaps of their center keypoints. For each of them, a rough segmentation is obtained by computing a bounding polygon over each instance instead of a traditional bounding box. Tracking is done by taking two consecutive frames as input and computing a center offset for each object detected in the first frame to predict its location in the second frame. A Kalman filter is also applied to reduce the number of ID switches. Since our target application is automated driving systems, we apply our method on urban environment videos. We trained and evaluated PolyTrack on the MOTS and KITTIMOTS datasets. Results show that tracking polygons can be a good alternative to bounding box and mask tracking. The PolyTrack code is available at https://github.com/gafaua/polytrack.
We present a novel approach for tracking multiple people in video. Unlike past approaches that employ 2D representations, we focus on using 3D representations of people located in three-dimensional space. To this end, we develop a method, Human Mesh and Appearance Recovery (HMAR), which, in addition to extracting the 3D geometry of the person as a SMPL mesh, also extracts appearance as a texture map on the triangles of the mesh. This serves as a 3D representation of appearance that is robust to viewpoint and pose changes. Given a video clip, we first detect bounding boxes corresponding to people, and for each one we extract 3D appearance, pose, and location information using HMAR. These embedding vectors are then sent to a transformer, which performs spatio-temporal aggregation of the representations over the duration of the sequence. The similarity of the resulting representations is used to solve for the associations that assign each person to a tracklet. We evaluate our approach on the PoseTrack, MuPoTs and AVA datasets. We find that 3D representations are more effective than 2D representations for tracking in these settings, and we obtain state-of-the-art performance. Code and results are available at: https://brjathu.github.io/t3dp.
Capturing an event from multiple camera angles can give a viewer the most complete and interesting picture of that event. To be suitable for broadcasting, a human director needs to decide what to show at each point in time. With an increasing number of camera angles this can become cumbersome. The introduction of omnidirectional or wide-angle cameras has allowed events to be captured more completely, making the director's task even more difficult. In this paper, a system is presented that, given multiple ultra-high-resolution video streams of an event, can generate a visually pleasing sequence of shots that follows the relevant action of the event. Because the algorithm is generic, it can be applied to most scenarios that feature humans. The proposed method allows for online processing when real-time broadcasting is required, as well as offline processing when the quality of the camera operation is prioritized. Object detection is used to detect humans and other objects of interest in the input streams. The detected persons of interest, together with a set of rules based on cinematic conventions, are used to determine which video stream to show and what part of that stream should be virtually framed. The user can provide a number of settings that determine how these rules are interpreted. The system is able to handle input from different wide-angle video streams by removing lens distortion. Using a user study it is shown, for a variety of different scenarios, that the proposed automated director is able to capture an event with aesthetically pleasing video compositions and human-like shot switching behavior.
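The virtual framing step can be sketched as choosing a crop of the wide-angle stream that contains the detected persons of interest plus some padding, clamped to the frame boundaries, with the padding ratio standing in for the configurable cinematic rules. This is a simplified illustration, not the system's actual framing logic, and all parameters are illustrative:

```python
def framing_window(person_boxes, frame_w, frame_h, pad=0.15):
    """Choose a crop (x1, y1, x2, y2) that contains all detected
    persons' boxes plus relative padding, clamped to the frame."""
    x1 = min(b[0] for b in person_boxes)
    y1 = min(b[1] for b in person_boxes)
    x2 = max(b[2] for b in person_boxes)
    y2 = max(b[3] for b in person_boxes)
    px = (x2 - x1) * pad   # horizontal headroom
    py = (y2 - y1) * pad   # vertical headroom
    return (max(0, x1 - px), max(0, y1 - py),
            min(frame_w, x2 + px), min(frame_h, y2 + py))
```

Smoothing this window over time (rather than recomputing it per frame) is what would give the shot a human-like, steady camera feel.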