Visual object analysis researchers are increasingly experimenting with video, because it is expected that motion cues should help with detection, recognition, and other analysis tasks. This paper presents the Cambridge-driving Labeled Video Database (CamVid) as the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes. The database addresses the need for experimental data to quantitatively evaluate emerging algorithms. While most videos are filmed with fixed-position CCTV-style cameras, our data was captured from the perspective of a driving automobile. The driving scenario increases the number and heterogeneity of the observed object classes. Over 10 min of high-quality 30 Hz footage is provided, with corresponding semantically labeled images at 1 Hz and, in part, at 15 Hz. The CamVid Database offers four contributions that are relevant to object analysis researchers. First, the per-pixel semantic segmentation of over 700 images was specified manually, and was then inspected and confirmed by a second person for accuracy. Second, the high-quality and large-resolution color video images in the database represent valuable extended-duration digitized footage for those interested in driving scenarios or ego-motion. Third, we filmed calibration sequences for the camera color response and intrinsics, and computed a 3D camera pose for each frame in the sequences. Finally, in support of expanding this or other databases, we present custom-made labeling software for assisting users who wish to paint precise class labels for other images and videos. We evaluate the relevance of the database by measuring the performance of an algorithm from each of three distinct domains: multi-class object recognition, pedestrian detection, and label propagation.
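Dense per-pixel labels of this kind are typically consumed through a confusion matrix, from which global and class-average pixel accuracy follow. The sketch below is illustrative rather than the official CamVid protocol (which varies across papers); treating labels at or above `num_classes` as void is an assumption:

```python
import numpy as np

def per_class_accuracy(pred, gt, num_classes):
    """Evaluate a predicted label map against dense ground truth.

    Builds a confusion matrix (rows = ground truth, cols = prediction)
    and reports global pixel accuracy and class-average accuracy.
    Pixels whose ground-truth label is >= num_classes are treated as
    void and ignored (an assumption, not the CamVid specification).
    """
    mask = gt < num_classes
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    # Unbuffered accumulation so repeated (gt, pred) pairs all count.
    np.add.at(cm, (gt[mask], pred[mask]), 1)
    global_acc = np.trace(cm) / cm.sum()
    with np.errstate(invalid="ignore"):
        class_acc = np.diag(cm) / cm.sum(axis=1)  # NaN for absent classes
    return float(global_acc), float(np.nanmean(class_acc))
```

Class-average accuracy matters for driving data because frequent classes (road, sky) would otherwise dominate the score.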
The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset have become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state of the art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three-year history of the challenge, and proposes directions for future improvement and extension.
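As background on the evaluation procedure: VOC-style detection scoring ranks detections by confidence, accumulates precision and recall, and integrates the precision envelope to obtain average precision (AP). The sketch below assumes IoU matching has already produced true-positive/false-positive flags, which the real evaluation (with IoU thresholds and "difficult" objects) computes itself:

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """Toy AP in the spirit of the VOC detection metric.

    scores:            confidence per detection
    is_true_positive:  1 if the detection matched an unclaimed
                       ground-truth box, else 0 (assumed precomputed)
    num_gt:            number of ground-truth objects
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / num_gt
    precision = cum_tp / (cum_tp + cum_fp)
    # Make precision monotonically non-increasing (the envelope),
    # then integrate over recall -- the "all points" VOC style.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

Note the challenge's earlier years used 11-point interpolation instead of the envelope integral shown here.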
Multimedia anomaly datasets play a crucial role in automated surveillance. They have a wide range of applications, from anomalous object/situation detection to the detection of life-threatening events. The field has been receiving substantial research interest for over 1.5 decades, and consequently a growing number of datasets dedicated to anomalous action and object detection have been created. Access to these public anomaly datasets enables researchers to build and compare a variety of anomaly detection frameworks on the same input data. This paper presents a comprehensive survey of various video, audio, and audio-visual datasets for anomaly-detection-based applications. The survey aims to address the lack of comprehensive comparison and analysis of public multimedia datasets for anomaly detection. It can also help researchers select the best available dataset for benchmarking their frameworks. In addition, we discuss gaps in the existing datasets and offer insights into future directions for developing multimodal anomaly detection datasets.
Cityscapes dataset (TU Dresden, www.cityscapes-dataset.net):
- train/val: fine annotation, 3475 images
- train: coarse annotation, 20 000 images
- test: fine annotation, 1525 images
Thanks to the availability of affordable wearable cameras and large annotated datasets, applications of egocentric vision (also known as first-person vision, FPV) have flourished in the past few years. The position of the wearable camera, typically mounted on the head, allows recording exactly what the camera wearer has in front of them, in particular their hands and the manipulated objects. This intrinsic advantage enables the study of hands from multiple perspectives: localizing hands and their parts in images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on hands using egocentric vision, categorizing existing approaches into: localization (where are the hands or their parts?); interpretation (what are the hands doing?); and applications (e.g., systems that use egocentric hand cues to solve a specific problem). A list of the most prominent datasets with hand-based annotations is also provided.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges…
Video segmentation, i.e., grouping video frames into multiple segments or objects, plays a key role in a wide range of practical applications, such as visual-effects assistance in movies, scene understanding in autonomous driving, and virtual background creation in video conferencing, to name a few. Recently, owing to the renaissance of connectionism in computer vision, numerous deep-learning-based approaches dedicated to video segmentation have emerged and delivered compelling performance. In this survey, we comprehensively review two fundamental lines of research in this field, namely generic object segmentation (of unknown categories) in videos and video semantic segmentation, by introducing their respective task settings, background concepts, perceived needs, development history, and main challenges. We also provide a detailed overview of representative literature on both methods and datasets. In addition, we present quantitative performance comparisons of the reviewed methods on benchmark datasets. Finally, we point out a set of unsolved open issues in this field and suggest possible opportunities for further research.
Computer vision has played a major role in intelligent transportation systems (ITS) and traffic surveillance. Alongside the rapidly growing number of automated vehicles and crowded cities, automated and advanced traffic management systems (ATMS) can be realized by implementing deep neural networks on the video surveillance infrastructure. In this study, we present a practical platform for real-time traffic monitoring, including 3D vehicle/pedestrian detection, speed detection, trajectory estimation, congestion detection, and monitoring of vehicle-pedestrian interactions, all using a single CCTV traffic camera. We adapt a customized YOLOv5 deep neural network model for vehicle/pedestrian detection together with an enhanced SORT tracking algorithm. A hybrid satellite-ground-based inverse perspective mapping (SG-IPM) method is also developed for automatic camera calibration, leading to accurate 3D object detection and visualization. We further develop a hierarchical traffic modelling solution based on short- and long-term temporal video data streams to understand traffic flow, bottlenecks, and risky spots for vulnerable road users. Several experiments on real-world scenarios, along with comparisons against the state of the art, are conducted using various traffic monitoring datasets, including MIO-TCD, UA-DETRAC, and GRAM-RTM, collected from highways, intersections, and urban areas under different lighting and weather conditions.
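As background to the calibration step: a plain inverse perspective mapping warps image points onto the road plane through a homography, from which metric displacement (and hence speed, given timestamps) follows. The DLT sketch below is generic textbook material, not the paper's SG-IPM method; the landmark correspondences are assumed given:

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography mapping image (pixel) points to
    ground-plane (metric) points via the Direct Linear Transform.
    In a traffic setup, src_pts would be pixels of landmarks (e.g.
    lane-marking corners) and dst_pts their known road-plane
    coordinates in metres. Requires >= 4 non-collinear points."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of A (last right
    # singular vector), fixed to scale H[2, 2] = 1.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_ground(H, x, y):
    """Project a pixel onto the road plane (homogeneous divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

Speed then falls out as the road-plane distance between a tracked object's projections in consecutive frames, divided by the frame interval.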
Understanding human-object interactions is fundamental in first-person vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information for modelling such interactions effectively. In the last years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used off-the-shelf or whether more domain-specific investigation should be carried out. This paper aims to provide an answer to such questions. We present the first systematic study of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behaviour and point out possible research directions. Despite the difficulties, we show that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We anticipate that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture. (A preliminary version of this paper appeared at the IEEE International Conference on Computer Vision; Baker et al. 2007.)
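The two standard accuracy measures used in this line of benchmarks can be sketched directly: average endpoint error, and the angular error of Barron et al. computed between space-time vectors (u, v, 1). The (H, W, 2) array layout is an assumption:

```python
import numpy as np

def endpoint_error(flow_est, flow_gt):
    """Average endpoint error (EPE): mean Euclidean distance between
    estimated and ground-truth flow vectors. Arrays: (H, W, 2)."""
    diff = flow_est - flow_gt
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))

def angular_error(flow_est, flow_gt):
    """Average angular error (AE): angle between the homogeneous
    space-time vectors (u, v, 1), in radians. The appended 1 keeps
    the measure finite for zero flow."""
    u_e, v_e = flow_est[..., 0], flow_est[..., 1]
    u_g, v_g = flow_gt[..., 0], flow_gt[..., 1]
    num = u_e * u_g + v_e * v_g + 1.0
    den = np.sqrt((u_e ** 2 + v_e ** 2 + 1.0) * (u_g ** 2 + v_g ** 2 + 1.0))
    # Clip guards against arccos domain errors from rounding.
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

EPE is the measure most later benchmarks report, since AE penalizes errors in small flows disproportionately.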
A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available: current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.
This paper presents a new large-scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high-quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large-scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems across a wide range of scenarios and conditions, such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems; as such, the highest-performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large-scale video-based training data.
The task of assigning semantic classes and track identities to every pixel in a video is called video panoptic segmentation. Our work is the first that targets this task in a real-world setting requiring dense interpretation in both the spatial and temporal domains. As the ground truth for this task is difficult to obtain, however, existing datasets are either constructed synthetically or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark comprising two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a test bed for studying long-term pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric, Segmentation and Tracking Quality (STQ), which fairly balances the semantic and tracking aspects of this task and is better suited for evaluating sequences of arbitrary length. Finally, we provide several baselines to evaluate the status of existing methods on this new and challenging dataset. We have made our datasets, metric, benchmark server, and baselines publicly available, and hope this will inspire future research.
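The headline structure of STQ is a geometric mean of an association (tracking) term and a semantic term, so a method must do well on both to score highly. The sketch below is a simplification: only the combination step and a basic class-averaged-IoU semantic term are shown, whereas the official metric defines the association quality over whole sequences and excludes void pixels:

```python
import numpy as np

def semantic_quality(pred, gt, num_classes):
    """Class-averaged IoU over all labeled pixels -- a simplified
    stand-in for the SQ term of STQ (no void handling here)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

def stq(aq, sq):
    """Segmentation and Tracking Quality: geometric mean of the
    association quality (AQ) and semantic quality (SQ)."""
    return float(np.sqrt(aq * sq))
```

The geometric mean is the key design choice: unlike an arithmetic mean, a near-zero score on either aspect drags the combined score toward zero.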
Smart city applications such as intelligent traffic routing or accident prevention rely on computer vision methods for exact vehicle localization and tracking. Due to the scarcity of accurately labeled data, detecting and tracking vehicles in 3D from multiple cameras has proven challenging to explore. We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views. Unlike existing datasets, which only provide tracking ground truth for 2D bounding boxes, our dataset additionally contains perfect labels for 3D bounding boxes in camera and world coordinates, depth estimation, and instance, semantic, and panoptic segmentation. The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes, making it the most extensive dataset for multi-target multi-camera tracking to date. We provide baselines for detection, vehicle re-identification, and single-camera tracking. Code and data are publicly available.
Judging by popular and generic computer vision challenges, such as ImageNet or PASCAL VOC, neural networks have proven to be exceptionally accurate in recognition tasks. However, state-of-the-art accuracy often comes at a high computational price, requiring hardware acceleration to achieve real-time performance, while use cases such as smart cities require images from fixed cameras to be analyzed in real time. Due to the amount of network bandwidth these streams would generate, we cannot rely on offloading computation to a centralized cloud. Instead, a distributed edge cloud is expected to process the images locally. However, the edge is, by nature, resource-constrained, which puts a limit on the computational complexity that can be executed there. Nonetheless, a meeting point between the edge and accurate real-time video analytics is needed. Specializing lightweight models on a per-camera basis may help, but as the number of cameras grows it quickly becomes unfeasible unless the process is automated. In this paper, we present and evaluate COVA (Contextually Optimized Video Analytics), a framework to assist in the automatic specialization of models for video analytics in edge cameras. COVA automatically improves the accuracy of lightweight models through specialization. Moreover, we discuss and review each step involved in the process to understand the different trade-offs that each one entails. In addition, we show how the sole assumption of static cameras allows us to make a series of considerations that greatly simplify the scope of the problem. Finally, experiments show that state-of-the-art models, i.e., those able to generalize to unseen environments, can be effectively used as teachers to tailor smaller networks to a specific context, improving accuracy at a constant computational cost. Results show that COVA can improve the accuracy of pre-trained models by an average of 21%.
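The teacher-student specialization loop described above can be sketched as follows. The `predict`/`fit` interfaces and the confidence-threshold filtering are hypothetical stand-ins for real detector APIs, not COVA's actual code; the sketch only captures the idea that a heavy teacher auto-labels one static camera's frames and a lightweight student is fine-tuned on those pseudo-labels:

```python
def specialize(student, teacher, frames, conf_thresh=0.8, epochs=5):
    """Sketch of automatic per-camera model specialization.

    teacher: a large, generalizing detector with a hypothetical
             .predict(frame) -> list of boxes carrying a .score
    student: a lightweight detector with a hypothetical .fit(dataset)
    frames:  images sampled from a single static camera
    """
    # 1) Auto-label: keep only the teacher's confident detections,
    #    and drop frames with nothing confidently labeled.
    dataset = []
    for frame in frames:
        boxes = [b for b in teacher.predict(frame) if b.score >= conf_thresh]
        if boxes:
            dataset.append((frame, boxes))
    # 2) Fine-tune the student on the pseudo-labeled dataset; its
    #    inference cost stays constant while accuracy improves for
    #    this camera's specific context.
    for _ in range(epochs):
        student.fit(dataset)
    return student
```

The static-camera assumption is what makes the pseudo-labels trustworthy enough: background, viewpoint, and typical object appearance stay fixed, so the student only needs to master one narrow visual context.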
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which come with new challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced, and cooked, where we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife, pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Alongside the annotations, we introduce three challenges in video object segmentation, interaction understanding, and long-term reasoning. For data, code, and leaderboards: http://epic-kitchens.github.io/visor
Figure 1: We introduce datasets for 3D tracking and motion forecasting with rich maps for autonomous driving. Our 3D tracking dataset contains sequences of LiDAR measurements, 360° RGB video, front-facing stereo (middle-right), and 6-DOF localization. All sequences are aligned with maps containing lane center lines (magenta), driveable region (orange), and ground height. Sequences are annotated with 3D cuboid tracks (green). A wider map view is shown in the bottom-right.
Surround-view cameras are a primary sensor for automated driving, used for near-field perception. They are among the most commonly used sensors in commercial vehicles, primarily for parking visualization and automated parking. Four fisheye cameras with a 190° field of view cover the 360° around the vehicle. Due to their high radial distortion, standard algorithms do not extend easily to them. Previously, we released the first public fisheye surround-view dataset, named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth in real recordings. Secondly, WoodScape did not have all four cameras annotated simultaneously, in order to sample diverse frames; this, however, means that multi-camera algorithms producing a unified output in bird's-eye-view space could not be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in the CARLA simulator matching WoodScape's configuration and created SynWoodScape.
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper presents an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data needing to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras may be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with violations of rules and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with an emphasis on practical deployment.