Pedestrian detection is a cornerstone of many vision-based applications, ranging from object tracking to video surveillance and, more recently, autonomous driving. With the rapid development of deep learning in object detection, pedestrian detection has achieved very good performance in the traditional single-dataset training and evaluation setting. However, in this study of a wide range of pedestrian detectors, we show that current pedestrian detectors handle even small domain shifts poorly under cross-dataset evaluation. We attribute this limited generalization to two main factors: the methods and the current sources of data. Regarding the methods, we show that biases present in the design choices of current pedestrian detectors (e.g., anchor settings) are a major contributor to the limited generalization. Most modern pedestrian detectors are tailored to the target dataset, where they achieve high performance in the traditional single-dataset training and testing pipeline, but suffer degraded performance when evaluated through cross-dataset evaluation. Consequently, general-purpose object detectors, by virtue of their generic design, perform better than state-of-the-art pedestrian detectors in cross-dataset evaluation. As for the data, we show that autonomous driving benchmarks are inherently monotonous; that is, they are not diverse in scenarios or pedestrian densities. Therefore, benchmarks curated by crawling the web, which contain diverse and dense scenarios, are an effective source of pre-training for providing more robust representations. Accordingly, we propose a progressive fine-tuning strategy that improves generalization. Code and models can be accessed at https://github.com/hasanirtiza/pedestron.
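To make the progressive fine-tuning idea concrete, the sketch below trains through a sequence of datasets ordered from a diverse web-crawled source toward the target benchmark. The staging, optimizer, and learning-rate schedule are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def progressive_finetune(model, stage_loaders, epochs_per_stage=2, lr=1e-3):
    """Fine-tune through a sequence of data loaders, ordered from a diverse
    web-crawled source toward the target benchmark. All hyperparameters,
    including the per-stage learning-rate decay, are illustrative only."""
    for loader in stage_loaders:  # e.g. [crawled_source, driving_target]
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs_per_stage):
            for images, targets in loader:
                loss = model(images, targets)  # assumes the model returns its loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        lr *= 0.5  # train more gently as the data gets closer to the target domain
    return model
```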
Two-stage detectors are state-of-the-art in object detection as well as pedestrian detection. However, current two-stage detectors are inefficient, as they perform bounding-box regression in multiple steps, namely in the region proposal network as well as in the bounding box head. Moreover, the anchor-based region proposal network is computationally expensive to train. We propose F2DNet, a novel two-stage detection architecture that eliminates the redundancy of current two-stage detectors by replacing the region proposal network with our Focal Detection Network and the bounding box head with our Fast Suppression Head. We benchmark F2DNet on top pedestrian detection datasets, thoroughly compare it against existing state-of-the-art detectors, and conduct cross-dataset evaluation to test the generalizability of our model to unseen data. Our F2DNet achieves 8.7%, 2.2%, and 6.1% MR-2 on the CityPersons, Caltech Pedestrian, and EuroCity Persons datasets, respectively, when trained on a single dataset, and reaches 20.4% and 26.2% MR-2 in the heavy-occlusion setting of the Caltech Pedestrian and CityPersons datasets when using progressive fine-tuning. Furthermore, F2DNet has significantly lower inference time compared to the current state-of-the-art. Code and trained models will be available at https://github.com/abdulhannankhan/f2dnet.
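The MR-2 figures above denote the log-average miss rate over the false-positives-per-image (FPPI) range [10^-2, 10^0], the standard pedestrian detection metric (lower is better). A minimal NumPy sketch of the computation, assuming fppi and miss_rate arrays sorted by increasing FPPI:

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Log-average miss rate (MR-2): sample the miss rate at nine FPPI
    reference points evenly spaced in log space over [1e-2, 1e0], then
    take their geometric mean. Inputs are sorted by increasing FPPI."""
    refs = np.logspace(-2.0, 0.0, num=9)
    sampled = []
    for ref in refs:
        idx = np.flatnonzero(fppi <= ref)
        # miss rate at the largest FPPI not exceeding the reference point;
        # if the detector never operates at so few false positives, count a full miss
        sampled.append(miss_rate[idx[-1]] if idx.size else 1.0)
    return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))
```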
Object detection generally requires sliding-window classifiers in traditional approaches or anchor-box-based predictions in modern deep learning methods. However, either of these approaches requires tedious configuration of the boxes. In this paper, we provide a new perspective in which detecting objects is cast as a high-level semantic feature detection task. Like edges, corners, blobs, and other feature detectors, the proposed detector scans for feature points over the whole image, a task for which convolution is naturally suited. Unlike these traditional low-level features, however, the proposed detector goes for a higher-level abstraction: we look for central points where there are objects, and modern deep models are already capable of such high-level semantic abstraction. Beyond blob detection, we also predict the scales of the central points, which is likewise a straightforward convolution. In this paper, pedestrian and face detection is thus simplified to a straightforward center and scale prediction task through convolutions. In this way, the proposed method enjoys a box-free setting. Though structurally simple, it presents competitive accuracy on several challenging benchmarks, including pedestrian detection and face detection. Furthermore, a cross-dataset evaluation is performed, demonstrating the superior generalization ability of the proposed method. Code and models can be accessed at https://github.com/liuwei16/csp and https://github.com/hasanirtiza/pedestron.
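A minimal PyTorch sketch of the box-free center-and-scale idea follows; the layer widths and the fixed pedestrian aspect ratio mentioned afterwards are illustrative assumptions, not the exact architecture from the repositories above.

```python
import torch
import torch.nn as nn

class CenterScaleHead(nn.Module):
    """Box-free head in the spirit of center-and-scale prediction: one
    1x1 convolution scores every location as an object center, another
    regresses log-height. Channel widths are illustrative only."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, 256, 3, padding=1)
        self.center = nn.Conv2d(256, 1, 1)  # center-point heatmap
        self.scale = nn.Conv2d(256, 1, 1)   # log-height per location

    def forward(self, feats):
        x = torch.relu(self.reduce(feats))
        return torch.sigmoid(self.center(x)), self.scale(x)
```

At inference, boxes would be recovered by taking local maxima of the center heatmap and reading the height off the scale map, deriving the width from a fixed aspect ratio in the pedestrian case.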
With the rise of deep convolutional neural networks, object detection has achieved prominent advances in the past years. However, such prosperity cannot mask the unsatisfactory situation of Small Object Detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representations caused by the intrinsic structure of small targets. In addition, large-scale datasets for benchmarking small object detection methods remain a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on driving and aerial scenarios, respectively. SODA-D includes 24704 high-quality traffic images and 277596 instances of 9 categories. For SODA-A, we harvest 2510 high-resolution aerial images and annotate 800203 instances over 9 classes. To the best of our knowledge, the proposed datasets are the first-ever attempt at large-scale benchmarks with a vast collection of exhaustively annotated instances tailored for multi-category SOD. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks to facilitate the development of SOD and spawn more breakthroughs in this field. Datasets and codes will be available soon at: \url{https://shaunyuan22.github.io/soda}.
Multi-object tracking (MOT) has been dominated by tracking-by-detection approaches, owing to the success of convolutional neural networks (CNN) in detection over the last decade. With the release of datasets and benchmarking websites, research attention has shifted toward achieving the best accuracy on generic scenarios, including the re-identification (ReID) of objects while tracking. In this study, we narrow the scope of MOT to surveillance by providing a dedicated pedestrian dataset and focusing on in-depth analyses of well-performing multi-object trackers, in order to observe the strengths and weaknesses of state-of-the-art (SOTA) techniques for real-world applications. To this end, we introduce the SOMPT22 dataset: a new set of annotated short videos for multi-person tracking, captured from static cameras mounted on poles 6-8 meters high for city surveillance. Compared to public MOT datasets, this provides a more focused and specific benchmark for MOT in outdoor surveillance. We analyze MOT trackers on this new dataset, classifying them as one-shot or two-stage according to how they employ detection and ReID networks. Experimental results on our new dataset indicate that SOTA is still far from high efficiency, and that one-shot trackers are good candidates for unifying fast execution and accuracy, with competitive performance. The dataset will be available at: sompt22.github.io
Benchmarks such as COCO play a crucial role in object detection. However, existing benchmarks are insufficient in scale variation, and their protocols are inadequate for fair comparison. In this paper, we introduce the Universal-Scale object detection Benchmark (USB). USB has variations in object scales and image domains by incorporating COCO with the recently proposed Waymo Open Dataset and Manga109-s dataset. To enable fair comparison and inclusive research, we propose training and evaluation protocols. They have multiple divisions for training epochs and evaluation image resolutions, like weight classes in sports, and compatibility across training protocols, like the backward compatibility of the Universal Serial Bus. Specifically, we request participants to report results not only with higher protocols (longer training) but also with lower protocols (shorter training). Using the proposed benchmark and protocols, we analyzed eight methods and found shortcomings of existing COCO-biased methods. The code is available at https://github.com/shinya7y/universenet.
In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at https://github.com/zhaoweicai/cascade-rcnn (Caffe) and https://github.com/zhaoweicai/Detectron-Cascade-RCNN (Detectron).
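To make the role of the IoU threshold concrete, here is a small NumPy illustration of the positive/negative labeling rule; each cascade stage would rerun it on the resampled proposals with an increased threshold (e.g., 0.5, 0.6, 0.7). This is a sketch of the rule, not the reference implementation.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one [x1, y1, x2, y2] box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def assign_labels(proposals, gt_boxes, iou_thr):
    """A proposal is positive iff its best IoU with any ground-truth box
    reaches the stage threshold; later cascade stages raise iou_thr."""
    best = np.array([iou(p, gt_boxes).max() for p in proposals])
    return (best >= iou_thr).astype(int)
```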
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Aiming at facilitating a real-world, ever-evolving, and scalable autonomous driving system, we present a large-scale dataset for standardizing the evaluation of different self-supervised and semi-supervised approaches that learn from raw data, the first and largest such dataset to date. Current autonomous driving systems rely heavily on 'perfect' visual perception models (e.g., detection) trained with extensive annotated data to ensure safety. However, it is unrealistic to elaborately label instances of all scenarios and circumstances (e.g., night, extreme weather, cities) when deploying a robust autonomous driving system. Motivated by recent advances in self-supervised and semi-supervised learning, a promising direction is to learn a robust detection model by collaboratively exploiting large-scale unlabeled data and few labeled data. Existing datasets either provide only a small amount of data or cover limited domains with full annotation, hindering the exploration of large-scale pre-trained models. Here, we release a large-scale 2D self/semi-supervised object detection dataset for autonomous driving, named SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories. To improve diversity, the images were collected over 27833 driving hours under diverse weather conditions, periods, and location scenes across 32 different cities. We provide extensive experiments and an in-depth analysis of existing popular self/semi-supervised methods, and report some interesting findings in the autonomous driving scope. Experiments show that SODA10M can serve as a promising pre-training dataset for different self-supervised learning methods, yielding superior performance when fine-tuning on different downstream tasks (i.e., detection, semantic/instance segmentation) in the autonomous driving domain. More information can be found at https://soda-2d.github.io.
Face detection is the task of searching all possible regions of an image for faces and locating any faces that are present. Many applications, including face recognition, facial expression recognition, face tracking, and head-pose estimation, assume that both the location and the size of faces are known in the image. In recent decades, researchers have created many typical and efficient face detectors, from the Viola-Jones face detector to current CNN-based ones. However, with the tremendous increase in images and videos featuring variations in face scale, appearance, expression, occlusion, and pose, traditional face detectors are challenged to detect the diverse 'in the wild' faces. The emergence of deep learning techniques has brought remarkable breakthroughs in detection, at the price of a considerable amount of computation. This paper introduces representative deep-learning-based methods and presents a deep and thorough analysis in terms of accuracy and efficiency. We further compare and discuss the popular and challenging datasets and their evaluation metrics. A comprehensive comparison of several successful deep-learning-based face detectors is conducted to reveal their efficiency using two metrics: FLOPs and latency. The paper can guide the selection of appropriate face detectors for different applications, as well as the development of more efficient and accurate detectors.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates, even when complex ensembles are constructed that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, have been introduced to address the problems existing in traditional architectures. These models differ in network architecture, training strategy, optimization function, etc. In this paper, we provide a review of deep learning based object detection frameworks. Our review begins with a brief introduction to the history of deep learning and its representative tool, namely the Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful cues to model such interactions effectively. In the last years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used 'off-the-shelf' or whether more domain-specific investigations should be carried out. This paper aims to provide answers to such questions. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyzes the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite their difficulties, we prove that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
Contemporary deep-learning object detection methods for autonomous driving typically presume fixed categories of common traffic participants, such as pedestrians and cars. Most existing detectors are unable to detect rare objects and corner cases (e.g., a dog crossing a street), which can lead to severe accidents in some situations, making the timeline for truly reliable, real-world-applicable autonomous driving uncertain. One main reason impeding the development of truly reliable self-driving systems is the lack of public datasets for evaluating the performance of object detectors on corner cases. Hence, we introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors. The dataset consists of 1500 carefully selected real-world driving scenes, each containing four object-level corner cases (on average), spanning more than 30 object categories. On CODA, the performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly, to no more than 12.8% in mAR. Moreover, we experiment with a state-of-the-art open-world object detector and find that it too fails to reliably identify the novel objects in CODA, suggesting that a robust perception system for autonomous driving may still be far from reach. We hope our CODA dataset will contribute to further research on reliable detection for real-world autonomous driving. Our dataset will be released at https://coda-dataset.github.io.
The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance thanks to their unprecedented advantages in scale, mobility, deployment, and covert observation capabilities. This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective. It aims to provide readers with an in-depth systematic review and technical analysis of the current state of aerial surveillance tasks using drones, UAVs, and other airborne platforms. The main object of interest is humans, where single or multiple subjects are to be detected, identified, tracked, re-identified, and have their behavior analyzed. More specifically, for each of these four tasks, we first discuss the unique challenges of performing them in an aerial setting compared to a ground-based setting. We then review and analyze the publicly available aerial datasets for each task, delve deep into the approaches in the aerial literature, and investigate how they currently address the aerial challenges. We conclude with a discussion of the missing gaps and open research questions to inform future research avenues.
Typical methods for pedestrian detection focus either on handling mutual occlusion among crowded pedestrians or on dealing with the various scales of pedestrians. Detecting pedestrians with substantial appearance diversity, such as different silhouettes, viewpoints, or dressing, remains a crucial challenge. Unlike most existing methods, we propose to perform contrastive learning to guide the feature learning in such a way that the semantic distance between pedestrians with different appearances in the learned feature space is minimized to eliminate the appearance diversity, while the distance between pedestrians and background is maximized. To facilitate the efficiency and effectiveness of contrastive learning, we construct an exemplar dictionary of representative pedestrian appearances as prior knowledge to build effective contrastive training pairs and thus guide contrastive learning. Moreover, the constructed exemplar dictionary is further leveraged to assess the quality of pedestrian proposals during inference by measuring the semantic distance between a proposal and the exemplar dictionary. Extensive experiments on both daytime and nighttime pedestrian detection validate the effectiveness of the proposed method.
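As a rough illustration of the dictionary-guided objective described above, the PyTorch sketch below pulls pedestrian proposal features toward exemplar appearances and pushes them away from background features with an InfoNCE-style loss; the paper's actual loss and pair construction may differ.

```python
import torch
import torch.nn.functional as F

def exemplar_contrastive_loss(ped_feats, bg_feats, exemplars, tau=0.07):
    """InfoNCE-style sketch: for each pedestrian feature (N, D), treat its
    best-matching exemplar (K, D) as the positive and background features
    (M, D) as negatives, minimizing pedestrian-exemplar distance while
    maximizing pedestrian-background distance. Illustration only."""
    ped = F.normalize(ped_feats, dim=1)
    pos = ped @ F.normalize(exemplars, dim=1).t() / tau  # (N, K)
    neg = ped @ F.normalize(bg_feats, dim=1).t() / tau   # (N, M)
    logits = torch.cat([pos, neg], dim=1)
    return F.cross_entropy(logits, pos.argmax(dim=1))
```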
Pedestrian detection in the wild remains a challenging problem especially for scenes containing serious occlusion. In this paper, we propose a novel feature learning method in the deep learning framework, referred to as Feature Calibration Network (FC-Net), to adaptively detect pedestrians under various occlusions. FC-Net is based on the observation that the visible parts of pedestrians are selective and decisive for detection, and is implemented as a self-paced feature learning framework with a self-activation (SA) module and a feature calibration (FC) module. In a new self-activated manner, FC-Net learns features which highlight the visible parts and suppress the occluded parts of pedestrians. The SA module estimates pedestrian activation maps by reusing classifier weights, without any additional parameter involved, resulting in an extremely parsimonious model that reinforces the semantics of features, while the FC module calibrates the convolutional features for adaptive pedestrian representation in both pixel-wise and region-based ways. Experiments on CityPersons and Caltech datasets demonstrate that FC-Net improves detection performance on occluded pedestrians by up to 10% while maintaining excellent performance on non-occluded instances.
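The self-activation idea of reusing classifier weights with no extra parameters resembles class activation mapping; below is a hedged PyTorch sketch under that assumption, not FC-Net's actual code.

```python
import torch

def self_activation_map(feature_map, cls_weight):
    """Weight the (C, H, W) feature channels by the pedestrian classifier's
    (C,) weight vector and sum over channels, yielding a spatial map that
    highlights likely-visible pedestrian parts. CAM-like sketch only."""
    act = torch.relu((cls_weight[:, None, None] * feature_map).sum(dim=0))
    return act / (act.max() + 1e-6)  # normalize to [0, 1]
```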
Pedestrian detection in the wild remains a challenging problem, especially when the scene contains significant occlusion and/or low resolution of the pedestrians to be detected. Existing methods are unable to adapt to these difficult cases while maintaining acceptable performance. In this paper we propose a novel feature learning model, referred to as CircleNet, to achieve feature adaptation by mimicking the process humans use when looking at low-resolution and occluded objects: focusing on them again, at a finer scale, if they cannot be identified clearly the first time. CircleNet is implemented as a set of feature pyramids and uses weight-sharing path augmentation for better feature fusion. It targets reciprocating feature adaptation and iterative object detection using multiple top-down and bottom-up pathways. To take full advantage of the feature adaptation capability of CircleNet, we design an instance decomposition training strategy to focus on detecting pedestrian instances of various resolutions and different occlusion levels in each cycle. Specifically, CircleNet implements feature ensembling with the idea of hard negative boosting in an end-to-end manner. Experiments on two pedestrian detection datasets, Caltech and CityPersons, show that CircleNet improves the performance on occluded and low-resolution pedestrians by significant margins while maintaining good performance on normal instances.
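To illustrate the reciprocating top-down/bottom-up idea, here is a heavily hedged PyTorch sketch of a recirculating pyramid with one shared lateral convolution per level; the real CircleNet differs in detail, and all sizes here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecirculatingPyramid(nn.Module):
    """Sketch of recirculating feature adaptation: alternate top-down and
    bottom-up passes over a feature pyramid, reusing one lateral conv per
    level across cycles (weight sharing). Assumes all levels share the
    same channel count; illustration of the idea only."""
    def __init__(self, channels=256, levels=3, cycles=2):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(levels))
        self.cycles = cycles

    def forward(self, feats):  # feats: list of maps, fine -> coarse
        for _ in range(self.cycles):
            # top-down: upsample the coarser level and fuse into the finer one
            for i in range(len(feats) - 2, -1, -1):
                up = F.interpolate(feats[i + 1], size=feats[i].shape[-2:])
                feats[i] = torch.relu(self.lateral[i](feats[i] + up))
            # bottom-up: resample the finer level and fuse into the coarser one
            for i in range(1, len(feats)):
                down = F.interpolate(feats[i - 1], size=feats[i].shape[-2:])
                feats[i] = torch.relu(self.lateral[i](feats[i] + down))
        return feats
```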
This paper presents a new large scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20 datasets. The lack of large scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions such as variations in person density, actions being performed, weather, and time of day. \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large scale video based training data.
Recognizing soft-biometric pedestrian attributes is essential in video surveillance and fashion retrieval. Recent works have shown promising results on individual datasets. Nevertheless, how well these methods generalize under different attribute distributions, viewpoints, varying illumination, and low resolution remains barely understood, owing to the strong biases and varying attribute annotations of current datasets. To close this gap and support a systematic investigation, we present UPAR, the Unified Person Attribute Recognition dataset. It is based on four well-known person attribute recognition datasets: PA100K, PETA, RAPv2, and Market1501. We unify these datasets by providing 3.3M additional annotations to harmonize 40 important binary attributes across them. We thereby enable, for the first time, a study of generalizable pedestrian attribute recognition as well as attribute-based person retrieval. Owing to the vast differences in image distribution, pedestrian pose, scale, and occlusion, existing methods are greatly challenged in terms of both accuracy and efficiency. Furthermore, we develop strong baselines for PAR and attribute-based person retrieval, based on a thorough analysis of regularization methods. Our models achieve state-of-the-art performance in cross-domain and specialization settings on PA100K, PETA, RAPv2, Market1501-Attributes, and UPAR. We believe UPAR and our strong baselines will contribute to the artificial intelligence community and foster research on large-scale, generalizable attribute recognition systems.
Face detection is one of the most studied topics in the computer vision community. Much of the progress has been made possible by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that the WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance, and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that are worth further investigation. The dataset can be downloaded at: mmlab.ie.cuhk.edu.hk/projects/WIDERFace