Recognizing soft-biometric pedestrian attributes is essential in video surveillance and fashion retrieval. Recent works have shown promising results on individual datasets. However, their ability to generalize across different attribute distributions, viewpoints, varying illumination, and low resolutions remains poorly understood due to strong biases and varying attributes in current datasets. To close this gap and support a systematic investigation, we present UPAR, the Unified Person Attribute Recognition dataset. It is based on four well-known person attribute recognition datasets: PA100K, PETA, RAPv2, and Market1501. We unify those datasets by providing 3.3M additional annotations to harmonize 40 important binary attributes over 12 attribute categories across the datasets. Thereby, we enable research on generalizable pedestrian attribute recognition as well as attribute-based person retrieval for the first time. Due to the vast differences in image distributions, pedestrian poses, scales, and occlusions, existing approaches are greatly challenged both in terms of accuracy and efficiency. Furthermore, we develop a strong baseline for PAR and attribute-based person retrieval based on a thorough analysis of regularization methods. Our models achieve state-of-the-art performance in cross-domain and specialized settings on PA100K, PETA, RAPv2, Market1501-Attributes, and UPAR. We believe UPAR and our strong baseline will contribute to the artificial intelligence community and promote research on large-scale, generalizable attribute recognition systems.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the involved components in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis for closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning, and ranking optimization. With the performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize the open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate the Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
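Reading mINP as "the cost of finding all the correct matches", a minimal sketch of the computation might look as follows. This is our interpretation of the abstract, not the authors' reference code; the `mean_inp` helper and the toy ranked lists are made up for illustration:

```python
import numpy as np

def mean_inp(match_flags):
    """Compute mINP over a set of queries.

    match_flags: list of 1-D binary arrays; match_flags[i][r] == 1 if the
    gallery item at rank r for query i has the correct identity.
    """
    inps = []
    for flags in match_flags:
        positions = np.nonzero(flags)[0]
        if positions.size == 0:
            continue  # queries without any correct match are skipped here
        hardest_rank = positions[-1] + 1         # 1-based rank of the last correct match
        num_matches = positions.size             # number of ground-truth matches |G_i|
        inps.append(num_matches / hardest_rank)  # INP_i for this query
    return float(np.mean(inps))

# Toy usage: two queries, each with 3 true matches in its ranked gallery list.
ranked_hits = [np.array([1, 1, 0, 1, 0]),   # hardest match at rank 4 -> INP = 3/4
               np.array([1, 1, 1, 0, 0])]   # hardest match at rank 3 -> INP = 1
print(mean_inp(ranked_hits))                # 0.875
```

Intuitively, a query whose hardest true match is buried deep in the ranking scores a low INP, so mINP penalizes systems that retrieve most, but not all, matches early.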
Pedestrian attribute recognition aims to assign multiple attributes to one pedestrian image captured by video surveillance cameras. Although numerous methods have been proposed and tremendous progress has been made, we argue that it is time to step back and analyze the status quo of this area. We review and rethink the recent progress from three perspectives. First, given that there is no clear and complete definition of pedestrian attribute recognition, we formally define and distinguish it from other similar tasks. Second, based on the proposed definition, we expose the limitations of existing datasets, which violate academic norms and are inconsistent with the essential requirements of practical industry applications. Thus, we propose two datasets, PETA$_{ZS}$ and RAP$_{ZS}$, constructed following a zero-shot setting on pedestrian identities. In addition, we introduce several realistic criteria for future pedestrian attribute dataset construction. Finally, we re-implement existing state-of-the-art methods and introduce a strong baseline method to give reliable evaluations and fair comparisons. Experiments are conducted on four existing datasets and the two proposed datasets to measure the progress of pedestrian attribute recognition.
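To make the notion of a "strong baseline" for pedestrian attribute recognition concrete, here is a generic multi-label setup of the kind such baselines typically build on: an ImageNet-pretrained backbone plus one binary (sigmoid) output per attribute. This is a hedged sketch under assumed choices, not the paper's implementation; `NUM_ATTRIBUTES` and the input resolution are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_ATTRIBUTES = 35  # placeholder; set to the attribute count of the dataset at hand

class AttributeBaseline(nn.Module):
    """Minimal pedestrian-attribute baseline: pretrained backbone + multi-label head."""
    def __init__(self, num_attrs=NUM_ATTRIBUTES):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()            # keep the 2048-d pooled feature
        self.backbone = backbone
        self.head = nn.Linear(2048, num_attrs)

    def forward(self, x):
        return self.head(self.backbone(x))     # raw logits, one per attribute

model = AttributeBaseline()
criterion = nn.BCEWithLogitsLoss()             # independent binary decision per attribute

images = torch.randn(8, 3, 256, 128)           # typical pedestrian crop resolution (assumed)
targets = torch.randint(0, 2, (8, NUM_ATTRIBUTES)).float()
loss = criterion(model(images), targets)
loss.backward()
```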
Person re-identification (Re-ID) plays an important role in applications such as public security and video surveillance. Recently, learning from synthetic data, which benefits from the popularity of synthetic data engines, has attracted great attention. However, existing datasets are limited in quantity, diversity, and realism, and cannot be used efficiently for the Re-ID problem. To address this challenge, we manually construct a large-scale person dataset named FineGPR with fine-grained attribute annotations. Moreover, aiming to fully exploit the potential of FineGPR and promote efficient training from millions of synthetic data, we propose an attribute analysis pipeline called AOST, which dynamically learns the attribute distribution in the real domain, then eliminates the gap between synthetic and real-world data, and thus can be freely deployed to new scenarios. Experiments conducted on benchmarks demonstrate that FineGPR with AOST outperforms (or is on par with) existing real and synthetic datasets, which suggests its feasibility for the Re-ID task and proves the proverbial less-is-more principle. Our synthetic FineGPR dataset is publicly available at https://github.com/jeremyxsc/finegpr.
In recent years, with the increasing demand for public security and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the hot research topics in the computer vision field. The main research goal of person Re-ID is to retrieve persons with the same identity from different cameras. However, traditional person Re-ID methods require manual labeling of person targets, which consumes a lot of labor cost. With the widespread application of deep neural networks, many deep-learning-based person Re-ID methods have emerged. Therefore, this paper aims to help researchers understand the latest research results and future trends in the field. Firstly, we summarize several recently published person Re-ID surveys and complement them with the latest research methods to systematically classify deep-learning-based person Re-ID methods. Secondly, we propose a multi-dimensional taxonomy that, according to metric and representation learning, classifies deep-learning-based person Re-ID methods into four categories: deep metric learning, local feature learning, generative adversarial learning, and sequence feature learning. Furthermore, we subdivide the above four categories according to their methodologies and motivations, and discuss the advantages and limitations of some of the sub-categories. Finally, we discuss some challenges and possible research directions for person Re-ID.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
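The finding that thorough hyper-parameter tuning on held-out validation data yields a highly competitive cross-entropy baseline can be illustrated with a simple grid search. In this sketch, `train_model` and `evaluate` are hypothetical stand-ins for any training and scoring routines, and the grid values are arbitrary:

```python
import itertools
from sklearn.model_selection import train_test_split

def tune_baseline(X, y, train_model, evaluate):
    """Pick hyper-parameters on a held-out validation split, never on the test set."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y)
    grid = {"lr": [1e-1, 1e-2, 1e-3], "weight_decay": [1e-3, 1e-4, 1e-5]}
    best_score, best_cfg = -1.0, None
    for lr, wd in itertools.product(grid["lr"], grid["weight_decay"]):
        model = train_model(X_tr, y_tr, lr=lr, weight_decay=wd)
        score = evaluate(model, X_val, y_val)   # validation accuracy, not test
        if score > best_score:
            best_score, best_cfg = score, {"lr": lr, "weight_decay": wd}
    return best_cfg
```

The point of the benchmark result is that this unglamorous procedure, applied to the plain cross-entropy classifier, already matches most published specialized methods.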
Person re-identification (Re-ID) in scenarios with large spatial and temporal spans has not been fully explored. This is partially because existing benchmark datasets are mainly collected with limited spatial and temporal ranges, e.g., from videos recorded by cameras in a specific region of a campus. Such limited spatial and temporal ranges make it hard to simulate the difficulties of person Re-ID in real scenarios. In this work, we contribute a novel Large-scale Spatio-Temporal (LaST) person Re-ID dataset, including 10,862 identities with more than 228k images. Compared with existing datasets, LaST presents a more challenging and highly diverse Re-ID setting, with significantly larger spatial and temporal ranges. For instance, each person can appear in different cities or countries, in various time slots from daytime to night, and in different seasons from spring to winter. To the best of our knowledge, LaST is a novel person Re-ID dataset with the largest spatio-temporal ranges. Based on LaST, we verify its challenges by conducting a comprehensive performance evaluation of 14 Re-ID algorithms. We further propose an easy-to-implement baseline that works well on such a challenging Re-ID setting. We also verify that models pre-trained on LaST can generalize well on existing datasets with short-term and cloth-changing scenarios. We expect LaST to continuously inspire future works toward more realistic and challenging Re-ID tasks. More information about the dataset is available at https://github.com/shuxjweb/last.git.
Pedestrian detection is the cornerstone of many vision-based applications, ranging from object tracking to video surveillance and, more recently, autonomous driving. With the rapid development of deep learning in object detection, pedestrian detection has achieved very good performance in traditional single-dataset training and evaluation settings. However, in this study on a wide range of pedestrian detectors, we show that current pedestrian detectors poorly handle even small domain shifts in cross-dataset evaluation. We attribute the limited generalization to two main factors: the methods and the current data sources. Regarding the methods, we show that biases present in the design choices of current pedestrian detectors (e.g., anchor settings) are a major contributing factor to the limited generalization. Most modern pedestrian detectors are tailored towards a target dataset, where they achieve high performance in the traditional single training and testing pipeline, but suffer degraded performance when evaluated through cross-dataset evaluation. Consequently, general object detectors, due to their generic design, perform better in cross-dataset evaluation than state-of-the-art pedestrian detectors. As for the data, we show that autonomous driving benchmarks are inherently monotonous, that is, they are not diverse in scenarios and not dense in pedestrians. Therefore, benchmarks curated by crawling the web, which contain diverse and dense scenarios, are an effective source of pre-training for providing more robust representations. Accordingly, we propose a progressive fine-tuning strategy that improves generalization. Code and models can be accessed at https://github.com/hasanirtiza/pedestron.
We propose a task that we name portrait interpretation and construct a dataset named Portrait250K for it. Current research on portraits, such as human attribute recognition and person re-identification, has achieved many successes, but generally, it: 1) may lack mining of the interrelationship between various tasks and the possible benefits it may bring; 2) designs deep models specifically for each task, which is inefficient; 3) may be unable to meet the needs of a unified model and comprehensive perception in practical scenes. In this paper, the proposed portrait interpretation recognizes the perception of humans from a new systematic perspective. We divide the perception of portraits into three aspects, namely appearance, posture, and emotion, and design corresponding subtasks. Based on a multi-task learning framework, portrait interpretation requires a comprehensive description of the static attributes and dynamic states of portraits. To invigorate research on this new task, we construct a new dataset that contains 250,000 images labeled with identity, gender, age, physique, height, expression, and posture of the whole body and arms. Our dataset is collected from 51 movies and hence covers extensive diversity. Furthermore, we focus on representation learning for portrait interpretation and propose a baseline that reflects our systematic perspective. We also propose appropriate metrics for this task. Our experimental results demonstrate that combining the tasks related to portrait interpretation can yield benefits. Code and dataset will be made public.
Scene classification has established itself as a challenging research problem. Compared with images of individual objects, scene images can be much more semantically complex and abstract. Their difference mainly lies in the level of granularity of recognition. Yet, object recognition serves as a key pillar for the good performance of scene recognition, as the knowledge attained from object images can be used for accurate recognition of scenes. Existing scene recognition methods only take the category labels of scenes into consideration. However, we find that contextual information containing detailed local descriptions is also beneficial in allowing the scene recognition model to be more discriminative. In this paper, we aim to improve scene recognition using attribute and category label information encoded in objects. Based on the complementarity of attribute and category labels, we propose a Multi-task Attribute-Scene Recognition (MASR) network which learns a category embedding and at the same time predicts scene attributes. Attribute acquisition and object annotation are tedious and time-consuming tasks. We tackle the problem by proposing a partially supervised annotation strategy in which human intervention is significantly reduced. The strategy provides a much more cost-effective solution to real-world scenarios and requires considerably less annotation effort. Moreover, we re-weight attribute predictions considering the level of importance indicated by the object detection scores. Using the proposed method, we efficiently annotate attribute labels for four large-scale datasets and systematically investigate how scene and attribute recognition benefit from each other. Experimental results demonstrate that our approach achieves competitive performance compared with state-of-the-art methods.
Pedestrian attribute recognition in surveillance scenarios remains a challenging task due to inaccurate localization of specific attributes. In this paper, we propose a novel view-attribute localization method based on attention (VALA), which utilizes view information to guide the recognition process to focus on specific attributes, and an attention mechanism to localize attribute-corresponding regions. Specifically, view information is exploited by a view prediction branch to generate four view weights that represent the confidences of attributes from different views. The view weights are then delivered back to compose specific view-attributes, which participate in and supervise deep feature extraction. To explore the spatial locations of view-attributes, regional attention is introduced to aggregate spatial information and encode inter-channel dependencies of the view features. Subsequently, fine attribute-specific regions are localized, and regional weights for view-attributes from different spatial locations are obtained from the regional attention. The final view-attribute recognition result is obtained by combining the view weights with the regional weights. Experiments on three wide datasets (RAP, RAPv2, and PA-100K) demonstrate the effectiveness of our approach compared with state-of-the-art methods.
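As a schematic reading of this abstract (not the authors' architecture), the final combination of view confidences with view-specific attribute predictions could look roughly like the sketch below; the feature dimension, attribute count, and the four-view assumption are all placeholders:

```python
import torch
import torch.nn as nn

class ViewGatedHead(nn.Module):
    """Schematic sketch: gate per-view attribute logits by predicted view confidence."""
    def __init__(self, feat_dim=2048, num_attrs=51, num_views=4):
        super().__init__()
        self.view_branch = nn.Linear(feat_dim, num_views)             # view confidences
        self.attr_heads = nn.Linear(feat_dim, num_views * num_attrs)  # view-specific logits
        self.num_views, self.num_attrs = num_views, num_attrs

    def forward(self, feat):                                     # feat: (B, feat_dim)
        view_w = torch.softmax(self.view_branch(feat), dim=-1)   # (B, V), four view weights
        logits = self.attr_heads(feat).view(-1, self.num_views, self.num_attrs)
        return (view_w.unsqueeze(-1) * logits).sum(dim=1)        # (B, A) weighted combination

head = ViewGatedHead()
out = head(torch.randn(8, 2048))   # per-attribute logits after view weighting
```

The paper additionally derives regional weights from spatial attention before this combination; the sketch only illustrates the view-weighting half of the idea.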
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
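Since the code and weights are released, zero-shot transfer can be tried directly with the published `clip` package. The snippet below follows the repository's documented API; `photo.jpg` and the label set are placeholder inputs:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
labels = ["a dog", "a cat", "a pedestrian crossing the street"]
text = clip.tokenize([f"a photo of {l}" for l in labels]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each candidate caption
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))  # highest probability = predicted class
```

No task-specific training is involved: changing the classifier is just a matter of editing the label strings, which is what makes the zero-shot transfer described above possible.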
When deploying person re-identification (ReID) models in safety-critical applications, it is pivotal to understand the robustness of the models against a diverse array of image corruptions. However, current evaluations of person ReID only consider the performance on clean datasets and ignore images in various corrupted scenarios. In this work, we comprehensively establish six ReID benchmarks for learning corruption-invariant representations. In the field of ReID, we are the first to conduct an exhaustive study on corruption-invariant learning on single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01. After reproducing and examining the robustness performance of recent ReID methods, we have some observations: 1) transformer-based models are more robust towards corrupted images than CNN-based models; 2) increasing the probability of random erasing (a commonly used augmentation method) hurts corruption robustness; 3) cross-dataset generalization improves as corruption robustness increases. By analyzing the above observations, we propose a strong baseline on both single- and cross-modality ReID datasets which achieves improved robustness against diverse corruptions. Our code is available at https://github.com/minghuichen43/cil-reid.
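Observation 2) is directly actionable: random erasing is usually enabled with a fairly high probability in ReID training pipelines, and the finding suggests validating a lower one when corruption robustness matters. A minimal torchvision sketch, with the resolution and probabilities as assumed values rather than the paper's settings:

```python
from torchvision import transforms

# Typical ReID augmentation pipeline; per the finding above, a high
# RandomErasing probability (e.g., p=0.5) may trade corruption robustness
# for clean accuracy, so a lower p is worth validating.
train_tf = transforms.Compose([
    transforms.Resize((256, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),                 # RandomErasing operates on tensors
    transforms.RandomErasing(p=0.1),       # reduced from the commonly used p=0.5
])
```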
Fine-grained image analysis (FGIA) is a longstanding and fundamental problem in computer vision and pattern recognition, and underpins a diverse set of real-world applications. The task of FGIA targets analyzing visual objects from subordinate categories, e.g., species of birds or models of cars. The small inter-class and large intra-class variation inherent to fine-grained analysis makes it a challenging problem. Capitalizing on advances in deep learning, in recent years we have witnessed remarkable progress in deep-learning-powered FGIA. In this paper we present a systematic survey of these advances, where we attempt to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas, fine-grained image recognition and fine-grained image retrieval. In addition, we also review other key issues of FGIA, such as publicly available benchmark datasets and related domain-specific applications. We conclude by highlighting several research directions and open problems which need further exploration by the community.
This paper presents a new large-scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high-quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large-scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large-scale video-based training data.
Recently, Person Re-Identification (Re-ID) has received a lot of attention. Large datasets containing labeled images of various individuals have been released, allowing researchers to develop and test many successful approaches. However, when such Re-ID models are deployed in new cities or environments, the task of searching for people within a network of security cameras is likely to face an important domain shift, thus resulting in decreased performance. Indeed, while most public datasets were collected in a limited geographic area, images from a new city present different features (e.g., people's ethnicity and clothing style, weather, architecture, etc.). In addition, the whole frames of the video streams must be converted into cropped images of people using pedestrian detection models, which behave differently from the human annotators who created the dataset used for training. To better understand the extent of this issue, this paper introduces a complete methodology to evaluate Re-ID approaches and training datasets with respect to their suitability for unsupervised deployment for live operations. This method is used to benchmark four Re-ID approaches on three datasets, providing insight and guidelines that can help to design better Re-ID pipelines in the future.
Human gait is considered a unique biometric identifier which can be acquired in a covert manner at a distance. However, models trained on existing public-domain gait datasets, which are captured in controlled scenarios, suffer drastic performance declines when applied to real-world unconstrained gait data. On the other hand, video person re-identification techniques have achieved promising performance on large-scale publicly available datasets. Given the diversity of clothing characteristics, clothing cues are not reliable for person recognition in general. So it is actually not clear why state-of-the-art person re-identification methods work as well as they do. In this paper, we construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, which consists of 1,404 persons walking in an unconstrained manner. Based on this dataset, a consistent and comparative study between gait recognition and person re-identification can be carried out. Given that our experimental results suggest that current gait recognition approaches, which are designed for data collected in controlled scenarios, are inappropriate for real surveillance scenarios, we propose a novel gait recognition method named RealGait. Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and the underlying gait pattern may be the true reason why video person re-identification works in practice.
Pretraining is a dominant paradigm in computer vision. Generally, supervised ImageNet pretraining is commonly used to initialize the backbones of person re-identification (Re-ID) models. However, recent works show a surprising result that CNN-based pretraining on ImageNet has limited impacts on Re-ID system due to the large domain gap between ImageNet and person Re-ID data. To seek an alternative to traditional pretraining, here we investigate semantic-based pretraining as another method to utilize additional textual data against ImageNet pretraining. Specifically, we manually construct a diversified FineGPR-C caption dataset for the first time on person Re-ID events. Based on it, a pure semantic-based pretraining approach named VTBR is proposed to adopt dense captions to learn visual representations with fewer images. We train convolutional neural networks from scratch on the captions of FineGPR-C dataset, and then transfer them to downstream Re-ID tasks. Comprehensive experiments conducted on benchmark datasets show that our VTBR can achieve competitive performance compared with ImageNet pretraining - despite using up to 1.4x fewer images, revealing its potential in Re-ID pretraining.
Marine ecosystems and their fish habitats are becoming increasingly important due to their integral role in providing a valuable food source and conservation outcomes. Because of their remote and difficult-to-access nature, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data, which cannot be efficiently analyzed by current manual processing methods that rely on a human observer. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring remains largely unexplored. In this paper, we provide a tutorial that covers the key concepts of DL, helping the reader grasp a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for how DL algorithms should be developed for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who would like to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving to facilitate their research efforts. At the same time, it is suitable for computer scientists who would like to survey state-of-the-art DL-based methods for fish habitat monitoring.
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.