Person recognition at a distance entails recognizing the identity of an individual appearing in images or videos collected by long-range imaging systems such as drones or surveillance cameras. Despite recent advances in deep convolutional neural networks (DCNNs), this remains challenging. Images and videos collected by long-range cameras often suffer from atmospheric turbulence, blur, low resolution, unconstrained poses, and poor illumination. In this paper, we provide a brief survey of recent advances in person recognition at a distance. In particular, we review recent work in multi-spectral face verification, person re-identification, and gait-based analysis techniques. Furthermore, we discuss the merits and drawbacks of existing approaches and identify important yet underexplored challenges for deploying remote person recognition systems in the wild.
In recent years, visible-spectrum face verification systems have been shown to match the performance of experienced forensic examiners. However, such systems are ineffective in low-light and nighttime conditions. Thermal face imagery, which captures body heat emissions, effectively augments the visible spectrum, capturing discriminative facial features in scenes with limited illumination. Due to the increased cost and difficulty of obtaining diverse, paired thermal and visible-spectrum datasets, few algorithms and large-scale benchmarks for low-light recognition are available. This paper presents an algorithm that achieves state-of-the-art performance on both the ARL-VTF and TUFTS multi-spectral face datasets. Importantly, we study the impact of face alignment, pixel-level correspondence, and identity classification with label smoothing for multi-spectral face synthesis and verification. We show that our proposed method is widely applicable, robust, and highly effective. In addition, we show that the proposed method significantly outperforms face frontalization methods on profile-to-frontal verification. Finally, we present MILAB-VTF(B), a challenging multi-spectral face dataset composed of paired thermal and visible videos. To the best of our knowledge, with face data from 400 subjects, this dataset represents the most extensive collection of indoor and long-range outdoor thermal-visible face imagery. We also show that our end-to-end thermal-to-visible face verification system provides strong performance on the MILAB-VTF(B) dataset.
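Identity classification with label smoothing, as studied in the abstract above, can be sketched in a few lines. The smoothing variant and value below (eps = 0.1, spread uniformly over the non-target classes) are illustrative assumptions, not the paper's exact configuration:

```python
import math

def label_smoothing_ce(logits, target, eps=0.1):
    """Cross-entropy with label smoothing: the target class keeps
    probability 1 - eps and eps is spread uniformly over the other
    classes (one common variant; values here are illustrative)."""
    n = len(logits)
    # numerically stable log-softmax
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    smoothed = [eps / (n - 1)] * n
    smoothed[target] = 1.0 - eps
    return -sum(q * lp for q, lp in zip(smoothed, log_probs))
```

With eps = 0 this reduces to standard cross-entropy; a nonzero eps penalizes over-confident predictions, which is often credited with yielding better-behaved identity embeddings.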
Gait recognition aims to identify a person at a distance through cameras. With the emergence of deep learning, significant advances in gait recognition have achieved inspiring success in many scenarios using deep learning techniques. Nevertheless, the growing demand for video surveillance introduces further challenges, including robust recognition under various covariates, modeling of motion information in gait sequences, unfair performance comparisons caused by protocol variances, biometric security, and privacy preservation. This paper provides a comprehensive survey of deep learning for gait recognition. We first present the odyssey of gait recognition from traditional algorithms to deep models, providing explicit knowledge of the whole workflow of a gait recognition system. Deep learning for gait recognition is then discussed and summarized in depth from the perspectives of deep representations and architectures. Specifically, deep gait representations are categorized into static and dynamic features, while deep architectures include single-stream and multi-stream architectures. Following our proposed novel taxonomy, this can be beneficial for providing inspiration and promoting the understanding of gait recognition. Furthermore, we also provide a comprehensive summary of all vision-based gait datasets, together with a performance analysis. Finally, the paper discusses some open issues with significant potential prospects.
In recent years, with the increasing demand for public safety and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the hot research topics in the computer vision field. The main research goal of person Re-ID is to retrieve persons with the same identity from different cameras. However, traditional person Re-ID methods require manual labeling of person targets, which consumes considerable labor. With the widespread application of deep neural networks, many deep-learning-based person Re-ID methods have emerged. This paper therefore aims to help researchers understand the latest research results and future trends in the field. First, we summarize several recently published person Re-ID surveys and complement them with the latest research methods to systematically classify deep-learning-based person Re-ID methods. Second, we propose a multi-dimensional taxonomy that classifies current deep-learning-based person Re-ID methods into four categories according to metric and representation learning: deep metric learning, local feature learning, generative adversarial learning, and sequence feature learning. Furthermore, we subdivide the above four categories according to their methodologies and motivations, discussing the advantages and limitations of some of the sub-categories. Finally, we discuss some challenges and possible research directions for person Re-ID.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the components involved in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis of closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning, and ranking optimization. With performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, which faces more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets across four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate the Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
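The mINP metric described above (the cost of finding all correct matches) can be sketched directly from that description; the function names below are ours, and the paper should be consulted for the exact definition:

```python
def inp(ranking):
    """Inverse negative penalty for one query. `ranking` is a list of
    booleans over the ranked gallery, True where the item matches the
    query identity. INP = (#correct matches) / (rank of the hardest,
    i.e. last, correct match); higher is better."""
    hit_ranks = [i + 1 for i, hit in enumerate(ranking) if hit]
    if not hit_ranks:
        return 0.0
    return len(hit_ranks) / hit_ranks[-1]

def mean_inp(rankings):
    """mINP: average INP over all queries."""
    return sum(inp(r) for r in rankings) / len(rankings)
```

A query whose last correct match sits deep in the ranking is penalized even when its top-1 result is correct, which captures exactly the deployment cost that CMC and mAP under-weight.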
Cross-spectral face recognition (CFR) aims at recognizing individuals where the compared face images stem from different sensing modalities, for instance infrared versus visible. While CFR is inherently more challenging than classical face recognition due to the significant variation in facial appearance associated with the modality gap, it is superior in scenarios with limited or challenging illumination, as well as in the presence of presentation attacks. Recent advances in artificial intelligence associated with convolutional neural networks (CNNs) have brought significant performance improvements to CFR. Motivated by this, the contributions of this survey are threefold. First, we provide an overview of CFR, targeted at comparing face images captured in different spectra, by formalizing CFR and then presenting concrete related applications. Second, we explore suitable spectral bands for recognition and discuss recent CFR methods, with an emphasis on neural networks. In particular, we revisit techniques for extracting and comparing heterogeneous features, as well as the relevant datasets. We enumerate the strengths and limitations of the different spectra and associated algorithms. Finally, we discuss research challenges and future lines of research.
The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance due to their unprecedented advantages in scale, mobility, deployment, and covert observation capabilities. This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective. It aims to provide the reader with an in-depth systematic review and technical analysis of the current state of aerial surveillance tasks using drones, UAVs, and other airborne platforms. The main objects of interest are humans, where single or multiple subjects are to be detected, identified, tracked, and re-identified, and their behavior analyzed. More specifically, for each of these four tasks, we first discuss the unique challenges of performing them in an aerial setting compared to a ground-based setting. We then review and analyze the publicly available aerial datasets for each task, delve into the approaches in the aerial literature, and investigate how they currently address the aerial challenges. We conclude with a discussion of the missing gaps and open research questions to inform future research avenues.
Human gait is regarded as a unique biometric identifier that can be acquired in a covert manner at a distance. However, models trained on existing public-domain gait datasets, which are captured in controlled scenarios, suffer drastic performance declines when applied to real-world unconstrained gait data. On the other hand, video person re-identification techniques have achieved promising performance on large-scale publicly available datasets. Given the diversity of clothing characteristics, clothing cues are unreliable for person recognition. It is therefore unclear why state-of-the-art person re-identification methods work as well as they do. In this paper, we construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, which consists of 1,404 persons walking in an unconstrained manner. Based on this dataset, a consistent and comparative study between gait recognition and person re-identification can be carried out. Given that our experimental results show that current gait recognition methods, designed on data collected in controlled scenarios, are inappropriate for real surveillance scenarios, we propose a novel gait recognition method, named RealGait. Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and that the underlying gait pattern may be the true reason why video person re-identification works in practice.
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer-vision-based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions, and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer-vision-based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
In general, biometry-based control systems may not rely on individuals' expected behavior or cooperation to operate appropriately. Instead, such systems should be aware of malicious procedures for unauthorized access attempts. Some works available in the literature suggest addressing the problem through gait recognition approaches. These methods aim at identifying humans through intrinsic perceptible features, regardless of the clothes or accessories they wear. Although the problem denotes a relatively long-standing challenge, most of the techniques developed to handle it present several drawbacks, including those related to feature extraction and low classification rates, among other issues. However, recent deep learning approaches, a powerful set of tools able to handle almost any image- and computer-vision-related problem, have provided the most significant results for gait recognition. Therefore, this work provides a compiled survey of recent works on biometric identification through gait recognition, with a focus on deep learning approaches, emphasizing their benefits and exposing their weaknesses. In addition, it presents a categorized and characterized description of the datasets, approaches, and architectures employed to tackle the associated constraints.
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, expensive, and must comply with data protection laws. Current research investigates the generation of synthetic data as an efficient and privacy-ensuring alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies essential for generating and employing synthetic data for human analysis. We summarize current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss the limitations and open research questions in this field. This survey is intended for researchers and practitioners in the field of human analysis.
The use of the iris and periocular region as biometric traits has been widely investigated, mainly due to the singularity of iris features and the usefulness of the periocular region when the image resolution is insufficient to extract iris information. In addition to providing information about an individual's identity, features extracted from these traits can also be explored to obtain other information, such as the individual's gender, the influence of drug use, the use of contact lenses, spoofing, among others. This work presents a survey of the databases created for ocular recognition, detailing their protocols and how their images were acquired. We also describe and discuss the most popular ocular recognition competitions (contests), highlighting the submitted algorithms that achieved the best results using only iris features and those fusing iris and periocular region information. Finally, we describe some relevant works applying deep learning techniques to ocular recognition and point out new challenges and future directions. Considering the large number of ocular databases, each usually designed for a specific problem, we believe this survey can provide a broad overview of the challenges in ocular biometrics.
Gait depicts individuals' unique and distinguishing walking patterns and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of completely annotated data, which is costly and hard to satisfy. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn general gait representations from massive unlabeled walking videos, benefiting from informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabeled gait dataset, GaitLU-1M, consisting of 1.02M walking sequences, and propose a conceptually simple yet empirically powerful baseline model, GaitSSB. Experimentally, we evaluate the pre-trained model on four widely used gait benchmarks (CASIA-B, OU-MVLP, GREW, and Gait3D), with or without transfer learning. The unsupervised results are comparable to or even better than those of early model-based and GEI-based methods. After transfer learning, our method outperforms existing methods in most cases. Theoretically, we discuss the critical issues for gait-specific contrastive frameworks and present some insights for further study. To the best of our knowledge, GaitLU-1M is the first large-scale unlabeled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. The source code of GaitSSB will be integrated into OpenGait, which is available at https://github.com/shiqiyu/opengait.
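Contrastive pre-training of the kind described above typically rests on an InfoNCE-style objective. The sketch below shows that objective in its generic form; the temperature value and similarity inputs are illustrative assumptions, not GaitSSB's exact configuration:

```python
import math

def info_nce(sims, pos_index, temperature=0.07):
    """InfoNCE loss for one anchor. `sims` holds the similarities
    between the anchor and all candidates (one positive view of the
    same sequence, the rest negatives). Minimizing the loss pulls the
    positive pair together and pushes the negatives apart."""
    scaled = [s / temperature for s in sims]
    # numerically stable log-sum-exp
    m = max(scaled)
    log_z = m + math.log(sum(math.exp(x - m) for x in scaled))
    return -(scaled[pos_index] - log_z)
```

Since no identity labels are needed, the positive pair is usually obtained by augmenting the same unlabeled walking sequence twice, which is what makes a 1.02M-sequence dataset usable without annotation.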
Recent years witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in everyday life, for example in device unlocking and mobile payment. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as the material for the tutorial "The Practical Face Recognition Technology in the Industrial World" at FG2023.
Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance by automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for subjects both inside and outside the training data, only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.
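The additive angular margin has a simple closed form: the target-class cosine cos(θ) is replaced by cos(θ + m) before scaling. A minimal sketch, using the commonly cited hyperparameters s = 64 and m = 0.5:

```python
import math

def arcface_logits(cosines, target, s=64.0, m=0.5):
    """Apply the additive angular margin to the target class only:
    cos(theta) -> cos(theta + m), then scale all logits by s before
    the softmax. Shrinking the target logit forces the embedding to
    sit closer to its class center in order to win the softmax."""
    out = []
    for j, c in enumerate(cosines):
        if j == target:
            theta = math.acos(max(-1.0, min(1.0, c)))  # clamp for acos domain
            out.append(s * math.cos(theta + m))
        else:
            out.append(s * c)
    return out
```

In the sub-center variant described above, `cosines[j]` would be the maximum similarity over the K sub-centers of class j rather than a single class-center similarity.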
Automatic face recognition is a well-established research area. Throughout the last three decades of intensive research in this area, many different face recognition algorithms have been proposed. With the popularity of deep learning and its capability to solve a huge variety of different problems, face recognition researchers have concentrated their efforts on creating better models under this paradigm. Since 2015, state-of-the-art face recognition has been rooted in deep learning models. Despite the availability of large-scale and diverse datasets for evaluating the performance of face recognition algorithms, many modern datasets simply combine different factors that influence face recognition, such as face pose, occlusion, illumination, facial expression, and image quality. When algorithms produce errors on these datasets, it is not clear which of the factors caused the errors, and hence there is no guidance on which directions require further research. This work is a follow-up to our previous work, developed in 2014 and finally published in 2016, showing the impact of various facial aspects on face recognition algorithms. By comparing the current state of the art with the best systems from the past, we demonstrate that faces under strong occlusion, some types of illumination, and strong expressions are problems mastered by deep learning algorithms, whereas recognition with low-resolution images, extreme pose variations, and open-set recognition are still open problems. To show this, we run a sequence of experiments using six different datasets and five different face recognition algorithms in an open-source and reproducible manner. We provide the source code to run all of our experiments, which is easily extensible, so that utilizing your own deep network in our evaluation is just a few minutes away.
In recent years, face recognition systems have achieved extraordinary success due to promising advances in deep learning architectures. However, they still fail to achieve the expected accuracy when matching profile images against a gallery of frontal images. Current approaches either perform pose normalization (i.e., frontalization) or disentangle pose information for face recognition. Instead, we propose a new approach that uses pose as auxiliary information via an attention mechanism. In this paper, we hypothesize that pose-attended information obtained through an attention mechanism can guide contextual and distinctive feature extraction from profile faces, further enabling better representation learning in an embedded domain. To achieve this, we first design a unified coupled profile-to-frontal face recognition network. It learns a mapping from faces to a compact embedding subspace via a class-specific contrastive loss. Second, we develop a novel Pose Attention Block (PAB) to specifically guide pose-agnostic feature extraction from profile faces. More specifically, PAB is designed to explicitly help the network focus on important features along both the channel and spatial dimensions, while learning discriminative yet pose-invariant features in the embedding subspace. To validate the effectiveness of our proposed method, we conduct experiments on both controlled and in-the-wild benchmarks, including Multi-PIE, CFP, and IJB-C, and demonstrate superiority over the state of the art.
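As a rough illustration of attention along the channel dimension (one of the two dimensions the PAB attends over), the sketch below gates each channel by a sigmoid of its pooled response. The real PAB also has a spatial branch and learned pose-conditioned parameters, so the `weights` argument here is a stand-in, not the paper's design:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feat, weights):
    """feat: C x N feature map (N spatial positions per channel).
    Each channel is squeezed by global average pooling, passed through
    a per-channel weight (standing in for a learned MLP), and the
    resulting sigmoid gate rescales that channel."""
    pooled = [sum(ch) / len(ch) for ch in feat]                # squeeze
    gates = [sigmoid(w * p) for w, p in zip(weights, pooled)]  # excite
    return [[g * v for v in ch] for g, ch in zip(gates, feat)]
```

Because the gate lies in (0, 1), channels the network deems uninformative for the current pose are suppressed rather than removed, leaving the gradient path intact.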
The concept of geo-localization refers to the process of determining the location of some "entity" on Earth, typically using Global Positioning System (GPS) coordinates. The entity of interest may be an image, a sequence of images, a video, a satellite image, or even objects visible within an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and as deep learning has risen to enhance the performance capabilities of machine learning models, the fields of visual and object geo-localization have emerged due to their significant impact on a wide range of applications such as augmented reality, robotics, self-driving vehicles, road maintenance, and 3D reconstruction. This paper provides a comprehensive survey of geo-localization involving images, which entails either determining the location from which an image was captured (image geo-localization) or geo-localizing objects within an image (object geo-localization). We provide an in-depth study, including a summary of popular algorithms, descriptions of the proposed datasets, and analyses of performance results, to illustrate the current state of each field.
Visible-thermal face image matching is a challenging variant of cross-modality recognition. The challenge lies in the large modality gap and the low correlation between the visible and thermal modalities. Existing approaches employ image preprocessing, feature extraction, or common subspace projection, which are in themselves independent problems. In this paper, we propose an end-to-end framework for cross-modal face recognition. The proposed algorithm aims to learn identity-discriminative features from unprocessed facial images and to identify cross-modal image pairs. A novel unit-class loss is proposed for preserving identity information while discarding modality information. In addition, a cross-modal discriminator block is proposed for integrating image-pair classification capability into the network. The proposed network can be used to extract modality-independent vector representations or for matched-pair classification of test images. Our cross-modal face recognition experiments on five independent databases demonstrate that the proposed method achieves a marked improvement over existing state-of-the-art methods.
Person re-identification is a challenging task because of the high intra-class variance induced by the unrestricted nuisance factors of variations such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors, by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with margin, like the softmax loss with the additive angular margin, or a metric learning loss, like the triplet loss with batch hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state-of-the-art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we found that joining the losses removes the requirement for having a margin in the softmax loss while increasing performance.
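The batch-hard mining mentioned above selects, for each anchor in the batch, its farthest positive and closest negative. A minimal sketch over a precomputed distance matrix (the margin value is a common choice, not necessarily the paper's):

```python
def batch_hard_triplet(dist, labels, margin=0.3):
    """Batch-hard triplet loss. dist[i][j] is the embedding distance
    between samples i and j; for each anchor i we take the hardest
    (farthest) positive and the hardest (closest) negative in the
    batch, then apply a hinge with the given margin."""
    losses = []
    for i, li in enumerate(labels):
        pos = [dist[i][j] for j, lj in enumerate(labels) if lj == li and j != i]
        neg = [dist[i][j] for j, lj in enumerate(labels) if lj != li]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses) if losses else 0.0
```

In the joint scheme described above, this term supplies gradients on the feature embedding that the normalized softmax alone withholds, which is why the two losses are complementary.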