Heterogeneous face re-identification, namely matching faces across disjoint visible light (VIS) and near-infrared (NIR) cameras, has become an important problem in video surveillance applications. However, the large domain discrepancy between heterogeneous NIR-VIS faces dramatically degrades re-identification performance. To solve this problem, a multimodal fusion ranking optimization algorithm for heterogeneous face re-identification is proposed in this paper. Firstly, we design a heterogeneous face translation network that produces multimodal face pairs, including NIR-VIS/NIR-NIR/VIS-VIS pairs, through mutual transformation between NIR and VIS faces. Secondly, we propose linear and non-linear fusion strategies to aggregate the initial ranking lists of the multimodal face pairs and obtain an optimized re-ranked list based on modal complementarity. Experimental results show that the proposed multimodal fusion ranking optimization algorithm effectively exploits this complementarity and outperforms related methods on the SCface dataset.
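As a toy illustration of the linear fusion strategy described in this abstract, the sketch below aggregates per-modality distance lists (NIR-VIS, NIR-NIR, VIS-VIS) for one probe with a weighted sum and re-sorts the gallery. The weights and distances are invented for illustration and are not values from the paper.

```python
# Linear ranking fusion: weighted sum of per-modality distance lists,
# followed by a re-sort of the gallery. All numbers below are toy values.

def linear_fusion(dist_lists, weights):
    """Weighted sum of several distance lists over the same gallery."""
    fused = [0.0] * len(dist_lists[0])
    for dists, w in zip(dist_lists, weights):
        for i, d in enumerate(dists):
            fused[i] += w * d
    return fused

def rerank(fused):
    """Return gallery indices sorted by ascending fused distance."""
    return sorted(range(len(fused)), key=lambda i: fused[i])

# Distances of 4 gallery items under the three hypothetical pair types.
nir_vis = [0.9, 0.2, 0.5, 0.7]
nir_nir = [0.8, 0.4, 0.3, 0.6]
vis_vis = [0.7, 0.3, 0.6, 0.2]

fused = linear_fusion([nir_vis, nir_nir, vis_vis], weights=[0.5, 0.25, 0.25])
ranked = rerank(fused)  # gallery order after fusion
```

A non-linear strategy would replace the weighted sum with, e.g., a rank-aggregation or learned combination; the re-sorting step stays the same.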
When considering person re-identification (re-ID) as a retrieval process, re-ranking is a critical step to improve its accuracy. Yet in the re-ID community, limited effort has been devoted to re-ranking, especially to fully automatic, unsupervised solutions. In this paper, we propose a k-reciprocal encoding method to re-rank the re-ID results. Our hypothesis is that if a gallery image is similar to the probe in the k-reciprocal nearest neighbors, it is more likely to be a true match. Specifically, given an image, a k-reciprocal feature is calculated by encoding its k-reciprocal nearest neighbors into a single vector, which is used for re-ranking under the Jaccard distance. The final distance is computed as the combination of the original distance and the Jaccard distance. Our re-ranking method does not require any human interaction or any labeled data, so it is applicable to large-scale datasets. Experiments on the large-scale Market-1501, CUHK03, MARS, and PRW datasets confirm the effectiveness of our method.
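The k-reciprocal re-ranking idea can be sketched as follows. This is a simplified, purely set-based version (the full method also uses expanded neighbor sets and a soft vector encoding), run on a toy symmetric distance matrix.

```python
# Simplified k-reciprocal re-ranking: a gallery sample whose k-reciprocal
# neighbor set overlaps the probe's is promoted via the Jaccard distance.

def knn(dist, i, k):
    """Indices of the k nearest neighbors of sample i (excluding i)."""
    order = sorted((j for j in range(len(dist)) if j != i),
                   key=lambda j: dist[i][j])
    return set(order[:k])

def k_reciprocal(dist, i, k):
    """Neighbors j of i such that i is also among j's k nearest neighbors."""
    return {j for j in knn(dist, i, k) if i in knn(dist, j, k)}

def jaccard_dist(a, b):
    return 1.0 - len(a & b) / len(a | b) if (a | b) else 1.0

def rerank(dist, probe, k=2, lam=0.3):
    """Final distance = (1 - lam) * Jaccard + lam * original distance."""
    rp = k_reciprocal(dist, probe, k)
    final = {}
    for g in range(len(dist)):
        if g == probe:
            continue
        rg = k_reciprocal(dist, g, k)
        final[g] = (1 - lam) * jaccard_dist(rp, rg) + lam * dist[probe][g]
    return sorted(final, key=final.get)

# Toy symmetric distance matrix over 4 samples; sample 0 is the probe.
dist = [
    [0.0, 0.3, 0.4, 0.9],
    [0.3, 0.0, 0.2, 0.8],
    [0.4, 0.2, 0.0, 0.7],
    [0.9, 0.8, 0.7, 0.0],
]
ranked = rerank(dist, probe=0)
```

Sample 3 is not a k-reciprocal neighbor of anyone here, so its Jaccard distance to the probe is maximal and it falls to the bottom of the list.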
Visible-thermal face image matching is a challenging variant of cross-modal recognition. The challenge lies in the large modality gap and low correlation between the visible and thermal modalities. Existing approaches employ image preprocessing, feature extraction, or common-subspace projection, each of which is treated as an independent problem. In this paper, we propose an end-to-end framework for cross-modal face recognition. The algorithm aims to learn identity-discriminative features from unprocessed face images and to identify cross-modal image pairs. A novel unit-class loss is proposed to preserve identity information while discarding modality information. In addition, a cross-modal discriminator block is proposed to integrate image-pair classification capability into the network. The proposed network can be used to extract modality-independent vector representations or to perform matched-pair classification on test images. Our cross-modal face recognition experiments on five independent databases show that the proposed method achieves a significant improvement over existing state-of-the-art methods.
Cross-spectral face recognition (CFR) aims to recognize individuals when the face images being compared originate from different sensing modalities, for example infrared versus visible. While CFR is inherently more challenging than classical face recognition due to significant variations in facial appearance associated with the modality gap, it is superior in scenarios with limited or challenging illumination, as well as in the presence of presentation attacks. Recent advances in artificial intelligence related to convolutional neural networks (CNNs) have brought significant performance improvements to CFR. Motivated by this, the contributions of this survey are threefold. First, we provide an overview of CFR, targeted at comparing face images captured in different spectra, by formalizing CFR and then presenting concrete related applications. Second, we explore suitable spectral bands for recognition and discuss recent CFR methods, with an emphasis on neural networks. In particular, we present techniques for extracting and comparing heterogeneous features, as well as datasets. We enumerate the advantages and limitations of the different spectra and associated algorithms. Finally, we discuss research challenges and future lines of research.
In this paper, we aim to address the large domain gap between high-resolution face images, e.g., from professional portrait photography, and low-quality surveillance images, e.g., from security cameras. Establishing an identity match between disparate sources like this is a classical surveillance face identification scenario, which continues to be a challenging problem for modern face recognition techniques. To that end, we propose a method that combines face super-resolution, resolution matching, and multi-scale template accumulation to reliably recognize faces from long-range surveillance footage, including from low quality sources. The proposed approach does not require training or fine-tuning on the target dataset of real surveillance images. Extensive experiments show that our proposed method is able to outperform even existing methods fine-tuned to the SCFace dataset.
In recent years, with the growing demand for public safety and the rapid development of intelligent surveillance networks, person re-identification (Re-ID) has become one of the hot research topics in the computer vision field. The main research goal of person Re-ID is to retrieve persons with the same identity from different cameras. However, traditional person Re-ID methods require manual labeling of person targets, which consumes substantial labor cost. With the widespread application of deep neural networks, many deep-learning-based person Re-ID methods have emerged. Therefore, this paper aims to help researchers understand the latest research results and future trends in the field. Firstly, we summarize several recently published person Re-ID surveys and complement them by systematically classifying the latest deep-learning-based person Re-ID methods. Secondly, we propose a multi-dimensional taxonomy that classifies deep-learning-based person Re-ID methods into four categories according to metric learning and representation learning, including methods based on deep metric learning, local feature learning, generative adversarial learning, and sequence feature learning. Furthermore, we subdivide the above four categories according to their methodologies and motivations, and discuss the advantages and limitations of some sub-categories. Finally, we discuss some challenges and possible research directions for person Re-ID.
Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has gained significantly increased interest in the computer vision community. By dissecting the components involved in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis of closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning, and ranking optimization. With performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion for evaluating Re-ID systems in real applications. Finally, some important yet under-investigated open issues are discussed.
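The mINP metric mentioned above has a simple form: for each query, the inverse negative penalty is the number of correct matches divided by the rank of the hardest (last) correct match, and mINP averages this over all queries. A minimal sketch with toy rankings:

```python
# mINP sketch: INP_i = |G_i| / R_i_hard, where |G_i| is the number of
# correct matches for query i and R_i_hard is the rank (1-based) of the
# last correct match in the ranking list. All data below are toy values.

def inp(ranked_labels, query_label):
    """Inverse negative penalty for one query, given ranked gallery labels."""
    positions = [r + 1 for r, lab in enumerate(ranked_labels)
                 if lab == query_label]
    if not positions:
        return 0.0
    return len(positions) / positions[-1]  # |G_i| / rank of hardest match

def minp(all_rankings):
    """Mean INP over (ranked_labels, query_label) pairs."""
    scores = [inp(r, q) for r, q in all_rankings]
    return sum(scores) / len(scores)

# Query A: both matches in the top 2 ranks -> INP = 2/2 = 1.0
# Query B: matches at ranks 1 and 4       -> INP = 2/4 = 0.5
rankings = [
    (["A", "A", "B", "C"], "A"),
    (["B", "C", "A", "B"], "B"),
]
score = minp(rankings)
```

Unlike CMC or mAP, a single hard match buried deep in the list directly lowers the score, reflecting the cost of retrieving every correct match.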
Face forgery detection plays an important role in personal privacy and social security. With the development of adversarial generative models, high-quality forgery images have become more and more indistinguishable from real images to humans. Existing methods usually regard the forgery detection task as common binary or multi-label classification, and ignore exploring diverse multi-modality forgery image types, e.g., visible light spectrum and near-infrared scenarios. In this paper, we propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD), which can effectively learn a robust patch-based hybrid domain representation to enhance forgery authentication in multi-modality scenarios. The local spatial hybrid domain feature module is designed to explore strong discriminative forgery clues in both the image and frequency domains in local, distinct face regions. Furthermore, a hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance. Experimental results on representative multi-modality face forgery datasets demonstrate the superior performance of the proposed HFC-MFFD compared with state-of-the-art algorithms. The source code and models are publicly available at https://github.com/EdWhites/HFC-MFFD.
Deep learning approaches have achieved highly accurate face recognition by training models with very large face image datasets. Unlike the availability of large 2D face image datasets, the public lacks large 3D face datasets. Existing public 3D face datasets are usually collected with few subjects, leading to over-fitting problems. This paper proposes two CNN models to improve the RGB-D face recognition task. The first is a segmentation-aware depth estimation network, called DepthNet, which estimates depth maps from RGB face images by incorporating semantic segmentation information for more accurate face region localization. The other is a novel mask-guided RGB-D face recognition model that contains an RGB recognition branch, a depth map recognition branch, and an auxiliary segmentation mask branch with a spatial attention module. Our DepthNet is used to augment a large 2D face image dataset into a large RGB-D face dataset, which is used for training an accurate RGB-D face recognition model. Furthermore, the proposed mask-guided RGB-D face recognition model can fully exploit the depth map and segmentation mask information, and is more robust against pose variation than previous methods. Our experimental results show that DepthNet can produce more reliable depth maps from face images with the segmentation mask. Our mask-guided face recognition model outperforms state-of-the-art methods on several public 3D face datasets.
Face recognition technology has been widely adopted in many mission-critical scenarios, such as person identification, controlled admission, and mobile device access. Security surveillance is a typical scenario for face recognition technology. Because the low resolution of surveillance videos and images makes it difficult for high-resolution face recognition algorithms to extract effective feature information, algorithms designed for high-resolution face recognition are difficult to migrate directly to low-resolution cases. As face recognition in security surveillance becomes more important in the era of dense urbanization, it is essential to develop algorithms that can provide satisfactory performance when processing the video frames generated by low-resolution surveillance cameras. This paper elaborates on the correlation-feature-based face recognition (CoFFaR) method for homogeneous low-resolution surveillance videos, presenting its theory, experimental details, and experimental results. The experimental results validate the effectiveness of the correlation feature method, which improves the accuracy of homogeneous face recognition in surveillance security scenarios.
Heterogeneous face recognition (HFR) aims to match faces across different domains (e.g., visible to near-infrared images), and has been widely applied in authentication and forensics scenarios. However, HFR is a challenging problem because of the large cross-domain discrepancy, the limited heterogeneous data pairs, and the large variation of facial attributes. To address these challenges, we propose a new HFR method from the perspective of heterogeneous data augmentation, named Face Synthesis with Identity-Attribute Disentanglement (FSIAD). Firstly, identity-attribute disentanglement (IAD) decouples face images into identity-related representations and identity-unrelated representations (called attributes), and then decreases the correlation between identities and attributes. Secondly, we devise a face synthesis module (FSM) to generate a large number of images with stochastic combinations of disentangled identities and attributes, enriching the attribute diversity of the synthetic images. Both original images and synthetic images are used to train the HFR network, to tackle the challenges and improve the performance of HFR. Extensive experiments on five HFR databases validate that FSIAD achieves superior performance over previous HFR methods. In particular, FSIAD obtains a 4.8% improvement in terms of VR@FAR=0.01% on LAMP-HQ, the largest HFR database so far.
Most existing person re-identification methods compute the matching relations between person images across camera views based on the ranking of the pairwise similarities. This matching strategy, lacking a global viewpoint and consideration of context, inevitably leads to ambiguous matching results and sub-optimal performance. Based on a natural assumption that images belonging to the same person identity should not match images belonging to multiple different person identities across views, called the unicity of person matching on the identity level, we propose an end-to-end person unicity matching architecture for learning and refining the person matching relations. First, we adopt the image samples' contextual information in feature space to generate the initial soft matching results by using graph neural networks. Second, we utilize the samples' global context relationship to refine the soft matching results and reach the matching unicity through bipartite graph matching. Giving full consideration to real-world person re-identification applications, we achieve the unicity matching in both one-shot and multi-shot settings and further develop a fast version of the unicity matching without losing performance. The proposed method is evaluated on five public benchmarks, including four multi-shot datasets (MSMT17, DukeMTMC, Market1501, CUHK03) and the one-shot dataset VIPeR. Experimental results show the superiority of the proposed method in both performance and efficiency.
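The unicity constraint, one probe identity matched to one gallery identity, amounts to a bipartite assignment over the soft matching scores. Below is a brute-force sketch standing in for a proper bipartite matching solver such as the Hungarian algorithm; all scores are toy values.

```python
# Identity-level unicity matching sketch: refine soft matching scores into
# a one-to-one probe -> gallery assignment that maximizes the total score.
from itertools import permutations

def unicity_match(score):
    """Exhaustive search over one-to-one assignments (toy sizes only)."""
    n = len(score)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(score[p][g] for p, g in enumerate(perm))
        if total > best_total:
            best, best_total = perm, total
    return list(best)

# Soft matching scores: probes 0 and 1 both prefer gallery identity 0,
# but unicity forces them onto distinct gallery identities.
score = [
    [0.9, 0.6, 0.1],
    [0.8, 0.2, 0.3],
    [0.4, 0.5, 0.7],
]
assignment = unicity_match(score)
```

A greedy per-probe argmax would send both probes 0 and 1 to gallery 0 (total 0.9 + 0.2 + 0.7 = 1.8); the unicity assignment instead yields 0.6 + 0.8 + 0.7 = 2.1, illustrating why a global viewpoint beats independent ranking.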
We propose a quality-aware multimodal recognition framework that combines representations from multiple biometric traits with varying quality and numbers of samples to achieve increased recognition accuracy by extracting complementary identification information based on the quality of the samples. We develop a quality-aware framework for fusing the representations of the input modalities, weighting them by quality scores estimated in a weakly supervised fashion. The framework utilizes two fusion blocks, each represented by a set of quality-aware and aggregation networks. In addition to the architectural modifications, we propose two task-specific loss functions: a multimodal separability loss and a multimodal compactness loss. The first loss ensures that the representations of the modalities for a class have comparable magnitudes to provide better quality estimation, while the multimodal representations of different classes are distributed so as to achieve maximum discrimination in the embedding space. The second loss, which regularizes the network weights, improves generalization performance. We evaluate the performance on three multimodal datasets consisting of face, iris, and fingerprint modalities, and demonstrate the efficacy of the framework through comparison with state-of-the-art algorithms. In particular, our framework outperforms rank- and score-level fusion of the BioMdata modalities by more than 30% for a true acceptance rate at a false acceptance rate of $10^{-4}$.
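The quality-weighted fusion idea can be sketched as follows, assuming each modality yields a fixed-length embedding plus a scalar quality score; the softmax weighting here is a simplification, not the paper's aggregation networks.

```python
# Quality-aware fusion sketch: modalities with higher estimated quality
# contribute more to the fused embedding. All values below are toy data.
import math

def quality_weights(scores):
    """Softmax over per-modality quality scores."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(embeddings, scores):
    """Quality-weighted sum of same-length modality embeddings."""
    w = quality_weights(scores)
    dim = len(embeddings[0])
    return [sum(w[m] * embeddings[m][i] for m in range(len(embeddings)))
            for i in range(dim)]

# Toy face/iris/fingerprint embeddings; the low-quality iris sample
# receives a small fusion weight via its low quality score.
face = [1.0, 0.0, 0.0]
iris = [0.0, 1.0, 0.0]
finger = [0.0, 0.0, 1.0]
fused = fuse([face, iris, finger], scores=[2.0, -1.0, 1.0])
```

Since the toy embeddings are one-hot, the fused vector directly exposes the weights: the face component dominates and the degraded iris component nearly vanishes.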
Recent advances in deep convolutional neural networks (DCNNs) have shown performance improvements on thermal-to-visible face synthesis and matching problems. However, current DCNN-based synthesis models do not perform well on thermal faces with large pose variations. To handle this problem, a heterogeneous face frontalization method is needed, in which the model takes a thermal profile face image and produces a frontal visible face. This is an extremely difficult problem due to the large domain as well as the large pose discrepancies between the two modalities. Despite its applications in biometrics and surveillance, this problem is relatively unexplored in the literature. We propose a domain-agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations. DAL-GAN consists of a generator with an auxiliary classifier and two discriminators which capture both local and global texture discrimination for better synthesis. A contrastive constraint is enforced in the latent space of the generator with the help of a dual-path training strategy, which improves the feature vector discrimination. Finally, a multi-purpose loss function is utilized to guide the network in synthesizing identity-preserving cross-domain frontalizations. Extensive experimental results demonstrate that DAL-GAN can generate better-quality frontal views compared with other baseline methods.
Face recognition has long been an active research area in the field of artificial intelligence, especially since the rise of deep learning in recent years. In some practical situations, each identity has only a single sample available for training. Face recognition under this situation is referred to as single-sample face recognition, and it poses significant challenges to the effective training of deep models. Therefore, in recent years, researchers have attempted to unleash more of the potential of deep learning and improve model recognition performance in the single-sample situation. While several comprehensive surveys of traditional single-sample face recognition methods have been conducted, these reviews rarely cover the emerging deep-learning-based methods. Accordingly, we focus on deep-learning-based methods in this paper, classifying them into virtual-sample methods and generic learning methods. In the former category, virtual images or virtual features are generated to benefit the training of deep models. In the latter, additional multi-sample generic sets are used. There are three types of generic learning methods: combining traditional methods and deep features, improving the loss function, and improving the network structure, all of which are covered in our analysis. Moreover, we review the face datasets commonly used for evaluating single-sample face recognition models, and go on to compare the results of different types of models. Additionally, we discuss the problems with existing single-sample face recognition methods, including identity-information preservation in virtual-sample methods and domain adaptation in generic learning methods. Furthermore, we regard developing unsupervised methods as a promising future direction, and point out that the semantic gap is an important issue that requires further consideration.
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In such scenes, low image resolution and noise interference are the new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
State-of-the-art deep neural network models have reached near-perfect face recognition accuracy on controlled high-resolution face images. However, their performance degrades drastically when they are tested with very low-resolution face images. This is particularly critical in surveillance systems, where a low-resolution probe image must be matched against high-resolution gallery images. Super-resolution techniques aim to produce high-resolution face images from low-resolution counterparts. While they are capable of reconstructing visually appealing images, the identity-related information is not preserved. Here, we propose an identity-preserving end-to-end image-to-image translation deep neural network which is capable of super-resolving very low-resolution faces to their high-resolution counterparts while preserving identity-related information. We achieve this by training a very deep convolutional encoder-decoder network with a symmetric contracting path between corresponding layers. This network is trained with a combination of a reconstruction loss and an identity-preserving loss, on multi-scale low-resolution conditions. Extensive quantitative evaluations of our proposed model show that it outperforms competing super-resolution and low-resolution face recognition methods on natural and artificial low-resolution face datasets, and even on unseen identities.
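The training objective described above, reconstruction combined with an identity loss, can be sketched as follows; `embed` stands in for a pretrained face-recognition network, and the weight `lam` is a hypothetical hyper-parameter, not a value from the paper.

```python
# Identity-preserving super-resolution loss sketch: pixel reconstruction
# MSE plus a (1 - cosine similarity) penalty on identity embeddings.
import math

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def sr_loss(sr_img, hr_img, embed, lam=0.1):
    """Reconstruction MSE plus weighted identity loss on embeddings."""
    rec = mse(sr_img, hr_img)
    idl = 1.0 - cosine(embed(sr_img), embed(hr_img))
    return rec + lam * idl

# Toy "images" as flat vectors, and a toy linear stand-in for the
# pretrained identity embedding network.
embed = lambda img: [img[0] + img[1], img[1] - img[2]]
loss = sr_loss([0.5, 0.5, 0.5], [0.6, 0.4, 0.5], embed)
```

The identity term keeps the super-resolved face close to the ground truth in embedding space even when the pixel loss alone would tolerate identity drift.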
Recent years have witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in daily life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
The use of the iris and periocular region as biometric traits has been extensively studied, mainly due to the singularity of iris features and the usefulness of the periocular region when the image resolution is not sufficient to extract iris information. In addition to providing information about an individual's identity, features extracted from these traits can also be explored to obtain other information, such as the individual's gender, the influence of drug use, the use of contact lenses, spoofing, among others. This work presents a survey of the databases created for ocular recognition, detailing their protocols and how their images were acquired. We also describe and discuss the most popular ocular recognition competitions (contests), highlighting the submitted algorithms that achieved the best results using only iris features and fusing iris and periocular region information. Finally, we describe some relevant works applying deep learning techniques to ocular recognition and point out new challenges and future directions. Considering that there are a large number of ocular databases, and that each one is usually designed for a specific problem, we believe this survey can provide a broad overview of the challenges in ocular biometrics.
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps as well as a lack of sufficient data for cross-modality model training. To overcome this problem, we propose a novel method for paired NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and reflectance from a large 2D facial dataset and introduce a novel method of transforming the VIS reflectance to NIR reflectance. We then use a physically-based renderer to generate a vast, high-resolution and photorealistic dataset consisting of various poses and identities in the NIR and VIS spectra. Moreover, to facilitate identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not only reduces the modality gap between NIR and VIS images at the domain level but also encourages the network to focus on identity features instead of facial details, such as poses and accessories. Extensive experiments conducted on four challenging NIR-VIS face recognition benchmarks demonstrate that the proposed method can achieve performance comparable to state-of-the-art (SOTA) methods without requiring any existing NIR-VIS face recognition datasets. With slight fine-tuning on the target NIR-VIS face recognition datasets, our method can significantly surpass the SOTA performance. Code and pretrained models are released under insightface (https://github.com/deepinsight/insightface/tree/master/recognition).
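A maximum mean discrepancy term between NIR and VIS identity embeddings can be sketched with a linear kernel, which reduces to the squared distance between the two domains' mean embeddings; the actual ID-MMD loss operates on network identity features and its kernel details are not reproduced here. All embeddings below are toy values.

```python
# Linear-kernel MMD sketch between two domains of identity embeddings:
# the gap is the squared Euclidean distance between the domain means.

def mean_embedding(feats):
    dim = len(feats[0])
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(dim)]

def linear_mmd(nir_feats, vis_feats):
    """Squared distance between the mean embeddings of the two domains."""
    mn = mean_embedding(nir_feats)
    mv = mean_embedding(vis_feats)
    return sum((a - b) ** 2 for a, b in zip(mn, mv))

# Toy identity embeddings from the two spectra.
nir = [[1.0, 0.0], [0.8, 0.2]]
vis = [[0.5, 0.5], [0.7, 0.3]]
gap = linear_mmd(nir, vis)
```

Minimizing such a term pulls the two domains' feature distributions together at the domain level, which is the role the ID-MMD loss plays during training.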