Learning discriminative face features plays a major role in building high-performing face recognition models. The recent state-of-the-art face recognition solutions propose to incorporate a fixed penalty margin into the commonly used classification loss function, the softmax loss, to increase the discriminative power of face recognition models by minimizing intra-class variation and maximizing inter-class variation. Margin-penalty softmax losses, such as ArcFace and CosFace, assume that the geodesic distance between and within different identities can be equally learned using a fixed penalty margin. However, such a learning objective is not realistic for real data with inconsistent inter- and intra-class variation, which might limit the discriminative ability and generalizability of the face recognition model. In this paper, we relax the fixed penalty margin constraint by proposing an elastic penalty margin loss (ElasticFace) that allows flexibility in pushing class separability. The main idea is to utilize random margin values drawn from a normal distribution in each training iteration. This aims at giving the decision boundary chances to extract and retract, allowing space for flexible class-separability learning. We demonstrate the superiority of our ElasticFace loss over ArcFace and CosFace losses, using the same geometric transformation, on a large set of mainstream benchmarks. From a wider perspective, our ElasticFace has advanced the state-of-the-art face recognition performance on nine mainstream benchmarks.
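The elastic margin idea can be sketched in a few lines: instead of a constant ArcFace-style penalty, each training sample's margin is drawn from a normal distribution. A minimal sketch (function and parameter names are ours, not the authors'):

```python
import math
import random

def elastic_margin_logit(cos_theta, m_mean=0.5, m_std=0.0125, rng=None):
    """ElasticFace-Arc-style target logit: add a margin drawn from
    N(m_mean, m_std) to the angle, so the penalty varies per sample."""
    rng = rng or random.Random(0)
    m = rng.gauss(m_mean, m_std)                      # random margin this iteration
    theta = math.acos(max(-1.0, min(1.0, cos_theta))) # angle to the class center
    return math.cos(theta + m)
```

With `m_std = 0` this reduces to the fixed ArcFace margin, which is exactly the constraint the paper relaxes.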
The quality of a face image significantly influences the performance of the underlying face recognition algorithm. Face image quality assessment (FIQA) estimates the utility of the captured image in achieving reliable and accurate recognition performance. In this work, we propose a novel learning paradigm that learns internal network observations during the training process. Based on that, our proposed CR-FIQA uses this paradigm to estimate the face image quality of a sample by predicting its relative classifiability. This classifiability is measured based on the allocation of the training sample's feature representation in angular space with respect to its class center and the nearest negative class center. We experimentally illustrate the correlation between face image quality and sample relative classifiability. As this property is only observable for the training dataset, we propose to learn it from the training dataset and utilize it to predict the quality measure on unseen samples. This training is performed simultaneously while optimizing the class centers by an angular-margin-penalty-based softmax loss used for face recognition model training. Through extensive evaluation experiments on eight benchmarks and four face recognition models, we demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.
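The relative classifiability described above can be illustrated with plain cosine similarities: similarity to the sample's own class center (CCS) against the nearest negative class center (NNCCS). The exact ratio used by CR-FIQA may differ, so treat this as a hedged sketch:

```python
def relative_classifiability(feature, centers, label):
    """Sketch of a CR-FIQA-style classifiability score: cosine similarity to
    the own class center divided by (1 + cosine to the nearest negative class
    center). The `1 +` offset keeping the denominator positive is our
    assumption; all vectors are assumed L2-normalized."""
    cos = lambda a, b: sum(x * y for x, y in zip(a, b))
    ccs = cos(feature, centers[label])                               # own-center similarity
    nnccs = max(cos(feature, c) for i, c in enumerate(centers) if i != label)
    return ccs / (1.0 + nnccs)
```

A well-captured face that sits close to its class center and far from every negative center receives a high score, matching the claimed correlation between quality and classifiability.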
Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.
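The two ideas in the abstract, the additive angular margin and the K-sub-center pooling, can be sketched as follows (simplified scalar/list versions, not the authors' implementation):

```python
import math

def arcface_target_logit(cos_theta, m=0.5):
    """Additive angular margin: the target logit cos(theta) becomes
    cos(theta + m), shrinking the target score and enforcing a margin."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return math.cos(theta + m)

def subcenter_similarity(feature, sub_centers):
    """Sub-center ArcFace pooling: a sample only needs to be close to ANY of
    the K sub-centers, so the class similarity is the max over them.
    Vectors are assumed L2-normalized."""
    cos = lambda a, b: sum(x * y for x, y in zip(a, b))
    return max(cos(feature, c) for c in sub_centers)
```

The max-pooling is what lets hard or noisy faces drift to a non-dominant sub-center instead of corrupting the dominant, clean one.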
Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.
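LMCL's margin acts directly on the cosine rather than on the angle; a minimal sketch of the logit construction (the scale and margin defaults follow the common s=64, m=0.35 convention and are not necessarily the paper's exact values):

```python
def lmcl_logits(cos_sims, target, s=64.0, m=0.35):
    """Large margin cosine loss logits: subtract the margin m from the target
    class cosine and scale every logit by s. Features and class weight
    vectors are assumed L2-normalized, so each logit is a pure cosine."""
    return [s * (c - m) if j == target else s * c
            for j, c in enumerate(cos_sims)]
```

Feeding these logits to an ordinary softmax cross-entropy yields the cosine decision margin the abstract describes.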
Face recognition systems have to deal with large variabilities (such as different poses, illuminations, and expressions) that might lead to incorrect matching decisions. These variabilities can be measured in terms of face image quality, which is defined as the utility of a sample for recognition. Previous works on face recognition either do not use this valuable information or make use of non-inherent quality estimates. In this work, we propose a simple and effective face recognition solution (QMagFace) that combines a quality-aware comparison score with a recognition model based on a magnitude-aware angular margin loss. The proposed approach includes model-specific face image qualities in the comparison process to enhance recognition performance under unconstrained circumstances. Exploiting the linearity between the qualities and their comparison scores induced by the utilized loss, our quality-aware comparison function is simple and highly generalizable. Experiments conducted on several face recognition databases and benchmarks demonstrate that the introduced quality-awareness leads to consistent improvements in recognition performance. Moreover, the proposed QMagFace approach performs especially well under challenging circumstances, such as cross-pose, cross-age, or cross-quality. Consequently, it leads to state-of-the-art performance on several face recognition benchmarks, such as 98.50% on AgeDB, 83.97% on XQLFW, and 98.74% on CFP-FP. The code for QMagFace is publicly available.
Softmax-based loss functions and their variants (e.g., CosFace, SphereFace, and ArcFace) significantly improve face recognition performance in wild, unconstrained scenes. A common practice of these algorithms is to optimize the multiplication between the embedding features and a linear transformation matrix. However, in most cases, the dimension of the embedding features is set based on traditional design experience, and there is little study on improving performance using the feature itself when its size is fixed. To address this challenge, this paper presents a softmax approximation method called SubFace, which employs subspace features to promote face recognition performance. Specifically, we dynamically select non-overlapping subspace features in each batch during training, and then use the subspace features to approximate the full feature within the softmax-based loss; in this way, the discriminability of the deep model can be significantly enhanced for face recognition. Comprehensive experiments conducted on benchmark datasets show that our method can significantly improve the performance of a vanilla CNN baseline, which strongly proves the effectiveness of the subspace strategy with margin-based losses.
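The dynamic non-overlapping subspace selection can be sketched as a per-batch shuffle-and-slice of the feature dimensions; the concrete selection rule below is our assumption, not the paper's:

```python
import random

def subspace_partition(dim, k, rng=None):
    """SubFace-style sketch: randomly partition the feature dimensions into
    non-overlapping subspaces of size k for one training batch. Any leftover
    dimensions (dim % k) are simply dropped for that batch."""
    rng = rng or random.Random(0)
    idx = list(range(dim))
    rng.shuffle(idx)
    return [idx[i:i + k] for i in range(0, dim - dim % k, k)]
```

Each subspace is then fed to the softmax-based loss in place of the full feature, approximating it while regularizing the embedding.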
In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e. additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and inter-class difference is large is of great importance in order to achieve good performance. Recently, Large-margin Softmax [10] and Angular Softmax [9] have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization in the paper. Most importantly, our experiments on LFW and MegaFace show that our additive margin softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available.
This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The code has also been made publicly available.
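The three margin families that recur across these abstracts differ only in where the penalty enters the target logit; a side-by-side sketch:

```python
import math

def target_logit(cos_theta, kind, m):
    """Target-class logit under the three margin forms:
    'multiplicative' - A-Softmax / L-Softmax: cos(m * theta)
    'additive_angle' - ArcFace:               cos(theta + m)
    'additive_cos'   - CosFace / AM-Softmax:  cos(theta) - m
    Simplified: this ignores the piecewise extension A-Softmax needs to
    stay monotonic once m * theta exceeds pi."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    if kind == "multiplicative":
        return math.cos(m * theta)
    if kind == "additive_angle":
        return math.cos(theta + m)
    return cos_theta - m
```

All three shrink the target logit relative to the plain cosine, which is what forces the learned features apart by a margin.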
State-of-the-art deep face recognition models proposed in the literature utilize large-scale public datasets such as MS-Celeb-1M and VGGFace2 to train very deep neural networks, achieving state-of-the-art performance on mainstream benchmarks. Recently, many of these datasets, e.g., MS-Celeb-1M and VGGFace2, were retracted due to credible privacy and ethical concerns. This motivated this work to propose and investigate the feasibility of using a privacy-friendly synthetically generated face dataset to train face recognition models. Towards this end, we utilize a class-conditional generative adversarial network to generate class-labeled synthetic face images, namely SFace. To address the privacy aspect of using such data to train a face recognition model, we provide extensive evaluation experiments on the identity relation between the synthetic dataset and the original real dataset used to train the generative model. Our reported evaluation proves that associating an identity of the real dataset with one under the same class label in the synthetic dataset is hardly possible. We also propose to train face recognition models on our privacy-friendly dataset using three different learning strategies: multi-class classification, label-free knowledge transfer, and combined learning of multi-class classification and knowledge transfer. The reported evaluation results on five real face benchmarks demonstrate that the privacy-friendly synthetic dataset has high potential to be used for training face recognition models, achieving, for example, a verification accuracy of 91.87% on LFW using multi-class classification and 99.13% on LFW using the combined learning strategy.
Using the face as a biometric identity trait is motivated by the contactless nature of the capture process and the high accuracy of the recognition algorithms. Following the current COVID-19 pandemic, wearing face masks has been imposed in public places to keep the pandemic under control. However, the occlusion of the face caused by wearing a mask presents an emerging challenge for face recognition systems. In this paper, we present a solution to improve masked face recognition performance. Specifically, we propose the Embedding Unmasking Model (EUM), which operates on top of existing face recognition models. We also propose a novel loss function, the Self-restrained Triplet (SRT), which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities. The evaluation results achieved on three face recognition models, two real masked datasets, and two synthetically generated masked face datasets prove that our proposed approach significantly improves performance in most experimental settings.
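The SRT loss builds on the classic triplet formulation, where a masked-face embedding (anchor) is pulled toward the unmasked embedding of the same identity; the plain triplet baseline that SRT modifies looks like this (the self-restraining mechanism itself is not shown):

```python
def triplet_loss(anchor, positive, negative, margin=0.5):
    """Standard triplet loss on embeddings (Euclidean distance): push the
    anchor at least `margin` closer to the positive than to the negative.
    This is only the common starting point, not the SRT formulation."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```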
State-of-the-art face recognition methods typically adopt a multi-classification pipeline and optimize a softmax-based loss. Although these methods have achieved great success, the softmax-based loss has limitations from the perspective of open-set classification: the multi-classification objective in the training phase does not strictly match the objective of open-set classification testing. In this paper, we derive a new loss named global boundary CosFace (GB-CosFace). Our GB-CosFace introduces an adaptive global boundary to determine whether two faces belong to the same identity, so that the optimization objective is aligned with the testing process from the open-set classification perspective. Meanwhile, since the loss formulation is derived from the softmax-based loss, our GB-CosFace retains the excellent properties of softmax-based losses, and CosFace is proved to be a special case of the proposed loss. We analyze and explain the proposed GB-CosFace geometrically. Comprehensive experiments on multiple face recognition benchmarks indicate that the proposed GB-CosFace outperforms current state-of-the-art face recognition losses on mainstream face recognition tasks. Compared with CosFace, our GB-CosFace improves performance by 1.58%, 0.57%, and 0.28% at TAR@FAR = 1e-6, 1e-5, and 1e-4 on the IJB-C benchmark.
Recent years witnessed the breakthrough of face recognition with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them were applied in the industrial community and played an important role in human life such as device unlock, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training, evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to find the effect of backbone size and data distribution. This survey is a material of the tutorial named The Practical Face Recognition Technology in the Industrial World at FG2023.
The emergence of the global COVID-19 pandemic poses new challenges for biometrics. Not only are contactless biometric identification options becoming more important, but face recognition is now also frequently confronted with masked faces. These masks affect the performance of previous face recognition systems, as they hide important identity information. In this paper, we propose a mask-invariant face recognition solution (MaskInv) that utilizes template-level knowledge distillation within a training paradigm aiming to produce embeddings of masked faces that are similar to those of unmasked faces of the same identities. In addition to the distilled knowledge, the student network benefits from additional guidance by a margin-based identity classification loss, ElasticFace, using masked and unmasked faces. In a step-wise ablation study on two real masked face databases and five mainstream databases with synthetic masks, we prove the rationale of our MaskInv approach. Our proposed solution outperforms previous state-of-the-art (SOTA) academic solutions in the recent MFRC-21 challenge in both scenarios, masked vs. masked and masked vs. non-masked, and also outperforms the previous solution on the MFR2 dataset. Furthermore, we demonstrate that the proposed model can still perform well on unmasked faces with only a minor loss in verification performance. The code, the trained models, and the evaluation protocol on the synthetically masked data are publicly available: https://github.com/fdbtrs/masked-face-recognition-kd.
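Template-level knowledge distillation reduces, at its core, to pulling the student's masked-face embedding toward the teacher's unmasked-face embedding. A hedged sketch of one such distillation term (MSE here; the paper's exact distance may differ):

```python
def template_kd_loss(student_emb, teacher_emb):
    """Template-level distillation term: mean squared error between the
    student embedding of a masked face and the (frozen) teacher embedding
    of the corresponding unmasked face of the same identity."""
    n = len(student_emb)
    return sum((s - t) ** 2 for s, t in zip(student_emb, teacher_emb)) / n
```

During training this term would be combined with the ElasticFace classification loss mentioned in the abstract.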
Deep neural networks have rapidly become the mainstream method for face recognition (FR). However, the extremely large number of parameters these models contain limits their deployment on embedded and low-end devices. In this work, we present an extremely lightweight and accurate FR solution, namely PocketNet. We utilize neural architecture search to develop a new family of lightweight face-specific architectures. We also propose a novel training paradigm based on knowledge distillation (KD), the multi-step KD, in which knowledge is distilled from the teacher model to the student model at different stages of the teacher's training maturity. We conduct a detailed ablation study proving both the sanity of using NAS for the specific task of FR rather than general object classification, and the benefits of our proposed multi-step KD. We present extensive experimental evaluations and comparisons with state-of-the-art (SOTA) compact FR models on nine different benchmarks, including large-scale evaluation benchmarks such as IJB-B, IJB-C, and MegaFace. PocketNets consistently advance the SOTA FR performance on the nine mainstream benchmarks when considering the same level of model compactness. With 0.92M parameters, our smallest network, PocketNetS-128, achieves very competitive results compared to recent SOTA compressed models that contain up to 4M parameters.
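Multi-step KD distills from teacher snapshots taken at different training maturities; one plausible (hypothetical, not the authors') scheduling helper:

```python
def teacher_stage(epoch, snapshot_epochs):
    """Pick which teacher snapshot supervises the student at this epoch:
    early student epochs learn from an early (less mature) teacher snapshot,
    later epochs from later ones. The exact schedule is our assumption."""
    for stage, boundary in enumerate(snapshot_epochs):
        if epoch < boundary:
            return stage
    return len(snapshot_epochs) - 1
```

The intuition is that a freshly initialized student gains more from a teacher at a comparable stage of training than from the fully converged one.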
In this paper, we attempt to draw a connection between frontal and profile face images in an abstract embedding space. We exploit this connection using a coupled-encoder network to project frontal/profile face images into a common latent embedding space. The proposed model forces the similarity of the representations in the embedding space by maximizing the mutual information between the two views of the face. The proposed coupled encoder benefits from three contributions for matching faces with extreme pose disparities. First, we employ our pose-aware contrastive learning to maximize the mutual information between the frontal and profile representations of the same identity. Second, a memory buffer, consisting of latent representations accumulated over past iterations, is integrated into the model so that it can refer to relatively many more instances than the mini-batch size allows. Third, a novel pose-aware adversarial domain adaptation method forces the model to learn an asymmetric mapping from profile to frontal representations. In our framework, the coupled encoders learn to enlarge the margin between the distributions of genuine and impostor pairs, which leads to high mutual information between different views of the same identity. The effectiveness of the proposed model is investigated through extensive experiments, evaluations, and ablation studies on four benchmark datasets, and comparisons with compelling state-of-the-art algorithms.
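Maximizing mutual information between the frontal and profile views is typically done with an InfoNCE-style contrastive objective over the memory buffer; a generic sketch (not the paper's exact loss):

```python
import math

def info_nce(sim_positive, sims_negative, temperature=0.07):
    """InfoNCE-style loss for one anchor: the paired view is the positive,
    buffer entries are negatives. Inputs are cosine similarities; a lower
    loss corresponds to a higher mutual-information lower bound."""
    logits = [sim_positive / temperature] + [s / temperature for s in sims_negative]
    peak = max(logits)                                     # numerical stability
    log_norm = peak + math.log(sum(math.exp(l - peak) for l in logits))
    return log_norm - sim_positive / temperature
```

The memory buffer from the abstract simply enlarges `sims_negative` well beyond what a mini-batch alone could supply.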
Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity and excellent performance, the component does not explicitly encourage discriminative learning of features. In this paper, we propose a generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features. Moreover, L-Softmax not only can adjust the desired margin but also can avoid overfitting. We also show that the L-Softmax loss can be optimized by typical stochastic gradient descent. Extensive experiments on four benchmark datasets demonstrate that the deeply-learned features with L-softmax loss become more discriminative, hence significantly boosting the performance on a variety of visual classification and verification tasks.
Person re-identification is a challenging task because of the high intra-class variance induced by the unrestricted nuisance factors of variations such as pose, illumination, viewpoint, background, and sensor noise. Recent approaches postulate that powerful architectures have the capacity to learn feature representations invariant to nuisance factors, by training them with losses that minimize intra-class variance and maximize inter-class separation, without modeling nuisance factors explicitly. The dominant approaches use either a discriminative loss with margin, like the softmax loss with the additive angular margin, or a metric learning loss, like the triplet loss with batch hard mining of triplets. Since the softmax imposes feature normalization, it limits the gradient flow supervising the feature embedding. We address this by joining the losses and leveraging the triplet loss as a proxy for the missing gradients. We further improve invariance to nuisance factors by adding the discriminative task of predicting attributes. Our extensive evaluation highlights that when only a holistic representation is learned, we consistently outperform the state-of-the-art on the three most challenging datasets. Such representations are easier to deploy in practical systems. Finally, we found that joining the losses removes the requirement for having a margin in the softmax loss while increasing performance.
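Joining the two supervision signals amounts to summing a softmax cross-entropy over (already margin-adjusted) logits and a triplet term on embedding distances; a compact sketch of the combination (the weighting is our assumption):

```python
import math

def joint_loss(logits, target, d_pos, d_neg, margin=0.3, w_triplet=1.0):
    """Joint supervision sketch: softmax cross-entropy plus a triplet term on
    embedding distances, the triplet acting as a proxy for the gradients that
    the normalized softmax cannot provide to the feature embedding."""
    peak = max(logits)                                     # stable log-sum-exp
    log_norm = peak + math.log(sum(math.exp(l - peak) for l in logits))
    ce = log_norm - logits[target]
    triplet = max(0.0, d_pos - d_neg + margin)             # batch-hard distances
    return ce + w_triplet * triplet
```

When the triplet constraint is already satisfied (positive well inside the margin), the combined loss gracefully falls back to the classification term alone.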
SARS-CoV-2 has presented direct and indirect challenges to the scientific community. One of the most prominent indirect challenges stems from the mandatory use of face masks in a large number of countries. Face recognition methods struggle to perform identity verification with similar accuracy on masked and unmasked individuals. It has been shown that the performance of these methods drops considerably in the presence of face masks, especially if the reference image is unmasked. We propose FocusFace, a multi-task architecture that uses contrastive learning to be able to accurately perform masked face recognition. The proposed architecture is designed to be trained from scratch or to work on top of state-of-the-art face recognition methods without sacrificing the capabilities of the existing models on conventional face recognition tasks. We also explore different approaches to design the contrastive learning module. Results are presented in terms of masked-masked (M-M) and unmasked-masked (U-M) face verification performance. For both settings, the results are on par with published methods, but for M-M the proposed method is able to outperform all the solutions it was compared against. We further show that when using our method on top of existing methods, the training computational cost decreases significantly while similar performance is retained. The implementation and trained models are available on GitHub.
Deep-learning-based face recognition models follow the common trend in deep neural networks of utilizing full-precision floating-point networks with high computational cost. Deploying such networks in use cases constrained by computational requirements is often infeasible due to the large memory required by the full-precision model. Previous compact face recognition approaches proposed to design special compact architectures and train them from scratch using real training data, which may not be available in real-world scenarios due to privacy concerns. We present in this work the QuantFace solution, based on low-bit-precision-format model quantization. QuantFace reduces the computational cost required by existing face recognition models without the need to design a particular architecture or access real training data. QuantFace introduces privacy-friendly synthetic face data into the quantization process to mitigate potential privacy concerns and issues related to the accessibility of real training data. Through extensive evaluation experiments on seven benchmarks and four network architectures, we demonstrate that QuantFace can successfully reduce the model size by up to 5x while largely maintaining the verification performance of the full-precision model, without accessing real training data.
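Low-bit quantization of a trained model can be illustrated with symmetric uniform quantization of a weight tensor (per-tensor scale; QuantFace's actual scheme and its calibration with synthetic data are more involved than this sketch):

```python
def quantize_dequantize(weights, bits=8):
    """Symmetric uniform quantization sketch: map floats to signed integers
    in [-(2**(bits-1) - 1), 2**(bits-1) - 1] and back, which is the
    round-trip a quantized model effectively computes."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    if scale == 0.0:
        return list(weights)
    return [round(w / scale) * scale for w in weights]
```

Dropping `bits` from 32 to 8 (or lower) is where the up-to-5x model-size reduction comes from, at the cost of the per-weight rounding error bounded by the scale.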
The classification loss functions used in deep neural network classifiers can be grouped into two categories based on maximizing the margin in either Euclidean or angular spaces. Euclidean distances between sample vectors are used during classification for the methods maximizing the margin in Euclidean spaces whereas the Cosine similarity distance is used during the testing stage for the methods maximizing margin in the angular spaces. This paper introduces a novel classification loss that maximizes the margin in both the Euclidean and angular spaces at the same time. This way, the Euclidean and Cosine distances will produce similar and consistent results and complement each other, which will in turn improve the accuracies. The proposed loss function enforces the samples of classes to cluster around the centers that represent them. The centers approximating classes are chosen from the boundary of a hypersphere, and the pairwise distances between class centers are always equivalent. This restriction corresponds to choosing centers from the vertices of a regular simplex. There is not any hyperparameter that must be set by the user in the proposed loss function, therefore the use of the proposed method is extremely easy for classical classification problems. Moreover, since the class samples are compactly clustered around their corresponding means, the proposed classifier is also very suitable for open set recognition problems where test samples can come from the unknown classes that are not seen in the training phase. Experimental studies show that the proposed method achieves the state-of-the-art accuracies on open set recognition despite its simplicity.
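The fixed, pairwise-equidistant class centers described above correspond to vertices of a regular simplex; one standard construction (a sketch, not necessarily the authors' code):

```python
def simplex_class_centers(num_classes):
    """Place num_classes centers at the vertices of a regular simplex on the
    unit hypersphere in R^num_classes: take the standard basis vectors,
    subtract their common centroid, and L2-normalize. Every pair of centers
    then has the same cosine similarity, -1 / (num_classes - 1)."""
    centroid = 1.0 / num_classes
    centers = []
    for i in range(num_classes):
        v = [(1.0 if j == i else 0.0) - centroid for j in range(num_classes)]
        norm = sum(x * x for x in v) ** 0.5
        centers.append([x / norm for x in v])
    return centers
```

Because the centers are fixed in advance and mutually equidistant, no margin hyperparameter is left for the user to tune, matching the abstract's claim.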