Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2-normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, YouTube Faces (YTF), and Labeled Faces in the Wild (LFW). We achieve state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.
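As a concrete illustration of the LMCL reformulation described above, here is a minimal PyTorch-style sketch of a cosine-margin classification head; the function name and the scale/margin defaults (s = 30, m = 0.35) are placeholders rather than the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def lmcl_logits(features, weight, labels, s=30.0, m=0.35):
    """Large margin cosine loss (LMCL)-style logits.

    features: (N, d) raw embeddings; weight: (C, d) class weight vectors.
    """
    # L2-normalize both features and class weights so the logits become cosines
    cos = F.linear(F.normalize(features), F.normalize(weight))        # (N, C)
    # subtract the cosine margin m from the target-class logit only
    onehot = F.one_hot(labels, num_classes=weight.size(0)).to(cos.dtype)
    return s * (cos - m * onehot)

# usage: loss = F.cross_entropy(lmcl_logits(x, W, y), y)
```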
This paper addresses the deep face recognition (FR) problem under the open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, the A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of the angular margin can be quantitatively adjusted by a parameter m. We further derive specific values of m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge show the superiority of the A-Softmax loss in FR tasks. The code has also been made publicly available.
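For reference, the A-Softmax objective summarized above can be written out as follows (reconstructed from the standard formulation, assuming N training samples, unit-norm class weights, zero biases, and an integer angular margin m):

$$
L_{\text{A-Softmax}} = -\frac{1}{N}\sum_{i}\log\frac{e^{\|x_i\|\,\psi(\theta_{y_i,i})}}{e^{\|x_i\|\,\psi(\theta_{y_i,i})} + \sum_{j\neq y_i} e^{\|x_i\|\cos\theta_{j,i}}},\qquad
\psi(\theta) = (-1)^{k}\cos(m\theta) - 2k,\ \ \theta\in\left[\tfrac{k\pi}{m},\tfrac{(k+1)\pi}{m}\right],\ k\in\{0,\dots,m-1\}.
$$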
In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e. additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and inter-class difference is large is of great importance in order to achieve good performance. Recently, Large-margin Softmax [10] and Angular Softmax [9] have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization in the paper. Most importantly, our experiments on LFW and MegaFace show that our additive margin softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available.
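To make the contrast between the multiplicative and additive margins concrete, the target-class logits of the two families can be summarized as follows (standard notation: $\theta_y$ is the angle between the feature and the target-class weight, $s$ the feature scale, and $\psi$ the piecewise function given above):

$$
f_{\text{multiplicative (L-/A-Softmax)}} = \|x\|\,\psi(\theta_y),\qquad
f_{\text{additive (AM-Softmax)}} = s\,(\cos\theta_y - m).
$$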
Learning discriminative face features plays a major role in building high-performing face recognition models. Recent state-of-the-art face recognition solutions propose incorporating a fixed penalty margin into the commonly used classification loss function, the softmax loss, to increase the discriminative power of the face recognition model by minimizing intra-class variation and maximizing inter-class variation. Margin-penalty softmax losses such as ArcFace and CosFace assume that the geodesic distance between and within different identities can be equally learned using a fixed penalty margin. However, such a learning objective is not realistic for real data with inconsistent intra-class variation, which may limit the discriminative ability and generalizability of the face recognition model. In this paper, we relax the fixed penalty-margin constraint by proposing an elastic penalty-margin loss (ElasticFace) that allows flexibility in pushing class separability. The main idea is to use margin values drawn from a normal distribution in each training iteration. This aims to give the decision boundary the chance to extract and retract, leaving space for flexible class-separability learning. We demonstrate the advantage of our ElasticFace loss over the CosFace loss, using the same geometric transformation, on a large set of mainstream benchmarks. From a broader perspective, ElasticFace achieves state-of-the-art face recognition performance on nine mainstream benchmarks.
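A minimal sketch of the ElasticFace idea with a cosine margin is given below, assuming the margin is re-drawn per sample from N(m, σ) at every iteration; the function name and the default values of s, m, and σ are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def elastic_cosine_logits(features, weight, labels, s=64.0, m=0.35, sigma=0.05):
    """ElasticFace-style cosine logits with a randomly drawn margin."""
    cos = F.linear(F.normalize(features), F.normalize(weight))          # (N, C)
    # one margin per sample, drawn from a normal distribution N(m, sigma)
    margins = m + sigma * torch.randn(features.size(0), device=features.device)
    onehot = F.one_hot(labels, num_classes=weight.size(0)).to(cos.dtype)
    return s * (cos - margins.unsqueeze(1) * onehot)
```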
Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to the massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance through automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem, mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserved face images for both subjects inside and outside the training data only by using the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.
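The following PyTorch-style sketch illustrates the sub-center idea described above: each class owns K sub-centers, the per-class logit is the maximum cosine over them, and the usual additive angular margin is then applied to the target class. The hyper-parameter values (K = 3, s = 64, m = 0.5) are assumed defaults.

```python
import torch
import torch.nn.functional as F

def subcenter_arcface_logits(features, weight, labels, K=3, s=64.0, m=0.5):
    """weight: (C*K, d) with rows grouped per class; features: (N, d)."""
    cos = F.linear(F.normalize(features), F.normalize(weight))          # (N, C*K)
    cos = cos.view(features.size(0), -1, K).max(dim=2).values           # (N, C)
    theta = torch.acos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    onehot = F.one_hot(labels, num_classes=cos.size(1)).bool()
    # additive angular margin on the target class only (ArcFace)
    logits = torch.where(onehot, torch.cos(theta + m), cos)
    return s * logits
```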
Softmax-based loss functions and their variants (e.g., CosFace, SphereFace, and ArcFace) significantly improve face recognition performance in wild, unconstrained scenes. A common practice of these algorithms is to optimize the multiplication between the embedding features and the linear transformation matrix. However, in most cases the dimension of the embedding feature is set according to conventional design experience, and there is less research on how to use the feature itself to improve performance when its dimension is fixed. To address this challenge, this paper presents a softmax approximation method called SubFace, which employs subspace features to boost face recognition performance. Specifically, we dynamically select non-overlapping subspace features in each batch during training, and then use the subspace features to approximate the full feature in the softmax-based loss; in this way, the discriminability of the deep model can be significantly enhanced for face recognition. Comprehensive experiments on benchmark datasets show that our method can significantly improve the performance of vanilla CNN baselines, which strongly demonstrates the effectiveness of the subspace strategy with margin-based losses.
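One possible reading of the subspace strategy described above is sketched below: per batch, a random subset of embedding dimensions is selected and the cosine logits are computed on that subspace only. This is an assumption-driven illustration, not the paper's exact selection procedure.

```python
import torch
import torch.nn.functional as F

def subspace_cosine_logits(features, weight, sub_dim, s=64.0):
    """Compute cosine logits on a randomly chosen feature subspace."""
    idx = torch.randperm(features.size(1), device=features.device)[:sub_dim]
    f_sub = F.normalize(features[:, idx])
    w_sub = F.normalize(weight[:, idx])
    return s * F.linear(f_sub, w_sub)   # feed into a softmax/margin-based loss
```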
With the recent advances in deep convolutional neural networks, significant progress has been made in general face recognition. However, state-of-the-art general face recognition models do not generalize well to occluded face images, which are exactly the common cases in real-world scenarios. The potential reasons are the lack of large-scale occluded face data for training and of specific designs for tackling the corrupted features brought by occlusions. This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network. Our method, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features in the deep convolutional neural network and cleans them with dynamically learned masks. In addition, we construct massive occluded face images to train FROM effectively and efficiently. Compared with existing methods that either rely on external detectors to discover the occlusions or employ shallower, less discriminative models, FROM is simple yet powerful. Experimental results on LFW, MegaFace Challenge 1, RMF2, the AR dataset, and other simulated occluded/masked datasets confirm that FROM dramatically improves the accuracy under occlusions and generalizes well to general face recognition.
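The core mechanism described above, predicting a mask that suppresses corrupted feature maps, can be sketched roughly as follows; the module name and layer sizes are illustrative assumptions and not the paper's exact mask-decoder architecture.

```python
import torch
import torch.nn as nn

class FeatureMasking(nn.Module):
    """Predict a soft mask from backbone feature maps and apply it to suppress
    features corrupted by occlusion (a sketch of the idea, not the exact FROM
    mask decoder)."""
    def __init__(self, channels=512):
        super().__init__()
        self.mask_decoder = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.Sigmoid(),                        # per-element mask in [0, 1]
        )

    def forward(self, feat):                     # feat: (N, C, H, W)
        mask = self.mask_decoder(feat)
        return feat * mask                       # cleaned features for the embedding head
```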
Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity and excellent performance, the component does not explicitly encourage discriminative learning of features. In this paper, we propose a generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features. Moreover, L-Softmax not only can adjust the desired margin but also can avoid overfitting. We also show that the L-Softmax loss can be optimized by typical stochastic gradient descent. Extensive experiments on four benchmark datasets demonstrate that the deeply-learned features with the L-Softmax loss become more discriminative, hence significantly boosting the performance on a variety of visual classification and verification tasks.
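For completeness, the L-Softmax loss for a sample $(x_i, y_i)$ can be written as below (reconstructed from the standard formulation; unlike A-Softmax, the class weights are not normalized here):

$$
L_i = -\log\frac{e^{\|W_{y_i}\|\|x_i\|\,\psi(\theta_{y_i})}}{e^{\|W_{y_i}\|\|x_i\|\,\psi(\theta_{y_i})} + \sum_{j\neq y_i} e^{\|W_j\|\|x_i\|\cos\theta_j}},\qquad
\psi(\theta) = (-1)^{k}\cos(m\theta) - 2k,\ \ \theta\in\left[\tfrac{k\pi}{m},\tfrac{(k+1)\pi}{m}\right].
$$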
The classification loss functions used in deep neural network classifiers can be grouped into two categories based on maximizing the margin in either Euclidean or angular spaces. Euclidean distances between sample vectors are used during classification for the methods maximizing the margin in Euclidean spaces, whereas the cosine similarity is used during the testing stage for the methods maximizing the margin in angular spaces. This paper introduces a novel classification loss that maximizes the margin in both the Euclidean and angular spaces at the same time. This way, the Euclidean and cosine distances produce similar and consistent results and complement each other, which in turn improves the accuracies. The proposed loss function enforces the samples of classes to cluster around the centers that represent them. The centers approximating the classes are chosen from the boundary of a hypersphere, and the pairwise distances between class centers are always equal. This restriction corresponds to choosing the centers from the vertices of a regular simplex. There is no hyperparameter that must be set by the user in the proposed loss function; therefore, the use of the proposed method is extremely easy for classical classification problems. Moreover, since the class samples are compactly clustered around their corresponding means, the proposed classifier is also very suitable for open set recognition problems where test samples can come from unknown classes that are not seen in the training phase. Experimental studies show that the proposed method achieves state-of-the-art accuracies on open set recognition despite its simplicity.
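The fixed, equidistant class centers described above can be constructed as the vertices of a regular simplex; a small NumPy sketch is given below (this uses an R^C construction for illustration and does not reproduce how the paper embeds the centers in the feature space).

```python
import numpy as np

def simplex_class_centers(num_classes):
    """Return num_classes unit vectors with all pairwise distances equal:
    take the standard basis, subtract the centroid, and renormalize."""
    eye = np.eye(num_classes)
    centered = eye - eye.mean(axis=0, keepdims=True)
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)

centers = simplex_class_centers(5)
# off-diagonal inner products are all -1/(C-1) = -0.25, i.e. equidistant centers
print(np.round(centers @ centers.T, 3))
```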
State-of-the-art face recognition methods typically adopt a multi-classification pipeline and use softmax-based losses for optimization. Although these methods have achieved great success, softmax-based losses have limitations from the perspective of open-set classification: the multi-classification objective of the training phase does not strictly match the objective of open-set classification testing. In this paper, we derive a new loss named Global Boundary CosFace (GB-CosFace). Our GB-CosFace introduces an adaptive global boundary to determine whether two faces belong to the same identity, so that the optimization objective is aligned with the testing process from the perspective of open-set classification. Meanwhile, since the loss formulation is derived from the softmax-based loss, our GB-CosFace retains the excellent properties of softmax-based losses, and CosFace is proved to be a special case of the proposed loss. We analyze and interpret the proposed GB-CosFace geometrically. Comprehensive experiments on multiple face recognition benchmarks show that the proposed GB-CosFace outperforms current state-of-the-art face recognition losses on mainstream face recognition tasks. Compared with CosFace, our GB-CosFace improves by 1.58%, 0.57%, and 0.28% at TAR@FAR = 1e-6, 1e-5, and 1e-4 on the IJB-C benchmark.
Recent years witnessed the breakthrough of face recognition with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in daily life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as the material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on the contrastive loss and the triplet loss often suffer from slow convergence, partially because they employ only one negative example and do not interact with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called the multi-class N-pair loss. The proposed objective function firstly generalizes the triplet loss by allowing joint comparison among more than one negative example (more specifically, N−1 negative examples), and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1)×N. We demonstrate the superiority of our proposed loss over the triplet loss as well as other competing loss functions on a variety of tasks on several visual recognition benchmarks, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification.
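A minimal sketch of the multi-class N-pair loss with the efficient batch construction is shown below, assuming a batch of N anchor/positive embedding pairs from N distinct classes; the row-wise cross-entropy is algebraically equivalent to log(1 + Σ_{j≠i} exp(f_i·f_j⁺ − f_i·f_i⁺)).

```python
import torch
import torch.nn.functional as F

def n_pair_loss(anchors, positives):
    """anchors, positives: (N, d) embeddings of N distinct classes; for each
    anchor the other N-1 positives serve as negatives."""
    logits = anchors @ positives.t()                      # (N, N) similarity matrix
    targets = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, targets)
```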
This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity s_p and minimize the between-class similarity s_n. We find that a majority of loss functions, including the triplet loss and the softmax cross-entropy loss, embed s_n and s_p into similarity pairs and seek to reduce (s_n − s_p). Such an optimization manner is inflexible, because the penalty strength on every single similarity score is restricted to be equal. Our intuition is that if a similarity score deviates far from the optimum, it should be emphasized. To this end, we simply re-weight each similarity to highlight the less-optimized similarity scores. The result is a Circle loss, named for its circular decision boundary. The Circle loss has a unified formula for two elemental deep feature learning paradigms, i.e., learning with class-level labels and learning with pair-wise labels. Analytically, we show that the Circle loss offers a more flexible optimization approach towards a more definite convergence target, compared with loss functions optimizing (s_n − s_p). Experimentally, we demonstrate the superiority of the Circle loss on a variety of deep feature learning tasks. On face recognition, person re-identification, as well as several fine-grained image retrieval datasets, the achieved performance is on par with the state of the art.
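The unified Circle loss formula referred to above is, in the notation of the abstract (K within-class similarities $s_p^i$, L between-class similarities $s_n^j$, scale factor $\gamma$, relaxation margin $m$):

$$
L_{\text{circle}} = \log\!\Big[1 + \sum_{j=1}^{L}\exp\!\big(\gamma\,\alpha_n^{j}(s_n^{j}-\Delta_n)\big)\sum_{i=1}^{K}\exp\!\big(-\gamma\,\alpha_p^{i}(s_p^{i}-\Delta_p)\big)\Big],\qquad
\alpha_p^{i}=\big[O_p-s_p^{i}\big]_+,\ \ \alpha_n^{j}=\big[s_n^{j}-O_n\big]_+,
$$

with the recommended setting $O_p=1+m$, $O_n=-m$, $\Delta_p=1-m$, $\Delta_n=m$.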
In this paper, we investigate the problem of predictive confidence in face and kinship verification. Most existing face and kinship verification methods focus on accuracy performance while ignoring confidence estimation for their prediction results. However, confidence estimation is essential for modeling reliability in such high-risk tasks. To address this issue, we first introduce a novel yet simple confidence measure for face and kinship verification, which allows the verification models to transform the similarity score into a confidence score for a given face pair. We further propose a confidence-calibrated approach called angular scaling calibration (ASC). ASC is easy to implement and can be directly applied to existing face and kinship verification models without model modifications, yielding accuracy-preserving and confidence-calibrated probabilistic verification models. To the best of our knowledge, our approach is the first general confidence-calibrated solution to face and kinship verification in a modern context. We conduct extensive experiments on four widely used face and kinship verification datasets, and the results demonstrate the effectiveness of our approach.
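As a generic illustration of turning a verification similarity into a calibrated confidence (this is a plain Platt-style logistic calibration, not the paper's angular scaling calibration), one could fit a one-dimensional calibrator on held-out genuine/impostor pairs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_similarity_calibrator(similarities, labels):
    """Fit a logistic mapping from similarity scores (e.g., cosine) to
    probabilities; labels are 1 for genuine pairs and 0 for impostor pairs."""
    calibrator = LogisticRegression()
    calibrator.fit(np.asarray(similarities).reshape(-1, 1), np.asarray(labels))
    return calibrator

# usage: confidences = calibrator.predict_proba(scores.reshape(-1, 1))[:, 1]
```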
One of the main challenges of feature representation in deep-learning-based classification is to design appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins into well-established losses in order to enforce extra intra-class compactness and inter-class separability, which, however, were developed through heuristic means rather than rigorous mathematical principles. In this work, we attempt to address this limitation by formulating a principled optimization objective as maximal margin. Specifically, we first define the class margin as the measure of inter-class separability and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions for existing margin-based losses. This principled framework not only offers new perspectives to understand and interpret existing margin-based losses, but also provides new insights that can guide the design of new tools, including sample-margin regularization and a largest-margin softmax loss for the class-balanced case, and zero-centroid regularization for the class-imbalanced case. Experimental results demonstrate the effectiveness of our strategy on a variety of tasks, including visual classification, imbalanced classification, person re-identification, and face verification.
We propose a quality-aware multimodal recognition framework that combines representations from multiple biometric traits with varying quality and numbers of samples to achieve increased recognition accuracy by extracting complementary identification information based on the quality of the samples. We develop a quality-aware framework that fuses representations of the input modalities by weighting them with quality scores estimated in a weakly supervised fashion. This framework utilizes two fusion blocks, each represented by a set of quality-aware and aggregation networks. In addition to the architectural modifications, we propose two task-specific loss functions: a multimodal separability loss and a multimodal compactness loss. The first loss ensures that the representations of the modalities of a class have comparable magnitudes to provide better quality estimation, while the multimodal representations of different classes are spread out to achieve maximum discrimination in the embedding space. The second loss, which can be viewed as a regularizer of the network weights, improves the generalization performance by regularizing the framework. We evaluate the performance by considering three multimodal datasets consisting of face, iris, and fingerprint modalities. The efficacy of the framework is demonstrated through comparison with state-of-the-art algorithms. In particular, our framework outperforms rank-level and score-level fusion of the modalities of BioMdata by more than 30% for a true acceptance rate at a false acceptance rate of 10^{-4}.
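A minimal sketch of quality-weighted fusion is given below as an interpretation of the abstract (the module name, layer sizes, and the softmax weighting are assumptions, not the paper's exact fusion blocks): each modality embedding receives a scalar quality score and the fused representation is their weighted sum.

```python
import torch
import torch.nn as nn

class QualityWeightedFusion(nn.Module):
    """Fuse M modality embeddings using learned per-sample quality weights."""
    def __init__(self, dim=512):
        super().__init__()
        self.quality_head = nn.Linear(dim, 1)    # quality score per modality

    def forward(self, modality_feats):           # (N, M, dim)
        q = self.quality_head(modality_feats)    # (N, M, 1) quality logits
        w = torch.softmax(q, dim=1)              # weights sum to 1 over modalities
        return (w * modality_feats).sum(dim=1)   # (N, dim) fused embedding
```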
In this paper, we seek to draw connections between frontal and profile face images in an abstract embedding space. We exploit this connection using a coupled-encoder network that projects frontal/profile face images into a common latent embedding space. The proposed model forces the similarity of representations in the embedding space by maximizing the mutual information between two views of the face. The proposed coupled encoder benefits from three contributions for matching faces with extreme pose disparities. First, we exploit our pose-aware contrastive learning to maximize the mutual information between the representations of frontal and profile faces of the same identity. Second, a memory buffer, consisting of latent representations accumulated over past iterations, is integrated into the model so that it can refer to relatively many more instances than the mini-batch size allows. Third, a novel pose-aware adversarial domain adaptation method forces the model to learn an asymmetric mapping from profile to frontal representations. In our framework, the coupled encoder learns to enlarge the margin between the distributions of genuine and impostor face pairs, which results in high mutual information between different views of the same identity. The effectiveness of the proposed model is investigated through extensive experiments, evaluations, and ablation studies on four benchmark datasets and comparisons with compelling state-of-the-art algorithms.
In recent years, face recognition systems have achieved extraordinary success owing to promising advances in deep learning architectures. However, they still fail to achieve the expected accuracy when matching profile images against a gallery of frontal images. Current approaches either perform pose normalization (i.e., frontalization) or disentangle pose information for face recognition. Instead, we propose a new approach that uses pose as auxiliary information via an attention mechanism. In this paper, we hypothesize that pose-attended information, obtained through an attention mechanism, can guide contextual and distinctive feature extraction from profile faces, which further enables better representation learning in the embedded domain. To achieve this, we first design a unified coupled profile-to-frontal face recognition network. It learns a mapping from faces to a compact embedding subspace via a class-specific contrastive loss. Second, we develop a novel pose attention block (PAB) to specifically guide the extraction of pose-agnostic features from profile faces. More specifically, PAB is designed to explicitly help the network focus on important features along both the channel and spatial dimensions, while learning discriminative yet pose-invariant features in an embedding subspace. To validate the effectiveness of the proposed method, we conduct experiments on both controlled and in-the-wild benchmarks, including Multi-PIE, CFP, and IJB-C, and demonstrate superiority over the state of the art.
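In the spirit of the pose attention block described above, a generic channel-plus-spatial attention module might look like the sketch below; this is an assumed stand-in (CBAM-style), not the exact PAB architecture.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Re-weight features along the channel and spatial dimensions."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):                              # x: (N, C, H, W)
        x = x * self.channel_gate(x)                   # channel attention
        spatial = self.spatial_gate(x.mean(dim=1, keepdim=True))
        return x * spatial                             # spatial attention
```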
With the recent success of deep neural networks, remarkable progress has been made in face recognition. However, collecting large-scale real-world training data for face recognition has been challenging, especially due to label noise and privacy concerns. Meanwhile, existing face recognition datasets are usually collected from web images and lack detailed annotations on attributes (e.g., pose and expression), so the influence of different attributes on face recognition has been poorly investigated. In this paper, we address the above problems in face recognition using synthetic face images, i.e., SynFace. Specifically, we first explore the performance gap between recent state-of-the-art face recognition models trained with synthetic and real face images. We then analyze the underlying causes behind the performance gap, e.g., the poor intra-class variations and the domain gap between synthetic and real face images. Inspired by this, we devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the above performance gap, demonstrating the great potential of synthetic data for face recognition. Furthermore, with the controllable face synthesis model, we can easily manage different factors of synthetic face generation, including pose, expression, illumination, the number of identities, and samples per identity. Therefore, we also perform a systematic empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
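As a rough illustration of the domain mixup (DM) idea mentioned above, synthetic and real training images can be linearly interpolated with a Beta-distributed coefficient; the function name and the value of alpha are assumptions, and the two supervision terms would be interpolated with the same coefficient.

```python
import torch

def domain_mixup(synthetic, real, alpha=0.4):
    """Mix a synthetic and a real batch of face images of shape (N, C, H, W)."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    n = min(synthetic.size(0), real.size(0))
    mixed = lam * synthetic[:n] + (1.0 - lam) * real[:n]
    return mixed, lam   # reuse lam to interpolate the two supervision terms
```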
Convolutional neural networks (CNNs) deliver tremendous performance in supervised settings. Representations learned by CNNs that operate on hyperspherical manifolds have led to remarkable results in face verification, face identification, and other supervised tasks. A broad range of activation functions developed with this hyperspherical intuition performs better than softmax in Euclidean space. The main motivation of this research is to provide insights. First, stereographic projection is employed to transform data from Euclidean space ($\mathbb{R}^{n}$) to a hyperspherical manifold ($\mathbb{S}^{n}$) in order to analyze the performance of angular margin losses. Second, we theoretically show that decision boundaries constructed on the hypersphere via stereographic projection facilitate the learning of the neural network. Experiments demonstrate that applying stereographic projection on top of existing state-of-the-art angular margin objective functions improves performance on standard image classification datasets (CIFAR-10, 100). Furthermore, we ran our experiments on malaria thin-blood-smear images, yielding effective results. The code is publicly available: https://github.com/barulalithb/stereo-angular-margin.
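The transformation referred to above, taking Euclidean features in R^n onto the unit hypersphere S^n, is the inverse stereographic projection; a small sketch follows (the standard formula only, without the paper's margin-loss integration).

```python
import torch

def inverse_stereographic_projection(x):
    """Map x in R^n to a unit vector in R^{n+1} lying on the hypersphere S^n."""
    sq = (x * x).sum(dim=-1, keepdim=True)                  # ||x||^2, shape (N, 1)
    return torch.cat([2 * x, sq - 1], dim=-1) / (sq + 1)    # (N, n+1), unit norm
```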