How to improve discriminative feature learning is central to classification. Existing works address this problem by explicitly increasing inter-class separability and intra-class similarity, whether by constructing positive and negative pairs for contrastive learning or by imposing tighter class-separating margins. These methods do not exploit the similarity between different classes, as they adhere to the i.i.d. assumption on the data. In this paper, we embrace the real-world data distribution setting in which some classes share semantic overlaps due to their similar appearances or concepts. Based on this hypothesis, we propose a novel regularization to improve discriminative learning. We first calibrate the estimated highest likelihood of one sample based on its semantically neighboring classes, then encourage the overall likelihood predictions to be deterministic by imposing an adaptive exponential penalty. As the gradient of the proposed method is roughly proportional to the uncertainty of the predicted likelihoods, we name it adaptive discriminative regularization (ADR), trained along with a standard cross-entropy loss in classification. Extensive experiments demonstrate that it yields consistent and non-trivial performance improvements across a variety of visual classification tasks (over 10 benchmarks). Furthermore, we find it is robust to long-tailed and noisy-label data distributions. Its flexible design makes it compatible with mainstream classification architectures and losses.
Softmax-based loss functions and their variants (e.g., CosFace, SphereFace, and ArcFace) significantly improve face recognition performance in wild, unconstrained scenes. A common practice of these algorithms is to optimize the multiplication between the embedding features and a linear transformation matrix. However, in most cases, the dimension of the embedding feature is given based on conventional design experience, and there is less research on how to improve performance using the feature itself when its dimension is fixed. To address this challenge, this paper proposes a softmax approximation method called SubFace, which employs subspace features to promote the performance of face recognition. Specifically, we dynamically select non-overlapping subspace features in each batch during training, and then use the subspace features to approximate the full feature in the softmax-based loss; in this way, the discriminability of the deep model can be significantly enhanced for face recognition. Comprehensive experiments conducted on benchmark datasets demonstrate that our method can significantly improve the performance of vanilla CNN baselines, which strongly proves the effectiveness of the subspace strategy with margin-based losses.
Despite the dominant performance of deep neural networks, recent works have shown that they are poorly calibrated, resulting in over-confident predictions. Miscalibration can be exacerbated by overfitting due to the minimization of cross entropy during training, as it promotes the predicted softmax probabilities to match the one-hot label assignments. This yields a pre-softmax activation for the correct class that is significantly larger than the remaining activations. Recent evidence from the literature suggests that loss functions that embed implicit or explicit maximization of the predictive entropy achieve state-of-the-art calibration performance. We provide a unified constrained-optimization perspective on current state-of-the-art calibration losses. Specifically, these losses can be viewed as approximations of a linear penalty (or Lagrangian) imposing equality constraints on logit distances. This points to an important limitation of such underlying equality constraints, whose ensuing gradients constantly push towards a non-informative solution, which may prevent reaching the best compromise between the discriminative performance and the calibration of the model during gradient-based optimization. Following our observations, we propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances. Comprehensive experiments on a variety of image classification, semantic segmentation, and NLP benchmarks demonstrate that our method sets novel state-of-the-art results on these tasks in terms of network calibration, without affecting the discriminative performance. The code is available at https://github.com/by-liu/mbls.
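The inequality-constrained view above is compact enough to sketch. Below is a minimal PyTorch rendition of a margin penalty on logit distances added to cross entropy; the function names and the values of `margin` and `lam` are illustrative choices, not taken from the MbLS code.

```python
import torch
import torch.nn.functional as F

def logit_margin_penalty(logits: torch.Tensor, margin: float = 10.0) -> torch.Tensor:
    # distance from the per-sample maximum logit to every logit (zero for the top class)
    dist = logits.max(dim=1, keepdim=True).values - logits
    # inequality constraint: only distances exceeding the margin are penalized
    return F.relu(dist - margin).sum(dim=1).mean()

def calibrated_loss(logits: torch.Tensor, targets: torch.Tensor,
                    lam: float = 0.1, margin: float = 10.0) -> torch.Tensor:
    # standard cross entropy plus the margin-based penalty on logit distances
    return F.cross_entropy(logits, targets) + lam * logit_margin_penalty(logits, margin)
```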
Recent studies have revealed that, beyond conventional accuracy, calibration should also be considered for training modern deep neural networks. To address miscalibration during learning, some methods have explored different penalty functions as part of the learning objective, alongside a standard classification loss, with a hyper-parameter controlling the relative contribution of each term. Nevertheless, these methods share two major drawbacks: 1) the scalar balancing weight is the same for all classes, hindering the ability to address different intrinsic difficulties or imbalance among classes; and 2) the balancing weight is usually fixed without an adaptive strategy, which may prevent reaching the best compromise between accuracy and calibration, and requires hyper-parameter search for each application. We propose Class Adaptive Label Smoothing (CALS) for calibrating deep networks, which learns class-wise multipliers during training, yielding a powerful alternative to common label smoothing penalties. Our method builds on a general Augmented Lagrangian approach, a well-established technique in constrained optimization, but we introduce several modifications to tailor it for large-scale, class-adaptive training. Comprehensive evaluation and multiple comparisons on a variety of benchmarks, including standard and long-tailed image classification, semantic segmentation, and text classification, demonstrate the superiority of the proposed method. The code is available at https://github.com/by-liu/CALS.
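As a rough illustration only, class-wise multipliers on a margin constraint, updated by gradient ascent, might look like the toy sketch below; CALS's actual Augmented Lagrangian updates and modifications differ, so treat every detail here as an assumption.

```python
import torch
import torch.nn.functional as F

class ClassAdaptivePenalty(torch.nn.Module):
    # toy class-wise multipliers on a logit-margin constraint, updated by
    # gradient ascent in the spirit of an Augmented Lagrangian
    def __init__(self, num_classes: int, margin: float = 10.0, rho: float = 1.0):
        super().__init__()
        self.register_buffer("lambdas", torch.ones(num_classes))
        self.margin, self.rho = margin, rho

    def _violation(self, logits: torch.Tensor) -> torch.Tensor:
        # mean violation of max_k z_k - z_j <= margin, one value per sample
        return F.relu(logits.max(1, keepdim=True).values - logits - self.margin).mean(1)

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # per-class multiplier scales each sample's constraint violation
        penalty = (self.lambdas[targets] * self._violation(logits)).mean()
        return F.cross_entropy(logits, targets) + penalty

    @torch.no_grad()
    def update_multipliers(self, logits: torch.Tensor, targets: torch.Tensor) -> None:
        # ascent step: grow the multiplier of each observed class by its violation
        self.lambdas.index_add_(0, targets, self.rho * self._violation(logits))
```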
Detecting out-of-distribution inputs is critical for the safe deployment of machine learning models in the real world. However, neural networks are known to suffer from the overconfidence issue, where they produce abnormally high confidence for both in- and out-of-distribution inputs. In this work, we show that this issue can be mitigated through Logit Normalization (LogitNorm), a simple fix to the cross-entropy loss that enforces a constant vector norm on the logits during training. Our method is motivated by the analysis that the norm of the logits keeps increasing during training, leading to overconfident outputs. The key idea behind LogitNorm is thus to decouple the influence of the output norm during network optimization. Trained with LogitNorm, neural networks produce highly distinguishable confidence scores between in- and out-of-distribution data. Extensive experiments demonstrate the superiority of LogitNorm, reducing the average FPR95 by up to 42.30% on common benchmarks.
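The fix is essentially one line: normalize the logit vector before cross entropy and divide by a temperature. A minimal sketch, with the temperature value chosen for illustration:

```python
import torch
import torch.nn.functional as F

def logitnorm_loss(logits: torch.Tensor, targets: torch.Tensor,
                   tau: float = 0.04, eps: float = 1e-7) -> torch.Tensor:
    # enforce a constant (unit) vector norm on the logits, then apply a
    # temperature before the usual cross entropy
    norms = logits.norm(p=2, dim=1, keepdim=True) + eps
    return F.cross_entropy(logits / (norms * tau), targets)
```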
Fine-grained visual classification can be addressed by deep representation learning under the supervision of manually pre-defined targets (e.g., one-hot or Hadamard codes). Such target coding schemes are less flexible in modeling inter-class correlation and are also sensitive to sparse and imbalanced data distributions. In light of this, this paper introduces a novel target coding scheme, dynamic target relation graphs (DTRG), which, as auxiliary feature regularization, is a self-generated structural output mapped from input images. Specifically, online computation of class-level feature centers is designed to generate cross-category distances in the representation space, which can thus be depicted by a dynamic graph in a non-parametric manner. Explicitly minimizing intra-class feature variations anchored on those class-level centers can encourage learning discriminative features. Moreover, owing to exploiting inter-class dependency, the proposed target graphs can alleviate data sparsity and imbalance in representation learning. Inspired by the recent success of mixup-style data augmentation, this paper introduces randomness into the soft construction of dynamic target relation graphs to further explore the relational diversity of target classes. Experimental results demonstrate the effectiveness of our method on a number of diverse benchmarks of multiple visual classification tasks, especially achieving state-of-the-art performance on popular fine-grained object benchmarks, as well as superior robustness against sparse and imbalanced data. Source code is publicly available at https://github.com/akonlau/dtrg.
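One ingredient of the scheme, online class-level centers together with the non-parametric relation graph built from their pairwise distances, can be sketched as below; the momentum update and the squared-distance pull are assumptions for illustration, and the mixup-style graph randomization is omitted.

```python
import torch

class OnlineClassCenters(torch.nn.Module):
    # running class-level feature centers plus the two pieces built on them:
    # an intra-class pull toward the centers and a non-parametric relation graph
    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        super().__init__()
        self.register_buffer("centers", torch.zeros(num_classes, feat_dim))
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, targets: torch.Tensor) -> None:
        # exponential moving average of per-class batch means
        for c in targets.unique():
            batch_mean = feats[targets == c].mean(dim=0)
            self.centers[c] = self.momentum * self.centers[c] + (1 - self.momentum) * batch_mean

    def intra_class_loss(self, feats: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # pull each feature toward its class center
        return (feats - self.centers[targets]).pow(2).sum(dim=1).mean()

    def relation_graph(self) -> torch.Tensor:
        # pairwise center distances: a dynamic target relation graph
        return torch.cdist(self.centers, self.centers)
```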
Recently, a popular line of research in face recognition is adopting margins in the well-established softmax loss function to maximize class separability. In this paper, we first introduce an Additive Angular Margin Loss (ArcFace), which not only has a clear geometric interpretation but also significantly enhances the discriminative power. Since ArcFace is susceptible to massive label noise, we further propose sub-center ArcFace, in which each class contains K sub-centers and training samples only need to be close to any of the K positive sub-centers. Sub-center ArcFace encourages one dominant sub-class that contains the majority of clean faces and non-dominant sub-classes that include hard or noisy faces. Based on this self-propelled isolation, we boost the performance by automatically purifying raw web faces under massive real-world noise. Besides discriminative feature embedding, we also explore the inverse problem: mapping feature vectors to face images. Without training any additional generator or discriminator, the pre-trained ArcFace model can generate identity-preserving face images for subjects both inside and outside the training data, using only the network gradient and Batch Normalization (BN) priors. Extensive experiments demonstrate that ArcFace can enhance the discriminative feature embedding as well as strengthen the generative face synthesis.
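The additive angular margin itself is easy to state in code: normalize features and class weights, add the margin m to the target-class angle, and rescale by s before cross entropy. A minimal sketch, omitting sub-centers and numerical-stability variants:

```python
import torch
import torch.nn.functional as F

class ArcFaceHead(torch.nn.Module):
    # additive angular margin: the target logit becomes s * cos(theta_y + m)
    def __init__(self, feat_dim: int, num_classes: int, s: float = 64.0, m: float = 0.5):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # cosine similarity between normalized features and class centers
        cos = F.linear(F.normalize(feats), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        target_mask = F.one_hot(targets, cos.size(1)).bool()
        # add the angular margin only on the target class, then rescale
        logits = self.s * torch.where(target_mask, torch.cos(torch.acos(cos) + self.m), cos)
        return F.cross_entropy(logits, targets)
```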
Recent years have witnessed the breakthrough of face recognition (FR) with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in everyday life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
Real-world data tends to be heavily imbalanced and severely skews data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a massively challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while the off-the-shelf pretrained weights of ViTs always lead to unfair comparisons. In this paper, we systematically investigate ViTs' performance in LTR and propose LiVT to train ViTs from scratch only with LT data. With the observation that ViTs suffer more severe LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised manners. In addition, Binary Cross Entropy (BCE) loss, which shows conspicuous performance with ViTs, encounters predicaments in LTR. We further propose the balanced BCE to ameliorate it with strong theoretical groundings. Specifically, we derive the unbiased extension of Sigmoid and compensate extra logit margins to deploy it. Our Bal-BCE contributes to the quick convergence of ViTs in just a few epochs. Extensive experiments demonstrate that with MGP and Bal-BCE, LiVT successfully trains ViTs without any additional data and significantly outperforms comparable state-of-the-art methods, e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at https://github.com/XuZhengzhuo/LiVT.
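The abstract does not reproduce the derivation, but the flavor of compensating extra logit margins in BCE can be sketched as a log-prior shift on the logits; the exact margins of Bal-BCE may differ from this simplification, and `class_counts` is an assumed per-class frequency tensor.

```python
import torch
import torch.nn.functional as F

def balanced_bce(logits: torch.Tensor, targets: torch.Tensor,
                 class_counts: torch.Tensor) -> torch.Tensor:
    # shift every logit by its class log-prior before binary cross entropy,
    # so tail classes are not drowned out by the head (simplified margin)
    log_prior = (class_counts.float() / class_counts.sum()).log()
    onehot = F.one_hot(targets, logits.size(1)).float()
    return F.binary_cross_entropy_with_logits(logits + log_prior, onehot)
```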
Data uncertainty is commonly observed in images for face recognition (FR). However, deep learning algorithms often make predictions with high confidence even for uncertain or irrelevant inputs. Intuitively, FR algorithms can benefit from both the estimation of uncertainty and the detection of out-of-distribution (OOD) samples. Taking a probabilistic view of the current classification model, the temperature scalar is exactly the scale of the uncertainty noise implicitly added in the softmax function. Meanwhile, the uncertainty of images in a dataset should follow a prior distribution. Based on these observations, a unified framework for uncertainty modeling and FR, Random Temperature Scaling (RTS), is proposed to learn a reliable FR algorithm. The benefits of RTS are two-fold. (1) In the training phase, it can adjust the learning strength of clean and noisy samples for stability and accuracy. (2) In the test phase, it can provide a confidence score to detect uncertain, low-quality, and even OOD samples, without training on extra labels. Extensive experiments on FR benchmarks demonstrate that the magnitude of the variance in RTS, which serves as an OOD detection metric, is closely related to the uncertainty of the input image. RTS achieves top performance on both the FR and OOD detection tasks. Moreover, the model trained with RTS performs robustly on datasets with noise. The proposed module is light-weight and only adds negligible computation cost to the model.
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth may not be included in the candidate set. To remedy this problem, we present an extension PiCO+ that performs distance-based clean sample selection and learns robust classifiers by a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches in standard and noisy PLL tasks and even achieve comparable results to fully supervised learning.
Partial label learning (PLL) is a typical weakly supervised learning framework, where each training instance is associated with a candidate label set, among which only one label is valid. To tackle the PLL problem, methods typically try to disambiguate the candidate sets either by using prior knowledge, such as structural information of the training data, or by refining model outputs in a self-training manner. Unfortunately, these methods often fail to obtain favorable performance due to the lack of prior information or unreliable predictions in the early stages of model training. In this paper, we propose a novel framework for partial label learning with meta-objective guided disambiguation (MOGD), which aims to recover the ground-truth label from the candidate label set by solving a meta objective on a small validation set. Specifically, to alleviate the negative impact of false-positive labels, we re-weight each candidate label based on the meta loss on the validation set. The classifier is then trained by minimizing the weighted cross-entropy loss. The proposed method can be easily implemented with various deep networks using an ordinary SGD optimizer. Theoretically, we prove the convergence property of the meta objective and derive estimation error bounds for the proposed method. Extensive experiments on various benchmark datasets and real-world PLL datasets demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods.
Cross-entropy loss together with softmax is arguably one of the most commonly used supervision components in convolutional neural networks (CNNs). Despite its simplicity, popularity, and excellent performance, the component does not explicitly encourage discriminative learning of features. In this paper, we propose a generalized large-margin softmax (L-Softmax) loss which explicitly encourages intra-class compactness and inter-class separability between learned features. Moreover, L-Softmax not only can adjust the desired margin but also can avoid overfitting. We also show that the L-Softmax loss can be optimized by typical stochastic gradient descent. Extensive experiments on four benchmark datasets demonstrate that the deeply-learned features with L-Softmax loss become more discriminative, hence significantly boosting the performance on a variety of visual classification and verification tasks.
Real-world data universally faces a severe class-imbalance problem and exhibits a long-tailed distribution, i.e., most labels are associated with limited instances. A naïve model supervised by such datasets would prefer the dominant labels, encounter serious generalization challenges, and become poorly calibrated. We propose two novel methods from the prior perspective to alleviate this dilemma. First, we derive a balance-oriented data augmentation named Uniform Mixup (UniMix) to promote mixup in long-tailed scenarios, which adopts advanced mixing factors and a sampler in favor of the minority classes. Second, motivated by Bayesian theory, we figure out the Bayes bias (Bayias), an inherent bias caused by the inconsistency of priors, and compensate for it as a modification to the standard cross-entropy loss. We further prove that the proposed methods ensure classification calibration both theoretically and empirically. Extensive experiments verify that our strategies contribute to better-calibrated models, and their combination achieves state-of-the-art performance on CIFAR-LT, ImageNet-LT, and iNaturalist 2018.
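A minimal sketch of the compensation idea, assuming it takes the familiar logit-adjustment form of adding log class priors inside cross entropy (whether Bayias matches this exactly is not stated in the abstract); `class_counts` is an assumed per-class frequency tensor.

```python
import torch
import torch.nn.functional as F

def prior_compensated_ce(logits: torch.Tensor, targets: torch.Tensor,
                         class_counts: torch.Tensor) -> torch.Tensor:
    # add log class priors to the logits so the skewed training prior
    # no longer biases the decision rule toward head classes
    log_prior = (class_counts.float() / class_counts.sum()).log()
    return F.cross_entropy(logits + log_prior, targets)
```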
As label noise, one of the most popular distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels is becoming an important task in modern deep learning. In this paper, we propose our framework, coined Adaptive LAbel smoothing on Sub-ClAssifier (ALASCA), which provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) induces implicit Lipschitz regularization (LR). Furthermore, based on these derivations, we apply adaptive LS (ALS) on sub-classifier architectures for a practical application of adaptive LR on intermediate layers. We conduct extensive experiments on ALASCA and combine it with previous noise-robust methods on several datasets, showing that our framework consistently outperforms the corresponding baselines.
The success of deep neural networks greatly relies on the availability of large amounts of high-quality annotated data, which are hard or expensive to obtain. The resulting labels may be class-imbalanced, noisy, or human-biased. It is challenging to learn an unbiased classification model from imperfectly annotated datasets, on which we usually suffer from overfitting or underfitting. In this work, we thoroughly investigate the popular softmax loss and margin-based losses, and offer a feasible approach to tighten the generalization error bound by maximizing the minimal sample margin. We further derive the optimality condition for this purpose, which indicates how class prototypes should be anchored. Motivated by the theoretical analysis, we propose a simple yet effective method, namely prototype-anchored learning (PAL), which can be easily incorporated into various learning-based classification schemes to handle imperfect annotations. We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning through extensive experiments on synthetic and real-world datasets.
One of the main challenges in feature representation for deep learning-based classification is to design appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins into well-established losses in order to enforce extra intra-class compactness and inter-class separability; these margins, however, were developed by heuristic means rather than rigorous mathematical principles. In this work, we attempt to address this limitation by formulating a principled optimization objective as learning towards the largest margins. Specifically, we first define the class margin as the measure of inter-class separability, and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions about existing margin-based losses. This principled framework not only offers new perspectives to understand and interpret existing margin-based losses, but also provides new insights that can guide the design of new tools, including sample margin regularization and largest margin softmax loss for the class-balanced case, and zero-centroid regularization for the class-imbalanced case. Experimental results demonstrate the effectiveness of our strategy for a variety of tasks, including visual classification, imbalanced classification, person re-identification, and face verification.
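The two margin definitions are straightforward to express: the sample margin is the gap between the target logit and the best non-target logit, and a largest-margin objective would push up the smallest such gap in a batch. A small illustrative sketch, not the paper's actual loss:

```python
import torch

def sample_margins(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # sample margin: target logit minus the best non-target logit
    target_logit = logits.gather(1, targets.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, targets.unsqueeze(1), float("-inf"))
    return target_logit - others.max(dim=1).values

# a largest-margin objective would maximize the smallest margin in the batch:
# loss = -sample_margins(logits, targets).min()
```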
Label smoothing (LS) is an emerging learning paradigm that uses a positively weighted average of both the hard training labels and uniformly distributed soft labels. It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model. Later, it was reported that LS even helps improve robustness when learning with noisy labels. However, we observe that the advantage of LS vanishes when we operate in a high label-noise regime. Intuitively speaking, this is due to the increased entropy of $\mathbb{P}(\text{noisy label}\,|\,x)$ when the noise rate is high, in which case further applying LS tends to "over-smooth" the estimated posterior. We proceed to discover that several solutions in the learning-with-noisy-labels literature instead relate more closely to negative/not label smoothing (NLS), which acts counter to LS and is defined as using a negative weight to combine the hard and soft labels. We provide understandings of the properties of LS and NLS when learning with noisy labels. Among other established properties, we theoretically show that NLS is more beneficial when the label noise rates are high. We provide extensive experimental results on multiple benchmarks to support our findings. Code is publicly available at https://github.com/ucsc-real/negative-label-smooth.
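Generalized label smoothing makes the LS/NLS distinction concrete: the same mixing formula with a positive rate is vanilla LS, while a negative rate yields NLS. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def gls_loss(logits: torch.Tensor, targets: torch.Tensor, rate: float = 0.2) -> torch.Tensor:
    # generalized label smoothing: rate > 0 recovers vanilla LS,
    # rate < 0 gives negative label smoothing (NLS)
    num_classes = logits.size(1)
    onehot = F.one_hot(targets, num_classes).float()
    soft_targets = (1.0 - rate) * onehot + rate / num_classes
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```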
Learning novel classes from very few labeled samples has attracted increasing attention in the machine learning area. Recent research on either meta-learning-based or transfer-learning-based paradigms demonstrates that acquiring a good feature space can be an effective solution for achieving favorable performance on few-shot tasks. In this paper, we propose a simple but effective paradigm that decouples the tasks of learning feature representations and classifiers, and only learns the feature embedding architecture from base classes via a typical transfer-learning training strategy. To maintain both the generalization ability across base and novel classes and the discrimination ability within each class, we propose a dual-path feature learning scheme that effectively combines structural similarity with contrastive feature construction. In this way, both intra-class alignment and inter-class uniformity can be well balanced, leading to improved performance. Experiments on three popular benchmarks show that, when incorporated with a simple prototype-based classifier, our method can still achieve promising results for standard and generalized few-shot problems in either an inductive or transductive inference setting.
Modern deep neural networks have achieved remarkable progress in medical image segmentation tasks. However, it has recently been observed that they tend to produce over-confident estimates, even in situations of high uncertainty, leading to poorly calibrated and unreliable models. In this work, we introduce Maximum Entropy on Erroneous Predictions (MEEP), a training strategy for segmentation networks that selectively penalizes over-confident predictions, focusing only on misclassified pixels. In particular, we design a regularization term that encourages high-entropy posteriors for wrong predictions, increasing network uncertainty in complex scenes. Our method is agnostic to the neural architecture, does not increase model complexity, and can be coupled with multiple segmentation loss functions. We benchmark the proposed strategy on two challenging medical image segmentation tasks: white matter hyperintensity lesions in magnetic resonance images (MRI) of the brain, and atrium segmentation in cardiac MRI. Experimental results demonstrate that coupling MEEP with standard segmentation losses not only improves model calibration, but also leads to improved segmentation quality.
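A hedged sketch of the selective entropy term, maximizing entropy only over mis-classified pixels; the masking details and the weight `lam` are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def meep_term(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # maximize predictive entropy only on mis-classified pixels:
    # returning negative entropy means minimizing this term raises entropy
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # (B, H, W)
    wrong = logits.argmax(dim=1) != targets                      # error mask
    return -entropy[wrong].mean() if wrong.any() else logits.new_zeros(())

def meep_segmentation_loss(logits: torch.Tensor, targets: torch.Tensor,
                           lam: float = 0.1) -> torch.Tensor:
    # standard cross entropy coupled with the selective entropy regularizer
    return F.cross_entropy(logits, targets) + lam * meep_term(logits, targets)
```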