We study a series of recognition tasks in two realistic scenarios that require the analysis of faces under strong occlusion. On the one hand, we aim to recognize the facial expressions of people wearing Virtual Reality (VR) headsets. On the other hand, we aim to estimate the age and determine the gender of people wearing surgical masks. The common ground for all these tasks is that half of the face is occluded. In this challenging setting, we show that convolutional neural networks (CNNs) trained on fully-visible faces exhibit very low performance levels. While fine-tuning the deep learning models on occluded faces is very useful, we show that additional performance gains can be obtained by distilling knowledge from models trained on fully-visible faces. To this end, we study two knowledge distillation methods, one based on teacher-student training and one based on triplet loss. Our main contribution consists in a novel knowledge distillation approach based on triplet loss, which generalizes across models and tasks. Furthermore, we consider combining distilled models learned through conventional teacher-student training or through our novel teacher-student training based on triplet loss. We provide empirical evidence showing that, in most cases, both the individual and the combined knowledge distillation methods bring statistically significant performance improvements. We conduct experiments with three different neural models (VGG-f, VGG-face, ResNet-50) on various tasks (facial expression recognition, gender recognition, age estimation), showing consistent improvements regardless of the model or task.
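As a rough illustration of the triplet-based distillation idea described above, the sketch below pulls the student embedding of an occluded face towards the teacher embedding of the corresponding fully-visible face and pushes it away from a teacher embedding of another class; the margin and the in-batch choice of negatives are assumptions made for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def triplet_distillation_loss(student_emb, teacher_emb, labels, margin=0.5):
    """Sketch: pull the student embedding of an occluded face towards the
    teacher embedding of the fully-visible face of the same class, and push
    it away from a teacher embedding of a different class.
    All embeddings have shape (batch, dim); `margin` is an assumed value."""
    pos_dist = F.pairwise_distance(student_emb, teacher_emb)
    # Negatives: the teacher batch rolled by one position (a simple in-batch
    # choice made for this sketch), kept only where the labels differ.
    neg_dist = F.pairwise_distance(student_emb, torch.roll(teacher_emb, 1, 0))
    valid = (labels != torch.roll(labels, 1, 0)).float()
    return (valid * F.relu(pos_dist - neg_dist + margin)).mean()
```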
In order to classify linearly non-separable data, neurons are typically organized into multi-layer neural networks with at least one hidden layer. Inspired by recent discoveries in neuroscience, we propose a new neuron model along with a novel activation function that enables learning non-linear decision boundaries with a single neuron. We show that a standard neuron followed by the novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy. Furthermore, we conduct experiments on five benchmark data sets from computer vision, signal processing and natural language processing, namely MOROCO, UTKFace, CREMA-D, Fashion-MNIST and Tiny ImageNet, showing that the ADA and leaky ADA functions provide superior results to Rectified Linear Units (ReLU), leaky ReLU, RBF and Swish for various neural network architectures, e.g. multi-layer perceptrons (MLPs) with one or two hidden layers and convolutional neural networks (CNNs) such as LeNet, VGG, ResNet and character-level CNNs. We obtain further performance improvements when we change the standard model of the neuron to a pyramidal neuron with apical dendrite activations (PyNADA). Our code is available at: https://github.com/raduionescu/pynada.
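A minimal sketch of what a bump-shaped apical dendrite activation could look like in PyTorch is given below; the exact functional form max(0, x)·exp(1 − x) and the leak coefficient are assumptions of this sketch, and the authoritative implementation is the one in the linked repository. The non-monotonic bump is what allows a single neuron to enclose a region of the input space and thus solve XOR.

```python
import torch
import torch.nn as nn

class ADA(nn.Module):
    """Bump-shaped activation sketch: zero for negative inputs, peaks near
    x = 1 and decays for large x. The form max(0, x) * exp(1 - x) is an
    assumption; see the authors' repository for the reference code."""
    def forward(self, x):
        pos = torch.clamp(x, min=0)
        return pos * torch.exp(1 - pos)

class LeakyADA(nn.Module):
    """Leaky variant: lets a small fraction of the negative inputs through."""
    def __init__(self, leak=0.005):
        super().__init__()
        self.leak = leak

    def forward(self, x):
        pos = torch.clamp(x, min=0)
        return self.leak * torch.clamp(x, max=0) + pos * torch.exp(1 - pos)
```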
Disentangled representations have commonly been adopted for age-invariant face recognition (AIFR) tasks. However, these methods have reached some limitations: (1) the requirement of large-scale face recognition (FR) training data with age labels, which is limited in practice; (2) heavy deep network architectures for high performance; and (3) their evaluations are usually carried out on age-related face databases while neglecting the standard large-scale FR databases needed to guarantee robustness. This work presents a novel Lightweight Attentive Angular Distillation (LIAAD) approach to large-scale lightweight AIFR that overcomes these limitations. Given two teachers with different specialized knowledge, LIAAD introduces a learning paradigm to efficiently distill the age-invariant attentive and angular knowledge from these teachers to a lightweight student network, making it more powerful, with higher FR accuracy and robustness against the age factor. The LIAAD approach is therefore able to take advantage of both FR datasets with and without age labels to train an AIFR model. In contrast to prior distillation methods, which mainly focus on accuracy and compression ratios in closed-set problems, our LIAAD aims to solve the open-set problem, i.e. large-scale face recognition. Evaluations on LFW, IJB-B and IJB-C Janus, AgeDB and MegaFace-FGNet demonstrate the efficiency of the proposed approach on a lightweight structure. This work also presents a new longitudinal face aging (LogiFace) database (to be made available) for further studies of age-related facial problems in the future.
The emergence of the global COVID-19 pandemic poses new challenges for biometrics. Not only have contactless biometric options become more important, but face recognition has also recently been confronted with frequently mask-covered faces. These masks affect the performance of previous face recognition systems, as they hide important identity information. In this paper, we propose a mask-invariant face recognition solution (MaskInv) that utilizes template-level knowledge distillation within a training paradigm that aims to produce embeddings of masked faces that are similar to those of non-masked faces of the same identities. In addition to the distilled knowledge, the student network is also guided by a margin-based identity classification loss, ElasticFace, using masked and non-masked faces. In a step-wise ablation study on two real masked face databases and five mainstream databases with synthetic masks, we demonstrate the rationale behind our MaskInv approach. Our proposed solution outperforms previous state-of-the-art (SOTA) academic solutions in the recent MFRC-21 challenge, both masked vs masked and masked vs non-masked, and also outperforms the previous solution on the MFR2 dataset. Furthermore, we demonstrate that the proposed model can still perform well on unmasked faces with only a minor loss in verification performance. The code, the trained models, and the evaluation protocol for the synthetically masked data are publicly available at: https://github.com/fdbtrs/masked-face-recognition-kd.
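A hedged sketch of template-level distillation in the spirit of the description above: the student embedding of a masked face is matched to the teacher embedding of the non-masked face of the same identity, next to an identity-classification term. Plain cross-entropy stands in for the margin-based ElasticFace loss used in the paper, and the weighting factor is an assumption.

```python
import torch.nn.functional as F

def maskinv_style_loss(student_masked_emb, teacher_unmasked_emb,
                       logits, labels, lambda_kd=1.0):
    """Template-level distillation sketch: the student embedding of a masked
    face is pushed towards the (frozen) teacher embedding of the non-masked
    face of the same identity. Cross-entropy is a placeholder for the
    margin-based ElasticFace identity loss used in the paper."""
    kd_term = F.mse_loss(student_masked_emb, teacher_unmasked_emb.detach())
    cls_term = F.cross_entropy(logits, labels)
    return cls_term + lambda_kd * kd_term
```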
Facial attribute (e.g., age and attractiveness) estimation performance has been greatly improved by using convolutional neural networks. However, existing methods exhibit an inconsistency between the training objective and the evaluation metric, so they may be suboptimal. In addition, these methods always adopt image classification or face recognition models with a large number of parameters, which carry expensive computation cost and storage overhead. In this paper, we first analyze the essential relationship between two state-of-the-art methods (Ranking-CNN and DLDL) and show that the ranking method is in fact learning a label distribution implicitly. This result thus unifies the two existing state-of-the-art methods into the DLDL framework. Second, in order to alleviate the inconsistency and reduce resource consumption, we design a lightweight network architecture and propose a unified framework that can jointly learn the facial attribute distribution and regress the attribute value. The effectiveness of our approach is demonstrated on both facial age and attractiveness estimation tasks. Our method achieves new state-of-the-art results on facial age/attractiveness estimation using a single model with 36× fewer parameters and 3× faster inference speed. Moreover, our method can achieve results comparable to the state of the art even when the number of parameters is further reduced to 0.9M (3.8MB of disk storage).
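To make the joint distribution-learning and regression idea concrete, the sketch below combines a KL-divergence term towards a Gaussian label distribution centred on the true attribute value with an L1 term on the distribution's expectation; the Gaussian target, `sigma` and the weighting `lam` are assumptions of this sketch rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def joint_distribution_regression_loss(logits, true_age, ages, sigma=2.0, lam=1.0):
    """Sketch of jointly learning a label distribution and regressing its
    expectation. `ages` holds the K discrete attribute values the network
    predicts over; `sigma` and `lam` are assumed hyperparameters."""
    log_pred = F.log_softmax(logits, dim=1)                        # (batch, K)
    # Target: a Gaussian label distribution centred on the true value.
    target = torch.exp(-(ages[None, :] - true_age[:, None]) ** 2 / (2 * sigma ** 2))
    target = target / target.sum(dim=1, keepdim=True)
    kl_term = F.kl_div(log_pred, target, reduction="batchmean")
    expectation = (log_pred.exp() * ages[None, :]).sum(dim=1)      # E[attribute]
    reg_term = F.l1_loss(expectation, true_age)
    return kl_term + lam * reg_term
```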
Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models by a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving the state of the art on standard benchmark datasets.
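The distance-wise RKD loss can be written down compactly; the sketch below follows the paper's description of normalising pairwise distances by their batch mean and penalising teacher-student differences with a Huber (smooth L1) loss.

```python
import torch
import torch.nn.functional as F

def rkd_distance_loss(student_emb, teacher_emb, eps=1e-8):
    """Distance-wise relational distillation: match the normalised pairwise
    distance structure of the batch instead of individual outputs."""
    d_s = torch.cdist(student_emb, student_emb, p=2)
    d_t = torch.cdist(teacher_emb, teacher_emb, p=2)
    d_s = d_s / (d_s[d_s > 0].mean() + eps)   # normalise by the mean distance
    d_t = d_t / (d_t[d_t > 0].mean() + eps)
    return F.smooth_l1_loss(d_s, d_t)          # Huber penalty on the relations
```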
Facial action units (FAUs) are critical for fine-grained facial expression analysis. Although FAU detection has been actively studied using ideally high-quality images, it has not been thoroughly studied under heavily occluded conditions. In this paper, we propose the first occlusion-robust FAU recognition method to maintain FAU detection performance under heavy occlusions. Our novel approach takes advantage of rich information from the latent space of a masked autoencoder (MAE) and transforms it into FAU features. Bypassing the occlusion reconstruction step, our model efficiently extracts FAU features of occluded faces by mining the latent space of a pretrained masked autoencoder. Both node- and edge-level knowledge distillation are also employed to guide our model to find a mapping between latent space vectors and FAU features. Facial occlusion conditions, including random small patches and large blocks, are thoroughly studied. Experimental results on the BP4D and DISFA datasets show that our method achieves state-of-the-art performance under the studied facial occlusions, significantly outperforming existing baseline methods. In particular, even under heavy occlusion, the proposed method achieves performance comparable to that of state-of-the-art methods under normal conditions.
We propose a very fast frame-level model for anomaly detection in video, which learns to detect anomalies by distilling knowledge from multiple highly accurate object-level teacher models. To improve the fidelity of our student, we distill the low-resolution anomaly maps of the teachers by jointly applying standard and adversarial distillation, introducing an adversarial discriminator for each teacher to distinguish between target and generated anomaly maps. We conduct experiments on three benchmarks (Avenue, ShanghaiTech, UCSD Ped2), showing that our method is over 7 times faster than the fastest competing method, and between 28 and 62 times faster than object-centric models, while obtaining comparable results to recent methods. Our evaluation also indicates that our model achieves the best trade-off between speed and accuracy, due to its previously unheard-of speed of 1480 FPS. In addition, we carry out a comprehensive ablation study to justify our architectural design choices.
Knowledge distillation in machine learning is the process of transferring knowledge from a large model, called the teacher, to a smaller model, called the student. Knowledge distillation is one of the techniques for compressing a large network (the teacher) into a smaller network (the student) that can be deployed on small devices such as mobile phones. When the gap in network size between the teacher and the student increases, the performance of the student network decreases. To solve this problem, an intermediate model, called the teaching assistant model, is employed between the teacher model and the student model, which in turn bridges the gap between the teacher and the student. In this study, we show that by using multiple teaching assistant models, the student model (the smaller model) can be further improved. We combine these multiple teaching assistant models through weighted ensemble learning, using a differential evolution optimization algorithm to generate the weight values.
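A hedged sketch of the weighted-ensemble step described above: the softmax outputs of the teaching-assistant models are combined with weights found by SciPy's differential evolution optimizer. The bounds, iteration budget and the accuracy objective are assumptions made for illustration, not the authors' exact setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

def ensemble_predictions(weights, assistant_probs):
    """Weighted average of the assistants' softmax outputs.
    assistant_probs has shape (n_assistants, n_samples, n_classes)."""
    w = np.asarray(weights)
    w = w / (w.sum() + 1e-12)
    return np.tensordot(w, assistant_probs, axes=1)   # (n_samples, n_classes)

def negative_accuracy(weights, assistant_probs, labels):
    """Objective for the optimizer: minimise the ensemble's validation error."""
    preds = ensemble_predictions(weights, assistant_probs).argmax(axis=1)
    return -(preds == labels).mean()

def fit_assistant_weights(assistant_probs, labels):
    """Search for the ensemble weights with differential evolution."""
    bounds = [(0.0, 1.0)] * assistant_probs.shape[0]
    result = differential_evolution(negative_accuracy, bounds,
                                    args=(assistant_probs, labels),
                                    maxiter=50, seed=0)
    return result.x / (result.x.sum() + 1e-12)
```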
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and that have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions proposed to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
Recent years have witnessed the breakthrough of face recognition with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in the industrial community and play an important role in human life, such as device unlock, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to find the effect of backbone size and data distribution. This survey serves as material for the tutorial named The Practical Face Recognition Technology in the Industrial World at FG2023.
Biometric identification based on the full face is an extensive research area. However, using only partially visible faces, such as in the case of veiled persons, is a challenging task. In this work, a deep convolutional neural network (CNN) is used to extract features from the face images of veiled persons. We find that the sixth and seventh fully connected layers, FC6 and FC7 respectively, in the structure of the VGG19 network provide robust features, each of these two layers containing 4096 features. The main objective of this work is to test the ability of deep-learning-based automated computer systems not only to identify persons, but also to recognize gender, age and facial expressions such as eye smiles. Our experimental results show that we obtain high accuracy for all tasks. The best recorded accuracy values are up to 99.95% for person identification, 99.9% for gender recognition, 99.9% for age recognition, and 80.9% for facial expression (eye smile) recognition.
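For illustration, extracting the 4096-dimensional FC6 and FC7 activations from a torchvision VGG19 could look as follows; using an ImageNet-pretrained backbone here is an assumption of the sketch, since the paper's network may be trained or fine-tuned differently.

```python
import torch
from torchvision import models

def extract_fc6_fc7(images):
    """Return the 4096-dimensional FC6 and FC7 activations of a torchvision
    VGG19 for a batch of (3, 224, 224) face crops."""
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
    with torch.no_grad():
        x = vgg.features(images)
        x = torch.flatten(vgg.avgpool(x), 1)
        fc6 = vgg.classifier[0](x)                   # first FC layer, 4096-d
        fc7 = vgg.classifier[3](torch.relu(fc6))     # second FC layer, 4096-d
    return fc6, fc7
```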
While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
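The hint-training idea can be sketched as follows: a small regressor maps the student's intermediate representation to the size of the teacher's hint layer, and the two are matched with an L2 loss. A 1x1 convolution is used as the regressor here, and matching spatial resolutions are assumed.

```python
import torch.nn as nn
import torch.nn.functional as F

class HintRegressor(nn.Module):
    """Maps the (smaller) student hidden representation to the channel size of
    the teacher's hint layer, so the two feature maps can be compared."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat):
        return self.proj(student_feat)

def hint_loss(regressor, student_feat, teacher_feat):
    """L2 distance between the regressed student feature map and the frozen
    teacher's hint feature map."""
    return F.mse_loss(regressor(student_feat), teacher_feat.detach())
```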
Human action recognition (HAR) based on wearable sensors has recently achieved remarkable success. However, the accuracy of wearable sensor-based HAR still lags far behind that of systems based on visual modalities (i.e., RGB video, skeleton and depth). Diverse input modalities can provide complementary cues and thus improve the accuracy of HAR, but how to take advantage of multi-modal data for wearable sensor-based HAR has rarely been explored. Currently, wearable devices, i.e., smartwatches, can only capture limited non-visual modality data. This hinders multi-modal HAR association, as it is not possible to use visual and non-visual modality data simultaneously. Another major challenge lies in how to efficiently utilize multi-modal data on wearable devices with limited computational resources. In this work, we propose a novel Progressive Skeleton-to-sensor Knowledge Distillation (PSKD) model that utilizes only time-series data, i.e., accelerometer data from a smartwatch, to solve the wearable sensor-based HAR problem. Specifically, we construct multiple teacher models using data from both the teacher (human skeleton sequences) and the student (time-series accelerometer data) modalities. Moreover, we propose an effective progressive learning scheme to eliminate the performance gap between the teacher and student models. We also design a novel loss function called Adaptive-Confidence Semantic (ACS), which allows the student model to adaptively select either one of the teacher models or the ground-truth label it needs to mimic. To demonstrate the effectiveness of our proposed PSKD approach, we conduct extensive experiments on the Berkeley-MHAD, UTD-MHAD and MMAct datasets. The results confirm that the proposed PSKD method achieves competitive performance compared to previous mono-sensor-based HAR methods.
Despite significant accuracy improvements in convolutional neural network (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.
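The teacher bounded regression term mentioned above can be sketched as below: the student is pushed towards the ground-truth box only when it is worse than the teacher by more than a margin. The squared-error metric, the margin value and the additional smooth-L1 term follow the common formulation and are treated here as assumptions.

```python
import torch
import torch.nn.functional as F

def teacher_bounded_regression_loss(student_box, teacher_box, gt_box, margin=0.0):
    """Bounded regression term: the student is penalised towards the ground
    truth only when its error exceeds the teacher's error by more than
    `margin`; otherwise the term vanishes."""
    student_err = ((student_box - gt_box) ** 2).sum(dim=-1)
    teacher_err = ((teacher_box - gt_box) ** 2).sum(dim=-1)
    bounded = torch.where(student_err + margin > teacher_err,
                          student_err, torch.zeros_like(student_err))
    # The usual smooth-L1 term towards the ground truth is kept alongside it.
    return bounded.mean() + F.smooth_l1_loss(student_box, gt_box)
```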
Figure 1. An illustration of standard knowledge distillation. Despite widespread use, an understanding of when the student can learn from the teacher is missing.
Knowledge Distillation (KD) consists of transferring "knowledge" from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the student's compactness without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs) outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and non-predicted classes.
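One born-again generation can be sketched as a plain distillation pass in which teacher and student share the same architecture; the temperature and the equal weighting of the cross-entropy and distillation terms are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def born_again_epoch(teacher, student, loader, optimizer, temperature=1.0):
    """One training epoch of a born-again generation: teacher and student
    share the same architecture, and the student matches the teacher's
    softened predictions in addition to the ground-truth labels."""
    teacher.eval()
    student.train()
    for images, labels in loader:
        with torch.no_grad():
            t_logits = teacher(images)
        s_logits = student(images)
        kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                      F.softmax(t_logits / temperature, dim=1),
                      reduction="batchmean") * temperature ** 2
        loss = F.cross_entropy(s_logits, labels) + kd
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In later generations, the trained student takes the role of teacher for a freshly initialized network of the same architecture, and the resulting generations can finally be ensembled.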
Facial landmark detection is an important step in many facial image analysis applications. Although deep-learning-based methods have achieved good performance on this task, they are often not suitable for running on mobile devices. Such methods rely on networks with many parameters, which makes training and inference time-consuming. Training lightweight neural networks such as MobileNets is often challenging, and the resulting models may have low accuracy. Inspired by knowledge distillation (KD), this paper presents a novel loss function for training a lightweight student network (e.g., MobileNetV2) for facial landmark detection. We use two teacher networks alongside the student network: a Tolerant teacher and a Tough teacher. The Tolerant teacher is trained using soft landmarks created by active shape models, while the Tough teacher is trained using the ground truth (a.k.a. hard) landmarks. To exploit the facial landmark points predicted by the teacher networks, we define an assistive loss (ALoss) for each teacher network. In addition, we define a loss function called KD-Loss that utilizes the facial landmark points predicted by the two pre-trained teacher networks (EfficientNet-B3) to guide the lightweight student network towards predicting the hard landmarks. Our experimental results on three challenging facial datasets show that the proposed architecture leads to a trained student network that can extract facial landmark points with high accuracy.
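A hedged sketch of training with two teachers: an assistive term per teacher (Tolerant and Tough) is added to the usual term towards the ground-truth landmarks. The MSE form and the weights are assumptions made for illustration, not the exact ALoss/KD-Loss definitions from the paper.

```python
import torch.nn.functional as F

def two_teacher_landmark_loss(student_pts, tough_pts, tolerant_pts, gt_pts,
                              w_tough=0.5, w_tolerant=0.5, w_gt=1.0):
    """Sketch: one assistive term per teacher (Tough and Tolerant) plus the
    usual term towards the ground-truth landmark points."""
    aloss_tough = F.mse_loss(student_pts, tough_pts.detach())
    aloss_tolerant = F.mse_loss(student_pts, tolerant_pts.detach())
    hard_term = F.mse_loss(student_pts, gt_pts)
    return w_gt * hard_term + w_tough * aloss_tough + w_tolerant * aloss_tolerant
```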
Knowledge distillation is a widely applicable technique for training a student neural network under the guidance of a trained teacher network. For example, in neural network compression, a high-capacity teacher is distilled to train a compact student; in privileged learning, a teacher trained with privileged data is distilled to train a student without access to that data. The distillation loss determines how a teacher's knowledge is captured and transferred to the student. In this paper, we propose a new form of knowledge distillation loss that is inspired by the observation that semantically similar inputs tend to elicit similar activation patterns in a trained network. Similarity-preserving knowledge distillation guides the training of a student network such that input pairs that produce similar (dissimilar) activations in the teacher network produce similar (dissimilar) activations in the student network. In contrast to previous distillation methods, the student is not required to mimic the representation space of the teacher, but rather to preserve the pairwise similarities in its own representation space. Experiments on three public datasets demonstrate the potential of our approach.
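The similarity-preserving loss has a compact form; the sketch below builds the batch-level activation similarity matrices of teacher and student, row-normalises them, and penalises their squared Frobenius difference, as described in the paper.

```python
import torch.nn.functional as F

def similarity_preserving_loss(student_feat, teacher_feat):
    """Build the batch-by-batch activation similarity matrices for teacher and
    student, row-normalise them, and penalise their squared Frobenius
    difference (averaged over batch-size squared)."""
    b = student_feat.size(0)
    s = student_feat.reshape(b, -1)
    t = teacher_feat.reshape(b, -1)
    g_s = F.normalize(s @ s.t(), p=2, dim=1)
    g_t = F.normalize(t @ t.t(), p=2, dim=1)
    return ((g_s - g_t) ** 2).sum() / (b * b)
```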
Image resolution, or image quality in general, plays a crucial role in the performance of today's face recognition systems. To address this problem, we propose a novel combination of the popular triplet loss to improve robustness against image resolution by fine-tuning existing face recognition models. With our octuplet loss, we leverage the relationship between high-resolution images and their synthetically down-sampled variants jointly with their identity labels. Fine-tuning several state-of-the-art approaches with our method proves that we can significantly boost performance for cross-resolution (high-to-low resolution) face verification on various datasets without meaningfully degrading the performance on high-to-high resolution images. Our method applied to the FaceTransformer network achieves 95.12% face verification accuracy on the challenging XQLFW dataset while reaching 99.73% on the LFW database. Moreover, low-to-low resolution face verification accuracy also benefits from our method. We release our code to allow seamless integration of the octuplet loss into existing frameworks.
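A heavily hedged sketch of an octuplet-style objective: standard triplet terms are summed over combinations of high-resolution and synthetically down-sampled embeddings. The exact set of combinations, the sampling of negatives and the margin are defined in the released code and are treated as assumptions here.

```python
import torch.nn.functional as F

def octuplet_style_loss(anchor_hr, anchor_lr, positive_hr, positive_lr,
                        negative_hr, negative_lr, margin=0.5):
    """Sketch: standard triplet terms summed over combinations of
    high-resolution (hr) and synthetically down-sampled (lr) embeddings of
    anchor, positive and negative identities."""
    def triplet(a, p, n):
        return F.relu(F.pairwise_distance(a, p)
                      - F.pairwise_distance(a, n) + margin).mean()
    return (triplet(anchor_hr, positive_hr, negative_hr)
            + triplet(anchor_hr, positive_lr, negative_lr)
            + triplet(anchor_lr, positive_hr, negative_hr)
            + triplet(anchor_lr, positive_lr, negative_lr))
```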