Deep learning systems need large amounts of data for training. Datasets for face verification systems are difficult to obtain and prone to privacy issues. Synthetic data produced by generative models such as GANs can be a good alternative. However, we show that GAN-generated data is prone to bias and fairness concerns. In particular, GANs trained on the FFHQ dataset tend to generate faces of white individuals in the 20-29 age group. We also demonstrate that synthetic faces, when used to fine-tune face verification systems, cause disparate impact, specifically with respect to the race attribute. This is measured using the $DoB_{FV}$ metric, defined as the standard deviation of GAR@FAR across subgroups for face verification.
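As a rough illustration of a metric of this form, rather than the paper's exact implementation, the sketch below computes GAR at a fixed FAR for each race subgroup and reports the standard deviation across subgroups. Computing the threshold per group (instead of globally) and the toy score distributions are assumptions.

```python
import numpy as np

def gar_at_far(genuine_scores, impostor_scores, far_target=1e-3):
    """GAR at a fixed FAR: pick the threshold at which the impostor
    acceptance rate equals far_target, then measure genuine acceptance."""
    thr = np.quantile(impostor_scores, 1.0 - far_target)
    return float(np.mean(genuine_scores >= thr))

def dob_fv(scores_by_group, far_target=1e-3):
    """Degree-of-bias style metric: std of per-group GAR@FAR.
    scores_by_group maps a group name to (genuine_scores, impostor_scores)."""
    gars = [gar_at_far(gen, imp, far_target) for gen, imp in scores_by_group.values()]
    return float(np.std(gars))

# Toy usage with simulated similarity scores for two subgroups.
rng = np.random.default_rng(0)
scores = {
    "group_a": (rng.normal(0.7, 0.1, 5000), rng.normal(0.2, 0.1, 5000)),
    "group_b": (rng.normal(0.6, 0.1, 5000), rng.normal(0.2, 0.1, 5000)),
}
print(dob_fv(scores))  # larger value = larger performance spread across groups
```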
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Due to their impact on society, it is imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone. This has led to research on identifying and mitigating bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of biased facial analysis.
Face recognition networks generally demonstrate bias with respect to sensitive attributes such as gender and skintone. For gender and skintone, we observe that the face regions a network attends to vary with the category of the attribute, which may contribute to bias. Building on this intuition, we propose a new distillation-based approach, called Distill and De-bias (D&D), to enforce a network to attend to similar face regions regardless of the attribute category. In D&D, we train a teacher network on images from one category of an attribute, e.g., light skintone. Then, distilling information from the teacher, we train a student network on images from the remaining category, e.g., dark skintone. A feature-level distillation loss constrains the student network to generate teacher-like representations. This allows the student network to attend to similar face regions for all attribute categories and enables it to reduce bias. We also propose a second distillation step on top of D&D, called D&D++. For D&D++, we distill the "un-biasedness" of the D&D network into a new student network, the D&D++ network, which we train on all attribute categories, e.g., both light and dark skintones. This helps us train a network that is de-biased with respect to the attribute while achieving higher face verification performance than D&D. We show that D&D++ outperforms existing baselines in reducing gender and skintone bias on the IJB-C dataset, while obtaining higher face verification performance than existing adversarial de-biasing methods. We evaluate the effectiveness of the proposed methods on two state-of-the-art face recognition networks: Crystalface and Arcface.
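A minimal sketch of a feature-level distillation loss of the kind described, under assumptions: the exact loss, the normalization, and the identity-loss term used in D&D are not specified here, and `id_criterion` stands in for whatever recognition loss the networks are trained with.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat):
    """Feature-level distillation: pull normalized student embeddings
    toward the (frozen) teacher's embeddings of the same images."""
    s = F.normalize(student_feat, dim=1)
    t = F.normalize(teacher_feat, dim=1)
    return F.mse_loss(s, t)

def train_step(student, teacher, images, labels, id_criterion, optimizer, alpha=1.0):
    """One training step of the student (illustrative only)."""
    teacher.eval()
    with torch.no_grad():
        t_feat = teacher(images)   # teacher trained on one attribute category
    s_feat = student(images)       # student trained on the remaining category
    loss = id_criterion(s_feat, labels) + alpha * distillation_loss(s_feat, t_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```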
In this paper, we analyze how the underlying 3D shape of an identity in a face image distorts its overall appearance, particularly from the perspective of deep face recognition. As is done in popular training data augmentation schemes, we render real and synthetic face images with either randomly selected or best-fitting 3D face models to generate novel views of the underlying identity. We compare the deep features produced from these images to assess the perturbation these renderings introduce into the original identity. We perform this analysis at various degrees of facial yaw, with the gender and race of the underlying identity varied. Additionally, we investigate whether adding some form of context and background pixels to these rendered images, when used as training data, further improves the downstream performance of a face recognition model. Our experiments demonstrate the importance of facial shape for accurate face matching and of contextual data for network training.
With the rising adoption of Machine Learning across domains like banking, pharmaceuticals, ed-tech, etc., it has become of the utmost importance to adopt responsible AI methods to ensure models do not unfairly discriminate against any group. Given the lack of clean training data, generative adversarial techniques are preferred for generating synthetic data, with several state-of-the-art architectures readily available across various domains, from unstructured data such as text and images to structured datasets modelling fraud detection and many more. These techniques overcome several challenges such as class imbalance, limited training data, and restricted access to data due to privacy issues. Existing work focusing on generating fair data either works for a certain GAN architecture or is very difficult to tune across GANs. In this paper, we propose a pipeline to generate fairer synthetic data independent of the GAN architecture. The proposed pipeline uses a pre-processing algorithm to identify and remove bias-inducing samples. In particular, we claim that while generating synthetic data most GANs amplify bias present in the training data, but by removing these bias-inducing samples, GANs essentially focus more on real informative samples. Our experimental evaluation on two open-source datasets demonstrates how the proposed pipeline generates fair data along with improved performance in some cases.
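The abstract does not specify the pre-processing algorithm. The sketch below is only one plausible way to "identify and remove bias-inducing samples", not the paper's method: greedily drop training rows whose removal most reduces a demographic-parity gap before the data reaches any GAN. The column names, the parity criterion, and the removal budget are all assumptions.

```python
import pandas as pd

def parity_gap(df, protected="gender", label="label"):
    """Demographic parity gap: spread of positive-label rates across protected groups."""
    rates = df.groupby(protected)[label].mean()
    return float(rates.max() - rates.min())

def remove_bias_inducing_samples(df, protected="gender", label="label", budget=100):
    """Greedy pre-processing sketch: repeatedly drop the single sample whose
    removal most reduces the parity gap, up to `budget` removals.
    Quadratic in the dataset size; for illustration only."""
    df = df.copy()
    for _ in range(budget):
        base = parity_gap(df, protected, label)
        gains = {idx: base - parity_gap(df.drop(index=idx), protected, label)
                 for idx in df.index}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        df = df.drop(index=best)
    return df  # the cleaned frame is then passed to the (architecture-agnostic) GAN
```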
Several face de-identification methods have been proposed to preserve users' privacy by obscuring their faces. These methods, however, can degrade the quality of photos, and they usually do not preserve the utility of faces, e.g., their age, gender, pose, and facial expression. Recently, advanced generative adversarial network models, such as StyleGAN, have been proposed, which generate realistic, high-quality imaginary faces. In this paper, we investigate the use of StyleGAN in generating de-identified faces through style mixing, where the styles or features of the target face and an auxiliary face are mixed to generate a de-identified face that carries the utilities of the target face. We examine this de-identification method with respect to preserving utility and privacy by implementing several face detection, verification, and identification attacks. Through extensive experiments, and comparison with two state-of-the-art face de-identification methods, we show that StyleGAN preserves the quality and utility of the faces much better than the other approaches, and that, by choosing the style-mixing levels appropriately, it can also preserve the privacy of the faces much better than the other methods.
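A minimal sketch of style mixing with a StyleGAN-like generator, under assumptions: the W+ latent layout, the `generator.synthesis` call, and the choice of which layers to take from the auxiliary face are illustrative, and the layer split is exactly the privacy/utility knob the paper studies rather than a fixed recommendation.

```python
import torch

def style_mix(generator, w_target, w_auxiliary, mix_below=8):
    """Style mixing for de-identification (illustrative only).

    w_target, w_auxiliary: latents of shape (1, num_layers, 512) in W+ space.
    Layers below `mix_below` are taken from the auxiliary face; the remaining
    layers keep the target's styles. Moving `mix_below` trades off how much
    identity is replaced against how much utility is retained.
    """
    w_mixed = w_target.clone()
    w_mixed[:, :mix_below, :] = w_auxiliary[:, :mix_below, :]
    with torch.no_grad():
        return generator.synthesis(w_mixed)  # assumed StyleGAN2-style synthesis API

# Usage sketch: w_target would come from inverting the photo to be de-identified,
# w_auxiliary from a randomly sampled imaginary face.
```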
In this paper, we propose a novel explanatory framework aimed at better understanding how face recognition models perform as a function of the underlying characteristics of the data on which they are tested (protected attributes: gender, ethnicity, age; non-protected attributes: facial hair, makeup, accessories, face orientation and occlusion, image distortion, emotions). With our framework, we evaluate ten state-of-the-art face recognition models and compare them in terms of security and usability on two datasets, involving six groups based on gender and ethnicity. We then analyze the impact of image characteristics on model performance. Our results show that trends appearing in single-attribute analyses disappear or reverse when multi-attribute groups are considered, and that performance differences are also related to non-protected attributes. Source code: https://cutt.ly/2xwrlia.
It is broadly accepted that there is a "gender gap" in face recognition accuracy, with females having higher false match and false non-match rates. However, relatively little is known about the causes of this gender gap. Even the recent NIST report on demographic effects lists "analyze cause and effect" under "what we did not do". We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy. In particular, compared to females, male facial hair contributes to creating greater average differences in appearance between different male faces. We then demonstrate that when the data used to estimate recognition accuracy is balanced across genders for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears. We show this result for two different matchers, analyzing images of Caucasians and African-Americans. These results suggest that future research on demographic variation in accuracy should include consideration of the balanced quality of the test data as part of the problem formulation. To promote reproducible research, the matchers, attribute classifiers, and datasets used in this study will be made publicly available.
Media reports have accused face recognition of being "biased", "sexist", and "racist". There is consensus in the research literature that face recognition accuracy is lower for females, who typically have both higher false match rates and higher false non-match rates. However, little published research aims to identify the causes of this lower accuracy for females. For instance, the 2019 Face Recognition Vendor Test, which documents lower female accuracy across a broad range of algorithms and datasets, also lists "analyze cause and effect" under "what we did not do". We present the first experimental analysis to identify major causes of lower face recognition accuracy for females on datasets where this result has previously been observed. Controlling for an equal amount of visible face in the test images mitigates the apparent higher false non-match rate for females. Additional analysis shows that makeup-balanced datasets further move females toward lower false non-match rates. Finally, a clustering experiment suggests that images of two different females are inherently more similar than images of two different males, potentially accounting for the difference in false match rates.
Although significant progress has been made in face recognition, demographic bias still exists in face recognition systems. For instance, it often happens that the face recognition performance for a certain demographic group is lower than for the others. In this paper, we propose the MixFairFace framework to improve fairness in face recognition models. First of all, we argue that the commonly used attribute-based fairness metric is not appropriate for face recognition: a face recognition system can only be considered fair if every person achieves similar performance. Hence, we propose a new evaluation protocol to fairly evaluate the fairness performance of different approaches. Different from previous approaches that require sensitive attribute labels such as race and gender for reducing demographic bias, we aim at addressing the identity bias in face representation, i.e., the performance inconsistency between different identities, without the need for sensitive attribute labels. To this end, we propose a MixFair Adapter to determine and reduce the identity bias of training samples. Our extensive experiments demonstrate that our MixFairFace approach achieves state-of-the-art fairness performance on all benchmark datasets.
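As a hedged illustration of such an identity-level view of fairness, and not necessarily the paper's exact protocol, one can compute a per-identity verification metric and summarize fairness by its spread or worst case across identities. The pair format and threshold handling below are assumptions.

```python
import numpy as np
from collections import defaultdict

def per_identity_tpr(pairs, threshold):
    """pairs: iterable of (identity_id, similarity_score, is_genuine).
    Returns a per-identity true-positive rate over genuine pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for identity, score, genuine in pairs:
        if genuine:
            totals[identity] += 1
            hits[identity] += int(score >= threshold)
    return {i: hits[i] / totals[i] for i in totals}

def identity_fairness_summary(pairs, threshold):
    """Fairness summary across identities: spread and worst case of per-identity TPR."""
    tprs = np.array(list(per_identity_tpr(pairs, threshold).values()))
    return {"mean": tprs.mean(), "std": tprs.std(), "worst": tprs.min()}
```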
Published research has shown that gender classification algorithms are biased across gender-race groups. Specifically, unequal accuracy rates are obtained for women and dark-skinned people. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasians and African-Americans. Further, these strategies typically offer a trade-off between bias and classification accuracy. To further advance the state of the art, we leverage the power of generative views, structured learning, and evidential learning to mitigate gender classification bias. Through extensive experimental validation, we demonstrate the advantages of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-race groups, achieving state-of-the-art performance in intra- and cross-dataset evaluations.
The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research in bias focuses primarily on facial recognition and attribute prediction with scarce emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotation for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin-tone, and (ii) interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.
The impact of demographic factors (e.g., age, gender, race) on automated face recognition systems has been studied extensively. However, the impact of digitally modified demographic and facial attributes on face recognition is relatively under-explored. In this work, we study the effect of attribute manipulations induced via generative adversarial networks (GANs) on face recognition performance. We conduct experiments on the CelebA dataset by intentionally modifying thirteen attributes using AttGAN and STGAN and evaluating their impact on two deep learning-based face verification methods, ArcFace and VGGFace. Our findings indicate that some attribute manipulations involving eyeglasses and digital alteration of sex cues can significantly impair face recognition by up to 73% and need further analysis.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.
Contactless and efficient systems have been implemented rapidly to advocate preventive measures in the fight against the COVID-19 pandemic. Despite the positive benefits of such systems, there is potential for exploitation by invading user privacy. In this work, we analyze the privacy invasiveness of face biometric systems by predicting privacy-sensitive soft biometrics from masked face images. We train and apply a CNN based on the ResNet-50 architecture with 20,003 synthetic masked images and measure privacy invasiveness. Despite the popular belief in the privacy benefits of wearing a mask, we show that there is no significant difference in privacy invasiveness when a mask is worn. In our experiments, we were able to accurately predict sex (94.7%), race (83.1%), and age (MAE 6.21 and RMSE 8.33) from masked face images. Our proposed approach can serve as a baseline utility for evaluating the privacy invasiveness of artificial intelligence systems that make use of privacy-sensitive information. We open-source all contributions for reproducible research and broader use by the research community.
Demographic bias exists in current models used for facial recognition (FR). Our Balanced Faces in the Wild (BFW) dataset serves as a proxy for measuring bias across ethnicity and gender subgroups, allowing one to characterize FR performance per subgroup. We show that results are non-optimal when a single score threshold determines whether sample pairs are genuine or imposters. Across subgroups, performance often differs substantially from the global average; hence, specific error rates only hold for populations that match the validation data. We mitigate the imbalanced performance using a novel domain adaptation learning scheme on the facial features extracted with state-of-the-art neural networks. This technique balances performance across subgroups, but also improves overall performance. A benefit of the proposal is that identity information is preserved in the facial features while the demographic information they contain is reduced. Removing demographic knowledge prevents potential future biases from being injected into decision-making, and improves privacy since less information about an individual is available or inferable. We explore this qualitatively, and we also show quantitatively that subgroup classifiers can no longer learn from the features produced by the proposed domain adaptation scheme. For source code and data description, see https://github.com/visionjo/facerec-bias-bfw.
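To make the single-threshold observation concrete, the following hedged sketch (not the BFW code) compares the false accept and true accept rates each subgroup experiences under one global threshold against subgroup-specific thresholds chosen at the same target FAR.

```python
import numpy as np

def threshold_at_far(impostor_scores, far_target=1e-4):
    """Threshold at which the impostor acceptance rate equals far_target."""
    return float(np.quantile(impostor_scores, 1.0 - far_target))

def subgroup_report(scores_by_subgroup, far_target=1e-4):
    """scores_by_subgroup: subgroup -> (genuine_scores, impostor_scores).
    Compares the error rates each subgroup experiences under a single global
    threshold with subgroup-specific thresholds."""
    all_imp = np.concatenate([imp for _, imp in scores_by_subgroup.values()])
    global_thr = threshold_at_far(all_imp, far_target)
    report = {}
    for name, (gen, imp) in scores_by_subgroup.items():
        local_thr = threshold_at_far(imp, far_target)
        report[name] = {
            "FAR@global_thr": float(np.mean(imp >= global_thr)),
            "TAR@global_thr": float(np.mean(gen >= global_thr)),
            "TAR@subgroup_thr": float(np.mean(gen >= local_thr)),
        }
    return report
```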
With the recent success of deep neural networks, remarkable progress has been achieved in face recognition. However, collecting large-scale real-world training data for face recognition has proven challenging, particularly due to label noise and privacy concerns. Meanwhile, existing face recognition datasets are usually collected from web images and lack detailed annotations on attributes (e.g., pose and expression), so the influence of different attributes on face recognition has been poorly investigated. In this paper, we address these issues in face recognition using synthetic face images, i.e., SynFace. Specifically, we first explore the performance gap between recent state-of-the-art face recognition models trained with synthetic versus real face images. We then analyze the underlying causes of this gap, e.g., poor intra-class variation and the domain gap between synthetic and real face images. Inspired by this, we devise SynFace with identity mixup (IM) and domain mixup (DM) to mitigate these performance gaps, demonstrating the great potential of synthetic data for face recognition. Furthermore, with a controllable face synthesis model, we can easily manage different factors of synthetic face generation, including pose, expression, illumination, the number of identities, and the number of samples per identity. Therefore, we also perform a systematic empirical analysis on synthetic face images to provide insights on how to effectively utilize synthetic data for face recognition.
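Identity mixup and domain mixup build on the general mixup recipe. The sketch below shows that recipe applied to a pair of samples; where SynFace applies the mix (e.g., to latent identity codes rather than pixels) and the Beta parameter are assumptions.

```python
import torch

def mixup(x_a, x_b, y_a, y_b, alpha=0.2):
    """Generic mixup: convex combination of two inputs, with the loss split
    between their label targets. In an identity-mixup setting, x_a and x_b
    would come from two different identities; in a domain-mixup setting,
    one would be a synthetic face and the other a real face."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x = lam * x_a + (1.0 - lam) * x_b
    # Train with: lam * loss(model(x), y_a) + (1 - lam) * loss(model(x), y_b)
    return x, lam, y_a, y_b
```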
Biases inherent in both data and algorithms make the fairness of widespread machine learning (ML)-based decision-making systems less than optimal. To improve the trustworthiness of such ML decision systems, it is crucial to be aware of the inherent biases in these solutions and to make them more transparent to the public and to developers. In this work, we aim to provide a set of explainability tools that analyze the differences in face recognition model behavior when processing different demographic groups. We do so by building explainability tools on higher-order statistics of activation maps, linking the behavioral differences of FR models to certain facial regions. Experimental results on two datasets and two face recognition models point to certain demographic groups to which the FR models respond differently in comparison to a reference group. Interestingly, the outcome of these analyses aligns well with the findings of studies on anthropometric differences and differences in human judgment. This is therefore the first study that specifically tries to explain the biased behavior of FR models on different demographic groups and to link it directly to spatial facial features. The code is publicly available.
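A hedged sketch of the kind of higher-order activation-map statistic such tools could rely on (the specific statistics and aggregation used in the paper are not given here and are assumptions): per-location moments of a convolutional activation map are compared between a demographic group and a reference group to highlight facial regions where the model behaves differently.

```python
import numpy as np

def activation_moments(act_maps):
    """act_maps: array of shape (N, C, H, W) of activation maps for one group.
    Returns per-location mean, variance, and skewness aggregated over samples
    and channels."""
    x = act_maps.reshape(-1, act_maps.shape[-2], act_maps.shape[-1])  # (N*C, H, W)
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    skew = ((x - mean) ** 3).mean(axis=0) / np.maximum(var, 1e-8) ** 1.5
    return mean, var, skew

def group_difference_map(acts_group, acts_reference):
    """(H, W) map of absolute moment differences; larger values indicate
    facial regions where the model's response deviates from the reference group."""
    diffs = [np.abs(a - b) for a, b in zip(activation_moments(acts_group),
                                           activation_moments(acts_reference))]
    return sum(diffs)
```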
In computer vision, there has been significant research interest in assessing potential demographic bias in deep learning models. One of the main causes of such bias is imbalance in the training data. In medical imaging, where the potential impact of bias is arguably much greater, there has been less interest. In medical imaging pipelines, the segmentation of structures of interest plays an important role in estimating clinical biomarkers that are subsequently used to inform patient management. Convolutional neural networks (CNNs) are starting to be used to automate this process. We present the first systematic study of the impact of training-set imbalance on racial and gender bias in CNN-based segmentation. We focus on the segmentation of cardiac structures from short-axis cine cardiac magnetic resonance images and train CNN segmentation models with varying levels of race/gender imbalance. We find no significant bias in the sex experiment, but significant bias in the two separate race experiments, highlighting the need to consider adequate representation of different demographic groups in health datasets.
Face image quality assessment (FIQA) attempts to improve face recognition (FR) performance by providing additional information about sample quality. Because FIQA methods attempt to estimate the utility of a sample for face recognition, it is reasonable to assume that these methods are heavily influenced by the underlying face recognition system. Although modern face recognition systems are known to perform well, several studies have found that such systems often exhibit problems with demographic bias. It is therefore likely that such problems are also present with FIQA techniques. To investigate the demographic biases associated with FIQA approaches, this paper presents a comprehensive study involving a variety of quality assessment methods (general-purpose image quality assessment, supervised face quality assessment, and unsupervised face quality assessment methods) and three diverse state-of-the-art FR models. Our analysis on the Balanced Faces in the Wild (BFW) dataset shows that all techniques considered are affected more by variations in race than sex. While the general-purpose image quality assessment methods appear to be less biased with respect to the two demographic factors considered, the supervised and unsupervised face image quality assessment methods both show strong bias with a tendency to favor white individuals (of either sex). In addition, we found that methods that are less racially biased perform worse overall. This suggests that the observed bias in FIQA methods is to a significant extent related to the underlying face recognition system.