The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research on bias focuses primarily on facial recognition and attribute prediction, with little emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotations for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin tone, and (ii) an interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.
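To make this kind of disparity analysis concrete, the following is a minimal sketch of per-subgroup detection accuracy: each annotated face counts as detected if some predicted box overlaps it above an IoU threshold, and rates are aggregated per demographic label. The record layout and the 0.5 threshold are illustrative assumptions, not the F2LA protocol.

```python
from collections import defaultdict

def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def detection_rate_by_group(faces, detections, thr=0.5):
    # faces: dicts with a ground-truth 'box' and a demographic 'group'
    # label (hypothetical field names); detections: predicted boxes.
    hits, totals = defaultdict(int), defaultdict(int)
    for face in faces:
        totals[face["group"]] += 1
        if any(iou(face["box"], d) >= thr for d in detections):
            hits[face["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}
```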
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Due to their impact on society, it has become imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone. This has led to research on the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of biased facial analysis.
Face detection is a long-standing challenge in the field of computer vision, with the ultimate goal of accurately localizing human faces in unconstrained environments. Confounding factors related to pose, image resolution, illumination, occlusion, and viewpoint present significant technical obstacles to these systems. That said, with recent developments in machine learning, face detection systems have achieved extraordinary accuracy, largely built on data-driven deep learning models [70]. Though encouraging, a critical aspect that limits face detection performance and the social responsibility of deployed systems is the inherent diversity of human appearance. Every human appearance reflects something unique about a person, including their heritage, identity, experience, and visible forms of self-expression. However, questions remain about how well face detection systems perform when faced with varying face sizes and shapes, skin tones, body modifications, and body adornments. Toward this goal, we collected a dataset of distinctive human appearances, an image set representing appearances that occur with low frequency and tend to be undersampled in face datasets. We then evaluated the ability of current state-of-the-art face detection models to detect faces in these images. The evaluation results show that face detection algorithms do not generalize well to these diverse appearances. Evaluating and characterizing the state of current face detection models will accelerate research and development towards creating fairer and more accurate face detection systems.
This paper introduces a novel dataset to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones, and ambient lighting conditions. Our dataset is composed of 3,011 subjects and contains over 45,000 videos, with an average of 15 videos per person. The videos were recorded in multiple U.S. states with a diverse set of adults across various age, gender, and apparent skin tone groups. A key feature is that each subject consented to the use of their likeness. Additionally, our age and gender annotations are provided by the subjects themselves. A group of trained annotators labeled the subjects' apparent skin tone using the Fitzpatrick skin type scale. Annotations for videos recorded in low ambient lighting are also provided. As an application for measuring the robustness of predictions across certain attributes, we provide a comprehensive study of the top five winners of the DeepFake Detection Challenge (DFDC). Experimental evaluation shows that the winning models perform worse on some specific groups of people, such as subjects with darker skin tones, and thus may not generalize to all people. In addition, we also evaluate state-of-the-art apparent age and gender classification methods. Our experiments provide a thorough analysis of these models in terms of the fair treatment of people from various backgrounds.
As facial recognition systems are deployed more widely, scholars and activists have studied their biases and harms. Audits are commonly used to accomplish this, comparing the performance of facial recognition systems against datasets with various metadata labels about the subjects of the images. Seminal works have found discrepancies in performance by gender expression, age, perceived race, skin type, etc. These studies and audits often examine algorithms which fall into two categories: academic models or commercial models. We present a detailed comparison between academic and commercial face detection systems, specifically examining robustness to noise. We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness, specifically by having statistically significant decreased performance on older individuals and those who present their gender in a masculine manner. When we compare the size of these disparities to that of commercial models, we conclude that commercial models, despite their relatively larger development budgets and industry-level fairness commitments, are always as biased as or more biased than an academic model.
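One standard way to show that such a performance gap is statistically significant, as the audit above claims, is a permutation test over per-image scores. The sketch below is a generic illustration of that idea, not the paper's exact procedure.

```python
import random

def permutation_test(scores_a, scores_b, n_perm=10_000, seed=0):
    # Two-sided permutation test for a gap in mean per-image detection
    # scores between two demographic groups A and B.
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(scores_a) - mean(scores_b))
    pooled, n_a = scores_a + scores_b, len(scores_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            extreme += 1
    return extreme / n_perm  # estimated p-value
```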
Media reports have accused face recognition of being 'biased', 'sexist', and 'racist'. There is consensus in the research literature that face recognition accuracy is lower for females, who often have both a higher false match rate and a higher false non-match rate. However, there is little published research aimed at identifying the causes of lower accuracy for females. For instance, the 2019 Face Recognition Vendor Test, which documents lower female accuracy across a broad range of algorithms and datasets, also lists 'Analyze cause and effect' under the heading 'What we did not do'. We present the first experimental analysis to identify major causes of lower face recognition accuracy for females on datasets where this result has been observed in previous research. Controlling for an equal amount of visible face area in the test images mitigates the apparently higher false non-match rate for females. Additional analysis shows that makeup-balanced datasets further improve the false non-match rate for females. Finally, a clustering experiment suggests that images of two different females are inherently more similar to each other than images of two different males, potentially accounting for the difference in false match rates.
Facial analysis systems have been deployed by large companies and critiqued by scholars and activists for the past decade. Many existing algorithmic audits examine the performance of these systems on later-stage elements of facial analysis, such as facial recognition and age, emotion, or perceived gender prediction; however, a core component of these systems has been vastly understudied from a fairness perspective: face detection, sometimes called face localization. Since face detection is a prerequisite step in facial analysis systems, the bias we observe in face detection will flow downstream to the other components like facial recognition and emotion prediction. Additionally, no prior work has focused on the robustness of these systems under various perturbations and corruptions, which leaves open the question of how various people are impacted by these phenomena. We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models. We use both standard and recently released academic facial datasets to quantitatively analyze trends in face detection robustness. Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
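A noise-robustness benchmark of this kind can be sketched as sweeping a corruption severity and recording per-group accuracy. The `detector` interface, the Gaussian-noise corruption, and the severity values below are assumptions for illustration; real benchmarks typically apply a wider battery of corruption types.

```python
import numpy as np

def add_noise(image, sigma, seed=0):
    # Additive Gaussian pixel noise on a uint8 image.
    noise = np.random.default_rng(seed).normal(0.0, sigma, image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def robustness_by_group(detector, samples, sigmas=(0, 10, 20, 40, 80)):
    # samples: (image, group) pairs; detector(image) -> True when the
    # annotated face is still found. Returns {sigma: {group: accuracy}}.
    curves = {}
    for sigma in sigmas:
        hits, totals = {}, {}
        for image, group in samples:
            totals[group] = totals.get(group, 0) + 1
            found = bool(detector(add_noise(image, sigma)))
            hits[group] = hits.get(group, 0) + int(found)
        curves[sigma] = {g: hits[g] / totals[g] for g in totals}
    return curves
```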
Face presentation attack detection (PAD) is critical for securing face recognition (FR) applications. FR performance has been shown to be unfair to some demographic and non-demographic groups. However, the fairness of face PAD is an understudied problem, mainly due to the lack of appropriately annotated data. To address this issue, this work first presents a combined, attribute-annotated PAD dataset (CAAD-PAD), built by merging several well-known PAD datasets, for which we provide seven human-annotated attribute labels. This work then comprehensively analyses the fairness of a set of face PAD methods and its relation to the nature of the training data and the Operational Decision Threshold Assignment (ODTA) by studying four face PAD approaches on our CAAD-PAD. To simultaneously represent both PAD fairness and absolute PAD performance, we introduce a novel metric, the Accuracy Balanced Fairness (ABF). Extensive experiments on CAAD-PAD show that the training data and ODTA induce unfairness across gender, occlusion, and other attribute groups. Based on these analyses, we propose a data augmentation method, FairSWAP, which aims at disrupting identity/semantic information and guiding models to mine attack clues rather than attribute-related information. Detailed experimental results demonstrate that FairSWAP generally improves both PAD performance and the fairness of face PAD.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.
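The intersectional analysis above reduces to tabulating error rates over gender-by-skin-type subgroups. A minimal sketch follows; the field names are hypothetical.

```python
def intersectional_error_rates(records):
    # records: dicts with 'gender', 'skin_type' (e.g. a binned
    # Fitzpatrick score), and 'correct' (whether the classifier
    # predicted the right label for this image).
    errors, totals = {}, {}
    for r in records:
        key = (r["gender"], r["skin_type"])
        totals[key] = totals.get(key, 0) + 1
        errors[key] = errors.get(key, 0) + (not r["correct"])
    return {k: errors[k] / totals[k] for k in totals}
```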
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
Coronavirus disease 2019 (COVID-19) has continued to pose enormous challenges to the world since its outbreak. To fight the disease, a series of artificial intelligence (AI) techniques have been developed and applied to real-world scenarios such as safety monitoring, disease diagnosis, infection risk assessment, and lesion segmentation of COVID-19 CT scans. The coronavirus epidemic has forced people to wear face masks to counteract the transmission of the virus, which also makes it difficult to monitor large groups of people wearing masks. In this paper, we primarily focus on AI techniques for masked face detection and the related datasets. Beginning with descriptions of masked face detection datasets, we survey the recent advances. Thirteen available datasets are described and discussed in detail. The methods are then roughly categorized into two classes: conventional methods and neural network-based methods. Conventional methods, which account for a small proportion of the literature, are usually trained with boosting algorithms on hand-crafted features. Neural network-based methods are further classified into three parts according to the number of processing stages. Representative algorithms are described in detail, coupled with brief descriptions of some typical techniques. Finally, we summarize recent benchmark results, discuss the limitations of the datasets and methods, and outline future research directions. To the best of our knowledge, this is the first survey of masked face detection methods and datasets. We hope our survey can provide some help in the fight against epidemics.
Facial forgery by deepfakes has raised severe societal concerns. The vision community has proposed several solutions to effectively combat misinformation on the internet via automated deepfake detection systems. Recent studies have demonstrated that deep learning models for facial analysis can discriminate based on protected attributes. For the commercial adoption and large-scale roll-out of deepfake detection technology, it is vital to evaluate and understand the fairness (i.e., the absence of any prejudice or favoritism) of deepfake detectors across demographic variations such as gender and race, since a performance differential between demographic subgroups would impact the millions of people in the disadvantaged subgroup. This paper aims to evaluate the fairness of deepfake detectors across males and females. However, existing deepfake datasets are not annotated with demographic labels to facilitate fairness analysis. To this end, we manually annotated existing popular deepfake datasets with gender labels and evaluated the performance differential of current deepfake detectors across gender. Our analysis of the gender-labeled versions of the datasets suggests that (a) current deepfake datasets have a skewed distribution across gender, and (b) commonly adopted deepfake detectors obtain unequal performance across gender, with males mostly outperforming females. Finally, we contribute a gender-balanced and annotated deepfake dataset, GBDF, to mitigate the performance differential and to promote research and development towards fairness-aware deepfake detectors. The GBDF dataset is publicly available at: https://github.com/aakash4305/GBDF
In this paper, we propose a novel explanatory framework aimed at providing a better understanding of how face recognition models perform as the underlying data characteristics on which they are tested change (protected attributes: gender, ethnicity, age; non-protected attributes: facial hair, makeup, accessories, face orientation and occlusion, image distortion, emotion). With our framework, we evaluate ten state-of-the-art face recognition models and compare their fairness in terms of security and usability on two datasets, involving six groups based on gender and ethnicity. We then analyze the impact of image characteristics on model performance. Our results show that trends appearing in a single-attribute analysis disappear or reverse when multi-attribute groups are considered, and that performance disparities are also related to non-protected attributes. Source code: https://cutt.ly/2xwrlia.
Face detection is one of the most studied topics in the computer vision community. Much of this progress has been driven by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and real-world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that the WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance, and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that are worth further investigation. The dataset can be downloaded at: mmlab.ie.cuhk.edu.hk/projects/WIDERFace
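Benchmarks of this kind typically score detectors by greedily matching confidence-ranked predictions to ground truth at a fixed IoU threshold. The sketch below illustrates that matching, reusing the `iou` helper from the first sketch above; the 0.5 threshold is the conventional choice, assumed here.

```python
def match_detections(detections, truths, iou_thr=0.5):
    # detections: (box, score) pairs; truths: ground-truth boxes.
    # Greedy confidence-ranked matching (reuses iou() defined earlier);
    # returns (precision, recall) for one image.
    matched, tp = set(), 0
    for box, _ in sorted(detections, key=lambda d: -d[1]):
        best, best_iou = None, iou_thr
        for i, t in enumerate(truths):
            if i not in matched and iou(box, t) >= best_iou:
                best, best_iou = i, iou(box, t)
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```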
Modern face detectors based on convolutional neural networks (CNNs) have achieved tremendous progress thanks to large annotated datasets. However, misaligned results with high detection confidence but low localization accuracy restrict further improvement of detection performance. In this paper, the authors first predict high-confidence detection results on the training set itself. Surprisingly, a considerable portion of them exhibit the same misalignment problem. The authors then carefully examine these cases and point out that annotation misalignment is the main cause. Subsequently, a comprehensive discussion is given on the rationality of replacing annotated bounding boxes with predicted ones. Finally, the authors propose a novel Bounding-Box Deep Calibration (BDC) method to reasonably replace misaligned annotations with model-predicted bounding boxes and to offer calibrated annotations for the training set. Extensive experiments on multiple detectors and two popular benchmark datasets show the effectiveness of BDC in improving models' precision and recall, without adding extra inference time or memory consumption. This simple and effective method provides a general strategy for improving face detection, especially for lightweight detectors in real-time situations.
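The abstract does not spell out the replacement rule, so the following is only a simplified sketch of the general BDC idea under assumed thresholds: a confident prediction that overlaps an annotation loosely (same face, imperfect box) replaces it, while tightly aligned annotations are kept. It again reuses the `iou` helper from the first sketch.

```python
def calibrate_annotations(annotations, predictions,
                          conf_thr=0.9, low_iou=0.5, high_iou=0.9):
    # annotations: ground-truth boxes; predictions: (box, score) pairs
    # from the trained detector run on its own training set.
    calibrated = []
    for ann in annotations:
        new_box = ann
        for box, score in predictions:
            # Overlap high enough to be the same face, but low enough
            # to suggest the annotation is misaligned (assumed rule).
            if score >= conf_thr and low_iou <= iou(ann, box) < high_iou:
                new_box = box
                break
        calibrated.append(new_box)
    return calibrated
```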
In recent years, image and video manipulations with deepfakes have become a severe concern for security and society. Many detection models and databases have therefore been proposed to detect deepfake data reliably. However, there is growing concern that these models and training databases might be biased and thus cause deepfake detectors to fail. In this work, we address these issues by (a) providing large-scale demographic and non-demographic annotations of 41 different attributes for five popular deepfake datasets, and (b) comprehensively analysing the AI bias of multiple state-of-the-art deepfake detection models on these databases. The investigation analyses the influence of a large variety of distinctive attributes (from over 65 million labels) on detection performance, including demographic (age, gender, ethnicity) and non-demographic (hair, skin, accessories, etc.) information. The results indicate that the investigated databases lack diversity and, more importantly, show that the deepfake detection models used are strongly biased towards many of the investigated attributes. Moreover, the results show that the models' decisions may rest on several questionable (biased) assumptions, such as whether a person is smiling or wearing a hat. Depending on the application of such deepfake detection methods, these biases can lead to problems of generalizability, fairness, and security. We hope that the findings of this study and the annotated databases will help evaluate and mitigate bias in future deepfake detection techniques. Our annotated datasets are publicly available.
It is widely accepted that there is a 'gender gap' in face recognition accuracy, with females having higher false match and false non-match rates. However, relatively little is known about the causes of this gender gap. Even the recent NIST report on demographic effects lists 'analyze cause and effect' under 'what we did not do'. We first demonstrate that female and male hairstyles have important differences that affect face recognition accuracy. In particular, compared to females, male facial hair contributes to creating a greater average difference in appearance between different male faces. We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears. We show this result for two different matchers, analyzing images of Caucasians and of African-Americans. These results suggest that future research on demographic variation in accuracy should include a check for balanced quality of the test data as part of the problem formulation. To promote reproducible research, the matchers, attribute classifiers, and datasets used in this study will be made publicly available.
The use of facial processing systems in India is increasing amid concerns over privacy, transparency, accountability, and missing procedural safeguards. At the same time, we know very little about how these technologies perform on the diverse features, characteristics, and skin tones of India's population of more than 1.34 billion. In this paper, we test the face detection and facial analysis functions of four commercial facial processing tools on a dataset of Indian faces. The tools display varying error rates in their face detection and gender and age classification functions. The gender classification error rate for Indian female faces is consistently higher than for male faces, with the highest female error rate being 14.68%. In some cases, this error rate is much higher than that shown for females of other nationalities in previous studies. Age classification errors are also high: even allowing an acceptable error margin of plus or minus 10 years from a person's actual age, age prediction fails at rates of 14.3% to 42.2%. These findings point to the limited accuracy of facial processing tools, especially for certain demographic groups, and to the need for more critical thinking before such systems are adopted.
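The age-classification failure figures quoted above follow directly from the stated tolerance; a minimal sketch of that computation:

```python
def age_failure_rate(true_ages, predicted_ages, tolerance=10):
    # Fraction of predictions falling outside the plus-or-minus
    # `tolerance`-year margin around the person's actual age.
    failures = sum(abs(t - p) > tolerance
                   for t, p in zip(true_ages, predicted_ages))
    return failures / len(true_ages)
```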
Over the past decades, the machine learning and deep learning community has celebrated great achievements on challenging tasks such as image classification. The deep architecture of artificial neural networks, together with the breadth of available data, makes it possible to describe highly complex relations. Yet it is still impossible to fully capture what a deep learning model has learned and to verify that it operates fairly and without creating bias, especially in critical tasks such as those arising in the medical field. One example of such a task is the detection of distinct facial expressions, called Action Units, in facial images. Considering this specific task, our research aims to provide transparency regarding bias, specifically in relation to gender and skin color. We train a neural network for Action Unit classification and analyze its performance quantitatively, based on its accuracy, and qualitatively, based on heatmaps. A structured review of our results indicates that we are able to detect bias. Even though we cannot conclude from our results that lower classification performance stems solely from gender and skin color bias, these biases must be addressed, which is why we close with suggestions on how the detected biases can be avoided.
Computer vision (CV) has achieved remarkable results, outperforming humans in several tasks. Nonetheless, it may lead to major discrimination if not handled with proper care, as CV systems highly depend on the data they are fed with and can learn and amplify biases within such data. The problems of understanding and discovering biases are thus of utmost importance. Yet there is no comprehensive survey on bias in visual datasets. Hence, this work aims to: i) describe the biases that can manifest in visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; and iii) discuss existing attempts to collect bias-aware visual datasets. A key conclusion of our study is that the problem of bias discovery and quantification in visual datasets is still open, and there is room for improvement both in the methods and in the range of biases that can be addressed. Moreover, there is no such thing as a bias-free dataset, so scientists and practitioners must become aware of the biases in their datasets and make them explicit. To this end, we propose a checklist to spot different types of bias during the visual dataset collection process.