As facial recognition systems are deployed more widely, scholars and activists have studied their biases and harms. Audits are commonly used for this purpose: they compare an algorithmic facial recognition system's performance across datasets with various metadata labels about the subjects of the images. Seminal works have found discrepancies in performance by gender expression, age, perceived race, skin type, and more. The algorithms these studies and audits examine typically fall into two categories: academic models and commercial models. We present a detailed comparison between academic and commercial face detection systems, specifically examining robustness to noise. We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness, with statistically significantly lower performance on older individuals and those who present their gender in a masculine manner. When we compare the size of these disparities to that of commercial models, we conclude that commercial models, despite their comparatively larger development budgets and industry-level fairness commitments, are consistently as biased as or more biased than the academic models.
Facial analysis systems have been deployed by large companies and critiqued by scholars and activists for the past decade. Many existing algorithmic audits examine the performance of these systems on later-stage elements of facial analysis, such as facial recognition and age, emotion, or perceived gender prediction; however, a core component of these systems has been vastly understudied from a fairness perspective: face detection, sometimes called face localization. Since face detection is a prerequisite step in facial analysis systems, any bias observed in face detection will flow downstream to other components like facial recognition and emotion prediction. Additionally, no prior work has focused on the robustness of these systems under various perturbations and corruptions, which leaves open the question of how different people are affected by these phenomena. We present a first-of-its-kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models. We use both standard and recently released academic facial datasets to quantitatively analyze trends in face detection robustness. Across all the datasets and systems, we generally find that photos of individuals who are masculine presenting, older, of darker skin type, or have dim lighting are more susceptible to errors than their counterparts in other identities.
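The kind of perturbation-based audit this abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's actual protocol: `detect` and the per-image subgroup labels are placeholders for a real detector and an annotated benchmark.

```python
# Minimal sketch of a noise-robustness audit for a face detector.
# `detect(image) -> list of boxes` and the subgroup labels are placeholders.
import numpy as np
from collections import defaultdict

def add_gaussian_noise(image, sigma):
    """Corrupt a uint8 image with zero-mean Gaussian noise of stddev sigma."""
    noisy = image.astype(np.float32) + np.random.normal(0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def audit(records, detect, sigmas=(0, 10, 20, 40)):
    """records: iterable of (image, subgroup) pairs.
    Returns detection rate per (subgroup, noise level)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, subgroup in records:
        for sigma in sigmas:
            found = len(detect(add_gaussian_noise(image, sigma))) > 0
            hits[(subgroup, sigma)] += int(found)
            totals[(subgroup, sigma)] += 1
    return {key: hits[key] / totals[key] for key in totals}
```

Comparing how each subgroup's detection rate decays as the noise level grows is one way to surface the disparities the abstract reports.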
Face detection is a long-standing challenge in the field of computer vision, with the ultimate goal of accurately localizing human faces in unconstrained environments. These systems face significant technical hurdles due to confounding factors related to pose, image resolution, illumination, occlusion, and viewpoint. That said, with recent developments in machine learning, face detection systems have achieved extraordinary accuracy, largely driven by data-driven deep learning models [70]. Though encouraging, a critical aspect that limits face detection performance and the social responsibility of deployed systems is the inherent diversity of human appearance. Every human appearance reflects something about a person, including their heritage, identity, experience, and visible manifestations of self-expression. However, questions remain about how well face detection systems perform when faced with varying face sizes and shapes, skin tones, body modifications, and body adornments. To this end, we collected a distinctive human-appearance dataset, an image set representing appearances that occur with low frequency and tend to be undersampled in face datasets. We then evaluated current state-of-the-art face detection models on their ability to detect faces in these images. The evaluation results show that face detection algorithms do not generalize well to these diverse appearances. Evaluating and characterizing the current state of face detection models will accelerate research and development toward creating fairer and more accurate face detection systems.
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data, which have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions, and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
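As a rough illustration of the proposed structure, a model card can be sketched as plain structured data. The sections below follow the framework's headings, while every field value (model name, numbers, group labels) is invented for illustration and is not from the paper.

```python
# Hypothetical model card sketched as a data structure; all values are
# illustrative placeholders, not figures reported by the paper.
model_card = {
    "model_details": {"name": "smile-detector-v1", "type": "image classifier"},
    "intended_use": "Detect smiling faces in consented photos; not surveillance.",
    "factors": ["Fitzpatrick skin type", "age group", "sex"],
    "metrics": ["precision", "recall", "false positive rate"],
    "evaluation_data": "Held-out benchmark, disaggregated by the factors above.",
    "quantitative_analysis": {   # one entry per (intersectional) group
        ("skin_type_I-III", "female"): {"precision": 0.94, "recall": 0.91},
        ("skin_type_IV-VI", "female"): {"precision": 0.90, "recall": 0.86},
    },
    "ethical_considerations": "Smiling is culturally mediated; do not infer affect.",
    "caveats": "Not validated outside consented, well-lit photographs.",
}
```

The key design choice the framework argues for is that the quantitative analysis is reported per group and per intersection of groups, not as a single aggregate score.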
The increasing deployment of facial processing systems in India has raised growing concerns about privacy, transparency, accountability, and the absence of procedural safeguards. At the same time, we know little about how these technologies perform on the diverse features, characteristics, and skin tones of India's 1.34 billion people. In this paper, we test the face detection and facial analysis functions of four commercial facial processing tools on a dataset of Indian faces. The tools exhibit different error rates in face detection and in gender and age classification. Gender classification error rates for Indian female faces are consistently higher than for males, with the highest female error rate being 14.68%. In some cases, these error rates are much higher than those shown by previous studies of women of other nationalities. Age classification errors are also high. Even allowing a margin of plus or minus 10 years from a person's actual age as an acceptable error, age prediction fails at rates between 14.3% and 42.2%. These findings point to the limited accuracy of facial processing tools, particularly for certain demographic groups, and the need for more critical thinking before adopting such systems.
Computer vision (CV) has achieved remarkable results, outperforming humans on several tasks. Nonetheless, it may lead to significant discrimination if not handled properly, as CV systems are highly dependent on the data they are trained on and can learn and amplify biases within such data. The problem of understanding and discovering bias is therefore paramount. However, no comprehensive survey on bias in visual datasets exists. The aims of this work are accordingly to: i) describe the biases that can manifest in visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; and iii) discuss existing attempts to collect bias-aware visual datasets. A key conclusion of our study is that the problem of discovering and quantifying bias in visual datasets remains open, with room for improvement both in the methods and in the range of biases that can be addressed. Moreover, there is no such thing as a bias-free dataset, so scientists and practitioners must become aware of the biases in their datasets and make them explicit. To this end, we propose a checklist for spotting different types of bias during the visual dataset collection process.
This paper introduces a novel dataset to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones, and ambient lighting conditions. Our dataset is composed of 3,011 subjects and contains over 45,000 videos, with an average of 15 videos per person. The videos were recorded in multiple U.S. states with a diverse set of adults across various age, gender, and apparent skin tone groups. A key feature is that each subject agreed to participate and to the use of their likeness. Additionally, our age and gender annotations are provided by the subjects themselves. A group of trained annotators labeled the subjects' apparent skin tone using the Fitzpatrick skin type scale. Moreover, annotations flagging videos recorded in low ambient lighting are also provided. As an application for measuring the robustness of predictions on certain attributes, we provide a comprehensive study of the top five winners of the DeepFake Detection Challenge (DFDC). Experimental evaluation shows that the winning models perform worse on some specific groups of people, such as those with darker skin tones, and thus may not generalize to everyone. In addition, we also evaluate state-of-the-art apparent age and gender classification methods. Our experiments provide a thorough analysis of these models in terms of the fair treatment of people from various backgrounds.
Machine learning datasets have raised concerns about privacy, bias, and unethical applications, leading to the retraction of prominent datasets such as DukeMTMC, MS-Celeb-1M, and Tiny Images. In response, the machine learning community has called for higher ethical standards in dataset creation. To help inform these efforts, we study three influential but ethically problematic face and person recognition datasets, namely Labeled Faces in the Wild (LFW), MS-Celeb-1M, and DukeMTMC, by analyzing nearly 1000 papers that cite them. We find that the creation of derivative datasets and models, broader technological and social change, a lack of clarity in licenses, and dataset management practices can introduce a wide range of ethical concerns. We conclude by suggesting a distributed approach to mitigating harm that considers the entire life cycle of a dataset.
Face detection is the task of searching all possible regions of an image for faces and localizing any that are present. Many applications, including face recognition, facial expression recognition, face tracking, and head pose estimation, assume that the location and size of faces in an image are known. In recent decades, researchers have created many typical and efficient face detectors, from the Viola-Jones face detector to current CNN-based ones. However, with the tremendous increase in images and videos exhibiting variations in face scale, appearance, expression, occlusion, and pose, traditional face detectors are challenged to detect the diverse "faces in the wild". The emergence of deep learning techniques has brought remarkable detection breakthroughs, along with a considerable price in computation. This paper introduces representative deep learning-based methods and presents a deep and thorough analysis in terms of accuracy and efficiency. We further compare and discuss the popular and challenging datasets and their evaluation metrics. A comprehensive comparison of several successful deep learning-based face detectors is conducted to uncover their efficiency using two metrics: FLOPs and latency. This paper can guide the selection of appropriate face detectors for different applications, and can also inform the development of more efficient and accurate detectors.
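Of the two efficiency metrics, latency is the easier one to reproduce. Below is a minimal wall-clock measurement sketch, not the survey's benchmarking harness; `detector` stands in for any face detection model, and FLOPs counting is omitted since it depends on the specific framework.

```python
# Rough wall-clock latency benchmark; `detector` is a placeholder callable.
import time

def mean_latency_ms(detector, image, warmup=5, runs=50):
    """Average per-image inference time in milliseconds."""
    for _ in range(warmup):            # warm caches and lazy initialization
        detector(image)
    start = time.perf_counter()
    for _ in range(runs):
        detector(image)
    return (time.perf_counter() - start) / runs * 1000.0
```

In practice one would repeat this across input resolutions and hardware, since both strongly affect the accuracy-efficiency trade-off the survey analyzes.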
Computer vision applications such as automated face detection are used for a variety of purposes, from unlocking smart devices to tracking potential persons of interest for surveillance. Audits of these applications have revealed that they tend to be biased against minority groups, resulting in unfair and concerning societal and political outcomes. Despite multiple studies over time, these biases have not been completely mitigated and have in fact increased for tasks like age prediction. While these systems are audited on benchmark datasets, it is necessary to evaluate their robustness against adversarial inputs. In this work, we conduct extensive adversarial audits on multiple systems and datasets and make a number of concerning observations: accuracy on some tasks has dropped since previous audits. While a bias in accuracy against individuals from minority groups still exists across multiple datasets, the more worrisome observation is that these biases tend to be amplified under adversarial inputs for minority subgroups. We discuss the broader societal implications in light of these observations and offer recommendations on how to collectively deal with this issue.
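The abstract does not specify which adversarial inputs were used; as a representative example of the genre, here is a minimal fast gradient sign method (FGSM) sketch against a differentiable model, with `model` and `loss_fn` as placeholders.

```python
# Sketch of FGSM, a common adversarial perturbation (shown only as a
# representative example; the audited attacks are not specified above).
import torch

def fgsm(model, loss_fn, image, label, eps=2 / 255):
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by eps per pixel.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()
```

An adversarial audit then recomputes per-subgroup accuracy on the perturbed inputs and compares the resulting disparities to those measured on clean data.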
The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Research on bias focuses primarily on facial recognition and attribute prediction, with scarce emphasis on face detection. Existing studies consider face detection as binary classification into 'face' and 'non-face' classes. In this work, we investigate possible bias in the domain of face detection through facial region localization, which is currently unexplored. Since facial region localization is an essential task for all face recognition pipelines, it is imperative to analyze the presence of such bias in popular deep models. Most existing face detection datasets lack suitable annotation for such analysis. Therefore, we web-curate the Fair Face Localization with Attributes (F2LA) dataset and manually annotate more than 10 attributes per face, including facial localization information. Utilizing the extensive annotations from F2LA, an experimental setup is designed to study the performance of four pre-trained face detectors. We observe (i) a high disparity in detection accuracies across gender and skin-tone, and (ii) an interplay of confounding factors beyond demography. The F2LA data and associated annotations can be accessed at http://iab-rubric.org/index.php/F2LA.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%. The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent and accountable facial analysis algorithms.
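The intersectional analysis reported here reduces to computing an error rate per (skin type, gender) cell and comparing the extremes. A minimal sketch follows, assuming a benchmark iterable of ground-truth labels and model predictions (both placeholders).

```python
# Sketch of a disaggregated (intersectional) error analysis; `benchmark`
# is a placeholder iterable of (skin_type, true_gender, predicted_gender).
from collections import defaultdict

def error_rates(benchmark):
    errors, totals = defaultdict(int), defaultdict(int)
    for skin_type, true_gender, predicted_gender in benchmark:
        cell = (skin_type, true_gender)
        totals[cell] += 1
        errors[cell] += int(predicted_gender != true_gender)
    return {cell: errors[cell] / totals[cell] for cell in totals}

# The headline numbers above are of this form: the worst cell (darker-skinned
# females, up to 34.7%) versus the best cell (lighter-skinned males, 0.8%).
# rates = error_rates(benchmark)
# disparity = max(rates.values()) - min(rates.values())
```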
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
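For readers who want to try the released model, the repository's README documents a zero-shot usage pattern along the following lines; the label strings and image path here are arbitrary examples.

```python
# Zero-shot classification with the released CLIP weights
# (https://github.com/OpenAI/CLIP); labels and image path are examples.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then score each caption by cosine similarity to the image.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(labels[probs.argmax().item()])
```

Because the label set is just natural-language text, swapping in new class names requires no retraining, which is what enables the zero-shot transfer described in the abstract.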
Accurate age estimation of child sexual abuse and exploitation (CSAE) victims is one of the most significant digital forensic challenges. Investigators often need to determine a victim's age by looking at images and interpreting sexual development stages and other human characteristics. The top priority, safeguarding children, is often adversely affected by the enormous forensic backlogs, cognitive bias, and immense psychological stress this work can entail. This paper evaluates existing facial image datasets and proposes a new dataset tailored to the needs of similar digital forensic research contributions. This small, diverse dataset of individuals aged 0 to 20 contains 245 images and is merged with 82 unique images from the FG-NET dataset, resulting in 327 images with high image diversity and low age-range density. The new dataset is tested on the Deep EXpectation (DEX) algorithm pre-trained on the IMDB-WIKI dataset. The overall results for young adolescents aged 16 to 20 and older adolescents/adults are very encouraging, achieving an MAE of 1.79 years, but they also indicate that further work is needed on accuracy for children aged 0 to 10. To assess the efficacy of the prototype, valuable input from four digital forensic experts was considered to improve the age estimation results. Further research is needed to extend the dataset with respect to image density and the equal distribution of factors such as gender and racial diversity.
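For reference, the two error measures used in this abstract and in the India audit above are the mean absolute error and accuracy within a tolerance of t years; written out for true ages y_i and predictions ŷ_i:

```latex
% MAE (reported above as 1.79 years) and accuracy within a +/- t-year
% window (t = 10 in the India audit earlier in this section).
\[
  \mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \lvert \hat{y}_i - y_i \rvert ,
  \qquad
  \mathrm{Acc}_{\pm t} = \frac{1}{N}\sum_{i=1}^{N}
    \mathbf{1}\!\left[\, \lvert \hat{y}_i - y_i \rvert \le t \,\right]
\]
```

The 14.3% to 42.2% failure rates quoted in the India audit correspond to 1 - Acc within the plus-or-minus 10-year window.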
Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple. However, their design involves a variety of choices. Some of these choices influence the results that will be obtained, and thus the conclusions that can be drawn; others influence the impacts, both beneficial and harmful, that a disaggregated evaluation will have on people, including the people whose data is used to conduct the evaluation. We argue that a deeper understanding of these choices will enable researchers and practitioners to design careful and conclusive disaggregated evaluations. We also argue that better documentation of these choices, along with the underlying considerations and tradeoffs, will help others when interpreting an evaluation's results and conclusions.
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to modeling biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from engaging in unfair decision-making, the AI community has concentrated its efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
As data-driven systems are increasingly deployed at scale, ethical concerns arise over unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work around AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this paper, we present an analysis of the representation of age, gender, and race and ethnicity in accessibility datasets (datasets sourced from people with disabilities and older adults), which can play an important role in mitigating bias for inclusive AI-infused applications. We examine the current state of representation in datasets sourced from people with disabilities by reviewing publicly available information on 190 datasets, which we call accessibility datasets. We find that accessibility datasets represent diverse ages but have gaps in gender and race representation. Additionally, we study how the sensitive and complex nature of demographic variables makes classification difficult and inconsistent (e.g., for gender, race, and ethnicity), with the source of the labels often unknown. By reflecting on the current challenges and opportunities in representing disabled data contributors, we hope our efforts expand the possibilities of including marginalized communities in AI-infused systems.
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Due to their impact on society, it is imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone. This has led to research on the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of bias in facial analysis.
Over the past few years, discriminatory practices involving AI-driven police work have been the subject of much controversy, with algorithms such as COMPAS, PredPol, and ShotSpotter accused of unfairly impacting minority groups. At the same time, fairness in machine learning, and in computer vision in particular, has become the subject of a growing body of academic work. In this paper, we examine how these areas intersect. We provide information on how these discriminatory practices come about and the difficulties in mitigating them. We then examine three applications currently under development to understand the risks they pose to fairness and how those risks can be mitigated.