Gender biases are known to exist within large-scale visual datasets and can be reflected in, or even amplified by, downstream models. Many prior works have sought to mitigate gender bias by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate the $\textit{gender artifacts}$ present in large-scale visual datasets. We define a $\textit{gender artifact}$ as a visual cue that is correlated with gender, focusing specifically on cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., the pose and location of people). Given the prevalence of gender artifacts, we claim that attempts to remove gender artifacts from such datasets are largely infeasible. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered, and to develop methods that are robust to these distributional shifts across groups.
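The claim above that even low-level statistics such as per-channel color means can act as gender artifacts can be illustrated with a minimal sketch (not the paper's code): extract the mean RGB value of each image and check whether a linear classifier predicts a binary gender label better than chance. The arrays below are hypothetical placeholders standing in for a COCO-style split.

```python
# Minimal sketch: probe whether mean color channels carry gender signal.
# `images` and `labels` are placeholder data, not an actual dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def mean_color_features(images):
    """images: array of shape (N, H, W, 3); returns per-image channel means (N, 3)."""
    return images.reshape(len(images), -1, 3).mean(axis=1)

images = np.random.randint(0, 256, size=(1000, 64, 64, 3)).astype(np.float32)  # placeholder images
labels = np.random.randint(0, 2, size=1000)                                    # placeholder binary labels

X_train, X_test, y_train, y_test = train_test_split(
    mean_color_features(images), labels, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
# Accuracy meaningfully above chance on real data would indicate a low-level gender artifact.
print("held-out accuracy:", clf.score(X_test, y_test))
```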
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
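A minimal sketch of the zero-shot transfer described above, using the open-source package released at https://github.com/OpenAI/CLIP; the image path and class names are placeholders, and the prompt template is one common choice rather than a prescribed one.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)   # load a pre-trained CLIP model

# Natural-language class descriptions stand in for a fixed label set.
class_names = ["dog", "cat", "airplane"]                    # placeholder classes
text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    logits_per_image, _ = model(image, text)                # image-text similarity logits
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(dict(zip(class_names, probs[0])))                     # zero-shot class probabilities
```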
Computer vision (CV) has achieved remarkable results, outperforming humans on several tasks. Nonetheless, it can lead to significant discrimination if not handled properly, as CV systems depend heavily on the data they are trained on and can learn and amplify the biases present in such data. Understanding and discovering bias is therefore of paramount importance. However, there is no comprehensive survey on bias in visual datasets. Hence, this work aims to: i) describe the biases that can manifest in visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; iii) discuss existing attempts to collect bias-aware visual datasets. A key conclusion of our study is that the problem of bias discovery and quantification in visual datasets remains open, with room for improvement both in the methods and in the range of biases that can be addressed. Moreover, there is no such thing as a bias-free dataset, so scientists and practitioners must become aware of the biases in their datasets and make them explicit. To this end, we propose a checklist for spotting different types of bias during the visual dataset collection process.
Previous work has largely considered the fairness of image captioning systems through the underspecified lens of "bias." In contrast, we present a set of techniques for measuring five types of representational harms, along with the resulting measurements obtained using the most popular image captioning dataset. Our goal is not to audit this image captioning system, but to develop normative measurement techniques, which in turn provides an opportunity to reflect on the many challenges involved. We propose multiple measurement techniques for each type of harm. We argue that doing so better captures the multi-faceted nature of each harm, thereby improving the (collective) validity of the resulting measurements. Throughout, we discuss the assumptions underlying our measurement approach and point out the assumptions it does not make.
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from incurring on unfair decision-making, the AI community has concentrated efforts in correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Over the past decades, the machine and deep learning communities have celebrated major achievements on challenging tasks such as image classification. The deep architectures of artificial neural networks, together with the breadth of available data, make it possible to describe highly complex relationships. However, it is still impossible to fully capture what a deep learning model has learned and to verify that it operates fairly and without bias, which is particularly critical for sensitive tasks such as those arising in the medical domain. One example of such a task is detecting distinct facial expressions, known as action units, in facial images. Given this specific task, our research aims to provide transparency regarding bias, specifically with respect to gender and skin color. We train a neural network for action unit classification and analyze its performance quantitatively, based on its accuracy, and qualitatively, based on heatmaps. A structured review of our results indicates that we are able to detect bias. Although we cannot conclude from our results that lower classification performance stems solely from gender and skin-color bias, these biases must be addressed, which is why we close by proposing suggestions on how to avoid the detected biases.
Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely defined groups related to sensitive attributes such as gender or race. However, these algorithms seldom account for within-group heterogeneity and biases that may disproportionately affect some members of a group. In this work, we characterize Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination that may be exhibited by machine learning models even when these systems achieve group fairness objectives. We study this issue through the lens of gender bias in occupation classification. We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to inferred gender norms. When predicting whether a person belongs to a male-dominated occupation, this framework reveals that "fair" classifiers still favor biographies written in ways that align with inferred masculine norms. We compare SNoB across algorithmic fairness methods and show that it often persists as a residual bias, and that post-processing methods do not mitigate this bias at all.
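As a rough illustration of the SNoB measurement described above, an association between predictions and adherence to inferred gender norms can be summarized with a rank correlation. The sketch below is not the paper's implementation; both arrays are placeholders for classifier outputs and per-biography masculine-norm adherence scores.

```python
# Minimal sketch: correlate occupation predictions with norm-adherence scores.
import numpy as np
from scipy.stats import spearmanr

pred_prob_occupation = np.random.rand(500)   # placeholder: P(male-dominated occupation) per biography
masculine_norm_score = np.random.rand(500)   # placeholder: adherence to inferred masculine norms

rho, pval = spearmanr(pred_prob_occupation, masculine_norm_score)
print(f"SNoB-style association: rho={rho:.3f} (p={pval:.3g})")
```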
Media reports have accused face recognition of being "biased," "sexist," and "racist." There is consensus in the research literature that face recognition accuracy is lower for females, who typically have both higher false match rates and higher false non-match rates. However, there is little published research aimed at identifying the causes of lower accuracy for females. For example, the 2019 Face Recognition Vendor Test that documents lower female accuracy across a broad range of algorithms and datasets also lists "analyze cause and effect" under "what we did not do." We present the first experimental analysis to identify major causes of lower face recognition accuracy for females on datasets where this result has previously been observed. Controlling for equal amounts of visible face area in the test images mitigates the apparently higher false non-match rate for females. Additional analysis shows that datasets balanced for makeup further improve the false non-match rate for females. Finally, a clustering experiment documents that images of two different females are inherently more similar to each other than images of two different males, potentially accounting for the difference in false match rates.
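For reference, the two error rates discussed above can be computed from genuine and impostor similarity scores at a fixed decision threshold; this is a generic sketch, not the paper's evaluation pipeline, and the score distributions below are synthetic placeholders.

```python
import numpy as np

def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """False match rate (impostor pairs accepted) and false non-match rate
    (genuine pairs rejected) at a similarity threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    fnmr = float(np.mean(genuine < threshold))    # genuine comparisons wrongly rejected
    fmr = float(np.mean(impostor >= threshold))   # impostor comparisons wrongly accepted
    return fmr, fnmr

# Placeholder score distributions: higher score means more similar.
fmr, fnmr = fmr_fnmr(np.random.beta(8, 2, 1000), np.random.beta(2, 8, 1000), threshold=0.5)
print(f"FMR={fmr:.4f}, FNMR={fnmr:.4f}")
```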
Language can be used as a means of reproducing and enforcing harmful stereotypes and biases, and has been analyzed as such in numerous studies. In this paper, we present a survey of 304 papers on gender bias in natural language processing. We analyze definitions of gender and its categories within the social sciences and connect them to formal definitions of gender bias in NLP research. We survey the lexica and datasets applied in research on gender bias, and then compare and contrast approaches to detecting and mitigating gender bias. We find that research on gender bias suffers from four core limitations. 1) Most studies treat gender as a binary variable, neglecting its fluidity and continuity. 2) Most of the work has been conducted in monolingual setups for English or other high-resource languages. 3) Despite a myriad of papers on gender bias in NLP methods, we find that most newly developed algorithms are not tested for bias and disregard the ethical considerations of their work. 4) Finally, methodologies developed in this line of research are fundamentally flawed, covering very limited definitions of gender bias and lacking evaluation baselines and pipelines. We suggest recommendations for overcoming these limitations as a guide for future research.
As data-driven systems are increasingly deployed at scale, ethical concerns have arisen around unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work around AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this paper, we contribute an analysis of the representativeness of age, gender, and race & ethnicity in accessibility datasets (datasets sourced from people with disabilities and older adults), which may play an important role in mitigating bias for inclusive AI-infused applications. We examine the current state of representation in datasets sourced from people with disabilities by reviewing the publicly available information of 190 datasets, which we refer to as accessibility datasets. We find that accessibility datasets represent diverse ages, but have gender and race representation gaps. Additionally, we investigate how the sensitive and complex nature of demographic variables makes classification difficult and inconsistent (e.g., gender, race & ethnicity), with the source of labeling often unknown. By reflecting on the current challenges and opportunities for representation of disabled data contributors, we hope our effort expands the possibilities for greater inclusion of marginalized communities in AI-infused systems.
Deep image classifiers have been found to learn biases from datasets. To mitigate these biases, most previous methods require labels of protected attributes (e.g., age, skin tone) as full supervision, which has two limitations: 1) it is infeasible when the labels are unavailable; 2) such methods cannot mitigate unknown biases, i.e., biases that humans do not preconceive. To resolve these problems, we propose Debiasing Alternate Networks (DebiAN), which comprises two networks, a Discoverer and a Classifier. By training in an alternate manner, the Discoverer tries to find multiple unknown biases of the Classifier without any bias annotations, and the Classifier aims to unlearn the biases identified by the Discoverer. While previous works evaluate debiasing results with respect to a single bias, we create the Multi-Color MNIST dataset to better benchmark the mitigation of multiple biases in a multi-bias setting, which not only reveals problems in previous methods but also demonstrates the advantage of DebiAN in identifying and mitigating multiple biases simultaneously. We further conduct extensive experiments on real-world datasets, showing that the Discoverer in DebiAN can identify unknown biases that may be hard for humans to find. Regarding debiasing, DebiAN achieves strong bias mitigation performance.
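The alternating scheme described above can be sketched schematically as follows; this is not the official DebiAN code, and `discoverer_loss` / `debias_loss` are placeholder callables standing in for the paper's objectives.

```python
# Schematic sketch of alternate training: the discoverer is updated to expose
# biases of the (frozen) classifier, then the classifier is updated to remove
# the bias assignment the discoverer found. Losses are placeholders.
import torch

def alternate_training(classifier, discoverer, loader,
                       discoverer_loss, debias_loss, epochs=10, lr=1e-4):
    opt_c = torch.optim.Adam(classifier.parameters(), lr=lr)
    opt_d = torch.optim.Adam(discoverer.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            # Step 1: update the discoverer against the frozen classifier.
            with torch.no_grad():
                preds = classifier(x)
            loss_d = discoverer_loss(discoverer(x), preds, y)
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

            # Step 2: update the classifier to unlearn the discovered bias.
            bias_assignment = discoverer(x).detach()
            loss_c = debias_loss(classifier(x), y, bias_assignment)
            opt_c.zero_grad()
            loss_c.backward()
            opt_c.step()
    return classifier, discoverer
```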
Humans have a remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost cannot help but draw probable inferences based on our everyday experience and knowledge about the world. For example, if we see a "20 mph" sign alongside a road, we may assume the street is in a residential area (rather than on a highway), even if no houses are pictured. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image content. We adopt a free-viewing paradigm: participants first observe and identify salient clues within images (e.g., objects, actions) and then, given a clue, provide plausible inferences about the scene. In total, we collect 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning. We evaluate the capacity of models to: i) retrieve relevant inferences from a large candidate corpus; ii) localize evidence for inferences via bounding boxes; and iii) compare plausible inferences to match human judgments on a newly collected diagnostic corpus of 19K Likert-scale judgments. While we find that fine-tuning CLIP RN50x64 with a multi-task objective outperforms strong baselines, significant headroom remains between model performance and human agreement. Data, models, and the leaderboard are available at http://visualabduction.com/
As facial recognition systems are deployed more widely, scholars and activists have studied their biases and harms. Audits are commonly used to accomplish this and compare the algorithmic facial recognition systems' performance against datasets with various metadata labels about the subjects of the images. Seminal works have found discrepancies in performance by gender expression, age, perceived race, skin type, etc. These studies and audits often examine algorithms which fall into two categories: academic models or commercial models. We present a detailed comparison between academic and commercial face detection systems, specifically examining robustness to noise. We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness, specifically by having statistically significant decreased performance on older individuals and those who present their gender in a masculine manner. When we compare the size of these disparities to that of commercial models, we conclude that commercial models - despite their relatively larger development budgets and industry-level fairness commitments - are always as biased as or more biased than an academic model.
Face presentation attack detection (PAD) is critical for securing face recognition (FR) applications. FR performance has been shown to be unfair to certain demographic and non-demographic groups. However, the fairness of face PAD is an understudied problem, mainly due to the lack of appropriately annotated data. To address this issue, this work first presents a combined annotated dataset (CAAD-PAD) by merging several well-known PAD datasets and providing seven human-annotated attribute labels. This work then comprehensively analyzes the fairness of a set of face PAD methods and its relation to the nature of the training data and to the operational decision threshold assignment (ODTA), by studying four face PAD approaches on our CAAD-PAD. To simultaneously represent PAD fairness and absolute PAD performance, we introduce a novel metric, Accuracy Balanced Fairness (ABF). Extensive experiments on CAAD-PAD show that the training data and the ODTA induce unfairness across gender, occlusion, and other attribute groups. Based on these analyses, we propose a data augmentation method, FairSWAP, which aims to disrupt identity/semantic information and guide models to mine attack clues rather than attribute-related information. Detailed experimental results demonstrate that FairSWAP generally improves both PAD performance and the fairness of face PAD.
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
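A minimal sketch of how the fields a model card describes could be captured in code; the field names below paraphrase the paper's sections and are illustrative, not an official schema, and all values are placeholders.

```python
# Illustrative-only structure for recording model-card information.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    out_of_scope_uses: List[str]
    evaluation_factors: List[str]          # e.g., demographic, phenotypic, or intersectional groups
    metrics: Dict[str, Dict[str, float]]   # group -> {metric name: value}
    evaluation_data: str
    training_data: str
    ethical_considerations: str
    caveats_and_recommendations: str

card = ModelCard(
    model_details="Smiling-face detector, CNN, v1.0",               # placeholder
    intended_use="Research on face attribute classification",
    out_of_scope_uses=["Surveillance", "Emotion inference in hiring"],
    evaluation_factors=["age group", "perceived gender", "Fitzpatrick skin type"],
    metrics={"overall": {"accuracy": 0.91}},                         # placeholder numbers
    evaluation_data="Held-out benchmark with group labels",
    training_data="Public face-attribute dataset",
    ethical_considerations="Report disparities across groups",
    caveats_and_recommendations="Do not deploy outside evaluated conditions",
)
```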
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Stereotypes, bias, and discrimination have been extensively documented in machine learning (ML) methods such as computer vision (CV) [18, 80] and natural language processing (NLP) [6], or both, as in large image-and-caption models such as OpenAI CLIP [14]. In this paper, we evaluate how ML bias manifests in robots that act physically and autonomously within the world. We audit one of several recently published CLIP-powered robotic manipulation methods, presenting it with objects that have pictures of human faces on their surfaces, varying across race and gender, together with task instructions containing terms associated with common stereotypes. Our experiments definitively show robots acting out malignant stereotypes with respect to gender, race, and scientifically discredited physiognomy, at scale. Furthermore, the audited methods are less likely to recognize women and people of color. Our interdisciplinary sociotechnical analysis synthesizes across fields and applications including science and technology studies (STS), critical studies, history, safety, robotics, and AI. We find that robots powered by large datasets and Dissolution Models (sometimes called "foundation models," e.g., CLIP) that contain humans risk physically amplifying malignant stereotypes, and that merely correcting disparities will be insufficient given the complexity and scale of the problem. Instead, we recommend that robot learning methods that physically manifest stereotypes or other harmful outcomes be paused, reworked, or even wound down when appropriate, until outcomes can be proven safe, effective, and just. Finally, we discuss comprehensive policy changes as well as new interdisciplinary research on topics such as identity safety assessment frameworks and design justice to better understand and address these harms.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
Machine learning models often use spurious patterns such as "relying on the presence of a person to detect a tennis racket," which do not generalize. In this work, we present an end-to-end pipeline for identifying and mitigating spurious patterns in image classifiers. We start by finding patterns such as "the model's prediction for tennis racket changes 63% of the time if we hide the people." Then, if a pattern is spurious, we mitigate it via a novel form of data augmentation. We demonstrate that this approach identifies a diverse set of spurious patterns and that it mitigates them by producing a model that is both more accurate on a distribution where the spurious pattern is not helpful and more robust to distribution shift.
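The "63% of the time" statistic described above amounts to a flip rate under counterfactual masking. The sketch below shows one way such a statistic could be computed; `model`, `images`, `person_masks`, and `target_class` are hypothetical placeholders rather than the paper's pipeline.

```python
# Minimal sketch: how often does hiding a suspected spurious region
# (e.g., people) flip the model's prediction away from the target class?
import torch

def flip_rate(model, images, person_masks, target_class):
    model.eval()
    flips, total = 0, 0
    with torch.no_grad():
        for img, mask in zip(images, person_masks):  # mask: binary tensor broadcastable to img
            original = model(img.unsqueeze(0)).argmax(dim=1).item()
            if original != target_class:
                continue  # only count images the model labels as the target class
            masked = img * (1.0 - mask)               # hide the suspected spurious region
            flipped = model(masked.unsqueeze(0)).argmax(dim=1).item() != target_class
            flips += int(flipped)
            total += 1
    return flips / max(total, 1)
```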
Facial analysis systems have been deployed by large companies and critiqued by scholars and activists for the past decade. Many existing algorithmic audits examine the performance of these systems on later stage elements of facial analysis systems like facial recognition and age, emotion, or perceived gender prediction; however, a core component to these systems has been vastly understudied from a fairness perspective: face detection, sometimes called face localization. Since face detection is a pre-requisite step in facial analysis systems, the bias we observe in face detection will flow downstream to the other components like facial recognition and emotion prediction. Additionally, no prior work has focused on the robustness of these systems under various perturbations and corruptions, which leaves open the question of how various people are impacted by these phenomena. We present the first of its kind detailed benchmark of face detection systems, specifically examining the robustness to noise of commercial and academic models. We use both standard and recently released academic facial datasets to quantitatively analyze trends in face detection robustness. Across all the datasets and systems, we generally find that photos of individuals who are $\textit{masculine presenting}$, $\textit{older}$, of $\textit{darker skin type}$, or have $\textit{dim lighting}$ are more susceptible to errors than their counterparts in other identities.
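A minimal sketch of the kind of robustness audit described above: corrupt each image with synthetic noise or dimming and compare per-subgroup detection rates. Here `detector` (assumed to return a list of face boxes), `images`, and `subgroups` are placeholder inputs, not the benchmark's API, and the corruption functions are simplified stand-ins for standard corruption suites.

```python
# Minimal sketch of a per-subgroup robustness-to-noise measurement.
import numpy as np
from collections import defaultdict

def corrupt(image, kind="gaussian_noise", severity=0.1):
    """image: uint8 array in [0, 255]; returns a corrupted copy."""
    img = image.astype(np.float32) / 255.0
    if kind == "gaussian_noise":
        img = img + np.random.normal(0.0, severity, img.shape)
    elif kind == "dim_lighting":
        img = img * (1.0 - severity)
    return (np.clip(img, 0.0, 1.0) * 255.0).astype(np.uint8)

def detection_rates(detector, images, subgroups, kind, severity):
    """Fraction of images per subgroup in which at least one face is detected."""
    hits = defaultdict(list)
    for img, group in zip(images, subgroups):
        detected = len(detector(corrupt(img, kind, severity))) > 0
        hits[group].append(detected)
    return {g: float(np.mean(v)) for g, v in hits.items()}
```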