The aim of this work is to help reduce the already existing gender wage gap by providing unbiased job recommendations based on job seekers' resumes. We employ generative adversarial networks to remove gender bias from Word2Vec representations of 12M job vacancy texts and 900k resumes. Our results show that the representations created from recruitment texts contain algorithmic bias and that this bias has real consequences for recommender systems. Without controlling for bias, women in our data are recommended jobs with significantly lower salaries. With adversarially fair representations, this wage gap disappears, meaning that our debiased job recommendations reduce wage discrimination. We conclude that adversarial debiasing of word representations can increase the real-world fairness of systems and may therefore be part of the solution for creating fairness-aware recommender systems.
Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely defined groups related to sensitive attributes such as gender or race. However, these algorithms seldom account for within-group heterogeneity and biases that may disproportionately affect some members of a group. In this work, we characterize Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination that can be exhibited by machine learning models even when these systems achieve group fairness objectives. We study this issue through the lens of gender bias in occupation classification. We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to inferred gender norms. When predicting whether an individual belongs to a male-dominated occupation, this framework reveals that "fair" classifiers still favor biographies written in ways that align with inferred masculine norms. We compare SNoB across algorithmic fairness techniques and show that it is often a residual bias, and that post-processing approaches do not mitigate this type of bias at all.
Machine learning models are becoming pervasive in high-stakes applications. Despite their clear benefits in terms of performance, the models can exhibit bias against minority groups and cause fairness issues in decision-making processes, leading to severe negative impacts on individuals and society. In recent years, various techniques have been developed to mitigate bias in machine learning models. Among them, in-processing methods have drawn increasing attention from the community: fairness is taken into account directly during model design to induce intrinsically fair models and to fundamentally mitigate fairness issues in outputs and representations. In this survey, we review the current progress of in-processing bias mitigation techniques. Based on where fairness is achieved in the model, we categorize them into explicit and implicit methods, where the former directly incorporates fairness metrics in the training objective and the latter focuses on refining latent representation learning. Finally, we conclude the survey with a discussion of the research challenges in this community to motivate future exploration. To illustrate the distinction, a minimal sketch of an explicit approach is given below.
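The sketch is our illustration, not taken from any surveyed paper: an explicit in-processing objective that adds a demographic-parity-style penalty, weighted by `lam`, to the ordinary task loss. The toy model, the meaning of `group`, and the penalty weight are assumptions.

```python
import torch
import torch.nn as nn

def demographic_parity_penalty(scores: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    # Absolute gap between the mean predicted scores of the two groups
    # (each batch is assumed to contain members of both groups).
    return (scores[group == 1].mean() - scores[group == 0].mean()).abs()

def train_step(model: nn.Module, opt: torch.optim.Optimizer,
               x: torch.Tensor, y: torch.Tensor, group: torch.Tensor,
               lam: float = 1.0) -> float:
    opt.zero_grad()
    scores = torch.sigmoid(model(x)).squeeze(-1)
    task_loss = nn.functional.binary_cross_entropy(scores, y.float())
    # Explicit in-processing: the fairness term is part of the training objective.
    loss = task_loss + lam * demographic_parity_penalty(scores, group)
    loss.backward()
    opt.step()
    return loss.item()
```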
Language can be used as a means of reproducing and enforcing harmful stereotypes and biases and has been analyzed as such in numerous studies. In this paper, we present a survey of 304 papers on gender bias in natural language processing. We analyze definitions of gender and its categories within the social sciences and connect them to formal definitions of gender bias in NLP research. We survey lexica and datasets applied in research on gender bias and then compare and contrast approaches to detecting and mitigating gender bias. We find that research on gender bias suffers from four core limitations. 1) Most research treats gender as a binary variable, neglecting its fluidity and continuity. 2) Most of the work has been conducted in monolingual setups for English or other high-resource languages. 3) Despite a myriad of papers on gender bias in NLP methods, we find that most of the newly developed algorithms do not test their models for bias and disregard possible ethical considerations of their work. 4) Finally, methodologies developed in this line of research are fundamentally flawed, covering very limited definitions of gender bias and lacking evaluation baselines and pipelines. We suggest recommendations for overcoming these limitations as a guide for future research.
We apply natural language processing techniques to analyze 377,808 English song lyrics from the "Two Million Song Database" corpus, focusing on the measurement of sexist expressions and gender bias across five decades (1960-2010). Using a sexism classifier, we identify sexist lyrics at a larger scale than previous studies that relied on small samples of manually annotated popular songs. We also reveal gender biases by measuring associations in word embeddings learned on the song lyrics. We find that sexist content has increased over time, especially from male artists and in popular songs appearing on the Billboard charts. Songs also contain different linguistic biases depending on the gender of the performer, with songs by male solo artists containing more and stronger biases. This is the first large-scale analysis of this kind, providing insights into language use in such an influential part of popular culture.
The management of hyperglycemia in hospitalized patients has a significant impact on both morbidity and mortality. This study uses a large clinical database to predict the needs of diabetic patients requiring hospitalization, which could improve patient safety. However, these predictions may be susceptible to health disparities caused by social determinants such as race, age, and gender. These biases must be removed early in the data collection process, before they enter the system and are reinforced by model predictions, resulting in biased model decisions. In this paper, we propose a machine learning pipeline capable of making predictions as well as detecting and mitigating bias. The pipeline analyzes the clinical data, determines whether bias exists, removes it, and then makes predictions. We demonstrate the classification accuracy and fairness of the model predictions with experiments. The results show that when we mitigate bias early in the model, we obtain fairer predictions. We also find that as fairness improves, we sacrifice a certain degree of accuracy, which has also been observed in previous studies. We invite the research community to contribute to identifying additional factors that could be addressed through this pipeline.
Machine learning is a tool for building models that accurately represent input training data. When undesired biases concerning demographic groups are in the training data, well-trained models will reflect those biases. We present a framework for mitigating such biases by including a variable for the group of interest and simultaneously learning a predictor and an adversary. The input to the network X, here text or census data, produces a prediction Y, such as an analogy completion or income bracket, while the adversary tries to model a protected variable Z, here gender or zip code. The objective is to maximize the predictor's ability to predict Y while minimizing the adversary's ability to predict Z. Applied to analogy completion, this method results in accurate predictions that exhibit less evidence of stereotyping Z. When applied to a classification task using the UCI Adult (Census) Dataset, it results in a predictive model that does not lose much accuracy while achieving very close to equality of odds (Hardt et al., 2016). The method is flexible and applicable to multiple definitions of fairness as well as a wide range of gradient-based learning models, including both regression and classification tasks.
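A minimal sketch of the predictor/adversary training loop described above, assuming a binary task label Y and a binary protected attribute Z (both float tensors of shape (n, 1)). Network sizes, learning rates, and the weight `alpha` are illustrative placeholders rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # weight of the adversarial term

def step(x, y, z):
    # 1) Update the adversary: try to recover Z from the predictor's output.
    y_hat = predictor(x)
    adv_loss = bce(adversary(y_hat.detach()), z)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()
    # 2) Update the predictor: be accurate on Y while making Z hard to recover.
    y_hat = predictor(x)
    pred_loss = bce(y_hat, y) - alpha * bce(adversary(y_hat), z)
    opt_p.zero_grad(); pred_loss.backward(); opt_p.step()
    return pred_loss.item(), adv_loss.item()
```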
Over the last few years, word and sentence embeddings have established themselves as a text preprocessing step for all kinds of NLP tasks and have led to significant performance improvements. Unfortunately, it has also been shown that these embeddings inherit various kinds of biases from the training data, thereby passing on biases present in society to NLP solutions. Many papers have attempted to quantify bias in word or sentence embeddings in order to evaluate debiasing methods or to compare different embedding models, usually with cosine-based metrics. However, some recent works have raised concerns about these metrics, showing that even when such metrics report low bias, other tests still reveal biases. In fact, a wide variety of bias metrics and tests have been proposed in the literature without any consensus on an optimal solution. However, works that evaluate bias metrics on a theoretical level or that elaborate the advantages and disadvantages of different bias metrics are lacking. In this work, we explore cosine-based bias metrics. We formalize a bias definition based on the ideas of previous works and derive conditions that bias metrics should satisfy. Furthermore, we thoroughly investigate existing cosine-based metrics and their limitations to show why these metrics can fail to report bias in certain cases. Finally, we propose a new metric that addresses the shortcomings of existing metrics and whose intended behavior we prove mathematically.
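For illustration, a typical cosine-based bias score of the kind examined here measures how much closer a target word sits to one attribute word set than to the other. The sketch below, with assumed word lists and an `emb` lookup from words to vectors, is a generic example of this family, not the metric proposed in the paper.

```python
import numpy as np

def cos(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_bias(word: str, female: list[str], male: list[str],
                emb: dict[str, np.ndarray]) -> float:
    w = emb[word]
    s_f = np.mean([cos(w, emb[a]) for a in female])
    s_m = np.mean([cos(w, emb[a]) for a in male])
    # > 0: closer to the female terms, < 0: closer to the male terms.
    return s_f - s_m
```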
Bias elimination and recent probing studies attempt to remove specific information from embedding spaces. Here it is important to remove as much of the target information as possible, while preserving any other information present. INLP is a popular recent method which removes specific information through iterative nullspace projections. Multiple iterations, however, increase the risk that information other than the target is negatively affected. We introduce two methods that find a single targeted projection: Mean Projection (MP, more efficient) and Tukey Median Projection (TMP, with theoretical guarantees). Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact on the overall space. Further analysis shows that applying random projections after MP leads to the same overall effects on the embedding space as the multiple projections of INLP. Applying one targeted (MP) projection hence is methodologically cleaner than applying multiple (INLP) projections that introduce random effects.
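The core idea of a single mean-difference projection can be sketched as follows: compute the direction between the two class means and remove its component from every embedding. This is an illustrative reading of MP under those assumptions, not the authors' released implementation.

```python
import numpy as np

def mean_projection(X: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """X: (n, d) embeddings; labels: binary target attribute (e.g. gender)."""
    direction = X[labels == 1].mean(axis=0) - X[labels == 0].mean(axis=0)
    direction /= np.linalg.norm(direction)
    # Project every vector onto the hyperplane orthogonal to `direction`.
    return X - np.outer(X @ direction, direction)
```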
The growing awareness of bias patterns in natural language processing resources such as BERT has motivated many metrics for quantifying "bias" and "fairness". However, comparing the results of different metrics, and of the works that evaluate with these metrics, remains difficult, if not outright impossible. We survey the existing literature on fairness metrics for pretrained language models and experimentally evaluate their compatibility, covering both bias in the language models themselves and bias in their downstream tasks. We do so through a mixture of a traditional literature survey, correlation analysis, and empirical evaluations. We find that many metrics are not compatible and depend strongly on (i) the templates, (ii) the attribute and target seeds, and (iii) the choice of embeddings. These results indicate that fairness or bias evaluation remains challenging for contextualized language models, if not at least highly subjective. To improve future comparisons and fairness evaluations, we recommend avoiding embedding-based metrics and focusing on fairness evaluations in downstream tasks.
Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language, the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model, namely the GloVe word embedding, trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere exposure to everyday language can account for the biases we replicate here.
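For reference, the WEAT effect size introduced here is the standardized difference between how strongly two target word sets X and Y associate, via cosine similarity, with two attribute word sets A and B. A compact sketch, assuming an `emb` lookup from words to vectors:

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, emb):
    """s(w, A, B): mean cosine similarity with A minus mean cosine similarity with B."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    s_x = [assoc(x, A, B, emb) for x in X]
    s_y = [assoc(y, A, B, emb) for y in Y]
    pooled_std = np.std(s_x + s_y, ddof=1)
    return (np.mean(s_x) - np.mean(s_y)) / pooled_std
```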
Fairness is an important requirement for ensuring that machine learning (ML) predictive systems do not discriminate against specific individuals or entire subpopulations, in particular minorities. Given the inherent subjectivity of the concept of fairness, several fairness notions have been introduced in the literature. This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios. In addition, unlike other surveys in the literature, it addresses the question: which fairness notion is most suitable for a given real-world scenario, and why? Our attempt to answer this question consists in (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements together to recommend the most suitable fairness notion in each specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalog of ML fairness notions.
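To make the contrast between fairness notions concrete, the sketch below (our example, not from the survey) computes two of the most common group notions from binary predictions: statistical parity difference and equal opportunity difference. Variable names are illustrative.

```python
import numpy as np

def statistical_parity_diff(y_pred: np.ndarray, group: np.ndarray) -> float:
    """P(Y_hat = 1 | group = 1) - P(Y_hat = 1 | group = 0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_diff(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in true positive rate between the two groups."""
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    return tpr1 - tpr0
```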
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
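A minimal sketch of the "neutralize" step of this methodology: project gender-neutral word vectors onto the subspace orthogonal to a gender direction, here approximated by the normalized difference of the vectors for "she" and "he" (the paper derives the direction from several definitional pairs and also equalizes word pairs, both omitted here).

```python
import numpy as np

def neutralize(word: str, emb: dict[str, np.ndarray]) -> np.ndarray:
    # Approximate gender direction from a single definitional pair.
    g = emb["she"] - emb["he"]
    g /= np.linalg.norm(g)
    v = emb[word]
    # Remove the component of the word vector along the gender direction.
    return v - (v @ g) * g
```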
Representations in large language models contain multiple types of gender information. We focus on two such signals in English texts: factual gender information, which is a grammatical or semantic property, and gender bias, which is a correlation between a word and a specific gender. We show that the model's embeddings can be disentangled and that components encoding both types of information can be identified. Our aim is to reduce stereotypical bias in the representations while preserving the factual gender signal. Our filtering method shows that it is possible to reduce bias for gender-neutral profession names without significantly degrading the model's capabilities. These findings can be applied to language generation to mitigate reliance on stereotypes while preserving gender agreement in core aspects.
Despite growing concerns about gender bias in the NLP models used in algorithmic hiring, there is little empirical work examining the extent and nature of gendered language in resumes. Using a corpus of 709k resumes from IT firms, we train a series of models to classify the gender of the applicant, thereby measuring how much gendered information is encoded in resumes. We also investigate whether this information can be obfuscated, for example by removing gender identifiers, hobbies, or gender subspaces in embedding models. We find that a significant amount of gendered information remains in resumes even after obfuscation. A simple Tf-Idf model can learn to classify gender with AUROC = 0.75, and more sophisticated transformer-based models achieve AUROC = 0.8. We further find that the gender-predictive signal has low correlation with the gender direction of embeddings, implying that what is predictive of gender goes well beyond what is "gendered" in the masculine/feminine sense. We discuss the algorithmic bias and fairness implications of these findings in the hiring context.
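A baseline of the kind described above can be sketched with off-the-shelf tools: Tf-Idf features from resume text feeding a logistic regression, evaluated with AUROC. The data loader `load_resumes()` and the hyperparameters are assumptions for illustration, not the authors' setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical loader returning a list of resume texts and binary gender labels.
texts, genders = load_resumes()

X_train, X_test, y_train, y_test = train_test_split(
    texts, genders, test_size=0.2, random_state=0)

vec = TfidfVectorizer(min_df=5, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)

scores = clf.predict_proba(vec.transform(X_test))[:, 1]
print("AUROC:", roc_auc_score(y_test, scores))
```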
As recommender systems are increasingly used by people to seek information and make decisions, their influence on individuals and society keeps growing, which makes it crucial to address potential unfairness issues in recommendations. Just as users have personalized preferences over items, users' demands for fairness are also personalized in many scenarios. It is therefore important to provide personalized fair recommendations that satisfy users' individual fairness demands. Moreover, previous work on fair recommendation has mainly focused on association-based fairness. However, it is important to move from associative fairness notions to causal fairness notions in order to assess fairness more appropriately in recommender systems. Based on these considerations, this paper focuses on achieving personalized counterfactual fairness for users in recommender systems. To this end, we introduce a framework that achieves counterfactually fair recommendations through adversarial learning by generating feature-independent user embeddings for recommendation. The framework allows recommender systems to achieve personalized fairness for users while also covering non-personalized situations. Experiments with both shallow and deep recommendation algorithms on two real-world datasets show that our method can generate fairer recommendations for users while maintaining desirable recommendation performance.
Detecting and mitigating harmful biases in modern language models is widely recognized as a crucial open problem. In this paper, we take a step back and investigate how language models become biased in the first place. We use a relatively small language model, an LSTM architecture trained on an English Wikipedia corpus. With access to the data and the model parameters as they change at every step of training, we can map in detail how the representation of gender develops, which patterns in the dataset drive this development, and how the model's internal state relates to bias in a downstream task (semantic textual similarity). We find that the representation of gender is dynamic and identify distinct phases during training. Furthermore, we show that gender information is represented increasingly locally in the model's input embeddings and that, as a consequence, debiasing these embeddings can effectively reduce downstream bias. Monitoring the training dynamics allows us to detect an asymmetry in how the female and male gender are represented in the input embeddings. This is important, as it may cause naive mitigation strategies to introduce new undesirable biases. We discuss the relevance of our findings for mitigation strategies more generally, as well as the prospects of generalizing our methods to larger language models, the Transformer architecture, other languages, and other undesirable biases.
At the core of insurance business lies classification between risky and non-risky insureds, actuarial fairness meaning that risky insureds should contribute more and pay a higher premium than non-risky or less-risky ones. Actuaries, therefore, use econometric or machine learning techniques to classify, but the distinction between a fair actuarial classification and "discrimination" is subtle. For this reason, there is a growing interest in fairness and discrimination in the actuarial community (Lindholm, Richman, Tsanakas, and Wuthrich, 2022). Presumably, non-sensitive characteristics can serve as substitutes or proxies for protected attributes. For example, the color and model of a car, combined with the driver's occupation, may lead to an undesirable gender bias in the prediction of car insurance prices. Surprisingly, we will show that (1) debiasing the predictor alone may be insufficient to maintain adequate accuracy. Indeed, the traditional pricing model is currently built in a two-stage structure that considers many potentially biased components such as car or geographic risks. We will show that this traditional structure has significant limitations in achieving fairness. For this reason, we have developed a novel pricing model approach. Recently, some approaches (Blier-Wong, Cossette, Lamontagne, and Marceau, 2021; Wuthrich and Merz, 2021) have shown the value of autoencoders in pricing. In this paper, we will show that (2) this can be generalized to multiple pricing factors (geographic, car type), and (3) it is perfectly adapted to a fairness context (since it allows debiasing the set of pricing components): we extend this main idea to a general framework in which a single whole pricing model is trained by generating the geographic and car pricing components needed to predict the pure premium while mitigating the unwanted bias according to the desired metric.
In this paper, as a case study, we present a systematic study of gender bias in machine translation with Google Translate. We translate sentences containing occupation names from Hungarian, a language with gender-neutral pronouns, into English. Our aim is to present a fair measure of bias by comparing the translations with an optimal non-biased translator. When assessing bias, we use the following reference points: (1) the distribution of men and women among occupations in the source and target language countries, and (2) the results of a Hungarian survey that examined whether certain jobs are generally perceived as feminine or masculine. We also study how expanding the sentences with adjectives referring to the occupations affects the gender of the translated pronouns. We find bias against both genders, but biased results against women are much more frequent. The translations are closer to our perceptions of occupations than to the objective occupational statistics. Finally, occupations have a greater effect on the translation than adjectives.
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.