In recent years, word and sentence embeddings have become established as a form of text preprocessing for a wide range of NLP tasks and have substantially improved performance. Unfortunately, it has also been shown that these embeddings inherit various kinds of bias from their training data, thereby passing on biases present in society to NLP solutions. Many papers have attempted to quantify bias in word or sentence embeddings in order to evaluate debiasing methods or to compare different embedding models, typically with cosine-based metrics. However, several recent works have raised concerns about these metrics, showing that even when they report low bias, other tests still reveal biases. In fact, a wide variety of bias metrics and tests have been proposed in the literature, with no consensus on an optimal solution. Yet works that evaluate bias metrics on a theoretical level, or that elaborate the advantages and disadvantages of different bias metrics, are still lacking. In this work, we explore cosine-based bias metrics. We formalize a bias definition based on the ideas of previous works and derive conditions for bias metrics. Furthermore, we thoroughly investigate existing cosine-based metrics and their limitations to show why these metrics can fail to report bias in certain cases. Finally, we propose a new metric, SAME, to address the shortcomings of existing metrics and mathematically prove that it behaves accordingly.
translated by Google Translate
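The cosine-based metrics this abstract analyzes typically reduce to comparing a word's cosine similarity against two attribute sets. A minimal sketch of one such score, using hypothetical toy 2-d vectors (not the paper's actual metric or data):

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cosine_bias(w, A, B):
    """Difference of mean cosine similarity of w to attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

# Toy 2-d vectors (hypothetical, for illustration only).
A = [np.array([1.0, 0.1]), np.array([0.9, 0.0])]  # e.g. "male" attribute terms
B = [np.array([0.1, 1.0]), np.array([0.0, 0.9])]  # e.g. "female" attribute terms
w_neutral = np.array([0.7, 0.7])  # symmetric with respect to both sets
w_biased = np.array([1.0, 0.2])   # leans towards A

print(cosine_bias(w_neutral, A, B))  # 0.0 by symmetry
print(cosine_bias(w_biased, A, B))   # clearly positive: closer to A
```

A score near zero here does not guarantee absence of bias, which is exactly the kind of failure case the paper investigates.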
Rising awareness of bias patterns in natural language processing resources such as BERT has motivated many metrics to quantify "bias" and "fairness". However, comparing the results of different metrics, and the works that evaluate with them, remains difficult, if not outright impossible. We survey the existing literature on fairness metrics for pretrained language models and experimentally evaluate their compatibility, including both biases in language models and in their downstream tasks. We do this through a mixture of traditional literature survey, correlation analysis, and empirical evaluations. We find that many metrics are not compatible with each other and highly depend on (i) templates, (ii) attribute and target seeds, and (iii) the choice of embeddings. These results indicate that fairness or bias evaluation remains challenging for contextualized language models, if not at least highly subjective. To improve future comparisons and fairness evaluations, we recommend avoiding embedding-based metrics and focusing on fairness evaluations in downstream tasks.
Language can be used as a means of reproducing and enforcing harmful stereotypes and biases, and has been analyzed as such in numerous studies. In this paper, we present a survey of 304 papers on gender bias in natural language processing. We analyze definitions of gender and its categories within the social sciences and connect them to formal definitions of gender bias in NLP research. We survey lexica and datasets applied in research on gender bias, and then compare and contrast approaches to detecting and mitigating gender bias. We find that research on gender bias suffers from four core limitations. 1) Most research treats gender as a binary variable, neglecting its fluidity and continuity. 2) Most of the work has been conducted in monolingual setups for English or other high-resource languages. 3) Despite a myriad of papers on gender bias in NLP methods, we find that most newly developed algorithms do not test their models for bias and disregard the ethical considerations of their work. 4) Finally, methodologies developed in this line of research are fundamentally flawed, covering very limited definitions of gender bias and lacking evaluation baselines and pipelines. We suggest recommendations towards overcoming these limitations as a guide for future research.
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
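The geometric observations above, a gender direction plus projection removal, can be sketched as follows. The toy 3-d vectors are hypothetical; the actual method additionally estimates the direction via PCA over several definitional pairs and equalizes word pairs:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def neutralize(w, g):
    """Remove the component of w along the (unit) bias direction g."""
    return w - (w @ g) * g

# Hypothetical toy embeddings; real applications start from pretrained vectors.
he = np.array([0.8, 0.1, 0.3])
she = np.array([0.1, 0.8, 0.3])
receptionist = np.array([0.2, 0.6, 0.5])  # stereotypically female-leaning

g = unit(he - she)  # one-pair estimate of the gender direction
debiased = neutralize(receptionist, g)

print(receptionist @ g)  # nonzero: the word carries gender information
print(debiased @ g)      # ~0: component along g removed
```

After neutralization, a gender-neutral word is equidistant (in the projected sense) from both ends of the gender direction.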
Word embedding debiasing has largely been limited to individual and independent social categories. However, real-world corpora typically present multiple social categories that may correlate or intersect with each other. For instance, "hair weaves" is stereotypically associated with African American females, but neither with African Americans nor with females in general. Hence, this work studies biases associated with multiple social categories: joint biases induced by the union of different categories, and intersectional biases that do not overlap with the biases of the constituent categories. We first empirically observe that individual biases intersect non-trivially (i.e., over a one-dimensional subspace). Drawing on intersectionality theory from social science and linguistic theory, we then construct an intersectional subspace for multiple social categories using the nonlinear geometry of individual biases. Empirical evaluations confirm the efficacy of our approach. Data and implementation code can be downloaded at https://github.com/githublucheng/implementation-of-josec-coling-22.
Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language, the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model, namely the GloVe word embedding, trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere exposure to everyday language can account for the biases we replicate here.
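WEAT's effect size can be sketched directly from its definition: the per-word association s(w, A, B) is the difference of mean cosine similarities to the two attribute sets, and the effect size normalizes the gap between the two target sets by the pooled standard deviation. The toy 2-d vectors below are hypothetical stand-ins for real embeddings:

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def s(w, A, B):
    """Association of target word w with attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Normalized difference of mean associations of target sets X and Y."""
    sx = [s(x, A, B) for x in X]
    sy = [s(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 2-d vectors (hypothetical): X leans towards A, Y towards B.
A = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
B = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]
X = [np.array([0.8, 0.2]), np.array([0.9, 0.3])]
Y = [np.array([0.2, 0.8]), np.array([0.3, 0.9])]

print(weat_effect_size(X, Y, A, B))  # large positive effect size
```

In the paper, significance is additionally assessed with a permutation test over partitions of X and Y, which is omitted in this sketch.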
Bias elimination and recent probing studies attempt to remove specific information from embedding spaces. Here it is important to remove as much of the target information as possible, while preserving any other information present. INLP is a popular recent method which removes specific information through iterative nullspace projections. Multiple iterations, however, increase the risk that information other than the target is negatively affected. We introduce two methods that find a single targeted projection: Mean Projection (MP, more efficient) and Tukey Median Projection (TMP, with theoretical guarantees). Our comparison between MP and INLP shows that (1) one MP projection removes linear separability based on the target and (2) MP has less impact on the overall space. Further analysis shows that applying random projections after MP leads to the same overall effects on the embedding space as the multiple projections of INLP. Applying one targeted (MP) projection hence is methodologically cleaner than applying multiple (INLP) projections that introduce random effects.
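The core idea of a single targeted projection, removing the direction between the two class means, can be sketched on synthetic data (the exact MP and TMP formulations, and the comparison with INLP, are in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic embeddings: two classes separated along the first axis.
class_a = rng.normal(0.0, 0.1, size=(50, 3)) + np.array([1.0, 0.0, 0.0])
class_b = rng.normal(0.0, 0.1, size=(50, 3)) + np.array([-1.0, 0.0, 0.0])
X = np.vstack([class_a, class_b])

# Mean-Projection-style step: project out the unit vector between class means.
d = class_a.mean(axis=0) - class_b.mean(axis=0)
d /= np.linalg.norm(d)
X_mp = X - np.outer(X @ d, d)

# Every vector now has zero component along d, so the classes are no longer
# linearly separable in that direction, while the other axes are untouched.
print(np.abs(X_mp @ d).max())  # ~0
```

A single projection of this kind reduces the rank of the space by at most one, which is why it perturbs the embedding less than repeated nullspace projections.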
How do we design measures of social bias that we trust? While prior work has introduced several measures, no measure has gained widespread trust: instead, mounting evidence argues we should distrust these measures. In this work, we design bias measures that warrant trust based on the cross-disciplinary theory of measurement modeling. To combat the frequently fuzzy treatment of social bias in NLP, we explicitly define social bias, grounded in principles drawn from social science research. We operationalize our definition by proposing a general bias measurement framework DivDist, which we use to instantiate 5 concrete bias measures. To validate our measures, we propose a rigorous testing protocol with 8 testing criteria (e.g. predictive validity: do measures predict biases in US employment?). Through our testing, we demonstrate considerable evidence to trust our measures, showing they overcome conceptual, technical, and empirical deficiencies present in prior measures.
The detection and mitigation of harmful biases in modern language models are widely recognized as crucial open problems. In this paper, we take a step back and investigate how language models come to be biased in the first place. We use a relatively small language model: an LSTM architecture trained on an English Wikipedia corpus. With access to the data and the model parameters at every step of training, we can map in detail how the representation of gender develops, which patterns in the dataset drive this, and how the model's internal state relates to bias in a downstream task (semantic textual similarity). We find that the representation of gender is dynamic, and we identify distinct phases during training. Furthermore, we show that gender information is increasingly concentrated in the model's input embeddings, and that, as a consequence, debiasing these embeddings can effectively reduce downstream bias. Monitoring the training dynamics allows us to detect an asymmetry in how the female and male genders are represented in the input embeddings. This is important, as it may cause naive mitigation strategies to introduce new undesirable biases. We discuss the relevance of our findings for mitigation strategies more generally, and the prospects of generalizing our methods to larger language models, the Transformer architecture, other languages, and other undesirable biases.
We study the relationship between task-agnostic intrinsic and task-specific extrinsic social bias evaluation measures for masked language models (MLMs), and find that there exists only a weak correlation between these two types of evaluation measures. Moreover, we find that MLMs debiased using different methods are re-biased during fine-tuning on downstream tasks. We identify the social biases in both the training instances and their assigned labels as reasons for the discrepancy between intrinsic and extrinsic bias evaluation measures. Overall, our findings highlight the limitations of existing MLM bias evaluation measures and raise concerns about relying on these measures when deploying MLMs in downstream applications.
News articles both shape and reflect public opinion across the political spectrum. Analyzing them for social bias can thus provide valuable insights, such as prevailing stereotypes in society and the media, which are often adopted by NLP models trained on respective data. Recent work has relied on word embedding bias measures, such as WEAT. However, several representation issues of embeddings can harm the measures' accuracy, including low-resource settings and token frequency differences. In this work, we study what kind of embedding algorithm serves best to accurately measure types of social bias known to exist in US online news articles. To cover the whole spectrum of political bias in the US, we collect 500k articles and review psychology literature with respect to expected social bias. We then quantify social bias using WEAT along with embedding algorithms that account for the aforementioned issues. We compare how models trained with the algorithms on news articles represent the expected social bias. Our results suggest that the standard way to quantify bias does not align well with knowledge from psychology. While the proposed algorithms reduce the gap, they still do not fully match the literature.
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from engaging in unfair decision-making, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Several works have proven that finetuning is an applicable approach for debiasing contextualized word embeddings. Similarly, discrete prompts with semantic meanings have been shown to be effective in debiasing tasks. With unfixed mathematical representation at the token level, continuous prompts usually surpass discrete ones at providing a pre-trained language model (PLM) with additional task-specific information. Despite this, relatively few efforts have been made to debias PLMs by prompt tuning with continuous prompts compared to its discrete counterpart. Furthermore, for most debiasing methods that alter a PLM's original parameters, a major problem is the need to not only decrease the bias in the PLM but also to ensure that the PLM does not lose its representation ability. Finetuning methods typically have a hard time maintaining this balance, as they tend to violently remove meanings of attribute words. In this paper, we propose ADEPT, a method to debias PLMs using prompt tuning while maintaining the delicate balance between removing biases and ensuring representation ability. To achieve this, we propose a new training criterion inspired by manifold learning and equip it with an explicit debiasing term to optimize prompt tuning. In addition, we conduct several experiments with regard to the reliability, quality, and quantity of a previously proposed attribute training corpus in order to obtain a clearer prototype of a certain attribute, which indicates the attribute's position and relative distances to other words on the manifold. We evaluate ADEPT on several widely acknowledged debiasing benchmarks and downstream tasks, and find that it achieves competitive results while maintaining (and in some cases even improving) the PLM's representation ability. We further visualize words' correlation before and after debiasing a PLM, and give some possible explanations for the visible effects.
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is usually achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics to modern natural language processing. In this dissertation, I present several case studies to illustrate that theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool for measuring semantic distance, which is difficult to do using traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and data sources for probing our language models on specific aspects of language understanding. This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favor of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method to measure surprisal at intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to offer fresh perspectives for the interpretation of language models.
Decision-making systems based on AI and machine learning have been used throughout a wide range of real-world scenarios, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future where autonomous systems will drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society's most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are being made by humans, and remain (or are potentially amplified) when decisions are made using machines with little transparency, accountability, and fairness. In this paper, we introduce a framework of Causal Fairness Analysis with the intent of filling in this gap, i.e., understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms that generate the disparity in the first place, a challenge we call the Fundamental Problem of Causal Fairness Analysis (FPCFA). In order to solve the FPCFA, we study the problem of decomposing variations and empirical measures of fairness, attributing such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationship between different criteria found in the literature. Finally, we study which causal assumptions are minimally needed for performing causal fairness analysis and propose a Fairness Cookbook, which allows data scientists to assess the existence of disparate impact and disparate treatment.
As the deployment of natural language processing (NLP) in daily life expands, the social biases inherited by NLP models have become more severe and problematic. Previous studies have shown that word embeddings trained on human-generated corpora exhibit strong gender bias that can produce discriminative results in downstream tasks. Previous debiasing methods focus mainly on modeling the bias and only implicitly consider semantic information, while completely overlooking the complex underlying causal structure among the bias and semantic components. To address these issues, we propose a novel method that leverages a causal inference framework to effectively remove gender bias. The proposed method allows us to construct and analyze the complex causal mechanisms facilitating gender information flow, while retaining the oracle semantic information within word embeddings. Our comprehensive experiments show that the proposed method achieves state-of-the-art results on gender debiasing tasks. In addition, our method yields better performance in word similarity evaluation and various extrinsic downstream NLP tasks.
We apply natural language processing techniques to analyze 377,808 English song lyrics from the "Two Million Song Database" corpus, focusing on the measurement of sexist expressions and gender biases across five decades (1960-2010). Using a sexism classifier, we identify sexist lyrics at a larger scale than previous studies, which used manually annotated samples of popular songs. Furthermore, we reveal gender biases by measuring associations in word embeddings learned on song lyrics. We find that sexist content has increased across time, especially from male artists and in popular songs appearing in the Billboard charts. Songs also contain different language biases depending on the gender of the performer, with male solo artist songs containing more and stronger biases. This is the first large-scale analysis of this type, giving insights into language use in such an influential part of popular culture.
Representations in large language models contain multiple types of gender information. We focus on two such signals in English texts: factual gender information, which is a grammatical or semantic property, and gender bias, which is a correlation between a word and a specific gender. We can disentangle the model's embeddings and identify components that encode each type of information. Our aim is to reduce stereotypical bias in the representations while preserving the factual gender signal. Our filtering method shows that it is possible to reduce bias in gender-neutral profession names without significantly deteriorating capability. These findings can be applied to language generation to mitigate reliance on stereotypes while preserving gender agreement in core aspects.
End-to-end neural NLP architectures are notoriously difficult to understand, which has given rise to numerous efforts towards model explainability in recent years. A fundamental principle of model explanation is faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of faithfulness, as well as its significance for explainability. We then introduce recent advances in faithful explanation by grouping approaches into five categories: similarity-based methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, strengths, and weaknesses. Finally, we discuss all the above methods in terms of their common virtues and limitations, and reflect on future directions for faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the field, laying the foundations for further exploration. For users hoping to better understand their own models, this survey serves as an introductory manual that helps choose the most suitable explanation method.
Statistical regularities in language corpora encode well-known social biases into word embeddings. Here, we focus on gender to provide a comprehensive analysis of widely used static English word embeddings trained on internet corpora (GloVe 2014, fastText 2017). Using the Single-Category Word Embedding Association Test, we demonstrate the widespread prevalence of gender biases that also show differences in: (a) the frequencies of words associated with men versus women; (b) the parts of speech of words associated with each gender; (c) the semantic categories of words associated with each gender; and (d) the valence, arousal, and dominance of words associated with each gender. First, in terms of word frequency: we find that, of the 1,000 most frequent words in the vocabulary, 77% are more associated with men than women, providing direct evidence of a masculine default in the everyday language of the English-speaking world. Second, turning to parts of speech: the top male-associated words are typically verbs (e.g., fight, overpower), while the top female-associated words are typically adjectives and adverbs (e.g., giving, emotionally). Gender biases in embeddings thus also permeate parts of speech. Third, for semantic categories: we performed a bottom-up cluster analysis of the top 1,000 words associated with each gender. The top concepts associated with men include roles and domains of big tech, engineering, religion, sports, and violence; in contrast, the top concepts associated with women are less focused on roles and include female-specific slurs and sexual content, as well as appearance and kitchen terms. Fourth, using human ratings of valence, arousal, and dominance for a lexicon of ~20,000 words, we find that male-associated words are higher in arousal and dominance, while female-associated words are higher in valence.
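The single-category test used above scores one word at a time: its effect size is the gap in mean cosine similarity to the two attribute sets, normalized by the standard deviation over all attribute similarities. A toy sketch with hypothetical vectors (not the paper's data or exact implementation):

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def sc_weat(w, A, B):
    """Single-category effect size of word w's association with A over B."""
    sims_a = [cos(w, a) for a in A]
    sims_b = [cos(w, b) for b in B]
    return (np.mean(sims_a) - np.mean(sims_b)) / np.std(sims_a + sims_b, ddof=1)

# Toy 2-d vectors (hypothetical).
A = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]  # e.g. male attribute terms
B = [np.array([0.0, 1.0]), np.array([0.1, 0.9])]  # e.g. female attribute terms
w = np.array([0.9, 0.2])  # a word leaning towards A

print(sc_weat(w, A, B))  # positive: w is male-associated in this toy space
```

Applying such a per-word score over an entire frequency-ranked vocabulary is what yields statistics like the 77% male-association figure reported above.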