Educational technologies nowadays increasingly use data and machine learning (ML) models, offering students, instructors, and administrators support and insights toward the best policies. However, ML models are well known to be susceptible to bias, which raises concerns about fairness, bias, and discrimination when such automated ML algorithms are used in education, as well as about their unintended and unforeseen negative consequences. Bias in the decision-making process stems both from the datasets used to train the ML models and from the model architectures. This paper presents a preliminary investigation of the fairness of transformer neural networks on two tabular datasets: the Law School dataset and a student academics dataset. In contrast to classical ML models, transformer-based models transform these tabular datasets into richer representations when solving classification tasks. We use different fairness metrics for evaluation and examine the trade-off between fairness and accuracy of the transformer-based models on the tabular datasets. Empirically, our approach shows impressive results for the trade-off between fairness and performance on the Law School dataset.
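The transformer pipeline itself is out of scope here, but the evaluation step the abstract describes (accuracy against group fairness metrics computed from a model's predictions) can be sketched. A minimal sketch, assuming binary labels/predictions and a binary protected-group indicator; the example inputs are placeholders, not the paper's datasets.

```python
# Minimal sketch: accuracy vs. two common group fairness metrics, computed from any
# tabular classifier's predictions (transformer-based or classical).
import numpy as np

def fairness_report(y_true, y_pred, group):
    """group marks membership in the protected group (1) vs. the rest (0)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    acc = (y_true == y_pred).mean()
    # Statistical parity difference: gap in positive prediction rates.
    spd = y_pred[group == 1].mean() - y_pred[group == 0].mean()
    # Equal opportunity difference: gap in true positive rates.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    eod = tpr(1) - tpr(0)
    return {"accuracy": acc, "stat_parity_diff": spd, "equal_opp_diff": eod}

# Hypothetical usage with toy predictions:
print(fairness_report(y_true=[1, 0, 1, 1, 0, 0],
                      y_pred=[1, 0, 0, 1, 1, 0],
                      group=[1, 1, 1, 0, 0, 0]))
```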
Predicting students' academic performance is one of the key tasks of educational data mining (EDM). Traditionally, the high predictive quality of such models has been deemed essential. More recently, fairness and discrimination with respect to protected attributes (e.g., gender or race) have received growing attention. Although several fairness-aware learning approaches exist in EDM, a comparative evaluation of these measures is still missing. In this paper, we evaluate different group fairness measures for the student performance prediction problem on various educational datasets and fairness-aware learning models. Our study shows that the choice of the fairness measure is important, as is the choice of the grade threshold.
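The threshold sensitivity noted above is easy to make concrete: binarizing a predicted grade at different cut-offs changes the measured group disparity. A small sketch with synthetic grades and a placeholder protected attribute, assuming a 0-20 grading scale; it is only an illustration of the effect, not the paper's experimental setup.

```python
# Sketch: how the choice of grade threshold changes a group fairness measure.
import numpy as np

rng = np.random.default_rng(0)
true_grade = rng.uniform(0, 20, size=1000)               # grades on a 0-20 scale
pred_grade = true_grade + rng.normal(0, 2, size=1000)    # some regressor's output
gender = rng.integers(0, 2, size=1000)                   # protected attribute (0/1)

for threshold in (8, 10, 12, 14):                        # "pass" cut-offs to compare
    y_pred = (pred_grade >= threshold).astype(int)
    # Statistical parity difference between the two gender groups.
    spd = y_pred[gender == 1].mean() - y_pred[gender == 0].mean()
    print(f"threshold={threshold:>2}  positive_rate_gap={spd:+.3f}")
```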
Colleges and universities use predictive analytics in a variety of ways to increase student success rates. Despite the potential for predictive analytics, two major barriers exist to their adoption in higher education: (a) the lack of democratization in deployment, and (b) the potential to exacerbate inequalities. Education researchers and policymakers encounter numerous challenges in deploying predictive modeling in practice. These challenges present in different steps of modeling including data preparation, model development, and evaluation. Nevertheless, each of these steps can introduce additional bias to the system if not appropriately performed. Most large-scale and nationally representative education data sets suffer from a significant number of incomplete responses from the research participants. While many education-related studies addressed the challenges of missing data, little is known about the impact of handling missing values on the fairness of predictive outcomes in practice. In this paper, we set out to first assess the disparities in predictive modeling outcomes for college-student success, then investigate the impact of imputation techniques on the model performance and fairness using a commonly used set of metrics. We conduct a prospective evaluation to provide a less biased estimation of future performance and fairness than an evaluation of historical data. Our comprehensive analysis of a real large-scale education dataset reveals key insights on modeling disparities and how imputation techniques impact the fairness of the student-success predictive outcome under different testing scenarios. Our results indicate that imputation introduces bias if the testing set follows the historical distribution. However, if the injustice in society is addressed and consequently the upcoming batch of observations is equalized, the model would be less biased.
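The interaction between missing-value handling and fairness that this abstract studies can be illustrated with a toy comparison. A hedged sketch, using synthetic data and only two strategies (dropping incomplete rows vs. mean imputation) with a single fairness gap; the paper's real dataset, imputation techniques, and metric suite are far broader.

```python
# Sketch: comparing missing-value strategies on both accuracy and a group fairness gap.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "gpa": rng.normal(3.0, 0.5, n),
    "credits": rng.normal(30, 10, n),
    "group": rng.integers(0, 2, n),                    # protected attribute
})
df["success"] = (df.gpa + 0.01 * df.credits + rng.normal(0, 0.3, n) > 3.2).astype(int)
df.loc[rng.random(n) < 0.2, "credits"] = np.nan        # inject missingness

def evaluate(frame, impute):
    X = frame[["gpa", "credits"]].to_numpy()
    if impute:
        X = SimpleImputer(strategy="mean").fit_transform(X)
        keep = np.ones(len(frame), dtype=bool)
    else:
        keep = ~np.isnan(X).any(axis=1)                # drop incomplete rows
        X = X[keep]
    y, g = frame["success"].to_numpy()[keep], frame["group"].to_numpy()[keep]
    Xtr, Xte, ytr, yte, _, gte = train_test_split(X, y, g, test_size=0.3, random_state=0)
    pred = LogisticRegression().fit(Xtr, ytr).predict(Xte)
    gap = pred[gte == 1].mean() - pred[gte == 0].mean()
    return (pred == yte).mean(), gap

for impute in (False, True):
    acc, gap = evaluate(df, impute)
    print(f"impute={impute}  accuracy={acc:.3f}  positive_rate_gap={gap:+.3f}")
```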
As decision-making increasingly relies on machine learning and (big) data, the fairness of data-driven AI systems is receiving growing attention from both research and industry. A variety of fairness-aware machine learning solutions have been proposed, introducing fairness-related interventions in the data, the learning algorithms, and/or the model outputs. However, a vital part of proposing new approaches is validating them empirically on benchmark datasets that represent realistic and diverse settings. Therefore, in this paper we present an overview of real-world datasets used for fairness-aware machine learning. We focus on tabular data as the most common data representation in fairness-aware machine learning. We begin our analysis by identifying the relationships between the different attributes, in particular with respect to the protected attributes and the class attribute, using Bayesian networks. To gain a deeper understanding of the bias and fairness in the datasets, we investigate the interesting relationships using exploratory analysis.
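One of the simplest exploratory checks subsumed by the analysis described above is the class distribution conditioned on a protected attribute. A minimal sketch with placeholder column names; the Bayesian-network part of the paper's analysis is not reproduced here.

```python
# Sketch of a basic exploratory check: P(class | protected attribute). Large
# differences hint at dataset bias that a fairness-aware learner must contend with.
import pandas as pd

df = pd.DataFrame({
    "sex":    ["male", "male", "female", "female", "female", "male"],
    "income": [">50K", "<=50K", "<=50K", "<=50K", ">50K", ">50K"],
})

print(pd.crosstab(df["sex"], df["income"], normalize="index"))
```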
A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations, which spreads through collected data. When not properly accounted for, machine learning (ML) models learned from data can reinforce the structural biases already present in society. Here, we present a systematic study of bias in ML models designed to predict depression in four different case studies covering different countries and populations. We find that standard ML approaches show regularly biased behaviors. However, we show that standard mitigation techniques, and our own post-hoc method, can be effective in reducing the level of unfair bias. We provide practical recommendations to develop ML models for depression risk prediction with increased fairness and trust in the real world. No single best ML model for depression prediction provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and transparent reporting about the impact of debiasing interventions.
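The abstract does not spell out the authors' post-hoc method, so the sketch below is only a generic illustration of post-hoc debiasing: choosing per-group decision thresholds so positive prediction rates are roughly equalized. The scores, groups, and target rate are all hypothetical.

```python
# Generic post-hoc debiasing sketch: per-group thresholds equalizing positive rates.
# This illustrates the idea of post-hoc correction; it is not necessarily the
# authors' specific method.
import numpy as np

def group_thresholds(scores, group, target_rate):
    """Per-group thresholds giving each group roughly the same positive rate."""
    thresholds = {}
    for g in np.unique(group):
        s = np.sort(scores[group == g])
        k = int(np.floor((1 - target_rate) * len(s)))
        thresholds[g] = s[min(k, len(s) - 1)]
    return thresholds

rng = np.random.default_rng(2)
scores = rng.random(1000)                      # model risk scores in [0, 1]
group = rng.integers(0, 2, 1000)               # protected attribute
thr = group_thresholds(scores, group, target_rate=0.3)
y_hat = np.array([scores[i] >= thr[group[i]] for i in range(len(scores))], dtype=int)
for g in (0, 1):
    print(f"group {g}: positive rate = {y_hat[group == g].mean():.3f}")
```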
The management of hyperglycemia in hospitalized patients has a significant impact on both morbidity and mortality. This study used a large clinical database to predict the needs of diabetic patients who require hospitalization, which could improve patient safety. However, these predictions may be vulnerable to health disparities caused by social determinants such as race, age, and gender. Such biases must be removed early in the data collection process, before they enter the system and are reinforced by model predictions, resulting in biased model decisions. In this paper, we propose a machine learning pipeline that can make predictions as well as detect and mitigate bias. The pipeline analyzes the clinical data, determines whether bias is present, removes it, and then makes predictions. We demonstrate, through experiments, both the classification accuracy and the fairness of the model predictions. The results show that when we mitigate bias early in the pipeline, we obtain fairer predictions. We also find that as we gain in fairness, we sacrifice a certain degree of accuracy, which has also been confirmed in previous studies. We invite the research community to contribute to identifying additional factors that can be addressed through this pipeline.
What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses. We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
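The proposed test asks how well the protected class can be predicted from the remaining attributes. A minimal sketch of that idea on synthetic data: a low balanced error rate (BER) of such a predictor means dropping the protected column does not remove its information, and the commonly used 80%-rule ratio gives the associated disparate impact flag. The decision rule and data are placeholders.

```python
# Sketch: test for disparate impact by predicting the protected class from the other
# attributes, plus the 80%-rule disparate impact ratio for a given decision rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
protected = rng.integers(0, 2, n)
# Remaining attributes redundantly encode the protected attribute.
X = np.column_stack([
    rng.normal(protected * 1.5, 1.0, n),
    rng.normal(0, 1, n),
])

Xtr, Xte, ptr, pte = train_test_split(X, protected, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(Xtr, ptr)
ber = 1 - balanced_accuracy_score(pte, clf.predict(Xte))  # balanced error rate
print(f"BER of predicting the protected class from other attributes: {ber:.3f}")
# A BER well below 0.5 signals that downstream decisions can still have disparate
# impact even without access to the protected column.

y_hat = (X[:, 0] > 0.8).astype(int)                # some (hypothetical) decision rule
rates = [y_hat[protected == g].mean() for g in (0, 1)]
print(f"disparate impact ratio: {min(rates) / max(rates):.3f}  (flag if < 0.8)")
```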
A recent explosion of research focuses on developing methods and tools for building fair predictive models. However, most of this work relies on the assumption that the training and testing data are representative of the target population on which the model will be deployed. However, real-world training data often suffer from selection bias and are not representative of the target population for many reasons, including the cost and feasibility of collecting and labeling data, historical discrimination, and individual biases. In this paper, we introduce a new framework for certifying and ensuring the fairness of predictive models trained on biased data. We take inspiration from query answering over incomplete and inconsistent databases to present and formalize the problem of consistent range approximation (CRA) of answers to queries about aggregate information for the target population. We aim to leverage background knowledge about the data collection process, biased data, and limited or no auxiliary data sources to compute a range of answers for aggregate queries over the target population that are consistent with available information. We then develop methods that use CRA of such aggregate queries to build predictive models that are certifiably fair on the target population even when no external information about that population is available during training. We evaluate our methods on real data and demonstrate improvements over state of the art. Significantly, we show that enforcing fairness using our methods can lead to predictive models that are not only fair, but more accurate on the target population.
Fairness is an essential factor for machine learning systems deployed in high-stake applications. Among all fairness notions, individual fairness, following a consensus that `similar individuals should be treated similarly,' is a vital notion to guarantee fair treatment for individual cases. Previous methods typically characterize individual fairness as a prediction-invariant problem when perturbing sensitive attributes, and solve it by adopting the Distributionally Robust Optimization (DRO) paradigm. However, adversarial perturbations along a direction covering sensitive information do not consider the inherent feature correlations or innate data constraints, and thus mislead the model to optimize at off-manifold and unrealistic samples. In light of this, we propose a method to learn and generate antidote data that approximately follows the data distribution to remedy individual unfairness. These on-manifold antidote data can be used through a generic optimization procedure with original training data, resulting in a pure pre-processing approach to individual unfairness, or can also fit well with the in-processing DRO paradigm. Through extensive experiments, we demonstrate our antidote data resists individual unfairness at a minimal or zero cost to the model's predictive utility.
Machine learning (ML) models are increasingly used in high-stakes applications that can greatly affect people's lives. Despite their use, these models have the potential to discriminate against certain social groups on the basis of race, gender, or ethnicity. Many prior works have tried to mitigate this "model discrimination" by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model outputs (post-processing). However, these works have not yet been extended to the realm of multiple sensitive parameters and sensitive options (MSPSO), where sensitive parameters are attributes that can be discriminated against (e.g., race) and sensitive options are the options within a sensitive parameter (e.g., black or white), which limits their real-world usability. Prior work on fairness has also suffered from the accuracy-fairness trade-off, which prevents both accuracy and fairness from being high. Moreover, the previous literature has failed to provide holistic fairness metrics that work with MSPSO. In this paper, we address these problems by (a) creating a novel bias mitigation technique called DualFair and (b) developing a new fairness metric (i.e., AWI) that can handle MSPSO. Finally, we test our novel mitigation method on a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains better fairness and accuracy metrics than current state-of-the-art models.
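The abstract does not define the AWI metric, so the sketch below is only a generic illustration of the bookkeeping any MSPSO-aware measure has to do: for every sensitive parameter (e.g., race, sex) and every option within it, compare positive-outcome rates. The data and the aggregation rule (max gap per attribute) are assumptions, not AWI.

```python
# Generic multi-attribute disparity sketch (NOT the AWI metric from the paper).
import pandas as pd

df = pd.DataFrame({
    "race":     ["white", "black", "asian", "black", "white", "asian", "white", "black"],
    "sex":      ["m", "f", "f", "m", "f", "m", "m", "f"],
    "approved": [1, 0, 1, 0, 1, 1, 1, 0],
})

def per_attribute_gaps(frame, sensitive_columns, outcome):
    """Max gap in positive-outcome rate across the options of each sensitive column."""
    gaps = {}
    for col in sensitive_columns:
        rates = frame.groupby(col)[outcome].mean()
        gaps[col] = float(rates.max() - rates.min())
    return gaps

print(per_attribute_gaps(df, ["race", "sex"], "approved"))
```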
Algorithmic decision making systems are ubiquitous across a wide variety of online as well as offline services. These systems rely on complex learning methods and vast amounts of data to optimize the service functionality, satisfaction of the end user and profitability. However, there is a growing concern that these automated decisions can lead, even in the absence of intent, to a lack of fairness, i.e., their outcomes can disproportionately hurt (or, benefit) particular groups of people sharing one or more sensitive attributes (e.g., race, sex). In this paper, we introduce a flexible mechanism to design fair classifiers by leveraging a novel intuitive measure of decision boundary (un)fairness. We instantiate this mechanism with two well-known classifiers, logistic regression and support vector machines, and show on real-world data that our mechanism allows for a fine-grained control on the degree of fairness, often at a small cost in terms of accuracy. A Python implementation of our mechanism is available at fate-computing.mpi-sws.org
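For linear models such as logistic regression, one common instantiation of a "decision boundary (un)fairness" measure is the covariance between the sensitive attribute and the signed distance to the decision boundary, which can then be bounded during training. The sketch below is a simplified reading of that mechanism on synthetic data, solved with SciPy's SLSQP; it is not the authors' released implementation (see fate-computing.mpi-sws.org), and the fairness budget c is a placeholder.

```python
# Sketch: logistic regression trained under a bound on the covariance between the
# sensitive attribute and the signed distance to the decision boundary.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 1000
z = rng.integers(0, 2, n)                              # sensitive attribute
X = np.column_stack([rng.normal(z, 1.0, n), rng.normal(0, 1, n), np.ones(n)])
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0.5).astype(int)

def log_loss(w):
    m = X @ w
    return np.mean(np.log1p(np.exp(-m * (2 * y - 1))))  # logistic loss

def boundary_cov(w):
    return np.mean((z - z.mean()) * (X @ w))             # cov(z, signed distance)

c = 0.05                                                  # fairness budget (placeholder)
res = minimize(log_loss, x0=np.zeros(X.shape[1]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda w: c - boundary_cov(w)},
                            {"type": "ineq", "fun": lambda w: c + boundary_cov(w)}])
y_hat = (X @ res.x > 0).astype(int)
print("positive-rate gap:", y_hat[z == 1].mean() - y_hat[z == 0].mean())
```

Tightening or loosening c is what gives the fine-grained control over the degree of fairness that the abstract describes, typically at some cost in accuracy.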
Discrimination has been shown in many machine learning applications, which calls for sufficient fairness testing before their deployment in ethics-relevant domains such as face recognition, medical diagnosis, and criminal sentencing. Existing fairness testing approaches are mostly designed for identifying individual discrimination, i.e., discrimination against individuals. However, testing for group discrimination, another widely concerning and mostly hidden type of discrimination, has been studied much less. To address this gap, in this work we propose TestSGD, an interpretable testing approach that systematically identifies and measures hidden (which we call "subtle") group discrimination of a neural network, characterized by conditions over combinations of the sensitive features. Specifically, given a neural network, TestSGD first automatically generates an interpretable rule set that categorizes the input space into two groups, exposing the model's group discrimination. Besides, TestSGD also provides an estimated group fairness score, based on sampling the input space, to measure the degree of the identified subtle group discrimination, which is guaranteed to be accurate up to an error bound. We evaluate TestSGD on multiple neural network models trained on popular datasets, including both structured data and text data. The experimental results show that TestSGD is effective and efficient at identifying and measuring such subtle group discrimination that has never been revealed before. Moreover, we show that the testing results of TestSGD guide the generation of new samples to mitigate such discrimination through retraining, with a negligible accuracy drop.
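Only the sampling step of the approach is easy to sketch: estimate the group fairness score (the difference in a model's positive prediction rates between two rule-defined groups) by Monte Carlo sampling, with a Hoeffding-style error bound. The model, the rule, and the choice of bound below are placeholders; the rule-set generation that TestSGD performs is not reproduced.

```python
# Sketch: sampling-based estimate of a group fairness score with a rough error bound.
import numpy as np

rng = np.random.default_rng(5)

def model(X):                       # stand-in for a trained neural network
    return (X[:, 0] + 0.3 * X[:, 1] > 0.6).astype(int)

def rule(X):                        # placeholder interpretable rule over the input space
    return X[:, 1] > 0.5

n = 20000
X = rng.random((n, 2))              # sample the input space
pred, in_group = model(X), rule(X)
score = pred[in_group].mean() - pred[~in_group].mean()

delta = 0.01                        # failure probability for the bound
m = min(in_group.sum(), (~in_group).sum())
error = np.sqrt(np.log(2 / delta) / (2 * m))   # Hoeffding-style bound per group mean
print(f"estimated group fairness score: {score:+.4f} (within about ±{2 * error:.4f})")
```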
Fairness is an important requirement for ensuring that machine learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, particularly minorities. Given the inherent subjectivity of the notion of fairness, several fairness notions have been introduced in the literature. This paper is a survey that illustrates the subtleties between the fairness notions through a large number of examples and scenarios. In addition, unlike other surveys in the literature, it addresses the question: which notion of fairness is most suited to a given real-world scenario, and why? Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalog of ML fairness notions.
We consider the problem of producing fair probabilistic classifiers for multi-class classification tasks. We formulate this problem in terms of "projecting" a pre-trained (and potentially unfair) classifier onto the set of models that satisfy target group-fairness requirements. The new, projected model is given by post-processing the outputs of the pre-trained classifier with a multiplicative factor. We provide a practical iterative algorithm for computing the projected classifier and derive sample complexity and convergence guarantees. Comprehensive numerical comparisons with state-of-the-art benchmarks show that our approach maintains competitive performance in terms of accuracy-fairness trade-off curves, while achieving favorable runtimes on large datasets. We also evaluate our method on an open dataset with multiple classes, multiple intersecting protected groups, and over 1M samples.
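The form of the projected model, multiplicative post-processing of class probabilities, can be sketched directly. In the sketch below, the per-group multiplicative factors are placeholders; in the paper they are computed by the proposed iterative algorithm so that the projected model meets the group-fairness target, which is not reproduced here.

```python
# Sketch of the post-processing form: rescale a pre-trained classifier's class
# probabilities by group-dependent multiplicative factors and renormalize.
import numpy as np

def project(probs, group, factors):
    """probs: (n, k) class probabilities; factors: dict group -> (k,) multipliers."""
    out = np.empty_like(probs)
    for g, f in factors.items():
        mask = group == g
        scaled = probs[mask] * f            # multiplicative post-processing
        out[mask] = scaled / scaled.sum(axis=1, keepdims=True)
    return out

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.6, 0.3],
                  [0.3, 0.3, 0.4]])
group = np.array([0, 1, 0])
factors = {0: np.array([0.8, 1.1, 1.2]),    # placeholder multipliers
           1: np.array([1.2, 0.9, 1.0])}
print(project(probs, group, factors))
```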
Ethical bias in machine learning models has become a matter of concern in the software engineering community. Most existing software engineering works have focused on finding ethical bias in models rather than fixing it. Once bias is found, the next step is mitigation. Prior researchers have mainly tried to use supervised approaches to achieve fairness. However, in the real world, obtaining data with trustworthy ground truth is challenging, and the ground truth itself can contain human bias. Semi-supervised learning is a machine learning technique where, incrementally, labeled data is used to generate pseudo-labels for the remainder of the data (and then all of the data is used for model training). In this work, we apply four popular semi-supervised techniques as pseudo-labelers to create fair classification models. Our framework, Fair-SSL, takes a very small amount (10%) of labeled data as input and generates pseudo-labels for the unlabeled data. We then synthetically generate new data points to balance the training data based on class and protected attribute, as proposed by Chakraborty et al. at FSE 2021. Finally, the classification model is trained on the balanced, pseudo-labeled data and validated on test data. After experimenting on ten datasets with three learners, we find that Fair-SSL achieves performance similar to three state-of-the-art bias mitigation algorithms. That said, the clear advantage of Fair-SSL is that it requires only 10% of the labeled training data. To the best of our knowledge, this is the first SE work in which semi-supervised techniques are used to combat ethical bias in SE ML models.
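The pseudo-labeling core of such a workflow (fit on ~10% labeled data, label the rest, train on everything) can be sketched in a few lines. The synthetic balancing of class and protected attribute from Chakraborty et al. is only indicated by a comment and not implemented, and the data and learner below are placeholders.

```python
# Sketch of the pseudo-labeling core of a Fair-SSL-style workflow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 3000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = rng.random(n) < 0.10                     # only 10% of labels are available
base = LogisticRegression().fit(X[labeled], y[labeled])
pseudo = base.predict(X[~labeled])                 # pseudo-labels for the rest

X_all = np.vstack([X[labeled], X[~labeled]])
y_all = np.concatenate([y[labeled], pseudo])
# ...here Fair-SSL would synthetically balance classes and protected groups...
final = LogisticRegression().fit(X_all, y_all)
print("agreement with true labels:", (final.predict(X) == y).mean())
```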
Machine learning (ML) plays an increasingly important role in rendering decisions that affect a broad range of groups in society. ML models inform decisions in criminal justice, the extension of credit in banking, and the hiring practices of corporations. This gives rise to the demand for model fairness, which requires that automated decisions be fair with respect to protected features (e.g., gender, race, or age) that are often under-represented in the data. We hypothesize that this problem of under-representation has a corollary in the problem of imbalanced data learning. Such imbalance is typically reflected in both the classes and the protected features. For example, one class (those receiving credit) may be over-represented with respect to another class (those not receiving credit), and a particular group (females) may be under-represented with respect to another group (males). A key element of algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected-group imbalance in the underlying training data, which facilitates improvements in both model accuracy and fairness. We discuss the importance of bridging imbalanced learning and group fairness by showing how key concepts in these fields overlap and complement each other; and we propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features. Our method (i) can be used as an efficient pre-processing algorithm for standard ML algorithms to jointly address imbalance and group equity, and (ii) can be combined with fairness-aware learning algorithms to improve their robustness to varying levels of class imbalance. Additionally, we take a step toward bridging the gap between fairness and imbalanced learning with a new metric, Fair Utility, that combines balanced accuracy with fairness.
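The target balance described above (reducing class and protected-group imbalance at the same time) can be sketched with plain random oversampling: grow every (class, group) cell to the size of the largest cell. This is only an illustration of the target distribution on synthetic data; the abstract does not spell out Fair Oversampling's actual sampling scheme, which is more sophisticated than resampling with replacement.

```python
# Sketch: joint class/group rebalancing by random oversampling with replacement.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "x":     rng.normal(size=1000),
    "group": rng.choice([0, 1], size=1000, p=[0.8, 0.2]),             # protected feature
})
df["y"] = (rng.random(1000) < 0.3 + 0.2 * df["group"]).astype(int)    # skewed classes

cells = df.groupby(["y", "group"])
target = cells.size().max()
balanced = pd.concat(
    [cell.sample(target, replace=True, random_state=0) for _, cell in cells],
    ignore_index=True,
)
print(balanced.groupby(["y", "group"]).size())   # every cell now has `target` rows
```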
Software bias is an increasingly important operational concern for software engineers. We present a large-scale, comprehensive empirical evaluation of 17 representative bias mitigation methods, assessed with 12 machine learning (ML) performance metrics, 4 fairness metrics, and 24 types of fairness-performance trade-off measures, applied to 8 widely adopted benchmark software decision/prediction tasks. The empirical coverage is comprehensive compared with previous work on this important operational software characteristic, covering the largest number of bias mitigation methods, evaluation metrics, and fairness-performance trade-off measures. We find that (1) the bias mitigation methods significantly decrease the values reported by all ML performance metrics (including those not considered in previous work) in a large proportion of scenarios (42% to 75% depending on the ML performance metric); (2) the bias mitigation methods achieve fairness improvements in only about 50% of scenarios across all scenarios and metrics (between 29% and 59% depending on the metric used to assess bias/fairness); (3) the bias mitigation methods have poor trade-offs, even leading to decreases in both fairness and ML performance in 37% of scenarios; (4) the effectiveness of the bias mitigation methods depends on the task, model, and fairness and ML performance metrics, and there is no "silver bullet" mitigation method shown to be effective in all the scenarios studied. The best bias mitigation method that we find outperforms the other methods in only 29% of scenarios. We have made the scripts and data used in this study publicly available to allow future replication and extension of our work.
Machine learning models are becoming ubiquitous in high-stakes applications. Despite their clear benefits in terms of performance, the models could exhibit bias against minority groups and result in fairness issues in the decision-making process, leading to severe negative impacts on individuals and society. In recent years, various techniques have been developed to mitigate the bias of machine learning models. Among them, in-processing methods have drawn increasing attention from the community, in which fairness is directly taken into consideration during model design to induce intrinsically fair models and to fundamentally mitigate fairness issues in outputs and representations. In this survey, we review the current progress of in-processing bias mitigation techniques. Based on where fairness is achieved in the model, we categorize them into explicit and implicit methods, where the former directly incorporate fairness metrics into the training objectives, and the latter focus on refining latent representation learning. Finally, we conclude the survey with a discussion of the research challenges in this community to motivate future exploration.
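A minimal sketch of what the "explicit" category looks like in practice: a fairness term (here a squared demographic-parity gap) added to the training objective and optimized jointly with the loss. This is an illustrative instance of the category on synthetic data, not any specific method from the survey; the penalty weight lam and features are assumptions.

```python
# Sketch of an "explicit" in-processing method: logistic regression trained by
# gradient descent on cross-entropy plus a squared demographic-parity penalty.
import numpy as np

rng = np.random.default_rng(8)
n = 2000
a = rng.integers(0, 2, n)                                   # protected attribute
X = np.column_stack([rng.normal(a, 1.0, n), rng.normal(0, 1, n), np.ones(n)])
y = (X[:, 0] + rng.normal(0, 0.5, n) > 0.5).astype(int)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, lr, lam = np.zeros(X.shape[1]), 0.1, 2.0
for _ in range(500):
    p = sigmoid(X @ w)
    grad_ce = X.T @ (p - y) / n                             # cross-entropy gradient
    gap = p[a == 1].mean() - p[a == 0].mean()               # demographic-parity gap
    d_gap = (X[a == 1] * (p[a == 1] * (1 - p[a == 1]))[:, None]).mean(axis=0) \
          - (X[a == 0] * (p[a == 0] * (1 - p[a == 0]))[:, None]).mean(axis=0)
    w -= lr * (grad_ce + lam * 2 * gap * d_gap)             # joint objective step

pred = (sigmoid(X @ w) > 0.5).astype(int)
print("positive-rate gap:", pred[a == 1].mean() - pred[a == 0].mean())
```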
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from incurring on unfair decision-making, the AI community has concentrated efforts in correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
This paper aims to improve machine learning fairness with respect to multiple sensitive attributes. Machine learning fairness has attracted increasing attention, since machine learning software is increasingly used for high-stakes and high-risk decisions. Most existing machine learning fairness solutions target only one sensitive attribute at a time (e.g., sex), or require magic parameters to tune, or incur expensive computational overhead. To overcome these challenges, we propose FairBalance, which balances the training data distribution across each sensitive attribute before training the machine learning models. Our results show that, at low computational overhead, FairBalance can significantly reduce bias, as measured by the fairness metrics AOD, EOD, and SPD, on every known sensitive attribute, without much, if any, damage to the predictive performance. In addition, FairBalanceClass, a variant of FairBalance, can also balance the class distribution in the training data. With FairBalanceClass, predictions no longer favor the majority class, thus achieving higher F1 scores on the minority class. FairBalance and FairBalanceClass also outperform other state-of-the-art bias mitigation algorithms in terms of both predictive performance and fairness metrics. This study will benefit society by providing a simple yet effective approach to improving the fairness of machine learning software on data with multiple sensitive attributes. Our results also validate the hypothesis that, on datasets with unbiased ground-truth labels, the ethical bias in the learned models largely stems from the training data having (1) different group sizes and (2) different class distributions within each group.
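One plausible reading of "balancing the training data distribution" is to weight each training example inversely to the size of its (sensitive-group, class) cell, so that no group/class combination dominates training. The sketch below implements that reading on synthetic data with a single sensitive attribute; it is not necessarily the exact FairBalance weighting scheme, and the columns are placeholders.

```python
# Sketch: pre-training balancing via per-sample weights inversely proportional to the
# size of each (sensitive-group, class) cell.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
df = pd.DataFrame({
    "x1":  rng.normal(size=n),
    "x2":  rng.normal(size=n),
    "sex": rng.choice([0, 1], size=n, p=[0.7, 0.3]),
})
df["y"] = (df.x1 + 0.5 * df.sex + rng.normal(0, 1, n) > 0.5).astype(int)

cell_counts = df.groupby(["sex", "y"])["x1"].transform("size")
weights = len(df) / cell_counts                     # inverse cell-frequency weights

model = LogisticRegression()
model.fit(df[["x1", "x2"]], df["y"], sample_weight=weights)
pred = model.predict(df[["x1", "x2"]])
mask = (df["sex"] == 1).to_numpy()
gap = pred[mask].mean() - pred[~mask].mean()
print(f"positive-rate gap with balanced weights: {gap:+.3f}")
```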