Algorithmic fairness has attracted increasing attention in the machine learning community. Various definitions have been proposed in the literature, but the differences and connections among them have not been clearly addressed. In this paper, we review and reflect on the various fairness notions previously proposed in the machine learning literature, and attempt to draw connections to arguments in moral and political philosophy, especially theories of justice. We also consider fairness inquiries from a dynamic perspective and further account for the long-term impact induced by current predictions and decisions. In light of the differences among the characterized fairness notions, we propose a flowchart that encompasses the implicit assumptions and expected outcomes of different types of fairness inquiries on the data generating process, on the predicted outcome, and on the induced impact. This paper demonstrates the importance of matching the mission (which kind of fairness one would like to enforce) and the means (what the scope of the fairness analysis is, and what the appropriate analysis scheme is) to fulfill the intended purpose.
Recommender systems can strongly influence which information we see online, e.g., on social media, and thus impact our beliefs, decisions, and actions. At the same time, these systems can create substantial business value for different stakeholders. Given the growing potential impact of such AI-based systems on individuals, organizations, and society, questions of fairness have gained increased attention in recent years. However, research on fairness in recommender systems is still a developing area. In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past. Afterward, through a review of more than 150 scholarly publications, we present an overview of how research in this field is currently operationalized, e.g., in terms of general research methodology, fairness measures, and algorithmic approaches. Overall, our analysis of recent works points to specific research gaps. In particular, we find that in many research works in computer science, very abstract problem operationalizations are prevalent, and questions of the underlying normative claims and what represents a fair recommendation in the context of a given application are often not discussed in depth. These observations call for more interdisciplinary research to address fairness in recommendation in a more comprehensive and impactful manner.
The widespread adoption of business analytics (BA) has brought financial gains and improved efficiency. However, these advances have simultaneously raised growing legal and ethical concerns when BA-informed decisions have unjust impacts. In response to these concerns, the emerging study of algorithmic fairness addresses algorithmic outputs that may result in disparate outcomes or other forms of injustice for subgroups of the population, especially those that have been historically marginalized. Fairness is relevant on the grounds of legal compliance, social responsibility, and utility; if not adequately and systematically addressed, unfair BA systems can cause societal harm and may also threaten an organization's own survival, its competitiveness, and its overall performance. This paper offers a forward-looking, BA-focused review of algorithmic fairness. We first review state-of-the-art research on sources and measures of bias, as well as bias-mitigating algorithms. We then provide a detailed discussion of the fairness-utility relationship, emphasizing that the frequently assumed trade-off between these two constructs is often mistaken or short-sighted. Finally, we chart a path forward by identifying opportunities for business scholars to address impactful open challenges that are key to effective and responsible BA.
Decision-making systems based on AI and machine learning have been used in a wide variety of real-world settings, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future in which autonomous systems will drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society's most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are made by humans, and they remain (or may be amplified) when decisions are made by machines with little transparency, accountability, and fairness. In this paper, we introduce the framework of \textit{causal fairness analysis}, with the intent of filling this gap, i.e., understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to link the quantification of the disparities observed in the data with the collection of underlying, and often unobserved, causal mechanisms that generate the disparity in the first place, a challenge we call the fundamental problem of causal fairness analysis (FPCFA). To solve the FPCFA, we study the problem of decomposing variations and empirical measures of fairness, attributing such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationships among the different criteria found in the literature. Finally, we study which minimal causal assumptions are needed to perform causal fairness analysis and propose a Fairness Cookbook that allows data scientists to assess the existence of disparate impact and disparate treatment.
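To fix ideas about the quantity such an analysis decomposes, a minimal sketch in common notation follows; the symbols $x_0, x_1$ for the two groups and $Y$ for the outcome are notational assumptions of this sketch, not the paper's exact formulation.

```latex
% Observed disparity between groups x_0 and x_1 in outcome Y (total variation):
\[
\mathrm{TV}_{x_0, x_1}(y) = P(y \mid x_1) - P(y \mid x_0).
\]
% Causal fairness analysis seeks to attribute this observational quantity to the
% direct, indirect, and spurious causal pathways that generate it.
```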
Fairness is an important requirement for ensuring that machine learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, in particular minorities. Given the inherent subjectivity of viewing the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios. In addition, unlike other surveys in the literature, it addresses the question: which notion of fairness is most suited to a given real-world scenario, and why? Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) fitting these two elements together to recommend the most suitable fairness notion in each specific setting. The results are summarized in a decision diagram that can be used by practitioners and policy makers to navigate the relatively large catalog of ML fairness notions.
In recent years, the problem of addressing fairness in machine learning (ML) and automated decision-making has attracted considerable attention in the scientific community dealing with artificial intelligence. A plethora of different definitions of fairness in ML have been proposed, each capturing a different notion of what a "fair decision" affecting individuals of the population means. The precise differences, implications, and "orthogonality" between these notions have not yet been fully analyzed in the literature. In this work, we try to bring some order to this landscape of definitions.
Typically, research on fair machine learning has focused on a single decision maker and assumed that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does the strategic behavior of decision subjects in partial compliance settings affect allocation outcomes? If k% of employers were to voluntarily adopt a fairness-promoting intervention, should we expect proportional (k%) progress toward the benefits of universal adoption, or will the dynamics of partial compliance wash out the hoped-for benefits? How would adopting a global (versus local) perspective affect an auditor's conclusions? In this paper, we propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics. Our key findings are that, in equilibrium: (1) partial compliance (by k% of employers) can result in far less than proportional (k%) progress toward full-compliance outcomes; (2) the gap is more severe when fair employers match global (vs. local) statistics; (3) the choice of local vs. global statistics can paint dramatically different pictures of the fairness performance of compliant versus non-compliant employers; and (4) partial compliance with local parity measures can induce extreme segregation.
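A toy simulation can make the partial-compliance setup concrete. The sketch below is a hypothetical illustration only: the population model, scoring rule, and parity-style per-group quota are assumptions of this sketch and not the paper's model. A fraction `k_compliant` of employers hires under the quota rule, while the rest hire purely by score.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(k_compliant=0.5, n_employers=100, n_applicants=50, slots=5):
    """Toy labor market (hypothetical illustration, not the paper's model).

    A fraction `k_compliant` of employers hires the top scorers within each
    group under a proportional quota; the rest hire the top scorers overall.
    Returns the per-group hiring rates.
    """
    hired_by_group = np.zeros(2)
    applicants_by_group = np.zeros(2)
    for _ in range(n_employers):
        group = rng.integers(0, 2, n_applicants)                # protected attribute
        score = rng.normal(loc=group * 0.5, size=n_applicants)  # scores biased toward group 1
        applicants_by_group += np.bincount(group, minlength=2)
        if rng.random() < k_compliant:
            # Compliant employer: split slots roughly proportionally across
            # groups and hire the top scorers within each group.
            hires = []
            for g in (0, 1):
                idx = np.where(group == g)[0]
                quota = round(slots * len(idx) / n_applicants)
                hires.extend(idx[np.argsort(-score[idx])[:quota]])
            hires = np.array(hires, dtype=int)
        else:
            # Non-compliant employer: hire the top scorers overall.
            hires = np.argsort(-score)[:slots]
        hired_by_group += np.bincount(group[hires], minlength=2)
    return hired_by_group / applicants_by_group

for k in (0.0, 0.5, 1.0):
    print(f"k = {k:.1f}, hiring rates (group 0, group 1):", simulate(k_compliant=k))
```

Varying `k_compliant` and comparing the per-group hiring rates is the kind of experiment the abstract describes, in much simplified form.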
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
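The counterfactual fairness criterion sketched in this abstract is commonly written in structural-causal-model notation, with protected attribute $A$, remaining features $X$, background variables $U$, and predictor $\hat{Y}$; the formalization below follows that standard convention.

```latex
% Counterfactual fairness: for every context (X = x, A = a), every outcome y,
% and every counterfactual value a', the prediction is distributed identically
% in the actual and counterfactual worlds.
\[
P\bigl(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\bigr)
  = P\bigl(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\bigr).
\]
```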
Existing efforts to formulate computational definitions of fairness have largely focused on distributional notions of equality, where equality is defined by the resources or decisions given out by a system. Yet existing discrimination and injustice are often the result of unequal social relations rather than an unequal distribution of resources. Here, we show how optimizing for existing computational and economic definitions of fairness and equality fails to prevent unequal social relations. To do this, we provide an example of a simple hiring market with a self-confirming equilibrium that is relationally unequal but satisfies existing distributional notions of fairness. In doing so, we introduce a notion of blatant relational unfairness for complete-information games, and discuss how this definition can help initiate a new approach to incorporating relational equality into computational systems.
Addressing the problem of fairness is crucial to safely using machine learning algorithms to support decisions with a critical impact on people's lives, such as job hiring, child maltreatment screening, disease diagnosis, and loan granting. Several notions of fairness have been defined and examined over the past decade, such as statistical parity and equalized odds. The most recent fairness notions, however, are causality-based, reflecting the now widely accepted idea that using causality is necessary to appropriately address the problem of fairness. This paper examines an exhaustive list of causality-based fairness notions and studies their applicability in real-world scenarios. Because most causality-based fairness notions are defined in terms of non-observable quantities (e.g., interventions and counterfactuals), their deployment in practice requires computing or estimating those quantities from observational data. This paper offers a comprehensive report on the different approaches to inferring causal quantities from observational data, covering identifiability (Pearl's SCM framework) and estimation (the potential outcomes framework). The main contributions of this survey are (1) a guideline intended to help select a suitable fairness notion for a specific real-world scenario, and (2) a ranking of the fairness notions according to Pearl's ladder of causation, indicating how difficult each notion is to deploy in practice.
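As a concrete anchor for the observational notions mentioned above (statistical parity and equalized odds), a minimal sketch follows; the arrays `y_true`, `y_pred`, and `group` are hypothetical binary data, not taken from the survey.

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate between the two groups."""
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = (y_true == label)
        rate_1 = y_pred[mask & (group == 1)].mean()
        rate_0 = y_pred[mask & (group == 0)].mean()
        gaps.append(abs(rate_1 - rate_0))
    return max(gaps)

# Hypothetical example data
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print("statistical parity gap:", statistical_parity_gap(y_pred, group))
print("equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))
```

Causality-based notions, by contrast, cannot be computed from such arrays alone; they require interventional or counterfactual quantities, which is exactly the estimation problem the survey addresses.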
Machine learning has significantly enhanced the abilities of robots, enabling them to perform a wide range of tasks in human environments and adapt to our uncertain real world. Recent works in various machine learning domains have highlighted the importance of fairness to ensure that these algorithms do not reproduce human biases and lead to discriminatory outcomes. With robot learning systems increasingly performing more and more tasks in our everyday lives, it is crucial to understand the influence of such biases to prevent unintended behavior toward certain groups of people. In this work, we present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges. We propose a taxonomy of sources of bias and of the resulting types of discrimination. Using examples from different robot learning domains, we examine scenarios of unfair outcomes and strategies to mitigate them. We present early advances in the field by covering different fairness definitions, ethical and legal considerations, and methods for fair robot learning. With this work, we aim to pave the way for groundbreaking developments in fair robot learning.
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm's use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court. This Essay examines these concerns through the lens of American antidiscrimination law, more particularly through Title
A significant body of research in the data sciences considers unfair discrimination against social categories such as race or gender that could occur or be amplified as a result of algorithmic decisions. Simultaneously, real-world disparities continue to exist, even before algorithmic decisions are made. In this work, we draw on insights from the social sciences brought into the realm of causal modeling and constrained optimization, and develop a novel algorithmic framework for tackling pre-existing real-world disparities. The purpose of our framework, which we call the "impact remediation framework," is to measure real-world disparities and discover the optimal intervention policies that could help improve equity or access to opportunity for those who are underserved with respect to an outcome of interest. We develop a disaggregated approach to tackling pre-existing disparities that relaxes the typical set of assumptions required for the use of social categories in structural causal models. Our approach flexibly incorporates counterfactuals and is compatible with various ontological assumptions about the nature of social categories. We demonstrate impact remediation with a hypothetical case study and compare our disaggregated approach to an existing state-of-the-art approach, comparing its structure and resulting policy recommendations. In contrast to most work on optimal policy learning, we explore disparity reduction itself as an objective, explicitly focusing the power of algorithms on reducing inequality.
This paper presents SCALES, a general framework that translates fairness principles into a common representation based on the Constrained Markov Decision Process (CMDP). With the help of causal language, our framework can place constraints both on the decision-making process (procedural fairness) and on the outcomes resulting from decisions (outcome fairness). Specifically, we show that well-known fairness principles can be encoded as a utility component, a non-causal component, or a causal component within a SCALES-CMDP. We illustrate SCALES using a set of case studies involving a simulated healthcare scenario and the real-world COMPAS dataset. Experiments demonstrate that our framework produces fair policies embodying alternative fairness principles in both single-step and sequential decision-making scenarios.
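The following is a generic sketch of the CMDP structure that such a framework builds on, not the SCALES implementation itself; the class and the toy instance below (states, actions, cost function, budget) are illustrative assumptions showing how a fairness principle can enter as a constraint cost alongside the usual reward.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

State = str
Action = str

@dataclass
class ConstrainedMDP:
    """Generic CMDP skeleton (illustrative sketch, not the SCALES implementation).

    A fairness principle can be encoded through `constraint_cost`, which an
    admissible policy must keep below `budget` in expectation while the usual
    task `reward` is maximized.
    """
    states: Tuple[State, ...]
    actions: Tuple[Action, ...]
    transition: Callable[[State, Action], Dict[State, float]]  # P(next state | state, action)
    reward: Callable[[State, Action], float]                   # task utility
    constraint_cost: Callable[[State, Action], float]          # fairness "cost" signal
    budget: float                                              # allowed expected constraint cost

# Hypothetical toy instance: the constraint charges a cost whenever applicants from
# group B are rejected, and the budget caps how much of that cost a policy may incur.
toy = ConstrainedMDP(
    states=("applicant_group_a", "applicant_group_b"),
    actions=("accept", "reject"),
    transition=lambda s, a: {s: 1.0},
    reward=lambda s, a: 1.0 if a == "accept" else 0.0,
    constraint_cost=lambda s, a: 1.0 if (s == "applicant_group_b" and a == "reject") else 0.0,
    budget=0.1,
)
```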
Classification, a heavily studied data-driven machine learning task, drives an increasing number of prediction systems involving critical human decisions such as loan approval and criminal risk assessment. However, classifiers often demonstrate discriminatory behavior, especially when presented with biased data. Consequently, fairness in classification has emerged as a high-priority research area. Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness, including the topic of fair classification. The interdisciplinary efforts in fair classification, with machine learning research having the largest presence, have resulted in a large number of fairness notions and a wide range of approaches that have not been systematically evaluated and compared. In this paper, we conduct a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, robustness to data errors, sensitivity to the underlying ML model, data efficiency, and stability, using a variety of metrics and real-world datasets. Our analysis highlights novel insights into the impact of different metrics and of high-level approach characteristics on different aspects of performance. We also discuss general principles for choosing approaches suitable for different practical settings, and identify areas where data-management-centric solutions are likely to have the greatest impact.
Causal reasoning plays an indispensable role in how humans make sense of the world and come to decisions in everyday life. While 20th-century science was marked by dismissing causal claims as too strong and unattainable, the 21st century has been marked by the return of causality, encouraged by the mathematization of causal notions and the introduction of a non-deterministic concept of cause \cite{illari2011look}. Besides its common use cases in epidemiology and the political and social sciences, causality turns out to be crucial for evaluating the fairness of automated decisions, in both a legal and an everyday sense. We provide arguments and examples for why causality is particularly important for fairness evaluation. In particular, we point out the social impact of non-causal predictions and the legal anti-discrimination process that relies on causal claims. We conclude with a discussion of the challenges and limitations of applying causality in practical scenarios, as well as possible solutions.
We study critical systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing. These systems often support communities disproportionately affected by systemic racial, gender, or other injustices, so it is crucial to design these systems with fairness considerations in mind. To address this problem, we propose a framework for evaluating fairness in contextual resource allocation systems that is inspired by fairness metrics in machine learning. This framework can be applied to evaluate the fairness properties of a historical policy, as well as to impose constraints in the design of new (counterfactual) allocation policies. Our work culminates with a set of incompatibility results that investigate the interplay between the different fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. Our framework can help guide the discussion among stakeholders in deciding which fairness metrics to impose when allocating scarce resources.
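A small numerical example can illustrate the first incompatibility result, that fairness in allocation and fairness in outcomes generally diverge; the data and the specific gap metric below are hypothetical and are not the metrics defined in the paper.

```python
import numpy as np

# Hypothetical data (not from the paper): who received the scarce resource and
# who eventually reached a good outcome, by group.
received = np.array([1, 0, 1, 0, 1, 0, 0, 1])   # allocation decision
outcome  = np.array([1, 0, 1, 0, 0, 0, 0, 1])   # eventual outcome
group    = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # group membership

def rate_gap(values, group):
    """Absolute difference in mean value between the two groups."""
    return abs(values[group == 1].mean() - values[group == 0].mean())

# "Fairness in allocation": both groups receive the resource at the same rate.
print("allocation gap:", rate_gap(received, group))   # 0.0 in this toy data
# "Fairness in outcomes": outcomes nevertheless differ across groups, e.g. because
# baseline risk and treatment effects differ -- the tension studied in the abstract.
print("outcome gap:", rate_gap(outcome, group))       # 0.25 in this toy data
```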
“算法公平性”的新兴领域提供了一种用于推理算法预测和决策的公平的一组新颖的方法。甚至作为算法公平已经成为提高域名在此类公共政策中平等的努力的突出成分,它也面临着显着的限制和批评。最基本的问题是称为“公平性不可能”的数学结果(公平的数学定义之间的不相容性)。此外,满足公平标准的许多算法实际上加剧了压迫。这两个问题呼吁质疑算法公平是否可以在追求平等中发挥富有成效的作用。在本文中,我将这些问题诊断为算法公平方法的乘积,并提出了该领域的替代路径。 “正式算法公平”的主导方法遭受了基本限制:它依赖于狭窄的分析框架,这些分析框架仅限于特定决策过程,孤立于这些决定的背景。鉴于这种缺点,我借鉴了法律和哲学的实质性平等的理论,提出了一种替代方法:“实质性算法公平。”实质性算法公平性采用更广泛的范围来分析公平性,超出特定决策点,以考虑社会等级,以及算法促进的决策的影响。因此,实质性算法公平表明,改革,使压迫压迫和逃避公平的不可能性。此外,实质性算法公平呈现出算法公平领域的新方向:远离“公平性”的正式数学模型,并朝着算法促进平等的实质性评估。
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from incurring on unfair decision-making, the AI community has concentrated efforts in correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Research on bias, such as gender or racial bias, is an important topic in the social and behavioral sciences. However, the concept of bias is not always clearly defined in the literature. Definitions of bias are often ambiguous, or no definition is provided at all. To study bias precisely, it is important to have a well-defined concept of bias. We propose to define bias as an unjustified direct causal effect. We also propose to define the closely related concept of disparity as a direct or indirect causal effect that includes a bias. Our proposed definitions can be used to study bias and disparity in a more rigorous and systematic way. We compare our definitions of bias and disparity with various definitions of fairness introduced in the artificial intelligence literature. In addition, we illustrate our definitions in two case studies, focusing on gender bias in science and racial bias in police shootings. Our proposed definitions aim to foster a better appreciation of the causal intricacies of studies of bias and disparity. We hope that this will also lead to an improved understanding of the policy implications of such studies.
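The proposed definitions build on standard counterfactual quantities from mediation analysis; the notation below (attribute $A$, mediators $M$, outcome $Y$) is a common convention used here for illustration, not the paper's exact formulas. Bias, as defined above, corresponds to an unjustified direct effect, while disparity may also flow through mediated paths.

```latex
% Total effect of changing A from a_0 to a_1 on the outcome Y:
\[
\mathrm{TE} = \mathbb{E}\bigl[Y_{A \leftarrow a_1}\bigr] - \mathbb{E}\bigl[Y_{A \leftarrow a_0}\bigr].
\]
% Natural direct effect: A is changed while the mediators M are held at the
% value they would naturally take under a_0:
\[
\mathrm{NDE} = \mathbb{E}\bigl[Y_{A \leftarrow a_1,\, M \leftarrow M_{a_0}}\bigr] - \mathbb{E}\bigl[Y_{A \leftarrow a_0}\bigr].
\]
```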