A central concern about fairness in machine learning (ML) is that, in order to achieve it, one may have to give up some accuracy. To overcome this issue, Hardt et al. proposed the notion of equal opportunity (EO), which is compatible with maximal accuracy when the target label is a deterministic function of the input features. In the probabilistic case, however, the issue is more complicated: it has been shown that under differential privacy constraints, there exist data sources on which EO can only be achieved at the complete expense of accuracy, in the sense that a classifier that meaningfully satisfies EO cannot be more accurate than a trivial (i.e., constant) classifier. In our paper we strengthen this result by removing the privacy constraint. Namely, we show that for certain data sources, the most accurate classifier that satisfies EO is a trivial classifier. Furthermore, we study the trade-off between accuracy and the EO loss (opportunity difference), and provide sufficient conditions on the data source under which EO and non-trivial accuracy are compatible.
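A minimal sketch (not from the paper; names, numbers, and the toy data are invented) of the two quantities the abstract trades off: the equal-opportunity gap (difference in true-positive rates across groups) and how a classifier's accuracy compares with the trivial, constant classifier.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

def trivial_accuracy(y_true):
    """Accuracy of the best constant classifier (always predict the majority label)."""
    p = y_true.mean()
    return max(p, 1 - p)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)
y_true = rng.binomial(1, np.where(group == 1, 0.7, 0.3))      # group-dependent base rates
hit_rate = np.where(group == 1, 0.85, 0.65)                   # classifier detects positives better in group 1
y_pred = rng.binomial(1, np.where(y_true == 1, hit_rate, 0.2))

print("EO gap:             ", equal_opportunity_gap(y_true, y_pred, group))
print("classifier accuracy:", (y_true == y_pred).mean())
print("trivial accuracy:   ", trivial_accuracy(y_true))
```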
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected group are available, we show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker, who can respond by improving the classification accuracy. In line with other studies, our notion is oblivious: it depends only on the joint statistics of the predictor, the target and the protected attribute, but not on interpretation of individual features. We study the inherent limits of defining and identifying biases based on such oblivious measures, outlining what can and cannot be inferred from different oblivious tests. We illustrate our notion using a case study of FICO credit scores.
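The paper derives the optimal adjustment itself; the sketch below is only a simplified illustration of the post-processing idea, assuming a score-based predictor and a binary group: search for group-specific thresholds whose true-positive rates roughly match, keeping the most accurate such pair. The paper's construction is more general (it can randomize between thresholds).

```python
import numpy as np

def equalize_tpr_thresholds(scores, y_true, group, tol=0.02, grid=np.linspace(0, 1, 51)):
    """Pick per-group thresholds whose TPR gap is within `tol`, maximizing accuracy."""
    def tpr(g, t):
        pos = (group == g) & (y_true == 1)
        return (scores[pos] >= t).mean()

    def acc(t0, t1):
        thr = np.where(group == 0, t0, t1)
        return ((scores >= thr).astype(int) == y_true).mean()

    best, best_acc = None, -1.0
    for t0 in grid:
        for t1 in grid:
            if abs(tpr(0, t0) - tpr(1, t1)) <= tol and acc(t0, t1) > best_acc:
                best, best_acc = (t0, t1), acc(t0, t1)
    return best, best_acc   # thresholds (t0, t1) and the accuracy they achieve
```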
Practical applications of machine learning tools in high-stakes domains are often regulated to be fair, in the sense that the predicted target should satisfy some quantitative notion of parity with respect to a protected attribute. However, the exact trade-off between fairness and accuracy is not entirely clear, even for the basic paradigm of classification problems. In this paper, we characterize the inherent trade-off between statistical parity and accuracy in the classification setting by providing a lower bound on the sum of group-wise errors of any fair classifier. Our impossibility theorem can be interpreted as a certain uncertainty principle of fairness: if the base rates differ across groups, then any fair classifier satisfying statistical parity must incur a large error on at least one of the groups. We further extend this result to give a lower bound on the joint error of any (approximately) fair classifier, from the perspective of learning fair representations. To show that our lower bound is tight, assuming oracle access to the Bayes (potentially unfair) classifier, we also construct an algorithm that returns a randomized classifier that is both optimal and fair. Interestingly, when the protected attribute can take more than two values, an extension of this lower bound does not admit an analytic solution. Nevertheless, in this case we show that the lower bound can be computed efficiently by solving a linear program, which we term the TV-barycenter problem, a barycenter problem under the TV distance. On the upside, we prove that if the group-wise Bayes optimal classifiers are close, then learning fair representations leads to an alternative notion of fairness, known as accuracy parity, which requires the error rates to be close across groups. Finally, we also conduct experiments on real-world datasets to confirm our theoretical findings.
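A toy numerical illustration (data and classifier invented, no claim about the paper's exact bound): for a classifier with equal positive-prediction rates in both groups, compare the sum of group-wise errors with the gap in base rates, which is the quantity the abstract's lower bound is stated in terms of.

```python
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=50_000)
base_rate = np.where(group == 1, 0.8, 0.2)         # very different base rates
y = rng.binomial(1, base_rate)

# A "statistical parity" classifier: predict 1 with the same probability 0.5 in both groups.
y_hat = rng.binomial(1, 0.5, size=y.size)

group_errors = [(y != y_hat)[group == g].mean() for g in (0, 1)]
delta_base_rate = abs(y[group == 1].mean() - y[group == 0].mean())

print("sum of group-wise errors:", sum(group_errors))
print("base-rate gap:           ", delta_base_rate)
```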
Recent work highlights the role of causality in designing equitable decision-making algorithms. It is not immediately clear, however, how existing causal notions of fairness relate to one another, or what the consequences are of using these definitions as design principles. Here, we first assemble the popular causal definitions of algorithmic fairness into two broad families: (1) those that constrain the effects of decisions on counterfactual disparities; and (2) those that constrain the effects of legally protected characteristics, such as race and gender, on decisions. We then show, analytically and empirically, that both families of definitions \emph{almost always}, in a measure-theoretic sense, result in Pareto-dominated decision policies, meaning that there is an alternative, unconstrained policy preferred by every stakeholder whose preferences are drawn from a large natural class. For example, in the case of college admissions decisions, policies constrained to satisfy causal fairness definitions would be disfavored by every stakeholder with neutral or positive preferences for both academic preparedness and diversity. Indeed, under a prominent definition of causal fairness, we prove that the resulting policies require admitting all students with the same probability, regardless of academic qualifications or group membership. Our results highlight formal limitations and potential adverse consequences of common mathematical notions of causal fairness.
Decision-making systems based on AI and machine learning have been used in a wide range of real-world scenarios, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future in which autonomous systems will drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society's most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are made by humans, and they remain (or may be amplified) when decisions are made by machines with little transparency, accountability, and fairness. In this paper, we introduce the framework of \textit{causal fairness analysis} with the aim of filling this gap, i.e., understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms that generate these disparities in the first place, a challenge we call the fundamental problem of causal fairness analysis (FPCFA). To solve the FPCFA, we study the problem of decomposing variations and empirical measures of fairness, attributing such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationships between the different criteria found in the literature. Finally, we study the minimal causal assumptions needed for performing causal fairness analysis and propose a Fairness Cookbook, which allows data scientists to assess the existence of disparate impact and disparate treatment.
The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models. This has motivated a growing line of work on what it means for a classification procedure to be "fair." In this paper, we investigate the tension between minimizing error disparity across different population groups while maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e. equal false-negative rates across groups), and show that any algorithm that satisfies this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets.
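A sketch in the spirit of the relaxation discussed above, not the paper's exact procedure (helper names and toy data are invented): replacing a random fraction of one group's scores with that group's base rate keeps within-group calibration, since among the replaced samples the outcome frequency equals the base rate, while moving that group's false-negative rate.

```python
import numpy as np

def withhold_fraction(scores, y, group, g, frac, rng):
    """Replace a random `frac` of group g's scores with the group's base rate."""
    out = scores.copy()
    idx = np.flatnonzero(group == g)
    chosen = rng.choice(idx, size=int(frac * idx.size), replace=False)
    out[chosen] = y[group == g].mean()
    return out

def false_negative_rate(scores, y, group, g, thr=0.5):
    pos = (group == g) & (y == 1)
    return (scores[pos] < thr).mean()

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=20_000)
p = np.clip(rng.normal(np.where(group == 1, 0.6, 0.4), 0.2), 0.01, 0.99)  # scores
y = rng.binomial(1, p)                                                    # calibrated by construction

adjusted = withhold_fraction(p, y, group, g=0, frac=0.3, rng=rng)
print(false_negative_rate(p, y, group, 0), "->", false_negative_rate(adjusted, y, group, 0))
```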
The notion of omnipredictors (Gopalan, Kalai, Reingold, Sharan and Wieder, ITCS 2021) suggests a new paradigm for loss minimization. Rather than learning a predictor tailored to a known loss function, an omnipredictor can easily be post-processed to minimize any loss from a rich family of loss functions, with loss comparable to that achievable with a hypothesis class $C$. It has been shown that such omnipredictors exist and are implied (for all convex and Lipschitz loss functions) by the notion of multicalibration from the algorithmic fairness literature. However, it is often the case that the chosen action must obey some additional constraints (such as capacity or parity constraints). By itself, the original notion of omnipredictors does not apply in this well-motivated and heavily studied setting of loss minimization under constraints. In this paper, we introduce omnipredictors for constrained optimization and study their complexity and implications. The notion we introduce allows the learner to be unaware of the loss function that will later be assigned, as well as of the constraints that will later be imposed, as long as the subpopulations used to define these constraints are known. The paper shows how to obtain omnipredictors for constrained optimization problems by relying on appropriate variants of multicalibration. For some interesting constraints and general loss functions, as well as for general constraints and some interesting loss functions, we show how omnipredictors are implied by a variant of multicalibration that is not much more complex than standard multicalibration. We demonstrate that in the general case, standard multicalibration is insufficient, and show that omnipredictors are implied by multicalibration with respect to a class containing all the level sets of hypotheses in $C$. We also investigate the implications when the constraints are group fairness notions.
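A minimal sketch of the unconstrained omnipredictor idea that the abstract builds on (the losses, values, and function names are invented): given a single calibrated probability estimate, the action for any particular loss can be chosen after the fact by minimizing the predicted expected loss. Handling the additional capacity or parity constraints is exactly what the paper adds on top of this.

```python
import numpy as np

def post_process(p, loss, actions):
    """For each predicted probability p(x), pick the action minimizing p*loss(a,1)+(1-p)*loss(a,0)."""
    p = np.asarray(p)
    exp_loss = np.stack([p * loss(a, 1) + (1 - p) * loss(a, 0) for a in actions], axis=1)
    return np.asarray(actions)[exp_loss.argmin(axis=1)]

# The same predictions re-used for two different losses, fixed only after the fact.
squared = lambda a, y: (a - y) ** 2
asymmetric = lambda a, y: 5.0 * y * (1 - a) + (1 - y) * a     # false negatives cost 5x
p_hat = np.array([0.1, 0.4, 0.6, 0.9])
print(post_process(p_hat, squared, actions=np.linspace(0, 1, 101)))   # ~[0.1, 0.4, 0.6, 0.9]
print(post_process(p_hat, asymmetric, actions=[0, 1]))                # [0, 1, 1, 1]
```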
In this paper, we introduce a fairness interpretability framework for measuring and explaining the bias of classification and regression models at the level of a distribution. In our work, motivated by the ideas of Dwork et al. (2012), we measure the model bias across sub-population distributions using the Wasserstein metric. The transport-theoretic characterization of the Wasserstein metric allows us to take into account the sign of the bias across the model distributions, which in turn yields a decomposition of the model bias into positive and negative components. To understand how predictors contribute to the model bias, we introduce and theoretically characterize bias predictor attributions, called bias explanations, and investigate their stability. We also provide a formulation of bias explanations that accounts for the impact of missing values. In addition, motivated by the works of Štrumbelj and Kononenko (2014) and Lundberg and Lee (2017), we construct additive bias explanations by employing cooperative game theory and investigate their properties.
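A rough sketch of the Wasserstein-based bias and its sign decomposition (assuming scalar model scores, two subgroups, and an empirical grid approximation; not the paper's estimator): $W_1$ equals the integral of $|F_0 - F_1|$ between the subgroup CDFs, which splits into a positive and a negative part.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def signed_w1_components(scores_0, scores_1, num=2001):
    grid = np.linspace(min(scores_0.min(), scores_1.min()),
                       max(scores_0.max(), scores_1.max()), num)
    F0 = np.searchsorted(np.sort(scores_0), grid, side="right") / scores_0.size
    F1 = np.searchsorted(np.sort(scores_1), grid, side="right") / scores_1.size
    dt = grid[1] - grid[0]
    pos = np.clip(F0 - F1, 0, None).sum() * dt     # region where group 0 scores are stochastically lower
    neg = np.clip(F1 - F0, 0, None).sum() * dt     # region where group 1 scores are stochastically lower
    return pos, neg                                 # pos + neg approximates the W1 distance

rng = np.random.default_rng(0)
s0, s1 = rng.beta(2, 5, 20_000), rng.beta(3, 4, 20_000)
pos, neg = signed_w1_components(s0, s1)
print(pos, neg, pos + neg, wasserstein_distance(s0, s1))   # the last two should roughly agree
```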
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes. This setting is motivated by the fact that protected attributes can be incorrect due to strategic misreporting, malicious actors, or errors in imputation; and existing approaches that make stochastic or independence assumptions on such errors may not satisfy their guarantees in this adversarial setting. Our main contribution is an optimization framework for learning fair classifiers in this adversarial setting that comes with provable guarantees on both accuracy and fairness. Our framework works with multiple and non-binary protected attributes, is designed for the large class of linear-fractional fairness metrics, and can also handle perturbations beyond the protected attributes. We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy, and any algorithm with better fairness must have lower accuracy. Empirically, we evaluate the classifiers produced by our framework with respect to statistical rate against such an adversary.
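A tiny sketch of the threat model only (the paper's adversary is arbitrary; the targeting rule below is one crude, invented heuristic): flip the protected attribute of an $\eta$-fraction of training rows.

```python
import numpy as np

def perturb_protected(a, y, eta, rng):
    """Flip the binary protected attribute of an eta-fraction of rows, targeting positives first."""
    budget = int(eta * a.size)
    order = np.argsort(-y)            # crude heuristic: corrupt positively-labeled rows first
    idx = order[:budget]
    a = a.copy()
    a[idx] = 1 - a[idx]
    return a
```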
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness, and recently proposed individual notions of fairness, but it raises several computational challenges. It is no longer clear how to even check or audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. However, it also suggests that common heuristics for learning can be applied to successfully solve the auditing problem in practice. We then derive two algorithms that provably converge to the best fair distribution over classifiers in a given class, given access to oracles which can optimally solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner (the primal player) and an Auditor (the dual player). Both algorithms compute an equilibrium of this game. We obtain our first algorithm by simulating play of the game by having Learner play an instance of the no-regret Follow the Perturbed Leader algorithm, and having Auditor play best response. This algorithm provably converges to an approximate Nash equilibrium (and thus to an approximately optimal subgroup-fair distribution over classifiers) in a polynomial number of steps. We obtain our second algorithm by simulating play of the game by having both players play Fictitious Play, which enjoys only provably asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the Fictitious Play version using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
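A hand-rolled illustration of the auditing idea for the statistical parity version (not the paper's algorithm; the choice of a shallow decision tree as the heuristic learner is an assumption): use a simple learner over the protected attributes as the Auditor, searching for a structured subgroup whose positive-classification rate deviates from the overall rate, weighted by the subgroup's size.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def audit_statistical_parity(protected, y_hat, max_depth=2):
    """protected: 2-D array of protected attributes; y_hat: the classifier's 0/1 predictions.
    Returns (weighted violation, subgroup mask) for the best subgroup found by a shallow tree."""
    residual = y_hat - y_hat.mean()
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(protected, residual)
    best = (0.0, None)
    for subgroup in (tree.predict(protected) > 0, tree.predict(protected) <= 0):
        if subgroup.any():
            size = subgroup.mean()
            gap = abs(y_hat[subgroup].mean() - y_hat.mean())
            if size * gap > best[0]:
                best = (size * gap, subgroup)
    return best
```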
Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
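A toy check of the definition above, under an invented linear structural causal model (all equations and coefficients are assumptions for illustration): infer the latent noise from the observed data (abduction), flip the protected attribute while holding the noise fixed (action), and compare predictions (prediction).

```python
import numpy as np

alpha = 2.0                                    # assumed effect of the protected attribute A on feature X
def predictor_unfair(x):   return 1.0 * x                 # uses X directly
def predictor_fair(x, a):  return 1.0 * (x - alpha * a)   # uses only the inferred latent U

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=5)
u = rng.normal(size=5)
x = alpha * a + u                              # structural equation for X

u_inferred = x - alpha * a                     # abduction: recover the latent noise
x_cf = alpha * (1 - a) + u_inferred            # action: flip A, keep U fixed

print(predictor_unfair(x) - predictor_unfair(x_cf))        # nonzero: not counterfactually fair
print(predictor_fair(x, a) - predictor_fair(x_cf, 1 - a))  # all zeros: counterfactually fair
```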
We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.
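A minimal sketch of point (2) above for a binary decision (the similarity metric, utilities, and sizes are invented): a randomized classifier is a vector of acceptance probabilities, and the "treat similar individuals similarly" constraint $|p_i - p_j| \le d(i,j)$ is linear, so utility can be maximized with a small LP.

```python
import numpy as np
from scipy.optimize import linprog

utility = np.array([1.0, 0.2, -0.5])               # decision maker's gain from accepting each person
d = np.array([[0.0, 0.1, 0.8],
              [0.1, 0.0, 0.8],
              [0.8, 0.8, 0.0]])                    # hypothetical task-specific similarity metric

n = len(utility)
rows, rhs = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n); row[i], row[j] = 1.0, -1.0
            rows.append(row); rhs.append(d[i, j])  # p_i - p_j <= d(i, j)

res = linprog(c=-utility, A_ub=np.array(rows), b_ub=np.array(rhs), bounds=[(0, 1)] * n)
print(res.x)   # acceptance probabilities maximizing utility under the Lipschitz constraint
```

Here the third individual, though unprofitable, must be accepted with probability at least 0.2 because the first individual is accepted and the two are within distance 0.8.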
This paper is a companion to our earlier work, Miroshnikov et al. (2021), on fairness interpretability, which introduces bias explanations. In the present work, we propose a bias mitigation methodology based on the construction of post-processed models with fairer regressor distributions with respect to Wasserstein-based fairness metrics. By identifying the list of predictors that contribute the most to the bias, we reduce the dimensionality of the problem by mitigating the bias originating from those predictors. The post-processing methodology involves reshaping the predictor distributions by balancing the positive and negative bias explanations, which allows the regressor bias to decrease. We design an algorithm that uses Bayesian optimization to construct a bias-performance efficient frontier over the family of post-processed models, from which an optimal model is selected. Our novel approach performs optimization in a low-dimensional space and avoids expensive model retraining.
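A much-simplified sketch of the frontier idea only (plain grid search instead of Bayesian optimization, and a one-parameter post-processing invented for illustration): sweep a family of post-processed scores, record (performance, Wasserstein bias) pairs, and keep the efficient frontier from which a model would be selected.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def efficient_frontier(scores, y, group, ts=np.linspace(0, 1, 21)):
    """Sweep a shrinkage parameter t, returning non-dominated (t, performance, bias) triples."""
    own_mean = np.where(group == 1, scores[group == 1].mean(), scores[group == 0].mean())
    points = []
    for t in ts:
        adj = scores + t * (scores.mean() - own_mean)     # shift each group toward the pooled mean
        perf = -np.mean((adj - y) ** 2)                    # negative Brier score: higher is better
        bias = wasserstein_distance(adj[group == 0], adj[group == 1])
        points.append((t, perf, bias))
    return [p for p in points
            if not any(q[1] >= p[1] and q[2] <= p[2] and (q[1] > p[1] or q[2] < p[2]) for q in points)]
```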
When training probabilistic classifiers and calibrating them, the so-called grouping loss component of the calibration loss can easily be overlooked. Grouping loss refers to the gap between the observable information and the information actually exploited in the calibration exercise. We investigate the relation between grouping loss and the concept of sufficiency, identifying comonotonicity as a useful criterion for sufficiency. We revisit the probing reduction approach of Langford & Zadrozny (2005) and find that it yields an estimator of probabilistic classifiers with reduced grouping loss. Finally, we discuss Brier curves as tools to support the training and 'sufficient' calibration of probabilistic classifiers.
Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e. interpolate) the training data. This suggests that the phenomenon of "benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work we provide a theoretical justification for these observations. We prove that -- even in the simplest of settings -- any interpolating learning rule (with arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that -- in the same setting -- successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
This work provides several fundamental characterizations of the optimal classification function under the demographic parity constraint. In the awareness framework, analogously to the classical unconstrained classification case, we show that maximizing accuracy under this fairness constraint is equivalent to solving a corresponding regression problem followed by thresholding at level $1/2$. We extend this result to linear-fractional classification measures (e.g., the ${\rm F}$-score, the AM measure, balanced accuracy, etc.), highlighting the fundamental role played by the regression problem in this framework. Our results leverage recently established connections between demographic parity constraints and multi-marginal optimal transport formulations. Informally, our results show that the transition between the unconstrained problem and the fair one is achieved by replacing the conditional expectation of the label with the solution of a fair regression problem. Finally, leveraging our analysis, we demonstrate an equivalence between the awareness and unawareness setups in the case of two sensitive groups.
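A sketch of the pipeline the abstract describes, under the optimal-transport construction it alludes to (empirical quantile averaging across groups; the helper below is an assumption for illustration, with no guarantees claimed): build a "fair regression" value for each individual and then threshold it at $1/2$.

```python
import numpy as np

def fair_scores_then_threshold(scores, group):
    """scores: estimates of P(Y=1 | X); group: sensitive attribute values."""
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()
    sorted_scores = {g: np.sort(scores[group == g]) for g in groups}
    fair = np.empty_like(scores, dtype=float)
    for g in groups:
        s = scores[group == g]
        # rank of each score within its own group (empirical CDF value)
        u = np.searchsorted(sorted_scores[g], s, side="right") / sorted_scores[g].size
        # weighted average of the group quantile functions evaluated at that rank
        fair[group == g] = sum(w * np.quantile(sorted_scores[h], np.clip(u, 0, 1))
                               for h, w in zip(groups, weights))
    return (fair >= 0.5).astype(int), fair
```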
We introduce two new classes of measures of information for statistical experiments which generalize and subsume $\phi$-divergences, integral probability metrics, $\mathfrak{N}$-distances (MMD), and $(f,\Gamma)$-divergences between two or more distributions. This enables us to derive a simple geometrical relationship between measures of information and the Bayes risk of a statistical decision problem, thus extending the variational $\phi$-divergence representation to multiple distributions in an entirely symmetric manner. The new families of divergences are closed under the action of Markov operators, which yields an information processing equality that is a refinement and generalization of the classical data processing inequality. This equality gives insight into the significance of the choice of the hypothesis class in classical risk minimization.
Addressing fairness concerns about machine learning models is a crucial step towards their long-term adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the robustness of these methods to data corruption. In this work we consider fairness-aware learning under worst-case data manipulations. We show that an adversary can in some situations force any learner to return an overly biased classifier, regardless of the sample size and with or without degrading accuracy, and that the strength of this excess bias increases for learning problems with underrepresented protected groups in the data. We also prove that our hardness results are tight up to constant factors. To this end, we study two natural learning algorithms that optimize for both accuracy and fairness, and show that these algorithms enjoy guarantees that are order-optimal in terms of the corruption ratio and the protected group frequencies in the large data limit.
Algorithmic fairness plays an increasingly critical role in machine learning research. Several group fairness notions and algorithms have been proposed. However, the fairness guarantee of existing fair classification methods mainly depends on specific data distributional assumptions, often requiring large sample sizes, and fairness could be violated when there is a modest number of samples, which is often the case in practice. In this paper, we propose FaiREE, a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees. FaiREE can be adapted to satisfy various group fairness notions (e.g., Equality of Opportunity, Equalized Odds, Demographic Parity, etc.) and achieve the optimal accuracy. These theoretical guarantees are further supported by experiments on both synthetic and real data. FaiREE is shown to have favorable performance over state-of-the-art algorithms.
We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathfrak{C}$ be a class of polynomial-size concepts, and suppose that $\mathfrak{C}$ can be learned from membership queries under the uniform distribution with error $1/2 - \gamma$ by a time-$T$ quantum algorithm. We prove that if $\gamma^2 \cdot T \ll 2^n/n$, then $\mathsf{BQE} \nsubseteq \mathfrak{C}$, where $\mathsf{BQE} = \mathsf{BQTIME}[2^{O(n)}]$ is an exponential-time analogue of $\mathsf{BQP}$. This result is optimal in both $\gamma$ and $T$, since it is not hard to learn any class of functions in (classical) time $T = 2^n$ (with no error), or in quantum time $T = \mathsf{poly}(n)$ with error at most $1/2 - \Omega(2^{-n/2})$ via Fourier sampling. In other words, even a marginal improvement on these generic learning algorithms would lead to major consequences in complexity theory. Our proof builds on several works in learning theory, pseudorandomness, and computational complexity, and crucially, on a connection between non-trivial classical learning algorithms and circuit lower bounds established by Oliveira and Santhanam (CCC 2017). Extending their approach to quantum learning algorithms turns out to present significant challenges. To that end, we show, among other results, how pseudorandom generators imply learning-to-lower-bound connections in a generic fashion, construct the first conditional pseudorandom generator secure against uniform quantum computations, and extend the local list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets and Wigderson (SICOMP 2010) to quantum circuits via a delicate analysis. We believe that these contributions are of independent interest and may find other applications.