Addressing fairness concerns about machine learning models is a crucial step towards their adoption in real-world automated systems. While many approaches have been developed for training fair models from data, little is known about the robustness of these methods to data corruption. In this work we consider fairness-aware learning under worst-case data manipulations. We show that an adversary can in some situations force any learner to return an overly biased classifier, regardless of the sample size and with or without degrading accuracy, and that the strength of this excess bias increases for learning problems in which protected groups are underrepresented in the data. We also prove that our hardness results are tight up to constant factors. To this end, we study two natural learning algorithms that optimize for both accuracy and fairness, and show that these algorithms enjoy guarantees that are order-optimal in terms of the corruption ratio and the protected group frequencies in the large-data limit.
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or general losses, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
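The Python sketch below illustrates one way to read the "three-line blackbox reduction" mentioned in the abstract: run the realizable learner on small subsamples to obtain a finite pool of candidates, then pick the one with lowest error on held-out data. The subset sizes and the split into `S1`/`S2` are assumptions made here for illustration, not the paper's precise construction.

```python
from itertools import combinations

def agnostic_from_realizable(realizable_learner, S1, S2, max_subset_size=3):
    """Sketch: turn a realizable learner into an agnostic one.

    1. Run the realizable learner on every small subsample of S1, collecting
       a finite pool of candidate hypotheses.
    2. Return the candidate with the lowest empirical error on the held-out
       sample S2.
    Subset sizes and the (S1, S2) split are illustrative choices only.
    """
    candidates = []
    for r in range(1, max_subset_size + 1):
        for subset in combinations(S1, r):
            candidates.append(realizable_learner(list(subset)))

    def empirical_error(h, sample):
        return sum(h(x) != y for x, y in sample) / len(sample)

    return min(candidates, key=lambda h: empirical_error(h, S2))
```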
We consider a model of robust learning in an adversarial environment. The learner gets uncorrupted training data, together with access to the possible corruptions that the adversary may apply at test time. The learner's goal is to build a robust classifier that will be evaluated on future adversarial examples. The adversary is limited to $k$ possible corruptions for each input. We model the learner-adversary interaction as a zero-sum game. This model is closely related to the adversarial examples model of Schmidt et al. (2018) and Madry et al. (2017). Our main results consist of generalization bounds for binary and multi-class classification, as well as the real-valued case (regression). For the binary classification setting, we both tighten the generalization bound of Feige et al. (2015) and are also able to handle infinite hypothesis classes. The sample complexity is improved from $O\big(\frac{1}{\epsilon^4}\log(\frac{|H|}{\delta})\big)$ to $O\big(\frac{1}{\epsilon^2}\big(k\,\mathrm{VC}(H)\log^{\frac{3}{2}+\alpha}(k\,\mathrm{VC}(H)) + \log(\frac{1}{\delta})\big)\big)$ for any $\alpha > 0$. Additionally, we extend the algorithm and generalization bounds from the binary to the multi-class and real-valued cases. Along the way, we obtain results on the fat-shattering dimension and Rademacher complexity of $k$-fold maxima over function classes; these may be of independent interest. For binary classification, the algorithm of Feige et al. (2015) uses a regret-minimization algorithm and an ERM oracle as a black box; we adapt it for the multi-class and regression settings. The algorithm provides us with near-optimal policies for the players on a given training sample.
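For intuition, here is a schematic Python sketch of the generic no-regret / ERM-oracle recipe for approximately solving learner-adversary zero-sum games of this kind. The sampling of corruptions, the multiplicative update rule, and the choice of which player runs the no-regret dynamics are illustrative assumptions and should not be read as the paper's exact algorithm.

```python
import math
import random

def robust_zero_sum_sketch(erm_oracle, data, corruptions, rounds=50, eta=0.5):
    """Schematic multiplicative-weights / ERM-oracle dynamics.

    data           : list of (x, y) pairs
    corruptions(x) : returns the k allowed corrupted versions of x
    erm_oracle(S)  : returns a hypothesis minimizing empirical error on S
    All constants and the update rule are illustrative assumptions.
    """
    weights = [[1.0] * len(corruptions(x)) for x, _ in data]
    hypotheses = []
    for _ in range(rounds):
        # Adversary's mixed strategy: sample one corruption per example.
        corrupted = []
        for (x, y), w in zip(data, weights):
            z = random.choices(corruptions(x), weights=w, k=1)[0]
            corrupted.append((z, y))
        # Learner best-responds via the ERM oracle on the corrupted sample.
        h = erm_oracle(corrupted)
        hypotheses.append(h)
        # Adversary upweights corruptions that the current hypothesis gets wrong.
        for (x, y), w in zip(data, weights):
            for j, z in enumerate(corruptions(x)):
                if h(z) != y:
                    w[j] *= math.exp(eta)
    # The learner's final randomized classifier: uniform mixture over rounds.
    return hypotheses
```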
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes. Protected attributes may be set incorrectly due to strategic misreporting, malicious actors, or errors in imputation, and existing approaches that make stochastic or independence assumptions on such errors may not satisfy their guarantees in this adversarial setting. Our main contribution is an optimization framework for learning fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness. Our framework works with multiple and non-binary protected attributes, is designed for the large class of linear-fractional fairness metrics, and can also handle perturbations beyond the protected attributes. We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy, and any algorithm with better fairness must have lower accuracy. Empirically, we evaluate the classifiers produced by our framework with respect to statistical rate on real-world and synthetic datasets against a family of adversaries.
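As a concrete reference point, the sketch below computes the statistical rate (the ratio of positive-prediction rates across two protected groups) and shows how flipping an $\eta$-fraction of the protected attributes changes the measured value. The metric is the standard statistical-rate definition; the data and the flipping scheme are purely illustrative.

```python
import numpy as np

def statistical_rate(y_pred, group):
    """Ratio of positive-prediction rates between the two groups
    (values close to 1 indicate statistical parity)."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return min(rate0, rate1) / max(rate0, rate1)

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)    # classifier decisions
group = rng.integers(0, 2, size=1000)     # true protected attribute
eta = 0.1
flip = rng.random(1000) < eta             # adversary flips an eta-fraction
observed_group = np.where(flip, 1 - group, group)

print("statistical rate (true attributes):     ", statistical_rate(y_pred, group))
print("statistical rate (perturbed attributes):", statistical_rate(y_pred, observed_group))
```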
A backdoor data poisoning attack is an adversarial attack in which the attacker injects several watermarked, mislabeled training examples into the training set. The watermark does not affect the model's test-time performance on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use it to analyze important statistical and computational issues surrounding these attacks. On the statistical front, we identify a parameter we call the memorization capacity that captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results favoring the attacker involve explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that under similar assumptions, two closely related problems that we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain a learning algorithm that both generalizes well to unseen data and is robust to backdoors.
We study a fundamental question concerning adversarial noise models in statistical problems where the algorithm receives i.i.d. draws from a distribution $\mathcal{D}$. The definitions of these adversaries specify the type of allowable corruptions (the noise model) as well as when these corruptions can be made (adaptivity); the latter distinguishes between oblivious adversaries, which can only corrupt the distribution $\mathcal{D}$, and adaptive adversaries, whose corruptions can depend on the specific sample $S$ drawn from $\mathcal{D}$. In this work, we investigate whether oblivious adversaries are effectively equivalent to adaptive adversaries across all noise models studied in the literature. Specifically, can the behavior of an algorithm $\mathcal{A}$ in the presence of oblivious adversaries always be well approximated by that of an algorithm $\mathcal{A}'$ in the presence of adaptive adversaries? Our first result shows that this is indeed the case for the broad class of statistical query algorithms, under all reasonable noise models. We then show that in the specific case of additive noise, this equivalence holds for all algorithms. Finally, we map out an approach towards answering this question in full generality, for all algorithms and all reasonable noise models.
Modern machine learning tasks often require considering not just one but multiple objectives. For example, besides prediction quality, these could be the efficiency, robustness, or fairness of the learned model, or any combination thereof. Multi-objective learning offers a natural framework for handling such problems without having to commit to early trade-offs. Surprisingly, statistical learning theory so far offers almost no insight into the generalization properties of multi-objective learning. In this work, we take first steps towards filling this gap: we establish foundational generalization bounds for the multi-objective setting, as well as generalization and excess-risk bounds for learning with scalarizations. We also provide the first theoretical analysis of the relation between the Pareto-optimal sets of the true objectives and the Pareto-optimal sets of their empirical approximations from training data. In particular, we show a surprising asymmetry: all Pareto-optimal solutions can be approximated by empirically Pareto-optimal ones, but not vice versa.
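Discussing empirical approximations of the Pareto-optimal set presupposes computing empirical Pareto fronts; the small Python sketch below shows how to extract the Pareto-optimal solutions from a finite set of candidates with two loss-type objectives (lower is better). The candidate values are purely illustrative.

```python
def pareto_optimal(candidates):
    """Return the candidates not dominated by any other candidate.
    Each candidate is a tuple of objective values (lower is better)."""
    def dominates(a, b):
        return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

# Example: (empirical risk, empirical unfairness) for a few candidate models.
models = [(0.10, 0.30), (0.12, 0.20), (0.20, 0.05), (0.15, 0.25), (0.11, 0.35)]
print(pareto_optimal(models))   # the last two candidates are dominated
```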
Multi-group agnostic learning is a formal learning criterion concerned with the conditional risks of predictors within subgroups of a population. The criterion addresses recent practical concerns such as subgroup fairness and hidden stratification. This paper studies the structure of solutions to the multi-group learning problem and provides simple and near-optimal algorithms for it.
Real-world applications of machine learning tools in high-stakes domains are often regulated to be fair, in the sense that the predicted target should satisfy some quantitative notion of parity with respect to a protected attribute. However, the exact trade-off between fairness and accuracy is not entirely clear, even for the basic paradigm of classification problems. In this paper, we characterize an inherent trade-off between statistical parity and accuracy in the classification setting by providing a lower bound on the sum of group-wise errors of any fair classifier. Our impossibility theorem can be interpreted as a certain uncertainty principle for fairness: if the base rates differ between groups, then any fair classifier satisfying statistical parity must incur a large error on at least one of the groups. We further extend this result to give a lower bound on the joint error of any (approximately) fair classifier, from the perspective of learning fair representations. To show that our lower bound is tight, assuming oracle access to Bayes-optimal (potentially unfair) classifiers, we also construct an algorithm that returns a randomized classifier which is both optimal and fair. Interestingly, when the protected attribute can take more than two values, the extension of this lower bound does not admit an analytic solution. Nevertheless, in this case we show that the lower bound can be computed efficiently by solving a linear program, which we term the TV-barycenter problem, a barycenter problem under the TV distance. On the upside, we prove that if the group-wise Bayes-optimal classifiers are close to each other, then learning fair representations leads to an alternative notion of fairness, known as accuracy parity, which requires the error rates to be close across groups. Finally, we conduct experiments on real-world datasets to confirm our theoretical findings.
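For reference, in the binary-group case the statistical-parity lower bound described above takes roughly the following shape; the notation here is a reconstruction from the abstract and should be checked against the paper.

```latex
% Let \Delta_{BR} = \big| \Pr_{D_0}[Y=1] - \Pr_{D_1}[Y=1] \big| denote the
% difference of base rates between the two groups. Then for any classifier
% \widehat{Y} satisfying (exact) statistical parity,
\operatorname{Err}_{D_0}(\widehat{Y}) + \operatorname{Err}_{D_1}(\widehat{Y}) \;\ge\; \Delta_{BR}.
```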
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but it raises several computational challenges. It is no longer clear how to even check or audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. However, it also suggests that common heuristics for learning can be applied to successfully solve the auditing problem in practice. We then derive two algorithms that provably converge to the best fair distribution over classifiers in a given class, given access to oracles which can optimally solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner (the primal player) and an Auditor (the dual player). Both algorithms compute an equilibrium of this game. We obtain our first algorithm by simulating play of the game by having the Learner play an instance of the no-regret Follow the Perturbed Leader algorithm, and having the Auditor play best response. This algorithm provably converges to an approximate Nash equilibrium (and thus to an approximately optimal subgroup-fair distribution over classifiers) in a polynomial number of steps. We obtain our second algorithm by simulating play of the game by having both players play Fictitious Play, which enjoys only provably asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the Fictitious Play version using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
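The Learner/Auditor dynamics described above can be sketched schematically as follows. The oracles, the penalty update, and the use of a plain best response on both sides are simplifications (the paper's first algorithm has the Learner run Follow the Perturbed Leader), so this is an illustration of the game loop rather than the paper's algorithm.

```python
def subgroup_fairness_game(learner_oracle, auditor_oracle, data, rounds=100, lam_step=0.1):
    """Schematic Learner/Auditor dynamics for subgroup fairness.

    learner_oracle(data, penalties) -> classifier minimizing error plus the
        current fairness penalties (stands in for the agnostic-learning oracle).
    auditor_oracle(data, classifier) -> (subgroup, violation) with the largest
        fairness violation of the current classifier (the auditing problem).
    The uniform mixture of the returned classifiers approximates an
    equilibrium strategy of the zero-sum game.
    """
    penalties = {}                 # dual weights on audited subgroups
    classifiers = []
    for _ in range(rounds):
        h = learner_oracle(data, penalties)            # Learner: approximate best response
        classifiers.append(h)
        subgroup, violation = auditor_oracle(data, h)  # Auditor: most-violated subgroup
        if violation > 0:
            penalties[subgroup] = penalties.get(subgroup, 0.0) + lam_step * violation
    return classifiers   # final predictor: uniform mixture over these classifiers
```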
Discrimination in machine learning often arises along multiple dimensions (a.k.a. protected attributes); it is therefore desirable to ensure \emph{intersectional fairness}, i.e., that no subgroup is discriminated against. It is known that ensuring \emph{marginal fairness} for every dimension independently is not sufficient. However, due to the exponential number of subgroups, directly measuring intersectional fairness from data is infeasible. In this paper, our primary goal is to understand in detail the relationship between marginal and intersectional fairness through statistical analysis. We first identify a set of sufficient conditions under which an exact relationship can be obtained. Then, for the general case, we prove high-probability bounds on intersectional fairness that are easily computable from marginal fairness and other meaningful statistical quantities. Beyond their descriptive value, these theoretical bounds can be leveraged to derive a heuristic that improves the approximation and bounds of intersectional fairness by choosing, in a relevant manner, the protected attributes over which intersectional subgroups are defined. Finally, we test the performance of our approximations and bounds on real and synthetic datasets.
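To make the marginal/intersectional distinction concrete, the sketch below computes both quantities (as positive-prediction-rate gaps) for two binary protected attributes on synthetic data; the gap-based metric and the simulated predictor are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def parity_gap(y_pred, mask_a, mask_b):
    """Absolute difference in positive-prediction rate between two subpopulations."""
    return abs(y_pred[mask_a].mean() - y_pred[mask_b].mean())

rng = np.random.default_rng(1)
n = 5000
a = rng.integers(0, 2, n)      # protected attribute 1 (e.g. gender)
b = rng.integers(0, 2, n)      # protected attribute 2 (e.g. age group)
# A predictor that disadvantages the (a=1, b=1) subgroup more strongly than
# either attribute reveals on its own.
y_pred = rng.random(n) < np.where((a == 1) & (b == 1), 0.2, 0.5)

marginal_a = parity_gap(y_pred, a == 0, a == 1)
marginal_b = parity_gap(y_pred, b == 0, b == 1)
intersectional = max(
    parity_gap(y_pred, (a == i) & (b == j), ~((a == i) & (b == j)))
    for i in (0, 1) for j in (0, 1)
)
print(marginal_a, marginal_b, intersectional)   # the intersectional gap is the largest
```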
Learned classifiers should often possess certain invariance properties meant to encourage fairness, robustness, or out-of-distribution generalization. However, multiple recent works empirically demonstrate that common invariance-inducing regularizers are ineffective in the over-parameterized regime, in which classifiers perfectly fit (i.e. interpolate) the training data. This suggests that the phenomenon of ``benign overfitting," in which models generalize well despite interpolating, might not favorably extend to settings in which robustness or fairness are desirable. In this work we provide a theoretical justification for these observations. We prove that -- even in the simplest of settings -- any interpolating learning rule (with arbitrarily small margin) will not satisfy these invariance properties. We then propose and analyze an algorithm that -- in the same setting -- successfully learns a non-interpolating classifier that is provably invariant. We validate our theoretical observations on simulated data and the Waterbirds dataset.
Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence. To better understand this phenomenon, we study adversarially robust learning from the viewpoint of generalization. We show that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning. This gap is information theoretic and holds irrespective of the training algorithm or the model family. We complement our theoretical results with experiments on popular image classification datasets and show that a similar gap exists here as well. We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
The fast-spreading adoption of machine learning (ML) by companies across industries poses significant regulatory challenges. One such challenge is scalability: how can regulatory bodies efficiently audit these ML models to ensure that they are fair? In this paper, we initiate the study of query-based auditing algorithms that can estimate the demographic parity of ML models in a query-efficient manner. We propose an optimal deterministic algorithm, as well as a practical randomized, oracle-efficient algorithm with comparable guarantees. Furthermore, we characterize the optimal query complexity of randomized active fairness-estimation algorithms. Our first exploration of active fairness estimation aims to put AI governance on firmer theoretical foundations.
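The sketch below shows the simplest baseline a query-based auditor could use: estimate each group's positive-prediction rate from uniformly sampled queries to the black-box model and report the gap, with a Hoeffding-style per-group sample size. The paper's optimal deterministic and oracle-efficient algorithms are more refined than this; the constants and sampling scheme here are illustrative assumptions.

```python
import math
import random

def estimate_demographic_parity(model_query, pool_by_group, eps=0.05, delta=0.05):
    """Estimate the demographic parity gap of a black-box model with
    O(log(1/delta)/eps^2) queries per group (plain uniform-sampling baseline).

    model_query(x) -> 0/1 prediction obtained by querying the audited model.
    pool_by_group  -> dict mapping each group to a list of its feature vectors.
    """
    m = math.ceil(math.log(4 / delta) / (2 * eps ** 2))   # per-group sample size (illustrative constants)
    rates = {}
    for g, pool in pool_by_group.items():
        sample = [random.choice(pool) for _ in range(m)]
        rates[g] = sum(model_query(x) for x in sample) / m
    return max(rates.values()) - min(rates.values())
```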
This paper studies offline policy learning, which aims at utilizing observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics are lower bounded in the offline dataset; put differently, the performance of the existing methods depends on the worst-case propensity in the offline dataset. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose a new algorithm that optimizes lower confidence bounds (LCBs) -- instead of point estimates -- of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which only depends on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized type concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein's inequality to unbounded and non-i.i.d. data.
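A minimal sketch of the pessimism principle described above: estimate each candidate policy's value with inverse-propensity weighting and select the policy maximizing a lower confidence bound rather than the point estimate. The width of the confidence term below is a simple placeholder, not the paper's data-dependent bound built from its self-normalized concentration inequality.

```python
import numpy as np

def lcb_policy_selection(policies, X, A, R, propensities, alpha=1.0):
    """Offline policy selection via lower confidence bounds on IPW value estimates.

    policies     : list of functions pi(x) -> action
    X, A, R      : logged contexts, actions, rewards from the behavior policy
    propensities : behavior-policy probabilities of the logged actions, P(A_i | X_i)
    alpha        : width multiplier of the (placeholder) confidence term
    """
    prop = np.asarray(propensities, dtype=float)
    n = len(R)
    best_policy, best_lcb = None, -np.inf
    for pi in policies:
        match = np.array([pi(x) == a for x, a in zip(X, A)], dtype=float)
        values = (match / prop) * np.asarray(R, dtype=float)   # IPW value samples
        point = values.mean()
        width = alpha * values.std(ddof=1) / np.sqrt(n)        # placeholder confidence width
        if point - width > best_lcb:
            best_lcb, best_policy = point - width, pi
    return best_policy, best_lcb
```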
One of the major concerns in targeting interventions to individuals in social welfare programs is discrimination: individualized treatments may induce disparities across sensitive attributes such as age, gender, or race. This paper addresses the design of fair and efficient treatment allocation rules. We adopt the non-maleficence perspective of "first, do no harm": we select the fairest allocation within the Pareto frontier. We cast the optimization into a mixed-integer linear program formulation, which can be solved using off-the-shelf algorithms. We derive regret bounds on the unfairness of the estimated policy function and guarantees relative to the Pareto frontier, under general notions of fairness. Finally, we illustrate our method with an application from education economics.
Using the framework of boosting, we prove that all impurity-based decision tree learning algorithms, including the classic ID3, C4.5, and CART, are highly noise tolerant. Our guarantees hold under the strongest noise model of nasty noise, and we provide near-matching upper and lower bounds on the allowable noise rate. We further show that these algorithms, which are simple and have long been central to everyday machine learning, enjoy provable guarantees in the noisy setting that are unmatched by existing algorithms in the theoretical literature on decision tree learning. Taken together, our results add to an ongoing line of research that seeks to place the empirical success of these practical decision tree algorithms on firm theoretical footing.
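For concreteness, the sketch below shows the shared template these impurity-based algorithms follow at each node: score every candidate split by the reduction in an impurity function (Gini impurity here, as used by CART; ID3 and C4.5 use entropy-based criteria) and recurse on the best one. Stopping rules and the recursion itself are omitted, so this is an illustration of the split-selection step rather than any single algorithm.

```python
def gini(labels):
    """Gini impurity of a list of binary labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(X, y):
    """Pick the (feature, threshold) whose split most reduces Gini impurity."""
    best = None
    base = gini(y)
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [yi for row, yi in zip(X, y) if row[j] <= t]
            right = [yi for row, yi in zip(X, y) if row[j] > t]
            if not left or not right:
                continue
            weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            gain = base - weighted
            if best is None or gain > best[0]:
                best = (gain, j, t)
    return best   # (impurity reduction, feature index, threshold)
```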
Recently, Robey et al. propose a notion of probabilistic robustness, which, at a high-level, requires a classifier to be robust to most but not all perturbations. They show that for certain hypothesis classes where proper learning under worst-case robustness is \textit{not} possible, proper learning under probabilistic robustness \textit{is} possible with sample complexity exponentially smaller than in the worst-case robustness setting. This motivates the question of whether proper learning under probabilistic robustness is always possible. In this paper, we show that this is \textit{not} the case. We exhibit examples of hypothesis classes $\mathcal{H}$ with finite VC dimension that are \textit{not} probabilistically robustly PAC learnable with \textit{any} proper learning rule. However, if we compare the output of the learner to the best hypothesis for a slightly \textit{stronger} level of probabilistic robustness, we show that not only is proper learning \textit{always} possible, but it is possible via empirical risk minimization.
In this work, we investigate the expressiveness of the "conditional mutual information" (CMI) framework of Steinke and Zakynthinou (2020) and the prospect of using it to provide a unified framework for proving generalization bounds in the realizable setting. We first demonstrate that one can use this framework to express non-trivial (but sub-optimal) bounds for any learning algorithm that outputs hypotheses from a class of bounded VC dimension. We then prove that the CMI framework yields the optimal bound on the expected risk for learning halfspaces. This result is an application of our general result showing that stable compression schemes (Bousquet et al., 2020) of size $k$ have uniformly bounded CMI of order $O(k)$. We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, which implies a negative resolution to an open problem of Steinke and Zakynthinou (2020). We further study the CMI of empirical risk minimizers (ERMs) of a class $H$ and show that it is possible to output all consistent classifiers (the version space) with bounded CMI if and only if $H$ has a bounded star number (Hanneke and Yang, 2015). Moreover, we prove a general reduction showing that "leave-one-out" analysis is expressible via the CMI framework. As a corollary, we investigate the CMI of the one-inclusion-graph algorithm of Haussler et al. (1994). More generally, we show that the CMI framework is universal, in the sense that for every consistent algorithm and data distribution, the expected risk vanishes as the number of samples grows if and only if its evaluated CMI has sublinear growth with the number of samples.
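For orientation, the conditional mutual information quantity discussed above is, up to notation, defined as follows; this is a reconstruction of the Steinke-Zakynthinou definition and should be read as such.

```latex
% \tilde{Z} \in \mathcal{Z}^{n \times 2} consists of 2n i.i.d. samples from the
% data distribution D, arranged in n pairs, and S \sim \mathrm{Unif}(\{0,1\}^n)
% selects one sample from each pair to form the training set \tilde{Z}_S.
\mathrm{CMI}_{D}(A) \;=\; I\big(A(\tilde{Z}_S);\, S \,\big|\, \tilde{Z}\big)
```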