A reconstruction attack on a private dataset $D$ takes as input some publicly accessible information about the dataset and produces a list of candidate elements of $D$. We introduce a new class of data reconstruction attacks based on randomized methods for non-convex optimization. We empirically demonstrate that our attacks can not only reconstruct full rows of $D$ from aggregate query statistics $Q(D)\in \mathbb{R}^m$, but can do so in a way that reliably ranks reconstructed rows by their odds of appearing in the private data, providing a signature that could be used for prioritizing reconstructed rows for further actions such as identity theft or hate crimes. We also design a sequence of baselines for evaluating reconstruction attacks. Our attacks significantly outperform those that are based only on access to a public distribution or population from which the private dataset $D$ was sampled, demonstrating that they are exploiting information in the aggregate statistics $Q(D)$, and not simply the overall structure of the distribution. In other words, the queries $Q(D)$ are permitting reconstruction of elements of this dataset, not the distribution from which $D$ was drawn. These findings are established on both 2010 U.S. decennial Census data and queries, and on Census-derived American Community Survey datasets. Taken together, our methods and experiments illustrate the risks in releasing numerically precise aggregate statistics of a large dataset, and provide further motivation for the careful application of provably private techniques such as differential privacy.
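A minimal sketch of the flavor of such an attack, not the authors' exact algorithm: treat reconstruction as randomized non-convex optimization over a relaxed candidate dataset, restart many times, and rank candidate rows by how often independent restarts recover them. The `query_fn` interface, the continuous relaxation, and the SPSA-style update are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def reconstruct(query_fn, observed_stats, n_rows, n_cols,
                restarts=50, steps=2000, lr=0.05, c=0.01, seed=0):
    """Randomized non-convex search for datasets whose aggregate statistics
    match the released answers; a rough sketch, not the paper's procedure."""
    rng = np.random.default_rng(seed)
    counts = Counter()

    def loss(X):
        return np.sum((query_fn(X) - observed_stats) ** 2)

    for _ in range(restarts):
        X = rng.random((n_rows, n_cols))              # relaxed, continuous candidate dataset
        for _ in range(steps):
            delta = rng.choice([-1.0, 1.0], size=X.shape)
            g = (loss(X + c * delta) - loss(X - c * delta)) / (2 * c) * delta
            X = np.clip(X - lr * g, 0.0, 1.0)         # SPSA-style descent step
        for row in np.round(X).astype(int):           # snap rows to the discrete domain
            counts[tuple(row)] += 1

    # rows recovered by many independent restarts are ranked as more likely
    # to be genuine members of the private dataset D
    return counts.most_common()
```

Here `query_fn(X)` is assumed to compute the released aggregate statistics $Q(X) \in \mathbb{R}^m$ for a candidate dataset $X$.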
We present Multiple Scenario Verifiable Reinforcement Learning via Policy Extraction (MSVIPER), a new method for policy distillation into decision trees for improved robot navigation. MSVIPER learns an "expert" policy using any reinforcement learning (RL) technique, which involves learning a state-action mapping, and then uses imitation learning to learn a decision-tree policy from it. We demonstrate that MSVIPER results in efficient decision trees that accurately mimic the behavior of the expert policy. Moreover, we present efficient policy-distillation and tree-modification techniques that exploit the decision-tree structure to improve a policy without retraining. We use our approach to improve the performance of RL-based robot navigation algorithms for indoor and outdoor scenes, and demonstrate the benefits in terms of reduced freezing and oscillation behaviors (up to a 95% reduction).
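A simplified sketch of expert-to-tree policy distillation in the spirit described above (DAgger-style imitation into a decision tree), not MSVIPER itself; `expert_policy` and the gym-like `env` interface are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill_to_tree(expert_policy, env, iterations=5, rollouts=20, max_depth=8):
    """DAgger-style distillation of an RL expert into a decision tree.
    `env` is assumed to follow the classic gym reset()/step() interface."""
    states, actions = [], []
    tree = None
    for _ in range(iterations):
        for _ in range(rollouts):
            state, done = env.reset(), False
            while not done:
                # act with the current student tree (expert on the first pass),
                # but always record the expert's action as the training label
                expert_action = expert_policy(state)
                action = expert_action if tree is None else tree.predict([state])[0]
                states.append(state)
                actions.append(expert_action)
                state, _, done, _ = env.step(action)
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(np.array(states), np.array(actions))
    return tree  # an interpretable state -> action mapping, amenable to pruning and editing
```

Because the resulting policy is an explicit tree, it can be inspected and edited after training, which is the property the tree-modification techniques above rely on.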
We provide a differentially private algorithm for generating synthetic data that is simultaneously useful for multiple tasks: marginal queries and multitask machine learning (ML). A key innovation in our algorithm is the ability to handle numerical features directly, in contrast to many related prior approaches, which require first converting numerical features into high-cardinality categorical features via a binning strategy. Higher binning granularity is required for better accuracy, but this negatively impacts scalability. Eliminating the need for binning allows us to produce synthetic data that preserves large numbers of statistical queries, such as marginals over numerical features and conditional linear threshold queries. Preserving the latter means that the fraction of points of each class lying above a particular halfspace is roughly the same in both the real and synthetic data. This is the property needed to train linear classifiers in a multitask setting. Our algorithm also allows us to produce high-quality synthetic data for mixed marginal queries, which combine categorical and numerical features. Our method is consistently 2-5x faster than the best comparable techniques, and provides significant accuracy improvements on both marginal queries and linear prediction tasks for mixed-type datasets.
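To make the preserved query class concrete, here is a minimal sketch of how a conditional linear threshold query is evaluated and compared between real and synthetic data; it illustrates what "preserving" such queries means, not the private generation algorithm itself, and all names are illustrative.

```python
import numpy as np

def conditional_linear_threshold(X, y, w, b, label):
    """Fraction of rows with class `label` that lie above the halfspace <w, x> >= b."""
    mask = (y == label)
    if mask.sum() == 0:
        return 0.0
    return float(np.mean(X[mask] @ w >= b))

def max_query_error(real, synth, queries):
    """Worst-case discrepancy between real and synthetic data over a set of
    (w, b, label) threshold queries -- the quantity the synthetic data should keep small."""
    X_r, y_r = real
    X_s, y_s = synth
    return max(abs(conditional_linear_threshold(X_r, y_r, w, b, c)
                   - conditional_linear_threshold(X_s, y_s, w, b, c))
               for (w, b, c) in queries)
```

If this error is small for all halfspaces of interest, a linear classifier trained on the synthetic data sees approximately the same class-conditional geometry as one trained on the real data.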
We show how to take a regression function $\hat{f}$ that is appropriately ``multicalibrated'' and efficiently post-process it into an approximately error-minimizing classifier satisfying a wide variety of fairness constraints. The post-processing requires no labeled data, only a modest amount of unlabeled data and computation. The computational and sample complexity of computing $\hat f$ is comparable to that of solving a single fair learning task, but it can in fact be used to efficiently solve many different downstream fairness-constrained learning problems. Our post-processing method easily handles intersecting groups, generalizing prior work on post-processing regression functions to satisfy fairness constraints that applied only to disjoint groups. Our work extends recent work showing that multicalibrated regression functions are ``omnipredictors'' (i.e., can be post-processed to optimally solve unconstrained ERM problems) to constrained optimization.
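A toy illustration of the label-free post-processing idea, not the paper's algorithm: because a calibrated score estimates the conditional label probability, the error of any thresholding rule can be estimated from unlabeled data alone, so per-group thresholds can be chosen to trade error against a fairness constraint (statistical parity in this sketch). The function names, the grid search, and the specific constraint are illustrative assumptions.

```python
import numpy as np

def postprocess_thresholds(scores, groups, grid=np.linspace(0, 1, 101), tol=0.02):
    """Pick per-group thresholds on a (multi)calibrated score f_hat that approximately
    equalize positive rates across groups, estimating classification error from the
    scores alone -- no labels needed."""
    group_ids = np.unique(groups)
    best = None
    for target_rate in grid:                       # common positive rate to aim for
        thresholds, est_error, feasible = {}, 0.0, True
        for g in group_ids:
            s = scores[groups == g]
            # candidate thresholds whose positive rate is close to the target
            cands = [t for t in grid if abs(np.mean(s >= t) - target_rate) <= tol]
            if not cands:
                feasible = False
                break
            # calibration lets us estimate error without labels:
            # predicting 1 costs (1 - f_hat), predicting 0 costs f_hat
            err = lambda t: np.mean(np.where(s >= t, 1.0 - s, s))
            t_best = min(cands, key=err)
            thresholds[g] = t_best
            est_error += err(t_best) * len(s) / len(scores)
        if feasible and (best is None or est_error < best[1]):
            best = (thresholds, est_error)
    return best   # (per-group thresholds, estimated overall error)
```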
Individual probabilities refer to the probabilities of outcomes that are realized only once: the probability that it will rain tomorrow, the probability that Alice will die within the next 12 months, the probability that Bob will be arrested for a violent crime within the next 18 months, and so on. Individual probabilities are fundamentally unknowable. Nevertheless, we show that two parties who agree on the data, or on how to sample from a data distribution, cannot agree to disagree about how to model individual probabilities. This is because any two models of individual probabilities that substantially disagree can be used to empirically falsify and improve one of the two models. This can be iterated efficiently in a process of "reconciliation" that results in models both parties agree are superior to the ones they started with, and which themselves (almost) agree on their predictions of individual probabilities (almost) everywhere. We conclude that although individual probabilities are unknowable, they are contestable via a computationally and data-efficient process that must lead to consensus. Thus we cannot find ourselves with two equally accurate and unimprovable models that substantially disagree in their predictions, providing an answer to what is sometimes called the predictive or model multiplicity problem.
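A toy sketch of the reconciliation idea, assuming `f1` and `f2` are probability predictors and `(X, y)` is a labeled sample: wherever the two models disagree substantially, the empirically less accurate one is patched toward the observed outcome frequency, and the process repeats. This is illustrative only; the actual procedure and its guarantees differ in details.

```python
import numpy as np

def reconcile(f1, f2, X, y, alpha=0.1, max_rounds=100):
    """Iteratively falsify-and-improve two disagreeing probability models."""
    p1, p2 = f1(X).astype(float), f2(X).astype(float)
    for _ in range(max_rounds):
        disagree = np.abs(p1 - p2) > alpha
        if disagree.mean() < alpha:                  # (almost) everywhere agreement
            break
        # split the disagreement region by which model predicts higher
        for region in (disagree & (p1 > p2), disagree & (p1 <= p2)):
            if region.sum() == 0:
                continue
            target = y[region].mean()                # empirical outcome frequency
            e1 = np.mean((p1[region] - y[region]) ** 2)
            e2 = np.mean((p2[region] - y[region]) ** 2)
            if e1 > e2:                              # falsify and improve the worse model
                p1[region] = np.clip(p1[region] + (target - p1[region].mean()), 0, 1)
            else:
                p2[region] = np.clip(p2[region] + (target - p2[region].mean()), 0, 1)
    return p1, p2                                    # more accurate, and (almost) in agreement
```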
We consider an online learning problem with one-sided feedback, in which the learner can observe the true label only for positively predicted instances. In each round, $k$ instances arrive and receive classification outcomes according to a randomized policy deployed by the learner, whose goal is to maximize accuracy while deploying individually fair policies. We first extend the framework of Bechavod et al. (2020), which relies on the existence of a human fairness auditor for detecting fairness violations, to instead incorporate feedback from dynamically selected panels of multiple auditors. We then construct an efficient reduction from our problem of online learning with one-sided feedback and panel-reported fairness violations to the contextual combinatorial semi-bandit problem (Cesa-Bianchi & Lugosi, 2009; György et al., 2007). Finally, we show how to leverage the guarantees of two algorithms in the contextual combinatorial semi-bandit setting, Exp2 (Bubeck et al., 2012) and the oracle-efficient Context-Semi-Bandit-FTPL (Syrgkanis et al.), in our setting. The human biases that might be present in any single human auditor can be mitigated by selecting a well-chosen panel.
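A schematic of the interaction protocol described above, assuming hypothetical `policy` and `auditors` interfaces; it only illustrates the one-sided feedback and panel structure, not the semi-bandit reduction or its guarantees.

```python
import numpy as np

def one_sided_feedback_round(policy, auditors, X_round, true_labels, rng):
    """One round: classify k arriving instances, reveal labels only for positives,
    and collect fairness reports from a dynamically selected panel of auditors."""
    # the randomized policy outputs a probability of positive classification per instance
    probs = policy(X_round)
    preds = rng.random(len(probs)) < probs
    # one-sided feedback: labels are revealed only for positively predicted instances
    observed = {i: true_labels[i] for i in np.flatnonzero(preds)}
    # a dynamically selected panel of auditors reports perceived fairness violations
    panel = rng.choice(len(auditors), size=min(3, len(auditors)), replace=False)
    violations = [auditors[j](X_round, preds) for j in panel]
    return preds, observed, violations
```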
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of high-level, pre-defined groups (such as race or gender), and then ask for approximate parity of some statistic of the classifier (like positive classification rate or false positive rate) across these groups. Constraints of this form are susceptible to (intentional or inadvertent) fairness gerrymandering, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes (such as certain combinations of protected attribute values). We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but it raises several computational challenges. It is no longer clear how to even check or audit a fixed classifier to see if it satisfies such a strong definition of fairness. We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. However, it also suggests that common heuristics for learning can be applied to successfully solve the auditing problem in practice. We then derive two algorithms that provably converge to the best fair distribution over classifiers in a given class, given access to oracles which can optimally solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner (the primal player) and an Auditor (the dual player). Both algorithms compute an equilibrium of this game. We obtain our first algorithm by simulating play of the game by having the Learner play an instance of the no-regret Follow the Perturbed Leader algorithm, and having the Auditor play best response. This algorithm provably converges to an approximate Nash equilibrium (and thus to an approximately optimal subgroup-fair distribution over classifiers) in a polynomial number of steps. We obtain our second algorithm by simulating play of the game by having both players play Fictitious Play, which enjoys only provable asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the Fictitious Play version using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
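A compact sketch of the auditing step, using linear regression as the heuristic weak agnostic learner in the spirit of the implementation described above; the exact cost-sensitive formulation differs, and all names here are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def audit_fp_subgroup(X_protected, y, preds, overall_fpr):
    """Heuristic audit: search for a subgroup over the protected attributes whose
    false-positive rate deviates most from the overall rate, weighted by its size."""
    neg = (y == 0)                       # false positives only concern true negatives
    # residual signal: does this point's positive prediction exceed the overall FP rate?
    residual = preds[neg].astype(float) - overall_fpr
    oracle = LinearRegression().fit(X_protected[neg], residual)
    scores = oracle.predict(X_protected[neg])
    group = scores > 0                   # predicted membership in a violating subgroup
    if group.sum() == 0:
        return None, 0.0
    disparity = abs(preds[neg][group].mean() - overall_fpr) * group.mean()
    return oracle, disparity             # subgroup model and its size-weighted violation
```

In the game formulation, the Auditor repeatedly runs a step like this against the Learner's current distribution over classifiers, and the Learner responds to the subgroups found.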
Objectives: Discussions of fairness in criminal justice risk assessments typically lack conceptual precision. Rhetoric too often substitutes for careful analysis. In this paper, we seek to clarify the tradeoffs between different kinds of fairness and between fairness and accuracy. Methods: We draw on the existing literatures in criminology, computer science and statistics to provide an integrated examination of fairness and accuracy in criminal justice risk assessments. We also provide an empirical illustration using data from arraignments. Results: We show that there are at least six kinds of fairness, some of which are incompatible with one another and with accuracy. Conclusions: Except in trivial cases, it is impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness. In practice, a major complication is different base rates across different legally protected groups. There is a need to consider challenging tradeoffs.
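One standard way to see the base-rate complication concretely: the identity FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR) links the false-positive rate, positive predictive value, and false-negative rate to the base rate p, so equal PPV and equal FNR across groups force unequal FPRs whenever base rates differ. A small numeric check with illustrative numbers:

```python
# With equal predictive parity (PPV) and equal FNR, different base rates
# force different false-positive rates.
def implied_fpr(base_rate, ppv, fnr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

for p in (0.2, 0.4):                       # two groups with different base rates
    print(p, round(implied_fpr(p, ppv=0.6, fnr=0.3), 3))
# 0.2 -> 0.117 ;  0.4 -> 0.311
```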
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For a SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves recent advancements of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
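The estimator concerns gradients of an objective convolved with a Gaussian density. Below is a minimal Monte Carlo sketch of that underlying smoothing identity only; the reweighting that lets ReSQue reuse queries across nearby iterates, and the ball-acceleration machinery, are omitted, and all names are illustrative.

```python
import numpy as np

def smoothed_grad(grad_f, x, sigma=0.1, n_samples=64, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of the gradient of f convolved with a Gaussian:
    grad (f * N(0, sigma^2 I))(x) = E_z[ grad f(x + z) ],  z ~ N(0, sigma^2 I).
    ReSQue builds on this convolution view but reweights stochastic queries so
    samples drawn around a reference point can be shared across nearby points."""
    z = rng.normal(scale=sigma, size=(n_samples, x.size))
    return np.mean([grad_f(x + zi) for zi in z], axis=0)
```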
Despite the success of large language models (LLMs) in various natural language processing (NLP) tasks, the stored knowledge in these models may inevitably be incomplete, out-of-date, or incorrect. This motivates the need to utilize external knowledge to assist LLMs. Unfortunately, current methods for incorporating external knowledge often require additional training or fine-tuning, which can be costly and may not be feasible for LLMs. To address this issue, we propose a novel post-processing approach, rethinking with retrieval (RR), which retrieves relevant external knowledge based on the decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting. This lightweight approach does not require additional training or fine-tuning and is not limited by the input length of LLMs. We evaluate the effectiveness of RR through extensive experiments with GPT-3 on three complex reasoning tasks: commonsense reasoning, temporal reasoning, and tabular reasoning. Our results show that RR can produce more faithful explanations and improve the performance of LLMs.
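One plausible instantiation of the post-processing idea, sketched with assumed `llm` and `retrieve` interfaces (not a specific library): sample chain-of-thought reasoning, decompose it into steps, retrieve external evidence per step, and prefer the answer whose steps are best supported.

```python
def rethink_with_retrieval(question, llm, retrieve, n_paths=5):
    """Sketch of retrieval-based post-processing over chain-of-thought outputs."""
    scored = []
    for _ in range(n_paths):
        cot = llm(f"{question}\nLet's think step by step.")      # CoT prompting
        steps = [s for s in cot.split(".") if s.strip()]          # decompose reasoning
        support = 0.0
        for step in steps:
            evidence = retrieve(step)                             # external knowledge per step
            support += overlap(step, evidence)                    # faithfulness score
        scored.append((support / max(len(steps), 1), cot))
    return max(scored)[1]                                         # best-supported reasoning path

def overlap(step, evidence):
    """Crude lexical support score between a reasoning step and retrieved text."""
    s, e = set(step.lower().split()), set(evidence.lower().split())
    return len(s & e) / max(len(s), 1)
```

No model weights are updated anywhere in this loop, which is what makes the approach lightweight relative to fine-tuning.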