We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP). We first handle the setting of binary classification and then extend our rule to the more general setting of density estimation (with respect to the total variation metric). The existence of a universally consistent DP learner reveals a stark difference with the distribution-free PAC model. Indeed, in the latter DP learning is extremely limited: even one-dimensional linear classifiers are not privately learnable in this stringent model. Our result thus demonstrates that by allowing the learning rate to depend on the target distribution, one can circumvent the above-mentioned impossibility result and in fact, learn \emph{arbitrary} distributions by a single DP algorithm. As an application, we prove that any VC class can be privately learned in a semi-supervised setting with a near-optimal \emph{labeled} sample complexity of $\tilde{O}(d/\varepsilon)$ labeled examples (and with an unlabeled sample complexity that can depend on the target distribution).
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it is surprising that we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions such as uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or general losses, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call \emph{property generalization}: any desirable property of a learning algorithm (e.g., noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
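To make the flavor of such a blackbox reduction concrete, here is a minimal sketch (not the paper's exact construction, and exponential-time as written): run the realizable learner on every subsample of the training data, then select the resulting hypothesis with the lowest error on a holdout set. The function names are illustrative assumptions.

```python
from itertools import combinations

def agnostic_from_realizable(realizable_learner, train, holdout):
    """Return the candidate hypothesis with lowest holdout error.

    train, holdout: lists of (x, y) pairs.
    realizable_learner: maps a list of (x, y) pairs to a hypothesis h(x) -> y.
    """
    candidates = []
    for k in range(1, len(train) + 1):
        for subsample in combinations(train, k):
            # Some subsample behaves as if labeled by the best hypothesis,
            # so the realizable learner must output a good candidate on it.
            candidates.append(realizable_learner(list(subsample)))

    def empirical_error(h):
        return sum(h(x) != y for x, y in holdout) / len(holdout)

    # Empirical risk minimization over the finite candidate list.
    return min(candidates, key=empirical_error)
```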
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
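The local (randomized response) model discussed here is often illustrated by the classical randomized-response protocol. Below is a minimal sketch for privately estimating the mean of users' bits; this is the standard textbook construction, not code from the paper.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps); else flip it."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon: float) -> float:
    """Debias the aggregate: E[report] = (2p - 1) * bit + (1 - p)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```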
Clustering is a fundamental problem in data analysis. In differentially private clustering, the goal is to identify $k$ cluster centers without disclosing information on individual data points. Despite significant research progress, the problem has so far resisted practical solutions. In this work we aim to provide simple, implementable differentially private clustering algorithms that provide utility when the data is "easy", e.g., when there exists a significant separation between the clusters. We propose a framework that allows us to apply non-private clustering algorithms to the easy instances and privately combine the results. We are able to get improved sample complexity bounds in some cases of Gaussian mixtures and $k$-means. We complement our theoretical analysis with an empirical evaluation on synthetic data.
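As a rough, illustration-only sketch of the split-and-aggregate pattern such a framework suggests (assuming well-separated clusters): cluster disjoint chunks non-privately, align each chunk's centers to a reference, and release a noisy average. The noise scale `sigma` is left as a parameter because the actual private combination step and its sensitivity analysis are the paper's contribution and are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_and_aggregate_kmeans(data, k, num_chunks, sigma,
                               rng=np.random.default_rng()):
    """Cluster disjoint chunks non-privately, align, average, add noise."""
    chunks = np.array_split(data, num_chunks)
    all_centers = [KMeans(n_clusters=k, n_init=5).fit(c).cluster_centers_
                   for c in chunks]
    ref = all_centers[0]
    aligned = []
    for centers in all_centers:
        # With well-separated clusters, nearest-center matching to the
        # reference recovers a consistent ordering of the k centers.
        d = np.linalg.norm(centers[:, None, :] - ref[None, :, :], axis=2)
        aligned.append(centers[np.argmin(d, axis=0)])
    # Each data point influences only its own chunk's row of `aligned`;
    # Gaussian noise with scale `sigma` (calibrated offline to that
    # sensitivity) is then added to the averaged centers.
    return np.mean(aligned, axis=0) + rng.normal(0.0, sigma, size=ref.shape)
```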
We initiate the study of differentially private (DP) estimation with access to a small amount of public data. For private estimation of $d$-dimensional Gaussians, we assume that the public data comes from a Gaussian that may have vanishing similarity in total variation distance with the underlying Gaussian of the private data. We show that under the constraints of pure or concentrated DP, $d+1$ public data samples are sufficient to remove any dependence on the range parameters of the private data distribution from the private sample complexity, which is known to be otherwise necessary in the absence of public data. For separated Gaussian mixtures, we assume that the underlying public and private distributions are identical, and we consider two settings: (1) when given a dimension-independent amount of public data, the private sample complexity can be improved polynomially in terms of the number of mixture components, and any dependence on the range parameters of the distribution can be removed in the approximate DP case; (2) when given an amount of public data linear in the dimension, the private sample complexity can be made independent of the range parameters even under concentrated DP, and additional improvements can be made to the overall sample complexity.
Differentially private algorithms for common metric aggregation tasks, such as clustering or averaging, often have limited practicality due to their complexity or to the large number of data points that is required for accurate results. We propose a simple and practical tool, $\mathsf{FriendlyCore}$, that takes a set of points ${\cal D}$ from an unrestricted (pseudo) metric space as input. When ${\cal D}$ has effective diameter $r$, $\mathsf{FriendlyCore}$ returns a "stable" subset ${\cal C} \subseteq {\cal D}$ that includes all points, except possibly a few outliers, and is {\em certified} to have diameter $r$. $\mathsf{FriendlyCore}$ can be used to preprocess the input before privately aggregating it, potentially simplifying the aggregation or boosting its accuracy. Surprisingly, $\mathsf{FriendlyCore}$ is light-weight with no dependence on the dimension. We empirically demonstrate its advantages in boosting the accuracy of mean estimation and clustering tasks such as $k$-means and $k$-GMM, outperforming tailored methods.
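A simplified caricature of the core-then-aggregate pattern, to fix intuition only: keep each point that is within distance $r$ of a majority of the other points, then run a noisy aggregator on the core. The real $\mathsf{FriendlyCore}$ performs the outlier filtering and the diameter certification privately, which this non-private filter and its illustrative noise calibration omit entirely.

```python
import numpy as np

def naive_core(points: np.ndarray, r: float) -> np.ndarray:
    """Keep points within distance r of a majority of the other points."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    in_core = (dists <= r).sum(axis=1) > len(points) / 2
    return points[in_core]

def private_mean_on_core(points, r, epsilon, rng=np.random.default_rng()):
    """Assumes the core is non-empty; noise calibration is illustrative."""
    core = naive_core(points, r)
    # Any two core points share a majority neighborhood, so the core has
    # diameter at most 2r and its mean has low sensitivity per coordinate.
    sensitivity = 2.0 * r / max(len(core), 1)
    noise = rng.laplace(0.0, sensitivity / epsilon, size=points.shape[1])
    return core.mean(axis=0) + noise
```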
Adding random noise to database query results is an important tool for achieving privacy. A challenge is to minimize this noise while still meeting the privacy requirements. Recently, a sufficient and necessary condition for $(\epsilon, \delta)$-differential privacy for Gaussian noise was published. This condition allows the computation of the minimal privacy-preserving scale for this distribution. We extend this work and provide a sufficient and necessary condition for $(\epsilon, \delta)$-differential privacy for all symmetric and log-concave noise densities. Our results allow fine-grained tailoring of the noise distribution to the dimension of the query result. We show that this can yield significantly lower mean squared error than the Laplace and Gaussian mechanisms currently in use at the same $\epsilon$ and $\delta$.
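For comparison, the classical (merely sufficient) calibration of Gaussian noise that tight conditions like the above improve upon is simple to state and implement; a minimal sketch:

```python
import math
import random

def gaussian_mechanism(value: float, sensitivity: float,
                       epsilon: float, delta: float) -> float:
    """Classical sufficient condition (valid for epsilon < 1):
    sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
    Tight necessary-and-sufficient conditions permit strictly smaller sigma."""
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return value + random.gauss(0.0, sigma)
```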
In this work, we give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models with optimal dependence on the dimension in the sample complexity. In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2 \log \kappa)$ samples while tolerating a constant fraction of adversarial outliers. Here, $\kappa$ is the condition number of the target covariance matrix. The sample bound matches the best non-private estimators in the dependence on the dimension (up to a polylogarithmic factor). We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight. Prior to our work, only identifiability results (yielding inefficient super-polynomial time algorithms) were known for the problem. In the approximate DP setting, we give an efficient algorithm to estimate an unknown Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2)$ samples while tolerating a constant fraction of adversarial outliers. Prior to our work, all efficient approximate DP algorithms incurred a super-quadratic sample cost or were not outlier-robust. For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde O(d)$, improving on a $\widetilde O(d^{1.5})$ bound from prior work. Our pure DP algorithm relies on a recursive private preconditioning subroutine that utilizes the recent work on private mean estimation [Hopkins et al., 2022]. Our approximate DP algorithms are based on a substantial upgrade of the method of stabilizing convex relaxations introduced in [Kothari et al., 2022].
Differential privacy is often applied with a privacy parameter that is much larger than the theory suggests is ideal; various informal justifications for tolerating large privacy parameters have been proposed. In this work, we consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis. In this framework, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person (i.e., all the attributes).
We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
Most works on learning with differential privacy (DP) have focused on the setting where each user has a single sample. In this work, we consider the setting where each user holds $m$ samples and privacy protection is enforced at the level of each user's data. We show that, in this setting, we may learn with a much smaller number of users. Specifically, we show that, as long as each user receives sufficiently many samples, we can learn any privately learnable class via an $(\epsilon, \delta)$-DP algorithm using only $O(\log(1/\delta)/\epsilon)$ users. For $\epsilon$-DP algorithms, we show that we can learn using only $O_{\epsilon}(d)$ users even in the local model, where $d$ is the probabilistic representation dimension. In both cases, we show a nearly matching lower bound on the number of users required. A crucial component of our results is a generalization of global stability [Bun et al., FOCS 2020] that allows the use of public randomness. Under this relaxed notion, we employ a correlated sampling strategy to show that global stability can be boosted to be arbitrarily close to one, at a polynomial expense in the number of samples.
We establish a simple connection between robust and differentially-private algorithms: private mechanisms which perform well with very high probability are automatically robust in the sense that they retain accuracy even if a constant fraction of the samples they receive are adversarially corrupted. Since optimal mechanisms typically achieve these high success probabilities, our results imply that optimal private mechanisms for many basic statistics problems are robust. We investigate the consequences of this observation for both algorithms and computational complexity across different statistical problems. Assuming the Brennan-Bresler secret-leakage planted clique conjecture, we demonstrate a fundamental tradeoff between computational efficiency, privacy leakage, and success probability for sparse mean estimation. Private algorithms which match this tradeoff are not yet known -- we achieve that (up to polylogarithmic factors) in a polynomially-large range of parameters via the Sum-of-Squares method. To establish an information-computation gap for private sparse mean estimation, we also design new (exponential-time) mechanisms using fewer samples than efficient algorithms must use. Finally, we give evidence for privacy-induced information-computation gaps for several other statistics and learning problems, including PAC learning parity functions and estimation of the mean of a multivariate Gaussian.
Given a real-valued hypothesis class $\mathcal{H}$, we investigate under what conditions there is a differentially private algorithm which learns an optimal hypothesis from $\mathcal{H}$ given i.i.d. data. Inspired by recent results for the related setting of binary classification (Alon et al., 2019; Bun et al., 2020), where it was shown that online learnability of a binary class is necessary and sufficient for its private learnability, Jung et al. (2020) showed that in the setting of regression, online learnability of $\mathcal{H}$ is necessary for private learnability. Here online learnability of $\mathcal{H}$ is characterized by the finiteness of its $\eta$-sequential fat-shattering dimension, ${\rm sfat}_\eta(\mathcal{H})$, for all $\eta > 0$. In terms of sufficient conditions for private learnability, Jung et al. (2020) showed that $\mathcal{H}$ is privately learnable if $\lim_{\eta \downarrow 0} {\rm sfat}_\eta(\mathcal{H})$ is finite, which is a fairly restrictive condition. We show that under the relaxed condition $\liminf_{\eta \downarrow 0} \eta \cdot {\rm sfat}_\eta(\mathcal{H}) = 0$, $\mathcal{H}$ is privately learnable, establishing the first nonparametric private learnability guarantee for classes with ${\rm sfat}_\eta(\mathcal{H})$ diverging as $\eta \downarrow 0$. Our techniques involve a novel filtering procedure to output stable hypotheses for nonparametric function classes.
We introduce a new method, based on the Johnson-Lindenstrauss lemma, for releasing answers to statistical queries with differential privacy. The key idea is to randomly project the query answers to a lower-dimensional space so that the distance between any two vectors of feasible query answers is preserved up to an additive error. Then we answer the projected queries using a simple noise-adding mechanism, and lift the answers back up to the original dimension. Using this method, we give, for the first time, purely differentially private mechanisms with optimal worst-case sample complexity under average error for answering a workload of $k$ queries over a universe of size $N$. As further applications, we give the first purely private efficient mechanisms with optimal sample complexity for computing the covariance of a bounded high-dimensional distribution, and for answering 2-way marginal queries. We also show that, up to the dependence on the error, a variant of our mechanism is nearly optimal for every given query workload.
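A hedged sketch of the project-noise-lift idea described above: project the vector of true query answers with a random JL matrix, privatize in the low dimension, then map back. The noise calibration here is illustrative only; the paper's mechanism and its analysis are more delicate.

```python
import numpy as np

def jl_private_answers(true_answers: np.ndarray, proj_dim: int,
                       sigma: float, rng=np.random.default_rng()):
    """true_answers: length-k vector of exact query answers."""
    k = true_answers.shape[0]
    # Random JL projection; this scaling makes P.T @ P close to identity.
    P = rng.normal(0.0, 1.0 / np.sqrt(proj_dim), size=(proj_dim, k))
    projected = P @ true_answers                    # project to proj_dim
    noisy = projected + rng.normal(0.0, sigma, size=proj_dim)
    return P.T @ noisy                              # lift back to k dims
```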
The hybrid model of differential privacy (Avent et al., 2017) augments the local model with a curator holding the sensitive data of additional individuals. Here, we study machine learning problems in the hybrid model where the $n$ individuals in the curator's dataset are drawn from a distribution different from that of the general population (the local agents). We give a general scheme, Subsample-Test-Reweigh, for this transfer learning problem, which reduces any curator-model DP learner to a hybrid-model learner in this setting using iterative subsampling and reweighing of the $n$ examples, based on a smooth variation of the multiplicative-weights algorithm (introduced by Bun et al., 2020). Our scheme has a sample complexity that depends on the chi-squared divergence between the two distributions. We give worst-case analysis bounds on the sample complexity required for our private reduction. Aiming to reduce this sample complexity, we give two specific instances where the sample complexity can be drastically reduced (one analyzed mathematically, the other empirically), and pose several directions for follow-up work.
"Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy."
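A minimal sketch of the arithmetic this reformulation (zero-concentrated DP) enables, using three standard facts from Bun and Steinke: the Gaussian mechanism's zCDP parameter, additive composition, and conversion to $(\epsilon,\delta)$-DP.

```python
import math

def zcdp_of_gaussian(sensitivity: float, sigma: float) -> float:
    """The Gaussian mechanism with l2-sensitivity D and noise scale sigma
    satisfies rho-zCDP with rho = D^2 / (2 sigma^2)."""
    return sensitivity ** 2 / (2.0 * sigma ** 2)

def compose(rhos) -> float:
    """zCDP composes additively across mechanisms."""
    return sum(rhos)

def zcdp_to_dp(rho: float, delta: float) -> float:
    """rho-zCDP implies (rho + 2*sqrt(rho * ln(1/delta)), delta)-DP."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))
```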
State-of-the-art results on image recognition tasks are achieved using over-parameterized learning algorithms that (nearly) perfectly fit the training set and are known to fit well even random labels. This tendency to memorize the labels of the training data is not explained by existing theoretical analyses. Memorization of the training data also presents significant privacy risks when the training data contains sensitive personal information and thus it is important to understand whether such memorization is necessary for accurate learning. We provide the first conceptual explanation and a theoretical model for this phenomenon. Specifically, we demonstrate that for natural data distributions memorization of labels is necessary for achieving close-to-optimal generalization error. Crucially, even labels of outliers and noisy labels need to be memorized. The model is motivated and supported by the results of several recent empirical works. In our model, data is sampled from a mixture of subpopulations and our results show that memorization is necessary whenever the distribution of subpopulation frequencies is long-tailed. Image and text data is known to be long-tailed and therefore our results establish a formal link between these empirical phenomena. Our results allow us to quantify the cost of limiting memorization in learning and explain the disparate effects that privacy and model compression have on different subgroups.
In this work, we study high-dimensional mean estimation under user-level differential privacy, and design an $(\varepsilon,\delta)$-differentially private mechanism that uses as few users as possible. In particular, our mechanism works even when the number of users is as low as $O(\frac{1}{\varepsilon}\log\frac{1}{\delta})$. Interestingly, this bound on the number of \emph{users} is independent of the dimension (though the number of \emph{samples per user} is allowed to depend polynomially on the dimension), unlike prior work that required the number of users to depend polynomially on the dimension. This resolves a problem first proposed by Amin et al. Moreover, our mechanism is robust against corruption of up to $49\%$ of the users. Finally, our results also apply to optimal algorithms for privately learning discrete distributions with few users, answering a question of Liu et al., and to a broader range of problems such as stochastic convex optimization and a variant of stochastic gradient descent via a reduction to differentially private mean estimation.
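A minimal sketch of the generic user-level recipe underlying results of this kind: average each user's $m$ samples (which concentrates them), clip the user averages to a ball of radius `tau` around a coarse center, and release a noisy mean. This is a simplified illustration, not the paper's mechanism; in particular the coarse center below is computed non-privately, whereas the paper's considerably more sophisticated algorithm is what removes the dimension dependence in the number of users.

```python
import numpy as np

def user_level_mean(users, tau, epsilon, delta, rng=np.random.default_rng()):
    """users: list of (m, d) arrays, one per user; tau: clipping radius."""
    user_means = np.stack([u.mean(axis=0) for u in users])   # concentrate
    center = np.median(user_means, axis=0)  # coarse center (NON-private here)
    shifted = user_means - center
    norms = np.linalg.norm(shifted, axis=1, keepdims=True)
    clipped = shifted * np.minimum(1.0, tau / np.maximum(norms, 1e-12))
    # Replacing one user moves the clipped mean by at most 2*tau/n in l2.
    n = len(users)
    sigma = 2.0 * tau * np.sqrt(2.0 * np.log(1.25 / delta)) / (epsilon * n)
    return center + clipped.mean(axis=0) + rng.normal(0.0, sigma,
                                                      size=center.shape)
```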
Characterizing the privacy degradation over compositions, i.e., privacy accounting, is a fundamental topic in differential privacy (DP), with many applications to differentially private machine learning and federated learning. We propose a unification of recent advances (Rényi DP, privacy profiles, $f$-DP, and the PLD formalism) via the characteristic function ($\phi$-function) of a certain \emph{dominating} privacy loss random variable. We show that our approach allows \emph{natural} adaptive composition like Rényi DP, provides \emph{exactly tight} privacy accounting like PLD, and can be (often losslessly) converted to privacy profiles and $f$-DP, thus providing $(\epsilon,\delta)$-DP guarantees and interpretable tradeoff functions. Algorithmically, we propose an \emph{analytical Fourier accountant} that represents the \emph{complex} logarithm of $\phi$-functions symbolically and uses Gaussian quadrature for numerical computation. On several popular DP mechanisms and their subsampled counterparts, we demonstrate the flexibility and tightness of our approach in theory and experiments.
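For the Gaussian mechanism, the exactly tight accounting that such accountants target has a known closed form (Balle and Wang 2018; Gaussian DP); a minimal sketch of tight $\delta(\epsilon)$ under $k$-fold composition, useful as a point of comparison:

```python
from math import exp, sqrt
from statistics import NormalDist

def tight_gaussian_delta(eps: float, sensitivity: float,
                         sigma: float, k: int = 1) -> float:
    """Tight delta at a given eps for k compositions of the Gaussian
    mechanism: delta = Phi(mu/2 - eps/mu) - e^eps * Phi(-mu/2 - eps/mu),
    where mu = sqrt(k) * sensitivity / sigma (composition adds in mu^2)."""
    Phi = NormalDist().cdf
    mu = sqrt(k) * sensitivity / sigma
    return Phi(mu / 2 - eps / mu) - exp(eps) * Phi(-mu / 2 - eps / mu)
```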
We introduce a universal framework for characterizing the statistical efficiency of statistical estimation problems with differential privacy guarantees. Our framework, which we call High-dimensional Propose-Test-Release (HPTR), builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism. Gluing all these together is the concept of resilience, which is central to robust statistical estimation. Resilience guides the design of the algorithm, the sensitivity analysis, and the success probability analysis of the test step in Propose-Test-Release. The key insight is that if we design an exponential mechanism that accesses the data only via one-dimensional robust statistics, then the resulting local sensitivity can be dramatically reduced. Using resilience, we can provide tight local sensitivity bounds, and these tight bounds readily translate into near-optimal utility guarantees in several cases. We give a general recipe for applying HPTR to a given instance of a statistical estimation problem and demonstrate it on the canonical problems of mean estimation, linear regression, covariance estimation, and principal component analysis. We introduce a general utility analysis technique that shows that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
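The one-dimensional ingredient mentioned above can be seen in its simplest classical form: an exponential mechanism for the median over a finite candidate grid, where the utility (minus the number of points that must move to make a candidate the median) has sensitivity 1. This illustrates the ingredient only, not HPTR itself.

```python
import math
import random

def private_median(data, candidates, epsilon):
    """Sample a candidate with probability proportional to
    exp(epsilon * utility / 2); utility has sensitivity 1."""
    n = len(data)

    def utility(c):
        below = sum(1 for x in data if x <= c)
        return -abs(below - n / 2)

    weights = [math.exp(epsilon * utility(c) / 2.0) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```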