Given a real-valued hypothesis class $\mathcal{H}$, we investigate under what conditions there is a differentially private algorithm which learns an optimal hypothesis from $\mathcal{H}$ given i.i.d. data. Inspired by recent results for the related setting of binary classification (Alon et al., 2019; Bun et al., 2020), where it was shown that online learnability of a binary class is necessary and sufficient for its private learnability, Jung et al. (2020) showed that in the setting of regression, online learnability of $\mathcal{H}$ is necessary for private learnability. Here online learnability of $\mathcal{H}$ is characterized by the finiteness of its $\eta$-sequential fat-shattering dimension, ${\rm sfat}_\eta(\mathcal{H})$, for all $\eta > 0$. In terms of sufficient conditions for private learnability, Jung et al. (2020) showed that $\mathcal{H}$ is privately learnable if $\lim_{\eta \downarrow 0} {\rm sfat}_\eta(\mathcal{H})$ is finite, which is a fairly restrictive condition. We show that under the relaxed condition $\liminf_{\eta \downarrow 0} \eta \cdot {\rm sfat}_\eta(\mathcal{H}) = 0$, $\mathcal{H}$ is privately learnable, establishing the first nonparametric private learnability guarantee for classes $\mathcal{H}$ with ${\rm sfat}_\eta(\mathcal{H})$ diverging as $\eta \downarrow 0$. Our techniques involve a novel filtering procedure to output stable hypotheses for nonparametric function classes.
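To illustrate the gap between the two conditions (an example of ours, not from the abstract): a class with ${\rm sfat}_\eta(\mathcal{H}) = \Theta(\log(1/\eta))$ fails the finiteness criterion of Jung et al. (2020) but satisfies the relaxed condition, since

\[ \lim_{\eta \downarrow 0} {\rm sfat}_\eta(\mathcal{H}) = \infty \quad \text{while} \quad \liminf_{\eta \downarrow 0} \eta \cdot {\rm sfat}_\eta(\mathcal{H}) = \lim_{\eta \downarrow 0} \eta \log(1/\eta) = 0. \]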
We study fast rates of convergence in nonparametric online regression, namely where regret is defined with respect to an arbitrary function class of bounded complexity. Our contributions are two-fold: - In the realizable setting of nonparametric online regression with the absolute loss, we propose a randomized proper learning algorithm which gets a near-optimal mistake bound in terms of the sequential fat-shattering dimension of the hypothesis class. For online classification with a class of Littlestone dimension $d$, our bound reduces to $d \cdot {\rm poly} \log T$. This result answers the question of whether proper learners can achieve near-optimal mistake bounds; previously, even for online classification, the best known mistake bound was $\tilde O(\sqrt{dT})$. Further, for the real-valued (regression) setting, a near-optimal mistake bound was not known even for improper learners prior to this work. - Using the above result, we exhibit an independent learning algorithm for general-sum binary games of Littlestone dimension $d$, with which each player achieves regret $\tilde O(d^{3/4} \cdot T^{1/4})$. This result generalizes analogous results of Syrgkanis et al. (2015), who showed that in finite games the optimal regret can be accelerated from $O(\sqrt{T})$ in the adversarial setting to $O(T^{1/4})$ in the game setting. To establish the above results, we introduce several new techniques, including: a hierarchical aggregation rule to achieve the optimal mistake bound for real-valued classes, a multi-scale extension of the proper online realizable learner of Hanneke et al. (2021), an approach to show that the output of such nonparametric learning algorithms is stable, and a proof that the minimax theorem holds in all online learnable games.
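The hierarchical aggregation rule at the heart of the first result is not spelled out in the abstract; as a point of reference, here is a minimal sketch (names and signature are our own, not the paper's construction) of single-scale exponential-weights aggregation under the absolute loss, the standard primitive that multi-scale and hierarchical rules refine:

```python
import numpy as np

def hedge_predictions(expert_preds: np.ndarray, outcomes: np.ndarray,
                      eta: float) -> np.ndarray:
    """Single-scale exponential-weights (Hedge) aggregation, absolute loss.

    expert_preds: (T, N) array; expert_preds[t, i] is expert i's prediction
    at round t.  outcomes: (T,) array of realized values in [0, 1].
    Returns the aggregated prediction made at each round.
    """
    T, N = expert_preds.shape
    log_w = np.zeros(N)                   # log-weights, for numerical stability
    preds = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        preds[t] = w @ expert_preds[t]    # weighted-average (improper) prediction
        log_w -= eta * np.abs(expert_preds[t] - outcomes[t])  # absolute-loss update
    return preds
```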
We first prove that Littlestone classes, those which model theorists call stable, characterize learnability in a new statistical model: a learner in this new setting outputs the same hypothesis, up to measure zero, with probability one, after a uniformly bounded number of revisions. This fills a certain gap in the literature, and sets the stage for an approximation theorem characterizing Littlestone classes in terms of a range of learning models, by analogy to definability of types in model theory. We then give a complete analogue of Shelah's celebrated (and perhaps a priori untranslatable) Unstable Formula Theorem in the learning setting, with algorithmic arguments taking the place of the infinite.
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, we still lack a unified theory; traditional proofs of the equivalence tend to be disparate and rely on strong model-specific assumptions like uniform convergence and sample compression. In this work, we give the first model-independent framework explaining the equivalence of realizable and agnostic learnability: a three-line blackbox reduction that simplifies, unifies, and extends our understanding across a wide variety of settings. This includes models with no known characterization of learnability, such as learning with arbitrary distributional assumptions or general loss, as well as a host of other popular settings such as robust learning, partial learning, fair learning, and the statistical query model. More generally, we argue that the equivalence of realizable and agnostic learning is actually a special case of a broader phenomenon we call property generalization: any desirable property of a learning algorithm (e.g. noise tolerance, privacy, stability) that can be satisfied over finite hypothesis classes extends (possibly in some variation) to any learnable hypothesis class.
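The abstract does not reproduce the reduction itself; the following is a hypothetical schematic of the general shape such blackbox reductions take (generate candidates by running the realizable learner on subsamples, then pick the empirically best), with all names and parameters our own rather than the authors' construction:

```python
import itertools

def agnostic_from_realizable(realizable_learner, sample, loss, m_sub=8):
    """Hypothetical schematic of a realizable-to-agnostic blackbox reduction
    (our sketch of the general style, not the authors' construction).

    1. Run the realizable learner on small subsamples to generate candidates.
    2. Evaluate each candidate's empirical loss on the full sample.
    3. Return the empirically best candidate.
    Exhaustive enumeration is for clarity only; real reductions subsample.
    """
    candidates = [realizable_learner(list(sub))
                  for sub in itertools.combinations(sample, m_sub)]
    return min(candidates,
               key=lambda h: sum(loss(h, z) for z in sample) / len(sample))
```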
In this work, we give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models with optimal dependence on the dimension in the sample complexity. In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2 \log \kappa)$ samples while tolerating a constant fraction of adversarial outliers. Here, $\kappa$ is the condition number of the target covariance matrix. The sample bound matches the best non-private estimators in the dependence on the dimension (up to a polylogarithmic factor). We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight. Prior to our work, only identifiability results (yielding inefficient super-polynomial time algorithms) were known for the problem. In the approximate DP setting, we give an efficient algorithm to estimate an unknown Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2)$ samples while tolerating a constant fraction of adversarial outliers. Prior to our work, all efficient approximate DP algorithms incurred a super-quadratic sample cost or were not outlier-robust. For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde{O}(d)$, improving on a $\widetilde{O}(d^{1.5})$ bound from prior work. Our pure DP algorithm relies on a recursive private preconditioning subroutine that utilizes recent work on private mean estimation [Hopkins et al., 2022]. Our approximate DP algorithms are based on a substantial upgrade of the method of stabilizing convex relaxations introduced in [Kothari et al., 2022].
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
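As a concrete instance of the local (randomized response) model characterized here, a minimal sketch (with our own function names) of binary randomized response and its debiased mean estimate:

```python
import math, random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    Satisfies epsilon-local differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def estimate_mean(bits, epsilon: float) -> float:
    """Unbiased estimate of the population mean from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    reports = [randomized_response(b, epsilon) for b in bits]
    # Invert the bias: E[report] = (1 - p) + (2p - 1) * true_mean.
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)
```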
We study the problem of high-dimensional sparse mean estimation in the presence of an $\epsilon$-fraction of adversarial outliers. Prior work obtained sample- and computationally efficient algorithms for this task for identity-covariance subgaussian distributions. In this work, we develop the first efficient algorithms for robust sparse mean estimation without a priori knowledge of the covariance. For distributions on $\mathbb{R}^d$ with 'certifiably bounded' $t$-th moments and sufficiently light tails, our algorithm achieves error $O(\epsilon^{1-1/t})$ with sample complexity $m = (k \log(d))^{O(t)} / \epsilon^{2-2/t}$. For the special case of the Gaussian distribution, our algorithm achieves near-optimal error $\tilde O(\epsilon)$ with sample complexity $m = O(k^4 \mathrm{polylog}(d)) / \epsilon^2$. Our algorithms follow the Sum-of-Squares based proofs-to-algorithms approach. We complement our upper bounds with statistical query and low-degree polynomial testing lower bounds, providing evidence that the sample-time-error tradeoffs achieved by our algorithms are qualitatively best possible.
We consider the problem of learning a tree-structured Ising model from data, such that subsequent predictions made using the model are accurate. Concretely, we aim to learn a model such that posteriors $P(X_i | X_S)$ for small sets of variables $S$ are accurate. Since its introduction more than 50 years ago, the Chow-Liu algorithm, which efficiently computes the maximum-likelihood tree, has been the benchmark algorithm for learning tree-structured graphical models. [BK19] showed bounds on the sample complexity of the Chow-Liu algorithm in terms of the prediction-centric local total variation loss. While those results demonstrated that it is possible to learn a useful model even when recovering the true underlying graph is impossible, their bound depends on the maximum strength of interactions and thus does not achieve the information-theoretic optimum. In this paper, we introduce a new algorithm that carefully combines elements of the Chow-Liu algorithm so as to efficiently and optimally learn tree Ising models under a prediction-centric loss. Our algorithm is robust to model misspecification and adversarial corruptions. In contrast, we show that the celebrated Chow-Liu algorithm can be arbitrarily suboptimal.
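For context, a minimal sketch of the benchmark Chow-Liu algorithm the abstract discusses (estimate pairwise mutual informations, then take a maximum-weight spanning tree); this is the classical baseline, not the paper's new algorithm:

```python
import numpy as np
import networkx as nx

def chow_liu_tree(samples: np.ndarray) -> nx.Graph:
    """Classical Chow-Liu: estimate all pairwise mutual informations from
    binary data, then return a maximum-weight spanning tree over them.
    samples: (n, d) array with entries in {0, 1}."""
    n, d = samples.shape
    G = nx.Graph()
    for i in range(d):
        for j in range(i + 1, d):
            mi = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    p_ab = np.mean((samples[:, i] == a) & (samples[:, j] == b))
                    p_a = np.mean(samples[:, i] == a)
                    p_b = np.mean(samples[:, j] == b)
                    if p_ab > 0:
                        mi += p_ab * np.log(p_ab / (p_a * p_b))
            G.add_edge(i, j, weight=mi)  # empirical mutual information I(X_i; X_j)
    return nx.maximum_spanning_tree(G)
```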
The problem of learning threshold functions is a fundamental one in machine learning. Classical learning theory implies sample complexity of $O(\xi^{-1} \log(1/\beta))$ (for generalization error $\xi$ with confidence $1-\beta$). The private version of the problem, however, is more challenging and in particular, the sample complexity must depend on the size $|X|$ of the domain. Progress on quantifying this dependence, via lower and upper bounds, was made in a line of works over the past decade. In this paper, we finally close the gap for approximate-DP and provide a nearly tight upper bound of $\tilde{O}(\log^* |X|)$, which matches a lower bound by Alon et al. (which applies even to improper learning) and improves over a prior upper bound of $\tilde{O}((\log^* |X|)^{1.5})$ by Kaplan et al. We also provide matching upper and lower bounds of $\tilde{\Theta}(2^{\log^*|X|})$ for the additive error of private quasi-concave optimization (a related and more general problem). Our improvement is achieved via the novel Reorder-Slice-Compute paradigm for private data analysis which we believe will have further applications.
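For intuition about the $\tilde{O}(\log^* |X|)$ bound: $\log^*$ is the iterated logarithm, which is at most 5 even for astronomically large arguments. A minimal sketch:

```python
import math

def log_star(x: float, base: float = 2.0) -> int:
    """Iterated logarithm: how many times log must be applied before the
    value drops to at most 1.  E.g. log_star(2 ** 65536) == 5 in base 2."""
    count = 0
    while x > 1.0:
        x = math.log(x, base)
        count += 1
    return count
```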
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset. Specifically, we are given a set $T$ of $n$ points in $\mathbb{R}^d$ and a parameter $0 < \alpha < \frac{1}{2}$ such that an $\alpha$-fraction of the points in $T$ are i.i.d. samples from a well-behaved distribution $\mathcal{D}$ and the remaining $(1-\alpha)$-fraction are arbitrary. The goal is to output a small list of vectors, at least one of which is close to the mean of $\mathcal{D}$. We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees, with running time $O(n^{1+\epsilon_0} d)$ for any fixed $\epsilon_0 > 0$. All prior algorithms for this problem had additional polynomial factors in $\frac{1}{\alpha}$. Together with additional techniques, we leverage this result to obtain the first almost-linear-time algorithms for clustering mixtures of separated well-behaved distributions, nearly matching the statistical guarantees of spectral methods. Prior clustering algorithms inherently relied on applications of $k$-PCA, thereby incurring runtimes of $\Omega(ndk)$. This marks the first runtime improvement for this basic statistical problem in nearly two decades. The starting point of our approach is a novel and simpler near-linear-time robust mean estimation algorithm in the $\alpha \to 1$ regime, based on a one-shot matrix-multiplicative-weights-inspired potential decrease. We crucially leverage this new algorithmic framework in the context of the iterative multi-filtering technique of Diakonikolas et al. '18, '20, providing a method to simultaneously cluster and downsample points using one-dimensional projections, thus bypassing the $k$-PCA subroutines required by prior algorithms.
We introduce a universal framework for characterizing the statistical efficiency of statistical estimation problems with differential privacy guarantees. Our framework, which we call High-dimensional Propose-Test-Release (HPTR), builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism. Gluing these together is the concept of resilience, which is central to robust statistical estimation. Resilience guides the design of the algorithm, the sensitivity analysis, and the analysis of the success probability of the test step. The key insight is that if we design an exponential mechanism that accesses the data only via one-dimensional robust statistics, then the resulting local sensitivity can be dramatically reduced. Using resilience, we can provide tight local sensitivity bounds. These tight bounds readily translate into near-optimal utility guarantees in several cases. We give a general recipe for applying HPTR to a given instance of a statistical estimation problem and demonstrate it on the canonical problems of mean estimation, linear regression, covariance estimation, and principal component analysis. We introduce a general utility analysis technique which proves that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
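For reference, a minimal scalar sketch of the classical Propose-Test-Release mechanism, one of the three components HPTR builds on; the interface below is our own simplification, not the HPTR algorithm itself:

```python
import numpy as np

def propose_test_release(data, f, proposed_sensitivity, dist_to_unstable,
                         epsilon, delta, rng=None):
    """Scalar Propose-Test-Release sketch (one of HPTR's three components).

    f: statistic to release.  proposed_sensitivity: the bound we "propose".
    dist_to_unstable(data): how many records must change before the local
    sensitivity of f exceeds the proposed bound.  This simple split costs
    roughly (2 * epsilon, delta)-DP overall.
    """
    rng = rng or np.random.default_rng()
    # Test: is the dataset noisily far from the region where f is unstable?
    test = dist_to_unstable(data) + rng.laplace(scale=1.0 / epsilon)
    if test <= np.log(1.0 / delta) / epsilon:
        return None   # refuse to answer; this accounts for the delta failure
    # Release: noise calibrated to the proposed sensitivity bound.
    return f(data) + rng.laplace(scale=proposed_sensitivity / epsilon)
```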
We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics. We give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs among sample complexity, accuracy, and privacy for a wide range of fundamental high-dimensional parameter estimation problems, including mean and covariance estimation. We show that this reduction can be implemented in polynomial time in some important special cases. In particular, using nearly-optimal polynomial-time robust estimators for the mean and covariance of high-dimensional Gaussians which are based on the Sum-of-Squares method, we design the first polynomial-time private estimators for these problems with nearly-optimal samples-accuracy-privacy tradeoffs. Our algorithms are also robust to a constant fraction of adversarially-corrupted samples.
A classical result in learning theory shows the equivalence of PAC learnability of binary hypothesis classes and the finiteness of VC dimension. Extending this to the multiclass setting was an open problem, which was settled in a recent breakthrough result characterizing multiclass PAC learnability via the DS dimension introduced earlier by Daniely and Shalev-Shwartz. In this work we consider list PAC learning where the goal is to output a list of $k$ predictions. List learning algorithms have been developed in several settings before and indeed, list learning played an important role in the recent characterization of multiclass learnability. In this work we ask: when is it possible to $k$-list learn a hypothesis class? We completely characterize $k$-list learnability in terms of a generalization of DS dimension that we call the $k$-DS dimension. Generalizing the recent characterization of multiclass learnability, we show that a hypothesis class is $k$-list learnable if and only if the $k$-DS dimension is finite.
We investigate the problem of testing whether a discrete probability distribution over an ordered domain is a histogram on a specified number of bins. One of the most common tools for the succinct approximation of distributions, $k$-histograms over $[n]$ are probability distributions that are piecewise constant over a set of $k$ intervals. The histogram testing problem is the following: given samples from an unknown distribution $\mathbf{p}$ on $[n]$, we want to distinguish between the cases that $\mathbf{p}$ is a $k$-histogram versus $\varepsilon$-far from any $k$-histogram, in total variation distance. Our main result is a sample near-optimal and computationally efficient algorithm for this testing problem, and a nearly-matching (within logarithmic factors) sample complexity lower bound. Specifically, we show that the histogram testing problem has sample complexity $\widetilde\Theta(\sqrt{nk}/\varepsilon + k/\varepsilon^2 + \sqrt{n}/\varepsilon^2)$.
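A quick reading of the three terms (our own remark, not from the abstract): since $1 \le k \le n$ implies $\sqrt{nk} \ge \max(k, \sqrt{n})$, the first term dominates for constant $\varepsilon$, while the $1/\varepsilon^2$ terms take over as $\varepsilon \to 0$:

\[ \widetilde\Theta\Big( \underbrace{\frac{\sqrt{nk}}{\varepsilon}}_{\text{dominant for constant } \varepsilon} + \underbrace{\frac{k}{\varepsilon^2} + \frac{\sqrt{n}}{\varepsilon^2}}_{\text{dominant as } \varepsilon \to 0} \Big). \]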
We study the problems of sequential prediction and online minimax regret with stochastically generated features under a general loss function. We introduce a notion of expected worst-case minimax regret that generalizes and encompasses the prior known minimax regrets. For such minimax regrets we establish tight upper bounds via a novel concept of stochastic global sequential covering. We show that for a hypothesis class of VC dimension $\mathsf{VC}$ and i.i.d. generated features of length $T$, the cardinality of the stochastic global sequential covering can be upper bounded with high probability (whp) by $e^{O(\mathsf{VC} \cdot \log^2 T)}$. We then improve this bound by introducing a new complexity measure called the Star-Littlestone dimension, and show that classes with Star-Littlestone dimension $\mathsf{SL}$ admit a stochastic global sequential covering of order $e^{O(\mathsf{SL} \cdot \log T)}$. We further establish upper bounds for real-valued classes with finite fat-shattering numbers. Finally, by applying information-theoretic tools from the fixed-design minimax regrets, we provide lower bounds for the expected worst-case minimax regret. We demonstrate the effectiveness of our approach by establishing tight bounds on the expected worst-case minimax regrets for logarithmic loss and general mixable losses.
We introduce a new method, based on the Johnson-Lindenstrauss lemma, for releasing answers to statistical queries with differential privacy. The key idea is to randomly project the query answers to a lower-dimensional space so that the distance between any two vectors of feasible query answers is preserved up to an additive error. Then we answer the projected queries using a simple noise-adding mechanism and lift the answers back to the original dimension. Using this method, we give, for the first time, purely differentially private mechanisms with optimal worst-case sample complexity under average error for answering a workload of $k$ queries over a universe of size $n$. As further applications, we give the first purely private efficient mechanisms with optimal sample complexity for computing the covariance of a bounded high-dimensional distribution and for answering 2-way marginal queries. We also show that, up to the dependence on the error, a variant of our mechanism is nearly optimal for every given query workload.
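A minimal sketch of the project-perturb-lift idea described above, with our own names, and without the sensitivity calibration an actual privacy proof requires:

```python
import numpy as np

def jl_private_answers(true_answers: np.ndarray, m: int, sigma: float,
                       rng=None) -> np.ndarray:
    """Project-perturb-lift sketch: JL-project the k-dim vector of true query
    answers down to m dimensions, add Gaussian noise there, lift back up.
    For an actual privacy guarantee, sigma must be calibrated to the
    sensitivity of the *projected* answers."""
    rng = rng or np.random.default_rng()
    k = true_answers.shape[0]
    P = rng.normal(size=(m, k)) / np.sqrt(m)   # JL matrix: E[P^T P] = I_k
    noisy = P @ true_answers + rng.normal(scale=sigma, size=m)
    return P.T @ noisy                         # lift back to k dimensions
```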
Much of reinforcement learning theory is built on top of oracles that are computationally hard to implement. Specifically for learning near-optimal policies in Partially Observable Markov Decision Processes (POMDPs), existing algorithms either need to make strong assumptions about the model dynamics (e.g. deterministic transitions) or assume access to an oracle for solving a hard planning or estimation problem as a subroutine. In this work we develop the first oracle-free learning algorithm for POMDPs under reasonable assumptions. Specifically, we give a quasipolynomial-time end-to-end algorithm for learning in 'observable' POMDPs, where observability is the assumption that well-separated distributions over states induce well-separated distributions over observations. Our techniques circumvent the more traditional approach of using the principle of optimism under uncertainty to promote exploration, instead giving a novel application of barycentric spanners to constructing policy covers.
Label Ranking (LR) corresponds to the problem of learning a hypothesis that maps features to rankings over a finite set of labels. We adopt a nonparametric regression approach to LR and obtain theoretical performance guarantees for this fundamental practical problem. We introduce a generative model for Label Ranking in noiseless and noisy nonparametric regression settings and provide sample complexity bounds for learning algorithms in both cases. In the noiseless setting, we study the LR problem with full rankings and provide computationally efficient algorithms using decision trees and random forests in the high-dimensional regime. In the noisy setting, we consider the more general cases of LR with incomplete and partial rankings from a statistical viewpoint and obtain sample complexity bounds using the one-versus-one approach of multiclass classification. Finally, we complement our theoretical contributions with experiments, aiming to understand how the input regression noise affects the observed output.
"Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy."
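For reference, the reformulation in question: a randomized algorithm $M$ satisfies $\rho$-zero-concentrated differential privacy ($\rho$-zCDP) if for all neighboring inputs $x, x'$ and all orders $\alpha \in (1, \infty)$,

\[ D_\alpha\bigl(M(x) \,\|\, M(x')\bigr) \le \rho \alpha, \]

where $D_\alpha$ denotes the Rényi divergence of order $\alpha$. For example, adding $\mathcal{N}(0, \sigma^2)$ noise to a statistic of sensitivity $\Delta$ satisfies $\rho$-zCDP with $\rho = \Delta^2 / (2\sigma^2)$.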
We give the first polynomial-time algorithm to estimate the mean of a $d$-variate probability distribution from $\tilde{O}(d)$ independent samples subject to pure differential privacy. Existing algorithms for this problem either incur exponential running time, require $\Omega(d^{1.5})$ samples, or satisfy only the weaker concentrated or approximate differential privacy conditions. In particular, all prior polynomial-time algorithms require $d^{1+\Omega(1)}$ samples to guarantee small privacy loss with 'cryptographically' high probability, $1 - 2^{-d^{\Omega(1)}}$, while our algorithm retains $\tilde{O}(d)$ sample complexity even in this stringent setting. Our main technique is a new approach to use the powerful Sum of Squares (SoS) method to design differentially private algorithms. Proofs-to-algorithms is a key theme in many recent works in high-dimensional algorithmic statistics: estimators which apparently require exponential running time, but whose analysis can be captured by low-degree Sum of Squares proofs, can be automatically turned into polynomial-time algorithms with the same provable guarantees. We demonstrate a similar proofs-to-private-algorithms phenomenon: instances of the workhorse exponential mechanism which apparently require exponential time, but which can be analyzed with low-degree SoS proofs, can be automatically turned into polynomial-time differentially private algorithms. We prove a meta-theorem capturing this phenomenon, which we expect to be of broad use in private algorithm design. Our techniques also draw new connections between differentially private and robust statistics in high dimensions. In particular, viewed through our proofs-to-private-algorithms lens, several well-studied SoS proofs from recent works in algorithmic robust statistics directly yield key components of our differentially private mean estimation algorithm.
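For reference, the workhorse exponential mechanism in its textbook discrete form (our own minimal sketch); over continuous candidate sets, as needed here, sampling from this distribution is the hard part that the SoS machinery addresses:

```python
import numpy as np

def exponential_mechanism(candidates, score, data, sensitivity, epsilon,
                          rng=None):
    """Textbook discrete exponential mechanism: sample a candidate with
    probability proportional to exp(epsilon * score / (2 * sensitivity)),
    where sensitivity bounds how much score(data, c) can change across
    neighboring datasets."""
    rng = rng or np.random.default_rng()
    scores = np.array([score(data, c) for c in candidates])
    logits = epsilon * scores / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())   # stabilized softmax
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]
```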