We introduce the $\pi$-test, a privacy-preserving algorithm for testing statistical independence between data distributed across multiple parties. Our algorithm relies on privately estimating the distance correlation between datasets, a quantitative measure of independence introduced in Székely et al. [2007]. We establish both additive and multiplicative error bounds on the utility of our differentially private test, which we believe will find applications in a variety of distributed hypothesis testing settings involving sensitive data.
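For reference, a minimal sketch of the two ingredients named above: the sample distance correlation of Székely et al. [2007] and a naive Laplace perturbation of it. The sensitivity bound `SENS` and the privacy parameter `eps` are illustrative placeholders, not the calibration derived for the $\pi$-test.

```python
import numpy as np

def _centered_dist(a):
    """Pairwise distance matrix, doubly centered (Szekely et al., 2007)."""
    a = np.atleast_2d(a.T).T          # treat 1-d input as a single column
    d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    """Sample distance correlation dCor(x, y), a value in [0, 1]."""
    A, B = _centered_dist(x), _centered_dist(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return 0.0 if denom == 0 else np.sqrt(max(dcov2, 0.0) / denom)

def laplace_release(value, sensitivity, eps, rng):
    """Naive Laplace mechanism: release value + Lap(sensitivity / eps)."""
    return value + rng.laplace(scale=sensitivity / eps)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = x ** 2 + 0.1 * rng.normal(size=200)          # nonlinear dependence
dcor = distance_correlation(x, y)
SENS = 0.05                                      # hypothetical sensitivity bound
print(dcor, laplace_release(dcor, sensitivity=SENS, eps=1.0, rng=rng))
```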
We introduce a differentially private method to measure nonlinear correlations between sensitive data hosted across two entities. We provide utility guarantees for our private estimator. To the best of our knowledge, ours is the first private estimator of nonlinear correlations in a multi-party setting. The important measure of nonlinear correlation we consider is distance correlation. Beyond exploratory data analysis, this work has direct applications to private feature screening, private independence testing, private k-sample tests, private multi-party causal inference, and private data synthesis. Code access: a link to publicly accessible code is provided in the supplementary file.
We present improved differentially private algorithms for identity testing of high-dimensional distributions. Specifically, for $d$-dimensional Gaussian distributions with known covariance $\Sigma$, we can test whether the distribution comes from $\mathcal{N}(\mu^*, \Sigma)$ for some fixed $\mu^*$ or from some $\mathcal{N}(\mu, \Sigma)$ with total variation distance at least $\alpha$ from $\mathcal{N}(\mu^*, \Sigma)$ with $(\varepsilon, 0)$-differential privacy, using only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} + \frac{d^{1/3}}{\alpha^{4/3} \cdot \varepsilon^{2/3}} + \frac{1}{\alpha \cdot \varepsilon}\right)\] samples if the algorithm is allowed to be computationally inefficient, and only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} + \frac{d^{1/4}}{\alpha \cdot \varepsilon}\right)\] samples for a computationally efficient algorithm. We also provide a matching lower bound showing that our computationally inefficient algorithm has optimal sample complexity. We also extend our algorithms to various related problems, including mean testing of Gaussians with bounded but unknown covariance, uniformity testing of product distributions over $\{-1, 1\}^d$, and tolerant testing. Our results improve over the previous best work of Canonne et al., and even our computationally efficient algorithm matches the optimal non-private sample complexity of $O\left(\frac{\sqrt{d}}{\alpha^2}\right)$ in many standard parameter settings. In addition, our results show that, surprisingly, private identity testing of $d$-dimensional Gaussians can be done with fewer samples than private identity testing of discrete distributions over a domain of size $d$~\cite{AcharyaSZ18}, refuting a conjectured lower bound of~\cite{CanonneKMUZ20}.
We initiate the study of differentially private (DP) estimation with access to a small amount of public data. For private estimation of $d$-dimensional Gaussians, we assume that the public data come from a Gaussian whose total variation distance from the Gaussian underlying the private data may be vanishingly small. We show that, under the constraints of pure or concentrated DP, $d+1$ public data samples are sufficient to remove any dependence on the range parameters of the private data distribution from the private sample complexity, whereas without public data this dependence is necessary. For separated Gaussian mixtures, we assume that the underlying public and private distributions are identical, and we consider two settings: (1) when given a dimension-independent amount of public data, the private sample complexity can be improved polynomially in terms of the number of mixture components, and any dependence on the range parameters of the distribution can be removed in the approximate DP case; (2) when given an amount of public data linear in the dimension, the private sample complexity can be made independent of the range parameters even under concentrated DP, and additional improvements can be made to the overall sample complexity.
In nonparametric independence testing, we observe i.i.d.\ data $\{(X_i,Y_i)\}_{i=1}^n$, where $X \in \mathcal{X}, Y \in \mathcal{Y}$ lie in any general spaces, and we wish to test the null that $X$ is independent of $Y$. Modern test statistics such as the kernel Hilbert-Schmidt Independence Criterion (HSIC) and Distance Covariance (dCov) have intractable null distributions due to the degeneracy of the underlying U-statistics. Thus, in practice, one often resorts to using permutation testing, which provides a nonasymptotic guarantee at the expense of recalculating the quadratic-time statistics (say) a few hundred times. This paper provides a simple but nontrivial modification of HSIC and dCov (called xHSIC and xdCov, pronounced ``cross'' HSIC/dCov) so that they have a limiting Gaussian distribution under the null, and thus do not require permutations. This requires building on the newly developed theory of cross U-statistics by Kim and Ramdas (2020), and in particular developing several nontrivial extensions of the theory in Shekhar et al. (2022), which developed an analogous permutation-free kernel two-sample test. We show that our new tests, like the originals, are consistent against fixed alternatives, and minimax rate optimal against smooth local alternatives. Numerical simulations demonstrate that compared to the full dCov or HSIC, our variants have the same power up to a $\sqrt 2$ factor, giving practitioners a new option for large problems or data-analysis pipelines where computation, not sample size, could be the bottleneck.
We study fine-grained error bounds for differentially private algorithms for counting under continual observation. Our main insight is that the matrix mechanism, when using lower-triangular matrices, can be used in the continual observation model. More specifically, we give an explicit factorization for the counting matrix $M_\mathsf{count}$ and upper bound the error explicitly. We also give a fine-grained analysis, specifying the exact constant in the upper bound. Our analysis is based on upper and lower bounds on the {\em completely bounded norm} (cb-norm) of $M_\mathsf{count}$. Along the way, we improve the best-known bound on the cb-norm of $M_\mathsf{count}$, due to Mathias (SIAM Journal on Matrix Analysis and Applications, 1993) and unimproved for 28 years, for a large range of the dimension of $M_\mathsf{count}$. Furthermore, we are the first to give concrete error bounds for various problems under continual observation such as binary counting, maintaining a histogram, releasing an approximately cut-preserving synthetic graph, many graph-based statistics, and substring and episode counting. Finally, we note that our result can be used to get a fine-grained error bound for non-interactive local learning, and the first lower bounds on the additive error for $(\epsilon,\delta)$-differentially-private counting under continual observation. Subsequent to this work, Henzinger et al. (SODA 2023) showed that our factorization also achieves fine-grained mean-squared error.
The first large-scale deployment of private federated learning uses differentially private counting in the continual release model as a subroutine (Google AI blog titled "Federated Learning with Formal Differential Privacy Guarantees"). In this case, a concrete bound on the error is very relevant to reduce the privacy parameter. The standard mechanism for continual counting is the binary mechanism. We present a novel mechanism and show that its mean squared error is both asymptotically optimal and a factor 10 smaller than the error of the binary mechanism. We also show that the constants in our analysis are almost tight by giving non-asymptotic lower and upper bounds that differ only in the constants of lower-order terms. Our algorithm is a matrix mechanism for the counting matrix and takes constant time per release. We also use our explicit factorization of the counting matrix to give an upper bound on the excess risk of the private learning algorithm of Denisov et al. (NeurIPS 2022). Our lower bound for any continual counting mechanism is the first tight lower bound on continual counting under approximate differential privacy. It is achieved using a new lower bound on a certain factorization norm, denoted by $\gamma_F(\cdot)$, in terms of the singular values of the matrix. In particular, we show that for any complex matrix, $A \in \mathbb{C}^{m \times n}$, \[ \gamma_F(A) \geq \frac{1}{\sqrt{m}}\|A\|_1, \] where $\|\cdot\|_1$ denotes the Schatten-1 norm. We believe this technique will be useful in proving lower bounds for a larger class of linear queries. To illustrate the power of this technique, we show the first lower bound on the mean squared error for answering parity queries.
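To make the matrix-mechanism view concrete, here is a naive sketch of factorization-based continual counting. It uses the classical fact that the lower-triangular all-ones matrix has a lower-triangular Toeplitz square root with entries $\binom{2k}{k}/4^k$ (the Taylor coefficients of $(1-x)^{-1/2}$), and adds Gaussian noise with a placeholder scale `sigma`; the exact constants and the constant-time-per-release implementation analyzed above are not reproduced.

```python
import numpy as np
from math import comb

def sqrt_counting_matrix(n):
    """Lower-triangular Toeplitz square root of the all-ones lower-triangular
    matrix M_count: L @ L == M_count, with L[i, j] = C(2(i-j), i-j) / 4**(i-j)."""
    c = np.array([comb(2 * k, k) / 4.0 ** k for k in range(n)])
    L = np.zeros((n, n))
    for i in range(n):
        L[i, : i + 1] = c[: i + 1][::-1]
    return L

def continual_counts(x, sigma, rng):
    """Release all prefix sums of the bit stream x as L @ (L @ x + z)."""
    L = sqrt_counting_matrix(len(x))
    z = rng.normal(scale=sigma, size=len(x))    # Gaussian noise; scale is a placeholder
    return L @ (L @ x + z)

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=16)
print(np.cumsum(x))                                           # true prefix sums
print(np.round(continual_counts(x, sigma=1.0, rng=rng), 1))   # noisy private estimates
```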
We introduce a universal framework for characterizing the statistical efficiency of statistical estimation problems with differential privacy guarantees. Our framework, which we call High-dimensional Propose-Test-Release (HPTR), builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism. Gluing all these together is the concept of resilience, which is central to robust statistical estimation. Resilience guides the design of the algorithm, the sensitivity analysis, and the analysis of the success probability of the test step. The key insight is that if we design an exponential mechanism that accesses the data only via one-dimensional robust statistics, then the resulting local sensitivity can be dramatically reduced. Using resilience, we can provide tight local sensitivity bounds. These tight bounds readily translate into near-optimal utility guarantees in several cases. We give a general recipe for applying HPTR to a given instance of a statistical estimation problem and demonstrate it on the canonical problems of mean estimation, linear regression, covariance estimation, and principal component analysis. We introduce a general utility analysis technique that proves that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
Classical asymptotic theory for statistical inference usually involves calibrating a statistic by fixing the dimension $d$ while letting the sample size $n$ increase to infinity. Recently, much effort has been dedicated towards understanding how these methods behave in high-dimensional settings, where $d$ and $n$ both increase to infinity together. This often leads to different inference procedures, depending on the assumptions about the dimensionality, leaving the practitioner in a bind: given a dataset with 100 samples in 20 dimensions, should they calibrate by assuming $n \gg d$, or $d/n \approx 0.2$? This paper considers the goal of dimension-agnostic inference; developing methods whose validity does not depend on any assumption on $d$ versus $n$. We introduce an approach that uses variational representations of existing test statistics along with sample splitting and self-normalization to produce a new test statistic with a Gaussian limiting distribution, regardless of how $d$ scales with $n$. The resulting statistic can be viewed as a careful modification of degenerate U-statistics, dropping diagonal blocks and retaining off-diagonal blocks. We exemplify our technique for some classical problems including one-sample mean and covariance testing, and show that our tests have minimax rate-optimal power against appropriate local alternatives. In most settings, our cross U-statistic matches the high-dimensional power of the corresponding (degenerate) U-statistic up to a $\sqrt{2}$ factor.
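As an illustration of the sample-splitting idea described above, here is a minimal sketch of a dimension-agnostic one-sample mean test: the second half of the data picks a direction, the first half is projected onto it and studentized. This is a simplified variant meant for intuition, not the exact cross U-statistic analyzed in the paper.

```python
import numpy as np

def cross_mean_test_stat(X, rng):
    """Dimension-agnostic one-sample mean test statistic (H0: E[X] = 0).

    The second half of the data picks a direction; the first half is projected
    onto it and studentized, giving a N(0, 1) limit under H0 for any d."""
    n = X.shape[0]
    idx = rng.permutation(n)
    X1, X2 = X[idx[: n // 2]], X[idx[n // 2 :]]
    direction = X2.mean(axis=0)                 # chosen independently of X1
    proj = X1 @ direction                       # i.i.d. scalars given X2
    return np.sqrt(len(proj)) * proj.mean() / proj.std(ddof=1)

rng = np.random.default_rng(0)
X_null = rng.normal(size=(500, 50))             # H0 holds
X_alt = rng.normal(size=(500, 50)) + 0.2        # H0 violated
print(cross_mean_test_stat(X_null, rng))        # approximately N(0, 1)
print(cross_mean_test_stat(X_alt, rng))         # large positive value
```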
In this work, we give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models with optimal dependence on the dimension in the sample complexity. In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrary tiny total variation error using $\widetilde{O}(d^2 \log \kappa)$ samples while tolerating a constant fraction of adversarial outliers. Here, $\kappa$ is the condition number of the target covariance matrix. The sample bound matches best non-private estimators in the dependence on the dimension (up to a polylogarithmic factor). We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight. Prior to our work, only identifiability results (yielding inefficient super-polynomial time algorithms) were known for the problem. In the approximate DP setting, we give an efficient algorithm to estimate an unknown Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2)$ samples while tolerating a constant fraction of adversarial outliers. Prior to our work, all efficient approximate DP algorithms incurred a super-quadratic sample cost or were not outlier-robust. For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde O(d)$, improving on a $\widetilde O(d^{1.5})$ bound from prior work. Our pure DP algorithm relies on a recursive private preconditioning subroutine that utilizes the recent work on private mean estimation [Hopkins et al., 2022]. Our approximate DP algorithms are based on a substantial upgrade of the method of stabilizing convex relaxations introduced in [Kothari et al., 2022].
The Maximal Information Coefficient (MIC) is a powerful statistic for identifying dependencies between variables. However, it may be applied to sensitive data, and releasing it could leak private information. As a solution, we present algorithms to approximate MIC in a way that provides differential privacy. We show that a natural application of the classical Laplace mechanism yields insufficient accuracy. We therefore introduce the MICr statistic, a new MIC approximation that is more compatible with differential privacy. We prove that MICr is a consistent estimator of MIC, and we provide two differentially private versions of it. We perform experiments on a variety of real and synthetic datasets. The results show that the private MICr statistics significantly outperform direct applications of the Laplace mechanism. Moreover, experiments on real-world datasets show usable accuracy when the sample size is at least moderately large.
We introduce a new method, based on the Johnson-Lindenstrauss lemma, for releasing answers to statistical queries with differential privacy. The key idea is to randomly project the query answers to a lower-dimensional space so that the distances between any two vectors of feasible query answers are preserved up to an additive error. We then answer the projected queries using a simple noise-adding mechanism and lift the answers back up to the original dimension. Using this method, we give, for the first time, purely differentially private mechanisms with optimal worst-case sample complexity under average error for answering a workload of $k$ queries over a universe of size $n$. As further applications, we give the first purely private efficient mechanisms with optimal sample complexity for computing the covariance of a bounded high-dimensional distribution and for answering 2-way marginal queries. We also show that, up to the dependence on the error, a variant of our mechanism is nearly optimal for every given query workload.
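The key ingredient referenced above, the Johnson-Lindenstrauss lemma, can be checked numerically: a random Gaussian projection to a much lower dimension approximately preserves pairwise distances. The sketch below illustrates only that ingredient, not the full query-release mechanism or its noise calibration; the dimensions and vectors are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 10_000, 200                                   # original / projected dimension
P = rng.normal(scale=1 / np.sqrt(m), size=(m, k))    # random JL projection

V = rng.uniform(size=(5, k))                         # hypothetical query-answer vectors
for i in range(len(V)):
    for j in range(i + 1, len(V)):
        d_orig = np.linalg.norm(V[i] - V[j])
        d_proj = np.linalg.norm(P @ (V[i] - V[j]))
        print(f"{d_orig:8.3f}  {d_proj:8.3f}")       # distances agree up to small error
```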
We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD). We present two distribution-free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.
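A minimal quadratic-time sketch of the MMD statistic, using a Gaussian kernel and a permutation threshold as the calibration; the bandwidth, level, and number of permutations are illustrative choices rather than anything prescribed above.

```python
import numpy as np

def mmd2_biased(X, Y, bandwidth):
    """Biased (V-statistic) estimate of MMD^2 with a Gaussian kernel."""
    Z = np.vstack([X, Y])
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * bandwidth ** 2))
    n = len(X)
    return K[:n, :n].mean() + K[n:, n:].mean() - 2 * K[:n, n:].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = rng.normal(loc=0.5, size=(200, 3))
stat = mmd2_biased(X, Y, bandwidth=1.0)

# Permutation calibration at level 0.05.
Z = np.vstack([X, Y])
perm_stats = []
for _ in range(200):
    idx = rng.permutation(len(Z))
    perm_stats.append(mmd2_biased(Z[idx[:200]], Z[idx[200:]], bandwidth=1.0))
threshold = np.quantile(perm_stats, 0.95)
print(stat, threshold, stat > threshold)
```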
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
We propose a novel nonparametric two-sample test based on the Maximum Mean Discrepancy (MMD), which is constructed by aggregating tests with different kernel bandwidths. This aggregation procedure, called MMDAgg, ensures that test power is maximized over the collection of kernels used, without requiring held-out data for kernel selection (which leads to a loss in test power) or arbitrary kernel choices such as the median heuristic. We work in the non-asymptotic framework and prove that our aggregated test is minimax adaptive over Sobolev balls. Our guarantees are not restricted to a specific kernel but hold for any product of one-dimensional translation-invariant characteristic kernels that are absolutely integrable. Moreover, our results apply to popular numerical procedures for determining the test threshold, namely permutations and the wild bootstrap. Through numerical experiments on both synthetic and real-world datasets, we demonstrate that MMDAgg outperforms alternative approaches to MMD kernel adaptation for two-sample testing.
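A simplified sketch of the aggregation idea: run the MMD test at several bandwidths and reject if any of them exceeds its permutation quantile at a Bonferroni-corrected level. MMDAgg itself uses a sharper, data-driven level correction; the bandwidth grid, level, and permutation count below are illustrative assumptions.

```python
import numpy as np

def gaussian_mmd2(X, Y, bw):
    """Biased MMD^2 estimate with a Gaussian kernel of bandwidth bw."""
    Z = np.vstack([X, Y])
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * bw ** 2))
    n = len(X)
    return K[:n, :n].mean() + K[n:, n:].mean() - 2 * K[:n, n:].mean()

def aggregated_mmd_test(X, Y, bandwidths, level=0.05, n_perm=200, seed=0):
    """Reject if MMD^2 at any bandwidth exceeds its permutation quantile at the
    Bonferroni-corrected level level/len(bandwidths)."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n = len(X)
    adj = level / len(bandwidths)
    for bw in bandwidths:
        stat = gaussian_mmd2(X, Y, bw)
        perm_stats = []
        for _ in range(n_perm):
            idx = rng.permutation(len(Z))
            perm_stats.append(gaussian_mmd2(Z[idx[:n]], Z[idx[n:]], bw))
        if stat > np.quantile(perm_stats, 1 - adj):
            return True
    return False

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 2))
Y = 1.5 * rng.normal(size=(150, 2))          # same mean, different scale
print(aggregated_mmd_test(X, Y, bandwidths=[0.5, 1.0, 2.0, 4.0]))
```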
Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model, and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves what we show is the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach both in terms of maintaining Type I error control, and power, compared to several existing approaches.
We provide a fairly general framework for reducing $(\varepsilon, \delta)$-differentially private (DP) statistical estimation to its non-private counterpart. As the main application of this framework, we give a polynomial-time and $(\varepsilon, \delta)$-DP algorithm for learning (unrestricted) Gaussian distributions in $\mathbb{R}^d$. The sample complexity of our approach for learning the Gaussian up to total variation distance $\alpha$ is $\widetilde{O}\left(\frac{d^2}{\alpha^2} + \frac{d^2\sqrt{\ln(1/\delta)}}{\alpha\varepsilon}\right)$, matching (up to logarithmic factors) the best known information-theoretic (non-efficient) sample complexity upper bound of Aden-Ali, Ashtiani, Kamath~(ALT'21). In independent work, Kamath, Mouzakis, Singhal, Steinke, and Ullman~(arXiv:2111.04609) proved a similar result using a different approach and with an $O(d^{5/2})$ sample complexity dependence on $d$. As another application of our framework, we provide the first polynomial-time $(\varepsilon, \delta)$-DP algorithm for robustly learning (unrestricted) Gaussians.
We propose a series of computationally efficient, nonparametric tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert-Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete $U$-statistics, with a computational cost that interpolates between linear time in the number of samples and quadratic time, as associated with classical $U$-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures from the null on various scales: we call the resulting tests MMDAggInc, HSICAggInc, and KSDAggInc. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete $U$-statistics, which is of independent interest. We derive uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: to our knowledge, this result is novel for tests based on incomplete $U$-statistics. We further show that in the quadratic-time case, the wild bootstrap incurs no penalty to test power compared with the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates obtained using oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. Across the three testing frameworks, we observe that our proposed linear-time aggregated tests obtain higher power than current state-of-the-art linear-time kernel tests.
We give the first polynomial-time, polynomial-sample, differentially private estimator for an arbitrary Gaussian distribution $\mathcal{N}(\mu, \Sigma)$ in $\mathbb{R}^d$. All previous estimators are either nonconstructive, with unbounded running time, or require the user to specify a priori bounds on the parameters $\mu$ and $\Sigma$. The primary new technical tool in our algorithm is a new differentially private preconditioner that takes samples from an arbitrary Gaussian $\mathcal{N}(0, \Sigma)$ and returns a matrix $A$ such that $A \Sigma A^T$ has constant condition number.
Differential privacy has become crucial for the real-world deployment of statistical and machine learning algorithms with rigorous privacy guarantees. The earliest statistical query for which differentially private mechanisms were developed was the release of the sample mean. In geometric statistics, the sample Fréchet mean represents one of the most fundamental statistical summaries, as it generalizes the sample mean to data belonging to nonlinear manifolds. In that spirit, the only geometric statistical query for which a differentially private mechanism has been developed so far is the release of the sample Fréchet mean: the \emph{Riemannian Laplace mechanism} was recently proposed to privatize the Fréchet mean on complete Riemannian manifolds. In many fields, the manifold of symmetric positive definite (SPD) matrices is used to model data spaces, including in medical imaging, where privacy requirements are key. We propose a novel, simple, and fast mechanism, the \emph{tangent Gaussian mechanism}, to compute a differentially private Fréchet mean on the SPD manifold endowed with the log-Euclidean Riemannian metric. We show that our new mechanism obtains a quadratic utility improvement in terms of the data dimension over the current and only available baseline. Our mechanism is also simpler in practice, as it does not require any expensive Markov chain Monte Carlo (MCMC) sampling, and it is computationally faster by multiple orders of magnitude, as confirmed by extensive experiments.
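A minimal sketch in the spirit of the tangent-space idea described above: compute the log-Euclidean Fréchet mean of SPD matrices (the matrix exponential of the average matrix logarithm) and add symmetric Gaussian noise in the tangent space before mapping back to the manifold. The noise scale `sigma` is a placeholder; the paper's sensitivity analysis and exact calibration are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm, logm

def tangent_gaussian_frechet_mean(spd_matrices, sigma, rng):
    """Log-Euclidean Frechet mean of SPD matrices, with Gaussian noise added in
    the tangent (matrix-logarithm) space before mapping back to the manifold."""
    logs = [logm(X) for X in spd_matrices]
    mean_log = np.real(np.mean(logs, axis=0))
    noise = rng.normal(scale=sigma, size=mean_log.shape)
    noise = (noise + noise.T) / 2               # keep the tangent vector symmetric
    return expm(mean_log + noise)

rng = np.random.default_rng(0)
d, n = 3, 50
data = []
for _ in range(n):
    A = rng.normal(size=(d, d))
    data.append(A @ A.T + d * np.eye(d))        # random well-conditioned SPD matrices
print(tangent_gaussian_frechet_mean(data, sigma=0.05, rng=rng))
```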