Statistical model checking is a class of sequential algorithms that can verify specifications of interest on an ensemble of cyber-physical systems (e.g., whether 99% of cars from a production batch meet a requirement on their energy efficiency). These algorithms infer the probability that a system satisfies a given specification, with provable statistical guarantees, by drawing a sufficient number of independent and identically distributed samples. During statistical model checking, the values of the samples (e.g., a user's car energy efficiency) may be inferred, raising privacy concerns in consumer-level applications (e.g., automotive and medical devices). This paper addresses the privacy of statistical model checking algorithms from the viewpoint of differential privacy. These algorithms are sequential, drawing samples until a condition on their values is met. We show that revealing the number of drawn samples can violate privacy. We also show that the standard exponential mechanism, which randomizes the output of an algorithm, fails to achieve differential privacy in the setting of sequential algorithms. Instead, we relax the conservative requirement of differential privacy that the sensitivity of the algorithm's output be bounded for any perturbation of any dataset. We propose a new notion of differential privacy, which we call expected differential privacy. We then propose a novel expected-sensitivity analysis for sequential algorithms, and a corresponding exponential mechanism that randomizes the termination time to achieve expected differential privacy. We apply the proposed mechanism to statistical model checking algorithms to preserve the privacy of their drawn samples. The utility of the proposed algorithms is demonstrated in a case study.
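The idea of randomizing the termination time can be illustrated with a minimal sketch of an exponential mechanism over candidate stopping times. The utility function, candidate range, and expected-sensitivity bound below are hypothetical placeholders, not the calibration from the paper:

```python
import math
import random

def private_stop_time(true_stop, candidates, epsilon, expected_sensitivity):
    """Exponential mechanism over termination times: candidate t gets
    utility -|t - true_stop|, scaled by a bound on the *expected*
    sensitivity of the stopping time rather than a worst-case bound."""
    weights = [math.exp(-epsilon * abs(t - true_stop) / (2.0 * expected_sensitivity))
               for t in candidates]
    r = random.random() * sum(weights)
    acc = 0.0
    for t, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return t
    return candidates[-1]

random.seed(0)
true_stop = 120                      # samples the sequential test actually drew
candidates = list(range(100, 161))   # plausible termination times (illustrative)
noisy_stop = private_stop_time(true_stop, candidates, epsilon=1.0,
                               expected_sensitivity=5.0)
```

The released stopping time concentrates near the true one but no longer reveals it exactly.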
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model.
Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
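As a concrete instance of a local (randomized response) algorithm, here is a minimal sketch of binary randomized response with a debiased mean estimate; the population and parameter values are illustrative only:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it."""
    p_true = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon):
    """Invert the known flipping probability to estimate the true mean."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    raw = sum(reports) / len(reports)
    return (raw - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(1)
data = [1] * 700 + [0] * 300                      # true mean is 0.7
reports = [randomized_response(b, 1.0) for b in data]
est = debiased_mean(reports, 1.0)
```

Each individual's report is privatized before leaving their device, which is exactly the local-model constraint characterized by the SQ equivalence above.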
In this work, we give efficient algorithms for privately estimating a Gaussian distribution in both pure and approximate differential privacy (DP) models with optimal dependence on the dimension in the sample complexity. In the pure DP setting, we give an efficient algorithm that estimates an unknown $d$-dimensional Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2 \log \kappa)$ samples while tolerating a constant fraction of adversarial outliers. Here, $\kappa$ is the condition number of the target covariance matrix. The sample bound matches the best non-private estimators in the dependence on the dimension (up to a polylogarithmic factor). We prove a new lower bound on differentially private covariance estimation to show that the dependence on the condition number $\kappa$ in the above sample bound is also tight. Prior to our work, only identifiability results (yielding inefficient super-polynomial time algorithms) were known for the problem. In the approximate DP setting, we give an efficient algorithm to estimate an unknown Gaussian distribution up to an arbitrarily tiny total variation error using $\widetilde{O}(d^2)$ samples while tolerating a constant fraction of adversarial outliers. Prior to our work, all efficient approximate DP algorithms incurred a super-quadratic sample cost or were not outlier-robust. For the special case of mean estimation, our algorithm achieves the optimal sample complexity of $\widetilde O(d)$, improving on a $\widetilde O(d^{1.5})$ bound from prior work. Our pure DP algorithm relies on a recursive private preconditioning subroutine that utilizes the recent work on private mean estimation [Hopkins et al., 2022]. Our approximate DP algorithms are based on a substantial upgrade of the method of stabilizing convex relaxations introduced in [Kothari et al., 2022].
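The paper's recursive preconditioning machinery is involved; as a much simpler illustration of the basic ingredient, here is clip-and-noise private mean estimation in one dimension. The clipping radius and privacy parameters are arbitrary placeholders, and the noise calibration is the standard Gaussian-mechanism formula, not the paper's method:

```python
import math
import random

def private_mean_1d(samples, clip_radius, epsilon, delta):
    """Clip each sample to [-R, R], average, and add Gaussian noise
    calibrated to the l2 sensitivity 2R/n of the clipped mean
    (standard Gaussian-mechanism calibration)."""
    n = len(samples)
    clipped = [max(-clip_radius, min(clip_radius, x)) for x in samples]
    sensitivity = 2.0 * clip_radius / n
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon
    return sum(clipped) / n + random.gauss(0.0, sigma)

random.seed(2)
data = [random.gauss(3.0, 1.0) for _ in range(5000)]
priv_mean = private_mean_1d(data, clip_radius=10.0, epsilon=1.0, delta=1e-6)
```

The high-dimensional algorithms above essentially learn a good "clipping region" (a preconditioner) privately so that this kind of noise addition remains accurate.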
We give the first polynomial-time and polynomial-sample $(\epsilon,\delta)$-differentially private (DP) algorithms for estimating the mean, covariance, and higher moments in the presence of a constant fraction of adversarial outliers. Our algorithms succeed for families of distributions that satisfy two well-studied properties in recent works on algorithmic robust estimation: certifiable subgaussianity of directional moments and certifiable hypercontractivity of degree-2 polynomials. Our recovery guarantees hold in the "right affine-invariant norms": Mahalanobis distance for the mean, multiplicative spectral and relative Frobenius distance guarantees for the covariance, and injective norms for higher moments. Prior works obtained private robust algorithms for mean estimation of subgaussian distributions with bounded covariance. For covariance estimation, ours is the first efficient algorithm (even in the absence of outliers) that succeeds without any condition-number assumptions. Our algorithms arise out of a new framework that provides a general blueprint for modifying convex relaxations for robust estimation to satisfy strong worst-case stability guarantees in the appropriate parameter norms whenever the algorithms produce witnesses of correctness in their run. We verify such guarantees for a modification of standard sum-of-squares (SoS) semidefinite programming relaxations for robust estimation. Our privacy guarantees are obtained by combining stability guarantees with a new "estimate-dependent" noise injection mechanism in which noise scales with the eigenvalues of the estimated covariance. We believe this framework will be useful more generally in obtaining DP counterparts of robust estimators. Independently of our work, Ashtiani and Liaw [AL21] also obtained a polynomial-time and -sample private robust estimation algorithm for Gaussian distributions.
"Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy."
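The Rényi-divergence reformulation can be made concrete for the Gaussian mechanism, whose order-$\alpha$ Rényi divergence between outputs on neighboring inputs grows linearly in $\alpha$; the sketch below also applies the standard zCDP-to-$(\epsilon,\delta)$ conversion that follows from this framework, with illustrative parameter values:

```python
import math

def renyi_divergence_gaussian(alpha, sensitivity, sigma):
    """Renyi divergence of order alpha between N(0, sigma^2) and
    N(sensitivity, sigma^2): alpha * sensitivity^2 / (2 sigma^2).
    The linear growth in alpha is exactly the zCDP form of
    concentrated differential privacy."""
    return alpha * sensitivity ** 2 / (2.0 * sigma ** 2)

def zcdp_to_dp(rho, delta):
    """Standard conversion: rho-zCDP implies
    (rho + 2*sqrt(rho * ln(1/delta)), delta)-DP for every delta > 0."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

rho = renyi_divergence_gaussian(1.0, sensitivity=1.0, sigma=2.0)  # rho = 1/8
eps = zcdp_to_dp(rho, delta=1e-5)
```

Composition is then additive in $\rho$, which is what makes the concentrated-DP analyses sharper than naive $(\epsilon,\delta)$ accounting.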
In statistical learning and analysis from shared data, which is increasingly widely adopted in platforms such as federated learning and meta-learning, there are two major concerns: privacy and robustness. Each participating individual should be able to contribute without the fear of leaking one's sensitive information. At the same time, the system should be robust in the presence of malicious participants inserting corrupted data. Recent algorithms for learning from shared data focus on one of these threats, leaving the system vulnerable to the other. We bridge this gap for the canonical problem of estimating the mean from i.i.d. samples. We introduce PRIME, the first efficient algorithm that achieves both privacy and robustness for a wide range of distributions. We further complement this result with a novel exponential-time algorithm that improves the sample complexity of PRIME, achieving a near-optimal guarantee and matching a known lower bound for (non-robust) private mean estimation. This proves that there is no extra statistical cost to simultaneously guaranteeing privacy and robustness.
The Maximal Information Coefficient (MIC) is a powerful statistic that identifies dependencies between variables. However, it may be applied to sensitive data, and releasing it could leak private information. As a solution, we present algorithms to approximate MIC in a way that provides differential privacy. We show that a natural application of the classic Laplace mechanism yields insufficient accuracy. We therefore introduce the MICr statistic, a new MIC approximation that is more compatible with differential privacy. We prove that MICr is a consistent estimator of MIC, and we provide two differentially private versions of it. We perform experiments on a variety of real and synthetic datasets. The results show that the private MICr statistics significantly outperform the direct application of the Laplace mechanism. Moreover, experiments on real-world datasets show accuracy that is usable when the sample size is at least moderately large.
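For contrast with the MICr approach, the direct Laplace-mechanism release of a bounded statistic looks like the following sketch. The MIC value and the sensitivity bound are hypothetical placeholders; the abstract's point is precisely that the natural sensitivity of MIC is too large for this to be accurate:

```python
import math
import random

def laplace_noise(scale):
    """Inverse-CDF sampling of Laplace(0, scale) noise."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def laplace_release(value, sensitivity, epsilon):
    """Laplace mechanism: add Laplace(sensitivity / epsilon) noise."""
    return value + laplace_noise(sensitivity / epsilon)

random.seed(3)
mic_value = 0.42   # hypothetical MIC computed on sensitive data
noisy = laplace_release(mic_value, sensitivity=0.2, epsilon=1.0)
```

When the sensitivity bound is loose, the added noise swamps the signal, which motivates replacing MIC with a statistic whose sensitivity is genuinely small.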
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs. Researchers primarily operate from a privacy-first perspective, setting strict privacy requirements and minimizing risk subject to these constraints. Practitioners often desire an accuracy-first perspective, possibly satisfied with the greatest privacy they can obtain subject to achieving sufficiently small error. Ligett et al. have introduced a "noise reduction" algorithm to address the latter perspective. The authors show that by adding correlated Laplace noise and progressively reducing it on demand, it is possible to produce a sequence of increasingly accurate private parameter estimates while only paying a privacy cost for the least noisy estimate released. In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism. The Brownian mechanism first adds Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion. Then, at the practitioner's discretion, noise is gradually reduced by tracing back along the Brownian path to an earlier time. Our mechanism is more naturally applicable to the common setting of bounded $\ell_2$-sensitivity, empirically outperforms existing work on common statistical tasks, and provides customizable control of privacy loss over the entire interaction with the practitioner. We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm that provides adaptive privacy guarantees. Overall, our results demonstrate that one can reach the limits of utility while still maintaining strong levels of privacy.
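The core of the noise-reduction idea can be sketched as follows: simulate a Brownian motion, then release the estimate plus $B(t)$ for progressively smaller $t$, so each later release is strictly less noisy than the previous one. This toy version omits the privacy accounting entirely; the times and true value are illustrative:

```python
import math
import random

def brownian_noise_path(times):
    """Simulate standard Brownian motion at the given increasing times:
    B(t) ~ N(0, t), built from independent Gaussian increments."""
    path, b, prev = {}, 0.0, 0.0
    for t in times:
        b += random.gauss(0.0, math.sqrt(t - prev))
        path[t] = b
        prev = t
    return path

random.seed(4)
true_value = 10.0
times = [0.25, 0.5, 1.0, 2.0, 4.0]   # noise variances to step back through
path = brownian_noise_path(times)

# Release from noisiest to least noisy: true_value + B(t) for decreasing t.
releases = [true_value + path[t] for t in reversed(times)]
```

Because the releases share one Brownian path, revealing an earlier (less noisy) point subsumes the information in the later ones, which is why only the least noisy release is charged for privacy.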
We present improved differentially private algorithms for identity testing of high-dimensional distributions. Specifically, for a $d$-dimensional Gaussian distribution with known covariance $\Sigma$, we can test whether the distribution comes from $\mathcal{N}(\mu^*, \Sigma)$ for some fixed $\mu^*$, or from some $\mathcal{N}(\mu, \Sigma)$ with total variation distance at least $\alpha$ from $\mathcal{N}(\mu^*, \Sigma)$, with $(\varepsilon, 0)$-differential privacy, using only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} + \frac{d^{1/3}}{\alpha^{4/3} \cdot \varepsilon^{2/3}} + \frac{1}{\alpha \cdot \varepsilon}\right)\] samples if the algorithm is allowed to be computationally inefficient, and only \[\tilde{O}\left(\frac{d^{1/2}}{\alpha^2} + \frac{d^{1/4}}{\alpha \cdot \varepsilon}\right)\] samples for a computationally efficient algorithm. We also provide a matching lower bound showing that our computationally inefficient algorithm has optimal sample complexity. We also extend our algorithms to various related problems, including mean testing of Gaussians with bounded but unknown covariance, uniformity testing of product distributions over $\{-1, 1\}^d$, and tolerant testing. Our results improve over the previous best work of Canonne et al.\ in many standard parameter settings. In addition, our results show that, surprisingly, private identity testing of $d$-dimensional Gaussians can be done with fewer samples than private identity testing of discrete distributions over a domain of size $d$ \cite{AcharyaSZ18}, refuting a conjectured lower bound of~\cite{CanonneKMUZ20}.
Imagine a group of citizens willing to collectively contribute their personal data for the common good, to produce socially useful information resulting from data analytics or machine learning computations. Sharing raw personal data with a centralized server performing the computation could raise concerns about privacy and perceived risks. Instead, citizens may trust each other and their own devices to engage in a decentralized computation to collaboratively produce an aggregate data release to be shared. In the context of secure computing nodes exchanging messages over secure channels at runtime, a key security issue is to protect against external attackers observing the traffic, whose dependence on the data may reveal personal information. Existing solutions are designed for the cloud setting, with the goal of hiding all properties of the underlying dataset, and do not address the specific privacy and efficiency challenges that arise in the above context. In this paper, we define a general execution model to control the data-dependence of communications in user-side decentralized computations, in which differential privacy guarantees for communication patterns in global execution plans can be analyzed by combining guarantees obtained on local clusters of nodes. We propose a set of algorithms that allow trading off between privacy, utility, and efficiency. Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling. We illustrate the usefulness of our proposal on two representative examples of decentralized execution plans with data-dependent communications.
Bayesian learning via stochastic gradient Langevin dynamics (SGLD) has been suggested for differentially private learning. While previous research provides differential privacy bounds for SGLD at the initial steps of the algorithm or when close to convergence, the question of what differential privacy guarantees can be made in between remains unanswered. This interim region is of great importance, especially for Bayesian neural networks, as it is hard to guarantee convergence to the posterior. This paper shows that using SGLD might result in unbounded privacy loss in this interim region, even when sampling from the posterior.
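For reference, a single SGLD update is just a noisy gradient step whose injected Gaussian noise has variance equal to the step size; the toy target below (a standard normal posterior, so $\nabla \log p(\theta) = -\theta$) is illustrative only:

```python
import math
import random

def sgld_step(theta, grad_log_post, eta):
    """One SGLD update: half a gradient step on the log-posterior plus
    Gaussian noise with variance equal to the step size eta."""
    return theta + 0.5 * eta * grad_log_post(theta) + random.gauss(0.0, math.sqrt(eta))

random.seed(5)
grad_log_post = lambda th: -th       # toy target posterior: N(0, 1)
theta, eta = 5.0, 0.1
trace = []
for _ in range(2000):
    theta = sgld_step(theta, grad_log_post, eta)
    trace.append(theta)
```

The privacy question raised by the abstract concerns exactly this chain: the noise protects each step, but the cumulative privacy loss across the interim iterates need not stay bounded.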
We introduce a new approach for releasing answers to statistical queries with differential privacy, based on the Johnson-Lindenstrauss lemma. The key idea is to randomly project the query answers to a lower-dimensional space so that the distance between any two vectors of feasible query answers is preserved up to an additive error. Then we answer the projected queries using a simple noise-addition mechanism, and lift the answers back to the original dimension. Using this approach, we give, for the first time, purely differentially private mechanisms with optimal worst-case sample complexity under average error for answering a workload of $k$ queries over a universe of size $N$. As other applications, we give the first purely private efficient mechanisms with optimal sample complexity for computing the covariance of a bounded high-dimensional distribution, and for answering 2-way marginal queries. We also show that, up to the dependence on the error, a variant of our mechanism is nearly optimal for every given query workload.
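The projection step can be sketched with a dense Gaussian JL matrix. This toy version only shows projecting the answer vector and adding noise in the low-dimensional space; the lifting step and the exact noise calibration from the paper are omitted, and the dimensions and noise scale are arbitrary placeholders:

```python
import math
import random

def jl_matrix(k, d):
    """Random Gaussian JL matrix with N(0, 1/k) entries: it preserves
    l2 norms and distances between vectors up to small distortion, w.h.p."""
    return [[random.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(d)]
            for _ in range(k)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

random.seed(6)
d, k = 200, 50
answers = [1.0 if i % 3 == 0 else 0.0 for i in range(d)]  # true query answers
P = jl_matrix(k, d)
projected = matvec(P, answers)
# Add noise in the low-dimensional space (scale is an arbitrary placeholder).
noisy_projected = [y + random.gauss(0.0, 0.5) for y in projected]
```

The payoff is that noise needs to be added in only $k \ll d$ coordinates, while the JL guarantee keeps the projected answers geometrically faithful to the originals.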
We introduce a universal framework for characterizing the statistical efficiency of statistical estimation problems with differential privacy guarantees. Our framework, which we call High-dimensional Propose-Test-Release (HPTR), builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism. Gluing all these together is the concept of resilience, which is central to robust statistical estimation. Resilience guides the design of the algorithm, the sensitivity analysis, and the success-probability analysis of the test step. The key insight is that if we design an exponential mechanism that accesses the data only via one-dimensional robust statistics, then the resulting local sensitivity can be dramatically reduced. Using resilience, we can provide tight local sensitivity bounds. These tight bounds readily translate into near-optimal utility guarantees in several cases. We give a general recipe for applying HPTR to a given instance of a statistical estimation problem and demonstrate it on the canonical problems of mean estimation, linear regression, covariance estimation, and principal component analysis. We introduce a general utility analysis technique that proves that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
We propose a general optimization-based framework for computing differentially private M-estimators and a new method for constructing differentially private confidence regions. First, we show that robust statistics can be used in conjunction with noisy gradient descent or noisy Newton methods in order to obtain optimal private estimators with global linear or quadratic convergence, respectively. We establish local and global convergence guarantees under local strong convexity and self-concordance, showing that our private estimators converge with high probability to a nearly optimal neighborhood of the non-private M-estimators. Second, we tackle the problem of parametric inference by constructing differentially private estimators of the asymptotic variance of our private M-estimators. This naturally leads to approximate pivotal statistics for constructing confidence regions and conducting hypothesis testing. We demonstrate the effectiveness of a bias correction that leads to enhanced small-sample empirical performance in simulations. We illustrate the benefits of our methods in several numerical examples.
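A minimal sketch of the noisy-gradient-descent ingredient, for the simplest M-estimator (the sample mean under squared loss) with per-sample gradient clipping; the step size, clipping level, and noise scale are illustrative, not the paper's calibration:

```python
import random

def noisy_gradient_descent(data, steps, lr, clip, noise_scale):
    """Noisy GD for argmin_theta sum_i (x_i - theta)^2 / 2, with per-sample
    gradient clipping and Gaussian noise on each aggregated gradient."""
    theta, n = 0.0, len(data)
    for _ in range(steps):
        grads = [max(-clip, min(clip, theta - x)) for x in data]
        g = sum(grads) / n + random.gauss(0.0, noise_scale)
        theta -= lr * g
    return theta

random.seed(7)
data = [random.gauss(2.0, 1.0) for _ in range(2000)]
theta_hat = noisy_gradient_descent(data, steps=50, lr=0.5, clip=4.0,
                                   noise_scale=0.01)
```

Clipping bounds each sample's influence on the gradient (the sensitivity), so the injected noise can be small while the iterates still contract linearly toward the non-private M-estimator.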
We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics. We give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs among sample complexity, accuracy, and privacy for a wide range of fundamental high-dimensional parameter estimation problems, including mean and covariance estimation. We show that this reduction can be implemented in polynomial time in some important special cases. In particular, using nearly-optimal polynomial-time robust estimators for the mean and covariance of high-dimensional Gaussians which are based on the Sum-of-Squares method, we design the first polynomial-time private estimators for these problems with nearly-optimal samples-accuracy-privacy tradeoffs. Our algorithms are also robust to a constant fraction of adversarially-corrupted samples.
Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006) to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
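Objective perturbation can be sketched in one dimension for regularized logistic regression: draw a random linear term once, add it to the objective, and then optimize as usual. The noise scale below is a simplified placeholder rather than the exact calibration of Chaudhuri et al., and the data are synthetic:

```python
import math
import random

def laplace_noise(scale):
    """Inverse-CDF sampling of Laplace(0, scale) noise."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def objective_perturbation_logreg(xs, ys, lam, epsilon, steps=500, lr=0.1):
    """1-D regularized logistic regression (labels in {-1, +1}) with
    objective perturbation: draw a random linear term b*w/n once, add it
    to the objective, then minimize by plain gradient descent."""
    n = len(xs)
    b = laplace_noise(2.0 / epsilon)   # simplified noise scale (placeholder)
    w = 0.0
    for _ in range(steps):
        grad = sum(-y * x / (1.0 + math.exp(y * w * x))
                   for x, y in zip(xs, ys)) / n
        grad += 2.0 * lam * w + b / n
        w -= lr * grad
    return w

random.seed(8)
xs = [random.gauss(0.0, 1.0) for _ in range(500)]
ys = [1 if x + random.gauss(0.0, 0.5) > 0 else -1 for x in xs]
w_priv = objective_perturbation_logreg(xs, ys, lam=0.01, epsilon=1.0)
```

Unlike output perturbation, the noise enters before optimization, so the strongly convex objective absorbs much of it; this is the intuition behind the empirical advantage reported above.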
Many modern machine learning algorithms are composed of simple private algorithms; thus, an increasingly important problem is to efficiently compute the overall privacy loss under composition. In this study, we introduce the Edgeworth Accountant, an analytical approach to composing differential privacy guarantees of private algorithms. The Edgeworth Accountant first uses the $f$-differential privacy framework to losslessly track the privacy loss under composition, which allows us to express the privacy guarantees using privacy-loss log-likelihood ratios (PLLRs). As the name suggests, this accountant next uses the Edgeworth expansion to upper and lower bound the probability distribution of the sum of the PLLRs. Moreover, by relying on a technique for approximating complex distributions using simple ones, we demonstrate that the Edgeworth Accountant can be applied to the composition of any noise-addition mechanism. Owing to certain appealing features of the Edgeworth expansion, the $(\epsilon, \delta)$-differential privacy bounds offered by this accountant are non-asymptotic, with essentially no extra computational cost, as opposed to prior approaches whose running times increase with the number of compositions. Finally, we demonstrate that our upper and lower $(\epsilon, \delta)$-differential privacy bounds are tight in certain regimes of federated analytics and the training of private deep learning models.
We study fine-grained error bounds for differentially private algorithms for counting under continual observation. Our main insight is that the matrix mechanism when using lower-triangular matrices can be used in the continual observation model. More specifically, we give an explicit factorization for the counting matrix $M_\mathsf{count}$ and upper bound the error explicitly. We also give a fine-grained analysis, specifying the exact constant in the upper bound. Our analysis is based on upper and lower bounds of the {\em completely bounded norm} (cb-norm) of $M_\mathsf{count}$. Along the way, we improve the best-known bound, which had stood for 28 years, by Mathias (SIAM Journal on Matrix Analysis and Applications, 1993) on the cb-norm of $M_\mathsf{count}$ for a large range of the dimension of $M_\mathsf{count}$. Furthermore, we are the first to give concrete error bounds for various problems under continual observation such as binary counting, maintaining a histogram, releasing an approximately cut-preserving synthetic graph, many graph-based statistics, and substring and episode counting. Finally, we note that our result can be used to get a fine-grained error bound for non-interactive local learning, and the first lower bounds on the additive error for $(\epsilon,\delta)$-differentially-private counting under continual observation. Subsequent to this work, Henzinger et al. (SODA2023) showed that our factorization also achieves fine-grained mean-squared error.
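As a baseline for the factorization mechanisms discussed in this abstract, here is a sketch of the classic binary-tree mechanism for counting under continual observation; the privacy calibration is the textbook one (each item touches at most $\log T + 1$ tree nodes), and the stream and $\epsilon$ are illustrative:

```python
import math
import random

def laplace_noise(scale):
    """Inverse-CDF sampling of Laplace(0, scale) noise."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def continual_counts(stream, epsilon):
    """Binary-tree mechanism: every prefix count is assembled from
    O(log T) noisy dyadic partial sums, each perturbed with Laplace
    noise scaled to the number of tree levels."""
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T)))
    scale = (levels + 1) / epsilon   # each item appears in <= levels+1 nodes
    noisy = {}
    for l in range(levels + 1):
        size = 2 ** l
        for j in range(0, T, size):
            noisy[(l, j)] = sum(stream[j:j + size]) + laplace_noise(scale)
    counts = []
    for t in range(1, T + 1):
        est, pos = 0.0, 0
        for l in range(levels, -1, -1):   # greedy dyadic cover of [0, t)
            size = 2 ** l
            if pos + size <= t:
                est += noisy[(l, pos)]
                pos += size
        counts.append(est)
    return counts

random.seed(10)
stream = [1] * 64
counts = continual_counts(stream, epsilon=50.0)
```

This tree construction is itself a lower-triangular factorization of $M_\mathsf{count}$; the abstract's contribution is an explicit factorization with provably better fine-grained constants.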
In this paper, we propose a Bayesian method for statistical model checking (SMC) of probabilistic hyperproperties specified in the logic HyperPCTL* on discrete-time Markov chains (DTMCs). While SMC of HyperPCTL* using the sequential probability ratio test (SPRT) has been explored before, we develop an alternative SMC algorithm based on Bayesian hypothesis testing. In comparison with PCTL*, verifying HyperPCTL* formulae is complex owing to their simultaneous interpretation over multiple paths of the DTMC. In addition, extending the bottom-up model-checking algorithm of the non-probabilistic setting is not straightforward, since SMC does not return exact satisfaction of subformulae; instead, it returns the correct answer only with high confidence. We propose a recursive algorithm for SMC of HyperPCTL* based on a modified Bayes test that factors in the uncertainty of the recursive satisfaction results. We have implemented our algorithm in a Python toolbox, Hybrover, and compared our approach with the SPRT-based SMC. Our experimental evaluation demonstrates that our Bayesian SMC algorithm performs better both in terms of the verification time and the number of samples required to deduce satisfaction of a given HyperPCTL* formula.
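The Bayesian hypothesis-testing loop can be sketched as follows: draw Bernoulli verdicts from sampled paths, maintain a Beta posterior over the satisfaction probability, and stop once the posterior mass above the threshold is decisive. All names and parameters here are illustrative, and the sketch omits the paper's handling of uncertainty from recursive subformula checks:

```python
import math
import random

def posterior_above(successes, trials, theta, prior_a=1.0, prior_b=1.0):
    """Posterior probability that the satisfaction probability exceeds theta,
    under a Beta(prior_a, prior_b) prior and Bernoulli path verdicts.
    Computed by midpoint integration of the Beta posterior density."""
    a, b = prior_a + successes, prior_b + trials - successes
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    n_grid, total = 20000, 0.0
    for i in range(n_grid):
        x = theta + (1.0 - theta) * (i + 0.5) / n_grid
        total += math.exp(log_norm + (a - 1.0) * math.log(x)
                          + (b - 1.0) * math.log(1.0 - x))
    return total * (1.0 - theta) / n_grid

def bayesian_smc(sample_path_checker, theta, threshold=0.99, max_samples=10000):
    """Draw path verdicts until the posterior is decisive either way."""
    succ = 0
    for n in range(1, max_samples + 1):
        succ += sample_path_checker()
        p = posterior_above(succ, n, theta)
        if p > threshold:
            return True, n
        if p < 1.0 - threshold:
            return False, n
    return None, max_samples

random.seed(9)
# Hypothetical checker: each sampled path satisfies the formula w.p. 0.9.
checker = lambda: 1 if random.random() < 0.9 else 0
verdict, n_used = bayesian_smc(checker, theta=0.5)
```

The farther the true satisfaction probability lies from the threshold, the fewer samples the test needs before stopping, which is the source of the sample savings reported over SPRT.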
We present a fairly general framework for reducing $(\varepsilon, \delta)$-differentially private (DP) statistical estimation to its non-private counterpart. As the main application of this framework, we give a polynomial-time and $(\varepsilon, \delta)$-DP algorithm for learning (unrestricted) Gaussian distributions in $\mathbb{R}^d$. The sample complexity of our approach for learning the Gaussian up to total variation distance $\alpha$ is $\widetilde{O}\left(\frac{d^2}{\alpha^2} + \frac{d^2 \sqrt{\ln(1/\delta)}}{\alpha \varepsilon}\right)$, matching (up to logarithmic factors) the best known information-theoretic (non-efficient) sample complexity upper bound of Aden-Ali, Ashtiani, Kamath~(ALT'21). In an independent work, Kamath, Mouzakis, Singhal, Steinke, and Ullman~(arXiv:2111.04609) proved a similar result using a different approach and with an $O(d^{5/2})$ sample complexity dependence on $d$. As another application of our framework, we provide the first polynomial-time $(\varepsilon, \delta)$-DP algorithm for robust learning of (unrestricted) Gaussians.