Learning from continuous data streams via classification/regression is pervasive in many domains. Adapting to evolving data characteristics (concept drift) while protecting data owners' private information is an open challenge. We present a differentially private ensemble solution to this problem with two distinguishing features: it allows an \textit{unbounded} number of ensemble updates to deal with the potentially never-ending data streams under a fixed privacy budget, and it is \textit{model-agnostic}, in that it treats any pre-trained differentially private classification/regression model as a black box. Our method outperforms competitors on real-world and simulated datasets for varying settings of privacy, concept drift, and data distribution.
The ''Propose-Test-Release'' (PTR) framework is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is nice. We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from ''Private Aggregation of Teacher Ensembles (PATE)'' -- privately releasing the entire model with a delicate data-dependent analysis.
The recent success of deep neural networks (DNNs) hinges on the availability of large-scale datasets; however, training on such datasets often poses privacy risks for sensitive training information. In this paper, we aim to explore the power of generative models and gradient sparsity, and propose DataLens, a scalable privacy-preserving generative model. Compared with the standard PATE privacy-preserving framework, which allows teachers to vote on one-dimensional predictions, voting on high-dimensional gradient vectors is challenging in terms of privacy preservation. As dimension-reduction techniques are required, we need to navigate a delicate tradeoff space between (1) the improvement of privacy preservation and (2) the slowdown of SGD convergence. To tackle this, we take advantage of communication-efficient learning and propose a novel noise compression and aggregation approach, TopAGG, which combines top-k compression with a corresponding noise injection mechanism. We theoretically prove that the DataLens framework guarantees differential privacy for its generated data, and provide an analysis of its convergence. To demonstrate the practical usage of DataLens, we conduct extensive experiments on diverse datasets, including MNIST, Fashion-MNIST, and high-dimensional CelebA, and we show that DataLens significantly outperforms other baseline DP generative models. In addition, we adapt the proposed TopAGG approach, one of the key building blocks of DataLens, to DP SGD training, and show that it is able to achieve higher utility than the state-of-the-art DP SGD approach in most cases. Our code is publicly available at https://github.com/AI-secure/DataLens.
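The top-k-compression-plus-noise idea behind TopAGG can be illustrated with a minimal sketch (the function name, the sign-vote encoding, and the noise scale here are illustrative assumptions, not the paper's exact mechanism):

```python
import numpy as np

def topk_compress_aggregate(grads, k, noise_scale, rng):
    """Sketch: each voter keeps only its top-k coordinates (by magnitude)
    as a sign vote, the votes are summed into a tally, and Gaussian noise
    is added to the tally before it is released."""
    dim = grads[0].shape[0]
    tally = np.zeros(dim)
    for g in grads:
        idx = np.argsort(np.abs(g))[-k:]    # indices of the top-k coordinates
        vote = np.zeros(dim)
        vote[idx] = np.sign(g[idx])         # compress to a sparse sign vote
        tally += vote
    return tally + rng.normal(0.0, noise_scale, size=dim)
```

With `noise_scale = 0` the function reduces to plain top-k sign voting, which makes the compression step easy to inspect in isolation.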
Differential privacy is often applied with privacy parameters that are larger than theory would deem ideal. Various informal justifications for such lenient privacy parameters have been proposed. In this work, we consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis. In this framework, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record (i.e., all attributes together).
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
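The core recipe this abstract refers to, per-example gradient clipping followed by Gaussian noise calibrated to the clipping norm (DP-SGD), can be sketched as follows (a minimal illustration; `dp_sgd_step` and its parameter names are assumed, not the paper's code):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    add Gaussian noise scaled to the clipping norm to the sum, then
    average and take an ordinary gradient step."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(clipped)
    return params - lr * noisy_mean
```

Setting `noise_multiplier = 0` isolates the clipping step, which bounds each example's influence on the update; the noise then masks whatever influence remains.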
In many applications, multiple parties hold private data about the same set of users but on disjoint sets of attributes, and a server wants to leverage the data to train a model. To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques, where the data parties share only information used for training the model, rather than the private data itself. However, it is challenging to ensure that the shared information preserves privacy while still allowing accurate models to be learned. To the best of our knowledge, the algorithm proposed in this paper is the first practical solution for differentially private vertical federated k-means clustering, in which the server can obtain global centers with a provable differential privacy guarantee. Our algorithm assumes an untrusted central server that aggregates differentially private local centers and membership encodings from the local data parties. It builds a weighted grid as a synopsis of the global dataset based on the received information. The final centers are produced by running any k-means algorithm on the weighted grid. Our grid-weight estimation method uses a novel, lightweight, and differentially private intersection cardinality estimation algorithm based on the Flajolet-Martin sketch. To improve the estimation accuracy in settings with more than two data parties, we further propose a refined version of the weight estimation algorithm and a parameter tuning strategy, bringing the final k-means utility close to that of the central private setting. We provide theoretical utility analysis and experimental evaluation results for the cluster centers computed by our algorithm, and show that our approach performs better both theoretically and empirically than two baselines based on existing techniques.
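The Flajolet-Martin sketch underlying the grid-weight estimation can be illustrated minimally. This non-private sketch shows only the two properties the abstract relies on, duplicate-insensitivity and mergeability; the paper's DP intersection-cardinality estimator builds noise on top of such sketches, which is not reproduced here:

```python
import hashlib

def fm_sketch(items, num_hashes=16):
    """Flajolet-Martin sketch: per (salted) hash function, track the
    maximum number of trailing zero bits seen across all hashed items."""
    maxima = [0] * num_hashes
    for x in items:
        for i in range(num_hashes):
            h = int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16)
            tz = (h & -h).bit_length() - 1 if h else 256  # trailing zeros
            maxima[i] = max(maxima[i], tz)
    return maxima

def fm_merge(a, b):
    """Sketch of a set union: coordinate-wise max, so duplicates held by
    different parties collapse -- the property that enables estimating
    intersection cardinalities via inclusion-exclusion."""
    return [max(x, y) for x, y in zip(a, b)]

def fm_estimate(maxima):
    """Classic FM estimator 2^mean(R) / 0.77351 -- rough and high-variance,
    which is why several hash functions are averaged."""
    mean_r = sum(maxima) / len(maxima)
    return 2.0 ** mean_r / 0.77351
```

Because the per-hash statistic is a maximum, inserting an element twice changes nothing, and merging two parties' sketches yields exactly the sketch of the union.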
We consider a sequential setting in which a single dataset is used to perform adaptively chosen analyses, while ensuring that the differential privacy loss of each participant does not exceed a pre-specified privacy budget. The standard approach to this problem relies on bounding, for every single analysis, a worst-case estimate of the privacy loss over all individuals and all possible values of their data. Yet, in many scenarios this approach is overly conservative, especially for "typical" data points, which incur little privacy loss by participating in most of the analyses. In this work, we give a method for tighter privacy loss accounting based on a personalized privacy loss estimate for each individual in each analysis. To implement this accounting, we design a filter for R\'enyi differential privacy. A filter is a tool that ensures that the privacy parameter of a composed sequence of algorithms with adaptively chosen privacy parameters does not exceed a pre-specified budget. Our filter is simpler and tighter than the known filter for $(\epsilon,\delta)$-differential privacy by Rogers et al. We apply our results to the analysis of noisy gradient descent and show that personalized accounting can be practical, easy to implement, and can only make the privacy-utility tradeoff tighter.
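At a fixed R\'enyi order, RDP guarantees compose additively, so a privacy filter reduces to a running-sum check against the budget; a minimal sketch (the class and method names are assumed, and the paper's filter handles all orders and personalized losses, which this toy version does not):

```python
class RenyiFilter:
    """Minimal privacy-filter sketch at one fixed Renyi order alpha:
    admit a new adaptively chosen query only while the running sum of
    RDP costs stays within the pre-specified budget."""

    def __init__(self, alpha, rdp_budget):
        self.alpha = alpha
        self.budget = rdp_budget
        self.spent = 0.0

    def try_spend(self, rdp_cost):
        """Return True and charge the cost if the budget allows it;
        otherwise return False (this individual stops participating)."""
        if self.spent + rdp_cost > self.budget:
            return False
        self.spent += rdp_cost
        return True
```

An individual whose personalized per-analysis costs are small passes the check many more times than a worst-case bound would allow, which is the source of the tighter accounting.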
We study the task of training regression models with the guarantee of label differential privacy (DP). Based on a global prior distribution on label values, which could be obtained privately, we derive a label DP randomization mechanism that is optimal under a given regression loss function. We prove that the optimal mechanism takes the form of a ``randomized response on bins'', and propose an efficient algorithm for finding the optimal bin values. We carry out a thorough experimental evaluation on several datasets demonstrating the efficacy of our algorithm.
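A generic ``randomized response on bins'' can be sketched as follows (the bin values here are placeholders; the paper's contribution is choosing the bin values optimally for a given regression loss, which this sketch does not do):

```python
import numpy as np

def rr_on_bins(true_bin, bins, epsilon, rng):
    """Randomized response over a finite set of bin values: report the
    true bin with the largest probability allowed by epsilon-label-DP,
    otherwise a uniformly random other bin."""
    k = len(bins)
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return bins[true_bin]
    others = [b for i, b in enumerate(bins) if i != true_bin]
    return others[rng.integers(len(others))]
```

The ratio between the probability of reporting the true bin and any other bin is exactly $e^{\epsilon}$, which is what makes the mechanism label-DP.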
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP). Standard differential privacy (DP) gives a worst-case bound that might be orders of magnitude larger than the privacy loss to a particular individual relative to a fixed dataset. The pDP framework provides a more fine-grained analysis of the privacy guarantee for a target individual, but the per-instance privacy loss itself may be a function of the sensitive data. In this paper, we analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
Ensembling neural networks is a long-standing technique for improving the generalization error of neural networks by combining networks with orthogonal properties via a committee decision. We show that this technique is an ideal fit for machine learning on medical data: First, ensembles can be trained in parallel and asynchronously, which allows for efficient training of patient-specific component neural networks. Second, building on the idea of minimizing generalization error by selecting uncorrelated patient-specific networks, we show that an ensemble of a few selected patient-specific models can outperform a single model trained on a much larger pooled dataset. Third, the non-iterative ensemble combination step is an optimal, low-dimensional entry point for applying output perturbation to guarantee the privacy of the patient-specific networks. We exemplify our framework of differentially private ensembles on the task of early prediction of sepsis, using real-life intensive care unit data labeled by clinical experts.
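Output perturbation at the low-dimensional combination step might look like the following sketch (the function name, the Laplace mechanism, and the renormalization are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def private_ensemble_combination(member_preds, weights, sensitivity, epsilon, rng):
    """Noise the few combination weights rather than the high-dimensional
    member networks, then renormalize and combine the member predictions."""
    noisy_w = weights + rng.laplace(0.0, sensitivity / epsilon, size=weights.shape)
    noisy_w = np.clip(noisy_w, 0.0, None)          # keep weights non-negative
    noisy_w = noisy_w / max(noisy_w.sum(), 1e-12)  # renormalize to a convex combination
    return noisy_w @ member_preds                  # rows: members, columns: samples
```

Because the perturbed object has only as many dimensions as there are ensemble members, far less noise is needed than for perturbing the networks themselves.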
Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private PAC learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model.
Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.
Propose-Test-Release (PTR) is a differential privacy framework that works with the local sensitivity of functions rather than their global sensitivity. The framework is typically used to release robust statistics such as the median or trimmed mean in a differentially private manner. Although PTR is a common framework introduced over a decade ago, using it in applications such as robust SGD, where many adaptive robust queries are needed, is challenging. This is mainly due to the lack of a R\'enyi Differential Privacy (RDP) analysis, an integral part of the moments-accountant approach to differentially private deep learning. In this work, we generalize standard PTR and derive the first RDP bound for it when the target function has bounded global sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees than the directly analyzed $(\epsilon,\delta)$-DP. We also derive an algorithm-specific privacy amplification bound for PTR under subsampling, and show that our bound is much tighter than the general upper bound and close to the lower bound. Our RDP bounds enable tighter privacy loss calculation for the composition of many adaptive runs of PTR. As an application of our analysis, we show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms that use robust statistics for gradient aggregation. We conduct experiments in settings of label, feature, and gradient corruption across different datasets and architectures, and show that our PTR-based private and robust training algorithms significantly improve utility over the baselines.
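The standard PTR recipe this paragraph builds on can be sketched generically (this shows the classic $(\epsilon,\delta)$ skeleton, not the paper's RDP analysis; all names are assumed, and a query-specific `dist_to_unstable` must be supplied by the caller):

```python
import numpy as np

def propose_test_release(x, f, proposed_sens, dist_to_unstable, epsilon, delta, rng):
    """Generic Propose-Test-Release skeleton: noisily test how far the
    dataset is from one where the local sensitivity of f exceeds the
    proposed bound; refuse if too close, otherwise release f(x) with
    Laplace noise calibrated to the proposed (local) bound."""
    eps_half = epsilon / 2.0
    # Test step: privately estimate the distance (in changed records)
    # to an "unstable" dataset violating the proposed sensitivity bound.
    noisy_dist = dist_to_unstable(x) + rng.laplace(0.0, 1.0 / eps_half)
    if noisy_dist <= np.log(1.0 / delta) / eps_half:
        return None                       # "bottom": decline to answer
    # Release step: noise scaled to the proposed, not global, sensitivity.
    return f(x) + rng.laplace(0.0, proposed_sens / eps_half)
```

On "nice" datasets the distance is large, the test passes, and far less noise is added than a global-sensitivity mechanism would require; the refusal branch is what keeps the overall guarantee $(\epsilon,\delta)$-DP.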
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
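The user-level protection added to federated averaging amounts to clipping each user's whole model update and noising the average; a minimal sketch (the function name and the noise calibration are assumptions for illustration):

```python
import numpy as np

def dp_federated_averaging_round(global_w, user_updates, clip_norm,
                                 noise_multiplier, rng):
    """One round of DP federated averaging: clip each user's model delta
    to bound any single user's influence, average the clipped deltas,
    then add Gaussian noise calibrated to the per-user clipping norm."""
    clipped = []
    for delta in user_updates:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    n = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n,
                       size=global_w.shape)
    return global_w + np.mean(clipped, axis=0) + noise
```

Because the noise scale is fixed by the clipping norm while the averaging denominator grows with the number of users, the relative noise shrinks as more users participate, matching the abstract's observation that a large user count makes the utility cost negligible.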
Recent advances in machine learning have greatly benefited from massive accessible training data. However, large-scale data sharing has raised serious privacy concerns. In this work, we propose G-PATE, a novel privacy-preserving data generative model based on the PATE framework, aiming to train a scalable differentially private data generator that preserves high generated-data utility. Our approach leverages generative adversarial nets to generate data, combined with private aggregation among different discriminators to ensure strong privacy guarantees. Compared with existing approaches, G-PATE significantly improves the use of the privacy budget. In particular, we train a student data generator with an ensemble of teacher discriminators, and propose a novel private gradient aggregation mechanism to ensure differential privacy on all information that flows from the teacher discriminators to the student generator. In addition, with random projection and gradient discretization, the proposed gradient aggregation mechanism is able to effectively deal with high-dimensional gradient vectors. Theoretically, we prove that G-PATE ensures differential privacy for the data generator. Empirically, we demonstrate the superiority of G-PATE through extensive experiments. We show that G-PATE is the first work able to generate high-dimensional image data with high data utility under a limited privacy budget ($\epsilon \le 1$). Our code is available at https://github.com/AI-secure/G-PATE.
Gradient leakage attacks are regarded as one of the most pernicious privacy threats in deep learning: an attacker covertly spies on gradient updates during iterative training without compromising model training quality, yet uses the leaked gradients to progressively reconstruct sensitive training data with a high attack success rate. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with differential privacy guarantees, we show that differentially private algorithms with fixed privacy parameters are vulnerable to gradient leakage attacks. This paper investigates alternative approaches to gradient-leakage-resilient deep learning with differential privacy (DP). First, we analyze existing implementations of differentially private deep learning, which use fixed privacy parameters to inject constant noise, with a fixed noise variance, into the gradients of all layers. Despite the DP guarantee, this approach suffers from low accuracy and is vulnerable to gradient leakage attacks. Second, we propose a gradient-leakage-resilient approach to deep learning with a differential privacy guarantee by using dynamic privacy parameters. Unlike the fixed-parameter strategy, which results in a constant noise variance, dynamic-parameter strategies offer alternative techniques for introducing adaptive noise variance and adaptive noise injection that are closely aligned with the trend of the gradient updates during differentially private model training. Finally, we describe four complementary metrics for evaluating and comparing the alternative approaches.
Federated learning facilitates the collaborative training of models without the sharing of raw data. However, recent attacks demonstrate that simply maintaining data locality during training processes does not provide sufficient privacy guarantees. Rather, we need a federated learning system capable of preventing inference over both the messages exchanged during training and the final trained model while ensuring the resulting model also has acceptable predictive accuracy. Existing federated learning approaches either use secure multiparty computation (SMC) which is vulnerable to inference or differential privacy which can lead to low accuracy given a large number of parties with relatively small amounts of data each. In this paper, we present an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs. Combining differential privacy with secure multiparty computation enables us to reduce the growth of noise injection as the number of parties increases without sacrificing privacy while maintaining a pre-defined rate of trust. Our system is therefore a scalable approach that protects against inference threats and produces models with high accuracy. Additionally, our system can be used to train a variety of machine learning models, which we validate with experimental results on 3 different machine learning algorithms. Our experiments demonstrate that our approach outperforms state-of-the-art solutions.
Distributing machine learning predictors enables the collection of large-scale datasets while leaving sensitive raw data at trustworthy sites. We show that locally training support vector machines (SVMs) and computing their averages leads to a learning technique that is scalable to a large number of users, satisfies differential privacy, and is applicable to non-trivial tasks, such as CIFAR-10. For a large number of participants, communication cost is one of the main challenges. We achieve a low communication cost by requiring only a single invocation of an efficient secure multiparty summation protocol. By relying on state-of-the-art feature extractors (SimCLR), we are able to utilize differentially private convex learners for non-trivial tasks such as CIFAR-10. Our experimental results illustrate that for $1{,}000$ users with $50$ data points each, our scheme outperforms state-of-the-art scalable distributed learning methods (differentially private federated learning, short DP-FL) while requiring around $500$ times fewer communication costs: For CIFAR-10, we achieve a classification accuracy of $79.7\,\%$ for an $\varepsilon = 0.59$ while DP-FL achieves $57.6\,\%$. More generally, we prove learnability properties for the average of such locally trained models: convergence and uniform stability. By only requiring strongly convex, smooth, and Lipschitz-continuous objective functions, locally trained via stochastic gradient descent (SGD), we achieve a strong utility-privacy tradeoff.
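The clip-sum-noise-average pattern for locally trained models can be sketched as follows (the secure summation step is elided, and the Gaussian calibration shown is the textbook one, not necessarily the paper's exact analysis):

```python
import numpy as np

def private_model_average(local_weights, clip_norm, epsilon, delta, rng):
    """Sketch of differentially private model averaging: each site's
    locally trained weight vector is clipped, the (in practice, securely
    computed) sum is taken, Gaussian noise calibrated to one user's
    maximal contribution is added, and the result is divided by n."""
    n = len(local_weights)
    clipped = [w * min(1.0, clip_norm / max(np.linalg.norm(w), 1e-12))
               for w in local_weights]
    # Textbook Gaussian-mechanism calibration for L2 sensitivity clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma,
                                                     size=clipped[0].shape)
    return noisy_sum / n
```

Only a single noisy sum needs to be computed, which is why one invocation of a secure summation protocol suffices and the communication cost stays low.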
In this paper, we study the problem of learning graph neural networks (GNNs) with differential privacy (DP). We propose GAP, a novel differentially private GNN based on aggregation perturbation, which adds stochastic noise to the GNN's aggregation function to statistically obfuscate the presence of a single edge (edge-level privacy) or a single node and all its adjacent edges (node-level privacy). Tailored to the specifics of private learning, GAP's new architecture is composed of three separate modules: (i) the encoder module, where we learn private node embeddings without relying on the edge information; (ii) the aggregation module, where we compute noisy aggregated node embeddings based on the graph structure; and (iii) the classification module, where we train a neural network on the private aggregations for node classification without further querying the graph. GAP's major advantage over previous approaches is that it can benefit from multi-hop neighborhood aggregations, and it guarantees both edge-level and node-level DP not only for training, but also at inference, with no additional cost beyond the training privacy budget. We analyze GAP's formal privacy guarantees using R\'enyi DP and conduct empirical experiments over three real-world graph datasets. We demonstrate that GAP offers significantly better accuracy-privacy trade-offs than state-of-the-art DP-GNN approaches and naive MLP-based baselines.
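The aggregation-perturbation step can be sketched minimally (the function name and the use of row-normalization as the sensitivity bound are assumptions for illustration, not GAP's exact architecture):

```python
import numpy as np

def noisy_aggregate(adjacency, embeddings, sigma, rng):
    """Aggregation perturbation sketch: row-normalize node embeddings so
    each node contributes a bounded (unit-norm) amount to its neighbors'
    sums, then add Gaussian noise to every aggregated embedding.
    Adding/removing one edge changes one aggregate by at most one
    unit-norm row, which is what the noise is calibrated against."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    normalized = embeddings / np.maximum(norms, 1e-12)
    agg = adjacency @ normalized
    return agg + rng.normal(0.0, sigma, size=agg.shape)
```

Since the noisy aggregates can be cached and reused, later modules (and inference) need not query the graph again, which is how the inference-time guarantee comes for free.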
We consider training models with differential privacy (DP) using mini-batch gradients. The existing state of the art, differentially private stochastic gradient descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements for exact sampling and shuffling can be hard to meet in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) with amplified DP-SGD, while allowing much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification. The code is available at https://github.com/google-research/federated/tree/master/dp_ftrl and https://github.com/google-research/dp-ftrl.
Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget, ε, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of ε in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models.