Empirical attacks on collaborative learning show that the gradients of deep neural networks can not only disclose private latent attributes of the training data but can also be used to reconstruct the original data. While prior work has attempted to quantify the privacy risk posed by gradients, these measures do not establish a theoretical understanding of gradient leakage, do not generalize across attackers, and cannot fully explain what is observed through empirical attacks in practice. In this paper, we introduce theoretically motivated measures to quantify information leakage in both attack-dependent and attack-independent ways. Specifically, we present an adaptation of $\mathcal{V}$-information, which generalizes the empirical attack success rate and allows quantifying the amount of information that can leak to any chosen family of attack models. We then propose attack-independent measures, requiring only the shared gradients, for quantifying both original and latent information leakage. Our empirical results, on six datasets and four popular models, reveal that the gradients of the first layer contain the highest amount of original information, while the (fully connected) classification layer placed after the (convolutional) feature-extractor layers contains the highest latent information. Further, we show how techniques such as gradient aggregation during training can mitigate information leakage. Our work paves the way for better defenses such as layer-based protection or strong aggregation.
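For reference, such an adaptation builds on the standard definition of $\mathcal{V}$-information; a hedged sketch of that definition follows, where the symbols and the exact conditioning are illustrative rather than the paper's precise formulation. Writing $G$ for the shared gradients, $Y$ for the private attribute (or raw input) of interest, and $\mathcal{V}$ for the chosen family of attack models,
$$
I_{\mathcal{V}}(G \to Y) \;=\; H_{\mathcal{V}}(Y) - H_{\mathcal{V}}(Y \mid G),
\qquad
H_{\mathcal{V}}(Y \mid G) \;=\; \inf_{f \in \mathcal{V}} \; \mathbb{E}_{(g,y)}\big[-\log f[g](y)\big],
$$
with $H_{\mathcal{V}}(Y)$ defined as the same infimum over predictors that ignore $G$. When $\mathcal{V}$ contains all predictors this recovers Shannon mutual information; restricting $\mathcal{V}$ to a concrete attack-model family yields an attack-dependent measure that generalizes the empirical attack success rate.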
Privacy-preserving inference via edge or encrypted computing paradigms encourages users of machine learning services to confidentially run a model on their personal data for a target task and only share the model's outputs with the service provider; e.g., to activate further services. Nevertheless, despite all confidentiality efforts, we show that a ''vicious'' service provider can approximately reconstruct its users' personal data by observing only the model's outputs, while keeping the target utility of the model very close to that of an ''honest'' service provider. We show the possibility of jointly training a target model (to be run at the users' side) and an attack model for data reconstruction (to be secretly used at the server's side). We introduce the ''reconstruction risk'': a new measure for assessing the quality of reconstructed data that better captures the privacy risk of such attacks. Experimental results on 6 benchmark datasets show that for low-complexity data types, or for tasks with a larger number of classes, a user's personal data can be approximately reconstructed from the outputs of a single target inference task. We propose a potential defense mechanism that helps to distinguish vicious vs. honest classifiers at inference time. We conclude this paper by discussing current challenges and open directions for future studies. We open-source our code and results as a benchmark for future work.
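A minimal sketch of the joint training idea described above, assuming a simple MLP classifier as the target model and an MLP decoder as the secret attack model; all module names, dimensions, and the trade-off weight `lam` are illustrative assumptions, not taken from the paper's code.

```python
import torch
import torch.nn as nn

# Illustrative modules: a target classifier run at the users' side and a
# decoder secretly kept by the server that maps class scores back to inputs.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                           nn.Linear(256, 10))
decoder = nn.Sequential(nn.Linear(10, 256), nn.ReLU(),
                        nn.Linear(256, 28 * 28))

opt = torch.optim.Adam(list(classifier.parameters()) + list(decoder.parameters()), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam = 1.0  # weight trading off target utility vs. reconstruction quality

def joint_step(x, y):
    """One training step of the 'vicious' provider: keep task accuracy high
    while making the model's outputs invertible by the secret decoder."""
    logits = classifier(x)
    x_hat = decoder(torch.softmax(logits, dim=1))   # attack sees only outputs
    loss = ce(logits, y) + lam * mse(x_hat, x.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

In this sketch the provider optimizes task accuracy and output invertibility together; an honest provider would train with only the first loss term.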
Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack that extracts the labels of a user's training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking the potentially sensitive information represented by labels, and scales to arbitrary batch sizes and multiple classes. We demonstrate the validity of the attack both mathematically and empirically under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.
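A minimal sketch of the core signal such a label-leakage attack relies on, assuming a softmax cross-entropy loss and a non-negative (e.g., ReLU) penultimate activation; the function name and thresholding rule are illustrative, and the actual LLG attack additionally uses gradient magnitudes to estimate how often each label occurs.

```python
import torch

def infer_labels_from_grad(last_layer_weight_grad, num_expected=None):
    """Hedged sketch of gradient-based label inference.

    With softmax cross-entropy and non-negative penultimate activations,
    the weight-gradient row of the final classification layer belonging to
    a class present in the batch tends to have a negative sum, while rows
    of absent classes tend to be non-negative. Sorting the row sums
    therefore hints at which labels are present.

    last_layer_weight_grad: tensor of shape [num_classes, hidden_dim].
    """
    row_sums = last_layer_weight_grad.sum(dim=1)
    if num_expected is None:
        return torch.nonzero(row_sums < 0).flatten().tolist()
    return torch.argsort(row_sums)[:num_expected].tolist()
```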
Recent attacks have shown that user data can be recovered from FedSGD updates, thus breaking privacy. However, these attacks are of limited practical relevance, as federated learning typically uses the FedAvg algorithm. Recovering data from FedAvg updates is much harder than from FedSGD because: (i) the updates are computed at unobserved intermediate network weights, (ii) a large number of batches are used, and (iii) labels and network weights vary simultaneously across client steps. In this work, we propose a new optimization-based attack that successfully attacks FedAvg by addressing the above challenges. First, we solve the optimization problem using automatic differentiation that forces a simulation of the client's update, generating the unobserved parameters for the recovered labels and inputs so as to match the received client update. Second, we address the large number of batches by relating images from different epochs with a permutation-invariant prior. Third, we recover the labels by estimating the parameters of existing FedSGD attacks at every FedAvg step. On the popular FEMNIST dataset, we demonstrate that, on average, we successfully recover >45% of the images from realistic FedAvg updates computed over 10 local epochs of 10 batches with 5 images each, compared to only <10% using the baseline. Our findings indicate that many real-world federated learning implementations based on FedAvg are highly vulnerable.
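A compact, hedged sketch of the simulation-matching idea for a single softmax-regression client: dummy inputs are treated as optimization variables, the client's local epochs are simulated differentiably starting from the known global weights, and the simulated update is matched to the observed one. Labels `ys` are assumed already recovered, one sample is processed per local step, and the permutation-invariant prior is omitted; all names and hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def simulate_local_update(W, b, xs, ys, lr, epochs):
    """Differentiably replay a client's local FedAvg training on dummy data.
    W, b: global weights; xs: dummy inputs (requires_grad); ys: assumed labels."""
    Wc = W.detach().clone().requires_grad_(True)
    bc = b.detach().clone().requires_grad_(True)
    for _ in range(epochs):
        for x, y in zip(xs, ys):                      # one sample per local step
            loss = F.cross_entropy(x @ Wc.T + bc, y.unsqueeze(0))
            gW, gb = torch.autograd.grad(loss, (Wc, bc), create_graph=True)
            Wc, bc = Wc - lr * gW, bc - lr * gb
    return Wc - W, bc - b

def reconstruct(W, b, observed_dW, observed_db, ys, n, dim,
                client_lr=0.1, epochs=2, iters=500):
    xs = torch.randn(n, 1, dim, requires_grad=True)   # dummy inputs to optimize
    opt = torch.optim.Adam([xs], lr=0.05)
    for _ in range(iters):
        dW, db = simulate_local_update(W, b, xs, ys, client_lr, epochs)
        loss = (dW - observed_dW).pow(2).sum() + (db - observed_db).pow(2).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return xs.detach()
```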
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one. By instantiating concrete attacks, we show that reconstructing the remaining data point in this stringent threat model is feasible. For convex models (e.g., logistic regression), reconstruction attacks are simple and can be derived in closed form. For more general models (e.g., neural networks), we propose an attack strategy based on training a reconstructor network that receives as input the weights of the model under attack and produces the target data point. We demonstrate the effectiveness of our attack on image classifiers trained on MNIST and CIFAR-10, and systematically investigate which factors of standard machine learning pipelines affect reconstruction success. Finally, we theoretically investigate how much differential privacy suffices to mitigate reconstruction attacks by informed adversaries. Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
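To make the closed-form claim concrete, here is a hedged stationarity sketch for $\ell_2$-regularized logistic regression (a generic argument, not necessarily the paper's exact derivation). Assume the released model $\hat{\theta}$ is a stationary point of the regularized empirical risk over $n$ points $z_i = (x_i, y_i)$, and the informed adversary knows $z_1, \dots, z_{n-1}$ and the regularization strength $\lambda$. Then
$$
\sum_{i=1}^{n}\nabla_\theta \ell(\hat{\theta}; z_i) + \lambda\,\hat{\theta} = 0
\quad\Longrightarrow\quad
\nabla_\theta \ell(\hat{\theta}; z_n) = -\lambda\,\hat{\theta} - \sum_{i=1}^{n-1}\nabla_\theta \ell(\hat{\theta}; z_i).
$$
For logistic regression, $\nabla_\theta \ell(\theta;(x,y)) = (\sigma(\theta^{\top}\tilde{x}) - y)\,\tilde{x}$, where $\tilde{x}$ is the input augmented with a constant bias coordinate, so the right-hand side, which the adversary can compute from the released model and the $n-1$ known points, is a scalar multiple of $\tilde{x}_n$; the bias coordinate pins down the scalar, yielding $x_n$ in closed form.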
In federated learning (FL), data does not leave personal devices when they jointly train a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because data never "leaves" personal devices, FL is presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade, as even a passive attacker observing gradients can reconstruct data of individual users. In this paper, we argue that prior work still largely underestimates the vulnerability of FL. This is because prior efforts exclusively consider passive attackers that are honest-but-curious. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model's weights before users compute model gradients. We call the modified weights "trap weights". Our active attacker is able to recover user data perfectly and at near-zero cost: the attack requires no complex optimization objectives. Instead, it exploits the inherent data leakage of model gradients and amplifies this effect by maliciously altering the weights of the shared model. These specificities enable our attack to scale to models trained with large mini-batches of data. Where attackers from prior work require hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data for both fully connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed, as they explicitly trust the central party with the crucial task of adding DP noise, and thus provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. Providing users with any meaningful data privacy would require a redesign of FL.
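A hedged sketch of the inherent per-layer leakage that trap weights amplify: for a fully-connected layer with a bias, a single sample's weight gradient is the outer product of its bias gradient and its input, so any neuron that was activated by exactly one sample in the batch (which the maliciously chosen weights try to enforce) directly reveals that sample. Function and variable names are illustrative.

```python
import torch

def recover_inputs_from_fc_grads(dW, db, eps=1e-9):
    """Hedged sketch: recover inputs from the gradients of a model's first
    fully-connected layer (dW: [neurons, in_dim], db: [neurons]).

    Per sample, dL/dW[i] = dL/db[i] * x, so a neuron that fired for a single
    sample reproduces that sample's input after dividing its weight-gradient
    row by its bias gradient. With ordinary weights, rows mix batch samples;
    'trap weights' aim to make each neuron fire for at most one sample so
    the division is exact.
    """
    recovered = []
    for i in range(dW.shape[0]):
        if db[i].abs() > eps:                 # neuron received some signal
            recovered.append(dW[i] / db[i])   # candidate reconstruction
    return torch.stack(recovered) if recovered else torch.empty(0, dW.shape[1])
```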
Gradient inversion attacks, which exploit gradient leakage to reconstruct supposedly private training data, are a ubiquitous threat in collaborative learning of neural networks. To prevent gradient leakage without suffering severe loss in model performance, recent work proposed a privacy-enhancing module (PRECODE) based on variational modeling as an extension for arbitrary model architectures. In this work, we investigate the effect of PRECODE on gradient inversion attacks to reveal its underlying working principle. We show that variational modeling induces stochasticity in the gradients of PRECODE and the layers that follow it, which prevents gradient attacks from converging. By purposefully omitting those stochastic gradients during attack optimization, we formulate an attack that can disable the privacy-preserving effect of PRECODE. To ensure privacy protection against such targeted attacks, we propose partial perturbation (PPP), a strategic combination of variational modeling and partial gradient perturbation. We conduct an extensive empirical study on four seminal model architectures and two image classification datasets. We find that all architectures are susceptible to gradient leakage, which can be prevented by PPP. As a result, we show that our approach requires less gradient perturbation to effectively preserve privacy without harming model performance.
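A minimal sketch of partial gradient perturbation, assuming noise is added only to a selected subset of parameters; the selection predicate, the noise scale, and all names are illustrative assumptions, and the exact PPP selection strategy may differ.

```python
import torch

def partially_perturb(named_grads, protect, sigma=0.01):
    """Add Gaussian noise only to the gradients of selected parameters.

    named_grads: dict mapping parameter name -> gradient tensor.
    protect: predicate deciding which parameters are perturbed (illustrative)."""
    out = {}
    for name, g in named_grads.items():
        noise = sigma * torch.randn_like(g) if protect(name) else 0.0
        out[name] = g + noise
    return out

# Example: perturb only the layers preceding an assumed 'precode' module,
# leaving the stochastic PRECODE gradients themselves untouched.
# perturbed = partially_perturb(grads, protect=lambda n: not n.startswith("precode"))
```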
A distribution inference attack aims to infer statistical properties of data used to train machine learning models. These attacks are sometimes surprisingly potent, but the factors that impact distribution inference risk are not well understood and demonstrated attacks often rely on strong and unrealistic assumptions such as full knowledge of training environments even in supposedly black-box threat scenarios. To improve understanding of distribution inference risks, we develop a new black-box attack that even outperforms the best known white-box attack in most settings. Using this new attack, we evaluate distribution inference risk while relaxing a variety of assumptions about the adversary's knowledge under black-box access, like known model architectures and label-only access. Finally, we evaluate the effectiveness of previously proposed defenses and introduce new defenses. We find that although noise-based defenses appear to be ineffective, a simple re-sampling defense can be highly effective. Code is available at https://github.com/iamgroot42/dissecting_distribution_inference
Data leakage from public machine learning (ML) models is an increasingly important area of study, as commercial and government applications of ML can draw on multiple data sources, potentially including sensitive user and customer data. We present a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage that is natural to ML models, potential malicious leakage caused by privacy attacks, and the currently available defense mechanisms. We focus on inference-time leakage, which is the most likely scenario for publicly available models. We first discuss what leakage means in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malicious leakage, present the available defenses, and review the currently available assessment metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
Federated learning has been proposed as a privacy-preserving machine learning framework that enables multiple clients to collaborate without sharing raw data. However, client privacy protection is not guaranteed by design in this framework. Prior work has shown that the gradient-sharing strategies in federated learning can be vulnerable to data reconstruction attacks. In practice, however, clients may not transmit raw gradients, given the high communication cost or due to enhanced privacy requirements. Empirical studies have demonstrated that gradient obfuscation, including intentional obfuscation via gradient noise injection and unintentional obfuscation via gradient compression, can provide more privacy protection against reconstruction attacks. In this work, we present a new data reconstruction attack framework targeting the image classification task in federated learning. We show that commonly adopted gradient post-processing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning. Contrary to prior studies, we argue that privacy enhancement should not be treated as a by-product of gradient compression. Additionally, under the proposed framework, we design a new method to reconstruct images at the semantic level. We quantify the semantic privacy leakage and compare it with conventional image-similarity scores. Our comparison challenges the image data leakage evaluation schemes in the literature. The results emphasize the importance of revisiting and redesigning the privacy protection mechanisms for client data in existing federated learning algorithms.
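For concreteness, hedged reference sketches of the three post-processing operators the framework is evaluated against; the quantization grid, sparsity ratio, and noise scale are illustrative defaults, not the paper's settings.

```python
import torch

def quantize(g, levels=16):
    """Uniform gradient quantization onto a fixed number of levels."""
    lo, hi = g.min(), g.max()
    step = (hi - lo) / (levels - 1) + 1e-12
    return torch.round((g - lo) / step) * step + lo

def sparsify_topk(g, keep_ratio=0.1):
    """Keep only the largest-magnitude entries (top-k sparsification)."""
    k = max(1, int(keep_ratio * g.numel()))
    flat = g.flatten().clone()
    thresh = flat.abs().topk(k).values.min()
    flat[flat.abs() < thresh] = 0.0
    return flat.view_as(g)

def perturb(g, sigma=1e-3):
    """Additive Gaussian perturbation (noise-injection style obfuscation)."""
    return g + sigma * torch.randn_like(g)
```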
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risks for the training dataset used in model training. In this paper, we propose a novel and effective neuron-guided defense method against MIAs. We identify a key weakness of existing defense mechanisms against MIAs, namely that they cannot simultaneously defend against the two commonly used neural-network-based MIAs, indicating that the two attacks should be evaluated separately to ensure defense effectiveness. We propose NeuGuard, a new defense that jointly controls the output and inner-neuron activations with the goal of guiding the model outputs on the training set and on the test set toward close distributions. NeuGuard consists of class-wise variance minimization targeting the final output neurons and layer-wise balanced output control aiming to constrain the inner neurons in each layer. We evaluate NeuGuard and compare it with state-of-the-art defenses against two neural-network-based MIAs and five of the strongest metric-based MIAs, including the newly proposed label-only MIA, on three benchmark datasets. The results show that NeuGuard outperforms the state-of-the-art defenses by providing a substantially improved utility-privacy trade-off, generality, and overhead.
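A hedged sketch of the shape of the two regularizers described above, combined with the task loss during training; the exact NeuGuard formulation, weighting, and activation handling may differ, and all names are illustrative.

```python
import torch

def classwise_output_variance(probs, labels, num_classes):
    """Penalize within-class variance of softmax outputs so that outputs on
    training samples look as 'typical' as outputs on unseen samples."""
    penalty = 0.0
    for c in range(num_classes):
        mask = labels == c
        if mask.sum() > 1:
            penalty = penalty + probs[mask].var(dim=0, unbiased=False).sum()
    return penalty

def layer_balance_penalty(activations):
    """Encourage neurons within each layer to carry comparable average
    activation, limiting per-sample neuron signatures that MIAs exploit."""
    penalty = 0.0
    for a in activations:                 # list of [batch, neurons] tensors
        penalty = penalty + a.mean(dim=0).var(unbiased=False)
    return penalty

# Illustrative combined objective:
# total_loss = task_loss + alpha * classwise_output_variance(probs, y, C) \
#            + beta * layer_balance_penalty(acts)
```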
Federated Learning (FL) is pervasive in privacy-focused IoT environments since it enables avoiding privacy leakage by training models with gradients instead of data. Recent works show the uploaded gradients can be employed to reconstruct data, i.e., gradient leakage attacks, and several defenses are designed to alleviate the risk by tweaking the gradients. However, these defenses exhibit weak resilience against threatening attacks, as their effectiveness builds on the unrealistic assumption that deep neural networks can be simplified as linear models. In this paper, without such unrealistic assumptions, we present a novel defense, called Refiner, that instead of perturbing gradients refines the ground-truth data to craft robust data that yields sufficient utility but carries the least amount of private information, and then uploads the gradients of the robust data. To craft robust data, Refiner promotes the gradients of critical parameters associated with the robust data to be close to the ground-truth ones while leaving the gradients of trivial parameters to safeguard privacy. Moreover, to exploit the gradients of trivial parameters, Refiner utilizes a well-designed evaluation network to steer the robust data far away from the ground-truth data, thereby alleviating the privacy leakage risk. Extensive experiments across multiple benchmark datasets demonstrate the superior effectiveness of Refiner at defending against state-of-the-art threats.
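A hedged sketch of the crafting objective described above: the robust data is optimized so that the gradients of an assumed "critical" parameter subset match the ground-truth gradients (preserving utility) while an evaluation network pushes it away from the original data (limiting leakage); `eval_net`, the critical/trivial split, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def craft_robust_data(model, critical_params, eval_net, x, y,
                      steps=100, lr=0.1, alpha=1.0):
    """Return crafted data whose critical-parameter gradients match those of
    the ground-truth data while an evaluation network keeps it dissimilar."""
    ref_grads = torch.autograd.grad(F.cross_entropy(model(x), y), critical_params)
    x_rob = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_rob], lr=lr)
    for _ in range(steps):
        grads = torch.autograd.grad(F.cross_entropy(model(x_rob), y),
                                    critical_params, create_graph=True)
        match = sum((g - r).pow(2).sum() for g, r in zip(grads, ref_grads))
        dissim = -eval_net(x_rob, x).mean()   # evaluation net scores distance
        loss = match + alpha * dissim
        opt.zero_grad(); loss.backward(); opt.step()
    return x_rob.detach()
```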
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
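One concrete white-box signal such attacks can exploit (a hedged illustration, not the paper's full attack pipeline): the norm of the per-example loss gradient under the trained model, which tends to be smaller for training members than for non-members because SGD has already minimized the loss on members.

```python
import torch
import torch.nn.functional as F

def gradient_norm_score(model, x, y):
    """Per-example gradient norm of the loss w.r.t. model parameters;
    smaller values are (heuristically) evidence of membership."""
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

# A threshold, or a small attack classifier over such features, then decides
# member vs. non-member; a full white-box attack combines gradients,
# activations, and losses across layers and training snapshots.
```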
Privacy and security challenges in machine learning (ML) have become a critical topic, owing to ML's pervasive development and the recent demonstration of its large attack surface. As a mature, system-oriented approach, confidential computing has been increasingly utilized in both academia and industry to improve privacy and security in various ML scenarios. In this paper, we systematize the findings on confidential computing-assisted ML security and privacy techniques that provide i) confidentiality guarantees and ii) integrity guarantees. We further identify key challenges and provide a dedicated analysis of the limitations of existing Trusted Execution Environment (TEE) systems for ML use cases. We discuss prospective works, including grounded privacy definitions, partitioned ML execution, dedicated TEE designs for ML, TEE-aware ML, and ML full-pipeline guarantees. These potential solutions can help achieve strong TEE-assisted ML guarantees without introducing excessive computation and system costs.
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
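A hedged sketch of the core mechanism: a noise layer inserted early in the network, with its scale calibrated in the style of the Gaussian mechanism to an assumed known sensitivity bound of the preceding layers with respect to L2-bounded input perturbations. The calibration and the certification machinery are simplified to a bare illustration, and all parameter names are assumptions.

```python
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Gaussian noise layer: the scale is calibrated, Gaussian-mechanism
    style, to an (assumed known) L2 sensitivity of the layers beneath it
    with respect to input perturbations of size `attack_bound`."""
    def __init__(self, sensitivity=1.0, attack_bound=0.1, epsilon=1.0, delta=1e-5):
        super().__init__()
        self.sigma = (attack_bound * sensitivity *
                      (2.0 * torch.log(torch.tensor(1.25 / delta))).sqrt() / epsilon)

    def forward(self, h):
        # Noise is applied at training and at inference; a certified
        # prediction averages the softmax over several noise draws (not shown).
        return h + self.sigma * torch.randn_like(h)
```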
Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points (for example, specific locations) in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
Gradient leakage attacks are considered one of the most wicked privacy threats in deep learning, as attackers covertly spy on gradient updates during iterative training without compromising model training quality, yet progressively reconstruct sensitive training data from the leaked gradients with high attack success rates. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with differential privacy guarantees, we show that differentially private algorithms with fixed privacy parameters are vulnerable to gradient leakage attacks. This paper investigates alternative approaches to gradient-leakage-resilient deep learning with differential privacy (DP). First, we analyze existing implementations of differentially private deep learning, which inject constant noise, with a fixed noise variance governed by fixed privacy parameters, into the gradients of all layers. Despite the DP guarantee provided, this approach suffers from low accuracy and remains easily vulnerable to gradient leakage attacks. Second, we propose a gradient-leakage-resilient deep learning approach with differential privacy guarantees based on dynamic privacy parameters. Unlike the fixed-parameter strategies that result in constant noise variance, the dynamic-parameter strategies offer alternative techniques to introduce adaptive noise variance and adaptive noise injection, closely aligned with the trend of gradient updates during differentially private model training. Finally, we describe four complementary metrics for evaluating and comparing the alternative approaches.
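A hedged sketch contrasting the two strategies discussed above: a standard clipped-and-noised gradient step with a fixed noise multiplier, and one illustrative dynamic schedule that shrinks the noise multiplier over training. The schedule and all parameter names are assumptions for illustration, not the paper's specific policy (proper privacy accounting is also omitted).

```python
import torch

def dp_sgd_step(params, per_example_grads, lr, clip=1.0, sigma=1.0):
    """Clip per-example gradients, add Gaussian noise, and apply the update."""
    noisy = []
    for g in per_example_grads:                    # list of [batch, ...] tensors
        norms = g.flatten(1).norm(dim=1).clamp(min=1e-12)
        scale = (clip / norms).clamp(max=1.0).view(-1, *[1] * (g.dim() - 1))
        summed = (g * scale).sum(dim=0)
        noisy.append(summed + sigma * clip * torch.randn_like(summed))
    batch = per_example_grads[0].shape[0]
    with torch.no_grad():
        for p, g in zip(params, noisy):
            p -= lr * g / batch
    return params

def decaying_sigma(step, sigma0=1.5, decay=1e-3, sigma_min=0.5):
    """One illustrative dynamic-parameter schedule: start with larger noise
    and shrink it as gradients become smaller later in training."""
    return max(sigma_min, sigma0 / (1.0 + decay * step))
```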
Federated learning is a collaborative method that aims to preserve data privacy while creating AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious and attempts to obtain access to private, potentially sensitive data. Furthermore, we provide a method to further defend against such a malicious server, and demonstrate effectiveness against known attacks that reconstruct data in a federated learning setting.
Differentially private federated learning (DP-FL) has received increasing attention to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central Differential Privacy in FL (CDP-FL) can provide a good balance between the privacy and model utility, but requires a trusted server. Using Local Differential Privacy for FL (LDP-FL) does not require a trusted server, but suffers from lousy privacy-utility trade-off. Recently proposed shuffle DP based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, there is still a utility gap when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits from CDP-FL and LDP-FL by leveraging Trusted Execution Environment (TEE). Our main technical contributions are the analysis and countermeasures against the vulnerability of TEE in OLIVE. Firstly, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk for sparsified gradients, which is common in FL. Secondly, we design an inference attack to understand how the memory access pattern could be linked to the training data. Thirdly, we propose oblivious yet efficient algorithms to prevent the memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and effective against side-channel attacks on TEE.
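A hedged toy illustration (in NumPy) of why sparsified gradients create an access-pattern risk inside the enclave, and of the pattern-hiding intuition behind an oblivious aggregator; a real oblivious design would replace the per-update densification below with oblivious primitives (e.g., oblivious sort or scan) rather than the plain indexing shown here.

```python
import numpy as np

def aggregate_sparse_leaky(updates, dim):
    """Index-driven aggregation: the sequence of written addresses reveals
    each client's kept coordinates to a memory-access side channel."""
    agg = np.zeros(dim)
    for idx, val in updates:            # idx: int array, val: float array
        agg[idx] += val                 # data-dependent scatter
    return agg

def aggregate_dense_baseline(updates, dim):
    """Pattern-hiding intuition: expand each sparse update to a dense vector
    and sum, so every coordinate is touched for every client."""
    agg = np.zeros(dim)
    for idx, val in updates:
        dense = np.zeros(dim)
        dense[idx] = val                # still index-driven; a real oblivious
        agg += dense                    # design performs this expansion with
    return agg                          # oblivious primitives instead
```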
Federated learning (FL) has attracted growing interest for enabling privacy-aware machine learning over data stored across multiple users, while avoiding moving the data off-device. However, while data never leaves users' devices, privacy still cannot be guaranteed, since significant computations on users' training data are shared in the form of trained local models. These local models have recently been shown to pose substantial privacy threats through different privacy attacks, such as model inversion attacks. As a remedy, secure aggregation (SA) has been developed as a framework to preserve privacy in FL by guaranteeing that the server can only learn the global aggregated model update, and not the individual model updates. While SA ensures that no additional information about an individual model update is leaked beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually provide, since information about an individual dataset can still potentially leak through the aggregated model computed at the server. In this work, we perform a first analysis of the formal privacy guarantees of FL with SA. Specifically, we use mutual information (MI) as a quantitative metric and derive bounds on how much information about each user's dataset can leak through the aggregated model update. When the FedSGD aggregation algorithm is used, our theoretical bounds show that the amount of privacy leakage decreases linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI neural estimator to empirically evaluate the privacy leakage under different FL setups on the MNIST and CIFAR-10 datasets. Our experiments verify the theoretical bounds for FedSGD, showing a reduction in privacy leakage as the number of users and the local batch size grow, and an increase in privacy leakage as the number of training rounds increases.
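A hedged sketch of a Donsker-Varadhan mutual-information neural estimator of the kind referenced above; the statistics network, batch handling, and the choice of what to feed as `x` and `y` are illustrative assumptions, not the paper's exact estimator.

```python
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Small statistics network T(x, y) for the Donsker-Varadhan bound
    I(X;Y) >= E_p(x,y)[T] - log E_p(x)p(y)[exp(T)]."""
    def __init__(self, dim_x, dim_y, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

def mi_lower_bound(model, x, y):
    """Joint samples (x, y) vs. product-of-marginals samples (x, shuffled y)."""
    joint = model(x, y).mean()
    shuffled = y[torch.randperm(y.shape[0])]
    marg = torch.logsumexp(model(x, shuffled).squeeze(1), dim=0) \
           - torch.log(torch.tensor(float(y.shape[0])))
    return joint - marg

# Training loop (maximizing the bound w.r.t. the statistics network) omitted;
# here x could be a flattened summary of a user's data and y the aggregated update.
```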