Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack and the attribute inference attack, where the attacker can infer useful information such as the membership, a reconstruction, or sensitive attributes of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a method, namely PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier so that the purified confidence scores are indistinguishable in individual shape, statistical distribution and prediction label between members and non-members. The experimental results show that PURIFIER defends against membership inference attacks with high effectiveness and efficiency, outperforming previous defense methods, while incurring negligible utility loss. Our further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks. For example, the inversion error is raised by more than 4 times on the Facescrub530 classifier, and the attribute inference accuracy drops significantly when PURIFIER is deployed in our experiments.
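A minimal sketch of the purification idea, assuming the purifier is a small autoencoder trained on the target classifier's confidence vectors (the layer sizes and the plain reconstruction objective are illustrative assumptions, not the authors' exact architecture or training losses):

```python
import torch.nn as nn
import torch.nn.functional as F

class Purifier(nn.Module):
    """Maps a confidence score vector to a 'purified' one through a narrow
    bottleneck, discarding member-specific detail while keeping utility."""
    def __init__(self, num_classes: int, hidden: int = 32, bottleneck: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, conf):
        # Re-normalize so the output remains a valid confidence vector.
        return F.softmax(self.decoder(self.encoder(conf)), dim=-1)

def reconstruction_loss(purified, original):
    # Train to reproduce the original scores; the bottleneck itself acts as
    # the information filter that makes members and non-members look alike.
    return F.mse_loss(purified, original)
```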
Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks, in which an adversary aims to infer whether an input sample was used to train the model. Over the past few years, researchers have produced many membership inference attacks and defenses. However, these attacks and defenses adopt a variety of strategies and are evaluated on different models and datasets. The lack of a comprehensive benchmark means we do not understand the strengths and weaknesses of existing attacks and defenses. We fill this gap by performing a large-scale measurement of different membership inference attacks and defenses. We systematize membership inference by studying nine attacks and six defenses, and measure the performance of the different attacks and defenses in a holistic evaluation. We then quantify the impact of the threat model on the results of these attacks. We find that some assumptions of the threat model, such as identical architecture and identical distribution between shadow and target models, are unnecessary. We are also the first to run the attacks on real-world data collected from the Internet, rather than on laboratory datasets. We further investigate what determines the performance of membership inference attacks and reveal that the commonly believed overfitting level is not sufficient for a successful attack. Instead, the Jensen-Shannon distance of the entropy/cross-entropy between member and non-member samples correlates better with attack performance. This gives us a new way to accurately predict membership inference risk without running the attack. Finally, we find that data augmentation degrades the performance of existing attacks to a larger extent, and we propose an adaptive attack that uses augmentation to train shadow and attack models to improve attack performance.
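A sketch of the risk measure highlighted above: the Jensen-Shannon distance between the cross-entropy (loss) distributions of member and non-member samples, computed here from simple histograms (the binning scheme is an illustrative choice):

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two discrete distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def membership_risk(member_losses, nonmember_losses, bins=50):
    """Histogram the per-sample cross-entropy of members vs. non-members
    and return their JS distance as a proxy for membership inference risk."""
    lo = min(member_losses.min(), nonmember_losses.min())
    hi = max(member_losses.max(), nonmember_losses.max())
    p, _ = np.histogram(member_losses, bins=bins, range=(lo, hi))
    q, _ = np.histogram(nonmember_losses, bins=bins, range=(lo, hi))
    return js_distance(p.astype(float), q.astype(float))
```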
Machine learning models leak information about the datasets on which they are trained. An adversary can build an algorithm to trace the individual members of a model's training dataset. As a fundamental inference attack, the adversary aims to distinguish between data points that were part of the model's training set and any other data points from the same distribution. This is known as the tracing (and also membership inference) attack. In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters. This is the current setting of machine learning as a service on the Internet. We introduce a privacy mechanism to train machine learning models that provably achieve membership privacy: the model's predictions on its training data are indistinguishable from its predictions on other data points from the same distribution. We design a strategic mechanism where the privacy mechanism anticipates the membership inference attacks. The objective is to train a model such that not only does it have the minimum prediction error (high utility), but also it is the most robust model against its corresponding strongest inference attack (high privacy). We formalize this as a min-max game optimization problem, and design an adversarial training algorithm that minimizes the classification loss of the model as well as the maximum gain of the membership inference attack against it. This strategy, which guarantees membership privacy (as prediction indistinguishability), also acts as a strong regularizer and significantly improves the model's generalization. We evaluate our privacy mechanism on deep neural networks using different benchmark datasets. We show that our min-max strategy can mitigate the risk of membership inference attacks (close to the random guess) with a negligible cost in terms of the classification error.
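A condensed sketch of one alternation of the described min-max training, assuming the inference model h takes a confidence vector and label and returns a membership probability (this interface and the update schedule are assumptions, not the paper's exact algorithm):

```python
import torch
import torch.nn.functional as F

def min_max_step(f, h, opt_f, opt_h, member_batch, ref_batch, lam=1.0):
    """One alternation of the min-max game: h (inference model) maximizes its
    membership gain, then f (classifier) minimizes task loss + lam * that gain."""
    (x_m, y_m), (x_r, y_r) = member_batch, ref_batch

    # Max step: fit h to separate training (member) data from reference data.
    with torch.no_grad():
        conf_m = F.softmax(f(x_m), dim=-1)
        conf_r = F.softmax(f(x_r), dim=-1)
    gain = (torch.log(h(conf_m, y_m) + 1e-12).mean()
            + torch.log(1.0 - h(conf_r, y_r) + 1e-12).mean())
    opt_h.zero_grad(); (-gain).backward(); opt_h.step()

    # Min step: train f for accuracy while making h's job on members hard.
    logits_m = f(x_m)
    conf_m = F.softmax(logits_m, dim=-1)
    loss = F.cross_entropy(logits_m, y_m) + lam * torch.log(h(conf_m, y_m) + 1e-12).mean()
    opt_f.zero_grad(); loss.backward(); opt_f.step()
    return loss.item()
```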
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risks for the training dataset used during model training. In this paper, we propose a novel and effective neuron-guided defense method against MIAs. We identify a key weakness of existing defense mechanisms against MIAs: they cannot defend against the two commonly used neural-network-based MIAs at the same time, which indicates that the two attacks should be evaluated separately to ensure the effectiveness of a defense. We propose NeuGuard, a new defense that jointly controls the model outputs and the activations of inner neurons, with the objective of guiding the model outputs on the training set and on the test set toward close distributions. NeuGuard consists of class-wise variance minimization, which constrains the final output neurons, and layer-wise balanced output control, which aims to constrain the inner neurons in each layer. We evaluate NeuGuard and compare it with state-of-the-art defenses against two neural-network-based MIAs and five of the strongest metric-based MIAs, including a newly proposed label-only MIA, on three benchmark datasets. The results show that NeuGuard outperforms state-of-the-art defenses by offering a substantially improved utility-privacy trade-off, generality, and overhead.
In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample's confidence score vector predicted by the target classifier as input and predicts the data sample to be a member or non-member of the target classifier's training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize the training process of the target classifier. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees of the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and this classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier. Specifically, MemGuard works in two phases. In Phase I, MemGuard finds a carefully crafted noise vector that can turn a confidence score vector into an adversarial example, which is likely to mislead the attacker's classifier into making a random guess at member or non-member. We find such a carefully crafted noise vector via a new method that we design to incorporate the unique utility-loss constraints on the noise vector. In Phase II, MemGuard adds the noise vector to the confidence score vector with a certain probability, which is selected to satisfy a given utility-loss budget on the confidence score vector. Our experimental results on
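A simplified sketch of the Phase I search described above, assuming the attack classifier returns a membership probability: perturb the confidence vector in logit space so the attack output approaches a random guess, while keeping a valid probability vector with an unchanged predicted label (the optimizer, step count, and back-off rule are illustrative simplifications of the paper's method):

```python
import torch

def craft_noise(conf, attack_model, steps=200, lr=0.05):
    """Search for a perturbation of a confidence vector that drives the
    attack classifier toward ~0.5 (a random guess on member vs. non-member)."""
    logit = torch.log(conf.clamp_min(1e-12))          # work in logit space
    delta = torch.zeros_like(logit, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    label = conf.argmax(-1)
    for _ in range(steps):
        perturbed = torch.softmax(logit + delta, dim=-1)   # stays a probability vector
        member_score = attack_model(perturbed)
        loss = (member_score - 0.5).pow(2).mean()          # push toward random guess
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            # Utility constraint: never change the predicted label.
            if (torch.softmax(logit + delta, -1).argmax(-1) != label).any():
                delta.mul_(0.5)                             # back off if the label flips
    return torch.softmax(logit + delta, dim=-1).detach()
```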
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
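A compact sketch of the shadow-model pipeline, with a single random-forest attack model for brevity (the paper trains per-class attack models; the model choices and the split format here are assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_attack_model(shadow_models, shadow_splits):
    """shadow_splits[i] = (in_X, in_y, out_X, out_y) for shadow model i, where
    in_* were used to train it and out_* were held out. Each shadow model is
    assumed to expose predict_proba, as scikit-learn classifiers do."""
    feats, labels = [], []
    for model, (in_X, _, out_X, _) in zip(shadow_models, shadow_splits):
        feats.append(model.predict_proba(in_X));  labels.append(np.ones(len(in_X)))
        feats.append(model.predict_proba(out_X)); labels.append(np.zeros(len(out_X)))
    attack = RandomForestClassifier(n_estimators=100)
    attack.fit(np.vstack(feats), np.concatenate(labels))
    return attack   # attack.predict_proba(target_confidences) gives membership scores
```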
Neural network pruning has been an essential technique for reducing the computation and memory requirements of deep neural networks on resource-constrained devices. Most existing research focuses mainly on balancing the sparsity and accuracy of a pruned network by strategically removing insignificant parameters and retraining the pruned model. The serious privacy risks that such efforts pose to training samples due to increased memorization have not yet been investigated. In this paper, we conduct the first analysis of privacy risks in neural network pruning. Specifically, we investigate the impact of neural network pruning on training data privacy, i.e., membership inference attacks. We first explore the impact of neural network pruning on prediction divergence, where the pruning process disproportionately affects the pruned model's behavior on members and non-members. Meanwhile, the influence of the divergence even varies among different classes in a fine-grained manner. Enlightened by this divergence, we propose a self-attention membership inference attack against pruned neural networks. Extensive experiments are conducted to rigorously evaluate the privacy impacts of different pruning approaches, sparsity levels, and adversary knowledge. The proposed attack shows higher attack performance on pruned models compared with eight existing membership inference attacks. In addition, we propose a new defense mechanism that protects the pruning process by mitigating the prediction divergence based on the KL-divergence distance, which has been experimentally demonstrated to effectively decrease the privacy risks while maintaining the sparsity and accuracy of the pruned models.
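One way the KL-divergence-based idea could look in code; this is an assumed, illustrative regularizer that penalizes divergence between the pruned model's average predictions on training and held-out data during fine-tuning, not the authors' exact defense:

```python
import torch
import torch.nn.functional as F

def kl_alignment_penalty(model, member_x, holdout_x):
    """Penalize the KL divergence between the pruned model's average prediction
    distribution on training (member) data and on held-out data, so both groups
    are treated more alike after pruning."""
    p = F.softmax(model(member_x), dim=-1).mean(0)
    q = F.softmax(model(holdout_x), dim=-1).mean(0)
    return torch.sum(p * (torch.log(p + 1e-12) - torch.log(q + 1e-12)))

def finetune_step(model, opt, member_x, member_y, holdout_x, beta=1.0):
    # Fine-tune the pruned model with task loss plus the alignment penalty.
    loss = F.cross_entropy(model(member_x), member_y) \
           + beta * kl_alignment_penalty(model, member_x, holdout_x)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```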
Semi-supervised learning (SSL) leverages both labeled and unlabeled data to train machine learning (ML) models. State-of-the-art SSL methods can achieve performance comparable to supervised learning while using far less labeled data. However, most existing work focuses on improving the performance of SSL. In this work, we take a different angle by studying the training data privacy of SSL. Specifically, we propose the first data-augmentation-based membership inference attack against ML models trained by SSL. Given a data sample and black-box access to a model, the goal of a membership inference attack is to determine whether the data sample belongs to the model's training dataset. Our evaluation shows that the proposed attack consistently outperforms existing membership inference attacks and achieves the best performance against models trained by SSL. Moreover, we discover that the reason for membership leakage in SSL differs from the commonly believed one in supervised learning, i.e., overfitting (the gap between training and testing accuracy). We observe that the SSL model generalizes well to the testing data (with almost zero overfitting) but "memorizes" the training data by giving more confident predictions regardless of their correctness. We also explore early stopping as a countermeasure against membership inference attacks on SSL. The results show that early stopping can mitigate the membership inference attack, but at the cost of degrading the model's utility.
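A sketch of how a data-augmentation-based membership feature could be extracted, assuming `augment` is any randomized augmentation applied to a batch (the specific summary statistics are illustrative, not the paper's exact attack features):

```python
import torch
import torch.nn.functional as F

def augmentation_features(model, x, augment, k=10):
    """Query the target model on k augmented views of a batch x and summarize
    the responses; members tend to receive consistently over-confident
    predictions across views."""
    with torch.no_grad():
        confs = torch.stack([F.softmax(model(augment(x)), dim=-1) for _ in range(k)])
    top = confs.max(dim=-1).values                    # [k, batch] top confidence per view
    preds = confs.argmax(dim=-1)                      # [k, batch] predicted labels per view
    agreement = (preds == preds[0]).float().mean(0)   # how often views agree with the first
    # Per-sample feature vector: mean/std of top confidence and view agreement.
    return torch.stack([top.mean(0), top.std(0), agreement], dim=-1)   # [batch, 3]
```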
Membership inference is one of the simplest forms of privacy leakage for machine learning models: given a data point and a model, determine whether the point was used to train the model. Existing membership inference attacks exploit the model's abnormal confidence when queried on its training data. These attacks do not apply if the adversary only has access to the model's predicted labels, without the confidence scores. In this paper, we introduce label-only membership inference attacks. Instead of relying on confidence scores, our attacks evaluate the robustness of the model's predicted labels under perturbations to obtain a fine-grained membership signal. These perturbations include common data augmentations or adversarial examples. We empirically show that our label-only membership inference attacks perform on par with prior attacks that require access to model confidences. We further demonstrate that label-only attacks break multiple defenses against membership inference attacks that (implicitly or explicitly) rely on a phenomenon we call confidence masking. These defenses modify a model's confidence scores in order to thwart attacks, but leave the model's predicted labels unchanged. Our label-only attacks demonstrate that confidence masking is not a viable defense strategy against membership inference. Finally, we investigate worst-case label-only attacks, which infer membership for a small number of outlier data points. We show that label-only attacks also match confidence-based attacks in this setting. We find that training models with differential privacy and (strong) L2 regularization are the only known defense strategies that successfully prevent all attacks. This remains true even when the differential privacy budget is too high to offer meaningful provable guarantees.
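A minimal sketch of a label-only membership signal using random Gaussian perturbations; the paper also uses data augmentations and adversarial perturbations to estimate robustness, so this is only the simplest variant:

```python
import torch

def label_robustness_score(model, x, y, sigma=0.05, trials=50):
    """Label-only membership signal: the fraction of Gaussian-perturbed copies
    of a single sample x (with batch dimension) that the model still assigns
    label y. Training points tend to sit farther from the decision boundary,
    so a higher fraction suggests membership."""
    with torch.no_grad():
        hits = 0
        for _ in range(trials):
            noisy = x + sigma * torch.randn_like(x)
            hits += int(model(noisy).argmax(-1).item() == y)
    return hits / trials   # threshold this score to decide member vs. non-member
```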
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
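One white-box signal of the kind exploited in this line of work is the per-example gradient norm, sketched below for a single layer's parameters (the full attack combines gradients, activations, and losses in a learned attack model; this is only an illustrative feature):

```python
import torch
import torch.nn.functional as F

def grad_norm_feature(model, last_layer, x, y):
    """Norm of the per-example loss gradient w.r.t. one layer's parameters.
    SGD drives this norm down on training points, so members tend to have
    smaller values. x: a single input, y: its scalar label tensor."""
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.view(1))
    grads = torch.autograd.grad(loss, tuple(last_layer.parameters()))
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
```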
Machine learning (ML) models have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model. MIAs on ML models can directly lead to privacy breaches. For example, by identifying that a clinical record has been used to train a model associated with a certain disease, an attacker can infer with high confidence that the owner of the clinical record has the disease. In recent years, MIAs have been shown to be effective against various ML models, e.g., classification models and generative models. Meanwhile, many defense methods have been proposed to mitigate MIAs. Although MIAs on ML models form a newly emerging and rapidly growing research area, there has been no systematic survey on this topic yet. In this paper, we conduct the first comprehensive survey of membership inference attacks and defenses. We provide taxonomies of attacks and defenses based on their characteristics, and discuss their pros and cons. Based on the limitations and gaps identified in this survey, we point out several promising future research directions to inspire researchers who wish to follow this area. This survey not only serves as a reference for the research community but also provides a clear picture for researchers outside this research area. To further help researchers, we have created an online resource repository which we keep updated with future relevant work. Interested readers can find the repository at https://github.com/hongshenghu/membership-inference-machine-learning-literature.
Deep ensemble learning has been shown to improve accuracy by training multiple neural networks and averaging their outputs. Ensemble learning has also been suggested to defend against membership inference attacks that undermine privacy. In this paper, we empirically demonstrate a trade-off between these two goals, namely accuracy and privacy (in terms of membership inference attacks), in deep ensembles. Using a wide range of datasets and model architectures, we show that the effectiveness of membership inference attacks increases when ensembling improves accuracy. We analyze the impact of various factors in deep ensembles and demonstrate the root cause of the trade-off. Then, we evaluate common defenses against membership inference attacks based on regularization and differential privacy. We show that while these defenses can mitigate the effectiveness of membership inference attacks, they simultaneously degrade ensemble accuracy. We illustrate a similar trade-off in more advanced, state-of-the-art ensembling techniques, such as snapshot ensembles and diversified ensemble networks. Finally, we propose a simple yet effective defense for deep ensembles to break the trade-off and, consequently, improve accuracy and privacy simultaneously.
Machine learning (ML) has become a core component of many real-world applications and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack has shown that extraction of information on the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks have many assumptions on the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model's training data. We relax all these key assumptions, showing that such attacks are very broadly applicable at low cost and thus pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat using eight diverse datasets which show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against such a broader class of membership inference attacks that maintain a high level of utility of the ML model.
As a long-standing threat to the privacy of training data, membership inference attacks (MIAs) are ubiquitous in machine learning models. Existing works demonstrate a close connection between the distinguishability of the training and testing loss distributions and the model's vulnerability to MIAs. Motivated by existing results, we propose a novel training framework based on a relaxed loss with a more achievable learning target, which leads to a narrowed generalization gap and reduced privacy leakage. RelaxLoss is applicable to any classification model, with the additional benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs as well as model utility. Our defense is the first that can withstand a wide range of attacks while preserving (or even improving) the utility of the target model. The source code is available at https://github.com/dingfanchen/relaxloss
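A sketch of the relaxed-loss idea in its simplest form, as an assumption for illustration: when the batch loss is already below a target level alpha, take an ascent step instead of a descent step so the training loss stays near alpha (the published method additionally uses posterior flattening, which is omitted here):

```python
import torch.nn.functional as F

def relaxed_loss_step(model, opt, x, y, alpha=0.5):
    """Keep the training loss near a target level alpha instead of driving it
    to zero, which narrows the train/test loss gap that membership inference
    attacks exploit."""
    loss = F.cross_entropy(model(x), y)
    objective = loss if loss.item() >= alpha else -loss   # ascend when below target
    opt.zero_grad(); objective.backward(); opt.step()
    return loss.item()
```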
Privacy-preserving inference via edge or encrypted computing paradigms encourages users of machine learning services to confidentially run a model on their personal data for a target task and only share the model's outputs with the service provider; e.g., to activate further services. Nevertheless, despite all confidentiality efforts, we show that a ''vicious'' service provider can approximately reconstruct its users' personal data by observing only the model's outputs, while keeping the target utility of the model very close to that of an ''honest'' service provider. We show the possibility of jointly training a target model (to be run at users' side) and an attack model for data reconstruction (to be secretly used at the server's side). We introduce the ''reconstruction risk'': a new measure for assessing the quality of reconstructed data that better captures the privacy risk of such attacks. Experimental results on 6 benchmark datasets show that for low-complexity data types, or for tasks with a larger number of classes, a user's personal data can be approximately reconstructed from the outputs of a single target inference task. We propose a potential defense mechanism that helps to distinguish vicious vs. honest classifiers at inference time. We conclude this paper by discussing current challenges and open directions for future studies. We open-source our code and results, as a benchmark for future work.
Authentication systems are vulnerable to model inversion attacks, in which an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack, because inverting a biometric model allows the attacker to produce realistic biometric inputs to spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and with 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For iris data, the membership inference attack accuracy against the classification network improves from 52% to 62%.
Relying on the fact that not all inputs require the same amount of computation to yield a confident prediction, multi-exit networks are gaining attention as a prominent approach for pushing the limits of efficient deployment. Multi-exit networks endow a backbone model with early exits, allowing predictions to be obtained at intermediate layers of the model and thus saving computation time and/or energy. However, current designs of multi-exit networks only consider achieving the best trade-off between resource usage efficiency and prediction accuracy; the privacy risks stemming from them have never been explored. This prompts the need for a comprehensive investigation of privacy risks in multi-exit networks. In this paper, we perform the first privacy analysis of multi-exit networks through the lens of membership leakage. In particular, we first leverage existing attack methodologies to quantify the vulnerability of multi-exit networks to membership leakage. Our experimental results show that multi-exit networks are less vulnerable to membership leakage, and that the exits (their number and depth) attached to the backbone model are highly correlated with attack performance. Furthermore, we propose a hybrid attack that exploits exit information to improve the performance of existing attacks. We evaluate the membership leakage threat caused by our hybrid attack under three different adversarial setups, ultimately arriving at a model-free and data-free adversary. These results clearly demonstrate that our hybrid attack is very broadly applicable, and hence the corresponding risk is much more severe than shown by existing membership inference attacks. We further present TimeGuard, a defense mechanism specifically tailored to multi-exit networks, and show that TimeGuard perfectly mitigates the newly proposed attacks.
Nowadays, owners and developers of deep learning models must consider stringent privacy-preservation rules for their training data, which is often crowd-sourced and retains sensitive information. The most widely adopted method to enforce privacy guarantees on deep learning models currently relies on optimization techniques that enforce differential privacy. According to the literature, this approach has proven to be a successful defense against several privacy attacks on models, but its downside is a substantial degradation of the models' performance. In this work, we compare the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques. We analyze the resulting models' utility, training performance, and the effectiveness of membership inference and model inversion attacks against the learned models. Finally, we discuss the pitfalls and limitations of differential privacy and empirically demonstrate the superior protection properties of dropout and L2 regularization.
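For reference, a readable (non-vectorized) sketch of the DP-SGD step discussed above: clip each per-example gradient, sum, add Gaussian noise, and average; real implementations vectorize this and track the privacy budget with an accountant:

```python
import torch
import torch.nn.functional as F

def dp_sgd_step(model, opt, xs, ys, clip=1.0, noise_mult=1.1):
    """Clip each per-example gradient to norm `clip`, sum, add Gaussian noise
    with standard deviation noise_mult * clip, then average and step."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                          # per-example gradients
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.view(1))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip / (norm.item() + 1e-12))
        for s, g in zip(summed, grads):
            s.add_(scale * g)
    opt.zero_grad()
    for p, s in zip(params, summed):
        p.grad = (s + noise_mult * clip * torch.randn_like(s)) / len(xs)
    opt.step()
```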
During training, machine learning models may store or "learn" more information about the training data than is required for the prediction or classification task. Property inference attacks exploit this: they aim to extract statistical properties of a given model's training data without having access to the training data itself. Such properties may include the quality of pictures to identify the camera model, the age distribution to reveal the target audience of a product, or the host types involved in malware attacks on a computer network. This attack is especially accurate when the attacker has access to all model parameters, i.e., in the white-box scenario. By defending against such attacks, model owners can ensure that their training data, the associated properties, and thus their intellectual property stay private, even if they deliberately share their models, e.g., for collaborative training, or if the models are leaked. In this paper, we introduce an effective defense mechanism against white-box property inference attacks that is independent of the training data type, model task, or number of properties. It mitigates property inference attacks by systematically changing the trained weights and biases of a target model such that an adversary cannot extract the chosen properties. We empirically evaluate our approach on three different datasets, including tabular and image data, and on two types of artificial neural networks. Our results show that machine learning models can be protected against property inference attacks effectively and reliably, with a good privacy-utility trade-off. Furthermore, our approach indicates that the mechanism is also effective at removing multiple properties.
Membership inference attacks (MIAs) pose privacy risks to the training data of machine learning models. With an MIA, an attacker guesses whether target data are a member of the training dataset. A state-of-the-art defense against MIAs, distillation for membership privacy (DMP), requires not only the private data to be protected but also a large amount of unlabeled public data. However, in certain privacy-sensitive domains, such as medicine and finance, the availability of public data is not guaranteed. Moreover, a trivial method for generating public data using generative adversarial networks significantly decreases the model accuracy, as reported by the authors of DMP. To overcome this problem, we propose a novel defense against MIAs that uses knowledge distillation without requiring public data. Our experiments show that the privacy protection and accuracy of our defense are comparable to those of DMP on the benchmark tabular datasets used in MIA research, and that our defense has a much better privacy-utility trade-off than existing defenses that do not use public data on the image dataset CIFAR10.
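The building block of a distillation-based defense is the standard knowledge-distillation objective, sketched below (the temperature and mixing weight are illustrative choices; the paper's full procedure around this loss is not reproduced here):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soften the teacher's predictions with temperature T, match them with a
    KL term, and mix with the usual cross-entropy on the hard labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```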