The wide adoption and application of masked language models (MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities -- to what extent do MLMs leak information about their training data? Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying the potential robustness of MLMs to privacy attacks. In this work, we posit that prior attempts were inconclusive because they based their attack solely on the MLM's model score. We devise a stronger membership inference attack based on likelihood ratio hypothesis testing that involves an additional reference MLM to more accurately quantify the privacy risks of memorization in MLMs. We show that masked language models are extremely susceptible to likelihood ratio membership inference attacks: Our empirical results, on models trained on medical notes, show that our attack improves the AUC of prior membership inference attacks from 0.66 to an alarmingly high 0.90 level, with a significant improvement in the low-error region: at 1% false positive rate, our attack is 51X more powerful than prior work.
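To make the scoring rule concrete, the following is a minimal sketch (our illustration, not the paper's implementation) of a likelihood-ratio membership test that compares a candidate sample's log-likelihood under the suspected target MLM against a reference MLM; the scoring callables and the threshold are assumed to be supplied by the attacker.

```python
# Hedged sketch of a likelihood-ratio membership test for language models.
# `target_loglik` / `reference_loglik` are assumed black-box scorers (e.g.,
# pseudo-log-likelihoods of an MLM); they are placeholders, not a real API.
from typing import Callable, List, Sequence


def lr_membership_scores(
    samples: Sequence[str],
    target_loglik: Callable[[str], float],
    reference_loglik: Callable[[str], float],
) -> List[float]:
    """score(x) = log p_target(x) - log p_reference(x); larger => more member-like."""
    return [target_loglik(x) - reference_loglik(x) for x in samples]


def decide(scores: List[float], threshold: float) -> List[bool]:
    # Samples scoring above the threshold are flagged as likely training members.
    return [s > threshold for s in scores]
```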
Membership inference attacks allow an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset. These attacks are currently evaluated using average-case "accuracy" metrics that fail to characterize whether the attack can confidently identify any members of the training set. We argue that attacks should instead be evaluated by computing their true-positive rate at low (e.g., <0.1%) false-positive rates, and find that most prior attacks perform poorly when evaluated in this way. To address this, we develop a likelihood ratio attack (LiRA) that carefully combines multiple ideas from the literature. Our attack is 10x more powerful at low false-positive rates, and also strictly dominates prior attacks on existing metrics.
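A simplified sketch of the per-example test described above, assuming the attacker has already collected shadow-model losses for the candidate example both when it was included in and excluded from training (illustrative only, not the authors' code):

```python
# Fit Gaussians to "in" and "out" shadow losses and score the target loss by
# the log likelihood ratio. The Gaussian choice follows the description above;
# this is a sketch, not a faithful reimplementation.
import numpy as np
from scipy.stats import norm


def lira_score(target_loss: float,
               losses_in: np.ndarray,
               losses_out: np.ndarray) -> float:
    mu_in, sd_in = losses_in.mean(), losses_in.std() + 1e-8
    mu_out, sd_out = losses_out.mean(), losses_out.std() + 1e-8
    # Positive scores favour the "member" hypothesis.
    return norm.logpdf(target_loss, mu_in, sd_in) - norm.logpdf(target_loss, mu_out, sd_out)
```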
How much information does a given trained model leak about its training set? Membership inference attacks are used as an auditing tool to quantify the private information that a model leaks about its training set. Membership inference attacks are influenced by different uncertainties that an attacker has to resolve about the training data, the training algorithm, and the underlying data distribution. Thus, the attack success rates of many attacks in the literature do not precisely capture the information leakage of models about their data, as they also reflect other uncertainties inherent in the attack algorithm. In this paper, we explain the implicit assumptions and the simplifications made in prior work using a hypothesis testing framework. We also derive new attack algorithms from this framework that can achieve high AUC scores while also highlighting the different factors that affect their performance. Our algorithms capture a very precise approximation of the privacy loss of models and can be used as a tool to perform an accurate and informed estimation of privacy risk in machine learning models. We provide a thorough empirical evaluation of our attack strategies on various machine learning tasks and benchmark datasets.
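For reference, the membership question can be phrased as a hypothesis test (our notation, not necessarily the paper's): $H_0: x \notin D_{\text{train}}$ versus $H_1: x \in D_{\text{train}}$, decided by thresholding a likelihood ratio $\Lambda(x;\theta) = p(\theta \mid H_1)\,/\,p(\theta \mid H_0) \gtrless \tau$, where $\theta$ is the observed model and the threshold $\tau$ trades off false positives against true positives.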
In recent years, the vulnerability of machine learning models to membership inference attacks has received considerable attention. However, existing attacks are mostly impractical because of their high false positive rates, with non-member samples frequently misclassified as members. This type of error makes the predicted membership signal unreliable, especially since most samples are non-members in real-world applications. In this work, we argue that membership inference attacks can benefit drastically from \emph{difficulty calibration}, where an attack's predicted membership score is adjusted to the difficulty of correctly classifying the target sample. We show that difficulty calibration can significantly reduce the false positive rates of a variety of existing attacks without a loss in accuracy.
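A minimal sketch of difficulty calibration as described above (assumed setup: `reference_losses` holds each sample's average loss under models that never trained on it):

```python
# Calibrate a loss-based membership score by subtracting a per-sample
# difficulty estimate, so that intrinsically easy samples are not mistaken
# for members. Illustrative sketch only.
import numpy as np


def calibrated_scores(target_losses: np.ndarray,
                      reference_losses: np.ndarray) -> np.ndarray:
    # Lower calibrated loss => more member-like; threshold or rank as needed.
    return target_losses - reference_losses
```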
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial "machine learning as a service" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.
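The shadow-model recipe can be sketched as follows (our simplification: a single attack classifier over softmax vectors rather than the per-class attack models a full implementation might use):

```python
# Train an attack model to distinguish prediction vectors of samples the
# shadow models trained on from those they did not train on. Sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression


def build_attack_model(shadow_preds_in: np.ndarray,
                       shadow_preds_out: np.ndarray) -> LogisticRegression:
    X = np.vstack([shadow_preds_in, shadow_preds_out])
    y = np.concatenate([np.ones(len(shadow_preds_in)),
                        np.zeros(len(shadow_preds_out))])
    return LogisticRegression(max_iter=1000).fit(X, y)


# Usage sketch: attack = build_attack_model(...)
# membership_prob = attack.predict_proba(target_softmax_vectors)[:, 1]
```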
Named entity recognition (NER) models are widely used for identifying named entities (e.g., individuals, locations, and other information) in text documents. Machine learning based NER models are increasingly being applied in privacy-sensitive applications that need automatic and scalable identification of sensitive information to redact text for data sharing. In this paper, we study the setting where NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training datasets. With updated pre-trained NER models from spaCy, we demonstrate two distinct membership attacks on these models. Our first attack capitalizes on unintended memorization in the NER's underlying neural network, a phenomenon neural networks are known to be vulnerable to. Our second attack leverages a timing side-channel to target NER models that maintain vocabularies constructed from the training data. We show that words from the training dataset follow different functional paths than previously unseen words, with measurable differences in execution time. Revealing the membership status of training samples has clear privacy implications; e.g., in text redaction, the sensitive words or phrases that are to be found and removed risk being detected in the training dataset. Our experimental evaluation includes the redaction of both password and health data, presenting both security risks and privacy/regulatory issues. This is exacerbated by results that show memorization with only a single phrase. We achieved 70% AUC in our first attack on a text redaction use-case. We also show overwhelming success in the timing attack with 99.23% AUC. Finally, we discuss potential mitigation approaches to realize the safe use of NER models in light of the privacy and security implications of membership inference attacks.
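The timing side-channel can be probed with a routine like the sketch below (illustrative; `query_ner` is an assumed black-box call to the NER service, and the comparison between seen and unseen words is left to the caller):

```python
# Measure the median latency of repeated NER queries for a candidate input;
# a systematic gap between training-vocabulary words and unseen words would
# indicate the timing side-channel described above. Sketch only.
import statistics
import time
from typing import Callable


def median_latency(query_ner: Callable[[str], object],
                   text: str, n_trials: int = 100) -> float:
    timings = []
    for _ in range(n_trials):
        start = time.perf_counter()
        query_ner(text)
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)
```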
Machine learning models are vulnerable to membership inference attacks, in which an adversary aims to predict whether a particular sample was contained in the target model's training dataset. Existing attack methods commonly exploit only the output information (mostly, the loss) of the given target model. As a result, in practical scenarios where both member and non-member samples yield similarly small losses, these methods naturally fail to differentiate between them. To address this limitation, in this paper we propose a new attack method, called \system, which can exploit the membership information from the whole training process of the target model to improve the attack performance. To mount the attack in the common black-box setting, we leverage knowledge distillation and represent the membership information by the losses evaluated on a sequence of intermediate models at different distillation epochs, namely the \emph{distilled loss trajectory}, together with the loss from the given target model. Experimental results over different datasets and model architectures demonstrate the advantage of our attack in terms of different metrics. For example, on CINIC-10, our attack achieves at least a 6$\times$ higher true-positive rate at a low false-positive rate of 0.1\% than existing methods. Further analysis demonstrates the overall effectiveness of our attack in more stringent scenarios.
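The loss-trajectory idea can be sketched as a feature-construction step plus an attack classifier (the paper's attack model and distillation pipeline are more involved; the names and classifier choice below are illustrative):

```python
# Represent each candidate sample by its losses under a sequence of distilled
# intermediate models plus the target model's loss, then fit a binary attack
# classifier on these trajectory features. Sketch only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


def trajectory_features(distilled_losses: np.ndarray,   # shape (n_samples, n_epochs)
                        target_losses: np.ndarray        # shape (n_samples,)
                        ) -> np.ndarray:
    return np.hstack([distilled_losses, target_losses[:, None]])


def fit_attack(features: np.ndarray, is_member: np.ndarray) -> GradientBoostingClassifier:
    return GradientBoostingClassifier().fit(features, is_member)
```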
The data used to train machine learning (ML) models can be sensitive. Membership inference attacks (MIAs), which attempt to determine whether a particular data record was used to train an ML model, violate membership privacy. ML model builders need a principled metric that enables them to quantify the privacy risk of individual training data records efficiently. No prior membership privacy risk metric meets all of these criteria. We present such a metric, SHAPr, which uses Shapley values to quantify a model's memorization of an individual training data record by measuring its influence on the model's utility. This memorization is a measure of the likelihood of a successful MIA. Using ten benchmark datasets, we show that SHAPr is effective (precision: 0.94 $\pm$ 0.06, recall: 0.88 $\pm$ 0.06) in estimating the susceptibility of training data records to MIAs, and efficient (it can be computed within a few minutes for smaller datasets and in about 90 minutes for the largest dataset). SHAPr is also versatile, in that it can be used for other purposes such as assessing the fairness of, or assigning valuations to, subsets of a dataset. For example, we show that SHAPr correctly captures the disproportionate vulnerability of different subgroups to MIAs. Using SHAPr, we show that the membership privacy risk of a dataset is not necessarily improved by removing high-risk training data records, thereby confirming an observation from prior work in a significantly extended setting (on ten datasets, with up to 50% of the data removed).
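For intuition, a brute-force Monte Carlo estimate of per-record Shapley-style influence on utility looks like the sketch below (SHAPr itself relies on a much more efficient estimator; `utility` is an assumed function that trains and evaluates a model on a subset of records):

```python
# Monte Carlo Shapley values: average each record's marginal contribution to
# utility over random permutations. Purely illustrative and slow.
import random
from typing import Callable, Dict, Sequence


def mc_shapley(records: Sequence,
               utility: Callable[[list], float],
               n_perms: int = 50) -> Dict[int, float]:
    values = {i: 0.0 for i in range(len(records))}
    for _ in range(n_perms):
        order = list(range(len(records)))
        random.shuffle(order)
        subset, prev_u = [], utility([])
        for i in order:
            subset.append(records[i])
            u = utility(subset)
            values[i] += (u - prev_u) / n_perms
            prev_u = u
    return values
```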
Privacy preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks in the use of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets which contain personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We create a large number of text samples from the fine-tuned BERT models utilizing a custom sequential sampling strategy with two prompting strategies. We search these samples for named entities and check whether they are also present in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that the application of DP has a huge effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that is only pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding to what extent BERT-based services are prone to training data extraction attacks.
Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget, $\epsilon$, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of $\epsilon$ in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models.
Machine learning (ML) models have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model or not. MIAs on ML models can directly lead to a privacy breach. For example, by identifying the fact that a clinical record has been used to train a model associated with a certain disease, an attacker can infer that the owner of that clinical record has the disease with a high probability. In recent years, MIAs have been shown to be effective against various ML models, e.g., classification models and generative models. Meanwhile, many defense methods have been proposed to mitigate MIAs. Although MIAs on ML models form a newly emerging and rapidly growing research area, there has been no systematic survey on this topic. In this paper, we conduct the first comprehensive survey on membership inference attacks and defenses. We provide taxonomies for both attacks and defenses based on their characteristics, and discuss their pros and cons. Based on the limitations and gaps identified in this survey, we point out several promising future research directions to inspire researchers who wish to follow this area. This survey not only serves as a reference for the research community but also brings a clear picture to researchers outside this research area. To further facilitate researchers, we have created an online resource repository which we keep updated with future relevant work. Interested readers can find the repository at https://github.com/hongshenghu/membership-inference-machine-learning-literature.
When models make decisions for people, distribution shift can create undue disparities. However, it is difficult for external entities to check for distribution shift, because the model and its training set are often proprietary. In this paper, we introduce and study a black-box auditing method for detecting cases of distribution shift that lead to model disparities across demographic groups. By extending techniques used in membership and attribute inference attacks -- which are designed to expose private information from learned models -- we show that an external auditor can obtain the information needed to identify these distribution shifts solely by querying the model. Our experimental results on real-world datasets show that this approach is effective, achieving 80--100% AUC-ROC in detecting shifts involving the underrepresentation of demographic groups in the training set. Researchers and investigative journalists can use our tools to audit proprietary models without their owners' authorization and expose cases of underrepresentation in the training datasets.
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
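One white-box feature of the kind described above is the per-sample gradient of the loss with respect to the model parameters; a sketch of extracting its norm is shown below (assuming a PyTorch model and loss; this is not the paper's code):

```python
# Compute the L2 norm of the parameter gradient for a single (x, y) pair;
# an attack classifier can consume this alongside activations and losses.
import torch


def grad_norm_feature(model: torch.nn.Module, loss_fn,
                      x: torch.Tensor, y: torch.Tensor) -> float:
    model.zero_grad()
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(
        loss, [p for p in model.parameters() if p.requires_grad])
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
```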
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit these recourses to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
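The distance signal these attacks build on can be computed as in the sketch below (assumed inputs: each instance and the counterfactual returned for it by a recourse method; how the distances are thresholded or fed to a classifier is attack-specific):

```python
# Per-instance distance between an input and its counterfactual/recourse;
# unusually small or large distances can leak membership. Sketch only.
import numpy as np


def recourse_distance_scores(instances: np.ndarray,
                             counterfactuals: np.ndarray) -> np.ndarray:
    """Both arrays have shape (n, d); returns the L2 distance per instance."""
    return np.linalg.norm(instances - counterfactuals, axis=1)
```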
Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence's count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated ~1000 times more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
With the widespread application of machine learning models, it has become critical to study the potential data leakage of models trained on sensitive data. Recently, various membership inference (MI) attacks have been proposed that determine whether a sample was part of the training set or not. Although the first generation of MI attacks has been proven ineffective in practice, a few recent studies have proposed practical MI attacks that achieve reasonable true positive rates at low false positive rates. The question is whether these attacks can be reliably used in practice. We showcase a practical application of membership inference attacks in which an auditor (investigator) uses them to prove to a judge/jury that an auditee unlawfully used sensitive data during training. Then, we show that the auditee can provide a dataset (with a potentially unlimited number of samples) to a judge on which MI attacks catastrophically fail. Hence, the auditee challenges the credibility of the auditor and can get the case dismissed. More importantly, we show that the auditee does not need to know anything about the MI attack, nor have query access to it. In other words, all current SOTA MI attacks in the literature suffer from the same issue. Through comprehensive experimental evaluation, we show that our algorithms can increase the false positive rate to between ten and thousands of times larger than what the auditor claims to the judge. Lastly, we argue that the implications of our algorithms go beyond discrediting the auditor: current membership inference attacks can identify memorized subpopulations, but they cannot reliably identify which exact sample in the subpopulation was used during training.
Membership inference is one of the simplest forms of privacy leakage for machine learning models: given a data point and a model, determine whether the point was used to train the model. Existing membership inference attacks exploit a model's abnormal confidence when queried on its training data. These attacks do not apply if the adversary only has access to the model's predicted labels, without confidence scores. In this paper, we introduce label-only membership inference attacks. Instead of relying on confidence scores, our attacks evaluate the robustness of a model's predicted labels under perturbations to obtain a fine-grained membership signal. These perturbations include common data augmentations or adversarial examples. We empirically show that our label-only membership inference attacks perform on par with prior attacks that require access to model confidences. We further demonstrate that label-only attacks break multiple defenses against membership inference that (implicitly or explicitly) rely on a phenomenon we call confidence masking. These defenses modify a model's confidence scores in order to thwart attacks, but leave the model's predicted labels unchanged. Our label-only attacks demonstrate that confidence masking is not a viable defense strategy against membership inference. Finally, we investigate worst-case label-only attacks that infer membership for a small number of outlier data points. We show that label-only attacks also match confidence-based attacks in this setting. We find that training models with differential privacy and (strong) L2 regularization are the only known defense strategies that successfully prevent all attacks. This remains true even when the differential privacy budget is too high to offer meaningful provable guarantees.
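A minimal label-only membership signal along the lines described above (assumed helpers: `predict` returns the model's hard label and `perturb` applies a random augmentation or small adversarial step):

```python
# Estimate how often the predicted label survives perturbations; members tend
# to be classified more robustly than non-members. Illustrative sketch.
from typing import Callable

import numpy as np


def label_robustness_score(x: np.ndarray, y_pred: int,
                           predict: Callable[[np.ndarray], int],
                           perturb: Callable[[np.ndarray], np.ndarray],
                           n_trials: int = 32) -> float:
    hits = sum(predict(perturb(x)) == y_pred for _ in range(n_trials))
    return hits / n_trials  # higher => more robust => more member-like
```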
Training machine learning models on privacy-sensitive data has become a popular practice, driving innovation in ever-expanding fields. This has opened the door to new attacks that can have serious privacy implications. One such attack, the membership inference attack (MIA), exposes whether or not a particular data point was used to train a model. A growing body of literature uses differentially private (DP) training algorithms as a defense against such attacks. However, these works evaluate the defense under the restrictive assumption that all members of the training set, as well as non-members, are independent and identically distributed. This assumption does not hold for many real-world use cases in the literature. Motivated by this, we evaluate membership inference with statistical dependencies among samples and explain why DP does not provide meaningful protection in this more general case (the privacy parameter $\epsilon$ scales with the training set size $n$). We conduct a series of empirical evaluations using training sets built from real-world data that exhibit different types of dependencies among samples. Our results show that training set dependencies can severely increase the performance of MIAs, and therefore assuming that data samples are statistically independent can significantly underestimate the performance of MIAs.
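The scaling claim can be read through the standard group-privacy bound (stated here in our notation): if $M$ satisfies pure $\epsilon$-DP for neighbouring datasets differing in one record, then for datasets $D, D'$ differing in $k$ mutually dependent records it only guarantees $\Pr[M(D)\in S] \le e^{k\epsilon}\,\Pr[M(D')\in S]$, so the effective budget grows linearly in the number of correlated records and, in the worst case, with the training set size $n$.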
With the development of machine learning techniques, research attention has shifted from single-modal learning to multi-modal learning, as real-world data exist in the form of different modalities. However, multi-modal models often carry more information than single-modal models, and they are usually applied in sensitive scenarios such as medical report generation or disease identification. In contrast to existing membership inference attacks against machine learning classifiers, we focus on the problem where the input and output of the multi-modal model are in different modalities, such as image captioning. This work studies the privacy leakage of multi-modal models through the lens of membership inference attacks, the process of determining whether a data record was involved in the model's training process or not. To achieve this, we propose Multi-modal Models Membership Inference (M^4I) with two attack methods to infer the membership status, named metric-based (MB) M^4I and feature-based (FB) M^4I, respectively. More specifically, MB M^4I adopts similarity metrics while attacking to infer the membership of the target data. FB M^4I uses a pre-trained shadow multi-modal feature extractor to achieve the goal of the data inference attack by comparing the similarities of the extracted input and output features. Extensive experimental results show that both attack methods can achieve strong performance: attack success rates of 72.5% and 94.83% on average, respectively, can be obtained under unrestricted scenarios. Moreover, we evaluate multiple defense mechanisms against our attacks. The source code of the M^4I attacks is publicly available at https://github.com/MultimodalMI/Multimodal-membership-inference.git.
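The metric-based variant reduces to comparing a generated output with its ground truth; a toy similarity metric (a bag-of-words cosine; the paper's choice of metrics may differ) is sketched below:

```python
# Toy similarity between a generated caption and the ground-truth caption;
# unusually high similarity suggests the image-caption pair was seen in
# training. Illustrative only.
import math
from collections import Counter


def caption_cosine(generated: str, ground_truth: str) -> float:
    a, b = Counter(generated.lower().split()), Counter(ground_truth.lower().split())
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0
```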
As deep learning is now used in many real-world applications, research has increasingly focused on the privacy of deep learning models and how to prevent attackers from obtaining sensitive information about the training data. However, image-text models such as CLIP have not yet been studied in the context of privacy attacks. While membership inference attacks aim to tell whether a specific data point was used for training, we introduce a new type of privacy attack, named identity inference attack (IDIA), designed for multi-modal image-text models such as CLIP. Using IDIA, an attacker can reveal whether a particular person was part of the training data by querying the model in a black-box fashion with different images of the same person. By letting the model choose from a wide variety of possible text labels, the attacker can probe whether the model recognizes the person and, therefore, whether that person was used for training. Through several experiments on CLIP, we show that the attacker can identify individuals used for training with very high accuracy and that the model learns to associate names with the depicted individuals. Our experiments show that multi-modal image-text models indeed leak sensitive information about their training data and should therefore be handled with care.
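The identity inference procedure can be sketched as a voting scheme over several images of the target person (assumed helper: `rank_names` queries the image-text model in a black-box way and returns the highest-scoring candidate name):

```python
# If the model picks the true name for enough of the person's images, infer
# that the person appeared in the training data. Sketch under assumptions.
from typing import Callable, Sequence


def idia_vote(images: Sequence, true_name: str,
              candidate_names: Sequence[str],
              rank_names: Callable[[object, Sequence[str]], str],
              vote_threshold: float = 0.5) -> bool:
    hits = sum(rank_names(img, candidate_names) == true_name for img in images)
    return hits / max(len(images), 1) >= vote_threshold
```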