Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence's count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated ~1000 times more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
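To make the duplication finding concrete, here is a minimal sketch (not the authors' code; function and variable names are hypothetical) of how one might relate a training sequence's duplicate count to how often a model regenerates it:

```python
from collections import Counter

def duplication_vs_regeneration(train_seqs, generated_seqs):
    """Relate each training sequence's duplicate count to how often the
    model regenerates it (illustrative helper, not the paper's pipeline)."""
    train_counts = Counter(train_seqs)    # times each sequence appears in the training data
    gen_counts = Counter(generated_seqs)  # times the model emitted each sequence
    # Bucket training sequences by duplicate count, average regeneration counts per bucket.
    buckets = {}
    for seq, c in train_counts.items():
        buckets.setdefault(c, []).append(gen_counts.get(seq, 0))
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

# A superlinear relationship would show the average regeneration count growing
# much faster than the duplicate count itself (e.g., 10x duplicates -> ~1000x generations).
```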
It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data. We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
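A common recipe in this line of attack is to sample many generations from the model and then surface the candidates the model itself finds most likely. The sketch below illustrates that general idea with off-the-shelf Hugging Face APIs; the sampling settings and membership criterion are illustrative, not the authors' exact pipeline:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text):
    """Model perplexity of a candidate string (lower = model finds it more likely)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return torch.exp(loss).item()

# Sample many candidate generations, then rank them by perplexity;
# unusually low-perplexity samples are the ones most likely to be memorized text.
prompt_ids = tok(tok.bos_token, return_tensors="pt").input_ids
samples = model.generate(prompt_ids, do_sample=True, top_k=40, max_new_tokens=64,
                         num_return_sequences=8, pad_token_id=tok.eos_token_id)
texts = [tok.decode(s, skip_special_tokens=True) for s in samples]
top_candidates = sorted(texts, key=perplexity)[:3]
```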
Large language models have been shown to memorize private information, such as social security numbers, from their training data. Given the enormous size of training corpora, it is challenging to screen and automatically filter such private data. In this paper, we propose Confidentially Redacted Training (CRT), a method for training language generation models while protecting confidential segments. We borrow ideas from differential privacy (which solves a related but distinct problem) and show that our method is able to prevent unintended memorization by randomizing parts of the training process. Moreover, we show that redaction with an approximately correct screening policy amplifies the confidentiality guarantee. We implement the method for LSTM and GPT language models. Our experimental results show that models trained with CRT attain almost the same perplexity while maintaining strong confidentiality.
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models-a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization.In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.
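The testing methodology is typically described in terms of a rank-based "exposure" of an inserted canary: how unusually likely the model finds the canary compared with random candidate sequences that were never in the training data. A minimal sketch of that metric follows (an illustrative reimplementation; variable names are ours):

```python
import math

def exposure(canary_nll, candidate_nlls):
    """Rank-based exposure of a canary, given negative log-likelihoods (NLLs)
    of the canary and of random candidates under the trained model."""
    rank = 1 + sum(nll < canary_nll for nll in candidate_nlls)  # rank 1 = most likely
    total = len(candidate_nlls) + 1
    return math.log2(total) - math.log2(rank)

# An exposure close to log2(total) means the model ranks the canary far ahead of
# random alternatives, which suggests the canary was memorized.
```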
Recent data extraction attacks have exposed that language models can memorize some training samples verbatim. This is a vulnerability that can compromise the privacy of a model's training data. In this work, we introduce SubMix, a practical protocol for private next-token prediction designed to prevent privacy violations by language models that are fine-tuned on a private corpus after pre-training on a public corpus. We show that SubMix limits the leakage of information unique to any individual user in the private corpus via a relaxation of differentially private prediction. Importantly, SubMix admits a tight, data-dependent privacy accounting mechanism, which allows it to thwart existing data extraction attacks while maintaining the utility of the language model. SubMix is the first protocol that maintains privacy even when publicly releasing thousands of next-token predictions made by large transformer-based models such as GPT-2.
Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for language models has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply performing gradient ascent on target token sequences is effective at forgetting them with little to no degradation of general language modeling performance for larger LMs; it sometimes even substantially improves the underlying LM with just a few iterations. We also find that sequential unlearning is better than trying to unlearn all the data at once and that unlearning is highly dependent on which kind of data (domain) is forgotten. By showing comparisons with a previous data preprocessing method and a decoding method known to mitigate privacy risks for LMs, we show that unlearning can give a stronger empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori while being much more efficient and robust. We release the code and dataset needed to replicate our results at https://github.com/joeljang/knowledge-unlearning.
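The core mechanism, gradient ascent on the target sequences, amounts to negating the usual language modeling loss for a few optimization steps. A minimal sketch is below (the checkpoint, learning rate, and step count are placeholders, not the authors' settings; see their repository for the actual implementation):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

target = "example sequence we want the model to forget"  # hypothetical target sequence
ids = tok(target, return_tensors="pt").input_ids

for _ in range(10):                       # a few unlearning iterations
    loss = model(ids, labels=ids).loss    # standard LM loss on the target sequence
    (-loss).backward()                    # gradient *ascent*: push the sequence's likelihood down
    optimizer.step()
    optimizer.zero_grad()
```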
Privacy preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks in the use of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We create a large number of text samples from the fine-tuned BERT models using a custom sequential sampling strategy with two prompting strategies. We search these samples for named entities and check whether they are also present in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that the application of DP has a huge effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that has only been pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding to what extent BERT-based services are prone to training data extraction attacks.
Membership inference attacks allow an adversary to query a trained machine learning model to predict whether a particular example was contained in the model's training dataset. These attacks are currently evaluated using average-case "accuracy" metrics that fail to characterize whether an attack can confidently identify any member of the training set. We argue that attacks should instead be evaluated by their true-positive rate at low (e.g., <0.1%) false-positive rates, and find that most prior attacks perform poorly when evaluated in this way. To address this, we develop a Likelihood Ratio Attack (LiRA) that carefully combines multiple ideas from the literature. Our attack is 10x more powerful at low false-positive rates and also strictly dominates prior attacks on existing metrics.
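In the spirit of such a likelihood-ratio attack, the sketch below assumes access to "shadow" models trained with and without a given example (an assumption on our part), fits Gaussians to the example's statistic under each population, and evaluates the attack at a fixed low false-positive rate rather than by average accuracy:

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio_score(observed_stat, in_stats, out_stats):
    """Per-example likelihood ratio: compare the observed statistic (e.g., a
    confidence or loss) against Gaussians fit to shadow models trained with
    ('in') and without ('out') the example. Illustrative, not the paper's code."""
    mu_in, sd_in = np.mean(in_stats), np.std(in_stats) + 1e-8
    mu_out, sd_out = np.mean(out_stats), np.std(out_stats) + 1e-8
    return norm.logpdf(observed_stat, mu_in, sd_in) - norm.logpdf(observed_stat, mu_out, sd_out)

def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.001):
    """Evaluate at a low false-positive rate instead of average-case accuracy."""
    threshold = np.quantile(nonmember_scores, 1 - fpr)
    return float(np.mean(np.asarray(member_scores) > threshold))
```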
Machine learning models trained on private datasets have been shown to leak their private data. Although recent work has found that average data points are rarely leaked, outlier samples frequently experience memorization and privacy leakage. We demonstrate and analyze an Onion Effect of memorization: removing the "layer" of outlier points most vulnerable to a privacy attack exposes a new layer of previously safe points to the same attack. We perform several experiments to study this effect and to understand why it occurs. The existence of this effect has various consequences. For example, it suggests that proposals to defend against memorization without training with rigorous privacy guarantees are unlikely to be effective. Moreover, it suggests that privacy-enhancing technologies such as machine unlearning may actually harm the privacy of other users.
The vulnerability of machine learning models to membership inference attacks has received much attention in recent years. However, existing attacks are mostly impractical because they have high false-positive rates, where non-member samples are often erroneously predicted as members. This type of error makes the predicted membership signal unreliable, especially since most samples are non-members in real-world applications. In this work, we argue that membership inference attacks can benefit drastically from difficulty calibration, where an attack's predicted membership score is adjusted for the difficulty of correctly classifying the target sample. We show that difficulty calibration can significantly reduce the false-positive rates of a variety of existing attacks without a loss in accuracy.
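One common way to instantiate difficulty calibration (used here purely as an illustration, not necessarily the paper's exact formulation) is to offset the target model's loss on a sample by the loss a reference model assigns to the same sample, so that intrinsically easy samples no longer look like members:

```python
def calibrated_membership_score(target_loss, reference_loss):
    """Difficulty-calibrated membership score: a sample only looks like a member
    if the target model fits it unusually well relative to how hard the sample is,
    with the reference model's loss serving as the difficulty estimate."""
    return reference_loss - target_loss   # higher score => more likely a training member
```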
End-to-end (E2E) models are often accompanied by language models (LMs) via shallow fusion to improve their overall quality as well as their recognition of rare words. At the same time, several prior works have shown that LMs are prone to unintentionally memorizing rare or unique sequences in the training data. In this work, we design a framework for detecting memorization of random text sequences (which we call canaries) in the LM training data when one has only black-box (query) access to the LM-fused speech recognizer, rather than direct access to the LM itself. On a production-grade Conformer RNN-T E2E model fused with a Transformer LM, we show that memorization of canaries occurring only once can be detected in LM training data of 300M examples. Also motivated by privacy protection, we further show that such memorization is greatly reduced by LM training with per-example gradient clipping, without harming overall quality.
The wide adoption and application of Masked language models (MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities -- to what extent do MLMs leak information about their training data? Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying the potential robustness of MLMs to privacy attacks. In this work, we posit that prior attempts were inconclusive because they based their attack solely on the MLM's model score. We devise a stronger membership inference attack based on likelihood ratio hypothesis testing that involves an additional reference MLM to more accurately quantify the privacy risks of memorization in MLMs. We show that masked language models are extremely susceptible to likelihood ratio membership inference attacks: Our empirical results, on models trained on medical notes, show that our attack improves the AUC of prior membership inference attacks from 0.66 to an alarmingly high 0.90 level, with a significant improvement in the low-error region: at 1% false positive rate, our attack is 51X more powerful than prior work.
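For MLMs, the likelihood-ratio idea can be sketched with a pseudo-log-likelihood score (masking one token at a time) under the target model minus the same score under a reference MLM. The checkpoints and candidate text below are placeholders, and this is a simplified illustration rather than the paper's scoring procedure:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"                  # placeholder for target/reference checkpoints
tok = AutoTokenizer.from_pretrained(name)

def pseudo_log_likelihood(model, text):
    """Sum of log-probabilities of each token when it alone is masked out."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):        # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

target = AutoModelForMaskedLM.from_pretrained(name).eval()
reference = AutoModelForMaskedLM.from_pretrained(name).eval()
candidate = "some candidate note"           # hypothetical record being tested
membership_score = pseudo_log_likelihood(target, candidate) - pseudo_log_likelihood(reference, candidate)
```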
Language models are widely deployed to provide automatic text completion services in user products. However, recent research has revealed that language models (especially large ones) bear considerable risk of memorizing private training data, which is then vulnerable to leakage and extraction by adversaries. In this study, we test the efficacy of a range of privacy-preserving techniques to mitigate unintended memorization of sensitive user text, while varying other factors such as model size and adversarial conditions. We test both "heuristic" mitigations (those without formal privacy guarantees) and Differentially Private training, which provides provable levels of privacy at the cost of some model performance. Our experiments show that, with the exception of L2 regularization, heuristic mitigations are largely ineffective at preventing memorization in our test suite, possibly because they make overly strong assumptions about the characteristics that define "sensitive" or "private" text. In contrast, Differential Privacy reliably prevents memorization in our experiments, despite its computational and model-performance costs.
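The Differentially Private training baseline referenced here is usually some form of DP-SGD. The sketch below shows the core step, per-example gradient clipping plus Gaussian noise, in a deliberately simplified form with no privacy accounting; production DP training would use a dedicated library rather than this hand-rolled version:

```python
import torch

def dp_sgd_step(model, per_example_losses, optimizer, clip_norm=1.0, noise_multiplier=1.0):
    """One simplified DP-SGD step: clip each example's gradient to a fixed norm,
    sum, add Gaussian noise, then average (illustrative sketch, no accounting)."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for loss in per_example_losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True)   # per-example gradients
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-8), max=1.0)        # clip to clip_norm
        for s, g in zip(summed, grads):
            s += g * scale
    batch = len(per_example_losses)
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm     # Gaussian noise
        p.grad = (s + noise) / batch
    optimizer.step()
    optimizer.zero_grad()
```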
Recent large-scale natural language processing (NLP) systems use large language models (LLMs) pre-trained on massive and diverse corpora. In practice, pre-trained models are adapted to a wide array of tasks by fine-tuning on task-specific datasets. LLMs, while effective, have been shown to memorize instances of training data, potentially revealing private information processed during pre-training. The potential leakage may further propagate to the downstream tasks on which the LLM is fine-tuned. On the other hand, privacy-preserving algorithms usually involve retraining from scratch, which is prohibitively expensive for LLMs. In this work, we propose a simple, easy-to-interpret, and computationally lightweight perturbation mechanism that is applied to an already trained model at the decoding stage. Our perturbation mechanism is model-agnostic and can be used in conjunction with any LLM. Our theoretical analysis shows that the proposed mechanism is private, and experimental results demonstrate the privacy-utility trade-off.
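As a rough illustration of what a decoding-stage perturbation can look like, the sketch below perturbs the next-token distribution before sampling. The Gaussian noise on the logits is our assumption for illustration; the paper's exact mechanism and its privacy analysis may differ:

```python
import torch

def perturbed_sample(logits, temperature=1.0, noise_scale=0.5):
    """Sample the next token from a perturbed distribution at decoding time
    (illustrative: additive Gaussian noise on the logits)."""
    noisy = logits / temperature + noise_scale * torch.randn_like(logits)
    probs = torch.softmax(noisy, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```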
With the wide availability of large pre-trained language models such as GPT-2 and BERT, the recent trend has been to fine-tune a pre-trained model to achieve state-of-the-art performance on a downstream task. One natural example is the "Smart Reply" application, where a pre-trained model is tuned to provide suggested replies for a given query message. Since these models are often tuned using sensitive data such as emails or chat transcripts, it is important to understand and mitigate the risk that the model leaks its tuning data. We investigate potential information leakage vulnerabilities in a typical Smart Reply pipeline and introduce a new type of active extraction attack that exploits canonical patterns in text containing sensitive data. We show experimentally that an adversary can extract sensitive user information present in the training data. We explore potential mitigation strategies and demonstrate empirically how differential privacy can be an effective defense mechanism against such pattern extraction attacks.
As predictive models are increasingly being employed to make consequential decisions, there is a growing emphasis on developing techniques that can provide algorithmic recourse to affected individuals. While such recourses can be immensely beneficial to affected individuals, potential adversaries could also exploit these recourses to compromise privacy. In this work, we make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data. To this end, we propose a series of novel membership inference attacks which leverage algorithmic recourse. More specifically, we extend the prior literature on membership inference attacks to the recourse setting by leveraging the distances between data instances and their corresponding counterfactuals output by state-of-the-art recourse methods. Extensive experimentation with real world and synthetic datasets demonstrates significant privacy leakage through recourses. Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
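A minimal sketch of the distance-based signal described here follows; the direction of the threshold comparison is our assumption (training points are often fit more confidently and may require larger recourse), and a real attack would tune it on held-out data:

```python
import numpy as np

def recourse_membership_score(x, counterfactual):
    """Distance between an instance and the counterfactual a recourse method
    returns for it, used as a membership signal (illustrative only)."""
    return float(np.linalg.norm(np.asarray(x) - np.asarray(counterfactual)))

def predict_member(score, threshold):
    # Assumed direction: larger recourse distance => more likely a training member.
    return score > threshold
```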
Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples that appeared early in training are eventually forgotten. In this work, we connect these phenomena. We propose a technique to measure the extent to which models "forget" the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convexity can prevent forgetting from happening in the worst case, standard image and speech models do empirically forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training on extremely large datasets (for instance, those used to pre-train a model) may enjoy privacy benefits at the expense of examples seen later.
How much does a trained machine learning model leak about its training set? Membership inference attacks are used as an auditing tool to quantify the private information that a model leaks about its training set. These attacks are influenced by different uncertainties that an attacker must resolve about the training data, the training algorithm, and the underlying data distribution. Thus the attack success rates of many attacks in the literature do not precisely capture a model's information leakage about its data, because they also reflect other uncertainties that the attack algorithm has. In this paper, we explain the implicit assumptions, as well as the simplifications, made in existing work using a hypothesis-testing framework. We also derive new attack algorithms from this framework that can achieve high AUC scores while also highlighting the different factors that affect their performance. Our algorithms capture a very precise approximation of the privacy loss in models and can be used as a tool to perform an accurate and informed estimation of privacy risk in machine learning models. We provide a thorough empirical evaluation of our attack strategies on various machine learning tasks and benchmark datasets.
Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget ε, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of ε in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models.
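One standard way to compare a guaranteed privacy budget with the "effective" privacy loss measured by inference attacks is to convert an attack's true/false positive rates into an empirical lower bound on ε. The sketch below uses the pure-DP form of that bound (δ ignored for simplicity), as an illustration of the gap the paper describes:

```python
import math

def empirical_epsilon_lower_bound(tpr, fpr):
    """Lower bound on epsilon implied by a membership inference attack with the
    given true/false positive rates (pure-DP bound, delta ignored)."""
    if fpr <= 0 or tpr >= 1:
        return float("inf")
    return max(math.log(tpr / fpr), math.log((1 - fpr) / (1 - tpr)))

# Example: an attack with 5% TPR at 1% FPR only certifies epsilon >= ln(5) ~ 1.6,
# far below the large epsilon values many implementations guarantee on paper.
```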
As language models keep growing in size, it becomes critical to protect them from leaking private information. Previous work has attempted to tackle this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance, because the underlying privacy notion is overly pessimistic and provides undifferentiated protection for all tokens in the data. Given that private information in natural language is sparse (for example, the bulk of an email may carry no personally identifiable information), we propose a new privacy notion, selective differential privacy, to provide rigorous guarantees for the sensitive portion of the data while improving model utility. To realize this new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Besides language modeling, we also apply the method to a more concrete application, dialog systems. Experiments on both language modeling and dialog system building show that, compared with baselines, the proposed privacy-preserving mechanism achieves better utility while remaining safe under various privacy attacks. The data and code are released at https://github.com/wyshi/lm_privacy to facilitate future research.