Large language models have been shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been given to the fine-tuning phase, and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, only the model head, or adapters) compare in terms of memorization risk. This is an increasing concern as the "pre-train and fine-tune" paradigm proliferates. In this paper, we empirically study memorization of fine-tuning methods using membership inference and extraction attacks, and show that their susceptibility to attacks differs substantially. We observe that fine-tuning the head of the model has the highest susceptibility to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
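As a concrete illustration of the three regimes compared above, the following PyTorch sketch (not the authors' code) freezes different parameter subsets to emulate full, head-only, and adapter-only fine-tuning. The attribute names used to select head and adapter parameters are assumptions that vary across model implementations.

```python
import torch.nn as nn

def configure_trainable(model: nn.Module, mode: str) -> nn.Module:
    """Freeze/unfreeze parameters to emulate the three fine-tuning regimes
    compared above. The substrings 'classifier' and 'adapter' are illustrative
    assumptions; real model implementations may name these modules differently."""
    for name, param in model.named_parameters():
        if mode == "full":
            param.requires_grad = True
        elif mode == "head":
            # only the task head (e.g. a classification layer) is updated
            param.requires_grad = "classifier" in name
        elif mode == "adapter":
            # only small adapter modules inserted into the frozen backbone are updated
            param.requires_grad = "adapter" in name
        else:
            raise ValueError(f"unknown mode: {mode}")
    return model
```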
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve state-of-the-art privacy-versus-utility tradeoffs on many standard NLP tasks. We propose a meta-framework for this problem, inspired by the success of highly parameter-efficient methods for fine-tuning. Our experiments show that differentially private adaptations of these approaches outperform previous private algorithms in three important dimensions: utility, privacy, and the computational and memory cost of private training. On many commonly studied datasets, the utility of private models approaches that of non-private models. For example, on the MNLI dataset we achieve an accuracy of 87.8% using RoBERTa-Large and 83.5% using RoBERTa-Base with a privacy budget of ε = 6.7. In comparison, absent privacy constraints, RoBERTa-Large achieves an accuracy of 90.2%. Our findings are similar for natural language generation tasks. On DART, private fine-tuning of GPT-2-Small, GPT-2-Medium, GPT-2-Large, and GPT-2-XL achieves BLEU scores of 38.5, 42.0, 43.1, and 43.8, respectively (ε = 6.8, δ = 1e-5), whereas the non-private baseline is 48.1. All our experiments suggest that larger models are better suited for private fine-tuning: while they are well known to achieve superior accuracy non-privately, we find that they also better maintain their accuracy when privacy is introduced.
The wide adoption and application of Masked language models~(MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities -- to what extent do MLMs leak information about their training data? Prior attempts at measuring leakage of MLMs via membership inference attacks have been inconclusive, implying the potential robustness of MLMs to privacy attacks. In this work, we posit that prior attempts were inconclusive because they based their attack solely on the MLM's model score. We devise a stronger membership inference attack based on likelihood ratio hypothesis testing that involves an additional reference MLM to more accurately quantify the privacy risks of memorization in MLMs. We show that masked language models are extremely susceptible to likelihood ratio membership inference attacks: Our empirical results, on models trained on medical notes, show that our attack improves the AUC of prior membership inference attacks from 0.66 to an alarmingly high 0.90 level, with a significant improvement in the low-error region: at 1% false positive rate, our attack is 51X more powerful than prior work.
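As a rough sketch of the likelihood-ratio test described above (not the authors' implementation), membership scores can be computed from per-example losses under the target MLM and a reference MLM; the thresholding step and the use of plain per-example losses are illustrative assumptions.

```python
import numpy as np

def likelihood_ratio_scores(loss_target: np.ndarray, loss_reference: np.ndarray) -> np.ndarray:
    """Sketch of the likelihood-ratio statistic: the gap between the (pseudo)
    log-likelihood a sample receives under the target MLM and under a reference
    MLM trained on similar public data. Inputs are assumed to be per-example
    negative log-likelihoods; higher scores suggest training-set membership."""
    return loss_reference - loss_target  # members tend to have unusually low target loss

def predict_membership(scores: np.ndarray, threshold: float) -> np.ndarray:
    """Classify samples as members when the ratio exceeds a threshold calibrated
    to a desired false-positive rate on known non-members."""
    return scores > threshold
```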
With the wide availability of large pre-trained language models such as GPT-2 and BERT, a recent trend is to fine-tune a pre-trained model to achieve state-of-the-art performance on a downstream task. One natural example is the "Smart Reply" application, where a pre-trained model is tuned to provide suggested replies for a given query message. Since these models are often tuned using sensitive data such as emails or chat transcripts, it is important to understand and mitigate the risk that the model leaks its tuning data. We investigate potential information leakage vulnerabilities in a typical Smart Reply pipeline and introduce a new type of active extraction attack that exploits canonical patterns in text containing sensitive data. We show experimentally that an adversary can extract sensitive user information present in the training data. We explore potential mitigation strategies and demonstrate empirically how differential privacy can be an effective defense mechanism against such pattern extraction attacks.
It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data. We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
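The generate-then-rank pipeline such an attack relies on can be sketched roughly as follows. This is a simplified illustration using Hugging Face's GPT-2 rather than the paper's exact attack; the sampling parameters and the perplexity-to-zlib ranking heuristic are assumptions drawn from the general approach.

```python
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of a candidate under the target model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

def sample_candidates(n: int = 8, length: int = 64) -> list[str]:
    """Sample unconditioned continuations from the model as extraction candidates."""
    prompt = tokenizer(tokenizer.bos_token, return_tensors="pt")["input_ids"]
    outs = model.generate(prompt, do_sample=True, top_k=40, max_length=length,
                          num_return_sequences=n,
                          pad_token_id=tokenizer.eos_token_id)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outs]

# Rank candidates by model perplexity relative to zlib-compressed length:
# memorized sequences tend to have low perplexity without being trivially low-entropy.
candidates = sample_candidates()
ranked = sorted(candidates, key=lambda t: perplexity(t) / len(zlib.compress(t.encode())))
```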
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization. In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.
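A minimal sketch of the kind of canary-exposure measurement such a testing methodology uses (not the authors' exact code): a randomly generated canary is inserted into the training data, and its rank among alternative candidates under the trained model yields an exposure score.

```python
import math

def exposure(canary_nll: float, candidate_nlls: list[float]) -> float:
    """Sketch of an exposure-style metric: compare the model's negative
    log-likelihood of the inserted canary against all alternative candidates
    drawn from the same randomness space. Exposure is the log of the
    candidate-space size minus the log of the canary's rank; high exposure
    indicates the canary was memorized."""
    rank = 1 + sum(1 for nll in candidate_nlls if nll < canary_nll)
    space_size = len(candidate_nlls) + 1  # alternatives plus the canary itself
    return math.log2(space_size) - math.log2(rank)
```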
We introduce BitFit, a sparse fine-tuning method in which only the bias terms of the model (or a subset of them) are modified. We show that with small to medium training data, applying BitFit to a pre-trained BERT model is competitive with (and sometimes better than) fine-tuning the entire model. For larger data, the method is competitive with other sparse fine-tuning methods. Besides their practical utility, these findings are relevant to the question of understanding the commonly used fine-tuning process: they support the hypothesis that fine-tuning is mainly about exposing knowledge induced by language-model training, rather than learning new task-specific linguistic knowledge.
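A minimal sketch of bias-only fine-tuning in PyTorch, assuming bias parameters can be identified by the standard "bias" name suffix; this illustrates the idea rather than reproducing the official BitFit code.

```python
import torch.nn as nn

def apply_bitfit(model: nn.Module) -> nn.Module:
    """Freeze every parameter except the bias terms (identified here simply by
    the 'bias' suffix in the parameter name, which matches standard BERT
    implementations)."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return model

# Usage sketch: pass only the trainable (bias) parameters to the optimizer, e.g.
# optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
```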
As language models see ever wider use, it becomes crucial to protect them from leaking private information. Previous work has attempted to address this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance, because the underlying privacy notion is overly pessimistic and provides undifferentiated protection for all tokens in the data. Given that private information in natural language is sparse (for example, the bulk of an email may carry no personally identifiable information), we propose a new privacy notion, selective differential privacy, which provides rigorous guarantees for the sensitive portion of the data while improving model utility. To realize this new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Besides language modeling, we also apply the method to a more concrete application: dialog systems. Experiments on both language modeling and dialog system building show that, compared to the baselines, the proposed privacy-preserving mechanism achieves better utility while remaining safe under various privacy attacks. Data and code are released at https://github.com/wyshi/lm_privacy to facilitate future research.
Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence's count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated ~1000 times more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
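A simplified sketch of deduplication for context; the methods referenced above operate at the substring/approximate level (e.g., via suffix arrays), whereas this illustration only drops exact duplicates after whitespace normalization.

```python
import hashlib

def deduplicate_exact(sequences: list[str]) -> list[str]:
    """Drop exact duplicates from a corpus by hashing whitespace-normalized text.
    This is a toy stand-in for the substring-level deduplication pipelines
    discussed above."""
    seen: set[str] = set()
    unique: list[str] = []
    for seq in sequences:
        digest = hashlib.sha256(" ".join(seq.split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(seq)
    return unique
```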
Recent data extraction attacks have exposed that language models can memorize some training samples verbatim. This is a vulnerability that can compromise the privacy of the model's training data. In this work, we introduce SubMix, a practical protocol for private next-token prediction designed to prevent privacy violations by language models that are fine-tuned on a private corpus after pre-training on a public corpus. We show that SubMix limits the leakage of information that is unique to any individual user in the private corpus via a relaxation of differentially private prediction. Importantly, SubMix admits a tight, data-dependent privacy accounting mechanism, which allows it to thwart existing data extraction attacks while maintaining the utility of the language model. SubMix is the first protocol that maintains privacy even when publicly releasing thousands of next-token predictions made by large Transformer-based models such as GPT-2.
Large language models have been shown to memorize private information, such as social security numbers, in their training data. Given the sheer scale of the training corpus, it is challenging to screen and filter such private data, either manually or automatically. In this paper, we propose Confidentially Redacted Training (CRT), a method for training language generation models while protecting confidential segments. We borrow ideas from differential privacy (which solves a related but distinct problem) and show that our method is able to prevent unintended memorization by randomizing parts of the training process. Moreover, we show that redaction with an approximately correct screening policy amplifies the confidentiality guarantee. We implement the method for both LSTM and GPT language models. Our experimental results show that models trained with CRT obtain almost the same perplexity while preserving strong confidentiality.
Activation functions can have a significant impact on reducing the topological complexity of input data and thereby improve model performance. Selecting a suitable activation function is an essential step in neural model design. However, the choice of activation function is seldom discussed or explored in Transformer-based language models. Their activation functions are chosen beforehand and then kept fixed from pre-training through fine-tuning. As a result, the inductive biases they impose on the model cannot be adjusted during this long life cycle. Moreover, subsequently developed models (e.g., RoBERTa, BART, and GPT-3) often follow prior work (e.g., BERT) in using the same activation function without justification. In this paper, we investigate the effectiveness of using rational activation functions (RAFs) in Transformer architectures. In contrast to conventional, predefined activation functions, RAFs can adaptively learn an optimal activation function from the input data. Our experiments show that the RAF-based Transformer (RAFT) achieves a lower validation perplexity than a vanilla BERT with the GELU function. We further evaluate RAFT on downstream tasks in low- and full-data settings. Our results show that RAFT outperforms its counterpart model on most tasks and settings. For example, in the low-data regime (with 100 training examples), RAFT outperforms by 5.71 points on average on the GLUE benchmark, and by 2.05 points on SQuAD in the full-data setting. Analysis of the shapes of the learned RAFs further reveals that they vary substantially across the layers of the pre-trained model and look mostly different from conventional activation functions. RAFT opens a new research direction for analyzing and interpreting pre-trained models based on their learned activation functions.
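A minimal sketch of a learnable rational activation in the spirit of the RAFs described above; the polynomial degrees, initialization, and the absolute-value trick for keeping the denominator positive are assumptions, not necessarily the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational activation f(x) = P(x) / Q(x), with P a degree-m
    polynomial and Q kept strictly positive via an absolute value so the
    function has no poles. Degrees (m=5, n=4) and random initialization are
    illustrative choices."""

    def __init__(self, m: int = 5, n: int = 4):
        super().__init__()
        self.numerator = nn.Parameter(torch.randn(m + 1) * 0.1)
        self.denominator = nn.Parameter(torch.randn(n) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = sum(a * x**i for i, a in enumerate(self.numerator))
        q = 1.0 + torch.abs(sum(b * x**(j + 1) for j, b in enumerate(self.denominator)))
        return p / q
```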
Privacy preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks in the use of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We create a large number of text samples from the fine-tuned BERT models using a custom sequential sampling strategy with two prompting strategies. We search these samples for named entities and check whether they are also present in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that the application of DP has a huge effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that is only pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding to what extent BERT-based services are prone to training data extraction attacks.
Differentially Private (DP) learning has seen limited success in building large deep-learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated by (1) using large pre-trained models; (2) using hyperparameters suited to DP optimization; and (3) using fine-tuning objectives aligned with the pre-training procedure. With these factors set right, we obtain private NLP models that outperform state-of-the-art private training approaches and strong non-private baselines, by directly fine-tuning pre-trained models with DP optimization on moderately sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory-saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any layer in the model. The technique makes the memory cost of privately training Transformers almost the same as that of non-private training, at a modest run-time overhead. Contrary to the conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), our empirical results show that private learning with pre-trained models tends not to suffer from dimension-dependent performance degradation.
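For reference, a naive sketch of the per-example clipping step in DP-SGD; this version explicitly instantiates each example's gradient, which is exactly the memory cost the paper's technique avoids. It is an illustration under assumed shapes and loss conventions, not the paper's implementation.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_inputs, batch_targets,
                clip_norm: float = 1.0, noise_multiplier: float = 1.0, lr: float = 1e-3):
    """Naive DP-SGD step: compute and clip each example's gradient separately,
    sum the clipped gradients, add Gaussian noise scaled to the clipping bound,
    then apply an averaged update. Assumes batch_inputs/batch_targets are lists
    of single-example tensors."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_inputs, batch_targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-6), max=1.0)  # per-example clipping
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    batch_size = len(batch_inputs)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(p) * noise_multiplier * clip_norm
            p.add_(-(lr / batch_size) * (s + noise))
```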
Language models are widely deployed to provide automatic text completion services in user products. However, recent research has revealed that language models (especially large ones) bear considerable risk of memorizing private training data, which is then vulnerable to leakage and extraction by adversaries. In this study, we test the efficacy of a range of privacy-preserving techniques to mitigate unintended memorization of sensitive user text, while varying other factors such as model size and adversarial conditions. We test both "heuristic" mitigations (those without formal privacy guarantees) and Differentially Private training, which provides provable levels of privacy at the cost of some model performance. Our experiments show that, with the exception of L2 regularization, heuristic mitigations are largely ineffective in preventing memorization in our test suite, possibly because they make overly strong assumptions about the characteristics that define "sensitive" or "private" text. In contrast, Differential Privacy reliably prevents memorization in our experiments, despite its computational and model-performance costs.
Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate adapters' effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.
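A minimal sketch of the bottleneck adapter module described above: a down-projection, a nonlinearity, an up-projection, and a residual connection. The bottleneck size and the choice of GELU are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted after a Transformer sub-layer; only these few
    parameters are trained per task while the backbone stays frozen. Hidden and
    bottleneck sizes are illustrative."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # residual connection keeps the module near-identity at initialization
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```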
Backdoor attacks represent one of the major threats to machine learning models. Various efforts have been made to mitigate backdoors. However, existing defenses have become increasingly complex and often require high computational resources or may also jeopardize models' utility. In this work, we show that fine-tuning, one of the most common and easy-to-adopt machine learning training operations, can effectively remove backdoors from machine learning models while maintaining high model utility. Extensive experiments over three machine learning paradigms show that fine-tuning and our newly proposed super-fine-tuning achieve strong defense performance. Furthermore, we coin a new term, namely backdoor sequela, to measure the changes in model vulnerabilities to other attacks before and after the backdoor has been removed. Empirical evaluation shows that, compared to other defense methods, super-fine-tuning leaves limited backdoor sequela. We hope our results can help machine learning model owners better protect their models from backdoor threats. Also, it calls for the design of more advanced attacks in order to comprehensively assess machine learning models' backdoor vulnerabilities.
Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples that appeared early in training are ultimately forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure to what extent models "forget" the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convexity can prevent forgetting from happening in the worst case, standard image and speech models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that when training with extremely large datasets, examples seen early (for instance, those used to pre-train a model) may enjoy privacy benefits at the expense of examples seen later.
During pre-training, the Pre-LayerNorm Transformer suffers from a gradient magnitude mismatch: gradients at early layers are much larger than at later layers. These issues can be alleviated by our proposed NormFormer architecture, which adds three normalization operations to each layer: a LayerNorm after self-attention, head-wise scaling of the self-attention outputs, and a LayerNorm after the first fully connected layer. The extra operations incur negligible compute cost (a +0.4% parameter increase) but improve pre-training perplexity and downstream task performance for both causal and masked language models ranging from 125 million to 2.7 billion parameters. For example, adding NormFormer on top of our strongest 1.3B-parameter baseline reaches equal perplexity 24% faster, or converges 0.27 perplexity better within the same compute budget. This model reaches GPT3-Large (1.3B) zero-shot performance 60% faster. For masked language modeling, NormFormer improves fine-tuned GLUE performance by 1.9% on average. Code to train NormFormer models is available in fairseq: https://github.com/pytorch/fairseq/tree/main/examples/normformer
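A partial sketch of one of the three NormFormer additions, the LayerNorm after the first fully connected layer, applied to a feed-forward sub-layer. Dimensions and the placement of the norm after the activation are implementation assumptions; the post-attention LayerNorm and head-wise scaling sit in the attention sub-layer and are not shown.

```python
import torch
import torch.nn as nn

class NormFormerFFN(nn.Module):
    """Feed-forward sub-layer with an extra LayerNorm inserted after the first
    fully connected layer (here applied after the activation). Dimensions are
    illustrative defaults."""

    def __init__(self, d_model: int = 768, d_ff: int = 3072):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.mid_norm = nn.LayerNorm(d_ff)  # the extra normalization
        self.fc2 = nn.Linear(d_ff, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(self.mid_norm(self.act(self.fc1(x))))
```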
With increasing privacy concerns on data, recent studies have made significant progress using federated learning (FL) on privacy-sensitive natural language processing (NLP) tasks. Much literature suggests fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs bring the curse of prohibitive communication overhead and local model adaptation costs for the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLMs tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. Overall communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters while maintaining acceptable performance in various FL settings. To facilitate the research of PETuning in FL, we also develop a federated tuning framework FedPETuning, which allows practitioners to exploit different PETuning methods under the FL training paradigm conveniently. The source code is available at \url{https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning}.
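A minimal sketch of the communication pattern this setup implies: each client tunes and uploads only its lightweight (e.g., adapter) parameters, and the server averages just those tensors. The function and key structure are illustrative assumptions, not the FedPETuning API.

```python
import torch

def federated_average(client_updates: list[dict[str, torch.Tensor]]) -> dict[str, torch.Tensor]:
    """Average only the lightweight, parameter-efficient weights uploaded by
    each client (e.g., adapter or prefix parameters keyed by name), leaving the
    frozen backbone untouched."""
    averaged: dict[str, torch.Tensor] = {}
    for name in client_updates[0]:
        averaged[name] = torch.stack([update[name] for update in client_updates]).mean(dim=0)
    return averaged
```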