As language models grow ever larger, protecting them from leaking private information becomes critical. Previous work has attempted to address this challenge by training RNN-based language models with differential privacy guarantees. However, applying classical differential privacy to language models leads to poor model performance, because the underlying privacy notion is over-pessimistic and provides undifferentiated protection for all tokens in the data. Given that private information in natural language is sparse (for example, the bulk of an email may carry no personally identifiable information), we propose a new privacy notion, selective differential privacy, which provides rigorous guarantees for the sensitive portion of the data while improving model utility. To realize this new notion, we develop a corresponding privacy mechanism, Selective-DPSGD, for RNN-based language models. Beyond language modeling, we also apply the method to a more concrete application: dialog systems. Experiments on both language modeling and dialog system building show that, compared with the baselines, the proposed privacy-preserving mechanism achieves better utility while remaining safe under various privacy attacks. The data and code are released at https://github.com/wyshi/lm_privacy to facilitate future research.
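The selective notion above amounts to spending the privacy budget only where the data is sensitive. The sketch below is a minimal, example-level illustration of that idea: gradients from examples flagged as sensitive are clipped and noised DP-SGD style, while the rest are used as-is. The paper's Selective-DPSGD operates at the token level inside an RNN, so this is a simplified assumption-laden sketch, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_dp_sgd_step(w, X, y, sensitive_mask, lr=0.1,
                          clip=1.0, noise_multiplier=1.0):
    """One toy update on a linear regression model: per-example gradients
    from sensitive examples are clipped and noised; the others are left
    untouched before averaging (example-level simplification)."""
    grads = []
    for xi, yi, sens in zip(X, y, sensitive_mask):
        g = 2 * (w @ xi - yi) * xi                        # per-example gradient
        if sens:
            g = g / max(1.0, np.linalg.norm(g) / clip)    # clip
            g = g + rng.normal(0, noise_multiplier * clip, size=g.shape)
        grads.append(g)
    return w - lr * np.mean(grads, axis=0)

# toy usage: 8 examples, 3 of them flagged as sensitive
X = rng.normal(size=(8, 3)); y = rng.normal(size=8)
mask = np.array([True, False, False, True, False, False, False, True])
w = selective_dp_sgd_step(np.zeros(3), X, y, mask)
```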
Large language models have been shown to memorize private information, such as social security numbers, in the training data. Given the sheer scale of training corpora, it is challenging to screen and filter such private data, whether manually or automatically. In this paper, we propose Confidentially Redacted Training (CRT), a method for training language generation models while protecting confidential segments. We borrow ideas from differential privacy (which solves a related but distinct problem) and show that our method provably prevents unintended memorization by randomizing parts of the training process. Moreover, we show that redaction with an approximately correct screening policy amplifies the confidentiality guarantee. We implement the method for both LSTM and GPT language models. Our experimental results show that models trained with CRT achieve almost the same perplexity while preserving strong confidentiality.
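The screening policy mentioned above can be as simple as pattern matching. Below is a minimal, hypothetical sketch of such a screener; the regular expression and placeholder token are illustrative only, and CRT additionally randomizes training on whatever the screener misses, which this snippet does not show.

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # toy, deliberately imperfect screening policy

def redact(text, placeholder="<REDACTED>"):
    """Replace segments matched by the screening policy before training."""
    return SSN.sub(placeholder, text)

print(redact("Call me back, my SSN is 123-45-6789."))
# -> "Call me back, my SSN is <REDACTED>."
```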
In this paper, we introduce a novel concept of user-entity differential privacy (UeDP) to provide formal privacy protection simultaneously to both sensitive entities in textual data and data owners in learning natural language models (NLMs). To preserve UeDP, we developed a novel algorithm, called UeDP-Alg, optimizing the trade-off between privacy loss and model utility with a tight sensitivity bound derived from seamlessly combining user and sensitive entity sampling processes. An extensive theoretical analysis and evaluation show that our UeDP-Alg outperforms baseline approaches in model utility under the same privacy budget consumption on several NLM tasks, using benchmark datasets.
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization. In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.
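The core of this testing methodology is the exposure metric: plant a synthetic canary in the training data and measure how highly the trained model ranks it against random alternatives. The snippet below is a minimal sketch of that computation, assuming negative log-likelihood scores for the canary and for a candidate set are already available from the model; the scores here are made up.

```python
import math

def exposure(canary_nll, candidate_nlls):
    """Exposure = log2(candidate-space size) - log2(rank of the canary)
    when all candidates are sorted by model negative log-likelihood
    (rank 1 = most likely). High exposure suggests memorization."""
    ranked = sorted(candidate_nlls + [canary_nll])
    rank = ranked.index(canary_nll) + 1
    return math.log2(len(ranked)) - math.log2(rank)

# toy usage with illustrative NLL scores
print(exposure(2.1, [5.0, 4.2, 7.3, 6.6, 5.9]))  # high value -> likely memorized
```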
Recent data-extraction attacks have exposed that language models can memorize some training samples verbatim. This is a vulnerability that can compromise the privacy of the model's training data. In this work, we introduce SubMix, a practical protocol for private next-token prediction designed to prevent privacy violations by language models that were fine-tuned on a private corpus after pre-training on a public corpus. We show that SubMix limits the leakage of information that is unique to any individual user in the private corpus via a relaxation of differentially private prediction. Importantly, SubMix admits a tight, data-dependent privacy accounting mechanism, which allows it to thwart existing data-extraction attacks while maintaining the utility of the language model. SubMix is the first protocol that maintains privacy even when publicly releasing thousands of next-token predictions made by large transformer-based models such as GPT-2.
Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget, $\epsilon$, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of $\epsilon$ in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models.
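A common way to measure the effective privacy loss discussed above is a loss-threshold membership inference attack, whose advantage (true-positive rate minus false-positive rate) can be compared against the bound implied by the privacy budget. A minimal sketch, with synthetic per-example losses standing in for a real model:

```python
import numpy as np

def membership_inference_advantage(train_losses, test_losses, threshold):
    """Loss-threshold attack: predict 'member' when the per-example loss is
    below the threshold; return the attacker's advantage (TPR - FPR)."""
    tpr = np.mean(np.asarray(train_losses) < threshold)
    fpr = np.mean(np.asarray(test_losses) < threshold)
    return tpr - fpr

# toy usage: members tend to have slightly lower losses than non-members
rng = np.random.default_rng(1)
print(membership_inference_advantage(rng.gamma(2, 0.3, 1000),
                                     rng.gamma(2, 0.5, 1000),
                                     threshold=0.6))
```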
Recent large-scale natural language processing (NLP) systems use a large language model (LLM) pre-trained on massive and diverse corpora. In practice, the pre-trained model is adapted to a wide array of tasks by fine-tuning on task-specific datasets. While effective, LLMs have been shown to memorize instances of the training data, thereby potentially revealing private information processed during pre-training. The potential leakage may further propagate to the downstream tasks for which the LLM is fine-tuned. On the other hand, privacy-preserving algorithms usually involve retraining from scratch, which is prohibitively expensive for LLMs. In this work, we propose a simple, easy-to-interpret, and computationally lightweight perturbation mechanism that is applied to an already trained model at the decoding stage. Our perturbation mechanism is model-agnostic and can be used in conjunction with any LLM. We provide a theoretical analysis showing that the proposed mechanism is differentially private, and experimental results demonstrating the privacy-utility trade-off.
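One simple way to realize a model-agnostic decoding-stage perturbation is to interpolate the model's next-token distribution with the uniform distribution over the vocabulary. The sketch below assumes that particular mechanism for illustration and does not claim to be the paper's exact calibration.

```python
import numpy as np

def perturb_next_token_dist(p, lam):
    """Mix the model's next-token distribution p with the uniform distribution.
    lam=1 returns the original distribution; lam=0 returns the uniform one
    (maximal privacy, no utility)."""
    p = np.asarray(p, dtype=float)
    return lam * p + (1.0 - lam) / p.size

# toy vocabulary of three tokens
print(perturb_next_token_dist([0.7, 0.2, 0.1], lam=0.8))
```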
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on the domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee privacy of pre-training data and, at the same time, achieve better downstream performance on legal tasks without the need for additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that under specific training configurations we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
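The key operation behind user-level guarantees in this setting is clipping each user's whole model update and noising only the aggregate. The following is a minimal sketch of that aggregation step, assuming updates are plain NumPy vectors; the federated averaging loop, learning-rate handling, and moments-accountant bookkeeping are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_federated_average(user_updates, clip=1.0, noise_multiplier=1.0):
    """User-level DP aggregation: clip each user's update to a norm bound,
    average the clipped updates, and add Gaussian noise calibrated to
    clip / num_users to the average."""
    clipped = [u / max(1.0, np.linalg.norm(u) / clip) for u in user_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip / len(user_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# toy usage: 100 users contributing 10-dimensional model updates
updates = [rng.normal(size=10) for _ in range(100)]
print(dp_federated_average(updates)[:3])
```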
The recent success of deep neural networks (DNNs) hinges on the availability of large-scale datasets; however, training on such datasets often poses privacy risks for sensitive training information. In this paper, we aim to explore the power of generative models and gradient sparsity, and propose DataLens, a scalable privacy-preserving generative model. Compared with the standard PATE privacy-preserving framework, which lets teachers vote on one-dimensional predictions, voting on high-dimensional gradient vectors is challenging in terms of privacy preservation. Since dimension-reduction techniques are required, we must navigate a delicate trade-off between (1) the improvement in privacy preservation and (2) the slowdown of SGD convergence. To tackle this, we leverage communication-efficient learning and propose a novel noise compression and aggregation approach, TopAgg, which combines top-k compression for dimension reduction with a corresponding noise injection mechanism. We theoretically prove that the DataLens framework guarantees differential privacy for its generated data and provide an analysis of its convergence. To demonstrate the practical use of DataLens, we conduct extensive experiments on diverse datasets, including MNIST, Fashion-MNIST, and the high-dimensional CelebA, and show that DataLens significantly outperforms other baseline DP generative models. In addition, we adapt the proposed TopAgg approach, one of the key building blocks of DataLens, to DP SGD training and show that it achieves higher utility than the state-of-the-art DP SGD approach in most cases. Our code is publicly available at https://github.com/ai-secure/datalens.
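The aggregation idea can be pictured compactly: each teacher's gradient is sparsified to its top-k coordinates and sign-quantized, the votes are summed, and noise is added before release. The sketch below is a simplified rendering of that pipeline; the actual TopAgg mechanism calibrates the quantization and noise to a formal DP guarantee, which this toy code does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_sign(g, k):
    """Keep the k largest-magnitude coordinates of g as +/-1, zero the rest."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = np.sign(g[idx])
    return out

def noisy_topk_aggregate(teacher_grads, k, sigma):
    """Sum the teachers' top-k sign votes and add Gaussian noise to the sum."""
    votes = sum(topk_sign(g, k) for g in teacher_grads)
    return votes + rng.normal(0.0, sigma, size=votes.shape)

# toy usage: 20 teachers, 50-dimensional gradients, keep 5 coordinates each
grads = [rng.normal(size=50) for _ in range(20)]
print(noisy_topk_aggregate(grads, k=5, sigma=2.0)[:8])
```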
It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data. We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
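A central ingredient of such extraction attacks is a membership signal for ranking generated samples. The snippet below sketches one signal described in this line of work, comparing the model's log-perplexity to the sample's zlib-compressed length; the strings and perplexity values are made up, and a real attack tunes its thresholds empirically.

```python
import zlib

def extraction_score(sample, model_logppl):
    """Ratio of the model's log-perplexity to the sample's zlib-compressed
    length.  Samples the model finds far more likely than their generic
    compressibility suggests (lower score) are candidate memorizations."""
    zlib_bytes = len(zlib.compress(sample.encode("utf-8")))
    return model_logppl / zlib_bytes

# toy usage: rank candidate generations by the score (lower = more suspicious)
candidates = {"the quick brown fox jumps over the lazy dog": 4.1,
              "Jane Doe, card 4111 1111 1111 1111, exp 01/30": 1.2}
for text, logppl in sorted(candidates.items(),
                           key=lambda kv: extraction_score(kv[0], kv[1])):
    print(round(extraction_score(text, logppl), 4), text)
```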
Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. This applies isotropic Gaussian noise to gradients during training, which can perturb these gradients in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable. In this paper we apply \textit{directional privacy}, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of \textit{angular distance} so that gradient direction is broadly preserved. We show that this provides $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism; and that experimentally, on key datasets, the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off.
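A minimal way to see the directional mechanism in code: keep each gradient's magnitude but resample its direction from a von Mises-Fisher distribution centred on the original direction. The sketch assumes SciPy >= 1.11 for scipy.stats.vonmises_fisher and fixes an arbitrary concentration kappa; the paper's calibrated, privacy-accounted mechanism is more involved.

```python
import numpy as np
from scipy.stats import vonmises_fisher   # requires SciPy >= 1.11

rng = np.random.default_rng(0)

def vmf_perturb_gradient(g, kappa=50.0):
    """Directional perturbation: preserve the gradient's norm, but draw a new
    direction from a von Mises-Fisher distribution centred on the original
    direction.  Larger kappa = tighter concentration = less perturbation."""
    norm = np.linalg.norm(g)
    mu = g / norm
    new_dir = np.atleast_2d(vonmises_fisher(mu, kappa).rvs(1, random_state=rng))[0]
    return norm * new_dir

# toy usage on a 5-dimensional gradient
print(vmf_perturb_gradient(rng.normal(size=5)))
```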
Differentially private (DP) learning has seen limited success for building large deep-learning models of text, and straightforward attempts to apply differentially private stochastic gradient descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated by (1) using large pretrained models; (2) choosing hyperparameters suited to DP optimization; and (3) using fine-tuning objectives aligned with the pretraining procedure. With these factors set properly, we obtain private NLP models that outperform state-of-the-art private training approaches and strong non-private baselines, by directly fine-tuning pretrained models with DP optimization on moderately sized corpora. To address the computational challenge of running DP-SGD with large transformers, we propose a memory-saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any layer in the model. The technique makes the memory cost of privately training transformers almost the same as that of non-private training, at a modest run-time overhead. Contrary to the conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), our empirical results show that private learning with pretrained models tends not to suffer from dimension-dependent performance degradation.
Differential privacy (DP) has emerged as a rigorous formalism for reasoning about quantifiable privacy leakage. In machine learning (ML), DP has been employed to limit the inference and disclosure of training examples. Existing work leverages DP across the ML pipeline, albeit in isolation, often focusing on mechanisms such as gradient perturbation. In this paper, we present DP-Util, a holistic utility analysis framework for DP across the ML pipeline, covering input perturbation, objective perturbation, gradient perturbation, output perturbation, and prediction perturbation. Given an ML task on privacy-sensitive data, DP-Util enables ML privacy practitioners to perform a holistic comparative analysis of the impact of DP at these five perturbation points, measured in terms of model utility loss, privacy leakage, and the number of truly revealed training samples. We evaluate DP-Util on vision, medical, and financial datasets, using two representative learning algorithms (logistic regression and a deep neural network), with membership inference as the case-study attack. One highlight of our results is that prediction perturbation consistently achieves the lowest utility loss across all models on all datasets. For logistic regression models, objective perturbation results in the lowest privacy leakage compared with the other perturbation techniques; for deep neural networks, gradient perturbation yields the lowest privacy leakage. Moreover, our results on truly revealed records suggest that as privacy leakage increases, a differentially private model reveals a larger number of member samples. Overall, our findings suggest that, to make an informed decision about which perturbation mechanism to use, ML privacy practitioners need to examine the dynamics between optimization techniques (convex vs. non-convex), perturbation mechanisms, the number of classes, and the privacy budget.
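Of the five perturbation points, prediction perturbation is the simplest to picture: noise is added to the model's output only at inference time. The sketch below shows one plausible instantiation using Laplace noise on the released probability vector; it illustrates the perturbation point, not necessarily the exact mechanism evaluated in DP-Util.

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_perturbation(probs, epsilon, sensitivity=1.0):
    """Add Laplace noise (scale = sensitivity / epsilon) to the model's output
    probability vector before releasing it, then clip and re-normalize."""
    noisy = np.asarray(probs, dtype=float) + rng.laplace(0.0, sensitivity / epsilon, len(probs))
    noisy = np.clip(noisy, 1e-9, None)
    return noisy / noisy.sum()

# toy usage on a 3-class prediction
print(prediction_perturbation([0.8, 0.15, 0.05], epsilon=2.0))
```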
In many applications, multiple parties have private data regarding the same set of users but on disjoint sets of attributes, and a server wants to leverage the data to train a model. To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques, where the data parties share only information for training the model instead of the private data. However, it is challenging to ensure that the shared information maintains privacy while still enabling accurate models to be learned. To the best of our knowledge, the algorithm proposed in this paper is the first practical solution for differentially private vertical federated k-means clustering, where the server can obtain a set of global centers with a provable differential privacy guarantee. Our algorithm assumes an untrusted central server that aggregates differentially private local centers and membership encodings from the local data parties. Based on the received information, it builds a weighted grid as a synopsis of the global dataset. The final centers are produced by running any k-means algorithm on the weighted grid. Our approach for grid-weight estimation uses a novel, lightweight, and differentially private set-intersection cardinality estimation algorithm based on the Flajolet-Martin sketch. To improve the estimation accuracy in settings with more than two data parties, we further propose a refined version of the weight-estimation algorithm and a parameter-tuning strategy so that the final k-means utility approaches that of the central private setting. We provide a theoretical utility analysis and experimental evaluation of the cluster centers computed by our algorithm, and show that our approach performs better both theoretically and empirically than two baselines based on existing techniques.
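The grid-weight estimation builds on Flajolet-Martin (FM) sketches for cardinality estimation. The snippet below implements the classic, non-private FM sketch to show the underlying primitive; the paper's version additionally randomizes the sketch to obtain differential privacy and combines sketches across parties, which is not reproduced here.

```python
import hashlib

def fm_sketch(items, num_hashes=32):
    """Flajolet-Martin sketch: for each of num_hashes hash functions, record
    the maximum number of trailing zero bits observed over all items."""
    maxima = [0] * num_hashes
    for x in items:
        for i in range(num_hashes):
            h = int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16)
            tz = (h & -h).bit_length() - 1 if h else 256   # trailing zeros of h
            maxima[i] = max(maxima[i], tz)
    return maxima

def fm_estimate(maxima, phi=0.77351):
    """Cardinality estimate: 2^(mean of the max trailing-zero counts) / phi."""
    return 2 ** (sum(maxima) / len(maxima)) / phi

# toy usage: a rough estimate of the true cardinality (1000)
users = {f"user{i}" for i in range(1000)}
print(round(fm_estimate(fm_sketch(users))))
```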
Gradient leakage attacks are considered among the most malicious privacy threats in deep learning: an attacker covertly spies on gradient updates during iterative training without affecting model training quality, yet gradually reconstructs sensitive training data from the leaked gradients with a high attack success rate. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with a differential privacy guarantee, we show that differentially private algorithms with fixed privacy parameters are vulnerable to gradient leakage attacks. This paper investigates alternative approaches to gradient-leakage-resilient deep learning with differential privacy (DP). First, we analyze existing implementations of deep learning with differential privacy, which use a fixed noise variance derived from fixed privacy parameters to inject constant noise into the gradients of all layers. Despite the DP guarantee provided, this approach suffers from low accuracy and remains vulnerable to gradient leakage attacks. Second, using dynamic privacy parameters, we present a gradient-leakage-resilient approach to deep learning with a differential privacy guarantee. Unlike fixed-parameter strategies that yield a constant noise variance, dynamic-parameter strategies offer alternative techniques for introducing adaptive noise variance and adaptive noise injection, closely aligned with the trend of gradient updates during differentially private model training. Finally, we describe four complementary metrics to evaluate and compare the alternative approaches.
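Dynamic privacy parameters can be as simple as a schedule on the noise multiplier. The toy schedule below (a linear decay, chosen purely for illustration) conveys the idea of aligning noise with the shrinking magnitude of gradient updates; the paper compares several adaptive strategies and accounts for their privacy cost, which this sketch does not.

```python
def adaptive_noise_multiplier(step, total_steps, sigma_start=2.0, sigma_end=0.5):
    """Linearly decay the DP-SGD noise multiplier over training, a simple
    stand-in for the adaptive noise-variance strategies discussed above."""
    frac = step / max(1, total_steps - 1)
    return sigma_start + frac * (sigma_end - sigma_start)

# toy usage: noise multiplier over a 5-step schedule
print([round(adaptive_noise_multiplier(s, 5), 2) for s in range(5)])
```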
We review the use of differential privacy (DP) for privacy protection in machine learning (ML). We show that, driven by the goal of preserving the accuracy of the learned models, DP-based ML implementations are so loose that they do not provide the ex ante privacy guarantees of DP. Instead, what they deliver is essentially noise addition similar to the traditional (and often criticized) statistical disclosure control approach. Due to the lack of formal privacy guarantees, the actual level of privacy offered must be assessed experimentally, which is seldom done. In this respect, we present empirical results showing that standard anti-overfitting techniques in ML can achieve a better utility/privacy/efficiency trade-off than DP-based implementations.
When applied to large-scale learning problems, the conventional wisdom on privacy-preserving deep learning, known as differentially private stochastic gradient descent (DP-SGD), has met with limited success due to significant performance degradation and high memory overhead compared with its non-private counterpart. We show how to mitigate the performance drop by replacing DP-SGD with a novel DP forward propagation (DP-FP) followed by an off-the-shelf non-DP optimizer. Our DP-FP employs novel (1) representation clipping followed by noise addition in the forward-propagation stage, and (2) micro-batch construction via subsampling to achieve DP amplification and reduce the noise power to $1/M$, where $M$ is the number of micro-batches in a step. When training a classification model, our DP-FP, with all privacy-preserving operations applied to the representation, is innately free of the gradient bias, the total noise that scales with model size, and the memory issues of DP-SGD. As a result, DP-FP outperforms cutting-edge DP-SGD while retaining the same level of privacy, and it approaches the non-private baseline, significantly outperforming state-of-the-art DP-SGD variants. For example, when applied to RoBERTa-large on four downstream tasks, DP-FP achieves an average accuracy of 91.34% with a privacy budget of less than 3, a 3.81% performance improvement over the state-of-the-art DP-SGD and only a 0.9% loss compared with the non-private baseline, but with a significantly lower risk of privacy leakage.
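The forward-pass operation at the heart of DP-FP is easy to picture: clip each example's hidden representation and add noise before the classification head consumes it. Below is a minimal sketch of that single step on a batch of representations; micro-batch subsampling, the privacy accounting, and the non-DP optimizer that follow are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_forward_representation(h, clip=1.0, noise_multiplier=0.5):
    """DP-FP-style forward step (simplified): clip each example's hidden
    representation to a norm bound, then add Gaussian noise before it is
    passed to the classification head.  Back-propagation and the optimizer
    remain non-private."""
    h = np.asarray(h, dtype=float)
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h = h / np.maximum(1.0, norms / clip)
    return h + rng.normal(0.0, noise_multiplier * clip, size=h.shape)

# toy usage: batch of 4 examples with 8-dimensional representations
print(dp_forward_representation(rng.normal(size=(4, 8)))[0])
```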
As mobile devices and location-based services are increasingly deployed in different smart-city scenarios and applications, many unexpected privacy leakages have arisen from the collection and sharing of geolocated data. User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications. Notably, four spatio-temporal points are enough to uniquely identify 95% of individuals, which exacerbates personal-information leakage. To counter malicious purposes such as user re-identification, we propose an LSTM-based adversarial mechanism with representation learning that attains a privacy-preserving feature representation of the original geolocated data (i.e., mobility data) for sharing purposes. These representations aim to minimize the chance of user re-identification and full data reconstruction under a minimal utility budget (i.e., loss). We train the mechanism by quantifying the privacy-utility trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk, and mobility predictability. We report an exploratory analysis that enables users to assess this trade-off through a specific loss function and its weight parameters. Extensive comparison results on four representative mobility datasets demonstrate the superiority of our proposed architecture in mobility privacy protection and the efficiency of the proposed privacy-preserving feature extractor. We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility. Our results also show that, by exploring the Pareto-optimal setting, we can simultaneously increase both privacy (by 45%) and utility (by 32%).
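The trade-off above is steered through a weighted loss whose terms reward utility and penalize the adversaries' success. The one-liner below is a hypothetical rendering of such an objective, with weight names invented for illustration; the paper's actual loss terms and adversarial training loop are more elaborate.

```python
def privacy_utility_objective(utility_loss, reid_loss, recon_loss,
                              w_util=1.0, w_reid=1.0, w_recon=1.0):
    """Feature-extractor objective: minimize the mobility-utility loss while
    maximizing the adversaries' re-identification and reconstruction losses
    (hence the minus signs).  The weights control the privacy-utility trade-off."""
    return w_util * utility_loss - w_reid * reid_loss - w_recon * recon_loss

# toy usage with illustrative loss values
print(privacy_utility_objective(0.42, 1.7, 2.3))
```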
Language models are widely deployed to provide automatic text completion services in user products. However, recent research has revealed that language models (especially large ones) bear considerable risk of memorizing private training data, which is then vulnerable to leakage and extraction by adversaries. In this study, we test the efficacy of a range of privacy-preserving techniques to mitigate unintended memorization of sensitive user text, while varying other factors such as model size and adversarial conditions. We test both "heuristic" mitigations (those without formal privacy guarantees) and Differentially Private training, which provides provable levels of privacy at the cost of some model performance. Our experiments show that, with the exception of L2 regularization, heuristic mitigations are largely ineffective in preventing memorization in our test suite, possibly because they make overly strong assumptions about the characteristics that define "sensitive" or "private" text. In contrast, Differential Privacy reliably prevents memorization in our experiments, despite its computational and model-performance costs.