Hierarchical text classification consists of classifying text documents into a hierarchy of classes and subclasses. Although artificial neural networks have proven useful for performing this task, they can unfortunately leak training data information to adversaries due to training data memorization. Using differential privacy during model training can mitigate leakage attacks against trained models, allowing models to be shared safely at the cost of reduced model accuracy. This work investigates the privacy-utility trade-off in hierarchical text classification with differential privacy guarantees, and identifies neural network architectures that offer superior trade-offs. To this end, we use a white-box membership inference attack to empirically assess the information leakage of three widely used neural network architectures. We show that large differential privacy parameters already suffice to fully mitigate membership inference attacks, thus resulting in only a moderate decrease in model utility. More specifically, for large datasets with long texts, we observe transformer-based models to achieve an overall favorable privacy-utility trade-off, while convolutional neural networks are preferable for smaller datasets with shorter texts.
Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget, ε, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of ε in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models.
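For reference, the privacy budget ε above quantifies the standard (ε, δ)-differential privacy guarantee; a textbook statement (not specific to this paper) in LaTeX:

```latex
% A randomized mechanism M is (\epsilon, \delta)-differentially private if,
% for all datasets D, D' differing in a single record and all output sets S:
\Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S] \;+\; \delta
```

Smaller ε forces the output distributions on neighboring datasets to be closer, i.e., stronger privacy; the loose settings criticized in this abstract correspond to large ε.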
Membership inference attacks are one of the simplest forms of privacy leakage from machine learning models: given a data point and a model, determine whether the point was used to train the model. Existing membership inference attacks exploit a model's abnormally high confidence when queried on its training data. These attacks do not apply if the adversary only has access to the model's predicted labels, without confidence scores. In this paper, we introduce label-only membership inference attacks. Instead of relying on confidence scores, our attacks evaluate the robustness of the model's predicted labels under perturbations to obtain a fine-grained membership signal. These perturbations include common data augmentations and adversarial examples. We empirically show that our label-only membership inference attacks perform on par with prior attacks that require access to model confidences. We further demonstrate that label-only attacks break multiple defenses against membership inference that rely, implicitly or explicitly, on a phenomenon we call confidence masking. These defenses modify a model's confidence scores in order to thwart attacks, but leave the model's predicted labels unchanged. Our label-only attacks show that confidence masking is not a viable defense strategy against membership inference. Finally, we investigate worst-case label-only attacks that infer membership for a small number of outlier data points. We show that label-only attacks also match confidence-based attacks in this setting. We find that training models with differential privacy and (strong) L2 regularization are the only known defense strategies that successfully prevent all attacks. This remains true even when the differential privacy budget is too high to offer meaningful provable guarantees.
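To illustrate the perturbation-robustness idea, here is a minimal sketch under assumed interfaces (`model_predict` returning only a label and the list of augmentation callables are hypothetical placeholders, not the authors' code):

```python
import numpy as np

def label_only_membership_score(model_predict, x, y, augmentations):
    """Membership signal from label robustness under perturbation.

    model_predict: callable that returns only the predicted label.
    augmentations: callables producing perturbed copies of x (e.g.,
    small shifts, flips, or adversarial perturbations).
    Intuition: training points tend to keep their label under more
    perturbations than non-members do.
    """
    agreements = [int(model_predict(aug(x)) == y) for aug in augmentations]
    return float(np.mean(agreements))  # higher -> more likely a member

def infer_membership(score, threshold=0.9):
    # The threshold would be calibrated, e.g., on shadow models.
    return score >= threshold
```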
Named entity recognition (NER) models are widely used for identifying named entities (e.g., individuals, locations, and other information) in text documents. Machine learning based NER models are increasingly being applied in privacy-sensitive applications that need automatic and scalable identification of sensitive information to redact text for data sharing. In this paper, we study the setting where NER models are available as a black-box service for identifying sensitive information in user documents and show that these models are vulnerable to membership inference on their training datasets. With updated pre-trained NER models from spaCy, we demonstrate two distinct membership attacks on these models. Our first attack capitalizes on unintended memorization in the NER's underlying neural network, a phenomenon neural networks are known to be vulnerable to. Our second attack leverages a timing side-channel to target NER models that maintain vocabularies constructed from the training data. We show that words from the training dataset follow different functional paths than previously unseen words, with measurable differences in execution time. Revealing the membership status of training samples has clear privacy implications: in text redaction, for example, sensitive words or phrases that are to be found and removed are at risk of being detected in the training dataset. Our experimental evaluation includes the redaction of both password and health data, presenting both security risks and privacy/regulatory issues. This is exacerbated by results that show memorization with only a single phrase. We achieved 70% AUC in our first attack on a text redaction use-case. We also show overwhelming success in the timing attack with 99.23% AUC. Finally, we discuss potential mitigation approaches to realize the safe use of NER models in light of the privacy and security implications of membership inference attacks.
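A hedged sketch of how such a timing signal could be measured (illustrative only; `ner_pipeline` stands in for a spaCy-style callable, and a real attack would need many repetitions and statistical testing):

```python
import time
import statistics

def median_query_time(ner_pipeline, text, repeats=100):
    """Median wall-clock time of running the NER pipeline on one input."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        ner_pipeline(text)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def timing_membership_signal(ner_pipeline, candidate, baseline, repeats=100):
    """Positive values suggest `candidate` takes a different (e.g., faster,
    previously-seen-word) code path than `baseline`, hinting that it may
    appear in the model's training vocabulary."""
    return (median_query_time(ner_pipeline, baseline, repeats)
            - median_query_time(ner_pipeline, candidate, repeats))
```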
Machine learning (ML) models have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies show that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model. MIAs on ML models can directly lead to privacy breaches. For example, by identifying that a clinical record has been used to train a model associated with a certain disease, an attacker can infer that the owner of the clinical record has the disease with a high chance. In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models. Meanwhile, many defense methods have been proposed to mitigate MIAs. Although MIAs on ML models form a newly emerging and rapidly growing research area, there has been no systematic survey on this topic yet. In this paper, we conduct the first comprehensive survey of membership inference attacks and defenses. We provide taxonomies for attacks and defenses based on their characteristics, and discuss their pros and cons. Based on the limitations and gaps identified in this survey, we point out several promising future research directions to inspire researchers who wish to follow this area. This survey not only serves as a reference for the research community but also provides a clear picture for researchers outside this research area. To further facilitate researchers, we have created an online resource repository which we keep updated with future relevant works. Interested readers can find the repository at https://github.com/hongshenghu/membership-inference-machine-learning-literature.
As a long-standing threat to the privacy of training data, membership inference attacks (MIAs) are ubiquitous in machine learning models. Existing works demonstrate a close connection between the distinguishability of the training and testing loss distributions and the model's vulnerability to MIAs. Motivated by these results, we propose a novel training framework based on a relaxed loss with a more achievable learning target, which leads to a narrowed generalization gap and reduced privacy leakage. RelaxLoss is applicable to any classification model, with the additional benefits of easy implementation and negligible overhead. Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs as well as model utility. Our defense is the first that can withstand a wide range of attacks while preserving (or even improving) the utility of the target model. Source code is available at https://github.com/dingfanchen/relaxloss.
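A heavily simplified sketch of the relaxed-loss idea as described above (one plausible reading: keep the training loss near a target level alpha instead of driving it to zero; the published algorithm also includes a posterior-flattening branch, omitted here):

```python
def relaxed_loss(loss_value, alpha):
    """Objective for one step of relaxed-loss training.

    While the ordinary loss is above the target alpha, minimize it as
    usual; once it drops below alpha, flip the sign so the optimizer
    performs gradient ascent. The training loss then hovers around
    alpha instead of collapsing to zero, narrowing the train/test loss
    gap that membership inference attacks exploit."""
    return loss_value if float(loss_value) >= alpha else -loss_value
```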
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization. In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.
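The methodology centers on inserting randomly chosen canary sequences and ranking them against alternatives; a sketch of the rank-based exposure computation as I understand the paper's metric (`log_perplexity` is an assumed scoring function of the trained model):

```python
import math

def exposure(log_perplexity, canary, candidate_space):
    """Exposure of an inserted canary relative to a candidate space.

    exposure = log2(|candidate_space|) - log2(rank of the canary),
    where sequences are ranked by model log-perplexity (lower = more
    likely). High exposure means the model assigns the canary an
    unusually high likelihood, i.e., it was likely memorized."""
    canary_score = log_perplexity(canary)
    # rank 1 = the most likely sequence in the candidate space
    rank = 1 + sum(1 for c in candidate_space
                   if log_perplexity(c) < canary_score)
    return math.log2(len(candidate_space)) - math.log2(rank)
```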
A distribution inference attack aims to infer statistical properties of data used to train machine learning models. These attacks are sometimes surprisingly potent, but the factors that impact distribution inference risk are not well understood and demonstrated attacks often rely on strong and unrealistic assumptions such as full knowledge of training environments even in supposedly black-box threat scenarios. To improve understanding of distribution inference risks, we develop a new black-box attack that even outperforms the best known white-box attack in most settings. Using this new attack, we evaluate distribution inference risk while relaxing a variety of assumptions about the adversary's knowledge under black-box access, like known model architectures and label-only access. Finally, we evaluate the effectiveness of previously proposed defenses and introduce new defenses. We find that although noise-based defenses appear to be ineffective, a simple re-sampling defense can be highly effective. Code is available at https://github.com/iamgroot42/dissecting_distribution_inference
A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset. These attacks are currently evaluated using average-case "accuracy" metrics that fail to characterize whether the attack can confidently identify any members of the training set. We argue that attacks should instead be evaluated by computing their true-positive rate at low (e.g., <0.1%) false-positive rates, and find that most prior attacks perform poorly when evaluated in this way. To address this, we develop a likelihood ratio attack (LiRA) that carefully combines multiple ideas from the literature. Our attack is 10x more powerful at low false-positive rates, and also strictly dominates prior attacks on existing metrics.
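A condensed sketch of the likelihood-ratio idea (a paraphrase under assumed helper names; the full attack uses many shadow models, per-example variance estimates, and further refinements):

```python
import numpy as np
from scipy.stats import norm

def logit_confidence(p_correct, eps=1e-12):
    """Stabilized logit of the model's confidence in the true class."""
    p = np.clip(np.asarray(p_correct, dtype=float), eps, 1 - eps)
    return np.log(p) - np.log(1 - p)

def lira_score(target_conf, shadow_confs_in, shadow_confs_out):
    """Log-likelihood ratio of "member" vs. "non-member" hypotheses.

    shadow_confs_in / _out: true-class confidences of shadow models
    trained with / without the target example. Gaussians are fit to
    the logit-scaled confidences of each population."""
    phi = logit_confidence(target_conf)
    phi_in = logit_confidence(shadow_confs_in)
    phi_out = logit_confidence(shadow_confs_out)
    mu_in, sd_in = phi_in.mean(), phi_in.std() + 1e-12
    mu_out, sd_out = phi_out.mean(), phi_out.std() + 1e-12
    # Large positive scores favor the "member" hypothesis.
    return norm.logpdf(phi, mu_in, sd_in) - norm.logpdf(phi, mu_out, sd_out)
```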
Leakage of data from publicly available machine learning (ML) models is an area of growing significance, since commercial and government applications of ML can draw on multiple data sources, potentially including users' and clients' sensitive data. We provide a comprehensive survey of contemporary advances in several areas, covering involuntary data leakage, which is natural to ML models; potential malicious leakage, which is caused by privacy attacks; and currently available defense mechanisms. We focus on inference-time leakage, the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malicious leakage and the available defenses, followed by the currently available assessment metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
Privacy-preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks in the use of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We create a large number of text samples from the fine-tuned BERT models utilizing a custom sequential sampling strategy with two prompting strategies. We search these samples for named entities and check whether they are also present in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that the application of DP has a huge effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that is only pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding to what extent BERT-based services are prone to training data extraction attacks.
Differential privacy (DP) has emerged as a rigorous formalism for reasoning about quantifiable privacy leakage. In machine learning (ML), DP has been employed to limit inference/disclosure of training examples. Prior work has leveraged DP across the ML pipeline, albeit in isolation, often focusing on mechanisms such as gradient perturbation. In this paper, we present DP-UTIL, a holistic utility analysis framework for DP across the ML pipeline with a focus on input perturbation, objective perturbation, gradient perturbation, output perturbation, and prediction perturbation. Given an ML task on privacy-sensitive data, DP-UTIL enables an ML privacy practitioner to perform a holistic comparative analysis of the impact of DP at these five perturbation points, measured in terms of model utility loss, privacy leakage, and the number of truly revealed training samples. We evaluate DP-UTIL on vision, medical, and financial datasets, using two representative learning algorithms (logistic regression and deep neural networks), with membership inference as a case-study attack. One highlight of our results is that prediction perturbation consistently achieves the lowest utility loss across all models on all datasets. For logistic regression models, objective perturbation results in the lowest privacy leakage compared to the other perturbation techniques. For deep neural networks, gradient perturbation results in the lowest privacy leakage. Moreover, our results on the number of truly revealed samples suggest that as privacy leakage increases, a differentially private model reveals a larger number of member samples. Overall, our findings suggest that to make an informed decision as to which perturbation mechanism to use, an ML privacy practitioner needs to examine the dynamics among optimization techniques (convex vs. non-convex), perturbation mechanisms, number of classes, and privacy budget.
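Of the five perturbation points, gradient perturbation is the most widely known, instantiated by DP-SGD; a minimal sketch of one noisy update under assumed NumPy array shapes (not the DP-UTIL code):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        g_norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (g_norm + 1e-12)))
    batch_size = len(per_example_grads)
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch_size,
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```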
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through the parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
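A small PyTorch sketch of the kind of white-box feature such attacks consume (the per-example gradient of the loss with respect to the parameters; the attack classifier trained on these features is omitted, and the helper is illustrative rather than the authors' implementation):

```python
import torch

def per_example_gradient_features(model, loss_fn, x, y):
    """Flattened gradient of the loss w.r.t. all parameters for one
    (x, y) pair. White-box MIAs feed such gradients (often layer-wise,
    together with activations and the loss value) into an attack model;
    gradient statistics tend to differ between members and non-members."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])
```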
During training, machine learning models may store or "learn" more information about their training data than is needed for the prediction or classification task. Property inference attacks exploit this: they aim to extract statistical properties of a given model's training data without access to the training data itself. Such properties may include the quality of pictures in order to identify the camera model, the age distribution to reveal the target audience of a product, or the types of hosts affected by malware attacks in a computer network. These attacks are especially accurate when the attacker has access to all model parameters, i.e., in a white-box scenario. By defending against such attacks, model owners can ensure that their training data, the associated properties, and their intellectual property remain private, even if they deliberately share their models, e.g., for collaborative training, or in case of model leakage. In this paper, we introduce property unlearning, an effective defense mechanism against white-box property inference attacks that is independent of the training data type, model task, or number of properties. Property unlearning mitigates property inference attacks by systematically changing the trained weights and biases of a target model so that an adversary cannot extract the chosen properties. We empirically evaluate property unlearning on three different datasets, including tabular and image data, and on two types of artificial neural networks. Our results show that, with a good privacy-utility trade-off, machine learning models can be protected against property inference attacks both effectively and reliably. Furthermore, our approach indicates that the mechanism is also effective at unlearning multiple properties.
Membership inference attacks (MIAs) against machine learning models can lead to serious privacy risks for the training datasets used to build those models. In this paper, we propose a novel and effective neuron-guided defense method against MIAs. We identify a key weakness in existing defense mechanisms against MIAs: they cannot simultaneously defend against the two commonly used neural-network-based MIAs, indicating that these two attacks should be evaluated separately to ensure defense effectiveness. We propose NeuGuard, a new defense approach that jointly controls the output and inner neurons' activations with the goal of guiding the model outputs on the training set and the testing set to have close distributions. NeuGuard consists of class-wise variance minimization, which targets the final output neurons, and layer-wise balanced output control, which aims to constrain the inner neurons in each layer. We evaluate NeuGuard and compare it with state-of-the-art defenses against two neural-network-based MIAs and five of the strongest metric-based MIAs, including the newly proposed label-only MIA, on three benchmark datasets. Results show that NeuGuard outperforms state-of-the-art defenses by offering a much improved utility-privacy trade-off, generality, and lower overhead.
Nowadays, owners and developers of deep learning models must consider stringent privacy-preservation rules for their training data, which is typically crowd-sourced and may retain sensitive information. The most widely adopted method to enforce privacy guarantees on deep learning models today relies on optimization techniques that implement differential privacy. According to the literature, this approach has proven to be a successful defense against several privacy attacks on models, but its downside is a substantial degradation of the models' performance. In this work, we compare the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques. We analyze the resulting models' utility, their training performance, and the effectiveness of membership inference and model inversion attacks against the learned models. Finally, we discuss the flaws and limits of differential privacy and empirically demonstrate the superior protection properties of dropout and L2 regularization.
Deploying machine learning models in production may allow adversaries to infer sensitive information about training data. There is a vast literature analyzing different types of inference risks, ranging from membership inference to reconstruction attacks. Inspired by the success of games (i.e., probabilistic experiments) to study security properties in cryptography, some authors describe privacy inference risks in machine learning using a similar game-based style. However, adversary capabilities and goals are often stated in subtly different ways from one presentation to the other, which makes it hard to relate and compose results. In this paper, we present a game-based framework to systematize the body of knowledge on privacy inference risks in machine learning.
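In this game-based style, the basic membership inference game can be sketched as follows (a generic formulation for illustration, not the paper's exact notation):

```python
import random

def membership_inference_game(train, sample_dataset, sample_point, adversary):
    """One round of a standard membership inference game.

    1. Draw a dataset D and a challenge point z from the distribution.
    2. Flip a fair coin b; train on D plus z if b == 1, on D alone otherwise.
    3. The adversary sees the trained model and z, and guesses b.
    The adversary's advantage over random guessing quantifies the risk."""
    D = sample_dataset()
    z = sample_point()
    b = random.randint(0, 1)
    model = train(D + [z]) if b == 1 else train(D)
    return int(adversary(model, z) == b)
```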
We study the privacy implications of deploying recurrent neural networks (RNNs) in machine learning models. We focus on a class of privacy threats called membership inference attacks (MIAs), which aim to infer whether or not a specific data record has been used to train a model. Considering three machine learning applications, namely machine translation, deep reinforcement learning, and image classification, we provide empirical evidence that RNNs are more vulnerable to MIAs than the alternative feed-forward architectures. We then study differential privacy methods to protect the privacy of the training dataset of RNNs. These methods are known to provide rigorous privacy guarantees irrespective of the adversary's model. We develop an alternative differential privacy mechanism to the so-called DP-FedAvg algorithm, which, instead of obfuscating gradients during training, obfuscates the ensemble of the model's outputs. Unlike existing work, the mechanism allows for post-training adjustment of the privacy parameters without having to retrain the model. We provide numerical results suggesting that the mechanism provides a strong shield against MIAs while trading off only marginal utility.
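One plausible instantiation of such output-side obfuscation with a post-hoc privacy knob (a speculative sketch, not the paper's mechanism: Gaussian noise is added to an averaged prediction vector at inference time, so the noise scale can be changed without retraining):

```python
import numpy as np

def noisy_ensemble_predict(models, x, noise_scale, rng):
    """Average the output distributions of several sub-models and add
    Gaussian noise before releasing a prediction. Because the noise is
    applied at inference time, noise_scale (the privacy knob) can be
    adjusted after training without retraining any model."""
    probs = np.mean([m(x) for m in models], axis=0)
    noisy = np.clip(probs + rng.normal(0.0, noise_scale, size=probs.shape),
                    0.0, None)
    return noisy / noisy.sum()  # renormalize to a probability vector
```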
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee the privacy of pre-training data and, at the same time, achieve better downstream performance on legal tasks without the need for additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that under specific training configurations we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.
We review the use of differential privacy (DP) for privacy protection in machine learning (ML). We show that, driven by the aim of preserving the accuracy of the learned models, DP-based ML implementations are so loose that they do not provide the ex ante privacy guarantees of DP. Instead, what they deliver is basically noise addition similar to the traditional (and often criticized) statistical disclosure control approaches. Due to the lack of formal privacy guarantees, the actual level of privacy offered must be experimentally assessed, which is seldom done. In this respect, we present empirical results showing that standard anti-overfitting techniques in ML can achieve a better utility/privacy/efficiency trade-off than DP.