In privacy-preserving machine learning, it is common for the owner of a trained model to have no physical access to the data. Instead, only secure remote access to the data lake is granted to the model owner, without any ability to retrieve the data itself. Yet the model owner may wish to periodically export trained models from the remote repository, and the question arises whether this can cause data leakage. In this paper, we introduce the concept of a data stealing attack during the export of neural networks. It consists of hiding information in the exported network that allows images originally stored in the data lake to be reconstructed outside of it. More precisely, we show that one can train a network that performs lossy image compression while simultaneously solving a utility task such as image segmentation. Exporting the compression decoder network together with some image codes then enables image reconstruction outside the data lake. We explore the feasibility of such attacks on databases of CT and MR images, showing that perceptually meaningful reconstructions of the target dataset can be obtained and that the stolen dataset can readily be used to solve a broad range of tasks. Comprehensive experiments and analyses show that data stealing attacks should be considered a threat to sensitive imaging data sources.
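A minimal sketch of the core idea, assuming a toy CNN on synthetic CT/MR-like slices (the `CovertCompressor` class, layer sizes, and loss weighting are all illustrative, not the paper's architecture): one backbone serves the legitimate segmentation task, while a hidden code path and a decoder are trained for lossy reconstruction and could later be exported with the image codes.

```python
# Hypothetical sketch: a segmentation network that also emits a compact latent
# code plus a reconstruction decoder that could be exported alongside the codes.
import torch
import torch.nn as nn

class CovertCompressor(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(          # legitimate utility task
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        self.to_code = nn.Conv2d(32, code_dim, 1)    # hidden lossy image code
        self.decoder = nn.Sequential(                # exported with the codes
            nn.ConvTranspose2d(code_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.backbone(x)
        return self.seg_head(h), self.decoder(self.to_code(h))

model = CovertCompressor()
x = torch.rand(2, 1, 64, 64)                         # stand-in for CT/MR slices
mask = torch.randint(0, 2, (2, 1, 64, 64)).float()   # stand-in segmentation labels
seg, recon = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(seg, mask) \
     + nn.functional.mse_loss(recon, x)   # utility loss + covert reconstruction loss
loss.backward()
```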
Training computer-vision algorithms on medical images for tasks such as disease diagnosis or image segmentation is difficult, in large part because of privacy concerns. Generative image models are therefore highly sought after to facilitate data sharing. However, 3-D generative models are understudied, and their privacy leakage needs to be investigated. We introduce our 3-D generative model, the transversal GAN (TrGAN), using head-and-neck PET images conditioned on tumor masks as a case study. We define quantitative measures of image fidelity, utility, and privacy for our model. These metrics are evaluated over the course of training to identify an ideal fidelity-utility-privacy trade-off and to establish the relationships between these parameters. We show that the discriminator of TrGAN is vulnerable to attack, and that an attacker can identify which samples were used in training with almost perfect accuracy (AUC = 0.99). We also show that an attacker with access only to the generator cannot reliably classify whether a sample was used in training (AUC = 0.51). This suggests that the TrGAN generator, but not the discriminator, can be used to share synthetic 3-D PET data with minimal privacy risk while maintaining good utility and fidelity.
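A hedged sketch of the style of attack reported as near-perfect against the discriminator (the toy critic, input shape, and sample counts are assumptions for illustration): score candidate samples with the trained discriminator and use the score as a membership signal.

```python
# Hypothetical sketch: discriminator-score membership inference against a GAN.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

discriminator = nn.Sequential(              # stand-in for a trained GAN critic
    nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 1)
)

train_members = torch.rand(100, 1, 32, 32)  # samples used to train the GAN
non_members = torch.rand(100, 1, 32, 32)    # held-out samples

with torch.no_grad():
    scores = torch.cat([discriminator(train_members),
                        discriminator(non_members)]).squeeze(1)
labels = torch.cat([torch.ones(100), torch.zeros(100)])

# A well-fit discriminator tends to assign higher realness scores to training
# members, so the raw score itself separates members from non-members.
print("membership AUC:", roc_auc_score(labels.numpy(), scores.numpy()))
```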
Federated learning has been proposed as a privacy-preserving machine learning framework that enables multiple clients to collaborate without sharing raw data. However, client privacy protection is not guaranteed by design in this framework. Prior work has shown that the gradient-sharing strategies in federated learning can be vulnerable to data reconstruction attacks. In practice, though, clients may not transmit raw gradients, either because of high communication costs or because of stronger privacy requirements. Empirical studies have shown that gradient obfuscation, including intentional obfuscation via gradient noise injection and unintentional obfuscation via gradient compression, can provide additional privacy protection against reconstruction attacks. In this work, we present a new data reconstruction attack framework targeting image classification tasks in federated learning. We show that commonly adopted gradient post-processing procedures, such as gradient quantization, gradient sparsification, and gradient perturbation, may give a false sense of security in federated learning. In contrast to prior studies, we argue that privacy enhancement should not be treated as a by-product of gradient compression. In addition, we design a new method under the proposed framework to reconstruct images at the semantic level. We quantify the semantic privacy leakage and compare it with evaluations based on image similarity scores. Our comparison challenges the image data leakage evaluation schemes in the literature. The results highlight the importance of revisiting and redesigning privacy protection mechanisms for client data in existing federated learning algorithms.
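For context, a minimal gradient-matching sketch in the spirit of classic DLG-style reconstruction, not the semantic-level method proposed here (toy model, known label, and hyperparameters are all assumptions): the attacker optimizes a dummy input so that its gradient matches the gradient shared by a client.

```python
# Hypothetical sketch: basic gradient-matching reconstruction of one example.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# "Observed" gradient shared by a client for one private example
# (the label is assumed known here for simplicity).
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# The attacker optimizes a dummy input so its gradient matches the observed one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(criterion(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    opt.step()
```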
We investigate the security of split learning, a novel collaborative machine learning framework that enables peak performance while requiring minimal resource consumption. In this paper, we expose vulnerabilities of the protocol and demonstrate its inherent insecurity by introducing general attack strategies targeting the reconstruction of clients' private training sets. More prominently, we show that a malicious server can actively hijack the learning process of the distributed model and bring it into an insecure state that enables inference attacks on clients' data. We implement different adaptations of the attack and test them on various datasets as well as within realistic threat scenarios. We demonstrate that our attack is able to overcome recently proposed defensive techniques aimed at enhancing the security of the split learning protocol. Finally, we also illustrate the protocol's insecurity against malicious clients by extending previously devised attacks for federated learning. To make our results reproducible, we make our code available at https://github.com/pasquini-dario/splitn_fsha.
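A compact, hedged sketch of a feature-space hijacking setup of this kind, assuming a malicious server with auxiliary data and heavily simplified architectures (none of the names or sizes below come from the paper): the server trains a shadow encoder, a decoder that inverts it, and a critic, and sends the client gradients that drag the client's features into the shadow encoder's feature space.

```python
# Hypothetical sketch: hijacking the client encoder in split learning.
import torch
import torch.nn as nn

client_f = nn.Sequential(nn.Flatten(), nn.Linear(784, 64))   # runs on the client
pilot_f  = nn.Sequential(nn.Flatten(), nn.Linear(784, 64))   # server's shadow encoder
decoder  = nn.Sequential(nn.Linear(64, 784))                 # inverts pilot_f
critic   = nn.Sequential(nn.Linear(64, 1))                   # feature-space discriminator

opt_srv = torch.optim.Adam([*pilot_f.parameters(), *decoder.parameters(),
                            *critic.parameters()], lr=1e-3)
opt_cli = torch.optim.Adam(client_f.parameters(), lr=1e-3)

x_aux    = torch.rand(32, 1, 28, 28)   # server-side auxiliary data
x_client = torch.rand(32, 1, 28, 28)   # client's private batch (never seen by server)

for _ in range(100):
    # Server: make the decoder invert pilot_f, and train the critic to separate
    # client features from pilot features (simplified Wasserstein-style losses).
    z_aux = pilot_f(x_aux)
    rec_loss = nn.functional.mse_loss(decoder(z_aux), x_aux.flatten(1))
    z_cli = client_f(x_client).detach()
    critic_loss = critic(z_cli).mean() - critic(z_aux.detach()).mean()
    opt_srv.zero_grad(); (rec_loss + critic_loss).backward(); opt_srv.step()

    # "Hijacked" gradient sent to the client: instead of a task loss, the server
    # backpropagates a signal that pulls client features into pilot_f's space.
    opt_cli.zero_grad(); (-critic(client_f(x_client)).mean()).backward(); opt_cli.step()

# After convergence, decoder(client_f(x_private)) approximates the private inputs.
```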
Data leakage from public machine learning (ML) models is an area of growing significance, as commercial and government applications of ML can draw on multiple data sources, potentially including sensitive data of users and customers. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage that is natural to ML models, potential malicious leakage caused by privacy attacks, and the currently available defense mechanisms. We focus on inference-time leakage, the most likely scenario for publicly available models. We first discuss what constitutes leakage in the context of different data, tasks, and model architectures. We then present a taxonomy across involuntary and malicious leakage and the available defenses, followed by the currently available evaluation metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
Large training data and expensive model tweaking are standard features of deep learning for images. As a result, data owners often utilize cloud resources to develop large-scale complex models, which raises privacy concerns. Existing solutions are either too expensive to be practical or do not sufficiently protect the confidentiality of data and models. In this paper, we study and compare novel image disguising mechanisms, DisguisedNets and InstaHide, aiming to achieve a better trade-off among the level of protection for outsourced DNN model training, the expenses, and the utility of data. DisguisedNets are novel combinations of image blocktization, block-level random permutation, and two block-level secure transformations: random multidimensional projection (RMT) and AES pixel-level encryption (AES). InstaHide is an image mixup and random pixel flipping technique (Huang et al., 2020). We have analyzed and evaluated them under a multi-level threat model. RMT provides a better security guarantee than InstaHide, under the Level-1 adversarial knowledge with well-preserved model quality. In contrast, AES provides a security guarantee under the Level-2 adversarial knowledge, but it may affect model quality more. The unique features of image disguising also help us to protect models from model-targeted attacks. We have done an extensive experimental evaluation to understand how these methods work in different settings for different datasets.
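A hedged sketch of the block-wise disguising idea, with illustrative parameters only (the image size, block size, and projection dimensions below are not those of DisguisedNets): blocktize an image, permute the blocks with a secret permutation, and apply a secret random multidimensional projection to each block before outsourcing.

```python
# Hypothetical sketch: blocktization + block permutation + random projection (RMT).
import numpy as np

rng = np.random.default_rng(0)          # the secret: seed for permutation and projection
img = rng.random((64, 64))
B = 16                                  # block size
blocks = [img[i:i + B, j:j + B]
          for i in range(0, 64, B) for j in range(0, 64, B)]

perm = rng.permutation(len(blocks))     # secret block-level permutation
proj = rng.standard_normal((B, B))      # secret random projection matrix

disguised_blocks = [proj @ blocks[p] for p in perm]

# Reassemble the disguised image; a DNN is then trained directly on such images.
rows = [np.hstack(disguised_blocks[r * 4:(r + 1) * 4]) for r in range(4)]
disguised = np.vstack(rows)
print(disguised.shape)                  # (64, 64)
```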
Federated learning is a collaborative method that aims to preserve data privacy while creating AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities to secure aggregation that could arise if the server is fully malicious and attempts to obtain access to private, potentially sensitive data. Furthermore, we provide a method to further defend against such a malicious server, and demonstrate effectiveness against known attacks that reconstruct data in a federated learning setting.
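For readers unfamiliar with the protocol under attack, a minimal sketch of pairwise-mask secure aggregation (simplified: real protocols add key agreement, dropout handling, and the hardening this abstract argues for): each pair of clients shares a random mask that cancels in the sum, so an honest server only learns the aggregate.

```python
# Hypothetical sketch: pairwise additive masking for secure aggregation.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 5
updates = [rng.standard_normal(dim) for _ in range(n_clients)]

# Pairwise masks: client i adds +m_ij, client j adds -m_ij.
masks = {(i, j): rng.standard_normal(dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    u = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i: u += m
        if b == i: u -= m
    masked.append(u)

# The server sees only masked updates; their sum equals the true aggregate.
print(np.allclose(sum(masked), sum(updates)))   # True
```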
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
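A hedged sketch of one white-box signal of the kind such attacks exploit (the toy model, input sizes, and the use of a single gradient-norm feature are assumptions; the paper's attacks are richer): per-example gradients of a trained model tend to behave differently on training members than on non-members, and such features can feed a separate attack classifier.

```python
# Hypothetical sketch: extracting a per-example gradient-norm membership feature.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
criterion = nn.CrossEntropyLoss()

def grad_norm(x, y):
    model.zero_grad()
    criterion(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters())).item()

x_member, y_member = torch.rand(3, 32, 32), torch.tensor(1)
x_outside, y_outside = torch.rand(3, 32, 32), torch.tensor(1)

# These scalar features (optionally per layer, alongside activations and losses)
# are what a white-box attack classifier would consume.
print(grad_norm(x_member, y_member), grad_norm(x_outside, y_outside))
```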
Differentially private federated learning (DP-FL) has received increasing attention to mitigate the privacy risk in federated learning. Although different schemes for DP-FL have been proposed, there is still a utility gap. Employing central Differential Privacy in FL (CDP-FL) can provide a good balance between the privacy and model utility, but requires a trusted server. Using Local Differential Privacy for FL (LDP-FL) does not require a trusted server, but suffers from lousy privacy-utility trade-off. Recently proposed shuffle DP based FL has the potential to bridge the gap between CDP-FL and LDP-FL without a trusted server; however, there is still a utility gap when the number of model parameters is large. In this work, we propose OLIVE, a system that combines the merits from CDP-FL and LDP-FL by leveraging Trusted Execution Environment (TEE). Our main technical contributions are the analysis and countermeasures against the vulnerability of TEE in OLIVE. Firstly, we theoretically analyze the memory access pattern leakage of OLIVE and find that there is a risk for sparsified gradients, which is common in FL. Secondly, we design an inference attack to understand how the memory access pattern could be linked to the training data. Thirdly, we propose oblivious yet efficient algorithms to prevent the memory access pattern leakage in OLIVE. Our experiments on real-world data demonstrate that OLIVE is efficient even when training a model with hundreds of thousands of parameters and effective against side-channel attacks on TEE.
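A hedged toy illustration of the access-pattern issue described above (dimensions, indices, and values are invented): aggregating sparsified client updates by touching only their nonzero coordinates produces a data-dependent memory access pattern, whereas a dense pass over the full parameter vector touches every position regardless of the data. Real oblivious algorithms inside an enclave are considerably more involved (e.g., oblivious sort/shuffle).

```python
# Hypothetical sketch: data-dependent vs. data-independent aggregation access patterns.
import numpy as np

dim = 10
global_update = np.zeros(dim)
client_sparse = [[(2, 0.5), (7, -0.1)], [(2, 0.3), (4, 0.2)]]   # (index, value) pairs

# Leaky pattern: the memory addresses visited depend on the secret indices.
for update in client_sparse:
    for idx, val in update:
        global_update[idx] += val

# Data-independent pattern: densify, then touch every coordinate uniformly.
global_update_oblivious = np.zeros(dim)
for update in client_sparse:
    dense = np.zeros(dim)
    for idx, val in update:
        dense[idx] = val              # (this step itself must also be done obliviously)
    global_update_oblivious += dense  # uniform access over all coordinates

print(np.allclose(global_update, global_update_oblivious))   # True
```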
Deep learning models are increasingly deployed in real-world applications. These models are often deployed on the server side and receive user data in an information-rich representation in order to solve a specific task, such as image classification. Since images can contain sensitive information that users may not be willing to share, privacy protection becomes increasingly important. Adversarial representation learning (ARL) is a common approach that trains an encoder running on the client side to obfuscate images. It is assumed that the obfuscated images can be safely transmitted and used for the task on the server without privacy concerns. In this work, however, we find that training a reconstruction attacker can successfully recover the original images of existing ARL methods. To this end, we introduce a novel ARL method based on low-pass filtering, limiting the amount of available information to be encoded in the frequency domain. Our experimental results demonstrate that our approach withstands reconstruction attacks while outperforming previous state-of-the-art methods with respect to the privacy-utility trade-off. We further conduct a user study to qualitatively assess our defense against reconstruction attacks.
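A minimal sketch of the frequency-domain low-pass filtering the method builds on (the fixed FFT cutoff is an illustrative assumption; the paper's encoder is learned rather than a fixed filter): keep only the low-frequency coefficients of an image so the transmitted representation carries less reconstructable detail.

```python
# Hypothetical sketch: low-pass filtering an image in the frequency domain.
import numpy as np

img = np.random.rand(64, 64)
F = np.fft.fftshift(np.fft.fft2(img))

cutoff = 8                                  # keep a (2*cutoff)^2 low-frequency square
mask = np.zeros_like(F)
c = img.shape[0] // 2
mask[c - cutoff:c + cutoff, c - cutoff:c + cutoff] = 1

low_passed = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
print(low_passed.shape)                     # (64, 64), with high frequencies removed
```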
Recent advances in communication technologies and the Internet, together with artificial intelligence (AI), have enabled smart healthcare. Traditionally, AI techniques require centralized data collection and processing, which may be infeasible in realistic healthcare settings due to the high scalability of modern healthcare networks and growing data privacy concerns. As an emerging distributed collaborative AI paradigm, federated learning (FL) is particularly attractive for smart healthcare, as it performs AI training by coordinating multiple clients (e.g., hospitals) without sharing raw data. Accordingly, we provide a comprehensive survey on the use of FL in smart healthcare. First, we present the recent advances, motivations, and requirements for using FL in smart healthcare. Recent FL designs for smart healthcare are then discussed, ranging from resource-aware FL and secure and privacy-aware FL to incentive-based FL and personalized FL. Subsequently, we provide a state-of-the-art review of emerging FL applications in key healthcare domains, including health data management, remote health monitoring, medical imaging, and COVID-19 detection. Several recent FL-based smart healthcare projects are analyzed, and the key lessons learned from the survey are highlighted. Finally, we discuss interesting research challenges and possible directions for future research on FL for smart healthcare.
In terms of artificial intelligence, there are several security and privacy deficiencies in the traditional centralized training methods of machine learning models by a server. To address this limitation, federated learning (FL) has been proposed and is known for breaking down "data silos" and protecting the privacy of users. However, FL has not yet gained popularity in the industry, mainly due to its security, privacy, and high cost of communication. For the purpose of advancing the research in this field, building a robust FL system, and realizing the wide application of FL, this paper sorts out the possible attacks and corresponding defenses of the current FL system systematically. Firstly, this paper briefly introduces the basic workflow of FL and related knowledge of attacks and defenses. It reviews a great deal of research about privacy theft and malicious attacks that have been studied in recent years. Most importantly, in view of the current three classification criteria, namely the three stages of machine learning, the three different roles in federated learning, and the CIA (Confidentiality, Integrity, and Availability) guidelines on privacy protection, we divide attack approaches into two categories according to the training stage and the prediction stage in machine learning. Furthermore, we also identify the CIA property violated for each attack method and potential attack role. Various defense mechanisms are then analyzed separately from the level of privacy and security. Finally, we summarize the possible challenges in the application of FL from the aspect of attacks and defenses and discuss the future development direction of FL systems. In this way, the designed FL system has the ability to resist different attacks and is more secure and stable.
Training computer-vision algorithms on medical images for disease diagnosis or image segmentation is difficult due to the lack of training data and labelled samples, as well as privacy concerns. For this reason, a robust generative method for creating synthetic data is highly sought after. However, most 3-D image generators require additional image input or are extremely memory-intensive. To address these issues, we propose adapting video generation techniques to 3-D image generation. Using a temporal GAN (TGAN) architecture, we show that we are able to generate realistic head-and-neck PET images. We also show that by conditioning the generator on tumor masks, we are able to control the geometry and location of tumors in the generated images. To test the usefulness of the synthetic images, we train a segmentation model using the synthetic images. Synthetic images conditioned on real tumor masks are automatically segmented, and the corresponding real images are also segmented. We evaluate the segmentations using the Dice score and find that the segmentation algorithm performs similarly on both datasets (0.65 for synthetic data, 0.70 for real data). Various radiomic features are then computed on the segmented tumor volumes for each dataset. A comparison of the real and synthetic feature distributions shows that seven of the eight feature distributions have statistically insignificant differences (p > 0.05). Correlation coefficients between all radiomic features are also computed, showing that all of the strong statistical correlations in the real dataset are preserved in the synthetic dataset.
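As a quick reference, a hedged sketch of the Dice overlap used above to compare segmentation quality on synthetic versus real data (binary masks and toy random volumes assumed):

```python
# Hypothetical sketch: Dice score between a predicted and a reference binary mask.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

pred = np.random.rand(32, 32, 32) > 0.5
truth = np.random.rand(32, 32, 32) > 0.5
print(dice(pred, truth))
```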
Deep Learning has recently become hugely popular in machine learning for its ability to solve end-to-end learning systems, in which the features and the classifiers are learned simultaneously, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Its success is due to a combination of recent algorithmic breakthroughs, increasingly powerful computers, and access to significant amounts of data. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level differential privacy applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack).
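A hedged sketch of the GAN-style attack idea (the toy shared classifier, the extra "fake" class, and the target class are illustrative assumptions): a malicious participant treats the shared model as a discriminator and trains a local generator to produce samples that the shared model assigns to a victim's class.

```python
# Hypothetical sketch: a malicious participant's generator driven by the shared model.
import torch
import torch.nn as nn

shared_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 11))  # 10 classes + "fake"
generator = nn.Sequential(nn.Linear(16, 28 * 28), nn.Tanh())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

target_class = torch.full((32,), 3)     # a class held only by the victim

for _ in range(100):
    z = torch.randn(32, 16)
    fake = generator(z).view(32, 1, 28, 28)
    # Push generated samples toward the victim's class under the shared model.
    loss = nn.functional.cross_entropy(shared_model(fake), target_class)
    opt_g.zero_grad(); loss.backward(); opt_g.step()

# The adversary then reinjects the class-3 lookalikes (relabelled as the extra
# class) into its local updates, pressuring the victim to reveal more detail.
```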
In federated learning for medical image analysis, the security of the learning scheme is of paramount importance. Such settings can be compromised by adversaries targeting either the private data used by the federation or the integrity of the model itself. This requires the medical imaging community to develop mechanisms for training collaborative models that are private and robust against adversarial data. In response to these challenges, we propose a practical open-source framework to study the effectiveness of combining differential privacy, model compression, and adversarial training to improve model robustness against adversarial samples under train- and inference-time attacks. Using our framework, we achieve competitive model performance, a significant reduction in model size, and improved empirical adversarial robustness without severe performance degradation, which is critical for medical image analysis.
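A hedged sketch of two of the ingredients the framework studies in combination, FGSM-style adversarial training plus DP-SGD-style clipping and noise (all constants are illustrative, batch-level clipping is shown for brevity whereas true DP-SGD clips per example, and this is not the framework's actual configuration):

```python
# Hypothetical sketch: one training step mixing adversarial training with
# clipped, noised gradients.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
clip_norm, noise_std, eps = 1.0, 0.01, 0.03

x = torch.rand(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))

# Adversarial example generation (single-step FGSM).
x_adv = x.clone().requires_grad_(True)
criterion(model(x_adv), y).backward()
x_adv = (x + eps * x_adv.grad.sign()).detach()

# Train on adversarial examples with clipped, perturbed gradients.
opt.zero_grad()
criterion(model(x_adv), y).backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
for p in model.parameters():
    p.grad += noise_std * torch.randn_like(p.grad)
opt.step()
```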
Split learning (SL) has been proposed to train deep learning models in a decentralized manner. For decentralized healthcare applications with vertical data partitioning, SL can be beneficial, as it allows institutions with complementary features or images for a shared set of patients to jointly develop more robust and generalizable models. In this work, we propose "Split-U-Net" and successfully apply SL to collaborative biomedical image segmentation. However, SL requires the exchange of intermediate activation maps and gradients to allow training models across different feature spaces, which can leak data and raise privacy concerns. We therefore also quantify the amount of data leakage in common SL scenarios for biomedical image segmentation and provide ways to counteract such leakage by applying appropriate defense strategies.
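A hedged sketch of the vertically split setup being described (toy encoders and a trivial fusion head, not the Split-U-Net architecture): two institutions each encode their own modality and exchange only intermediate activations, which is exactly the surface whose leakage is quantified here.

```python
# Hypothetical sketch: vertical split learning with exchanged activation maps.
import torch
import torch.nn as nn

enc_a = nn.Conv2d(1, 8, 3, padding=1)    # institution A (one modality)
enc_b = nn.Conv2d(1, 8, 3, padding=1)    # institution B (complementary modality)
fusion_head = nn.Conv2d(16, 1, 1)        # joint segmentation head

x_a, x_b = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
act_a, act_b = enc_a(x_a), enc_b(x_b)    # these activation maps cross the wire
seg = fusion_head(torch.cat([act_a, act_b], dim=1))
print(seg.shape)                         # (2, 1, 64, 64)
```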
In federated learning (FL), data does not leave personal devices while they jointly train a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because data never "leaves" personal devices, FL is presented as privacy-preserving. Yet it was recently shown that this protection is but a thin facade, as even a passive attacker observing gradients can reconstruct data of individual users. In this paper, we argue that prior work still largely underestimates the vulnerability of FL. This is because prior efforts exclusively consider passive attackers that are honest-but-curious. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model's weights before users compute model gradients. We call the modified weights "trap weights". Our active attacker is able to recover user data perfectly and at near-zero cost: the attack requires no complex optimization objective. Instead, it exploits the inherent data leakage from model gradients and amplifies this effect by maliciously altering the weights of the shared model. These specificities enable our attack to scale to models trained with large mini-batches of data. Where attackers from prior work need hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data for both fully connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed, as they explicitly trust the central party with the crucial task of adding DP noise, and thus provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. Providing users with any meaningful data privacy would require a redesign of FL.
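A hedged sketch of the underlying leakage that trap weights amplify (toy layer and input; extending the recovery to large mini-batches, which is the paper's contribution, is not shown): for a fully connected first layer, each row of the weight gradient is the input scaled by the corresponding bias gradient, so a single example can be read off directly.

```python
# Hypothetical sketch: reading one input back out of a fully connected layer's gradient.
import torch
import torch.nn as nn

layer = nn.Linear(784, 32)
x = torch.rand(1, 784)                              # one private input
loss = layer(x).relu().sum()
loss.backward()

row = torch.nonzero(layer.bias.grad).flatten()[0]   # any row with a nonzero gradient
x_recovered = layer.weight.grad[row] / layer.bias.grad[row]
print(torch.allclose(x_recovered, x.squeeze(0), atol=1e-5))   # True
```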
Federated learning is an emerging paradigm allowing large-scale decentralized learning without sharing data across different data owners, which helps address the concern of data privacy in medical image analysis. However, the requirement that labels be consistent across clients in existing methods largely narrows its application scope. In practice, each clinical site may only annotate certain organs of interest, with partial or no overlap with other sites. Incorporating such partially labelled data into a unified federation is an unexplored problem of clinical significance and urgency. This work tackles the challenge with a novel federated multi-encoding U-Net (Fed-MENU) method for multi-organ segmentation. In our method, a multi-encoding U-Net (MENU-Net) is proposed to extract organ-specific features via different encoding sub-networks. Each sub-network can be seen as an expert for a specific organ and is trained for the corresponding client. In addition, to encourage the organ-specific features extracted by different sub-networks to be informative and distinctive, we regularize the training of the MENU-Net by designing an auxiliary generic decoder (AGD). Extensive experiments on four public datasets show that our Fed-MENU method can effectively obtain a federated learning model using partially labelled datasets, with superior performance over other models trained by either localized or centralized learning methods. The source code will be made publicly available upon publication of the paper.
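A hedged sketch of the multi-encoder idea only (toy shapes and a trivial decoder; this is not the Fed-MENU architecture and omits the auxiliary generic decoder): one organ-specific encoder per task feeding a shared decoder, so each partially labelled client trains its own "expert" sub-network.

```python
# Hypothetical sketch: organ-specific encoders with a shared decoder.
import torch
import torch.nn as nn

class MultiEncoderSeg(nn.Module):
    def __init__(self, n_organs=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Conv2d(1, 8, 3, padding=1) for _ in range(n_organs))
        self.decoder = nn.Conv2d(8, 1, 1)

    def forward(self, x, organ_id):
        # Route the input through the encoder "expert" for the labelled organ.
        return self.decoder(torch.relu(self.encoders[organ_id](x)))

model = MultiEncoderSeg()
x = torch.rand(2, 1, 64, 64)
print(model(x, organ_id=1).shape)   # a client labelled only for organ 1
```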
Gradient inversion attacks (or input recovery from gradients) are an emerging threat to the security and privacy preservation of federated learning, whereby malicious eavesdroppers or participants in the protocol can (partially) recover clients' private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup; relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs between privacy leakage and data utility of these defense methods, and find that combining them in an appropriate manner makes the attack less effective, even under the original strong assumptions. We also estimate the computational cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that the state-of-the-art attacks can currently be defended against with minor losses in data utility, as summarized in a list of potential strategies. Our code is available at: https://github.com/princeton-sysml/gradattack.
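A hedged sketch of two defenses of the kind studied here, gradient pruning (sparsification) and Gaussian perturbation applied to an update before sharing (illustrative hyperparameters only; not the paper's evaluated configurations):

```python
# Hypothetical sketch: perturb and prune a gradient update before sharing it.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
loss = nn.CrossEntropyLoss()(model(torch.rand(8, 3, 32, 32)),
                             torch.randint(0, 10, (8,)))
loss.backward()

prune_ratio, noise_std = 0.9, 0.01
for p in model.parameters():
    g = p.grad.clone().flatten()
    g += noise_std * torch.randn_like(g)          # Gaussian perturbation
    k = int(prune_ratio * g.numel())
    if k > 0:
        thresh = g.abs().kthvalue(k).values
        g[g.abs() <= thresh] = 0.0                # keep only the largest entries
    p.grad = g.view_as(p.grad)                    # this obfuscated update is shared
```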
Neural-network-based image compression has been studied extensively, but model robustness has been largely overlooked, even though it is crucial for service enabling. We perform adversarial attacks by injecting a small amount of noise perturbation into original source images and then encode these adversarial examples using prevailing learned image compression models. Experiments report severe distortion in the reconstructions of the adversarial examples, revealing a general vulnerability of existing methods regardless of the underlying compression model (e.g., network architecture, loss function, quality scale) and the optimization strategy used to inject the perturbation (e.g., noise threshold, signal distance measure). We then apply iterative adversarial finetuning to refine pretrained models. In each iteration, random source images and adversarial examples are mixed to update the underlying model. Results show the effectiveness of the proposed finetuning strategy by substantially improving the robustness of the compression models. Overall, our method is simple, effective, and generalizable, making it attractive for developing robust learned image compression solutions. All materials are publicly accessible at https://njuvision.github.io/trobustn for reproducible research.
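A hedged sketch of the attack surface and the finetuning loop (a toy autoencoder stands in for a real learned codec; the perturbation budget, step size, and mixing scheme are assumptions): perturb a source image within a small noise budget so the codec reconstructs it badly, then finetune on a mix of clean and adversarial inputs.

```python
# Hypothetical sketch: adversarial perturbation against a toy compression
# autoencoder, followed by one step of adversarial finetuning.
import torch
import torch.nn as nn

codec = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                      nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1))
x = torch.rand(1, 3, 32, 32)
eps = 0.01

# Craft an adversarial example: ascend the reconstruction error w.r.t. the input.
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(10):
    distortion = nn.functional.mse_loss(codec(x + delta), x)
    distortion.backward()
    delta.data = (delta + 0.005 * delta.grad.sign()).clamp(-eps, eps)
    delta.grad.zero_()
x_adv = (x + delta).detach()

# Iterative adversarial finetuning (one step shown): update the codec on a mix
# of clean and adversarial inputs so reconstructions stay faithful for both.
opt = torch.optim.Adam(codec.parameters(), lr=1e-4)
batch = torch.cat([x, x_adv])
opt.zero_grad()
nn.functional.mse_loss(codec(batch), batch).backward()
opt.step()
```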