Differential privacy (DP) provides a formal privacy guarantee that prevents an adversary with access to a machine learning model from extracting information about individual training points. The most popular DP training method, differentially private stochastic gradient descent (DP-SGD), achieves this protection by injecting noise during training. However, previous work has found that DP-SGD often causes significant performance degradation on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform far better than previously thought. Combining careful hyperparameter tuning with simple techniques that ensure signal propagation and improve the convergence rate, we obtain a new SOTA on CIFAR-10 without additional data of 81.4% under (8, 10^{-5})-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve 83.8% top-1 accuracy on ImageNet under (0.5, 8\cdot 10^{-7})-DP. Additionally, we achieve 86.7% top-1 accuracy under (8, 8\cdot 10^{-7})-DP, only 4.3% below the current non-private SOTA. We believe our results are an important step towards closing the accuracy gap between private and non-private image classification.
A major direction in differentially private machine learning is differentially private fine-tuning: pretraining a model on a source of "public data" and transferring the extracted features to downstream tasks. This is an important setting because many industry deployments fine-tune publicly available feature extractors on proprietary data for downstream tasks. In this paper, we use features extracted from state-of-the-art open source models to solve benchmark tasks in computer vision and natural language processing using differentially private fine-tuning. Our key insight is that by accelerating training, we can quickly drive the model parameters to regions in parameter space where the impact of noise is minimized. In doing so, we recover the same performance as non-private fine-tuning for realistic values of epsilon in [0.01, 1.0] on benchmark image classification datasets including CIFAR100.
Differentially private deep learning has recently witnessed advances in computational efficiency and privacy-utility trade-off. We explore whether further improvements along the two axes are possible and provide affirmative answers leveraging two instantiations of \emph{group-wise clipping}. To reduce the compute time overhead of private learning, we show that \emph{per-layer clipping}, where the gradient of each neural network layer is clipped separately, allows clipping to be performed in conjunction with backpropagation in differentially private optimization. This results in private learning that is as memory-efficient and almost as fast per training update as non-private learning for many workflows of interest. While per-layer clipping with constant thresholds tends to underperform standard flat clipping, per-layer clipping with adaptive thresholds matches or outperforms flat clipping under given training epoch constraints, hence attaining similar or better task performance within less wall time. To explore the limits of scaling (pretrained) models in differentially private deep learning, we privately fine-tune the 175 billion-parameter GPT-3. We bypass scaling challenges associated with clipping gradients that are distributed across multiple devices with \emph{per-device clipping} that clips the gradient of each model piece separately on its host device. Privately fine-tuning GPT-3 with per-device clipping achieves a task performance at $\epsilon=1$ better than what is attainable by non-privately fine-tuning the largest GPT-2 on a summarization task.
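To make the group-wise clipping idea concrete, here is a minimal numpy sketch (not the authors' implementation) of per-layer clipping: each layer's per-example gradients are clipped to their own threshold and noised separately, so the clip-and-noise step can be interleaved with backpropagation. The per-layer noise calibration and the overall privacy accounting across layers are only indicated in comments, not derived.

```python
import numpy as np

def per_layer_clip_and_noise(per_example_grads, thresholds, noise_multiplier, rng):
    """Per-layer (group-wise) clipping sketch.

    per_example_grads: dict layer_name -> array of shape (batch, *param_shape)
    thresholds:        dict layer_name -> clipping norm C_k for that layer
    Returns a dict of clipped, noised, averaged gradients, one entry per layer.
    """
    noisy_grads = {}
    for name, g in per_example_grads.items():
        C = thresholds[name]
        flat = g.reshape(g.shape[0], -1)                      # (batch, d_k)
        norms = np.linalg.norm(flat, axis=1, keepdims=True)   # per-example norms for this layer
        clipped = flat * np.minimum(1.0, C / (norms + 1e-12)) # clip each example to C
        summed = clipped.sum(axis=0)
        # Gaussian noise scaled to this layer's threshold; accounting across layers omitted.
        noise = rng.normal(0.0, noise_multiplier * C, size=summed.shape)
        noisy_grads[name] = ((summed + noise) / g.shape[0]).reshape(g.shape[1:])
    return noisy_grads

# toy usage: two "layers" with a batch of 4 per-example gradients each
rng = np.random.default_rng(0)
grads = {"layer1": rng.normal(size=(4, 8, 8)), "layer2": rng.normal(size=(4, 16))}
out = per_layer_clip_and_noise(grads, {"layer1": 1.0, "layer2": 0.5}, 1.1, rng)
print({k: v.shape for k, v in out.items()})
```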
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee the privacy of pre-training data while, at the same time, achieving better downstream performance on legal tasks without the need for additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that, under specific training configurations, we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.
The recent success of deep neural networks (DNNs) hinges on the availability of large-scale datasets; however, training on such datasets often poses privacy risks for sensitive training information. In this paper, we aim to explore the power of generative models and gradient sparsity, and propose DataLens, a scalable privacy-preserving generative model framework. Compared with the standard PATE privacy-preserving framework, which lets teachers vote on one-dimensional predictions, voting on high-dimensional gradient vectors is challenging from the standpoint of privacy preservation. Since dimension-reduction techniques are required, we need to navigate a delicate trade-off space between (1) the improvement brought by dimension reduction and (2) the resulting slowdown of SGD convergence. To address this, we leverage communication-efficient learning and propose a novel noise compression and aggregation approach, TopAgg, which combines top-k compression with a corresponding noise injection mechanism. We theoretically prove that the DataLens framework guarantees differential privacy for its generated data and provide an analysis of its convergence. To demonstrate the practical usage of DataLens, we conduct extensive experiments on different datasets, including MNIST, Fashion-MNIST, and the high-dimensional CelebA, and show that DataLens significantly outperforms other baseline DP generative models. In addition, we improve the proposed TopAgg approach, one of the main building blocks for DP SGD training, and show that it can achieve higher utility than state-of-the-art DP SGD approaches in most cases. Our code is publicly available at https://github.com/ai-secure/datalens.
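A rough sketch of the top-k compression plus noise-injection idea behind a TopAgg-style aggregation. This is simplified: the function names, the use of sign quantization on the surviving coordinates, and the clip-then-noise calibration are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude coordinates of a teacher gradient."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = np.sign(grad[idx])       # sign quantization of the surviving entries
    return sparse

def topagg_aggregate(teacher_grads, k, clip_norm, sigma, rng):
    """Aggregate compressed teacher votes and add Gaussian noise (DP sketch)."""
    total = np.zeros_like(teacher_grads[0])
    for g in teacher_grads:
        v = topk_compress(g, k)
        v = v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12))  # bound each teacher's vote
        total += v
    return total + rng.normal(0.0, sigma * clip_norm, size=total.shape)

# toy usage: 20 teachers voting on a 100-dimensional gradient
rng = np.random.default_rng(0)
teachers = [rng.normal(size=100) for _ in range(20)]
agg = topagg_aggregate(teachers, k=10, clip_norm=1.0, sigma=2.0, rng=rng)
```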
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
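The recipe described above — clip each example's gradient, average, and add Gaussian noise — is small enough to sketch directly. This is a minimal numpy illustration for a generic loss, not the paper's implementation, and it omits the moments-accountant bookkeeping.

```python
import numpy as np

def dp_sgd_step(params, per_example_grad_fn, batch, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each example's gradient, sum, add Gaussian noise, average.

    per_example_grad_fn(params, x) must return the gradient for a single example x.
    """
    grads = np.stack([per_example_grad_fn(params, x) for x in batch])  # (B, d)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / (norms + 1e-12))       # per-example clipping
    noisy_mean = (grads.sum(axis=0)
                  + rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
                  ) / len(batch)
    return params - lr * noisy_mean

# toy example: least-squares on random data, one private step
rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
grad_fn = lambda w, i: 2 * (X[i] @ w - y[i]) * X[i]
w = dp_sgd_step(np.zeros(5), grad_fn, batch=range(32), lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```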
Adaptive optimization methods have become the default solvers for many machine learning tasks. Unfortunately, the benefits of adaptivity may degrade when training with differential privacy, as the noise added to ensure privacy reduces the effectiveness of adaptive preconditioning. To this end, we propose AdaDPS, a general framework that uses non-sensitive side information to precondition the gradients, allowing adaptive methods to be used effectively in private settings. We formally show that AdaDPS reduces the amount of noise needed to achieve similar privacy guarantees, thereby improving optimization performance. Empirically, we leverage simple and readily available side information to explore the performance of AdaDPS in practice, comparing against strong baselines in both centralized and federated settings. Our results show that AdaDPS improves accuracy by 7.7% (absolute) on average, yielding state-of-the-art privacy-utility trade-offs on large-scale text and image benchmarks.
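One way to picture preconditioning with non-sensitive side information is a diagonal, RMSProp-style preconditioner estimated from public gradients and applied before the private clip-and-noise step. The sketch below follows that reading; the names and update rule are illustrative rather than the paper's exact algorithm.

```python
import numpy as np

def precondition_then_privatize(private_grads, public_grad, state,
                                clip_norm, sigma, beta=0.9, eps=1e-8, rng=None):
    """Scale per-example private gradients by a preconditioner estimated from
    non-sensitive (public) gradients, then clip and add noise as usual.

    private_grads: (B, d) per-example gradients on private data
    public_grad:   (d,)  gradient estimated on public/side data (no privacy cost)
    state:         running second-moment estimate, shape (d,)
    """
    rng = rng or np.random.default_rng()
    state = beta * state + (1 - beta) * public_grad ** 2          # side-info preconditioner
    pre = private_grads / (np.sqrt(state) + eps)                  # precondition BEFORE clipping
    norms = np.linalg.norm(pre, axis=1, keepdims=True)
    pre = pre * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noisy = (pre.sum(0) + rng.normal(0, sigma * clip_norm, size=state.shape)) / len(pre)
    return noisy, state

# toy usage with synthetic gradients
rng = np.random.default_rng(0)
g_priv, g_pub = rng.normal(size=(16, 20)), rng.normal(size=20)
update, state = precondition_then_privatize(g_priv, g_pub, np.ones(20),
                                            clip_norm=1.0, sigma=1.0, rng=rng)
```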
When applied to large-scale learning problems, the conventional wisdom on privacy-preserving deep learning, known as differentially private stochastic gradient descent (DP-SGD), has met with limited success due to significant performance degradation and high memory overhead compared to its non-private counterpart. We show how to mitigate the performance drop by replacing DP-SGD with a novel DP forward propagation (DP-FP) followed by an off-the-shelf non-DP optimizer. Our DP-FP employs novel (1) representation clipping followed by noise addition in the forward-propagation stage, and (2) micro-batch construction via subsampling to achieve DP amplification and reduce the noise power to $1/M$, where $M$ is the number of micro-batches in a step. When training a classification model, our DP-FP, which applies all privacy-preserving operations to the representation, is naturally free of gradient bias, of total noise that scales with model size, and of the memory issues of DP-SGD. As a result, our DP-FP outperforms cutting-edge DP-SGD while retaining the same level of privacy, approaching non-private baselines and significantly outperforming state-of-the-art DP-SGD variants. For example, when applied to RoBERTa-large on four downstream tasks, DP-FP achieves an average accuracy of 91.34\% with a privacy budget of less than 3, representing a 3.81\% performance improvement over the state-of-the-art DP-SGD and only a 0.9\% loss compared to the non-private baseline, but with a significantly lower risk of privacy leakage.
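A toy sketch of the forward-propagation view: clip each example's representation and add noise there, then hand the privatized representations to an ordinary non-DP optimizer for the classification head. This is an illustrative simplification; the micro-batch construction used for DP amplification is only noted in a comment.

```python
import numpy as np

def dp_forward(representations, clip_norm, sigma, rng):
    """Clip each example's hidden representation to clip_norm, then add Gaussian noise.

    representations: (B, d) encoder outputs. The privatized outputs can then be fed to
    a standard, non-DP optimizer for the head. (The paper additionally builds
    micro-batches via subsampling for privacy amplification; omitted here.)
    """
    norms = np.linalg.norm(representations, axis=1, keepdims=True)
    clipped = representations * np.minimum(1.0, clip_norm / (norms + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)

rng = np.random.default_rng(0)
private_reps = dp_forward(rng.normal(size=(64, 128)), clip_norm=1.0, sigma=0.5, rng=rng)
```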
Membership inference attacks allow an adversary with access to a trained machine learning model to predict whether a particular example was contained in the model's training dataset. These attacks are currently evaluated using average-case "accuracy" metrics, which fail to characterize whether an attack can confidently identify any members of the training set. We argue that attacks should instead be evaluated by computing their true-positive rate at low (e.g., <0.1%) false-positive rates, and find that most prior attacks perform poorly when evaluated this way. To address this, we develop a Likelihood Ratio Attack (LiRA) that carefully combines multiple ideas from the literature. Our attack is 10x more powerful at low false-positive rates and also strictly dominates prior attacks on existing metrics.
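A stripped-down sketch of the likelihood-ratio idea: estimate how a per-example statistic (e.g., a logit-scaled confidence) is distributed across shadow models trained with and without the example, then score the target model by the log-ratio of the two likelihoods. Names are illustrative, and the paper's refinements (per-example variance modeling, logit scaling, query augmentation) are omitted.

```python
import numpy as np
from scipy.stats import norm

def lira_score(observed_stat, stats_in, stats_out):
    """Log-likelihood ratio that the observed statistic came from an IN-model.

    observed_stat: scalar statistic of the target model on the candidate example.
    stats_in:      statistics from shadow models trained WITH the example.
    stats_out:     statistics from shadow models trained WITHOUT the example.
    """
    mu_in, sd_in = np.mean(stats_in), np.std(stats_in) + 1e-8
    mu_out, sd_out = np.mean(stats_out), np.std(stats_out) + 1e-8
    return norm.logpdf(observed_stat, mu_in, sd_in) - norm.logpdf(observed_stat, mu_out, sd_out)

# toy usage with synthetic shadow-model statistics
rng = np.random.default_rng(0)
score = lira_score(2.1, stats_in=rng.normal(2.0, 0.5, 64), stats_out=rng.normal(0.0, 0.5, 64))
print(score)  # large positive score => predict "member"
```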
Differentially private (DP) learning has seen limited success for building large deep-learning models of text, and straightforward attempts to apply differentially private stochastic gradient descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated by (1) using large pretrained models; (2) hyperparameters suited to DP optimization; and (3) fine-tuning objectives aligned with the pretraining procedure. With these factors set correctly, we obtain private NLP models that outperform state-of-the-art private training approaches and strong non-private baselines, by directly fine-tuning pretrained models with DP optimization on moderately sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory-saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any layer in the model. The technique enables privately training Transformers with almost the same memory cost as non-private training, at a modest runtime overhead. Contrary to the conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), our empirical results reveal that private learning with pretrained models tends not to suffer from dimension-dependent performance degradation.
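One intuition behind clipping without per-example gradients: for a (non-sequential) linear layer, the per-example weight gradient is the outer product of the layer input $a_i$ and the output gradient $b_i$, so its Frobenius norm equals $\|a_i\|\,\|b_i\|$ and can be computed from quantities backpropagation already produces. The numpy check below verifies this identity; it is only an illustration of the principle, not the paper's full memory-saving algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d_in, d_out = 4, 6, 3
a = rng.normal(size=(B, d_in))    # layer inputs (activations), one row per example
b = rng.normal(size=(B, d_out))   # gradients w.r.t. the layer outputs

# Per-example weight gradients materialized explicitly: outer products a_i b_i^T.
explicit = np.einsum('bi,bo->bio', a, b)                     # shape (B, d_in, d_out)
explicit_norms = np.linalg.norm(explicit.reshape(B, -1), axis=1)

# "Ghost" norms: no per-example gradient tensor is ever built.
ghost_norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)

assert np.allclose(explicit_norms, ghost_norms)
print(ghost_norms)   # same values, using only O(B * (d_in + d_out)) memory
```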
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies the question through the lens of a powerful informed adversary who knows all the training data points except one. By instantiating concrete attacks, we show that reconstructing the remaining data point in this stringent threat model is feasible. For convex models (e.g., logistic regression), reconstruction attacks are simple and can be derived in closed form. For more general models (e.g., neural networks), we propose an attack strategy based on training a reconstructor network that receives as input the weights of the model under attack and produces the target data point. We demonstrate the effectiveness of our attack on image classifiers trained on MNIST and CIFAR-10, and systematically investigate which factors of standard machine learning pipelines affect reconstruction success. Finally, we theoretically investigate how much differential privacy suffices to mitigate reconstruction attacks by informed adversaries. Our work provides an effective reconstruction attack that model developers can use to assess memorization of individual points in general settings beyond those considered in previous works (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where utility degradation is minimal.
We consider training models with differential privacy (DP) using mini-batch gradients. The existing state of the art, differentially private stochastic gradient descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements of exact sampling and shuffling can be hard to satisfy in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification. The code is available at https://github.com/google-research/federated/tree/master/dp_ftrl and https://github.com/google-research/dp-ftrl.
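The central primitive here is releasing noisy prefix sums of gradients with the binary-tree mechanism instead of relying on sampling or shuffling for amplification. Below is a minimal, illustrative scalar sketch (not the released code): each released prefix sum touches only O(log t) cached noise terms, and the cached noise is reused across steps.

```python
import numpy as np

class TreeAggregator:
    """Binary-tree mechanism for noisy prefix sums (sketch, scalar case)."""

    def __init__(self, sigma, rng):
        self.sigma, self.rng = sigma, rng
        self.node_noise = {}   # (level, index) -> cached Gaussian noise for that tree node
        self.values = []       # raw per-step values (already clipped gradients)

    def _noise(self, level, index):
        key = (level, index)
        if key not in self.node_noise:
            self.node_noise[key] = self.rng.normal(0.0, self.sigma)
        return self.node_noise[key]

    def noisy_prefix_sum(self, value):
        """Append this step's value and release the noisy sum of all values so far."""
        self.values.append(value)
        t = len(self.values)
        total, start = sum(self.values), 0
        # Dyadic decomposition of [0, t): add one noise term per covering tree node.
        for level in range(t.bit_length(), -1, -1):
            size = 1 << level
            if start + size <= t:
                total += self._noise(level, start // size)
                start += size
        return total

rng = np.random.default_rng(0)
agg = TreeAggregator(sigma=0.5, rng=rng)
for g in [0.3, -0.1, 0.4, 0.2, -0.3]:
    print(agg.noisy_prefix_sum(g))
```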
Differentially private stochastic gradient descent (DPSGD) is a variation of stochastic gradient descent based on the differential privacy (DP) paradigm, which can mitigate privacy threats arising from the presence of sensitive information in training data. However, a major drawback of training deep neural networks with DPSGD is a reduction in model accuracy. In this paper, we study the effect of normalization layers on the performance of DPSGD. We demonstrate that normalization layers significantly impact the utility of deep neural networks with noisy parameters and should be considered an essential ingredient of DPSGD training. In particular, we propose a novel method for integrating batch normalization with DPSGD without incurring additional privacy loss. With our approach, we are able to train deeper networks and achieve a better utility-privacy trade-off.
Privacy noise may negate the benefits of using adaptive optimizers in differentially private model training. Prior works typically address this issue by using auxiliary information (e.g., public data) to boost the effectiveness of adaptive optimization. In this work, we explore techniques to estimate and efficiently adapt to gradient geometry in private adaptive optimization without auxiliary data. Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP^2), a simple method that constructs delayed but less noisy preconditioners to better realize the benefits of adaptivity. Theoretically, we provide convergence guarantees for our method for both convex and non-convex problems, and analyze trade-offs between delay and privacy noise reduction. Empirically, we explore DP^2 across several real-world datasets, demonstrating that it can improve convergence speed by as much as 4x relative to non-adaptive baselines and match the performance of state-of-the-art optimization methods that require auxiliary data.
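A rough sketch of the delayed-preconditioner idea: privatized gradients from the last few steps are averaged into a stale but less noisy second-moment estimate that is refreshed only every `delay` steps and then used RMSProp-style. The class and update details below are illustrative, not the exact DP^2 algorithm.

```python
import numpy as np

class DelayedPreconditioner:
    """Refresh a diagonal preconditioner only every `delay` steps, from averaged noisy grads."""

    def __init__(self, dim, delay, eps=1e-8):
        self.delay, self.eps = delay, eps
        self.buffer = []                      # recent privatized gradients
        self.precond = np.ones(dim)           # current (stale) second-moment estimate

    def step(self, noisy_grad):
        self.buffer.append(noisy_grad)
        if len(self.buffer) == self.delay:    # averaging over the window damps the DP noise
            avg = np.mean(self.buffer, axis=0)
            self.precond = avg ** 2 + self.eps
            self.buffer = []
        return noisy_grad / np.sqrt(self.precond)   # adaptive update with the delayed preconditioner

rng = np.random.default_rng(0)
pre = DelayedPreconditioner(dim=10, delay=5)
for _ in range(12):
    g = rng.normal(size=10) + rng.normal(0, 0.5, size=10)   # stand-in for a clipped+noised gradient
    update = pre.step(g)
```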
We propose a reparametrization scheme that addresses the challenges of applying differentially private SGD to large neural networks, namely (1) the huge memory cost of storing individual gradients and (2) the added noise, whose magnitude scales notoriously with model dimension. Specifically, we reparametrize each weight matrix with two \emph{gradient-carrier} matrices of small dimension and one \emph{residual} matrix. We argue that this reparametrization keeps the forward/backward process unchanged while enabling us to compute the projected gradient without computing the gradient itself. To learn with differential privacy, we design \emph{reparametrized gradient perturbation (RGP)}, which perturbs the gradients on the gradient-carrier matrices and reconstructs an update for the original weights from the noisy gradients. Importantly, we use historical updates to find the gradient-carrier matrices, whose optimality is rigorously justified under linear regression and empirically verified on deep learning tasks. RGP significantly reduces the memory cost and improves utility. For example, we are the first to be able to apply differential privacy to the BERT model, achieving an average accuracy of 83.9\% on four downstream tasks with $\epsilon=8$, which is comparable to the non-private baseline while enjoying a much lower risk of privacy leakage.
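A simplified sketch of the reparametrized-gradient-perturbation flow for a single weight matrix: derive low-rank carriers from a historical update (here via a truncated SVD for brevity; the paper uses power iterations), project per-example gradients onto the carriers, clip and noise in the low-dimensional space, and map the noisy update back to the full weight. Illustrative only, not the paper's implementation.

```python
import numpy as np

def rgp_update(per_example_grads, history, rank, clip_norm, sigma, rng):
    """per_example_grads: (B, p, q) gradients of one weight matrix W of shape (p, q).
    history: (p, q) historical update used to choose the low-rank gradient carriers."""
    U, _, Vt = np.linalg.svd(history, full_matrices=False)
    L, R = U[:, :rank], Vt[:rank, :]                 # gradient carriers, shapes (p, r) and (r, q)

    projected = np.einsum('rp,bpq,qs->brs', L.T, per_example_grads, R.T)   # (B, r, r)
    flat = projected.reshape(len(projected), -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    flat = flat * np.minimum(1.0, clip_norm / (norms + 1e-12))             # clip in low dimension
    noisy = flat.sum(0) + rng.normal(0, sigma * clip_norm, size=flat.shape[1])
    small = (noisy / len(flat)).reshape(rank, rank)

    return L @ small @ R                             # map the noisy update back to shape (p, q)

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 20, 30))                 # 8 per-example gradients of a 20x30 weight
hist = rng.normal(size=(20, 30))
update = rgp_update(grads, hist, rank=4, clip_norm=1.0, sigma=1.0, rng=rng)
```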
Differential privacy is a strong notion for privacy that can be used to prove formal guarantees, in terms of a privacy budget, $\epsilon$, about how much information is leaked by a mechanism. However, implementations of privacy-preserving machine learning often select large values of $\epsilon$ in order to get acceptable utility of the model, with little understanding of the impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures are used, differential privacy variants that offer tighter analyses are used which appear to reduce the needed privacy budget but present poorly understood trade-offs between privacy and utility. In this paper, we quantify the impact of these choices on privacy in experiments with logistic regression and neural network models. Our main finding is that there is a huge gap between the upper bounds on privacy loss that can be guaranteed, even with advanced mechanisms, and the effective privacy loss that can be measured using current inference attacks. Current mechanisms for differentially private machine learning rarely offer acceptable utility-privacy trade-offs with guarantees for complex learning tasks: settings that provide limited accuracy loss provide meaningless privacy guarantees, and settings that provide strong privacy guarantees result in useless models.
Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training of deep learning models. However, the choice of the clipping norm $R$ is crucial for achieving high accuracy under DP. We propose an easy-to-use replacement, called automatic clipping, which eliminates the need to tune $R$ for any DP optimizer, including DP-SGD, DP-Adam, DP-LAMB, and many others. The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters, making DP training as amenable as standard non-private training. We give a rigorous convergence analysis of automatic DP-SGD in the non-convex setting, showing that it enjoys an asymptotic convergence rate matching that of standard SGD. We also demonstrate on various language and vision tasks that automatic clipping outperforms or matches the state of the art, and can be easily employed with minimal changes to existing codebases.
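The replacement itself is small: instead of clipping each per-example gradient to a tuned norm $R$, the gradient is normalized, with a small stability constant, so every example contributes a vector of norm below one and $R$ disappears as a hyperparameter. A minimal numpy sketch under that reading (not the authors' code):

```python
import numpy as np

def automatic_clipping_step(per_example_grads, sigma, gamma=0.01, rng=None):
    """Per-example gradient normalization in place of Abadi-style clipping.

    Each gradient g_i is rescaled to g_i / (||g_i|| + gamma), so every example
    contributes a vector of norm < 1 and no clipping threshold R needs tuning.
    """
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    normalized = per_example_grads / (norms + gamma)
    noisy_sum = normalized.sum(axis=0) + rng.normal(0.0, sigma, size=per_example_grads.shape[1])
    return noisy_sum / len(per_example_grads)      # feed to SGD/Adam/LAMB as usual

grads = np.random.default_rng(0).normal(size=(16, 10))
update = automatic_clipping_step(grads, sigma=1.0)
```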
We review the use of differential privacy (DP) for privacy protection in machine learning (ML). We show that, driven by the desire to preserve the accuracy of the learned models, DP-based ML implementations are so loose that they do not offer the ex ante privacy guarantees of DP. Instead, what they deliver is essentially noise addition similar to the traditional (and often criticized) statistical disclosure control methods. Due to the lack of formal privacy guarantees, the actual level of privacy offered must be evaluated experimentally, which is seldom done. In this respect, we present empirical results showing that standard anti-overfitting techniques in ML can achieve a better utility/privacy/efficiency trade-off than DP.
Distributing machine learning predictors enables the collection of large-scale datasets while leaving sensitive raw data at trustworthy sites. We show that locally training support vector machines (SVMs) and computing their averages leads to a learning technique that is scalable to a large number of users, satisfies differential privacy, and is applicable to non-trivial tasks, such as CIFAR-10. For a large number of participants, communication cost is one of the main challenges. We achieve a low communication cost by requiring only a single invocation of an efficient secure multiparty summation protocol. By relying on state-of-the-art feature extractors (SimCLR), we are able to utilize differentially private convex learners for non-trivial tasks such as CIFAR-10. Our experimental results illustrate that for $1{,}000$ users with $50$ data points each, our scheme outperforms state-of-the-art scalable distributed learning methods (differentially private federated learning, short DP-FL) while requiring around $500$ times fewer communication costs: For CIFAR-10, we achieve a classification accuracy of $79.7\,\%$ for an $\varepsilon = 0.59$ while DP-FL achieves $57.6\,\%$. More generally, we prove learnability properties for the average of such locally trained models: convergence and uniform stability. By only requiring strongly convex, smooth, and Lipschitz-continuous objective functions, locally trained via stochastic gradient descent (SGD), we achieve a strong utility-privacy tradeoff.
Gradient leakage attacks are considered one of the most insidious privacy threats in deep learning, because an attacker can covertly collect gradient updates during iterative training without affecting model training quality, yet use the leaked gradients to progressively reconstruct sensitive training data with a high attack success rate. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with differential privacy guarantees, we show that differentially private algorithms with fixed privacy parameters are vulnerable to gradient leakage attacks. This paper investigates alternative approaches to gradient-leakage-resilient deep learning with differential privacy (DP). First, we analyze existing implementations of differentially private deep learning, which use fixed privacy parameters to inject constant noise, with a fixed noise variance, into the gradients of all layers. Despite providing a DP guarantee, this approach suffers from low accuracy and remains vulnerable to gradient leakage attacks. Second, we propose a gradient-leakage-resilient approach to deep learning with differential privacy guarantees that uses dynamic privacy parameters. Unlike fixed-parameter strategies that result in a constant noise variance, different dynamic-parameter strategies offer alternative techniques for introducing adaptive noise variance and adaptive noise injection that closely align with the trend of gradient updates during differentially private model training. Finally, we describe four complementary metrics for evaluating and comparing the alternative approaches.
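One way to read the dynamic-privacy-parameter idea is as a noise-variance schedule that changes across training rounds rather than staying fixed. The sketch below plugs a simple decaying schedule into a clip-and-noise step; the concrete schedules, their alignment with gradient-update trends, and the matching privacy accounting from the paper are not reproduced here.

```python
import numpy as np

def noise_multiplier_schedule(step, total_steps, sigma_start=2.0, sigma_end=0.8):
    """Linearly decay the noise multiplier over training (one illustrative dynamic policy)."""
    frac = step / max(1, total_steps - 1)
    return sigma_start + frac * (sigma_end - sigma_start)

def adaptive_dp_step(per_example_grads, clip_norm, step, total_steps, rng):
    sigma_t = noise_multiplier_schedule(step, total_steps)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noise = rng.normal(0.0, sigma_t * clip_norm, size=per_example_grads.shape[1])
    # NOTE: with a varying sigma_t, the privacy accountant must track each round's parameters.
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
g = rng.normal(size=(32, 10))
update = adaptive_dp_step(g, clip_norm=1.0, step=3, total_steps=100, rng=rng)
```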