As machine learning becomes increasingly prevalent throughout society, aspects such as data privacy and fairness must be considered carefully and are crucial for deployment in highly regulated industries. Unfortunately, the application of privacy-enhancing technologies can worsen unfair tendencies in models. In particular, one of the most widely used techniques for private model training, differentially private stochastic gradient descent (DPSGD), frequently intensifies disparate impact on groups within the data. In this work we study the fine-grained causes of unfairness in DPSGD and identify gradient misalignment due to inequitable gradient clipping as the most significant source. This observation leads us to a new method for reducing unfairness by preventing gradient misalignment in DPSGD.
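As a rough illustration of the diagnostic this abstract points to, the following numpy sketch measures how much per-example clipping rotates the batch-average gradient (cosine similarity between clipped and unclipped averages); the function names, clipping bound C, and toy two-group data are illustrative assumptions, not the paper's code.

```python
import numpy as np

def clip_per_sample(grads, C):
    """Scale each per-sample gradient to L2 norm at most C (standard DP-SGD clipping)."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, C / (norms + 1e-12))
    return grads * factors

def misalignment(grads, C):
    """Cosine similarity between the clipped and unclipped batch-average gradients.
    Values well below 1 indicate that clipping has rotated the update direction."""
    g_true = grads.mean(axis=0)
    g_clip = clip_per_sample(grads, C).mean(axis=0)
    return float(g_true @ g_clip /
                 (np.linalg.norm(g_true) * np.linalg.norm(g_clip) + 1e-12))

# toy example: a minority group with much larger gradients is clipped hardest
rng = np.random.default_rng(0)
grads = np.vstack([rng.normal(0, 1, (90, 10)), rng.normal(5, 1, (10, 10))])
print(misalignment(grads, C=1.0))
```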
Differential privacy (DP) is an important privacy-enhancing technology for private machine learning systems. It allows the risks associated with an individual's participation in a computation to be quantified and bounded. However, it was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals. This paper builds on these important observations and sheds light on the causes of the disparate impacts arising in differentially private empirical risk minimization problems. It focuses on the accuracy disparity arising among groups of individuals in two well-studied DP learning methods: output perturbation and differentially private stochastic gradient descent. The paper analyzes which data and model properties are responsible for the disproportionate impacts, why these aspects affect different groups disproportionately, and proposes guidelines to mitigate these effects. The proposed approach is evaluated on several datasets and settings.
Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.
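Since this abstract introduces the DP-SGD recipe (per-example gradient clipping followed by Gaussian noise), a minimal PyTorch sketch of one training step may help; the per-example loop is the simplest (not the fastest) way to obtain per-sample gradients, and the constants and function name are placeholders rather than the paper's implementation.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step: clip each example's gradient to clip_norm, sum, add Gaussian
    noise with std = noise_mult * clip_norm, then average and apply the update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(batch_x, batch_y):                       # per-example gradients
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (norm.item() + 1e-12))  # clip to clip_norm
        for s, g in zip(summed, grads):
            s.add_(g, alpha=scale)
    n = len(batch_x)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p.add_((s + noise) / n, alpha=-lr)                # noisy averaged update
```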
This paper surveys recent work at the intersection of differential privacy (DP) and fairness. It reviews the conditions under which privacy and fairness may have aligned or contrasting goals, analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks, and describes the available mitigations for the fairness issues arising in DP systems. The survey provides a unified understanding of the main challenges and potential risks that arise when deploying privacy-preserving learning or decision-making tasks under a fairness lens.
Privacy noise may negate the benefits of using adaptive optimizers in differentially private model training. Prior works typically address this issue by using auxiliary information (e.g., public data) to boost the effectiveness of adaptive optimization. In this work, we explore techniques to estimate and efficiently adapt to gradient geometry in private adaptive optimization without auxiliary data. Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP^2), a simple method that constructs delayed but less noisy preconditioners to better realize the benefits of adaptivity. Theoretically, we provide convergence guarantees for our method for both convex and non-convex problems, and analyze trade-offs between delay and privacy noise reduction. Empirically, we explore DP^2 across several real-world datasets, demonstrating that it can improve convergence speed by as much as 4x relative to non-adaptive baselines and match the performance of state-of-the-art optimization methods that require auxiliary data.
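A loose numpy sketch of the general idea described above, applying an RMSProp-style preconditioner that is refreshed only every few steps from gradients accumulated in between so each refresh averages away part of the privacy noise; the exact averaging, scaling, and privatization used by DP^2 differ, so everything below is an assumption for illustration.

```python
import numpy as np

class DelayedPreconditioner:
    """Illustrative delayed preconditioner: the second-moment estimate is refreshed
    only every `delay` steps from noisy gradients accumulated in between."""
    def __init__(self, dim, delay=10, eps=1e-8):
        self.v = np.ones(dim)          # current (stale) preconditioner
        self.buffer = np.zeros(dim)    # accumulated noisy gradients since last refresh
        self.delay, self.eps, self.t = delay, eps, 0

    def precondition(self, noisy_grad):
        self.buffer += noisy_grad ** 2
        self.t += 1
        if self.t % self.delay == 0:   # refresh from the averaged buffer, then reset
            self.v = self.buffer / self.delay
            self.buffer[:] = 0.0
        return noisy_grad / (np.sqrt(self.v) + self.eps)
```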
Differential privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. The most popular DP training method, differentially private stochastic gradient descent (DP-SGD), realizes this protection by injecting noise during training. However, previous work has found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform far better than previously thought. Combining careful hyperparameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we obtain a new SOTA on CIFAR-10 without additional data of 81.4% under $(8, 10^{-5})$-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve 83.8% top-1 accuracy on ImageNet under $(0.5, 8\cdot10^{-7})$-DP. Additionally, we achieve 86.7% top-1 accuracy under $(8, 8\cdot10^{-7})$-DP, only 4.3% below the current non-private SOTA. We believe our results are a significant step towards closing the accuracy gap between private and non-private image classification.
By ensuring differential privacy in the learning algorithm, one can rigorously mitigate the risk of large models memorizing sensitive training data. In this paper, we study two algorithms for this purpose, DP-SGD and DP-NSGD, which first clip or normalize \textit{per-sample} gradients to bound the sensitivity and then add noise to obfuscate the exact information. We analyze the convergence behavior of these two algorithms in the non-convex optimization setting under two common assumptions and achieve a rate of $\mathcal{O}\left(\sqrt[4]{\frac{d\log(1/\delta)}{n^2\epsilon^2}}\right)$ for a $d$-dimensional model, $n$ samples, and $(\epsilon,\delta)$-DP, which improves over previous bounds under weaker assumptions. Specifically, we introduce a regularizing factor in DP-NSGD and show that it is crucial to the convergence proof and subtly controls the bias-noise trade-off. Our proofs deliberately handle the per-sample gradient clipping and normalization specific to the private setting. Empirically, we demonstrate that the two algorithms achieve similar best accuracies, while DP-NSGD is comparatively easier to tune than DP-SGD, and may therefore help further save the privacy budget when accounting for the tuning effort.
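To make the distinction concrete, here is a minimal numpy sketch contrasting per-sample clipping (DP-SGD) with per-sample normalization using a regularizing factor (in the spirit of DP-NSGD); the exact scaling used in the paper may differ, so treat this as an assumption-based illustration.

```python
import numpy as np

def clip(g, C):
    """DP-SGD style: rescale only gradients whose norm exceeds the bound C."""
    return g * min(1.0, C / (np.linalg.norm(g) + 1e-12))

def normalize(g, r):
    """DP-NSGD style (as sketched here): always rescale by 1 / (||g|| + r); the
    regularizing factor r keeps small gradients from being blown up and controls
    the bias/noise trade-off mentioned in the abstract."""
    return g / (np.linalg.norm(g) + r)
```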
The recent success of deep neural networks (DNNs) hinges on the availability of large-scale datasets; however, training on such datasets often poses privacy risks for sensitive training information. In this paper, we aim to explore the power of generative models and gradient sparsity, and propose DataLens, a scalable privacy-preserving generative model. Compared with standard privacy-preserving frameworks that allow teachers to vote on one-dimensional predictions, voting on high-dimensional gradient vectors is challenging in terms of privacy preservation. As dimension-reduction techniques are required, we need to navigate a delicate trade-off space between (1) the improvement of convergence and (2) the slowdown of SGD convergence. To address this, we leverage communication-efficient learning and propose a novel noise compression and aggregation method, TopAgg, which combines top-k compression with a corresponding noise injection mechanism. Theoretically, we prove that the DataLens framework guarantees differential privacy for its generated data, and provide an analysis of its convergence. To demonstrate the practical usage of DataLens, we conduct extensive experiments on diverse datasets, including MNIST, Fashion-MNIST, and the high-dimensional CelebA, and show that DataLens significantly outperforms other baseline DP generative models. In addition, we adapt the proposed TopAgg approach, one of the main building blocks, to DP SGD training, and show that it achieves higher utility than state-of-the-art DP SGD methods in most cases. Our code is publicly available at https://github.com/ai-secure/datalens.
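The TopAgg building block described above, top-k sparsification of each gradient followed by noise injection before aggregation, can be sketched roughly as below; sign quantization and the exact thresholding used by DataLens are omitted, so this is a simplified, assumption-laden illustration.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude coordinates of a gradient vector."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def noisy_aggregate(per_teacher_grads, k, sigma):
    """Sparsify each teacher's gradient, sum, and add Gaussian noise before release."""
    agg = sum(top_k_sparsify(g, k) for g in per_teacher_grads)
    return agg + np.random.normal(0.0, sigma, size=agg.shape)
```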
We study the learning difficulties that arise from robust and differentially private optimization. We first study the convergence of gradient-descent-based adversarial training with differential privacy, using linearly separable data as an illustrative example. We compare the gap between adversarial and nominal risk in private and non-private settings, showing that the data-dimension-dependent term introduced by private optimization compounds the difficulty of learning a robust model. Following this, we discuss which parts of adversarial training and differential privacy hurt optimization, identifying that both the size of adversarial perturbations and the clipping norm in differential privacy increase the curvature of the loss landscape, implying poorer generalization performance.
Network pruning is a widely used compression technique that can significantly shrink over-parameterized models with minimal loss of accuracy. This paper shows that pruning may create or exacerbate disparate impacts. It sheds light on the factors driving this disparity, showing that differences in gradient norms and in the distance to the decision boundary across groups are responsible for this critical issue. The paper analyzes these factors in detail, providing both theoretical and empirical support, and proposes a simple yet effective solution that mitigates the disparate impacts caused by pruning.
Deep neural networks have strong capabilities of memorizing the underlying training data, which can be a serious privacy concern. An effective solution to this problem is to train models with differential privacy, which provides rigorous privacy guarantees by injecting random noise to the gradients. This paper focuses on the scenario where sensitive data are distributed among multiple participants, who jointly train a model through federated learning (FL), using both secure multiparty computation (MPC) to ensure the confidentiality of each gradient update, and differential privacy to avoid data leakage in the resulting model. A major challenge in this setting is that common mechanisms for enforcing DP in deep learning, which inject real-valued noise, are fundamentally incompatible with MPC, which exchanges finite-field integers among the participants. Consequently, most existing DP mechanisms require rather high noise levels, leading to poor model utility. Motivated by this, we propose Skellam mixture mechanism (SMM), an approach to enforce DP on models built via FL. Compared to existing methods, SMM eliminates the assumption that the input gradients must be integer-valued, and, thus, reduces the amount of noise injected to preserve DP. Further, SMM allows tight privacy accounting due to the nice composition and sub-sampling properties of the Skellam distribution, which are key to accurate deep learning with DP. The theoretical analysis of SMM is highly non-trivial, especially considering (i) the complicated math of differentially private deep learning in general and (ii) the fact that the mixture of two Skellam distributions is rather complex, and to our knowledge, has not been studied in the DP literature. Extensive experiments on various practical settings demonstrate that SMM consistently and significantly outperforms existing solutions in terms of the utility of the resulting model.
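For readers unfamiliar with the distribution the mechanism is named after, the sketch below samples symmetric Skellam noise as the difference of two Poisson draws and adds it to an integer-quantized gradient; it illustrates only the noise distribution, not the full SMM mechanism or its privacy accounting, and the variable names are assumptions.

```python
import numpy as np

def skellam_noise(shape, mu, rng=None):
    """Symmetric Skellam noise: the difference of two independent Poisson(mu) draws.
    It is integer-valued, so it composes with finite-field secure aggregation."""
    rng = rng or np.random.default_rng()
    return rng.poisson(mu, shape).astype(np.int64) - rng.poisson(mu, shape).astype(np.int64)

# e.g. add integer noise to a quantized (integer) gradient before secure aggregation
quantized_grad = np.array([3, -1, 7, 0], dtype=np.int64)
noisy = quantized_grad + skellam_noise(quantized_grad.shape, mu=20.0)
```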
Privacy in AI remains a topic that has drawn attention from researchers and the general public in recent years. As one way to implement privacy-preserving AI, differentially private learning is a framework that enables AI models to use differential privacy (DP). To achieve DP in the learning process, existing algorithms typically limit the magnitude of gradients with a constant clipping threshold, which requires careful tuning due to its significant impact on model performance. As a solution to this issue, recent works NSGD and Auto-S innovatively propose to use normalization instead of clipping to avoid hyperparameter tuning. However, normalization-based approaches like NSGD and Auto-S rely on a monotonic weight function, which imposes excessive weight on small-gradient samples and introduces extra deviation into the update. In this paper, we propose a Differentially Private Per-Sample Adaptive Clipping (DP-PSAC) algorithm based on a non-monotonic adaptive weight function, which guarantees privacy without the typical hyperparameter tuning process of constant clipping while significantly reducing the deviation between the update and the true batch-averaged gradient. We provide a rigorous theoretical convergence analysis and show that, at the same order of convergence rate, the proposed algorithm achieves a lower non-vanishing bound, which is maintained over training iterations, compared with NSGD/Auto-S. In addition, through extensive experimental evaluation, we show that DP-PSAC outperforms or matches the state-of-the-art methods on multiple mainstream vision and language tasks.
Adaptive optimization methods have become the default solvers for many machine learning tasks. Unfortunately, the benefits of adaptivity may degrade when training with differential privacy, as the noise added to ensure privacy reduces the effectiveness of adaptive preconditioning. To this end, we propose AdaDPS, a general framework that uses non-sensitive side information to precondition the gradients, enabling the effective use of adaptive methods in private settings. We formally show that AdaDPS reduces the amount of noise needed to achieve similar privacy guarantees, thereby improving optimization performance. Empirically, we leverage simple and readily available side information to explore the performance of AdaDPS in practice, comparing against strong baselines in both centralized and federated settings. Our results show that AdaDPS improves accuracy by 7.7% (absolute) on average, yielding state-of-the-art privacy-utility trade-offs on large-scale text and image benchmarks.
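A hypothetical numpy sketch of the kind of side-information preconditioning described above: a second-moment estimate computed from non-sensitive (e.g., public) gradients rescales the private gradient before clipping and noising; the function name and the exact order of operations are assumptions, not the AdaDPS algorithm verbatim.

```python
import numpy as np

def adadps_like_step(private_grad, public_grads, clip_norm=1.0, sigma=1.0, eps=1e-8):
    """Illustrative preconditioning with non-sensitive side information: a second-moment
    estimate from public gradients rescales the private gradient *before* clipping and
    noising, so the noise is added in the better-conditioned space."""
    v = np.mean(np.square(public_grads), axis=0)          # side-info preconditioner
    g = private_grad / (np.sqrt(v) + eps)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    return g + np.random.normal(0.0, sigma * clip_norm, size=g.shape)
```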
We consider training models with differential privacy (DP) using mini-batch gradients. The existing state of the art, differentially private stochastic gradient descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements for exact sampling and shuffling can be hard to satisfy in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) with amplified DP-SGD, while allowing much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification. The code is available at https://github.com/google-research/federated/tree/master/dp_ftrl and https://github.com/google-research/dp-ftrl.
While machine learning models trained on massive datasets have led to breakthroughs in several areas, their deployment in privacy-sensitive domains remains limited due to restricted access to data. Generative models trained on private data under privacy constraints can sidestep this challenge, providing indirect access to the private data instead. We propose DP-Sinkhorn, a novel optimal-transport-based generative method for learning data distributions from private data with differential privacy. DP-Sinkhorn minimizes the Sinkhorn divergence, a computationally efficient approximation to the exact optimal transport distance, between the model and the data in a differentially private manner, and uses a novel technique to control the bias-variance trade-off of gradient estimates. Unlike existing approaches for training differentially private generative models, which are mostly based on generative adversarial networks, we do not rely on adversarial objectives, which are notoriously hard to optimize, especially in the presence of the noise imposed by privacy constraints. Hence, DP-Sinkhorn is easy to train and deploy. Experimentally, we improve upon the state of the art on multiple image modeling benchmarks and show differentially private synthesis of informative RGB images. Project page: https://nv-tlabs.github.io/dp-sinkhorn.
Privacy and communication efficiency are important challenges in federated training of neural networks, and combining them remains an open problem. In this work, we develop a method that unifies highly compressed communication and differential privacy (DP). We introduce a compression technique based on Relative Entropy Coding (REC) to the federated setting. With a minor modification to REC, we obtain a provably differentially private learning algorithm, DP-REC, and show how to compute its privacy guarantees. Our experiments demonstrate that DP-REC drastically reduces communication costs while providing privacy guarantees comparable to the state of the art.
A major direction in differentially private machine learning is differentially private fine-tuning: pretraining a model on a source of "public data" and transferring the extracted features to downstream tasks. This is an important setting because many industry deployments fine-tune publicly available feature extractors on proprietary data for downstream tasks. In this paper, we use features extracted from state-of-the-art open source models to solve benchmark tasks in computer vision and natural language processing using differentially private fine-tuning. Our key insight is that by accelerating training, we can quickly drive the model parameters to regions in parameter space where the impact of noise is minimized. In doing so, we recover the same performance as non-private fine-tuning for realistic values of epsilon in [0.01, 1.0] on benchmark image classification datasets including CIFAR100.
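A minimal numpy sketch of the setting described above, assuming the features have already been extracted from a frozen, publicly available backbone: a binary logistic head is trained with per-example clipping and Gaussian noise; the hyperparameters and full-batch updates are illustrative and omit the acceleration techniques the paper relies on.

```python
import numpy as np

def dp_finetune_linear_head(features, labels, epochs=10, lr=0.5, clip=1.0, sigma=1.0, seed=0):
    """Sketch of DP fine-tuning on frozen features: the backbone is public and fixed,
    so only this binary logistic head is trained with per-example clipping + noise."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    w = np.zeros(d)
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-features @ w))
        per_example = features * (probs - labels)[:, None]           # per-example grads
        norms = np.linalg.norm(per_example, axis=1, keepdims=True)
        clipped = per_example * np.minimum(1.0, clip / (norms + 1e-12))
        noisy = clipped.sum(axis=0) + rng.normal(0, sigma * clip, d)
        w -= lr * noisy / n
    return w
```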
We study the problem of differentially private linear regression, where each data point is sampled from a fixed sub-Gaussian style distribution. We propose and analyze a one-pass mini-batch stochastic gradient descent method (DP-AMBSSGD), where the points in each iteration are sampled without replacement. Noise is added for DP, but the noise standard deviation is estimated online. Compared with existing $(\epsilon,\delta)$-DP techniques that have sub-optimal error bounds, DP-AMBSSGD provides nearly optimal error bounds in terms of key parameters such as the dimensionality $d$, the number of samples $n$, and the standard deviation $\sigma$ of the noise in the observations. For example, when the $d$-dimensional covariates are sampled from the normal distribution, the excess error of DP-AMBSSGD due to privacy is $\frac{\sigma^2 d}{n}\left(1+\frac{d}{\epsilon^2 n}\right)$, i.e., the error is meaningful when the number of samples $n = \Omega(d \log d)$, which is the standard operating regime for linear regression. In contrast, the error bounds of existing efficient methods in this setting are $\mathcal{O}\big(\frac{d^3}{\epsilon^2 n^2}\big)$, even for $\sigma = 0$. That is, for constant $\epsilon$, existing techniques require $n = \Omega(d\sqrt{d})$ to provide non-trivial results.
Per-example gradient clipping is a key algorithmic step that enables practical differentially private (DP) training of deep learning models. However, the choice of clipping norm $R$ is critical for achieving high accuracy under DP. We propose an easy-to-use replacement, called automatic clipping, which eliminates the need to tune $R$ for any DP optimizer (including DP-SGD, DP-Adam, DP-LAMB, and others). The automatic variants are as private and computationally efficient as existing DP optimizers, but require no DP-specific hyperparameters, making DP training as amenable as standard non-private training. We give a rigorous convergence analysis of automatic DP-SGD in the non-convex setting, showing that it enjoys an asymptotic convergence rate matching that of standard SGD. We also demonstrate on various language and vision tasks that automatic clipping outperforms or matches the state of the art and can be easily employed with minimal changes to existing codebases.
Federated learning allows many devices to collaborate in the training of machine learning models. As in traditional machine learning, there is a growing concern that models trained with federated learning may exhibit disparate performance for different demographic groups. Existing solutions for measuring and ensuring equal model performance across groups require access to information about group membership, but this access is not always available or desirable, especially given the privacy aspirations of federated learning. We study the feasibility of measuring such performance disparities while protecting the privacy of the user's group membership and of the federated model's performance on the user's data. Protecting both is essential for privacy, because they may be correlated, and thus learning one may reveal the other. On the other hand, from a utility standpoint, the privacy-preserved data should remain correlated to ensure that accurate measurements of the performance disparity are still possible. We achieve both of these goals by developing locally differentially private mechanisms that preserve the correlations between group membership and model performance. To analyze the effectiveness of the mechanisms, we bound their error in estimating the disparity when optimized for a given privacy budget, and validate these bounds on synthetic data. Our results show that the error decreases rapidly with a realistic number of participating clients, demonstrating that, contrary to what prior work suggested, protecting the privacy of protected attributes does not necessarily conflict with identifying disparities in the performance of federated models.
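The sketch below shows only the generic local-DP reporting setting the paper builds on, with each client privatizing its group bit and its "model was correct" bit via randomized response and an assumed even budget split; the paper's actual correlation-preserving mechanism is more involved and is not reproduced here.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report a single private bit under epsilon-local-DP randomized response."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def report(group, correct, epsilon, rng=None):
    """Each client privatizes both its group bit and its 'model was correct' bit,
    splitting the local privacy budget evenly between them (an assumption)."""
    rng = rng or np.random.default_rng()
    return (randomized_response(group, epsilon / 2, rng),
            randomized_response(correct, epsilon / 2, rng))
```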