The performance of differentially private machine learning can be boosted significantly by leveraging the transfer learning capabilities of non-private models pretrained on large public datasets. We critically review this approach. We primarily question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving. We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy. Beyond the privacy considerations of using public data, we further question the utility of this paradigm. We scrutinize whether existing machine learning benchmarks are appropriate for measuring the ability of pretrained models to generalize to sensitive domains, which may be poorly represented in public Web data. Finally, we notice that pretraining has been especially impactful for the largest available models -- models sufficiently large to prohibit end users running them on their own devices. Thus, deploying such models today could be a net loss for privacy, as it would require (private) data to be outsourced to a more compute-powerful third party. We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
Stable Diffusion is a recent open-source image generation model comparable to proprietary models such as DALLE, Imagen, or Parti. Stable Diffusion comes with a safety filter that aims to prevent generating explicit images. Unfortunately, the filter is obfuscated and poorly documented. This makes it hard for users to prevent misuse in their applications, and to understand the filter's limitations and improve it. We first show that it is easy to generate disturbing content that bypasses the safety filter. We then reverse-engineer the filter and find that while it aims to prevent sexual content, it ignores violence, gore, and other similarly disturbing content. Based on our analysis, we argue safety measures in future model releases should strive to be fully open and properly documented to stimulate security contributions from the community.
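To make the kind of filter under discussion concrete, here is a minimal sketch of an embedding-similarity safety check in the spirit of the one described above; this is not the actual Stable Diffusion implementation, and the embedding dimension, number of concepts, and thresholds are placeholder values:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between one image embedding and each concept embedding."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def is_flagged(image_embedding: np.ndarray,
               concept_embeddings: np.ndarray,
               thresholds: np.ndarray) -> bool:
    """Flag the image if it is 'too similar' to any blocked concept.

    image_embedding:    (d,) CLIP-style embedding of the generated image.
    concept_embeddings: (k, d) embeddings of blocked concepts (placeholders here).
    thresholds:         (k,) per-concept similarity cut-offs (placeholders here).
    """
    sims = cosine_similarity(image_embedding, concept_embeddings)
    return bool(np.any(sims > thresholds))

# Toy usage with random placeholder embeddings.
rng = np.random.default_rng(0)
img = rng.normal(size=512)
concepts = rng.normal(size=(17, 512))
thresholds = np.full(17, 0.3)
print(is_flagged(img, concepts, thresholds))
```

A blocklist of this form can only ever cover the concepts it was given embeddings for, which is consistent with the finding above that non-sexual but disturbing content passes through.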
Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model. Such attacks have privacy implications for data owners who share their datasets to train machine learning models. Several existing approaches for property inference attacks against deep neural networks have been proposed, but they all rely on the attacker training a large number of shadow models, which induces a large computational overhead. In this paper, we consider the setting of property inference attacks in which the attacker can poison a subset of the training dataset and query the trained target model. Motivated by our theoretical analysis of model confidences under poisoning, we design an efficient property inference attack, SNAP, which obtains higher attack success and requires a lower amount of poisoning than the state-of-the-art poisoning-based property inference attack of Mahloujifar et al. For example, on the Census dataset, SNAP achieves a 34% higher success rate than Mahloujifar et al. while being 56.5x faster. We also extend the attack to determine whether a given property is present in the training set at all, and to efficiently estimate the exact proportion of the property of interest. We evaluate the attack on multiple properties of varying proportions across four datasets and demonstrate SNAP's generality and effectiveness.
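The abstract does not spell out the distinguishing step, but the flavor of a confidence-based test under poisoning can be illustrated with a toy sketch; the threshold, query confidences, and decision rule below are entirely hypothetical and are not the SNAP algorithm itself:

```python
import numpy as np

def guess_world(target_confidences: np.ndarray, threshold: float) -> int:
    """Toy distinguishing test: after poisoning, the target model's confidence
    on queries tied to the poisoned label tends to differ depending on how
    prevalent the property is in the training set. Compare the average
    confidence to a threshold (assumed here to be derived analytically or from
    a handful of local models) and guess which 'world' the training set is in."""
    return int(target_confidences.mean() > threshold)

# Hypothetical usage: confidences obtained by querying the poisoned target model.
confs = np.array([0.91, 0.88, 0.95, 0.90])
print(guess_world(confs, threshold=0.8))  # 1 -> guess the higher-frequency world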
Machine learning models exhibit two seemingly contradictory phenomena: training data memorization, and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples that appeared early in training are eventually forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure the extent to which models "forget" the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convexity can prevent forgetting from happening in the worst case, standard image and speech models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training with extremely large datasets -- for instance those used to pretrain a model -- may enjoy privacy benefits at the expense of examples seen later.
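One simple way to operationalize this kind of measurement (a hedged sketch, not necessarily the paper's exact protocol) is to run a loss-threshold membership-inference test against checkpoints saved throughout training and watch the attack's accuracy on early-seen examples decay toward chance:

```python
import torch

@torch.no_grad()
def mia_accuracy(model, members, nonmembers, loss_fn, threshold):
    """Loss-threshold membership inference: guess 'member' when the per-example
    loss is below a threshold. Returns the balanced accuracy of that guess."""
    def losses(batch):
        x, y = batch
        return loss_fn(model(x), y, reduction="none")
    tpr = (losses(members) < threshold).float().mean()
    tnr = (losses(nonmembers) >= threshold).float().mean()
    return 0.5 * (tpr + tnr).item()

# Hypothetical usage: `checkpoints` are (step, model) pairs saved during training,
# `early_batch` holds examples seen only early on, `holdout_batch` was never
# trained on. Forgetting shows up as the accuracy drifting toward 0.5 over time.
# for step, model in checkpoints:
#     print(step, mia_accuracy(model, early_batch, holdout_batch,
#                              torch.nn.functional.cross_entropy, threshold=1.0))
```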
Modern neural language models that are widely used across a variety of tasks risk memorizing sensitive information from their training data. As models continue to scale up in parameters, training data, and compute, memorization in language models is both important from a learning-theoretic point of view and practically crucial in real-world applications. An open question in the study of memorization in language models is how to filter out "common" memorization. Indeed, most memorization criteria correlate strongly with the number of occurrences in the training set, capturing "common" memorization such as familiar phrases, public knowledge, or templated text. In this paper, we provide a principled perspective inspired by a taxonomy of human memory in psychology. From this perspective, we formulate the notion of counterfactual memorization, which characterizes how a model's predictions would change if a particular document were omitted during training. We identify and study counterfactually memorized training examples in standard text datasets. We further estimate the influence of each training example on the validation set and on generated text, and show that this can provide direct evidence about the source of memorization at test time.
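Written out, the counterfactual notion sketched above is an expected performance gap between models trained with and without the example; the performance measure M below is a placeholder for whatever per-example metric (e.g., likelihood or per-token accuracy) one chooses:

```latex
% Counterfactual memorization of a training example x (sketch):
% expected performance on x of models trained on subsets S that contain x,
% minus that of models trained on subsets S' that omit x.
\mathrm{mem}(x) \;=\; \mathbb{E}_{S \,\ni\, x}\!\left[ M(f_S, x) \right]
\;-\; \mathbb{E}_{S' \,\not\ni\, x}\!\left[ M(f_{S'}, x) \right]
```

An example with large mem(x) is one the model can only predict well when x itself was in the training set, which is exactly what distinguishes it from "common" memorization of frequently repeated text.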
Differentially private (DP) learning has seen limited success in building large models of text, and straightforward attempts at applying differentially private stochastic gradient descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated by (1) using large pretrained models; (2) hyperparameters suited to DP optimization; and (3) fine-tuning objectives aligned with the pretraining procedure. With these factors set right, we obtain private NLP models that outperform state-of-the-art private training approaches and strong non-private baselines -- by directly applying DP optimization to fine-tune pretrained models on moderately sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory-saving technique that allows the clipping in DP-SGD to run without instantiating per-example gradients for any layer of the model. The technique makes the memory cost of privately training Transformers almost the same as that of non-private training, at a modest run-time overhead. Contrary to the conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), our empirical results show that private learning with pretrained models tends not to suffer from dimension-dependent performance degradation.
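For context on the memory problem that the clipping technique above avoids, here is a minimal sketch of the naive way to do per-example clipping in DP-SGD, which materializes one gradient per example by looping over the batch; this is plain PyTorch showing the baseline cost, not the paper's memory-saving method, and the clip norm and noise multiplier are illustrative:

```python
import torch

def dp_sgd_step(model, loss_fn, batch, optimizer, clip_norm=1.0, noise_mult=1.0):
    """Naive DP-SGD step: per-example gradients are computed one at a time,
    clipped to `clip_norm`, summed, noised, and averaged."""
    xs, ys = batch
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)  # accumulate the clipped per-example gradient

    batch_size = len(xs)
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_mult * clip_norm
        p.grad = (s + noise) / batch_size  # noisy averaged gradient
    optimizer.step()
```

The per-example loop is what blows up time and memory for large Transformers; the technique described above computes the same clipped, noised update without ever forming the individual per-example gradients.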
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and can be adapted to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of a foundation model are inherited by all the models adapted from it downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what their emergent properties imply. To tackle these questions, we believe that much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
Making classifiers robust to adversarial examples is hard. Thus, many defenses tackle the seemingly easier task of detecting perturbed inputs. We show a barrier towards this goal. We prove a general hardness reduction between detection and classification of adversarial examples: given a robust detector for attacks at distance ε (in some metric), we can build a similarly robust (but inefficient) classifier for attacks at distance ε/2. Our reduction is computationally inefficient, and thus cannot be used to build practical classifiers. Instead, it is a useful sanity check to test whether empirical detection results imply something much stronger than the authors presumably anticipated. To illustrate, we revisit 13 detector defenses. For 11 of the 13 cases, we show that the claimed detection results would imply an inefficient classifier with robustness far beyond the state of the art.
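Stated a bit more schematically (the notation here is chosen for illustration; the precise robustness definitions and the construction are in the paper):

```latex
% Hardness reduction claimed above, in schematic form: a detector that is
% robust to adversarial perturbations of size \epsilon (under some metric)
% yields a classifier, possibly computationally inefficient, that is robust
% to perturbations of size \epsilon/2.
\text{robust detector at radius } \epsilon
\;\Longrightarrow\;
\text{robust (inefficient) classifier at radius } \tfrac{\epsilon}{2}
```

The contrapositive is what makes this a useful sanity check: if no classifier with robustness at radius ε/2 is plausible, then a claimed robust detector at radius ε deserves extra scrutiny.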
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
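For readers who want the orchestration pattern pinned down, here is a toy sketch of one round of federated averaging (FedAvg), the canonical algorithm in this setting; the linear model, learning rate, and equal client weighting are simplifying assumptions made only for this illustration:

```python
import numpy as np

def gradient(w, X, y):
    """Mean-squared-error gradient for a linear model (a stand-in for whatever
    model the clients actually train)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def local_update(server_w, X, y, lr=0.05, epochs=5):
    """Client-side step: start from the server weights and train locally;
    the raw data (X, y) never leaves the client."""
    w = server_w.copy()
    for _ in range(epochs):
        w -= lr * gradient(w, X, y)
    return w

def fedavg_round(server_w, client_datasets):
    """Server-side step: broadcast weights, collect locally updated weights,
    and average them (equal client weighting for simplicity)."""
    updates = [local_update(server_w, X, y) for X, y in client_datasets]
    return np.mean(updates, axis=0)

# Toy usage with three synthetic clients sharing the same underlying task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # close to [2, -1]
```

Only model weights cross the network in this sketch, which is the data-minimization property the survey highlights; the open problems it collects (heterogeneous clients, partial participation, privacy of the updates themselves) all complicate this basic round.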
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with stronger robustness to blackbox attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks (Kurakin et al., 2017c). However, subsequent work found that more elaborate black-box attacks could significantly enhance transferability and reduce the accuracy of our models.
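The single-step attack with a random first step can be sketched as follows; this is a hedged PyTorch illustration, and the budget split between the random and gradient steps as well as the pixel range are illustrative choices rather than the paper's exact settings:

```python
import torch

def rand_plus_fgsm(model, loss_fn, x, y, eps=8/255, alpha=4/255):
    """Single-step attack with a preliminary random step: first move off the
    data point with a random sign step of size alpha (to escape the non-smooth
    vicinity of x), then take one gradient-sign step with the remaining budget
    eps - alpha, and finally project back into the eps-ball around x."""
    x_rand = x + alpha * torch.sign(torch.randn_like(x))
    x_rand = x_rand.clone().detach().requires_grad_(True)

    loss = loss_fn(model(x_rand), y)
    grad, = torch.autograd.grad(loss, x_rand)

    x_adv = x_rand + (eps - alpha) * grad.sign()
    x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()
```

Because the gradient is taken at the randomly shifted point rather than at x itself, the attack sidesteps the small curvature artifacts that single-step adversarial training tends to create right at the data points.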