Estimating the generalization error (GE) of deep neural networks (DNNs) is an important task that often relies on the availability of held-out data. The ability to better predict GE from a single training set may yield overarching DNN design principles, reducing the reliance on trial and error along with other performance-assessment advantages. In search of a quantity relevant to GE, we study the mutual information (MI) between the input and the final-layer representation, using the infinite-width DNN limit to bound the MI. An existing input-compression-based GE bound is used to link MI and GE. To the best of our knowledge, this is the first empirical study of that bound. In attempting to empirically falsify the theoretical bound, we find that it is often tight for the best-performing models. Furthermore, it detects randomization of training labels in many cases, reflects robustness to test-time perturbations, and works well with only a few training samples. These results are promising given that input compression is broadly applicable wherever MI can be estimated with confidence.
translated by 谷歌翻译
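For orientation, the input-compression argument linking MI to GE is usually presented in roughly the following form; the statement below is a hedged sketch of the standard reasoning (constants and exact terms vary across presentations, and it is not quoted from the paper). If the representation $T$ partitions the inputs into about $2^{I(X;T)}$ distinguishable cells, the effective hypothesis class for binary classification has size about $2^{2^{I(X;T)}}$, and a Hoeffding-style argument over a training set of size $N$ then gives, with probability at least $1-\delta$,

$$\mathrm{GE} \;\lesssim\; \sqrt{\frac{2^{I(X;T)}\,\ln 2 + \ln(2/\delta)}{2N}}.$$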
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.
translated by 谷歌翻译
Deep neural networks may easily memorize noisy labels present in real-world data, which degrades their ability to generalize. It is therefore important to track and evaluate the robustness of models against noisy label memorization. We propose a metric, called susceptibility, to gauge such memorization for neural networks. Susceptibility is simple and easy to compute during training. Moreover, it does not require access to ground-truth labels and it only uses unlabeled data. We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric. Finally, we show through extensive experiments on datasets with synthetic and real-world label noise that one can utilize susceptibility and the overall training accuracy to distinguish models that maintain a low memorization on the training set and generalize well to unseen clean data.
translated by 谷歌翻译
The weight norm $\|w\|$ and margin $\gamma$ enter learning theory through the normalized margin $\gamma/\|w\|$. Since standard neural-network optimizers do not control the normalized margin, it is hard to test whether this quantity causally relates to generalization. This paper designs a series of experimental studies that explicitly control the normalized margin and thereby address two central questions. First: does the normalized margin always have a causal effect on generalization? The paper finds that no; networks can be produced in which the normalized margin appears to bear no relationship to generalization, contrary to the theory of Bartlett et al. (2017). Second: does the normalized margin ever have a causal effect on generalization? The paper finds that yes; in standard training settings, test performance closely tracks the normalized margin. The paper puts forward a Gaussian-process model as a promising explanation for this behavior.
translated by 谷歌翻译
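As a concrete illustration of the quantity under study, the following is a minimal sketch (my own toy example, not the paper's experimental code) that computes per-example margins and a simple weight-norm proxy for a small random network and reports the normalized margin $\gamma/\|w\|$. Using the product of layer Frobenius norms as $\|w\|$ is just one of several conventions in this literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network with random weights (stand-in for a trained model).
d, h, k, n = 20, 64, 5, 200
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
W2 = rng.normal(size=(k, h)) / np.sqrt(h)

X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)

def logits(X):
    return np.maximum(X @ W1.T, 0.0) @ W2.T  # shape (n, k)

Z = logits(X)
correct = Z[np.arange(n), y]
Z_other = Z.copy()
Z_other[np.arange(n), y] = -np.inf
margins = correct - Z_other.max(axis=1)      # per-example margin
gamma = np.quantile(margins, 0.1)            # one robust summary of the margin

# One common weight-norm convention: product of layer Frobenius norms.
w_norm = np.linalg.norm(W1) * np.linalg.norm(W2)

print(f"margin gamma      : {gamma:.4f}")
print(f"weight norm ||w|| : {w_norm:.4f}")
print(f"normalized margin : {gamma / w_norm:.6f}")
```

For a ReLU network like this one, rescaling the weights rescales both $\gamma$ and $\|w\|$, which is why only the ratio is a meaningful quantity to track.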
Machine learning models are often susceptible to adversarial perturbations of their inputs. Even small perturbations can cause state-of-the-art classifiers with high "standard" accuracy to produce an incorrect prediction with high confidence. To better understand this phenomenon, we study adversarially robust learning from the viewpoint of generalization. We show that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning. This gap is information theoretic and holds irrespective of the training algorithm or the model family. We complement our theoretical results with experiments on popular image classification datasets and show that a similar gap exists here as well. We postulate that the difficulty of training robust classifiers stems, at least partially, from this inherently larger sample complexity.
translated by 谷歌翻译
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches for estimating predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited by their high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues of the underlying member networks. In this work, we study principled approaches to improving the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a test example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to the hidden weights to enforce bi-Lipschitz smoothness in the representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration, and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines
translated by 谷歌翻译
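As a rough illustration of the two changes described above, here is a simplified PyTorch sketch (my own, not the open-sourced implementation linked in the abstract): spectral normalization is applied to the hidden layers, and the dense output layer is replaced by a random-Fourier-feature approximation of a Gaussian process. The feature dimension and the omission of the covariance update used for predictive variance are simplifications.

```python
import math
import torch
import torch.nn as nn

class RandomFeatureGP(nn.Module):
    """Random-Fourier-feature approximation of a GP output layer (simplified)."""
    def __init__(self, in_dim, num_classes, num_features=1024):
        super().__init__()
        # Fixed random projection defining the RFF kernel approximation.
        self.register_buffer("W", torch.randn(num_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.beta = nn.Linear(num_features, num_classes, bias=False)
        self.num_features = num_features

    def forward(self, h):
        phi = math.sqrt(2.0 / self.num_features) * torch.cos(h @ self.W.T + self.b)
        return self.beta(phi)

class SNGPLikeNet(nn.Module):
    def __init__(self, in_dim=32, hidden=128, num_classes=10):
        super().__init__()
        # (1) Spectral normalization on the hidden weights (first change in the abstract).
        self.body = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(in_dim, hidden)), nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        )
        # (2) GP-style output layer instead of a plain dense layer (second change).
        self.head = RandomFeatureGP(hidden, num_classes)

    def forward(self, x):
        return self.head(self.body(x))

if __name__ == "__main__":
    net = SNGPLikeNet()
    x = torch.randn(8, 32)
    print(net(x).shape)  # torch.Size([8, 10])
```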
Neural networks have achieved impressive results on many technological and scientific tasks. Yet, their empirical successes have outpaced our fundamental understanding of their structure and function. By identifying mechanisms driving the successes of neural networks, we can provide principled approaches for improving neural network performance and develop simple and effective alternatives. In this work, we isolate the key mechanism driving feature learning in fully connected neural networks by connecting neural feature learning to the average gradient outer product. We subsequently leverage this mechanism to design \textit{Recursive Feature Machines} (RFMs), which are kernel machines that learn features. We show that RFMs (1) accurately capture features learned by deep fully connected neural networks, (2) close the gap between kernel machines and fully connected networks, and (3) surpass a broad spectrum of models including neural networks on tabular data. Furthermore, we demonstrate that RFMs shed light on recently observed deep learning phenomena such as grokking, lottery tickets, simplicity biases, and spurious features. We provide a Python implementation to make our method broadly accessible [\href{https://github.com/aradha/recursive_feature_machines}{GitHub}].
translated by 谷歌翻译
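To make the average-gradient-outer-product mechanism concrete, the following is a heavily simplified numpy sketch of a recursive feature machine: it alternates kernel ridge regression with a Mahalanobis-type kernel and an update of the feature matrix from the average gradient outer product (AGOP) of the fitted predictor. A Gaussian kernel is used here for convenience; the authors' implementation (see the linked GitHub) uses Laplace kernels and further refinements, so treat this as an assumption-laden sketch of the idea rather than the method itself.

```python
import numpy as np

def gaussian_kernel_M(X, Z, M, bandwidth=1.0):
    """K(x, z) = exp(-(x - z)^T M (x - z) / (2 * bandwidth^2))."""
    XM = X @ M
    d2 = (XM * X).sum(1)[:, None] + (Z @ M * Z).sum(1)[None, :] - 2 * XM @ Z.T
    return np.exp(-np.clip(d2, 0, None) / (2 * bandwidth**2))

def rfm(X, y, iters=3, reg=1e-3, bandwidth=1.0):
    n, d = X.shape
    M = np.eye(d)
    for _ in range(iters):
        K = gaussian_kernel_M(X, X, M, bandwidth)
        alpha = np.linalg.solve(K + reg * np.eye(n), y)      # kernel ridge fit
        # Gradient of f(x) = sum_i alpha_i K(x, x_i) at each training point,
        # then the Average Gradient Outer Product (AGOP).
        G = np.zeros((d, d))
        for j in range(n):
            diff = X[j] - X                                   # (n, d)
            grad = -(K[j][:, None] * (diff @ M) * alpha[:, None]).sum(0) / bandwidth**2
            G += np.outer(grad, grad)
        M = G / n
        M /= np.trace(M) / d                                  # keep the scale stable
    return M, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)          # only feature 0 matters
    M, _ = rfm(X, y)
    print(np.round(np.diag(M), 3))                            # mass concentrates on feature 0
```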
The practical success of overparameterized neural networks has motivated recent scientific study of interpolating methods, which fit their training data perfectly. Certain interpolating methods, including neural networks, can fit noisy training data without catastrophic test performance, in defiance of the standard intuitions of statistical learning theory. To explain this, a recent line of work has studied $\textit{benign overfitting}$, a phenomenon in which some interpolating methods approach Bayes optimality even in the presence of noise. In this work we argue that, while benign overfitting has been instructive and fruitful to study, many real interpolating methods such as neural networks do not fit benignly: modest noise in the training set causes nonzero (but non-infinite) excess risk at test time, implying that these models are neither benign nor catastrophic but rather fall into an intermediate regime. We call this intermediate regime $\textit{tempered overfitting}$, and we initiate its systematic study. We first explore this phenomenon in the context of kernel (ridge) regression (KR) by obtaining conditions on the ridge parameter and the kernel eigenspectrum under which KR exhibits each of the three behaviors. We find that kernels with power-law spectra, including the Laplace kernel and ReLU neural tangent kernels, exhibit tempered overfitting. We then study deep neural networks empirically through the lens of our taxonomy and find that those trained to interpolation are tempered, while those stopped early are benign. We hope our work leads to a more refined understanding of overfitting in modern learning.
translated by 谷歌翻译
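The kernel-ridge setting discussed above is easy to reproduce in a toy form. The sketch below (my own example, not the paper's experiments) fits noisy one-dimensional data with a Laplace kernel at several ridge values and reports test error, which is one simple way to watch a fit move between near-interpolation and heavier regularization.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_kernel(X, Z, bandwidth=1.0):
    return np.exp(-np.abs(X[:, None] - Z[None, :]) / bandwidth)

def krr_test_mse(ridge, n_train=200, n_test=1000, noise=0.3):
    f = np.sin                                     # ground-truth target
    x_tr = rng.uniform(-3, 3, n_train)
    y_tr = f(x_tr) + noise * rng.normal(size=n_train)
    x_te = rng.uniform(-3, 3, n_test)
    K = laplace_kernel(x_tr, x_tr)
    alpha = np.linalg.solve(K + ridge * np.eye(n_train), y_tr)
    pred = laplace_kernel(x_te, x_tr) @ alpha
    return np.mean((pred - f(x_te)) ** 2)

for ridge in [1e-8, 1e-3, 1e-1, 1.0]:              # near-interpolation -> strong ridge
    print(f"ridge={ridge:g}\ttest MSE={krr_test_mse(ridge):.4f}")
```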
We focus on a specific class of shallow neural networks with a single hidden layer, namely those with $L_2$-normalized data and either a sigmoid-shaped Gaussian error function ("erf") activation or a Gaussian error linear unit (GELU) activation. For these networks, we derive new generalization bounds via PAC-Bayesian theory. Unlike most existing bounds, they apply to neural networks with either deterministic or randomized parameters. Our bounds are empirically non-vacuous when the networks are trained with vanilla stochastic gradient descent on MNIST and Fashion-MNIST.
translated by 谷歌翻译
Given access to a machine learning model, can an adversary reconstruct the model's training data? This work studies this question through the lens of a powerful informed adversary who knows all the training data points except one. By instantiating concrete attacks, we show that reconstructing the remaining data point in this stringent threat model is feasible. For convex models (e.g., logistic regression), reconstruction attacks are simple and can be derived in closed form. For more general models (e.g., neural networks), we propose an attack strategy based on training a reconstructor network that receives as input the weights of the model under attack and produces as output the target data point. We demonstrate the effectiveness of our attack on image classifiers trained on MNIST and CIFAR-10, and systematically investigate which factors of standard machine learning pipelines affect reconstruction success. Finally, we theoretically investigate how much differential privacy suffices to mitigate reconstruction attacks by informed adversaries. Our work provides an effective reconstruction attack that model developers can use to assess the memorization of individual points in general settings beyond those considered in prior work (e.g., generative language models or access to training gradients); it shows that standard models have the capacity to store enough information to enable high-fidelity reconstruction of training data points; and it demonstrates that differential privacy can successfully mitigate such attacks in a parameter regime where the utility degradation is minimal.
translated by 谷歌翻译
Learning curves provide insight into the dependence of a learner's generalization performance on the training set size. This important tool can be used for model selection, to predict the effect of more training data, and to reduce the computational complexity of model training and hyperparameter tuning. This review recounts the origins of the term, provides a formal definition of the learning curve, and briefly covers basics such as its estimation. Our main contribution is a comprehensive overview of the literature regarding the shape of learning curves. We discuss empirical and theoretical evidence that supports well-behaved curves that often have the shape of a power law or an exponential. We consider the learning curves of Gaussian processes, the complex shapes they can display, and the factors influencing them. We draw specific attention to examples of learning curves that are ill-behaved, showing worse learning performance with more training data. To wrap up, we point out various open problems that warrant deeper empirical and theoretical investigation. All in all, our review underscores that learning curves are surprisingly diverse and no universal model can be identified.
translated by 谷歌翻译
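Since the review centers on the shape of learning curves, a short sketch of the routine fitting step may be useful: fit a three-parameter power law $\varepsilon(n) \approx a\,n^{-b} + c$ to observed (training-set size, error) pairs and extrapolate. The data below are synthetic and the initial guesses arbitrary; this is an illustration, not the review's methodology.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n**(-b) + c

# Synthetic learning-curve observations: error at several training-set sizes.
ns = np.array([100, 200, 500, 1000, 2000, 5000, 10000], dtype=float)
errs = 2.0 * ns**-0.35 + 0.05 + np.random.default_rng(0).normal(0, 0.002, ns.size)

(a, b, c), _ = curve_fit(power_law, ns, errs, p0=[1.0, 0.5, 0.01])
print(f"fitted: a={a:.3f}, b={b:.3f}, c={c:.3f}")
print(f"predicted error at n=50000: {power_law(50000, a, b, c):.4f}")
```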
Randomized smoothing is currently considered the state-of-the-art method for obtaining certifiably robust classifiers. Despite its excellent performance, the method is associated with various serious problems, such as "certified accuracy waterfalls", a certification-versus-accuracy trade-off, and even fairness issues. Input-dependent smoothing methods have been proposed with the goal of overcoming these flaws. However, we demonstrate that these methods lack formal guarantees, and so the resulting certificates are not justified. We show that, in general, input-dependent smoothing suffers from the curse of dimensionality, which forces the variance function to have low semi-elasticity. On the other hand, we provide a theoretical and practical framework that enables the use of input-dependent smoothing even under strict restrictions and in the presence of the curse of dimensionality. We present one concrete design of the smoothing variance function and test it on CIFAR10 and MNIST. Our design mitigates some of the problems of classical smoothing and is formally underpinned, but further improvements of the design are still needed.
translated by 谷歌翻译
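For readers unfamiliar with the mechanics, the sketch below shows plain Monte-Carlo randomized smoothing with an input-dependent noise scale $\sigma(x)$. Both the base classifier and the variance function are hypothetical stand-ins, and the certification statistics (confidence intervals over vote counts and the resulting radius) are deliberately omitted, so this illustrates the object being certified rather than the paper's variance design.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_classifier(x):
    """Stand-in for a trained classifier: here, a fixed linear rule on 2 classes."""
    w = np.ones_like(x)
    return int(x @ w > 0)

def sigma_of_x(x):
    """A hypothetical input-dependent noise scale (the paper designs this carefully)."""
    return 0.5 + 0.1 * np.tanh(np.linalg.norm(x))

def smoothed_predict(x, n_samples=1000, num_classes=2):
    sigma = sigma_of_x(x)
    votes = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        votes[base_classifier(x + sigma * rng.normal(size=x.shape))] += 1
    top = int(votes.argmax())
    return top, votes[top] / n_samples, sigma

x = rng.normal(size=16)
label, top_prob, sigma = smoothed_predict(x)
print(f"prediction={label}, empirical top-class prob={top_prob:.3f}, sigma(x)={sigma:.3f}")
```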
Adversarial robustness has become a central goal in deep learning, both in theory and in practice. However, successful methods for improving adversarial robustness (such as adversarial training) greatly hurt generalization performance on unperturbed data. This could have a major impact on how adversarial robustness affects real-world systems (i.e., many may opt to forgo robustness if doing so can improve accuracy on unperturbed data). We propose Interpolated Adversarial Training, which employs recently proposed interpolation-based training methods within the adversarial training framework. On CIFAR-10, adversarial training increases the standard test error (when there is no adversary) from 4.43% to 12.32%, whereas with our Interpolated Adversarial Training we retain adversarial robustness while achieving a standard test error of only 6.45%. With our technique, the relative increase in the standard error of the robust model drops from 178.1% to only 45.5%. Furthermore, we provide a mathematical analysis of Interpolated Adversarial Training to confirm its efficiency and demonstrate its advantages in terms of robustness and generalization.
translated by 谷歌翻译
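As I read the abstract, the recipe combines an interpolation-based training method (e.g., mixup) with adversarial example generation inside the training loop. The PyTorch sketch below is my own schematic of such a combination; the PGD settings, mixup coefficients, and the way the clean and adversarial losses are averaged are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Standard L-infinity PGD to craft adversarial examples."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def mixup(x, y, num_classes, alpha=1.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

def train_step(model, opt, x, y, num_classes=10):
    model.train()
    x_adv = pgd_attack(model, x, y)
    # Interpolate both the clean and the adversarial batch, then average the losses.
    xc, yc = mixup(x, y, num_classes)
    xa, ya = mixup(x_adv, y, num_classes)
    loss = 0.5 * (-(yc * F.log_softmax(model(xc), dim=1)).sum(1).mean()
                  - (ya * F.log_softmax(model(xa), dim=1)).sum(1).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))
    print(train_step(model, opt, x, y))
```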
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
translated by 谷歌翻译
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
translated by 谷歌翻译
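The two test statistics mentioned above are easy to state concretely. The sketch below (a generic illustration, not the authors' procedure for training the disagreeing ensemble) computes the pairwise disagreement rate and the mean predictive entropy of an ensemble's outputs on a batch of test examples.

```python
import numpy as np

def disagreement_rate(preds):
    """preds: (num_models, num_examples) array of predicted class labels."""
    m = preds.shape[0]
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    return float(np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]))

def mean_predictive_entropy(probs):
    """probs: (num_models, num_examples, num_classes) ensemble probabilities."""
    avg = probs.mean(axis=0)                              # ensemble-averaged distribution
    ent = -(avg * np.log(np.clip(avg, 1e-12, None))).sum(axis=1)
    return float(ent.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(alpha=np.ones(5), size=(4, 100))  # 4 models, 100 examples, 5 classes
    preds = probs.argmax(axis=2)
    print("disagreement rate:", disagreement_rate(preds))
    print("mean predictive entropy:", mean_predictive_entropy(probs))
```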
We demonstrate, both theoretically and empirically, that adversarial robustness can significantly benefit from semi-supervised learning. Theoretically, we revisit the simple Gaussian model of Schmidt et al., which exhibits a sample-complexity gap between standard and robust classification. We prove that unlabeled data bridges this gap: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for achieving high standard accuracy. Empirically, we augment CIFAR-10 with 500K unlabeled images sourced from 80 Million Tiny Images and use robust self-training to outperform state-of-the-art robust accuracies in (i) $\ell_\infty$ robustness against several strong attacks via adversarial training and (ii) certified $\ell_2$ and $\ell_\infty$ robustness via randomized smoothing. On SVHN, adding the dataset's own extra training set with the labels removed provides gains of 4 to 10 points, within 1 point of the gain from using the extra labels.
translated by 谷歌翻译
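The self-training recipe is conceptually simple. The following schematic sketch uses hypothetical placeholder helpers (`train_standard`, `adversarial_train`, `build_model`, and the data loaders) to show the flow: pseudo-label the unlabeled pool with a standard model, then adversarially train a fresh model on the combined data.

```python
import torch

# NOTE: train_standard, adversarial_train, build_model, labeled_loader, and
# unlabeled_loader are hypothetical placeholders standing in for a real pipeline.

def robust_self_training(build_model, labeled_loader, unlabeled_loader,
                         train_standard, adversarial_train):
    # 1) Fit an ordinary (non-robust) model on the labeled data.
    teacher = train_standard(build_model(), labeled_loader)

    # 2) Pseudo-label the unlabeled pool with the teacher's predictions.
    pseudo = []
    teacher.eval()
    with torch.no_grad():
        for x in unlabeled_loader:
            pseudo.append((x, teacher(x).argmax(dim=1)))

    # 3) Adversarially train a fresh model on labeled + pseudo-labeled data.
    combined = list(labeled_loader) + pseudo
    student = adversarial_train(build_model(), combined)
    return student
```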
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features (derived from patterns in the data distribution) that are highly predictive, yet brittle and (thus) incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
translated by 谷歌翻译
Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration nonstochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.
translated by 谷歌翻译
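For readers who want the control flow, here is a compact sketch of Hyperband's bracket structure (successive halving run at several levels of aggressiveness); `sample_config` and `evaluate` are hypothetical placeholders for drawing a random configuration and training it with a given resource budget.

```python
import math
import random

def hyperband(sample_config, evaluate, max_resource=81, eta=3):
    """sample_config() -> config; evaluate(config, resource) -> validation loss."""
    s_max = int(math.log(max_resource, eta) + 1e-9)
    budget = (s_max + 1) * max_resource
    best_loss, best_config = float("inf"), None

    for s in reversed(range(s_max + 1)):            # one bracket per aggressiveness level
        n = int(math.ceil(budget / max_resource * eta**s / (s + 1)))
        r = max_resource * eta**(-s)
        configs = [sample_config() for _ in range(n)]
        for i in range(s + 1):                      # successive halving inside the bracket
            n_i = int(n * eta**(-i))
            r_i = int(round(r * eta**i))
            losses = [evaluate(c, r_i) for c in configs]
            ranked = sorted(zip(losses, configs), key=lambda t: t[0])
            if ranked[0][0] < best_loss:
                best_loss, best_config = ranked[0]
            configs = [c for _, c in ranked[:max(int(n_i / eta), 1)]]
    return best_loss, best_config

if __name__ == "__main__":
    # Toy demo: configs are scalars; loss improves with resource and closeness to 0.3.
    print(hyperband(lambda: random.random(), lambda c, r: abs(c - 0.3) + 1.0 / r))
```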
Many of the successes of machine learning are built on minimizing an averaged loss function. However, it is well known that this paradigm suffers from robustness issues that hinder its applicability in safety-critical domains. These issues are often addressed by training against worst-case perturbations of the data, a technique known as adversarial training. Although empirically effective, adversarial training can be overly conservative, leading to unfavorable trade-offs between nominal performance and robustness. To this end, in this paper we propose a framework called probabilistic robustness that bridges the gap between the accurate yet brittle average case and the robust yet conservative worst case, by enforcing robustness to most rather than to all perturbations. From a theoretical point of view, this framework overcomes the trade-offs between performance and sample complexity of worst-case and average-case learning. From a practical point of view, we propose a novel algorithm based on risk-aware optimization that effectively balances average- and worst-case performance at a considerably lower computational cost relative to adversarial training. Our results on MNIST, CIFAR-10, and SVHN illustrate the advantages of this framework on the spectrum from average-case to worst-case robustness.
translated by 谷歌翻译
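The sketch below is not the paper's risk-aware algorithm; it is one simple, assumption-laden way to operationalize "robust to most perturbations": sample random perturbations of each input and penalize a high quantile of the resulting per-example losses rather than the maximum.

```python
import torch
import torch.nn.functional as F

def mostly_robust_loss(model, x, y, eps=0.1, num_samples=20, quantile=0.9):
    """Penalize the loss at a high quantile over *random* perturbations, so the model
    is pushed to be robust to most (not all) perturbations of each input."""
    losses = []
    for _ in range(num_samples):
        delta = torch.empty_like(x).uniform_(-eps, eps)
        logits = model(x + delta)
        losses.append(F.cross_entropy(logits, y, reduction="none"))  # per-example loss
    losses = torch.stack(losses, dim=0)               # (num_samples, batch)
    q = torch.quantile(losses, quantile, dim=0)       # per-example high-quantile loss
    return q.mean()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)
    y = torch.randint(0, 10, (8,))
    print(mostly_robust_loss(model, x, y).item())
```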
The generalization of a machine learning model depends in a complex way on the data, the model, and the learning algorithm. We study train and test performance, along with the generalization gap given by their difference across different samples of the dataset, in order to understand their ``typical'' behavior. We derive an expression for the gap as a function of the covariance between the model parameter distribution and the train loss, and another expression for the average test performance, showing that test generalization depends only on the data-averaged parameter distribution and the data-averaged loss. We show that, for a large class of model parameter distributions, a modified generalization gap is always non-negative. By further specializing to parameter distributions produced by stochastic gradient descent (SGD), together with a few approximations and modeling considerations, we are able to predict some aspects of how the generalization gap and the model's train and test performance vary as a function of SGD noise. We evaluate these predictions empirically on the CIFAR10 classification task using ResNet architectures.
translated by 谷歌翻译