Neural network ensembles, such as Bayesian neural networks (BNNs), have shown success in the areas of uncertainty estimation and robustness. However, a crucial challenge prohibits their use in practice: BNNs require a large number of predictions to produce reliable results, which greatly increases the computational cost. To alleviate this issue, we propose spatial smoothing, a method that spatially ensembles neighboring feature map points of convolutional neural networks. By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves the accuracy, uncertainty estimation, and robustness of BNNs across the whole range of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve high predictive performance with only a handful of ensembles. Moreover, the method can also be applied to canonical deterministic neural networks to improve their performance. Various lines of evidence suggest that the improvements can be attributed to the stabilized feature maps and the smoothing of the loss landscapes. In addition, we provide a fundamental explanation for prior works, namely global average pooling, pre-activation, and ReLU6, by addressing them as special cases of spatial smoothing. These not only enhance accuracy, but also improve uncertainty estimation and robustness by making the loss landscapes smoother in the same manner as spatial smoothing. The code is available at https://github.com/xxxnell/spatial-smoothing.
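As a rough, hypothetical illustration of the idea, a blur layer of this kind can be as simple as a stride-1 average pooling inserted into a CNN. The sketch below is a minimal PyTorch interpretation; the kernel size and placement are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSmoothing(nn.Module):
    """Spatially ensemble neighboring feature-map points with a fixed
    low-pass (box) kernel; spatial resolution is preserved (stride 1)."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.avg_pool2d(x, self.kernel_size, stride=1,
                            padding=self.kernel_size // 2,
                            count_include_pad=False)

# Hypothetical placement: one blur layer before each downsampling step.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    SpatialSmoothing(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    SpatialSmoothing(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 10),
)
print(model(torch.randn(8, 3, 32, 32)).shape)  # torch.Size([8, 10])
```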
The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; (2) MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, while Convs are high-pass filters. Therefore, MSAs and Convs are complementary; (3) Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes. The code is available at https://github.com/xxxnell/how-do-vits-work.
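To make insight (3) a bit more concrete, the sketch below shows one possible way of replacing the Conv block at the end of a CNN stage with a multi-head self-attention block over the spatial positions. The block design details (normalization, residual connection, head count) are assumptions, not the paper's AlterNet block.

```python
import torch
import torch.nn as nn

class MSABlock(nn.Module):
    """Self-attention over the spatial positions of a CNN feature map,
    used here in place of the Conv block at the end of a stage."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        t = self.norm(tokens)
        out, _ = self.attn(t, t, t)
        tokens = tokens + out                  # residual connection
        return tokens.transpose(1, 2).reshape(b, c, h, w)

stage = nn.Sequential(
    nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    MSABlock(64),                              # MSA block at the end of the stage
)
print(stage(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```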
Deep convolutional neural networks have shown remarkable performance on various computer vision tasks, but they are prone to picking up spurious correlations from the training signal. Such so-called "shortcuts" can occur during learning, for example, when specific frequencies present in the image data are correlated with the output predictions. Both high and low frequencies can be characteristic of the underlying noise distribution caused by the image acquisition rather than of task-relevant information about the image content. Models that learn features related to this characteristic noise will not generalize well to new data. In this work, we propose a simple yet effective training strategy, Frequency Dropout, to prevent convolutional neural networks from learning frequency-specific imaging features. We employ randomized filtering of feature maps during training, which acts as a feature-level regularization. In this study, we consider common image processing filters such as Gaussian smoothing, Laplacian of Gaussian, and Gabor filtering. Our training strategy is model-agnostic and can be used for any computer vision task. We demonstrate the effectiveness of Frequency Dropout across a range of popular architectures and multiple tasks using computer vision and medical imaging datasets. Our results suggest that the proposed approach improves not only predictive accuracy but also robustness against domain shift.
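Below is a minimal, hypothetical sketch of the feature-level randomized filtering described above, restricted to Gaussian smoothing with a randomly drawn sigma (the paper also considers Laplacian-of-Gaussian and Gabor filters). The drop probability and sigma range are assumed values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel2d(sigma: float, kernel_size: int, device=None) -> torch.Tensor:
    coords = torch.arange(kernel_size, device=device) - (kernel_size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)

class FrequencyDropout(nn.Module):
    """Randomly low-pass filter feature maps during training; identity at eval."""
    def __init__(self, p: float = 0.5, kernel_size: int = 5, max_sigma: float = 2.0):
        super().__init__()
        self.p, self.kernel_size, self.max_sigma = p, kernel_size, max_sigma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or torch.rand(()) > self.p:
            return x
        sigma = float(torch.empty(()).uniform_(0.1, self.max_sigma))
        k = gaussian_kernel2d(sigma, self.kernel_size, device=x.device).to(x.dtype)
        c = x.shape[1]
        weight = k.expand(c, 1, -1, -1)        # same kernel applied to every channel
        return F.conv2d(x, weight, padding=self.kernel_size // 2, groups=c)

layer = FrequencyDropout()
layer.train()
print(layer(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```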
Over the past years, convolutional neural networks (CNNs) have been the dominant neural architecture across a wide range of computer vision tasks. From an image and signal processing point of view, this success might be surprising, as the inherent spatial pyramid design of most CNNs apparently violates a basic signal processing law, namely the sampling theorem, in their down-sampling operations. However, since poor sampling did not seem to affect model accuracy, this issue had been broadly neglected until model robustness started to receive more attention. Recent work [17] in the context of adversarial attacks and distribution shifts showed, after all, a strong correlation between the vulnerability of CNNs and the aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple and fast FGSM adversarial training, our hyper-parameter-free operator significantly improves model robustness and avoids catastrophic overfitting.
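The following sketch re-implements the described aliasing-free 2x down-sampling from first principles: keep only the low-frequency part of the spectrum and invert the FFT at half resolution. It is an illustration of the idea, not the authors' reference code.

```python
import torch
import torch.fft as fft

def flc_pool(x: torch.Tensor) -> torch.Tensor:
    """Aliasing-free 2x down-sampling: ideal low-pass in the frequency domain."""
    b, c, h, w = x.shape
    spec = fft.fftshift(fft.fft2(x), dim=(-2, -1))
    h2, w2 = h // 2, w // 2
    # Centre crop of the (shifted) spectrum keeps only frequencies below
    # the Nyquist limit of the down-sampled signal.
    top, left = (h - h2) // 2, (w - w2) // 2
    spec_low = spec[..., top:top + h2, left:left + w2]
    y = fft.ifft2(fft.ifftshift(spec_low, dim=(-2, -1)))
    # Rescale so amplitudes match the smaller inverse transform's normalization.
    return y.real * (h2 * w2) / (h * w)

print(flc_pool(torch.randn(2, 16, 32, 32)).shape)  # torch.Size([2, 16, 16, 16])
```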
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
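The core recipe is small enough to sketch directly: train several identically structured networks from different random initializations and average their softmax outputs; predictive entropy then serves as an uncertainty score. The toy data, architecture, and training budget below are placeholders, and the paper's additional ingredients (proper scoring rules, adversarial smoothing) are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_member() -> nn.Module:
    # Each member just gets its own random initialization.
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))

def train_member(model, x, y, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

# Toy data standing in for a real classification dataset.
x, y = torch.randn(512, 20), torch.randint(0, 3, (512,))
ensemble = [train_member(make_member(), x, y) for _ in range(5)]

with torch.no_grad():
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in ensemble]).mean(0)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)  # predictive uncertainty
print(probs.shape, entropy.mean().item())
```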
Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high confidence predictions to out-of-distribution (OOD) inputs. The most popular approaches to estimate predictive uncertainty in deep learning are methods that combine predictions from multiple neural networks, such as Bayesian neural networks (BNNs) and deep ensembles. However, their practicality in real-time, industrial-scale applications is limited due to the high memory and computational cost. Furthermore, ensembles and BNNs do not necessarily fix all the issues with the underlying member networks. In this work, we study principled approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation. By formalizing the uncertainty quantification as a minimax learning problem, we first identify distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs with two simple changes: (1) applying spectral normalization to hidden weights to enforce bi-Lipschitz smoothness in representations and (2) replacing the last output layer with a Gaussian process layer. On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. Furthermore, SNGP provides complementary benefits to popular techniques such as deep ensembles and data augmentation, making it a simple and scalable building block for probabilistic deep learning. Code is open-sourced at https://github.com/google/uncertainty-baselines.
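A stripped-down sketch of the two changes, assuming a plain MLP backbone: spectral normalization on the hidden layers and a random-Fourier-feature approximation of the GP output layer. The Laplace-approximated output covariance and the exact architecture used in the paper are omitted.

```python
import math
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class RandomFeatureGP(nn.Module):
    """GP output layer approximated with fixed random Fourier features."""
    def __init__(self, in_features: int, num_classes: int, num_features: int = 1024):
        super().__init__()
        self.register_buffer("W", torch.randn(num_features, in_features))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.beta = nn.Linear(num_features, num_classes, bias=False)
        self.scale = math.sqrt(2.0 / num_features)

    def forward(self, h):
        phi = self.scale * torch.cos(h @ self.W.t() + self.b)
        return self.beta(phi)

hidden = 128
model = nn.Sequential(
    spectral_norm(nn.Linear(32, hidden)), nn.ReLU(),    # (1) spectral-normalized hidden layers
    spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
    RandomFeatureGP(hidden, num_classes=10),             # (2) GP-style last layer
)
print(model(torch.randn(4, 32)).shape)  # torch.Size([4, 10])
```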
The Bayesian paradigm has the potential to solve core issues of deep neural networks, such as poor calibration and data inefficiency. Alas, scaling Bayesian inference to large weight spaces often requires restrictive approximations. In this work, we show that it suffices to perform inference over a small subset of model weights in order to obtain accurate predictive posteriors. The other weights are kept as point estimates. This subnetwork inference framework enables us to use expressive, otherwise intractable, posterior approximations over such subsets. In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: we first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation. We propose a subnetwork selection strategy that aims to maximally preserve the model's predictive uncertainty. Empirically, our approach compares favorably to ensembles and less expressive posterior approximations over full networks.
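Here is a compact sketch of the overall recipe under simplifying assumptions (a trivially chosen last-layer subnetwork and a diagonal rather than full-covariance Laplace posterior): train to a MAP estimate, fit a Gaussian over the chosen subset of weights only, and predict by averaging over posterior samples while all other weights stay at their point estimates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x, y = torch.randn(256, 10), torch.randint(0, 3, (256,))   # toy stand-in data
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

# 1) Ordinary MAP training of all weights.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()

# 2) Pick a small subnetwork. The paper proposes a selection strategy that
#    preserves predictive uncertainty; using the last layer here is only an
#    illustrative shortcut.
subnet = model[2].weight

# 3) Diagonal Fisher over the subnetwork -> Gaussian posterior variances.
fisher = torch.zeros_like(subnet)
for xi, yi in zip(x, y):
    model.zero_grad()
    F.cross_entropy(model(xi[None]), yi[None]).backward()
    fisher += subnet.grad.detach() ** 2
var = 1.0 / (fisher + 1.0)          # prior precision = 1.0 (assumed)

# 4) Predict by averaging over posterior samples of the subnetwork;
#    every other weight stays at its MAP point estimate.
map_w = subnet.detach().clone()
probs = 0.0
with torch.no_grad():
    for _ in range(30):
        subnet.copy_(map_w + var.sqrt() * torch.randn_like(map_w))
        probs = probs + F.softmax(model(x), dim=-1) / 30
    subnet.copy_(map_w)
print(probs.shape)                  # torch.Size([256, 3])
```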
Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks.
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise, or fog. Recent methods that focus on this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on $\ell_p$-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing AdversarialAugment, a technique that optimizes the parameters of image-to-image models to generate adversarially corrupted augmented images. We theoretically motivate our method and give sufficient conditions for the consistency of its idealized version as well as that of DeepAugment. Our classifiers improve upon the state-of-the-art on common image corruption benchmarks conducted in expectation on CIFAR-10-C and improve worst-case performance against $\ell_p$-norm bounded perturbations on both CIFAR-10 and ImageNet.
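A heavily simplified, hypothetical sketch of the bi-level idea follows: take a few ascent steps on the parameters of a small image-to-image model so that its output corrupts the classifier, then train the classifier on those adversarially augmented images. The residual augmenter, step counts, and learning rates are assumptions; the paper works with pre-trained image-to-image models and a more careful formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
# A tiny image-to-image "corruption" model whose parameters are optimized adversarially.
augmenter = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 3, 3, padding=1))

opt_cls = torch.optim.SGD(classifier.parameters(), lr=0.1)
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))

# Inner maximization: a few ascent steps on the augmenter's parameters.
opt_aug = torch.optim.SGD(augmenter.parameters(), lr=0.05)
for _ in range(3):
    opt_aug.zero_grad()
    (-F.cross_entropy(classifier(x + augmenter(x)), y)).backward()
    opt_aug.step()

# Outer minimization: train the classifier on the adversarially augmented images.
opt_cls.zero_grad()
F.cross_entropy(classifier(x + augmenter(x).detach()), y).backward()
opt_cls.step()
```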
Deep neural networks achieve high prediction accuracy when the train and test distributions coincide. In practice though, various types of corruptions occur which deviate from this setup and cause severe performance degradations. Few methods have been proposed to address generalization in the presence of unforeseen domain shifts. In particular, digital noise corruptions arise commonly in practice during the image acquisition stage and present a significant challenge for current robustness approaches. In this paper, we propose a diverse Gaussian noise consistency regularization method for improving robustness of image classifiers under a variety of noise corruptions while still maintaining high clean accuracy. We derive bounds to motivate and understand the behavior of our Gaussian noise consistency regularization using a local loss landscape analysis. We show that this simple approach improves robustness against various unforeseen noise corruptions by 4.2-18.4% over adversarial training and other strong diverse data augmentation baselines across several benchmarks. Furthermore, when combined with state-of-the-art diverse data augmentation techniques, experiments against state-of-the-art show our method further improves robustness accuracy by 3.7% and uncertainty calibration by 5.5% for all common corruptions on several image classification benchmarks.
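One plausible instantiation of the idea is a consistency (KL) term between predictions on clean inputs and on inputs perturbed with Gaussian noise of several magnitudes. The exact objective, weighting, and noise levels below are assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def gaussian_consistency_loss(model, x, y, sigmas=(0.05, 0.1, 0.2), lam=1.0):
    """Cross-entropy on clean inputs plus consistency terms that pull
    predictions under diverse Gaussian noise toward the clean prediction."""
    logits_clean = model(x)
    loss = F.cross_entropy(logits_clean, y)
    log_p_clean = F.log_softmax(logits_clean, dim=-1).detach()
    for sigma in sigmas:
        logits_noisy = model(x + sigma * torch.randn_like(x))
        loss = loss + lam / len(sigmas) * F.kl_div(
            F.log_softmax(logits_noisy, dim=-1), log_p_clean,
            log_target=True, reduction="batchmean")
    return loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(gaussian_consistency_loss(model, x, y).item())
```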
A rich set of new methods for estimating epistemic uncertainty in deep neural networks with a single forward pass has recently emerged as a valid alternative to Bayesian neural networks. On the premise of informative representations, these deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution (OOD) data while adding negligible computational cost at inference time. However, it remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications, both of which are prerequisites for their practical deployment. To this end, we first provide a taxonomy of DUMs and evaluate their calibration under continuous distribution shifts. We then extend them to semantic segmentation. We find that, while DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under distribution shifts.
Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of both approaches by proposing to predict with a Gaussian mixture model posterior that consists of a weighted sum of Laplace approximations of independently trained deep neural networks. The method can be used with any set of pre-trained networks and only requires a small computational and memory overhead compared to regular ensembling. We theoretically validate that our approach mitigates overconfidence "far away" from the training data, and empirically compare it against state-of-the-art baselines on standard uncertainty quantification benchmarks.
We study methods for estimating model uncertainty for neural networks (NNs) in regression. To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data. We introduce five important desiderata regarding model uncertainty that any method should satisfy. However, we find that established benchmarks often fail to reliably capture some of these desiderata, even those that are required by Bayesian theory. To address this, we introduce a new approach for capturing model uncertainty for NNs, which we call Neural Optimization-based Model Uncertainty (NOMU). The main idea of NOMU is to design a network architecture consisting of two connected sub-NNs, one for model prediction and one for model uncertainty, and to train it using a carefully designed loss function. Importantly, our design enforces that NOMU satisfies our five desiderata. Owing to its modular architecture, NOMU can provide model uncertainty for any given (previously trained) NN if given access to its training data. We evaluate NOMU in various regression tasks and in noiseless Bayesian optimization (BO) with costly evaluations. In regression, NOMU performs at least as well as state-of-the-art methods. In BO, NOMU even outperforms all considered benchmarks.
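The sketch below shows the two-sub-network architecture only (the carefully designed loss function is not shown). The connection is realized here by feeding the prediction network's features into the uncertainty network; layer sizes and the softplus readout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NOMUNet(nn.Module):
    """Two connected sub-networks: one for the model prediction and one
    for the model uncertainty (architecture sketch only)."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.mean_body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        # The uncertainty sub-network sees the raw input *and* the features
        # of the prediction sub-network (this is the "connection").
        self.unc_body = nn.Sequential(nn.Linear(in_dim + hidden, hidden), nn.ReLU())
        self.unc_head = nn.Linear(hidden, 1)

    def forward(self, x):
        feats = self.mean_body(x)
        mean = self.mean_head(feats)
        raw_unc = self.unc_head(self.unc_body(torch.cat([x, feats], dim=-1)))
        sigma = nn.functional.softplus(raw_unc)   # non-negative model uncertainty
        return mean, sigma

net = NOMUNet(in_dim=1)
mean, sigma = net(torch.linspace(-1, 1, 5).unsqueeze(-1))
print(mean.shape, sigma.shape)   # torch.Size([5, 1]) torch.Size([5, 1])
```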
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward-and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
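The procedure is standard enough to sketch: approximate the most sensitive perturbation direction by power iteration on the KL divergence, then penalize the divergence at that virtual adversarial direction. The hyper-parameter values below are illustrative.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Virtual adversarial loss: KL between predictions at x and at x + r_adv,
    where r_adv is found by power iteration (no labels needed)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)

    # Power iteration to approximate the most sensitive (virtual adversarial) direction.
    d = torch.randn_like(x)
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).reshape(x.shape)
        d.requires_grad_(True)
        log_p_hat = F.log_softmax(model(x + d), dim=-1)
        adv_dist = F.kl_div(log_p_hat, p, reduction="batchmean")
        d = torch.autograd.grad(adv_dist, d)[0].detach()

    r_adv = eps * F.normalize(d.flatten(1), dim=1).reshape(x.shape)
    log_p_hat = F.log_softmax(model(x + r_adv), dim=-1)
    return F.kl_div(log_p_hat, p, reduction="batchmean")
```

In semi-supervised training, the total objective would combine the supervised cross-entropy on labeled data with this loss on unlabeled data (plus, per the abstract, an entropy-minimization term).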
Aliasing is a highly important concept in signal processing, as careful consideration of resolution changes is essential in ensuring transmission and processing quality of audio, image, and video. Despite this, up until recently aliasing has received very little consideration in Deep Learning, with all common architectures carelessly sub-sampling without considering aliasing effects. In this work, we investigate the hypothesis that the existence of adversarial perturbations is due in part to aliasing in neural networks. Our ultimate goal is to increase robustness against adversarial attacks using explainable, non-trained, structural changes only, derived from aliasing first principles. Our contributions are the following. First, we establish a sufficient condition for no aliasing for general image transformations. Next, we study sources of aliasing in common neural network layers, and derive simple modifications from first principles to eliminate or reduce it. Lastly, our experimental results show a solid link between anti-aliasing and adversarial attacks. Simply reducing aliasing already results in more robust classifiers, and combining anti-aliasing with robust training out-performs solo robust training on $L_2$ attacks with none or minimal losses in performance on $L_{\infty}$ attacks.
The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantify epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This challenges in particular modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, with low computational overhead in comparison. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost.
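A minimal sketch of the mechanism, assuming a small MLP backbone: every ensemble member owns its own FiLM parameters (gamma, beta) that modulate the activations of a single shared network, and inference averages the member predictions. The paper additionally initializes the FiLM parameters with enough variance to make members diverse, which this sketch does not tune.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMLinear(nn.Module):
    """Shared linear layer whose activations are modulated by per-member
    FiLM parameters (gamma, beta)."""
    def __init__(self, in_f, out_f, n_members):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.gamma = nn.Parameter(torch.ones(n_members, out_f))
        self.beta = nn.Parameter(torch.zeros(n_members, out_f))

    def forward(self, x, member):         # member: (B,) long tensor of member indices
        h = self.linear(x)
        return self.gamma[member] * h + self.beta[member]

class FiLMEnsembleMLP(nn.Module):
    def __init__(self, in_f=20, hidden=64, n_classes=3, n_members=4):
        super().__init__()
        self.n_members = n_members
        self.l1 = FiLMLinear(in_f, hidden, n_members)
        self.l2 = nn.Linear(hidden, n_classes)

    def forward(self, x, member):
        return self.l2(F.relu(self.l1(x, member)))

    @torch.no_grad()
    def predict(self, x):
        # Replicate the batch once per member and average the member softmaxes.
        probs = 0.0
        for m in range(self.n_members):
            idx = torch.full((x.shape[0],), m, dtype=torch.long)
            probs = probs + F.softmax(self(x, idx), -1) / self.n_members
        return probs

model = FiLMEnsembleMLP()
print(model.predict(torch.randn(8, 20)).shape)   # torch.Size([8, 3])
```

During training, each example in a batch would be routed through a randomly chosen member (or the batch replicated across all members).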
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
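As a small illustration of two of the surveyed technique families, the sketch below applies global magnitude pruning followed by dynamic int8 quantization of the linear layers using the corresponding PyTorch utilities; the model and pruning ratio are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small model standing in for a trained network.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# (ii) Network pruning: remove the 50% smallest-magnitude weights globally.
params_to_prune = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(params_to_prune,
                          pruning_method=prune.L1Unstructured, amount=0.5)
zeros = sum(int((m.weight == 0).sum()) for m, _ in params_to_prune)
total = sum(m.weight.numel() for m, _ in params_to_prune)
print(f"global sparsity: {zeros / total:.2%}")
for m, name in params_to_prune:
    prune.remove(m, name)        # make the pruning permanent before quantization

# (i) Quantization: dynamic int8 quantization of the linear layers (CPU inference).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized(torch.randn(1, 128)).shape)   # torch.Size([1, 10])
```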
Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples in which small perturbations of the input lead to incorrect predictions. Furthermore, trust is undermined when a model gives miscalibrated predictions, i.e., the predicted probability is not a good indicator of how much we should trust our model. In this paper, we study the connection between adversarial robustness and calibration and find that inputs for which the model is sensitive to small perturbations (inputs that are easily attacked) are more likely to have poorly calibrated predictions. Based on this insight, we examine calibration by addressing these adversarially vulnerable inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlation between adversarial robustness and calibration into training by adaptively softening the labels of examples according to how easily they can be attacked by an adversary. We find that our method, by taking the adversarial robustness of in-distribution data into consideration, yields better-calibrated models even under distribution shift. In addition, the method can also be applied to ensemble models to further improve model calibration.
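A hypothetical sketch of the idea: measure how easily each training example can be attacked (here, whether a single FGSM step flips the prediction) and smooth the labels of easily-attacked examples more strongly. The vulnerability measure and smoothing schedule below are assumptions and may differ from the paper's.

```python
import torch
import torch.nn.functional as F

def ar_adals_loss(model, x, y, n_classes, eps=2 / 255, smooth_lo=0.0, smooth_hi=0.2):
    """Adaptive label smoothing driven by per-example adversarial vulnerability."""
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    with torch.no_grad():
        pred_adv = model(x + eps * grad.sign()).argmax(-1)
        flipped = (pred_adv != y).float()                       # 1 = easily attacked
    smoothing = smooth_lo + (smooth_hi - smooth_lo) * flipped   # per-example factor
    target = F.one_hot(y, n_classes).float()
    soft = (1 - smoothing[:, None]) * target + smoothing[:, None] / n_classes
    return -(soft * F.log_softmax(model(x), -1)).sum(-1).mean()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(ar_adals_loss(model, x, y, n_classes=10).item())
```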
Viewing neural network models in terms of their loss landscapes has a long history in the statistical mechanics approach to learning, and in recent years it has received attention within machine learning proper. Among other things, local metrics (such as the smoothness of the loss landscape) have been shown to correlate with global properties of the model (such as good generalization performance). Here, we perform a detailed empirical analysis of the loss landscape structure of thousands of neural network models, systematically varying the learning task, model architecture, and/or quantity/quality of data. By considering a range of metrics that attempt to capture different aspects of the loss landscape, we demonstrate that the best test accuracy is obtained when: the loss landscape is globally well-connected; ensembles of trained models are more similar to one another; and models converge to locally smooth regions. We also show that globally poorly-connected landscapes can arise when models are small or when they are trained to lower quality data; and that, if the loss landscape is globally poorly-connected, then training to zero loss can actually lead to worse test accuracy. Our detailed empirical results shed light on phases of learning (and the consequent double-descent behavior), fundamental versus incidental determinants of good generalization, the roles of load-like and temperature-like parameters in the learning process, the different influences of model and data on the loss landscape, and the relationships between local and global metrics, all topics of recent interest.
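As an example of the kind of local metric mentioned above, the sketch below probes local smoothness by measuring the average loss increase under small random weight perturbations; this is an illustrative probe, not one of the paper's specific measures.

```python
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def local_sharpness(model, x, y, sigma=0.01, n_samples=10):
    """Average increase in loss when all weights are perturbed by isotropic
    Gaussian noise of scale sigma (crude local-smoothness probe)."""
    base = F.cross_entropy(model(x), y).item()
    deltas = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))
        deltas.append(F.cross_entropy(noisy(x), y).item() - base)
    return sum(deltas) / n_samples

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
x, y = torch.randn(64, 10), torch.randint(0, 3, (64,))
print(local_sharpness(model, x, y))
```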