Modern convolutional networks are not shift-invariant, as small input shifts or translations can cause drastic changes in the output. Commonly used downsampling methods, such as max-pooling, strided convolution, and average pooling, ignore the sampling theorem. The well-known signal processing fix is anti-aliasing by low-pass filtering before downsampling. However, simply inserting this module into deep networks degrades performance; as a result, it is seldom used today. We show that when integrated correctly, it is compatible with existing architectural components, such as max-pooling and strided convolution. We observe increased accuracy in ImageNet classification, across several commonly-used architectures, such as ResNet, DenseNet, and MobileNet, indicating effective regularization. Furthermore, we observe better generalization, in terms of stability and robustness to input corruptions. Our results demonstrate that this classical signal processing technique has been undeservedly overlooked in modern deep networks.
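A minimal PyTorch sketch of the fix described above: low-pass filter before subsampling, with max-pooling split into a dense (stride-1) max followed by blurred downsampling. The 3x3 binomial kernel and reflect padding are illustrative choices, not necessarily the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Low-pass filter, then subsample (anti-aliased downsampling sketch)."""
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        # Separable binomial (approximately Gaussian) kernel [1, 2, 1] -> 3x3 blur.
        k = torch.tensor([1., 2., 1.])
        k = torch.outer(k, k)
        k = k / k.sum()
        # One copy of the kernel per channel (depthwise convolution).
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).contiguous())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

def blur_max_pool(x, blur: BlurPool2d):
    """Anti-aliased max-pooling: dense max first, then blur + subsample."""
    x = F.max_pool2d(x, kernel_size=2, stride=1)
    return blur(x)
```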
Convolutional neural networks lack shift equivariance due to the presence of downsampling layers. In image classification, adaptive polyphase downsampling (APS-D) was recently proposed to make CNNs perfectly shift invariant. However, in networks used for image reconstruction tasks, it cannot by itself restore shift equivariance. We address this problem by proposing adaptive polyphase upsampling (APS-U), a nonlinear extension of conventional upsampling that allows CNNs with symmetric encoder-decoder architectures (such as U-Net) to exhibit perfect shift equivariance. In experiments on MRI and CT reconstruction, we show that networks containing APS-D/U layers exhibit state-of-the-art equivariance performance without sacrificing image reconstruction quality. Furthermore, unlike prior approaches such as data augmentation and anti-aliasing, the equivariance gains obtained from APS-D/U also extend to images outside the training distribution.
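A hedged sketch of how adaptive polyphase sampling might look under my reading of the abstract: the downsampler keeps the stride-2 polyphase component with the largest norm, and the upsampler reuses the saved index so the encoder-decoder pair commutes with shifts. Selecting one component for the whole batch (rather than per sample) is a simplification of this sketch.

```python
import torch

def adaptive_polyphase_downsample(x: torch.Tensor):
    """Pick the stride-2 polyphase component with the largest l2 norm
    (a sketch of APS-D, not the authors' implementation)."""
    # The four polyphase components of a stride-2 grid.
    comps = [x[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]
    norms = torch.stack([c.flatten(1).norm(dim=1).sum() for c in comps])
    idx = int(norms.argmax())
    return comps[idx], idx  # idx must be reused by APS-U on the decoder side

def adaptive_polyphase_upsample(y: torch.Tensor, idx: int):
    """Place y back on the polyphase grid chosen at downsampling time
    (a sketch of APS-U as a nonlinear extension of upsampling)."""
    B, C, H, W = y.shape
    out = y.new_zeros(B, C, 2 * H, 2 * W)
    i, j = divmod(idx, 2)
    out[..., i::2, j::2] = y
    return out
```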
Existing perceptual similarity metrics assume that an image and its reference are well aligned. As a result, these metrics are often sensitive to a small alignment error that is imperceptible to the human eye. This paper studies the effect of small misalignment, specifically a small shift between the input and reference images, on existing metrics, and accordingly develops a shift-tolerant similarity metric. This paper builds upon LPIPS, a widely used learned perceptual similarity metric, and explores architectural design considerations to make it robust against imperceptible misalignment. Specifically, we study a wide spectrum of neural network elements, such as anti-aliasing filtering, pooling, striding, padding, and skip connections, and discuss their roles in making a robust metric. Based on our studies, we develop a new deep neural network-based perceptual similarity metric. Our experiments show that our metric is tolerant to imperceptible shifts while being consistent with human similarity judgments.
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
We propose a novel antialiasing method to increase shift invariance in convolutional neural networks (CNNs). More precisely, we replace the conventional combination "real-valued convolutions + max pooling" ($\mathbb R$Max) with "complex-valued convolutions + modulus" ($\mathbb C$Mod), which produces stable feature representations for band-pass filters with well-defined orientations. In a recent work, we proved that, for such filters, the two operators yield similar outputs. Therefore, $\mathbb C$Mod can be viewed as a stable alternative to $\mathbb R$Max. To separate band-pass filters from other freely-trained kernels, in this paper, we designed a "twin" architecture based on the dual-tree complex wavelet packet transform, which generates outputs similar to those of standard CNNs with fewer trainable parameters. In addition to improving stability to small shifts, our experiments on AlexNet and ResNet showed increased prediction accuracy on natural image datasets such as ImageNet and CIFAR10. Furthermore, our approach outperformed recent antialiasing methods based on low-pass filtering by preserving high-frequency information, while reducing memory usage.
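A sketch of the $\mathbb C$Mod operator as described: a complex-valued convolution realized as two real convolutions, followed by a pointwise modulus. The paper constrains the kernels via the dual-tree complex wavelet packet transform; leaving them freely trained here is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class CModConv2d(nn.Module):
    """Complex-valued convolution followed by a pointwise modulus
    (a sketch of the C-Mod operator above, not the authors' code)."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        # Real and imaginary parts of the complex kernel; the paper restricts
        # these to band-pass (dual-tree wavelet) filters, which we omit here.
        self.real = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)
        self.imag = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False)

    def forward(self, x):
        # Modulus of the complex response; epsilon keeps the gradient finite at 0.
        return torch.sqrt(self.real(x) ** 2 + self.imag(x) ** 2 + 1e-12)
```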
The theory of convolutional neural networks suggests the property of shift equivariance: shifting the input results in an equally shifted output. In practice, however, this is not always the case. This poses a problem for scene text detection, where a consistent spatial response is crucial, regardless of where the text appears in the scene. Using a simple synthetic experiment, we demonstrate the inherent shift variance of state-of-the-art fully convolutional text detectors. Furthermore, using the same experimental setting, we show how small architectural changes can lead to improved shift equivariance and less variation of the detector output. We validate the synthetic results using a real-world training schedule on a text detection network. To quantify the amount of shift variability, we propose a metric based on well-established text detection benchmarks. While the proposed architectural changes are not able to fully recover shift equivariance, adding smoothing filters significantly improves shift consistency on common text datasets. Considering the potential impact of small shifts, we propose to extend commonly used text detection metrics with the metric described in this work, in order to be able to quantify the consistency of text detectors.
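A generic probe in the spirit of the proposed consistency metric, assuming a fully convolutional detector whose output grid aligns with the input grid; this is not the paper's exact definition.

```python
import torch

def shift_consistency(model, x, max_shift=3):
    """Mean output distance between the original input and slightly shifted
    copies (a hedged, generic probe of shift variance)."""
    base = model(x)
    dists = []
    for dx in range(1, max_shift + 1):
        shifted = torch.roll(x, shifts=(dx, dx), dims=(-2, -1))
        out = model(shifted)
        # Undo the shift on the output; this assumes the detector's output
        # grid matches the input grid, which is a simplification.
        out = torch.roll(out, shifts=(-dx, -dx), dims=(-2, -1))
        dists.append((out - base).abs().mean())
    return torch.stack(dists).mean()
```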
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state-of-the-art results on CIFAR10 and rotated MNIST.
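A minimal sketch of a G-convolution for the rotation group p4: the input is correlated with the filter at all four 90-degree rotations, producing an extra group axis. Full G-CNNs also propagate the group axis through deeper layers; this shows only the first (lifting) layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv2d(nn.Module):
    """Lifting convolution for p4: share one filter across 0/90/180/270-degree
    rotations (a minimal G-convolution sketch)."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.1)
        self.padding = padding

    def forward(self, x):
        outs = [F.conv2d(x, torch.rot90(self.weight, r, dims=(-2, -1)),
                         padding=self.padding) for r in range(4)]
        # Output carries an extra group axis: (B, out_ch, 4 rotations, H, W).
        return torch.stack(outs, dim=2)
```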
In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, IMAGENET-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called IMAGENET-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations, rather than worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.
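The benchmark's headline number, the mean corruption error (mCE), sums per-corruption errors over the five severities, normalizes by AlexNet's errors, and averages over corruptions. A small sketch:

```python
def mean_corruption_error(errors, alexnet_errors):
    """mCE as defined for IMAGENET-C.

    errors, alexnet_errors: dict corruption_name -> list of 5 top-1 error
    rates, one per severity level.
    """
    ces = []
    for c in errors:
        # Corruption error: total error normalized by the AlexNet baseline.
        ces.append(sum(errors[c]) / sum(alexnet_errors[c]))
    return sum(ces) / len(ces)
```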
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise, or fog. Recent methods that address this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on $\ell_p$-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing AdversarialAugment, a technique that optimizes the parameters of image-to-image models to generate adversarially corrupted augmented images. We theoretically motivate our method and give sufficient conditions for the consistency of its idealized version, as well as that of DeepAugment. Our classifiers improve upon the state of the art on common image corruption benchmarks evaluated in expectation on CIFAR-10-C, and improve worst-case performance against $\ell_p$-norm bounded perturbations on both CIFAR-10 and ImageNet.
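One way such a step could look, as a hedged sketch rather than the authors' algorithm: the augmenter's parameters are adapted to increase the classifier loss, and the classifier then trains on the hardened corruptions. The inner step count and the sign-ascent update are my assumptions.

```python
import torch

def adversarial_augment_step(classifier, augmenter, x, y, loss_fn,
                             inner_steps=1, lr=0.1):
    """One training step in the spirit of the abstract (a hedged sketch)."""
    phi = list(augmenter.parameters())
    for _ in range(inner_steps):
        loss = loss_fn(classifier(augmenter(x)), y)
        grads = torch.autograd.grad(loss, phi)
        with torch.no_grad():
            for p, g in zip(phi, grads):
                p += lr * g.sign()  # ascend: make the corruption harder
    # Outer loss for the classifier; the augmenter output is detached so only
    # the classifier is updated by the caller's backward pass.
    return loss_fn(classifier(augmenter(x).detach()), y)
```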
Over the past years, convolutional neural networks (CNNs) have been the dominant neural architecture across a wide range of computer vision tasks. From an image and signal processing point of view, this success may be surprising, as the inherent spatial pyramid design of most CNNs apparently violates a basic signal processing law - the sampling theorem - in their downsampling operations. However, since poor sampling did not seem to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work [17] in the context of adversarial attacks and distribution shifts showed, after all, a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor downsampling operations. This paper builds on these findings and introduces an aliasing-free downsampling operation that can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple and fast FGSM adversarial training, our hyperparameter-free operator significantly improves model robustness and avoids catastrophic overfitting.
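A sketch of an FFT-based low-cut downsampler matching the description above: transform, keep only the lowest spatial frequencies, and invert on the smaller grid. Details such as the normalization convention are my choices, not necessarily the authors'.

```python
import torch

def flc_pool(x: torch.Tensor) -> torch.Tensor:
    """Alias-free 2x downsampling by discarding high spatial frequencies
    (a sketch of FrequencyLowCut pooling; assumes even H and W)."""
    B, C, H, W = x.shape
    X = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    # Crop the centered spectrum to half size: only low frequencies survive,
    # so subsampling cannot alias.
    h0, w0 = H // 4, W // 4
    X = X[..., h0:h0 + H // 2, w0:w0 + W // 2]
    x_small = torch.fft.ifft2(torch.fft.ifftshift(X, dim=(-2, -1)))
    # Divide by 4 to compensate for the smaller inverse-FFT normalization
    # ((H/2)*(W/2) instead of H*W); the input is real, so keep the real part.
    return x_small.real / 4.0
```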
Deep neural networks (DNNs) have been widely used in computer vision tasks such as image classification, object detection, and segmentation. Recent studies, however, have shown that they are vulnerable to artificial digital perturbations or distortions in the input images. The accuracy of a network is greatly influenced by the data distribution of its training dataset, and scaling a raw image creates out-of-distribution data, which makes scaling an adversarial attack capable of fooling networks. In this work, we propose a scaling-distortion dataset, ImageNet-CS, built by scaling a subset of the ImageNet challenge dataset by different factors. The aim of our work is to study the impact of scaled images on the performance of advanced DNNs. We run experiments with several state-of-the-art deep neural network architectures on the proposed ImageNet-CS, and the results show a significant positive correlation between scaling factor and accuracy decline. In addition, based on the ResNet50 architecture, we present tests of several recently proposed robust training techniques and strategies (e.g., AugMix) on ImageNet-CS. The experimental results show that these robust training techniques can improve the robustness of networks to scaling transformations.
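Constructing the scaled variants is straightforward; a minimal sketch assuming PIL images and placeholder scale factors (the paper's exact multiples are not given here):

```python
import torchvision.transforms.functional as TF

def scaled_variants(img, factors=(1.5, 2.0, 3.0)):
    """Build scaled-then-cropped versions of a PIL image, in the spirit of the
    ImageNet-CS construction sketched above (factors are my placeholders)."""
    w, h = img.size
    out = []
    for f in factors:
        big = TF.resize(img, [int(h * f), int(w * f)])
        out.append(TF.center_crop(big, [h, w]))  # keep the original canvas size
    return out
```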
We provide a detailed evaluation of various image classification architectures (convolutional, vision transformer, and fully connected MLP networks) and data augmentation techniques. We make the following observations: (a) In the absence of data augmentation, all architectures, including convolutional networks, degrade in performance when evaluated on translated test distributions. Understandably, for the non-convolutional architectures, both in-distribution accuracy and the degradation under shifts are significantly worse. (b) Across all architectures, even a minimal augmentation of $4$-pixel random crops improves the robustness of performance to much larger magnitude shifts of $8$-$16$ pixels in the test data - suggesting a form of meta-generalization from augmentation. For the non-convolutional architectures, while absolute accuracy remains low, we see a significant improvement in robustness to large translation shifts. (c) With a sufficiently advanced augmentation pipeline ($4$-pixel crop + RandAugment + RandomErasing + MixUp), all architectures can be trained to have competitive performance, both in terms of in-distribution accuracy and generalization to large translation shifts.
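One way to assemble the "advanced" pipeline from observation (c) in torchvision; the exact hyperparameters below (crop padding, erasing probability, MixUp alpha) are my assumptions, not the paper's settings.

```python
import torchvision.transforms as T

# A sketch of the 4-pixel-crop + RandAugment + RandomErasing pipeline for
# 32x32 inputs; RandomErasing needs tensors, so it follows ToTensor.
train_tf = T.Compose([
    T.RandomCrop(32, padding=4),  # the "4-pixel random crop"
    T.RandAugment(),
    T.ToTensor(),
    T.RandomErasing(p=0.25),
])

# MixUp operates on whole batches after the DataLoader, e.g. via
# torchvision.transforms.v2.MixUp(alpha=0.2, num_classes=...) in recent
# torchvision releases.
```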
Shift invariance is a key property of CNNs that improves classification performance. However, we show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks. We first characterize the margin between classes when a shift-invariant linear classifier is used, and show that the margin can depend only on the DC component of the signals. Then, using results on infinitely wide networks, we show that in some simple cases, fully connected and shift-invariant neural networks produce linear decision boundaries. Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background. This is more than a curiosity; we show empirically that, with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection.
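A short worked version of the linear case makes the DC-component claim concrete:

```latex
% Sketch: a circular-shift-invariant linear classifier sees only the DC component.
Let $T_s$ denote the circular shift by $s$ samples. If a linear classifier
$f(x) = w^\top x$ is invariant to all circular shifts, then
\[
  w^\top x = w^\top T_s x \;\;\forall x, s
  \quad\Longrightarrow\quad w = T_s^\top w \;\;\forall s
  \quad\Longrightarrow\quad w = c\,\mathbf{1}
\]
for some scalar $c$, so $f(x) = c \sum_i x_i$: the score, and hence the margin
between two classes, can depend only on the DC (mean) component of the signals.
```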
Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AUGMIX, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AUGMIX significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
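A sketch of the AUGMIX mixing scheme: several randomly composed augmentation chains are convexly combined, then blended with the original image. The Dirichlet/Beta parameterization follows the paper's published pseudocode as I understand it; the Jensen-Shannon consistency loss used during training is omitted, and the `augmentations` list is a placeholder API.

```python
import numpy as np

def augmix(image, augmentations, width=3, depth=-1, alpha=1.0):
    """Mix `width` randomly composed augmentation chains, then blend with the
    original image. `image` is a float-compatible NumPy array;
    `augmentations` is a list of functions image -> image (placeholder)."""
    ws = np.random.dirichlet([alpha] * width)   # chain mixing weights
    m = np.random.beta(alpha, alpha)            # blend weight with the original
    mix = np.zeros_like(image, dtype=np.float32)
    for w in ws:
        aug = image.copy()
        d = depth if depth > 0 else np.random.randint(1, 4)
        for _ in range(d):
            aug = np.random.choice(augmentations)(aug)
        mix += w * aug.astype(np.float32)
    return (1 - m) * image.astype(np.float32) + m * mix
```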
Despite their impressive performance on image classification tasks, deep networks still struggle to generalize to many common corruptions of their data. To address this vulnerability, prior works have mostly focused on increasing the complexity of their training pipelines, combining multiple methods in the name of diversity. In this work, however, we take a step back and follow a principled approach to achieve robustness to common corruptions. We propose PRIME, a general data augmentation scheme consisting of a simple family of max-entropy image transformations. We show that PRIME outperforms the prior art in terms of corruption robustness, while its simplicity and plug-and-play nature allow it to be combined with other methods to further boost their robustness. Furthermore, we analyze the importance of the mixing strategy for synthesizing corrupted images, and shed light on the robustness-accuracy trade-offs that arise in the context of common corruptions. Finally, we show that the computational efficiency of our method allows it to be easily used in both on-line and off-line data augmentation schemes.
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash/no-flash input pairs.
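A minimal sketch of the procedure: fit a randomly initialized generator to a single corrupted image from a fixed noise input and stop early, so the architecture itself acts as the prior. The network, its input channel count, and the step budget are placeholders.

```python
import torch

def deep_image_prior_denoise(noisy, net, steps=2000, lr=0.01):
    """Fit `net` (any convolutional encoder-decoder mapping a 32-channel noise
    tensor to an image of the same spatial size - an assumption of this
    sketch) to a single noisy image of shape (1, C, H, W)."""
    z = torch.randn(1, 32, noisy.shape[-2], noisy.shape[-1])  # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
        # In practice, stop before the network starts fitting the noise.
    return net(z).detach()
```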
We propose layer saturation - a simple, online-computable method for analyzing the information processing of neural networks. First, we show that a layer's output can be restricted to the eigenspace of its variance matrix without loss of performance. We propose a computationally lightweight method for approximating the variance matrix during training. From the dimension of its lossless eigenspace we derive layer saturation - the ratio between the eigenspace dimension and the layer width. We show that saturation appears to indicate which layers contribute to network performance. We demonstrate how to alter layer saturation in a neural network by changing the network depth, filter sizes, and input resolution. Furthermore, we show that a well-chosen input resolution increases network performance by distributing the inference process more evenly across the network.
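A sketch of the saturation computation as the abstract describes it: the dimension of the (nearly) lossless eigenspace of the layer's output covariance, divided by the layer width. The variance threshold `delta` is my assumption.

```python
import torch

def layer_saturation(feats: torch.Tensor, delta: float = 0.99) -> float:
    """feats: a batch of layer outputs, flattened to shape (N, width).
    Returns eigenspace dimension / layer width (a hedged sketch)."""
    feats = feats.flatten(1)
    feats = feats - feats.mean(dim=0, keepdim=True)
    cov = feats.T @ feats / (feats.shape[0] - 1)
    eig = torch.linalg.eigvalsh(cov).flip(0)   # eigenvalues, descending
    ratio = torch.cumsum(eig, 0) / eig.sum()
    k = int((ratio < delta).sum()) + 1         # smallest k keeping `delta` variance
    return k / feats.shape[1]
```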
The literature on robustness against common corruptions offers no consensus on whether adversarial training can improve performance in this setting. First, we show that, when used with an appropriately selected perturbation radius, $\ell_p$ adversarial training can serve as a strong baseline against common corruptions, improving both accuracy and calibration. Then we explain why adversarial training performs better than data augmentation with simple Gaussian noise, which has been observed to be a meaningful baseline against common corruptions. Related to this, we identify a $\sigma$-overfitting phenomenon whereby Gaussian augmentation overfits to the particular standard deviation $\sigma$ used for training, which has a significant detrimental effect on common-corruption accuracy. We discuss how to alleviate this problem, and then how to further enhance $\ell_p$ adversarial training by introducing an efficient relaxation of adversarial training based on a learned perceptual image patch similarity. Through experiments on CIFAR-10 and ImageNet-100, we show that our approach not only improves upon the $\ell_p$ adversarial training baseline, but also yields cumulative gains when combined with data augmentation methods such as AugMix, DeepAugment, ANT, and SIN, leading to state-of-the-art performance on common corruptions. The code of our experiments is publicly available at https://github.com/tml-epfl/adv-training-corruptions.
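As one simple illustration of the $\sigma$-overfitting diagnosis (my example, not the paper's fix), Gaussian-noise augmentation can sample its standard deviation per batch instead of fixing it:

```python
import torch

def gaussian_augment(x, sigma_range=(0.0, 0.5)):
    """Gaussian-noise augmentation with a per-batch random sigma, to avoid
    overfitting to one training sigma (an illustrative sketch; the range
    is a placeholder)."""
    sigma = torch.empty(1).uniform_(*sigma_range).item()
    return x + sigma * torch.randn_like(x)
```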
Invariance to a broad array of image corruptions, such as warping, noise, or color shifts, is an important aspect of building robust models in computer vision. Recently, several new data augmentations have been proposed that significantly improve performance on ImageNet-C, the benchmark for such corruptions. However, a basic understanding of the relationship between data augmentations and test-time corruptions is still lacking. To this end, we develop a feature space of image transforms, and then use a new measure in this space between augmentations and corruptions, called the minimal sample distance, to demonstrate a strong correlation between similarity and performance. We then investigate recent data augmentations and observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from those in ImageNet-C. Our results suggest that test error can be improved by training on perceptually similar augmentations, and that data augmentation may not generalize well beyond the existing benchmark. We hope our results and tools will allow for more robust progress towards improving robustness to image corruptions. We provide code at https://github.com/facebookresearch/augmentation-corruption.
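A generic reading of the minimal-sample-distance idea, hedged since the paper's exact feature space and aggregation may differ: measure how far a corruption's summarized features lie from the nearest augmentation sample.

```python
import torch

def minimal_sample_distance(aug_feats: torch.Tensor,
                            corr_feats: torch.Tensor) -> torch.Tensor:
    """Distance from a corruption to the nearest augmentation sample in a
    feature space of image transforms (a hedged sketch).

    aug_feats:  (N_aug, D) features of augmentation samples.
    corr_feats: (N_corr, D) features of samples of one corruption.
    """
    center = corr_feats.mean(dim=0)              # summarize the corruption
    d = torch.cdist(aug_feats, center[None, :])  # (N_aug, 1) distances
    return d.min()
```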
Neural network ensembles, such as Bayesian neural networks (BNNs), have shown success in the areas of uncertainty estimation and robustness. However, a crucial challenge prohibits their use in practice: BNNs require a large number of predictions to produce reliable results, which greatly increases computational cost. To alleviate this issue, we propose spatial smoothing, a method that spatially ensembles neighboring feature map points of convolutional neural networks. By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves the accuracy, uncertainty estimation, and robustness of BNNs across the whole range of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve high predictive performance with only a handful of ensembles. Moreover, the method can also be applied to canonical deterministic neural networks to improve their performance. A variety of evidence suggests that the improvements can be attributed to stabilized feature maps and a smoother loss landscape. In addition, we provide a fundamental explanation for prior works - namely, global average pooling, pre-activation, and ReLU6 - by regarding them as special cases of spatial smoothing. These not only improve accuracy but also improve uncertainty estimation and robustness by smoothing the loss landscape in the same manner as spatial smoothing. The code is available at https://github.com/xxxnell/spatial-smoothing.
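The blur layer itself is tiny; a sketch of a stride-1 spatial-smoothing module (kernel size 2 is the simplest choice here, not necessarily the authors'):

```python
import torch.nn as nn

# Ensemble neighboring feature-map points by local averaging while keeping
# the spatial size: pad by one on the right/bottom, then 2x2 average with
# stride 1 (a sketch of a "blur layer" in the sense of the abstract).
spatial_smoothing = nn.Sequential(
    nn.ReflectionPad2d((0, 1, 0, 1)),
    nn.AvgPool2d(kernel_size=2, stride=1),
)
```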