While novel computer vision architectures are gaining traction, the impact of model architecture is often conflated with changes in, or exploration of, the training methodology. Identity-mapping-based architectures, ResNets and DenseNets, promised path-breaking results on the image classification task and remain go-to methods even now when the available data is fairly limited. Considering their ease of training with limited resources, this work revisits ResNets and improves ResNet50 \cite{Resnets} by using mixup data augmentation as regularization and by tuning the hyperparameters.
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve the generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. We then propose a new method to improve Mixup based on the novel insight. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across various datasets using a variety of architectures, for instance, exhibiting an improvement over Mixup by 0.8% in ImageNet top-1 accuracy.
Data mixing augmentation has proven effective in improving the generalization ability of deep neural networks. While early methods mix samples by hand-crafted policies (e.g., linear interpolation), recent methods utilize saliency information to match the mixed samples and labels via complex offline optimization. However, there is a trade-off between precise mixing policies and optimization complexity. To address this challenge, we propose a novel automatic mixup (AutoMix) framework in which the mixup policy is parameterized and serves the ultimate classification goal directly. Specifically, AutoMix reformulates mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework. For the generation, a learnable lightweight mixup generator, Mix Block, is designed to generate mixed samples by modeling patch-wise relationships under the direct supervision of the corresponding mixed labels. To prevent the degradation and instability of bi-level optimization, we further introduce a momentum pipeline to train AutoMix in an end-to-end manner. Extensive experiments on nine image benchmarks demonstrate the superiority of AutoMix compared with state-of-the-art methods in various classification scenarios and downstream tasks.
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
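To make the idea of interpolating hidden representations concrete, below is a minimal PyTorch-style sketch. The fixed single mixing layer, the toy encoder/head split, and all names are illustrative assumptions (Manifold Mixup itself samples the layer to mix at random); this is not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStageNet(nn.Module):
    """Toy network split into an encoder and a head so that hidden
    representations of two inputs can be interpolated."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, x, x_pair=None, lam=1.0):
        h = self.encoder(x)
        if x_pair is not None:  # Manifold-Mixup-style interpolation in feature space
            h = lam * h + (1.0 - lam) * self.encoder(x_pair)
        return self.head(h)

def manifold_mixup_loss(model, x, y, num_classes=10, alpha=2.0):
    """Soft cross-entropy on interpolated hidden representations and labels."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    logits = model(x, x_pair=x[perm], lam=lam)
    onehot = F.one_hot(y, num_classes).float()
    soft = lam * onehot + (1.0 - lam) * onehot[perm]
    return torch.sum(-soft * F.log_softmax(logits, dim=1), dim=1).mean()
```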
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
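As a concrete illustration of training on convex combinations of example pairs and their labels, here is a minimal PyTorch-style sketch of one mixup training step; function and argument names are illustrative, not taken from the authors' code.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, num_classes, alpha=0.2):
    """One training step with mixup: interpolate inputs and one-hot labels."""
    lam = np.random.beta(alpha, alpha)        # mixing coefficient
    perm = torch.randperm(x.size(0))          # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]   # convex combination of inputs
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]  # combined labels
    logits = model(x_mix)
    # soft cross-entropy against the mixed target
    return torch.sum(-y_mix * F.log_softmax(logits, dim=1), dim=1).mean()
```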
CutMix is a popular augmentation technique commonly used for training modern convolutional and transformer vision networks. It was originally designed to encourage Convolutional Neural Networks (CNNs) to focus more on an image's global context rather than local information, which greatly improves the performance of CNNs. However, we find it to have limited benefits for transformer-based architectures that naturally have a global receptive field. In this paper, we propose a novel data augmentation technique, TokenMix, to improve the performance of vision transformers. TokenMix mixes two images at the token level by partitioning the mixed region into multiple separated parts. Besides, we show that the mixed learning target in CutMix, a linear combination of a pair of ground-truth labels, can be inaccurate and sometimes counter-intuitive. To obtain a more suitable target, we propose to assign the target score according to the content-based neural activation maps of the two images from a pre-trained teacher model, which does not need to have high performance. With extensive experiments on various vision transformer architectures, we show that our proposed TokenMix helps vision transformers focus on the foreground area to infer the classes and enhances their robustness, with consistent performance gains. Notably, we improve DeiT-T/S/B by +1% ImageNet top-1 accuracy. Besides, TokenMix benefits from longer training, achieving 81.2% top-1 accuracy on ImageNet with DeiT-S trained for 400 epochs. Code is available at https://github.com/sense-x/tokenmix.
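A hedged sketch of the token-level mixing mask described above follows: the budget of mixed tokens is split into several separated chunks rather than one contiguous block. The chunk-placement scheme, the parameter names, and the omission of the teacher-based target assignment are all simplifying assumptions (randomly placed chunks may also overlap and merge here).

```python
import torch

def tokenmix_mask(num_tokens, mix_ratio=0.5, num_parts=4, generator=None):
    """Build a boolean token mask whose active entries form several separated
    chunks; masked positions take their tokens from the second image."""
    n_mix = int(num_tokens * mix_ratio)
    part_sizes = [n_mix // num_parts] * num_parts
    part_sizes[0] += n_mix - sum(part_sizes)   # distribute the remainder
    mask = torch.zeros(num_tokens, dtype=torch.bool)
    for size in part_sizes:
        if size == 0:
            continue
        start = torch.randint(0, num_tokens - size + 1, (1,), generator=generator).item()
        mask[start:start + size] = True        # one separated chunk of mixed tokens
    return mask
```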
Adversarial training suffers from robust overfitting, a phenomenon where the robust test accuracy starts to decrease during training. In this paper, we focus on reducing robust overfitting by using common data augmentation schemes. We demonstrate that, contrary to previous findings, when combined with model weight averaging, data augmentation can significantly boost robust accuracy. Furthermore, we compare various augmentation techniques and observe that spatial composition techniques work best for adversarial training. Finally, we evaluate our approach on CIFAR-10 against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements of +2.93% and +2.16% in robust accuracy compared to previous state-of-the-art methods. In particular, against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our model reaches 60.07% robust accuracy without using any external data. We also achieve a significant performance boost with this approach on other architectures and datasets such as CIFAR-100, SVHN and TinyImageNet.
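The model weight averaging mentioned above can be sketched as an exponential moving average of parameters maintained alongside training; the class name and the decay value here are assumptions for illustration, not the paper's exact procedure.

```python
import copy
import torch

class WeightAverager:
    """Maintains an exponential moving average (EMA) of a model's parameters;
    the averaged weights are typically used for evaluation."""
    def __init__(self, model, decay=0.995):
        self.ema = copy.deepcopy(model)
        for p in self.ema.parameters():
            p.requires_grad_(False)
        self.decay = decay

    @torch.no_grad()
    def update(self, model):
        # ema <- decay * ema + (1 - decay) * current weights
        for ema_p, p in zip(self.ema.parameters(), model.parameters()):
            ema_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```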
Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers. They have proved to be effective for guiding the model to attend on less discriminative parts of objects (e.g. leg as opposed to head of a person), thereby letting the network generalize better and have better object localization capabilities. On the other hand, current methods for regional dropout remove informative pixels on training images by overlaying a patch of either black pixels or random noise. Such removal is not desirable because it leads to information loss and inefficiency during training. We therefore propose the CutMix augmentation strategy: patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches. By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. Source code and pretrained models are available at https://github.com/clovaai/CutMix-PyTorch.
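A minimal sketch of the cut-and-paste operation with area-proportional label weights follows; the helper name and the Beta(α, α) sampling of the area ratio are conventional choices, and this is not the authors' released code.

```python
import numpy as np
import torch

def cutmix_batch(x, y, alpha=1.0):
    """Cut a random patch from a shuffled copy of the batch and paste it in;
    the label weight equals the fraction of the image that was kept."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    h, w = x.shape[2], x.shape[3]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    x[:, :, y1:y2, x1:x2] = x[perm, :, y1:y2, x1:x2]
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)  # kept-area fraction
    # caller: loss = lam_adj * CE(logits, y) + (1 - lam_adj) * CE(logits, y[perm])
    return x, y, y[perm], lam_adj
```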
Modern neural networks are over-parameterized and thus rely on strong regularization such as data augmentation and weight decay to reduce overfitting and improve generalization. The dominant form of data augmentation applies invariant transforms, where the learning target of a sample is invariant to the transform applied to that sample. We draw inspiration from human visual classification studies and propose generalizing augmentation with invariant transforms to soft augmentation where the learning target softens non-linearly as a function of the degree of the transform applied to the sample: e.g., more aggressive image crop augmentations produce less confident learning targets. We demonstrate that soft targets allow for more aggressive data augmentation, offer more robust performance boosts, work with other augmentation policies, and interestingly, produce better calibrated models (since they are trained to be less confident on aggressively cropped/occluded examples). Combined with existing aggressive augmentation strategies, soft target 1) doubles the top-1 accuracy boost across Cifar-10, Cifar-100, ImageNet-1K, and ImageNet-V2, 2) improves model occlusion performance by up to $4\times$, and 3) halves the expected calibration error (ECE). Finally, we show that soft augmentation generalizes to self-supervised classification tasks.
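One way to read the non-linear target softening is as a mapping from crop severity to target confidence. The sketch below is a hedged illustration; the particular softening curve, the `power` and `floor` constants, and the blend with a uniform distribution are assumptions, not the paper's exact rule.

```python
import torch
import torch.nn.functional as F

def soft_crop_target(y, num_classes, visible_fraction, power=2.0, floor=0.1):
    """Soften a one-hot target as a non-linear function of how much of the
    object remains visible after cropping: heavier crops give a less
    confident target by blending toward the uniform distribution."""
    confidence = floor + (1.0 - floor) * visible_fraction ** power
    onehot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(onehot, 1.0 / num_classes)
    return confidence * onehot + (1.0 - confidence) * uniform
```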
Several image processing tasks, such as image classification and object detection, have been significantly improved using Convolutional Neural Networks (CNNs). Like ResNet and EfficientNet, many architectures have achieved outstanding results in at least one dataset when they were created. A critical factor in training concerns the network's regularization, which prevents the structure from overfitting. This work analyzes several regularization methods developed in the last few years, showing significant improvements for different CNN models. The works are classified into three main areas: the first, called "data augmentation", gathers techniques that focus on performing changes to the input data. The second, named "internal changes", describes procedures that modify the feature maps generated by the neural network or the kernels. The last one, called "label", concerns transforming the labels of a given input. This work presents two main differences compared to other available surveys on regularization: (i) the papers gathered in the manuscript are no older than five years, and (ii) reproducibility, i.e., all the works referred to here have their code available in public repositories or have been directly implemented in some framework, such as TensorFlow or Torch.
Modern deep networks can be better generalized when trained with noisy samples and regularization techniques. Mixup and CutMix have been proven to be effective for data augmentation to help avoid overfitting. Previous Mixup-based methods linearly combine images and labels to generate additional training data. However, this is problematic if the object does not occupy the whole image as we demonstrate in Figure 1. Correctly assigning the label weights is hard even for human beings and there is no clear criterion to measure it. To tackle this problem, in this paper, we propose LUMix, which models such uncertainty by adding label perturbation during training. LUMix is simple as it can be implemented in just a few lines of code and can be universally applied to any deep networks \eg CNNs and Vision Transformers, with minimal computational cost. Extensive experiments show that our LUMix can consistently boost the performance for networks with a wide range of diversity and capacity on ImageNet, \eg $+0.7\%$ for a small model DeiT-S and $+0.6\%$ for a large variant XCiT-L. We also demonstrate that LUMix can lead to better robustness when evaluated on ImageNet-O and ImageNet-A. The source code can be found \href{https://github.com/kevin-ssy/LUMix}{here}.
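The label-perturbation idea can be sketched as adding small random noise to the label-mixing weight of a Mixup/CutMix pair; the noise form and the names below are assumptions for illustration rather than LUMix's exact formulation.

```python
import numpy as np
import torch
import torch.nn.functional as F

def perturbed_mixed_targets(y_a, y_b, lam, num_classes, noise_scale=0.1):
    """Perturb the label-mixing weight with small uniform noise so the target
    acknowledges uncertainty about the true area/label ratio."""
    lam_label = float(np.clip(lam + np.random.uniform(-noise_scale, noise_scale), 0.0, 1.0))
    one_a = F.one_hot(y_a, num_classes).float()
    one_b = F.one_hot(y_b, num_classes).float()
    return lam_label * one_a + (1.0 - lam_label) * one_b
```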
Convolutional Neural Networks (CNNs) have demonstrated superiority in learning patterns, but are sensitive to label noises and may overfit noisy labels during training. The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels. Motivated by biological findings that the amplitude spectrum (AS) and phase spectrum (PS) in the frequency domain play different roles in the animal's vision system, we observe that PS, which captures more semantic information, can increase the robustness of DNNs to label noise, more so than AS can. We thus propose early stops at different times for AS and PS by disentangling the features of some layer(s) into AS and PS using Discrete Fourier Transform (DFT) during training. Our proposed Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method is shown to be effective on both synthetic and real-world label-noise datasets. PADDLES outperforms other early stopping methods and obtains state-of-the-art performance.
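The amplitude/phase disentanglement itself is a straightforward DFT decomposition; a minimal PyTorch sketch is shown below. How and when each spectrum is early-stopped is omitted, and the function names are illustrative.

```python
import torch

def amplitude_phase_split(features):
    """Disentangle a feature map into amplitude and phase spectra with a 2-D DFT
    so the two components can be handled (e.g., early-stopped) separately."""
    spec = torch.fft.fft2(features)       # complex spectrum over the last two dims
    return spec.abs(), spec.angle()       # amplitude spectrum (AS), phase spectrum (PS)

def recombine(amplitude, phase):
    """Reassemble features from (possibly separately processed) AS and PS."""
    spec = torch.polar(amplitude, phase)  # amplitude * exp(i * phase)
    return torch.fft.ifft2(spec).real
```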
We propose the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that, regardless of the chosen mixing strategy, MSDA behaves as a pixel-level regularization of the underlying training loss and a regularization of the first-layer parameters. Similarly, our theoretical results support that the MSDA training strategy can improve adversarial robustness and generalization compared with the vanilla training strategy. Using the theoretical results, we provide a high-level understanding of how different design choices of MSDA work. For example, we show that the most popular MSDA methods, Mixup and CutMix, behave differently: CutMix regularizes the input gradients by pixel distances, whereas Mixup regularizes the input gradients regardless of pixel distances. Our theoretical results also show that the optimal MSDA strategy depends on tasks, datasets, or model parameters. From these observations, we propose generalized MSDAs, a hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), simple extensions of Mixup and CutMix. Our proposed methods can leverage the advantages of Mixup and CutMix while being very efficient, with almost negligible additional computational cost over Mixup and CutMix. Our empirical study shows that HMix and GMix outperform the previous state-of-the-art MSDA methods on CIFAR-100 and ImageNet classification tasks. Source code is available at https://github.com/naver-ai/hmix-gmix
Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there are a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training to allow online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms recent state-of-the-art. Source code is available at https://git.io/fjsvE.
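A hedged sketch of the bootstrapping-style loss correction follows: each sample's target blends its given label with the network's own prediction, weighted by an estimated probability that the label is clean. The beta-mixture fitting that would produce that probability is omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def dynamic_bootstrap_loss(logits, targets, p_clean):
    """Bootstrapping loss: the per-sample target is a convex combination of the
    (possibly noisy) label and the network's own prediction, weighted by an
    externally estimated probability `p_clean` that the label is correct."""
    num_classes = logits.size(1)
    onehot = F.one_hot(targets, num_classes).float()
    pred = F.softmax(logits, dim=1).detach()
    soft_target = p_clean.unsqueeze(1) * onehot + (1.0 - p_clean).unsqueeze(1) * pred
    return torch.sum(-soft_target * F.log_softmax(logits, dim=1), dim=1).mean()
```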
Data augmentation is an important technique for improving recognition accuracy in deep-learning-based object recognition. Methods that generate mixed data from multiple samples, such as mixup, can introduce new diversity that is not contained in the training data and thereby help improve accuracy. However, because the data to be mixed are selected randomly throughout training, there are cases where inappropriate classes or samples are chosen. In this study, we propose a data augmentation method that computes the distance between classes from class probabilities and selects data from suitable classes to be mixed during training. The mixed data are dynamically adjusted according to the training trend of each class so as to facilitate training. The proposed method is used in combination with conventional methods to generate mixed data. Evaluation experiments show that the proposed method improves recognition performance on general and long-tailed image recognition datasets.
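One simple way to realize "distance between classes computed from class probabilities" is sketched below: distances are taken between mean predicted probability vectors per class, and a mixing partner class is sampled from them. The distance metric, the softmax-style weighting, and the preference for nearby classes are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def class_distance_matrix(mean_class_probs):
    """Pairwise Euclidean distances between classes, where each class is
    represented by its mean predicted probability vector (shape [C, C])."""
    diff = mean_class_probs[:, None, :] - mean_class_probs[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def sample_partner_class(c, dist, temperature=1.0):
    """Pick a class to mix with class `c`, favoring nearby classes; whether
    near or far classes should be preferred is a design choice."""
    weights = np.exp(-dist[c] / temperature)
    weights[c] = 0.0                      # never mix a class with itself
    weights /= weights.sum()
    return np.random.choice(len(weights), p=weights)
```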
Data augmentation has been widely used to improve the performance of deep networks. Numerous approaches, such as dropout, regularization, and image augmentation, have been proposed to avoid overfitting and enhance the generalization of neural networks. One sub-area of data augmentation is image mixing and deleting. This specific type of augmentation either mixes two images or deletes image regions to hide or obscure certain features of an image, forcing the model to emphasize the overall structure of the object in the image. Models trained with this approach have been shown to perform better than those trained without image mixing or deleting. An additional benefit of this training method is robustness against image corruption. Owing to its low computational cost, many image mixing and deleting techniques have been proposed recently. This paper provides a detailed review of these devised approaches, dividing the augmentation strategies into three main categories: cut and delete, cut and mix, and mixup. The second part of the paper evaluates these methods for image classification, fine-grained image recognition, and object detection, where it is shown that this category of data augmentation improves the overall performance of deep neural networks.
In this paper, we present a modified Xception architecture, the NEXcepTion network. Our network has significantly better performance than the original Xception, achieving top-1 accuracy of 81.5% on the ImageNet validation dataset (an improvement of 2.5%) as well as a 28% higher throughput. Another variant of our model, NEXcepTion-TP, reaches 81.8% top-1 accuracy, similar to ConvNeXt (82.1%), while having a 27% higher throughput. Our model is the result of applying improved training procedures and new design decisions combined with an application of Neural Architecture Search (NAS) on a smaller dataset. These findings call for revisiting older architectures and reassessing their potential when combined with the latest enhancements.
Mixup is a popular data-dependent augmentation technique for deep neural networks, which consists of two sub-tasks: mixup generation and classification. The community typically confines mixup to supervised learning (SL), and the objective of the generation sub-task is fixed to the sampled pairs rather than considering the whole data manifold. To overcome these limitations, we systematically study the objectives of the two sub-tasks and propose Scenario-Agnostic Mixup, named SAMix, for both SL and self-supervised learning (SSL) scenarios. Specifically, we hypothesize and verify that the core objective of mixup generation is to optimize the local smoothness between two classes subject to global discrimination from other classes. Based on this finding, an $\eta$-balanced mixup loss is proposed for complementary training of the two sub-tasks. Meanwhile, the generation sub-task is parameterized as an optimizable module, Mixer, which utilizes an attention mechanism to generate mixed samples without label dependency. Extensive experiments on SL and SSL tasks demonstrate that SAMix consistently outperforms leading methods by a large margin.
The problem of class-imbalanced data is that the classifier's generalization performance deteriorates for minority classes due to their lack of data. In this paper, we propose a novel minority over-sampling method that augments diversified minority samples by leveraging the rich context of the majority classes as background images. To diversify the minority samples, our key idea is to paste a foreground patch from a minority class onto a background image from a majority class with rich context. Our method is simple and can easily be combined with existing long-tailed recognition methods. We demonstrate the effectiveness of the proposed over-sampling method through extensive experiments and ablation studies. Without any architectural changes or complex algorithms, our method achieves state-of-the-art performance on various long-tailed classification benchmarks. Our code will be made publicly available.
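A minimal sketch of pasting a minority-class foreground patch onto a majority-class background follows; the random rectangular patch, the Beta-sampled area ratio, and the choice to keep the minority label are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
import torch

def paste_minority_on_majority(minority_img, majority_img, alpha=1.0):
    """Paste a random patch from a minority-class image (CHW tensor) onto a
    context-rich majority-class background; the resulting sample is treated
    as a new minority-class example in this sketch."""
    lam = np.random.beta(alpha, alpha)
    _, h, w = minority_img.shape
    ph, pw = int(h * np.sqrt(lam)), int(w * np.sqrt(lam))   # patch size from area ratio
    top = np.random.randint(0, h - ph + 1)
    left = np.random.randint(0, w - pw + 1)
    out = majority_img.clone()
    out[:, top:top + ph, left:left + pw] = minority_img[:, top:top + ph, left:left + pw]
    return out
```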
There is growing interest in the challenging visual perception task of learning from long-tailed class distributions. The extreme class imbalance in the training dataset biases the model toward recognizing majority-class data over minority-class data. Recently, a dual branch network (DBN) framework with two branches has been proposed, in which a conventional branch and a re-balancing branch are used to improve the accuracy of long-tailed visual recognition. The re-balancing branch uses a reversed sampler to generate class-balanced training samples to mitigate the bias caused by class imbalance. Although this strategy has been quite successful in handling bias, training with a reversed sampler can degrade representation learning performance. To alleviate this issue, conventional methods have employed a carefully designed cumulative learning strategy, in which the influence of the re-balancing branch gradually increases over the course of training. In this study, we aim to develop a simple yet effective method to improve the performance of DBN without the cumulative learning strategy that requires tuning. We devise a simple data augmentation method, termed bilateral mixup augmentation, which combines one sample from the uniform sampler with another sample from the reversed sampler to produce a training sample. Furthermore, we introduce class-conditional temperature scaling, which mitigates the bias toward the majority classes for the proposed DBN architecture. Our experiments on widely used long-tailed visual recognition datasets show that bilateral mixup augmentation is quite effective in improving the representation performance of DBN and that the proposed method achieves state-of-the-art performance on several benchmark categories.
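The bilateral mixup described above can be sketched as a standard mixup between one batch drawn by the uniform sampler and one drawn by the reversed sampler; the sampler construction and the class-conditional temperature scaling are omitted, and all names are assumed for illustration.

```python
import numpy as np
import torch
import torch.nn.functional as F

def bilateral_mixup(x_uniform, y_uniform, x_reversed, y_reversed, num_classes, alpha=1.0):
    """Mix a batch drawn by the uniform sampler with a batch drawn by the
    reversed (minority-favoring) sampler; labels are mixed with the same weight."""
    lam = np.random.beta(alpha, alpha)
    x = lam * x_uniform + (1.0 - lam) * x_reversed
    y_u = F.one_hot(y_uniform, num_classes).float()
    y_r = F.one_hot(y_reversed, num_classes).float()
    y = lam * y_u + (1.0 - lam) * y_r
    return x, y
```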