Learning image representations using synthetic data allows training neural networks without some of the concerns associated with real images, such as privacy and bias. Existing work focuses on a handful of curated generative processes which require expert knowledge to design, making it hard to scale up. To overcome this, we propose training with a large dataset of twenty-one thousand programs, each one generating a diverse set of synthetic images. These programs are short code snippets, which are easy to modify and fast to execute using OpenGL. The proposed dataset can be used for both supervised and unsupervised representation learning, and reduces the gap between pre-training with real and procedurally generated images by 38%.
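For intuition, below is a minimal NumPy stand-in for what one such image program might look like. The actual dataset executes short OpenGL fragment-shader snippets; the sinusoidal color fields, parameter ranges, and the `make_program` helper here are illustrative assumptions, not the paper's programs.

```python
import numpy as np

def make_program(rng):
    """Sample a tiny image 'program': random sinusoidal color fields."""
    freq = rng.uniform(1.0, 20.0, size=(3, 2))   # per-channel spatial frequencies
    phase = rng.uniform(0.0, 2 * np.pi, size=3)  # per-channel phases
    def program(h=224, w=224):
        y, x = np.mgrid[0:h, 0:w] / max(h, w)
        img = np.stack([np.sin(freq[c, 0] * x + freq[c, 1] * y + phase[c])
                        for c in range(3)], axis=-1)
        return ((img + 1) / 2 * 255).astype(np.uint8)  # map [-1, 1] -> [0, 255]
    return program

rng = np.random.default_rng(0)
programs = [make_program(rng) for _ in range(21_000)]  # one entry per program
batch = np.stack([programs[i]() for i in range(8)])    # render a few images
print(batch.shape)  # (8, 224, 224, 3)
```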
Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper we go a step further and ask if we can do away with real image datasets entirely, instead learning from noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. We study two types of noise processes, statistical image models and deep generative models under different random initializations. Our findings show that it is important for the noise to capture certain structural properties of real data, but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property for learning good representations. Datasets, models, and code are available at https://mbaradad.github.io/learning_with_noise.
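As a hedged sketch, here is one simple statistical image model in the spirit of those studied: random-phase noise with a power-law (1/f^alpha) spectrum. Such images are far from realistic yet retain a natural-image-like spectrum, and can be fed to any contrastive learner as training data. The function name and parameters are assumptions for illustration.

```python
import numpy as np

def spectral_noise_image(size=224, alpha=2.0, rng=None):
    rng = rng or np.random.default_rng()
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                       # avoid division by zero at DC
    amplitude = 1.0 / radius ** (alpha / 2)  # power spectrum ~ 1/f^alpha
    channels = []
    for _ in range(3):
        phase = rng.uniform(0, 2 * np.pi, (size, size))
        spectrum = amplitude * np.exp(1j * phase)
        img = np.real(np.fft.ifft2(spectrum))
        img = (img - img.min()) / (np.ptp(img) + 1e-8)
        channels.append(img)
    return np.stack(channels, axis=-1)       # HWC float image in [0, 1]

sample = spectral_noise_image()
print(sample.shape, sample.min(), sample.max())
```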
Annotating images with pixel-wise labels is a time-consuming and costly process. Recently, DatasetGAN showcased a promising alternative: synthesizing a large labeled dataset via a generative adversarial network (GAN) by exploiting a small set of manually labeled, GAN-generated images. Here, we scale DatasetGAN to ImageNet's scale of class diversity. We take image samples from the class-conditional generative model BigGAN trained on ImageNet and manually annotate 5 images per class, for all 1k classes. By training an effective feature segmentation architecture on top of BigGAN, we turn BigGAN into a labeled-dataset generator. We further show that VQGAN can similarly serve as a dataset generator, leveraging the already annotated data. We create a new ImageNet benchmark by labeling an additional set of 8k real images and evaluate segmentation performance in a variety of settings. Through an extensive ablation study, we show big gains in leveraging a large generated dataset to train different supervised and self-supervised backbone models on pixel-wise tasks. Furthermore, we demonstrate that using our synthesized datasets for pre-training yields improvements over standard ImageNet pre-training on several downstream datasets, such as PASCAL-VOC, MS-COCO, Cityscapes, and chest X-ray, and across tasks (detection, segmentation). Our benchmark will be made public and will maintain a leaderboard for this challenging task. Project page: https://nv-tlabs.github.io/big-dataseTgan/
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose data-specific augmentation techniques. Despite their effectiveness, these methods are difficult to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure keeps the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching, using different data for different image synthesis tasks. Our method requires no hyperparameter tuning or modifications to the network architecture. ScoreMix effectively increases the diversity of the data and reduces the overfitting problem. Moreover, it can easily be incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with ScoreMix achieve significant improvements.
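A hedged sketch of the two ScoreMix steps as the abstract describes them: (1) a convex combination of two real samples, and (2) a few gradient steps that shrink the norm of an estimated score, pulling the mixture toward the data manifold. Here `score_net` is a stand-in for the paper's multi-scale score-matching network, and the step count and step size are arbitrary assumptions.

```python
import torch

def scoremix(x1, x2, score_net, steps=5, step_size=0.1):
    lam = torch.rand(x1.size(0), 1, 1, 1, device=x1.device)
    x = (lam * x1 + (1 - lam) * x2).detach().requires_grad_(True)
    for _ in range(steps):
        score = score_net(x)                       # estimate of grad_x log p(x)
        loss = score.flatten(1).norm(dim=1).pow(2).sum()
        grad, = torch.autograd.grad(loss, x)
        x = (x - step_size * grad).detach().requires_grad_(True)  # minimize score norm
    return x.detach()

# Toy usage with an untrained stand-in score network:
score_net = torch.nn.Conv2d(3, 3, 3, padding=1)
x1, x2 = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
augmented = scoremix(x1, x2, score_net)
print(augmented.shape)  # torch.Size([4, 3, 32, 32])
```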
Adversarially trained generative models (GANs) have recently achieved compelling image synthesis results. But despite early successes in using GANs for unsupervised representation learning, they have since been superseded by approaches based on self-supervision. In this work we show that progress in image generation quality translates to substantially improved representation learning performance. Our approach, BigBiGAN, builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator. We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation. Pretrained BigBiGAN models, including image generators and encoders, are available on TensorFlow Hub.
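A hedged sketch of the (image, latent) discriminator structure this setup implies: unary terms on the image and the latent plus a joint term, with the encoder trained so that (x, E(x)) becomes indistinguishable from (G(z), z). The networks below are toy fully-connected stand-ins; all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class JointDiscriminator(nn.Module):
    def __init__(self, img_dim=784, z_dim=64, hidden=256):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())  # image branch
        self.h = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU())    # latent branch
        self.s_x = nn.Linear(hidden, 1)        # unary image score
        self.s_z = nn.Linear(hidden, 1)        # unary latent score
        self.s_xz = nn.Linear(2 * hidden, 1)   # joint (image, latent) score

    def forward(self, x, z):
        fx, hz = self.f(x), self.h(z)
        return self.s_x(fx) + self.s_z(hz) + self.s_xz(torch.cat([fx, hz], dim=1))

D = JointDiscriminator()
G = nn.Linear(64, 784)     # toy generator
E = nn.Linear(784, 64)     # toy encoder
x, z = torch.randn(8, 784), torch.randn(8, 64)
real_score = D(x, E(x))    # encoder pair
fake_score = D(G(z), z)    # generator pair
print(real_score.shape, fake_score.shape)  # torch.Size([8, 1]) twice
```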
Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e. the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes, or keep very large memory banks; increasing memory requirements, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level, that can be computed on-the-fly with a minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection, and instance segmentation and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method. Project page: https://europe.naverlabs.com/mochi
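A hedged sketch of feature-level hard-negative mixing in the spirit described above: synthesize extra negatives by convexly combining the hardest existing negatives on the unit hypersphere. The exact MoCHi recipe differs in details (it also mixes the query itself into negatives), and the sizes below are arbitrary; inputs are L2-normalized feature embeddings.

```python
import torch
import torch.nn.functional as F

def mix_hard_negatives(query, negatives, n_synth=16, hardest=32):
    # rank negatives by similarity to the query, keep the hardest ones
    sims = negatives @ query                       # (N,)
    hard = negatives[sims.topk(hardest).indices]   # (hardest, D)
    i = torch.randint(0, hardest, (n_synth,))
    j = torch.randint(0, hardest, (n_synth,))
    alpha = torch.rand(n_synth, 1)
    mixed = alpha * hard[i] + (1 - alpha) * hard[j]  # negative-negative mixing
    mixed = F.normalize(mixed, dim=1)                # back onto the hypersphere
    return torch.cat([negatives, mixed], dim=0)

query = F.normalize(torch.randn(128), dim=0)
negatives = F.normalize(torch.randn(4096, 128), dim=1)
bank = mix_hard_negatives(query, negatives)
print(bank.shape)  # torch.Size([4112, 128])
```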
Conditional GANs have matured in recent years and are able to generate high-quality, realistic images. However, the computational resources and training data required to train high-quality GANs are enormous, and research into transfer learning for these models is therefore an urgent topic. In this paper, we explore the transfer from high-quality, pre-trained unconditional GANs to conditional GANs. To this end, we propose hypernetwork-based adaptive weight modulation. In addition, we introduce a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters. To further improve the sample efficiency of the knowledge transfer, we propose using a self-supervised (contrastive) loss to improve the GAN discriminator. In extensive experiments, we validate the efficiency of the hypernetworks, the self-initialization, and the contrastive loss on several standard benchmarks.
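Purely as illustration of what hypernetwork-based weight modulation can look like (the paper's actual modulation scheme may differ), here is a toy layer where a small hypernetwork maps a class embedding to per-channel scale factors that modulate frozen pretrained weights, making the layer class-conditional. All names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class ModulatedLinear(nn.Module):
    def __init__(self, pretrained: nn.Linear, num_classes: int, emb_dim: int = 32):
        super().__init__()
        self.weight = pretrained.weight.detach()   # frozen pretrained weights
        self.bias = pretrained.bias.detach()
        self.embed = nn.Embedding(num_classes, emb_dim)
        self.hyper = nn.Linear(emb_dim, pretrained.weight.size(0))  # hypernetwork

    def forward(self, x, y):
        scale = 1 + self.hyper(self.embed(y))      # per-output-channel modulation
        return (x @ self.weight.t() + self.bias) * scale

layer = ModulatedLinear(nn.Linear(64, 128), num_classes=10)
z = torch.randn(4, 64)
y = torch.randint(0, 10, (4,))
print(layer(z, y).shape)  # torch.Size([4, 128])
```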
Pre-training models on ImageNet or other massive datasets of real images has led to major advances in computer vision, albeit accompanied by shortcomings related to curation cost, privacy, usage rights, and ethical issues. In this paper, we study for the first time the transferability of models pre-trained on synthetic data generated by graphics simulators to downstream tasks from very different domains. When using such synthetic data for pre-training, we find that downstream performance on different tasks is favored by different configurations of simulation parameters (e.g., lighting, object pose, backgrounds, etc.), and that there is no one-size-fits-all solution. It is therefore better to tailor synthetic pre-training data to a specific downstream task for best performance. We introduce Task2Sim, a unified model that maps downstream task representations to optimal simulation parameters for generating synthetic pre-training data. Task2Sim learns this mapping by training to find the best parameter sets on a collection of "seen" tasks. Once trained, it can be used to predict the best simulation parameters for novel "unseen" tasks without any additional training. Given a budget in the number of images per class, our extensive experiments with 20 diverse downstream tasks show that Task2Sim's task-adaptive pre-training data yields significantly better downstream performance than non-adaptively chosen simulation parameters, on both seen and unseen tasks. It is even competitive with pre-training on real images.
Generative adversarial networks (GANs) are among the state-of-the-art generative models for realistic image synthesis. While training and evaluating GANs has become increasingly important, the current GAN research ecosystem does not provide reliable benchmarks for consistent evaluation. Furthermore, because there are few validated GAN implementations, researchers devote considerable time to reproducing baselines. We study the taxonomy of GAN approaches and present a new open-source library named StudioGAN. StudioGAN supports 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 3 differentiable augmentations, 7 evaluation metrics, and 5 evaluation backbones. With our training and evaluation protocol, we present a large-scale benchmark using various datasets (CIFAR10, ImageNet, AFHQv2, FFHQ, and Baby/Papa/Granpa-ImageNet) and 3 different evaluation backbones (InceptionV3, SwAV, and Swin Transformer). Unlike other benchmarks used in the GAN community, we train representative GANs, including BigGAN, StyleGAN2, and StyleGAN3, in a unified training pipeline and quantify generation performance with 7 evaluation metrics. The benchmark also evaluates other cutting-edge generative models (e.g., StyleGAN-XL, ADM, MaskGIT, and RQ-Transformer). StudioGAN provides GAN implementations, training and evaluation scripts, and pre-trained weights. StudioGAN is available at https://github.com/postech-cvlab/pytorch-studiogan.
Generative adversarial networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyperparameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting wall-clock time from 5 days to less than 3 hours given the same computational resources.
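A hedged sketch of the projected-GAN idea with toy sizes (not the authors' code): real and generated images pass through a frozen feature network, a random fixed 1x1 convolution mixes information across channels (one of the mixing strategies described), and a lightweight discriminator head operates on the resulting feature maps. The backbone below is a stand-in for a real pretrained model.

```python
import torch
import torch.nn as nn

feature_net = nn.Sequential(                 # stand-in for a pretrained backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
for p in feature_net.parameters():
    p.requires_grad_(False)                  # the feature space stays fixed

mixer = nn.Conv2d(64, 64, 1, bias=False)     # random projection across channels
for p in mixer.parameters():
    p.requires_grad_(False)

disc = nn.Sequential(                        # lightweight discriminator head
    nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4),
)

imgs = torch.randn(8, 3, 64, 64)             # real or generated batch
logits = disc(mixer(feature_net(imgs)))
print(logits.shape)  # per-patch real/fake logits
```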
Recent work argues that robust training requires substantially larger datasets than those required for standard classification. On CIFAR-10 and CIFAR-100, this translates into a sizable robust-accuracy gap between models trained solely on data from the original training set and those trained with additional data extracted from the "80 Million Tiny Images" dataset (TI-80M). In this paper, we explore how generative models trained solely on the original training set can be leveraged to artificially increase the size of the training set and improve adversarial robustness to $\ell_p$ norm-bounded perturbations. We identify sufficient conditions under which incorporating additional generated data can improve robustness, and demonstrate that it is possible to significantly reduce the robust-accuracy gap relative to models trained with additional real data. Surprisingly, we show that even the addition of non-realistic random data (generated by Gaussian sampling) can improve robustness. We evaluate our approach on CIFAR-10, CIFAR-100, SVHN, and TinyImageNet against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements in robust accuracy compared to previous state-of-the-art methods. Against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our models achieve 66.10% and 33.49% robust accuracy on CIFAR-10 and CIFAR-100, respectively (improving upon the state of the art by +8.96% and +3.29%). Against $\ell_2$ norm-bounded perturbations of size $\epsilon = 128/255$, our model achieves 78.31% on CIFAR-10 (+3.81%). These results beat most prior works that use external data.
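A hedged sketch of the data-mixing recipe this describes: each training batch mixes original examples with samples drawn from a generative model, and the classifier is adversarially trained with PGD under an $\ell_\infty$ budget of $\epsilon = 8/255$. The `sample_generated` sampler, the mixing fraction, and the toy model are placeholder assumptions, not the paper's setup.

```python
import torch

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(iters):
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps)  # l_inf projection
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def mixed_batch(real_x, real_y, sample_generated, generated_fraction=0.7):
    """Replace a fraction of the batch with generated examples."""
    n_gen = int(generated_fraction * real_x.size(0))
    gen_x, gen_y = sample_generated(n_gen)       # placeholder generative sampler
    return torch.cat([real_x[n_gen:], gen_x]), torch.cat([real_y[n_gen:], gen_y])

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
sampler = lambda n: (torch.rand(n, 3, 32, 32), torch.randint(0, 10, (n,)))
bx, by = mixed_batch(x, y, sampler)
adv = pgd_attack(model, bx, by)
loss = torch.nn.functional.cross_entropy(model(adv), by)  # robust training loss
```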
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
What can a neural network learn about the visual world from a single image? While one image obviously cannot contain the multitudes of possible objects, scenes, and lighting conditions, within the space of all 256^(3x224x224) possible 224-sized square images it may still provide a strong prior for natural images. To analyze this hypothesis, we develop a framework for training neural networks from scratch using a single image, via knowledge distillation from a supervised pretrained teacher. With this, we find the answer to the above question to be: "surprisingly, a lot". In quantitative terms, we find top-1 accuracies of 94%/74% on CIFAR-10/100 (with corresponding results on ImageNet) and, by extending this method to audio, 84% on SpeechCommands. In extensive analyses, we disentangle the effects of augmentations, the choice of source image, and the network architecture, and we discover "panda neurons" in networks that have never seen a panda. This work shows that one image can be used to extrapolate to thousands of object classes, and it motivates a renewed research agenda on the fundamental interplay of augmentations and images.
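A hedged sketch of such a distillation setup: heavy random crops of one source image are fed to a pretrained teacher and a from-scratch student, and the student matches the teacher's soft predictions with a temperature-scaled KL loss. The networks, augmentation choices, and temperature below are illustrative stand-ins, not the paper's configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.08, 1.0)),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.RandomHorizontalFlip(),
])

def distill_step(single_image, teacher, student, optimizer, batch=64, T=4.0):
    crops = torch.stack([augment(single_image) for _ in range(batch)])
    with torch.no_grad():
        t_logits = teacher(crops)                  # soft targets from the teacher
    s_logits = student(crops)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

teacher = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
image = torch.rand(3, 256, 256)  # the single training image
print(distill_step(image, teacher, student, opt))
```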
Self-supervised learning (SSL) has drawn tremendous attention in deep learning research, sparking interest in both the computer vision and remote sensing communities. While it has achieved great success in computer vision, most of SSL's potential in the domain of Earth observation remains locked. In this paper, we provide an introduction to and review of the concepts and latest developments in SSL for computer vision in the context of remote sensing. Furthermore, we provide a preliminary benchmark of modern SSL algorithms on popular remote sensing datasets, verifying the potential of SSL in remote sensing, together with an extended study on data augmentation. Finally, we identify promising directions for future research in SSL for Earth observation (SSL4EO), paving the way for fruitful interaction between the two fields.
Today's generative models are capable of synthesizing high-fidelity images, but each model specializes in a specific target domain. This raises the need for model merging: combining two or more pretrained generative models into a single unified model. In this work, we tackle the problem of model merging given two constraints that often arise in the real world: (1) no access to the original training data, and (2) no increase in the size of the neural network. To the best of our knowledge, model merging under these constraints has not been studied thus far. We propose a novel two-stage solution. In the first stage, we transform the weights of all the models into the same parameter space via a technique we term model rooting. In the second stage, we merge the rooted models by averaging their weights and fine-tuning them for each specific domain, using only data generated by the original trained models. We demonstrate that our approach is superior to baseline methods and to existing transfer learning techniques, and we investigate several applications.
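A hedged sketch of the second-stage merge described above: once two models share a parameter space ("rooted"), their weights are averaged, and the result is fine-tuned per domain on data sampled from the original models. Only the averaging step is shown; rooting and fine-tuning follow ordinary training loops, and the generators below are toy stand-ins.

```python
import torch

def average_weights(model_a, model_b):
    """Element-wise mean of two state dicts with identical structure."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    return {name: (sd_a[name] + sd_b[name]) / 2 for name in sd_a}

gen_a = torch.nn.Linear(64, 784)   # stand-ins for two rooted generators
gen_b = torch.nn.Linear(64, 784)
merged_gen = torch.nn.Linear(64, 784)
merged_gen.load_state_dict(average_weights(gen_a, gen_b))

# Fine-tuning data comes only from the original models' outputs:
z = torch.randn(16, 64)
synthetic_a, synthetic_b = gen_a(z).detach(), gen_b(z).detach()
```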
Advances in GAN-based generative modeling have pushed the community to find uses beyond image generation and editing. In particular, several recent works have shown that GAN representations can be re-purposed for discriminative tasks such as part segmentation, especially when training data is limited. But how do these improvements stack up against recent advances in self-supervised learning? Motivated by this question, we present an alternative approach based on contrastive learning and compare performance on standard few-shot part segmentation benchmarks. Our experiments reveal that not only do GAN-based approaches provide no significant performance advantage, but their multi-step training is complex, nearly an order of magnitude slower, and can introduce additional bias. These experiments suggest that the inductive biases of generative models, such as their ability to disentangle shape and texture, are well captured by standard feed-forward networks trained with contrastive learning.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
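For reference, here is a hedged PyTorch rendering of the symmetric contrastive objective the abstract describes (the paper itself gives similar pseudocode): image and text embeddings are normalized, a scaled similarity matrix is built, and a symmetric cross-entropy pulls matching (image, text) pairs together. The encoders are omitted; any networks producing same-width embeddings work, and the temperature value is an assumption.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_features, text_features, temperature=0.07):
    img = F.normalize(image_features, dim=1)
    txt = F.normalize(text_features, dim=1)
    logits = img @ txt.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(img.size(0))            # pair i matches pair i
    loss_i = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return (loss_i + loss_t) / 2

image_features = torch.randn(32, 512)  # from any image encoder
text_features = torch.randn(32, 512)   # from any text encoder
print(clip_loss(image_features, text_features).item())
```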
Dimensionality reduction methods are unsupervised approaches that learn low-dimensional spaces in which some properties of the initial space, typically the notion of "neighborhood", are preserved. Such methods usually require propagation over large k-NN graphs or complicated optimization solvers. On the other hand, self-supervised learning approaches, typically used to learn representations from scratch, rely on simple and more scalable frameworks for learning. In this paper, we propose TLDR, a dimensionality reduction method for generic input spaces that ports the recent self-supervised learning framework of Zbontar et al. (2021) to the specific task of dimensionality reduction, over arbitrary representations. We propose using nearest neighbors to build pairs from a training set, and a redundancy-reduction loss to learn an encoder that produces representations invariant across such pairs. TLDR is simple, easy to train, and broadly applicable; it consists of an offline nearest-neighbor computation step that can be highly approximated, followed by a straightforward learning process. Aiming for scalability, we focus on improving linear dimensionality reduction, and show consistent gains on image and document retrieval tasks, e.g., gaining +4% mAP over PCA on ROxford for GeM-AP, improving the performance of DINO on ImageNet, or retaining it with a 10x compression.
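A hedged sketch of this recipe: pairs come from an offline nearest-neighbor search rather than augmentations, and a Barlow Twins-style redundancy-reduction loss (after Zbontar et al., 2021) trains a linear encoder whose outputs are invariant across each pair. The lambda weight and the brute-force 1-NN search are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def redundancy_reduction_loss(za, zb, lambd=5e-3):
    za = (za - za.mean(0)) / (za.std(0) + 1e-6)    # batch-normalize embeddings
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-6)
    c = za.t() @ zb / za.size(0)                   # cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()             # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy term
    return on_diag + lambd * off_diag

x = F.normalize(torch.randn(1000, 128), dim=1)     # input representations
sims = x @ x.t()
sims.fill_diagonal_(-2.0)                          # exclude self-matches
nn_idx = sims.argmax(dim=1)                        # offline 1-NN per point

encoder = torch.nn.Linear(128, 32)                 # linear dimensionality reduction
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss = redundancy_reduction_loss(encoder(x), encoder(x[nn_idx]))
loss.backward()
opt.step()
print(loss.item())
```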
The deep neural networks used in modern computer vision systems require enormous image datasets to train them. These carefully curated datasets typically contain a million or more images, spanning a thousand or more distinct categories. The process of creating and curating such a dataset is a monumental undertaking, demanding extensive effort and labeling expense and requiring careful navigation of technical and social issues such as label accuracy, copyright ownership, and content bias. What if we had a way to harness the power of large image datasets with few or none of the major issues and concerns currently faced? This paper extends the recent work of Kataoka et al. (2020), proposing an improved pre-training dataset based on dynamically generated fractal images. Challenging issues with large-scale image datasets become points of elegance for fractal pre-training: perfect label accuracy at zero cost; no need to store or transmit large image archives; no concerns about privacy, demographic bias, or inappropriate content, since no humans are pictured; a limitless supply and diversity of images; and the images are free/open-source. Perhaps surprisingly, avoiding these difficulties imposes only a small penalty in performance. Leveraging a newly proposed pre-training task, multi-instance prediction, our experiments demonstrate that fine-tuning a network pre-trained on fractals attains 92.7-98.1% of the accuracy of an ImageNet pre-trained network.
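As a hedged sketch of how fractal images can be generated on the fly, here is the core generative process, an iterated function system (IFS) rendered with the "chaos game"; each random IFS defines a distinct "class". The actual pipeline is more elaborate, and the parameter ranges and rendering details below are assumptions.

```python
import numpy as np

def random_ifs(n_maps=4, rng=None):
    """Sample an IFS: a set of contractive affine maps x -> A x + b."""
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(n_maps):
        A = rng.uniform(-1, 1, (2, 2))
        A *= rng.uniform(0.3, 0.8) / max(np.linalg.norm(A, 2), 1e-6)  # contractive
        b = rng.uniform(-0.5, 0.5, 2)
        maps.append((A, b))
    return maps

def render_fractal(ifs, n_points=100_000, size=224, rng=None):
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    x = np.zeros(2)
    for _ in range(n_points):
        A, b = ifs[rng.integers(len(ifs))]           # pick a map at random
        x = A @ x + b
        px = ((x + 2) / 4 * (size - 1)).astype(int)  # map [-2, 2]^2 to pixels
        if 0 <= px[0] < size and 0 <= px[1] < size:
            img[px[1], px[0]] += 1.0
    return np.log1p(img)  # log-scale point density for visibility

rng = np.random.default_rng(42)
image = render_fractal(random_ifs(rng=rng), rng=rng)
print(image.shape, image.max())
```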
Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a "dog" can be seen, heard, and felt). We investigate the classic hypothesis that a powerful representation is one that models view-invariant factors. We study this hypothesis under the framework of multiview contrastive learning, where we learn a representation that aims to maximize mutual information between different views of the same scene but is otherwise compact. Our approach scales to any number of views, and is view-agnostic. We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics. Our approach achieves state-of-the-art results on image and video unsupervised learning benchmarks.
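A hedged sketch of a pairwise multiview contrastive objective in this spirit (matching the "all pairs of views" variant; the paper also studies a core-view variant): embeddings of matching views of the same scene are positives, all other pairings in the batch are negatives, and adding views simply adds pairwise terms. Embedding sizes and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def multiview_nce(views, temperature=0.1):
    """Sum a symmetric InfoNCE-style loss over every pair of views."""
    views = [F.normalize(v, dim=1) for v in views]
    targets = torch.arange(views[0].size(0))       # i-th row matches i-th column
    total = 0.0
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            logits = views[i] @ views[j].t() / temperature
            total = total + F.cross_entropy(logits, targets) \
                          + F.cross_entropy(logits.t(), targets)
    return total

# e.g., three views of the same scenes: luminance, chrominance, depth embeddings
z = [torch.randn(64, 128) for _ in range(3)]
print(multiview_nce(z).item())
```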