The key objective of generative adversarial networks (GANs) is to generate new data with the same statistics as the provided training data. However, multiple recent works show that state-of-the-art architectures still struggle to achieve this goal. In particular, they report an elevated amount of high frequencies in the spectral statistics, which makes it straightforward to distinguish real from generated images. Explanations for this phenomenon are controversial: while most works attribute the artifacts to the generator, other works point to the discriminator. We take a sober look at these explanations and provide insights into what makes proposed measures against high-frequency artifacts effective. To achieve this, we first independently assess the architectures of both the generator and the discriminator, and investigate whether they exhibit a frequency bias that makes learning the distribution of high-frequency content particularly problematic. Based on these experiments, we make the following four observations: 1) different upsampling operations bias the generator towards different spectral properties; 2) checkerboard artifacts introduced by upsampling cannot explain the spectral discrepancies alone, as the generator is able to compensate for these artifacts; 3) the discriminator does not struggle with detecting high frequencies per se, but rather with frequencies of low magnitude; 4) the downsampling operations in the discriminator can impair the quality of the training signal it provides. In light of these findings, we analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training, but find that no existing approach can fully resolve the spectral artifacts yet. Our results suggest that there is great potential in improving the discriminator, and that this could be key to matching the distribution of the training data more closely.
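The spectral statistics referenced above are typically computed as an azimuthally averaged (reduced) power spectrum. A minimal NumPy sketch of that diagnostic follows; the function name, bin count, and grayscale-input assumption are ours, not the paper's.

```python
import numpy as np

def radial_power_spectrum(image, n_bins=64):
    """Azimuthally averaged power spectrum of a grayscale image.

    Real and generated images are compared via this 1D profile;
    elevated values in the rightmost (high-frequency) bins are the
    artifact this line of work discusses.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2

    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)

    bins = np.linspace(0.0, r.max(), n_bins + 1)
    profile = np.zeros(n_bins)
    for i in range(n_bins):
        mask = (r >= bins[i]) & (r < bins[i + 1])
        if mask.any():
            profile[i] = power[mask].mean()
    return profile

# Example: the spectral gap between a generated and a real image.
# rng = np.random.default_rng(0)
# real, fake = rng.random((128, 128)), rng.random((128, 128))
# gap = radial_power_spectrum(fake) - radial_power_spectrum(real)
```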
Generative adversarial networks (GANs) are able to generate images that are visually indistinguishable from real ones. However, recent studies have shown that generated and real images exhibit significant differences in the frequency domain. In this paper, we explore the effect of high-frequency components in GAN training. According to our observations, during the training of most GANs, severe high-frequency discrepancies make the discriminator focus excessively on high-frequency components, hindering the generator from fitting the low-frequency components that are important for learning image content. We then propose two simple yet effective frequency operations to eliminate the side effects caused by the high-frequency differences in GAN training: High-Frequency Confusion (HFC) and High-Frequency Filter (HFF). The proposed operations are general and can be applied to most existing GANs at a small cost. The superior performance of the proposed operations is verified across multiple loss functions, network architectures, and datasets. Specifically, the proposed HFF achieves FID improvements of 13.2% on CelebA (128×128) unconditional generation based on SNGAN, 30.2% on CelebA unconditional generation based on SSGAN, and 69.3% on CelebA unconditional generation based on InfoMaxGAN.
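A sketch of what FFT-based versions of the two operations could look like; the masking scheme, cutoff value, and helper names here are illustrative assumptions, and the paper's exact definitions of HFC and HFF may differ.

```python
import numpy as np

def _lowpass_mask(shape, cutoff):
    """Boolean mask keeping frequencies below `cutoff` (fraction of Nyquist)."""
    h, w = shape
    y, x = np.indices((h, w))
    r = np.sqrt(((y - h // 2) / (h / 2)) ** 2 + ((x - w // 2) / (w / 2)) ** 2)
    return r <= cutoff

def hff(image, cutoff=0.25):
    """High-Frequency Filter: suppress frequencies above the cutoff."""
    f = np.fft.fftshift(np.fft.fft2(image))
    m = _lowpass_mask(image.shape, cutoff)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * m)))

def hfc(real, fake, cutoff=0.25):
    """High-Frequency Confusion: graft the fake image's high band onto
    the real image so the discriminator cannot exploit that band."""
    m = _lowpass_mask(real.shape, cutoff)
    fr = np.fft.fftshift(np.fft.fft2(real))
    ff = np.fft.fftshift(np.fft.fft2(fake))
    return np.real(np.fft.ifft2(np.fft.ifftshift(fr * m + ff * ~m)))
```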
Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
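The adaptive mechanism can be summarized as a feedback controller on the augmentation probability p. The sketch below assumes the paper's sign-based overfitting heuristic and a fixed adjustment step; the exact schedule in the official implementation depends on batch size and is more involved.

```python
def update_augment_p(p, r_overfit, target=0.6, step=5e-4):
    """One control step of adaptive discriminator augmentation (sketch).

    `r_overfit` is an overfitting heuristic such as E[sign(D(x_real))]
    measured over recent minibatches. Above `target`, the discriminator
    is too confident on real data, so the augmentation probability is
    nudged up; below it, nudged down. The target value follows the
    paper; the fixed `step` is a stand-in for the batch-size-dependent
    schedule of the official implementation.
    """
    p += step if r_overfit > target else -step
    return min(max(p, 0.0), 1.0)
```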
We observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. We trace the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, we derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. Our results pave the way for generative models better suited for video and animation.
Generative adversarial networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyperparameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
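A rough single-scale sketch of the projection idea: a frozen pretrained backbone plus fixed random channel mixing in front of the discriminator. The backbone choice, layer tap, and channel counts are our assumptions; the actual method mixes features across multiple resolutions.

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0

class ProjectedFeatures(nn.Module):
    """Frozen pretrained feature space + fixed random channel mixing
    in front of the discriminator (single-scale simplification)."""

    def __init__(self, out_ch=64):
        super().__init__()
        self.backbone = efficientnet_b0(weights="IMAGENET1K_V1").features
        self.mix = nn.Conv2d(1280, out_ch, kernel_size=1, bias=False)
        for p in self.parameters():
            p.requires_grad_(False)  # neither part is ever trained

    def forward(self, x):
        # Both real and generated batches pass through this module
        # before being scored by the (trainable) discriminator head.
        return self.mix(self.backbone(x))
```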
Generative adversarial networks have attracted researchers' attention for their state-of-the-art performance in generating new images using only a dataset of the target distribution. It has been shown that there is a discrepancy between the spectrum of real images and that of fake ones. Since the Fourier transform is a bijective mapping, it is fair to conclude that the model has a significant problem in learning the original distribution. In this work, we investigate the possible reasons for this shortcoming in the architecture and the mathematical theory of current GANs. We then propose a new model to reduce the discrepancy between the spectra of real and fake images. To that end, we design a brand new architecture for the frequency domain using the blueprint of geometric deep learning. We then demonstrate a promising improvement in the quality of the generated images by treating the Fourier-domain representation of the original data as a principal feature in the training process.
Generative models operate at fixed resolution, even though natural images come in a variety of sizes. As high-resolution details are downsampled away and low-resolution images are discarded altogether, precious supervision is lost. We argue that every pixel matters and create datasets of variable-size images, collected at their native resolutions. To take advantage of varied-size data, we introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions. First, conditioning the generator on a target scale allows us to generate higher-resolution images than previously possible, without adding layers to the model. Second, by conditioning on continuous coordinates, we can sample patches that still obey a consistent global layout, which also allows for scalable training at higher resolutions. Controlled FFHQ experiments show that our method takes better advantage of multi-resolution training data than discrete multi-scale approaches, achieving better FID scores and cleaner high-frequency details. We also train on other natural image domains including churches, mountains, and birds, and demonstrate arbitrary-scale synthesis with coherent global layouts and realistic local details, going beyond 2K resolution in our experiments. Our project page is available at https://chail.github.io/anyres-gan/.
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at 1024². We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset.
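The core growing mechanic is a linear fade-in between the previous resolution's upsampled output and the newly added block, a minimal version of which is sketched below.

```python
def fade_in(prev_rgb_upsampled, new_rgb, alpha):
    """Blend the newly grown block's output with the previous
    resolution's upsampled output; `alpha` ramps from 0 to 1 while the
    new layers stabilize."""
    return (1.0 - alpha) * prev_rgb_upsampled + alpha * new_rgb

# Usage sketch (PyTorch):
# low = torch.nn.functional.interpolate(prev_rgb, scale_factor=2)
# img = fade_in(low, new_block_rgb, alpha)
```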
As neural networks become able to generate realistic artificial images, they have the potential to improve movies, music, and video games, and to make the internet a more creative and inspiring place. However, the latest technology also enables new digital ways to lie. In response, a toolbox of diverse and reliable methods for identifying artificial images and other content has emerged. Prior work has relied chiefly on pixel-space CNNs or the Fourier transform. To the best of our knowledge, synthesized fake image analysis and detection methods based on a multi-scale wavelet representation, localized in both space and frequency, have so far been absent. The wavelet transform conserves spatial information to a degree, which allows us to present a new analysis: comparing the wavelet coefficients of real and fake images yields interpretable results, and significant differences are identified. Furthermore, this paper proposes to learn a model for detecting synthesized images based on the wavelet-packet representation of natural and GAN-generated images. As we demonstrate on the FFHQ, CelebA, and LSUN source identification problems, our lightweight forensic classifiers exhibit competitive or improved performance at comparatively small network sizes. In addition, we study the binary FaceForensics++ fake-detection problem.
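A small sketch of the kind of wavelet-packet statistic such an analysis rests on, using PyWavelets; the choice of wavelet, decomposition level, and the mean-absolute-coefficient summary are illustrative, not the paper's exact features.

```python
import numpy as np
import pywt

def wavelet_packet_stats(image, wavelet="db2", level=3):
    """Mean absolute wavelet-packet coefficient per sub-band.

    Each sub-band is localized in both space and frequency; comparing
    these statistics between real and GAN-generated images is the kind
    of analysis the paper performs.
    """
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet,
                              mode="symmetric", maxlevel=level)
    return {node.path: np.abs(node.data).mean()
            for node in wp.get_level(level)}

# stats = wavelet_packet_stats(np.random.rand(128, 128))
```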
The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
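A simplified sketch of the path length regularizer described above, assuming per-sample latents w of shape [N, w_dim] and a scalar running mean; the official implementation adds lazy regularization and other details.

```python
import torch

def path_length_penalty(fake_images, w_latents, pl_mean, decay=0.01):
    """Penalize deviation of |J_w^T y| from its running mean, where y
    is random image-space noise; a well-conditioned latent-to-image
    mapping keeps these path lengths uniform."""
    n, c, h, w = fake_images.shape
    noise = torch.randn_like(fake_images) / (h * w) ** 0.5
    grads = torch.autograd.grad(outputs=(fake_images * noise).sum(),
                                inputs=w_latents, create_graph=True)[0]
    lengths = grads.square().sum(dim=1).sqrt()
    pl_mean = pl_mean + decay * (lengths.mean().detach() - pl_mean)
    return (lengths - pl_mean).square().mean(), pl_mean
```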
State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields. While demonstrating impressive results, querying an MLP for every sample along each ray leads to slow rendering. Existing methods therefore typically render low-resolution feature maps and process them with an upsampling network to obtain the final image. Although efficient, neural rendering of this kind often entangles viewpoint and content, so that changing the camera pose results in unwanted changes to geometry or appearance. Motivated by recent results in voxel-based novel view synthesis, in this paper we investigate the utility of sparse voxel grid representations for fast and 3D-consistent generative modeling. Our results show that a monolithic MLP can indeed be replaced by 3D convolutions when sparse voxel grids are combined with progressive growing, free-space pruning, and appropriate regularization. To obtain a compact representation of the scene and allow scaling to higher voxel resolutions, our model disentangles the foreground object (modeled in 3D) from the background (modeled in 2D). In contrast to existing approaches, our method requires only a single forward pass to generate the full 3D scene. It therefore allows efficient rendering from arbitrary viewpoints while yielding 3D-consistent results with high visual fidelity.
With the advent of deep learning (DL), super-resolution (SR) has become a thriving research field. Despite promising results, however, the field still faces challenges that require further research, such as flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion models (DDPMs) and transformer-based SR models. We critically discuss contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of the trends in the field. This review ultimately aims to help researchers push the boundaries of DL applied to SR.
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
Despite the recent visually-pleasing results achieved, the massive computational cost has been a long-standing flaw for diffusion probabilistic models (DPMs), which, in turn, greatly limits their applications on resource-limited platforms. Prior methods towards efficient DPMs, however, have largely focused on accelerating sampling while overlooking their huge complexity and size. In this paper, we make a dedicated attempt to lighten DPMs while striving to preserve their favourable performance. We start by training a small-sized latent diffusion model (LDM) from scratch, but observe a significant fidelity drop in the synthetic images. Through a thorough assessment, we find that DPMs are intrinsically biased against high-frequency generation, and learn to recover different frequency components at different time-steps. These properties make compact networks unable to represent frequency dynamics with accurate high-frequency estimation. Towards this end, we introduce a customized design for slim DPMs, which we term Spectral Diffusion (SD), for light-weight image synthesis. SD incorporates wavelet gating in its architecture to enable frequency-dynamic feature extraction at every reverse step, and conducts spectrum-aware distillation to promote high-frequency recovery by inversely weighting the objective based on spectrum magnitudes. Experimental results demonstrate that SD achieves an 8-18x reduction in computational complexity compared to latent diffusion models on a series of conditional and unconditional image generation tasks while retaining competitive image fidelity.
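One plausible reading of spectrum-aware distillation is sketched below, with weights inversely proportional to the teacher's spectrum magnitude so that weak high-frequency components contribute on a more equal footing; the paper's exact objective may be weighted differently.

```python
import torch

def spectrum_aware_distill_loss(student_out, teacher_out, eps=1e-8):
    """Match student and teacher in the Fourier domain, with weights
    inversely proportional to the teacher's spectrum magnitude so that
    weak high-frequency components are not drowned out by the dominant
    low frequencies."""
    fs = torch.fft.fft2(student_out)
    ft = torch.fft.fft2(teacher_out)
    weight = 1.0 / (ft.abs() + eps)   # inverse spectrum weighting
    return (weight * (fs - ft).abs() ** 2).mean()
```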
The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128 and 2-4× reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.
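The key point is that the same differentiable augmentation wraps both the real and the generated branches, so gradients flow to the generator through the augmented samples. A toy two-policy sketch follows; the official DiffAugment code implements richer color, translation, and cutout policies.

```python
import torch

def diff_augment(x, brightness=0.5, shift_ratio=0.125):
    """Toy DiffAugment policy: every op is differentiable w.r.t. x, so
    the generator receives gradients through the augmented samples."""
    # Random per-sample brightness shift.
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * brightness
    # Random circular translation (the official code uses a
    # zero-padded shift instead of wrap-around).
    s = int(x.size(2) * shift_ratio)
    tx = int(torch.randint(-s, s + 1, (1,)))
    x = torch.roll(x, shifts=(tx, tx), dims=(2, 3))
    return x

# In the discriminator update, the SAME op wraps both branches:
# loss_d = d_loss(D(diff_augment(real)), D(diff_augment(G(z))))
```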
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR), which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
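The perceptual loss referenced above is typically a distance measured in the feature space of a fixed pretrained network. A common VGG-based sketch, with an illustrative layer cut-off:

```python
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualLoss(nn.Module):
    """Distance between images measured in the feature space of a
    frozen, pretrained VGG-19 (truncated at relu5_4 here)."""

    def __init__(self, cut=36):
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features[:cut].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        return self.mse(self.features(sr), self.features(hr))
```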
Transformers are becoming prevalent in computer vision, particularly for high-level vision tasks. However, adopting transformers in the generative adversarial network (GAN) framework is still an open yet challenging problem. This paper conducts a comprehensive empirical study exploring the performance of transformers in GANs for high-fidelity image synthesis. Our analysis highlights and reaffirms the importance of feature locality in image generation, although the merits of locality are already well known for classification tasks. Perhaps more interestingly, we find the residual connections in self-attention layers harmful to learning transformer-based discriminators and conditional generators. We carefully examine their influence and propose effective ways to mitigate the negative impacts. Our study leads to a new alternative design of transformers in GANs, a convolutional-neural-network (CNN)-free generator termed STrans-G, which achieves competitive results in both unconditional and conditional image generation. The transformer-based discriminator, STrans-D, also significantly narrows its gap to CNN-based discriminators.
Despite their tantalizing success in a broad range of vision tasks, transformers have not yet demonstrated abilities on par with ConvNets in high-resolution image generative modeling. In this paper, we explore using pure transformers to build a generative adversarial network for high-resolution image synthesis. To this end, we believe that local attention is crucial to striking a balance between computational efficiency and modeling capacity. Hence, the proposed generator adopts the Swin transformer in a style-based architecture. To achieve a larger receptive field, we propose double attention, which simultaneously leverages the context of the local and the shifted windows, leading to improved generation quality. Moreover, we show that providing the knowledge of absolute position, which is lost in window-based transformers, greatly benefits generation quality. The proposed StyleSwin is scalable to high resolutions, with both the coarse geometry and the fine structures benefiting from the strong expressivity of transformers. However, blocking artifacts occur during high-resolution synthesis because performing local attention in a block-wise manner may break spatial coherency. To solve this, we empirically investigate various solutions and find that employing a wavelet discriminator to examine the spectral discrepancy effectively suppresses the artifacts. Extensive experiments show superiority over prior transformer-based GANs, especially at high resolutions, e.g., 1024×1024. Without complex training strategies, StyleSwin excels over StyleGAN on CelebA-HQ 1024 and achieves on-par performance on FFHQ-1024, demonstrating the promise of using transformers for high-resolution image generation. The code and models will be available at https://github.com/microsoft/styleswin.
Current high-fidelity generation and high-precision detection of DeepFake images are locked in an arms race. We believe that producing highly realistic and "detection-evasive" DeepFakes can serve the ultimate goal of improving the detection capability of future DeepFake detectors. In this paper, we propose a simple yet powerful pipeline that reduces the artifact patterns of fake images without hurting image quality, by performing implicit spatial-domain notch filtering. We first show that frequency-domain notch filtering, although effective at removing the periodic artifacts, is infeasible for our task because the notch filters must be designed by hand. We therefore resort to a learning-based approach to reproduce the notch-filtering effect, but solely in the spatial domain. We add overwhelming spatial noise to break the periodic noise patterns and apply deep image filtering to reconstruct the noise-free fake images; we name our method DeepNotch. Deep image filtering provides a dedicated filter for each pixel of the noisy image, producing filtered images of high fidelity compared with their DeepFake counterparts. Moreover, we use the semantic information of images to generate an adversarial guidance map for adding noise intelligently. Our large-scale evaluation on three representative state-of-the-art DeepFake detection methods (tested on 16 types of DeepFakes) demonstrates that our technique significantly reduces the accuracy of these three detection methods, by 36.79% on average and up to 97.02% in the best case.
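For reference, the explicit frequency-domain notch filtering that the paper deems impractical (and that its learned spatial-domain pipeline reproduces implicitly) can be sketched as follows; the peak locations must be specified by hand, which is precisely the limitation argued above.

```python
import numpy as np

def notch_filter(image, peaks, radius=3):
    """Zero out small disks around artifact spikes in the centered
    spectrum. `peaks` holds (row, col) spike locations that must be
    found by manual inspection; each disk is removed together with its
    conjugate-symmetric twin (even image dimensions assumed)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    y, x = np.indices((h, w))
    for py, px in peaks:
        for cy, cx in ((py, px), (h - py, w - px)):
            f[(y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```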