Existing few-shot image generation approaches typically employ a fusion-based strategy, either at the image or the feature level, to produce new images. However, previous approaches struggle to synthesize high-frequency signals with fine details, deteriorating the synthesis quality. To address this problem, we propose WaveGAN, a frequency-aware model for few-shot image generation. Concretely, we decompose encoded features into multiple frequency components and perform low-frequency skip connections to preserve outline and structural information. Then we alleviate the generator's struggle to synthesize fine details by employing high-frequency skip connections, thus providing informative frequency information to the generator. Moreover, we utilize a frequency L1 loss on the generated and real images to further impede frequency information loss. Extensive experiments demonstrate the effectiveness and advancement of our method on three datasets. Notably, we achieve new state-of-the-art results with FID 42.17 and LPIPS 0.3868 on Flowers, FID 30.35 and LPIPS 0.5076 on Animal Faces, and FID 4.96 and LPIPS 0.3822 on VGGFace. GitHub: https://github.com/kobeshegu/eccv2022_wavegan
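The low- and high-frequency skip connections above operate on wavelet sub-bands of the encoded features. As a hedged illustration (not the authors' implementation), a single-level Haar decomposition that splits a feature map into one low-frequency band (LL, carrying outline and structure) and three high-frequency detail bands (LH, HL, HH), with perfect reconstruction, can be sketched as:

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar decomposition of an (H, W) array with even H, W."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low frequency: outline / structure
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: reassemble the four sub-bands into the original map."""
    h, w = ll.shape
    x = np.zeros((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x
```

In a WaveGAN-style design, `ll` would feed the low-frequency skip connection and `lh`/`hl`/`hh` the high-frequency ones; the decomposition is lossless, so no information is discarded by the split.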
Learning to generate new images for a novel category based on only a few images, named few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have yielded impressive results, but the diversity is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures intra-category transformations, i.e., the "delta" between same-category pairs. The generation subnetwork generates a sample-specific delta for an input image, which is combined with this input image to generate a new image within the same category. Besides, an adversarial delta matching loss is designed to link the above two subnetworks together. Extensive experiments on six benchmark datasets demonstrate the effectiveness of our proposed method. Our code is available at https://github.com/bcmi/deltagan-few-shot-image-generation.
Learning to generate new images for a novel category based on only a few images, named few-shot image generation, has attracted increasing research interest. Several state-of-the-art works have yielded impressive results, but the diversity is still limited. In this work, we propose a novel Delta Generative Adversarial Network (DeltaGAN), which consists of a reconstruction subnetwork and a generation subnetwork. The reconstruction subnetwork captures intra-category transformations, i.e., the "delta" between same-category pairs. The generation subnetwork generates a sample-specific "delta" for an input image, which is combined with this input image to generate a new image within the same category. Besides, an adversarial delta matching loss is designed to link the above two subnetworks together. Extensive experiments on five few-shot image datasets demonstrate the effectiveness of our proposed method.
Few-shot image generation and few-shot image translation are two related tasks, both of which aim to generate new images for an unseen category with only a few images. In this work, we make the first attempt to adapt a few-shot image translation method to the few-shot image generation task. Few-shot image translation disentangles an image into a style vector and a content map. An unseen style vector can be combined with different seen content maps to produce different images. However, this requires storing seen images to provide content maps, and the unseen style vector may be incompatible with seen content maps. To adapt it to the few-shot image generation task, we learn a compact dictionary of local content vectors by quantizing continuous content maps into discrete content maps instead of storing seen images. Furthermore, we model the autoregressive distribution of the discrete content map conditioned on the style vector, which can alleviate the incompatibility between content map and style vector. Qualitative and quantitative results on three real datasets demonstrate that our model can produce images of higher diversity and fidelity for unseen categories than previous methods.
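A minimal sketch of the quantization step described above, i.e., mapping continuous local content vectors to entries of a compact learned dictionary; the codebook here is a hand-made stand-in for the paper's learned dictionary:

```python
import numpy as np

def quantize_content(content, codebook):
    """Replace each local content vector with its nearest dictionary entry.

    content:  (N, D) continuous local content vectors
    codebook: (K, D) compact dictionary of local content vectors
    Returns the quantized vectors and their discrete indices.
    """
    # Squared L2 distance from every content vector to every codebook entry
    d2 = ((content[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = d2.argmin(axis=1)          # discrete content-map entries
    return codebook[idx], idx
```

The discrete indices are what an autoregressive prior, conditioned on the style vector, would then model in place of the stored seen images.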
Diffusion models are rising as a powerful solution for high-fidelity image generation, which exceeds GANs in quality in many circumstances. However, their slow training and inference speed is a huge bottleneck, blocking them from being used in real-time applications. A recent DiffusionGAN method significantly decreases the models' running time by reducing the number of sampling steps from thousands to several, but their speeds still largely lag behind the GAN counterparts. This paper aims to reduce the speed gap by proposing a novel wavelet-based diffusion structure. We extract low-and-high frequency components from both image and feature levels via wavelet decomposition and adaptively handle these components for faster processing while maintaining good generation quality. Furthermore, we propose to use a reconstruction term, which effectively boosts the model training convergence. Experimental results on CelebA-HQ, CIFAR-10, LSUN-Church, and STL-10 datasets prove our solution is a stepping-stone to offering real-time and high-fidelity diffusion models. Our code and pre-trained checkpoints will be available at \url{https://github.com/VinAIResearch/WaveDiff.git}.
Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing font generation methods often rely on supervised learning. They require a large number of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as the set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of FDSC are fed into a mixer to generate final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two reconstruction losses are adopted to constrain the domain-invariant characteristics between generated images and content images. Taking advantage of FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model is able to generate character images of higher quality than state-of-the-art methods.
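The displacement-map-driven deformation that FDSC performs can be illustrated, in a heavily simplified form, by warping a feature map with predicted per-pixel offsets (nearest-neighbour sampling here instead of deformable convolution's bilinear sampling; the names are illustrative, not the paper's API):

```python
import numpy as np

def warp_with_displacement(feat, dy, dx):
    """Warp an (H, W) feature map by displacement maps dy, dx.

    Each output position (i, j) samples the input at (i + dy, j + dx),
    a toy stand-in for the geometric transformation FDSC learns.
    """
    h, w = feat.shape
    yy, xx = np.mgrid[:h, :w]
    ys = np.clip(np.round(yy + dy).astype(int), 0, h - 1)
    xs = np.clip(np.round(xx + dx).astype(int), 0, w - 1)
    return feat[ys, xs]
```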
Deep image inpainting has made impressive progress with recent advances in image generation and processing algorithms. We claim that the performance of inpainting algorithms can be better judged by the structures and textures they generate. Structure refers to generated object boundaries or novel geometric structures within the hole, while texture refers to high-frequency details, especially man-made repeating patterns filled inside the structural regions. We believe that better structures are usually obtained from a coarse-to-fine, GAN-based generator network, while repeating patterns can nowadays be better modeled with state-of-the-art fast Fourier convolution layers that capture high-frequency content. In this paper, we propose a novel inpainting network that combines the advantages of these two designs. As a result, our model achieves excellent visual quality, matching state-of-the-art performance in both structure generation and repeating-texture synthesis with a single network. Extensive experiments demonstrate the effectiveness of the method, and our conclusions further highlight structure and texture as the two key factors of image inpainting quality and as future design directions for inpainting networks.
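The fast Fourier convolution mentioned above gains a global receptive field by mixing features in the frequency domain. A stripped-down, hedged sketch of that spectral path (real FFT, per-frequency weighting, inverse FFT; the scalar weight is a toy stand-in for the learned complex weights):

```python
import numpy as np

def spectral_transform(x, w=1.0):
    """Core of a fast Fourier convolution's global branch: move the
    feature map to the frequency domain, apply a (learned) pointwise
    weighting, and transform back. `w` here is a placeholder weight."""
    spec = np.fft.rfft2(x)            # (H, W//2 + 1) complex spectrum
    spec = spec * w                   # pointwise mixing in frequency space
    return np.fft.irfft2(spec, s=x.shape)
```

Because every spectrum bin depends on every spatial location, even this one-line weighting couples distant pixels, which is what makes repeating textures easy to model.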
In this work, we are dedicated to text-guided image generation and propose a novel framework, i.e., CLIP2GAN, by leveraging CLIP model and StyleGAN. The key idea of our CLIP2GAN is to bridge the output feature embedding space of CLIP and the input latent space of StyleGAN, which is realized by introducing a mapping network. In the training stage, we encode an image with CLIP and map the output feature to a latent code, which is further used to reconstruct the image. In this way, the mapping network is optimized in a self-supervised learning way. In the inference stage, since CLIP can embed both image and text into a shared feature embedding space, we replace CLIP image encoder in the training architecture with CLIP text encoder, while keeping the following mapping network as well as StyleGAN model. As a result, we can flexibly input a text description to generate an image. Moreover, by simply adding mapped text features of an attribute to a mapped CLIP image feature, we can effectively edit the attribute to the image. Extensive experiments demonstrate the superior performance of our proposed CLIP2GAN compared to previous methods.
Generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D plays as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, which makes G assign a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt present in this work could inspire the community with better designed generator-leading tasks for GAN improvement.
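The generator-assigned task above can be written as a reconstruction objective on D's features. A schematic sketch, where the feature extractor and decoder are toy placeholders rather than the paper's networks:

```python
import numpy as np

def generator_leading_loss(x, d_features, g_decode):
    """D must emit features that G can decode back into the input:
    L_rec = || g_decode(d_features(x)) - x ||_1, added on top of the
    usual adversarial losses so D aligns with G's view of the domain."""
    return float(np.abs(g_decode(d_features(x)) - x).mean())

# Toy check with an invertible linear "extractor"/"decoder" pair
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
W = np.eye(4) * 2.0
loss = generator_leading_loss(x, lambda v: v @ W, lambda h: h @ np.linalg.inv(W))
```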
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose to use data-specific augmentation techniques. Despite the effectiveness, it is difficult for these methods to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using the convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure enforces the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching. For different image synthesis tasks, we train the score estimation network using different data. We do not require the tuning of the hyperparameters or modifications to the network architecture. The ScoreMix method effectively increases the diversity of data and reduces the overfitting problem. Moreover, it can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ScoreMix method achieve significant improvements.
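The two ScoreMix steps above (convex mixing, then score-norm minimization) can be sketched on a toy distribution whose score is known in closed form; in the paper a deep multi-scale score network would replace `score_gauss` (the names and step sizes here are illustrative assumptions):

```python
import numpy as np

def score_gauss(x, mu=0.0, sigma=1.0):
    """Score of an isotropic Gaussian: grad_x log p(x) = (mu - x) / sigma^2."""
    return (mu - x) / sigma ** 2

def scoremix(x1, x2, lam=0.5, step=0.1, n_steps=100, mu=0.0, sigma=1.0):
    """Mix two real samples, then minimize ||score(x)||^2 by gradient
    descent, pushing the augmented sample toward high-density regions."""
    x = lam * x1 + (1 - lam) * x2
    for _ in range(n_steps):
        s = score_gauss(x, mu, sigma)
        grad = -2.0 * s / sigma ** 2   # grad_x of ||s(x)||^2 for this score
        x = x - step * grad
    return x
```

For the Gaussian toy case the iteration contracts toward the mode, mirroring the paper's intuition that small score norms indicate proximity to the data manifold.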
Producing diverse and realistic images with generative models such as GANs typically requires large-scale training with a vast amount of images. GANs trained with extremely limited data can easily overfit the few training samples and exhibit a "stairlike" latent space, where transitions in the latent space suffer from discontinuities, occasionally yielding abrupt changes in outputs. In this work, we consider the situation where neither a large-scale dataset of interest nor a transferable source dataset is available, and seek to train existing generative models with minimal overfitting and mode collapse. We propose a mixup-based distance regularization on the feature spaces of both the generator and the corresponding discriminator, which encourages the two players to reason not only about the scarce observed data points but also about the relative distances in the feature space they reside in. Qualitative and quantitative evaluations on diverse datasets demonstrate that our method is generally applicable to existing models, enhancing both fidelity and diversity under the constraint of limited data. Code will be made public.
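One way to realize the distance regularization sketched above is to match interpolation coefficients against feature-space similarities. A hedged toy version, where the KL formulation and all names are illustrative assumptions rather than the paper's exact loss:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mixup_distance_loss(feat_mixed, anchor_feats, coeffs, eps=1e-8):
    """KL(coeffs || q) where q is the softmax over negative distances from
    the mixed sample's features to each anchor's features: relative
    distances in feature space should mirror the mixing coefficients,
    discouraging a 'stairlike' latent space."""
    sims = np.array([-np.linalg.norm(feat_mixed - f) for f in anchor_feats])
    q = softmax(sims)
    c = np.asarray(coeffs, dtype=float)
    return float(np.sum(c * (np.log(c + eps) - np.log(q + eps))))
```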
Person image generation aims to perform non-rigid deformations on source images, which generally requires unaligned data pairs for training. Recently, self-supervised methods have shown great promise on this task by merging disentangled representations for self-reconstruction. However, these methods fail to exploit the spatial correlation between the disentangled features. In this paper, we propose a Self-supervised Correlation Mining Network (SCM-Net) to rearrange the source image in feature space, in which two collaborative modules are integrated: a Decomposed Style Encoder (DSE) and a Correlation Mining Module (CMM). Specifically, the DSE first creates unaligned pairs at the feature level. Then, the CMM establishes the spatial correlation field for feature rearrangement. Finally, a translation module converts the rearranged features into realistic results. Meanwhile, to improve the fidelity of cross-scale pose transformation, we propose a graph-based Body Structure Retaining loss (BSR loss) to preserve a reasonable body structure from half-body to full-body generation. Extensive experiments on the DeepFashion dataset demonstrate the superiority of our method compared with other supervised and unsupervised approaches. Furthermore, satisfactory results on face generation show the versatility of our method for other deformation tasks.
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images that can be trained without gaze angle and head pose annotations. Common gaze correction methods usually require the training data to be annotated with precise gaze and head pose information, and solving this problem with an unsupervised method remains open, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze and the high-resolution CelebHQGaze. Second, we formulate the gaze correction task as an image inpainting problem, addressed with a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, Synthesis-As-Training, to learn the correlation between the eye-region features and the gaze angle. As a result, we can use the learned latent space for gaze animation in this space. Furthermore, to alleviate the memory and computational costs in both the training and inference stages, we propose a Coarse-to-Fine Module (CFM) integrated with the GCM and GAM. Extensive experiments validate the effectiveness of our method for both gaze correction and gaze animation tasks on low- and high-resolution face datasets in the wild, and demonstrate the superiority of our method with respect to the state of the art. Code is available at https://github.com/zhangqianhui/gazeanimationv2.
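The inpainting formulation above amounts to erasing the eye region and asking the network to refill it with the corrected gaze. A minimal preprocessing sketch (the box coordinates and function name are hypothetical, not from the released code):

```python
import numpy as np

def make_inpainting_pair(img, eye_box):
    """Mask out the eye region so a generator can re-synthesize it.

    img:     (H, W) or (H, W, C) image array
    eye_box: (y0, y1, x0, x1) region to erase (hypothetical coordinates)
    Returns the masked image and the binary mask fed to the inpainter.
    """
    y0, y1, x0, x1 = eye_box
    mask = np.zeros(img.shape[:2], dtype=bool)
    mask[y0:y1, x0:x1] = True
    masked = img.copy()
    masked[mask] = 0.0
    return masked, mask
```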
We present a novel image inversion framework and a training pipeline to achieve high-fidelity image inversion with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between the image reconstruction fidelity and image editing quality remains an open challenge. The low-rate latent spaces are limited in their expressiveness power for high-fidelity reconstruction. On the other hand, high-rate latent spaces result in degradation in editing quality. In this work, to achieve high-fidelity inversion, we learn residual features in higher latent codes that lower latent codes were not able to encode. This enables preserving image details in reconstruction. To achieve high-quality editing, we learn how to transform the residual features for adapting to manipulations in latent codes. We train the framework to extract residual features and transform them via a novel architecture pipeline and cycle consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods. Qualitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
In recent years, facial semantic guidance (including facial landmarks, facial heatmaps, and facial parsing maps) and facial generative adversarial networks (GANs) have been widely used in blind face restoration (BFR). Although existing BFR methods achieve good performance in ordinary cases, these solutions have limited resilience when faced with images of severe degradation and pose variation (e.g., looking right, looking left, laughing, etc.) in real-world scenarios. In this work, we propose a well-designed blind face restoration network with a generative facial prior. The proposed network mainly consists of an asymmetric codec and a StyleGAN2 prior network. In the asymmetric codec, we adopt a mixed multi-path residual block (MMRB) to gradually extract the weak texture features of the input image, which better preserves the original facial features and avoids excessive hallucination. The MMRB can also be used plug-and-play in other networks. Moreover, thanks to the rich and diverse facial priors of the StyleGAN2 model, we adopt a fine-tuning approach to flexibly restore natural and realistic facial details. In addition, a novel self-supervised training strategy is specially designed for the face restoration task to bring the distribution closer to the target and maintain training stability. Extensive experiments on synthetic and real-world datasets demonstrate that our model achieves superior performance on face restoration and face super-resolution tasks.
Training effective generative adversarial networks (GANs) requires large amounts of training data, without which the trained models are usually sub-optimal with discriminator over-fitting. Several prior studies address this issue by expanding the distribution of the limited training data via massive, hand-crafted data augmentation. We handle data-limited image generation from a very different perspective. Specifically, we design GenCo, a generative co-training network that mitigates the discriminator over-fitting issue by introducing multiple complementary discriminators that provide diverse supervision from multiple distinctive views during training. We instantiate the idea of GenCo in two ways. The first is Weight-Discrepancy Co-training (WeCo), which co-trains multiple distinctive discriminators by diversifying their parameters. The second is Data-Discrepancy Co-training (DaCo), which achieves co-training by feeding the discriminators different views of the input images (e.g., different frequency components). Extensive experiments on multiple benchmarks show that GenCo achieves superior generation with limited training data. In addition, GenCo complements augmentation approaches, with consistent and clear performance gains when combined with them.
Generative adversarial networks (GANs) are able to generate images that are visually indistinguishable from real images. However, recent studies have shown that generated and real images share significant differences in the frequency domain. In this paper, we explore the influence of high-frequency components in GAN training. According to our observations, during the training of most GANs, severe high-frequency differences make the discriminator focus excessively on high-frequency components, hindering the generator from fitting the low-frequency components that are important for learning image content. We then propose two simple yet effective frequency operations to eliminate the side effects caused by high-frequency differences in GAN training: High-Frequency Confusion (HFC) and High-Frequency Filter (HFF). The proposed operations are general and can be applied to most existing GANs at a fraction of the cost. The advanced performance of the proposed operations is verified across multiple loss functions, network architectures, and datasets. Specifically, the proposed HFF achieves FID improvements of 13.2% on CelebA (128×128) unconditional generation based on SNGAN, 30.2% based on SSGAN, and 69.3% based on InfoMaxGAN.
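An idealized FFT-based rendering of the two operations (sharp circular low-pass masks, rather than whatever filters the paper actually uses; a hedged sketch, not the authors' code):

```python
import numpy as np

def hff(img, keep_ratio=0.5):
    """High-Frequency Filter: zero out spectrum bins beyond a radius,
    leaving only the low-frequency content both players should focus on."""
    h, w = img.shape
    spec = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    mask = radius <= keep_ratio * min(h, w) / 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

def hfc(real, fake, keep_ratio=0.5):
    """High-Frequency Confusion: swap the high-frequency residuals of a
    real and a generated image so the discriminator cannot over-focus on
    high-frequency discrepancies."""
    low_r, low_f = hff(real, keep_ratio), hff(fake, keep_ratio)
    return low_r + (fake - low_f), low_f + (real - low_r)
```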
Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or leave users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) Dense Mapping Network (DMN) and 2) Editing Behavior Simulated Training (EBST). Specifically, DMN learns style mapping between a free-form user modified mask and a target image, enabling diverse generation results. EBST models the user editing behavior on the source mask, making the overall framework more robust to various manipulated inputs. Specifically, it introduces dual-editing consistency as the auxiliary supervision signal. To facilitate extensive studies, we construct a large-scale high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks: attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models, and dataset are available at https://github.com/switchablenorms/CelebAMask-HQ.
We show that pre-trained generative adversarial networks (GANs) such as StyleGAN and BigGAN can be used as a latent bank to improve the performance of image super-resolution. While most existing perception-oriented methods attempt to produce realistic outputs through learning with an adversarial loss, our method, Generative LatEnt bANk (GLEAN), goes beyond existing practice by directly leveraging the rich and diverse priors encapsulated in a pre-trained GAN. Unlike prevalent GAN inversion methods that require expensive image-specific optimization at runtime, our approach needs only a single forward pass for restoration. GLEAN can be easily incorporated into a simple encoder-bank-decoder architecture with multi-resolution skip connections. Adopting priors from different generative models allows GLEAN to be applied to diverse categories (e.g., human faces, cats, buildings, and cars). We further present a lightweight version of GLEAN, named LightGLEAN, which retains only the critical components of GLEAN. Notably, LightGLEAN uses only 21% of the parameters and 35% of the FLOPs while achieving comparable image quality. We extend our method to different tasks, including image colorization and blind image restoration, and extensive experiments show that our proposed models perform favorably compared with existing methods. Code and models are available at https://github.com/open-mmlab/mmediting.
We propose a simple yet powerful landmark-guided generative adversarial network (LandmarkGAN) for facial expression-to-expression translation from a single image, an important and challenging task in computer vision since expression-to-expression translation is a non-linear and non-aligned problem. Moreover, it requires a high-level semantic understanding between the input and output images, since objects in images can have arbitrary poses, sizes, locations, backgrounds, and self-occlusions. To tackle this problem, we propose to explicitly exploit facial landmark information. Since this is a challenging problem, we split it into two sub-tasks: (i) category-guided landmark generation, and (ii) landmark-guided expression-to-expression translation. The two sub-tasks are trained in an end-to-end fashion, aiming to enjoy the mutual benefits of the generated landmarks and expressions. Compared with current keypoint-guided approaches, the proposed LandmarkGAN needs only a single facial image to produce various expressions. Extensive experimental results on four public datasets demonstrate that the proposed LandmarkGAN achieves better results than state-of-the-art methods using only a single image. The code is available at https://github.com/ha0tang/landmarkgan.