Adaptive instance normalization (AdaIN) has become the standard method for style injection: by re-normalizing features through scale and shift operations, it has found widespread use in style transfer, image generation, and image-to-image translation. In this work, we present a generalization of AdaIN that relies on a whitening and coloring transformation (WCT), which we apply for style injection in large GANs. We show, through experiments on the StarGANv2 architecture, that this generalization, albeit conceptually simple, leads to significant improvements in the quality of the generated images.
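For intuition, here is a minimal sketch of a whitening and coloring transform on feature maps, assuming a generic WCT rather than the paper's exact layer; AdaIN is the special case in which both covariance matrices are treated as diagonal.

```python
import torch

def wct(content, style, eps=1e-5):
    """Whitening and coloring transform on (C, H, W) feature maps.
    Generic sketch of the technique, not the paper's exact layer."""
    C = content.size(0)
    x = content.view(C, -1)
    y = style.view(C, -1)

    # Center both feature sets.
    x = x - x.mean(dim=1, keepdim=True)
    mu_y = y.mean(dim=1, keepdim=True)
    y = y - mu_y

    # Whitening: remove the content's cross-channel correlations.
    cov_x = x @ x.t() / (x.size(1) - 1) + eps * torch.eye(C)
    Ux, Sx, _ = torch.linalg.svd(cov_x)
    whiten = Ux @ torch.diag(Sx.rsqrt()) @ Ux.t()

    # Coloring: impose the style's covariance structure.
    cov_y = y @ y.t() / (y.size(1) - 1) + eps * torch.eye(C)
    Uy, Sy, _ = torch.linalg.svd(cov_y)
    color = Uy @ torch.diag(Sy.sqrt()) @ Uy.t()

    return (color @ (whiten @ x) + mu_y).view_as(content)
```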
While modern image-to-image translation techniques can create photorealistic synthetic images, they offer limited style controllability and thus may suffer from translation errors. In this work, we show that the activation function is one of the key components controlling the direction of image synthesis. Specifically, we explicitly demonstrate that the slope parameter of a rectifier can change the data distribution and can be used independently to control the translation direction. To improve style controllability, we propose two simple but effective techniques: adaptive ReLU (AdaReLU) and a structure-adaptive function. AdaReLU dynamically adjusts its slope parameter according to the target style, and can be combined with adaptive instance normalization (AdaIN) to improve controllability. Meanwhile, the structure-adaptive function enables rectifiers to manipulate the structure of feature maps more effectively. It is composed of the proposed structural convolution (StruConv), an efficient convolutional module that selects the regions to be activated based on the mean and variance specified by AdaIN. Extensive experiments show that the proposed techniques substantially improve network controllability and output diversity on style-based image translation tasks.
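As a hedged illustration of the AdaReLU idea (the class name, shapes, and slope parameterization below are assumptions, not the paper's API), a rectifier can take its negative slope from a style code:

```python
import torch
import torch.nn as nn

class AdaReLU(nn.Module):
    """Leaky-ReLU-style rectifier whose negative slope is predicted
    from a style code. Illustrative sketch, not the paper's layer."""

    def __init__(self, style_dim, num_channels):
        super().__init__()
        # One slope per channel, predicted from the target style.
        self.to_slope = nn.Linear(style_dim, num_channels)

    def forward(self, x, style):
        # x: (B, C, H, W), style: (B, style_dim)
        slope = torch.sigmoid(self.to_slope(style))   # keep slopes in (0, 1)
        slope = slope.view(x.size(0), -1, 1, 1)
        # Identity for positive inputs, style-dependent slope for negatives.
        return torch.where(x >= 0, x, slope * x)
```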
Current methods for image-to-image translation produce compelling results, however, the applied transformation is difficult to control, since existing mechanisms are often limited and non-intuitive. We propose ParGAN, a generalization of the cycle-consistent GAN framework to learn image transformations with simple and intuitive controls. The proposed generator takes as input both an image and a parametrization of the transformation. We train this network to preserve the content of the input image while ensuring that the result is consistent with the given parametrization. Our approach does not require paired data and can learn transformations across several tasks and datasets. We show how, with disjoint image domains with no annotated parametrization, our framework can create smooth interpolations as well as learn multiple transformations simultaneously.
Our goal with this survey is to provide an overview of the state of the art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers that have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
The goal of unpaired image-to-image translation is to produce output images that reflect the style of the target domain while keeping the unrelated content of the input source image unchanged. However, because existing methods pay insufficient attention to content change, semantic information from the source image is degraded during translation. In this paper, to address this problem, we introduce a novel approach, the Global and Local Alignment Networks (GLA-Net). The global alignment network aims to transfer the input image from the source domain to the target domain. To do so effectively, we learn the parameters (mean and standard deviation) of a multivariate Gaussian distribution as style features using an MLP-Mixer based style encoder. To transfer the style more accurately, we employ an adaptive instance normalization layer in the encoder, taking the parameters of the target multivariate Gaussian distribution as input. We also adopt a normalization loss and a likelihood loss to further reduce the domain gap and produce high-quality outputs. In addition, we introduce a local alignment network, which employs a pretrained self-supervised model to produce attention maps via a novel local alignment loss, ensuring that the translation network focuses on relevant pixels. Extensive experiments on five public datasets demonstrate that our method produces sharper and more realistic images than existing approaches. Our code is available at https://github.com/ygjwd12345/glanet.
Conditional GANs have matured in recent years and are able to generate high-quality, realistic images. However, the computational resources and training data required to train high-quality GANs are enormous, making research on transfer learning for these models a pressing topic. In this paper, we explore transferring from high-quality pretrained unconditional GANs to conditional GANs. To this end, we propose HyperNetwork-based adaptive weight modulation. In addition, we introduce a self-initialization procedure that does not require any real data to initialize the HyperNetwork parameters. To further improve the sample efficiency of the knowledge transfer, we propose using a self-supervised (contrastive) loss to improve the GAN discriminator. In extensive experiments, we validate the efficiency of HyperNetworks, self-initialization, and the contrastive loss on several standard benchmarks.
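A rough sketch of what hypernetwork-based weight modulation can look like, under assumed shapes and a per-output-channel scaling scheme (both are illustrative choices, not the paper's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperModulatedConv(nn.Module):
    """Convolution whose frozen pretrained weight is rescaled per class by
    a small hypernetwork. Illustrative sketch, not the paper's code."""

    def __init__(self, pretrained_weight, num_classes, embed_dim=64):
        super().__init__()
        # Frozen weight from the unconditional generator: (out_c, in_c, k, k).
        self.weight = nn.Parameter(pretrained_weight, requires_grad=False)
        self.embed = nn.Embedding(num_classes, embed_dim)
        self.hyper = nn.Linear(embed_dim, pretrained_weight.size(0))
        nn.init.zeros_(self.hyper.weight)   # start as the identity modulation
        nn.init.zeros_(self.hyper.bias)

    def forward(self, x, class_idx):
        scale = 1.0 + self.hyper(self.embed(class_idx))   # (B, out_c)
        # Apply one sample at a time for clarity (a grouped conv would batch this).
        out = [F.conv2d(x[i:i + 1],
                        self.weight * scale[i].view(-1, 1, 1, 1),
                        padding=self.weight.size(-1) // 2)
               for i in range(x.size(0))]
        return torch.cat(out, dim=0)
```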
Swapping autoencoders achieve state-of-the-art performance in deep image manipulation and image-to-image translation. We improve on this work by introducing a simple yet effective auxiliary module based on a gradient reversal layer. The auxiliary module's loss forces the generator to learn to reconstruct an image with an all-zero texture code, encouraging better disentanglement between structure and texture information. The proposed attribute-based transfer method enables refined control in style transfer while preserving structural information, without using semantic masks. To manipulate an image, we encode the geometry of the objects and the general style of the input image into two latent codes, with an additional constraint that enforces structural consistency. Moreover, thanks to the auxiliary loss, training time is significantly reduced. The superiority of the proposed model is demonstrated on complex domains, such as satellite images, where state-of-the-art methods are known to struggle. Finally, we show that our model improves quality metrics on a wide range of datasets while achieving results comparable to multimodal image generation techniques.
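The gradient reversal layer itself is a standard construction (identity on the forward pass, sign-flipped gradient on the backward pass); how it is wired into the auxiliary module is specific to this work. A minimal PyTorch version:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign (scaled by
    lambd) in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```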
We propose VecGAN, an image-to-image translation framework for facial attribute editing with interpretable latent directions. The facial attribute editing task faces the challenges of precisely editing an attribute with controllable strength while preserving the other attributes of the image. For this goal, we design attribute editing via latent space factorization: for each attribute, we learn a linear direction that is orthogonal to the others. The other component is the controllable strength of the change, a scalar value. In our framework, this scalar can either be sampled or encoded from a reference image by projection. Our work is inspired by latent space factorization works on fixed pretrained GANs. However, while those models cannot be trained end-to-end and struggle to edit encoded images precisely, VecGAN is trained end-to-end for the image translation task and succeeds in editing an attribute while preserving the others. Our extensive experiments show that VecGAN achieves significant improvements over the state of the art for both local and global edits.
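The core linear-editing operation the abstract describes can be sketched as follows; the helper names are illustrative, and VecGAN's actual encoder/decoder and training losses are not shown:

```python
import torch

def edit_attribute(latent, direction, alpha):
    """Shift a latent code along a learned attribute direction.

    latent:    (B, D) encoded image representation
    direction: (D,)   unit-norm direction for one attribute
    alpha:     (B, 1) signed edit strength (sampled, or projected
               from a reference image as below)
    Sketch of the general linear-editing idea, not VecGAN's exact code.
    """
    return latent + alpha * direction

def encode_strength(reference_latent, direction):
    """Read the edit strength off a reference image by projection."""
    return reference_latent @ direction.unsqueeze(1)   # (B, 1)
```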
Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any examples of corresponding image pairs. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT.
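A minimal sketch of the translation step the abstract describes, with hypothetical encoder/decoder handles (the real modules are in the linked repository):

```python
import torch

def translate(x_a, content_encoder_a, decoder_b, style_dim=8):
    """Translate image x_a from domain A to domain B by recombining its
    domain-invariant content code with a random domain-B style code.
    content_encoder_a and decoder_b are assumed trained modules."""
    content = content_encoder_a(x_a)                 # content code of x_a
    style_b = torch.randn(x_a.size(0), style_dim)    # sample the style prior
    return decoder_b(content, style_b)               # one of many possible outputs
```

Sampling a different `style_b` for the same `content` is what yields the multimodal outputs the framework targets.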
Most image-to-image translation methods require a large number of training images, which restricts their applicability. We propose ManiFest, a framework for few-shot image translation that learns a context-aware representation of a target domain from only a few images. To enforce feature consistency, our framework learns a style manifold between the source and a proxy anchor domain (assumed to be composed of large numbers of images). The learned manifold is interpolated and deformed toward the few-shot target domain via patch-based adversarial and feature statistics alignment losses. All of these components are trained simultaneously in a single end-to-end loop. Beyond the general few-shot translation task, our approach can alternatively be conditioned on a single exemplar image to reproduce its specific style. Extensive experiments demonstrate the efficacy of ManiFest on multiple tasks, outperforming the state of the art on all metrics in both the general- and exemplar-based scenarios. Our code will be open source.
Domain adaptation of GANs is a problem of fine-tuning the state-of-the-art GAN models (e.g. StyleGAN) pretrained on a large dataset to a specific domain with few samples (e.g. painting faces, sketches, etc.). While there are a great number of methods that tackle this problem in different ways there are still many important questions that remain unanswered. In this paper, we provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model. First, we perform a detailed exploration of the most important parts of StyleGAN that are responsible for adapting the generator to a new domain depending on the similarity between the source and target domains. In particular, we show that affine layers of StyleGAN can be sufficient for fine-tuning to similar domains. Second, inspired by these findings, we investigate StyleSpace to utilize it for domain adaptation. We show that there exist directions in the StyleSpace that can adapt StyleGAN to new domains. Further, we examine these directions and discover their many surprising properties. Finally, we leverage our analysis and findings to deliver practical improvements and applications in such standard tasks as image-to-image translation and cross-domain morphing.
Despite significant advances in image-to-image (I2I) translation with generative adversarial networks (GANs), it remains challenging to effectively translate an image into a set of diverse images across multiple target domains using a single pair of generator and discriminator. Existing I2I translation methods adopt multiple domain-specific content encoders, each trained only on images from its own domain. We argue, however, that content (domain-invariant) features should be learned from images of all domains; consequently, each domain-specific content encoder of the existing schemes fails to extract domain-invariant features efficiently. To address this issue, we present SoloGAN, a flexible and general model for multimodal I2I translation with unpaired data across multiple domains. In contrast to existing methods, SoloGAN uses a single projection discriminator with an additional auxiliary classifier, and shares the encoder and generator across all domains. As such, SoloGAN can be trained effectively with images from all domains, allowing domain-invariant content representations to be extracted efficiently. Qualitative and quantitative results on multiple datasets, against several counterparts and variants of SoloGAN, demonstrate the merits of the method, especially on challenging I2I translation datasets, i.e., those involving extreme shape variations or requiring complex backgrounds to be preserved after translation. Furthermore, we demonstrate the contribution of each component of SoloGAN through ablation studies.
The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
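For reference, the path length regularizer described above penalizes deviations of a Jacobian-vector-product norm from a running average:

```latex
\mathbb{E}_{\mathbf{w},\,\mathbf{y}\sim\mathcal{N}(0,\mathbf{I})}
\Bigl( \bigl\lVert \mathbf{J}_{\mathbf{w}}^{\mathsf{T}}\mathbf{y} \bigr\rVert_2 - a \Bigr)^2,
\qquad
\mathbf{J}_{\mathbf{w}} = \partial g(\mathbf{w}) / \partial \mathbf{w},
```

where g is the synthesis network, w an intermediate latent code, y a random image with normally distributed pixel intensities, and a an exponential moving average of the path lengths.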
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
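The AdaIN layer has a closed form with no learnable parameters: it normalizes each content channel and rescales it with the style's statistics. A direct PyTorch rendering:

```python
import torch

def adain(content, style, eps=1e-5):
    """AdaIN: align the per-channel mean and std of the content features
    with those of the style features. content, style: (B, C, H, W)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / c_std + s_mean
```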
Image-to-image translation (I2I) is a challenging computer vision problem used across numerous domains for multiple tasks. Recently, ophthalmology has become one of the major fields where applications of I2I are rapidly increasing. One such application is the generation of synthetic retinal optical coherence tomography (OCT) scans. Existing I2I methods require training multiple models to translate images from normal scans to a specific pathology, which limits their use due to their complexity. To address this issue, we propose an unsupervised multi-domain I2I network with pretrained style encoders that translates retinal OCT images in one domain to multiple domains. We assume that the image can be decomposed into a domain-invariant content code and a domain-specific style code, and we pretrain these style codes. Our experiments show that the proposed model outperforms state-of-the-art models such as MUNIT and CycleGAN at synthesizing diverse pathological scans.
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose to use data-specific augmentation techniques. Despite the effectiveness, it is difficult for these methods to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using the convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure enforces the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching. For different image synthesis tasks, we train the score estimation network using different data. We do not require the tuning of the hyperparameters or modifications to the network architecture. The ScoreMix method effectively increases the diversity of data and reduces the overfitting problem. Moreover, it can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ScoreMix method achieve significant improvements.
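A condensed sketch of the two-stage procedure the abstract outlines (mix, then pull toward the data manifold by shrinking the estimated score); `score_net` is an assumed pretrained estimator of the score, and the step count and learning rate are illustrative:

```python
import torch

def scoremix(x1, x2, score_net, steps=5, lr=0.1):
    """Mix two real samples convexly, then optimize the mixture by
    minimizing the norm of its estimated score grad_x log p(x).
    Sketch under assumed hyperparameters, not the paper's code."""
    lam = torch.rand(x1.size(0), 1, 1, 1)               # mixing coefficients
    x = (lam * x1 + (1 - lam) * x2).detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Squared score norm per sample; small norm == near the manifold.
        loss = score_net(x).flatten(1).norm(dim=1).pow(2).mean()
        loss.backward()
        opt.step()
    return x.detach()
```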
Deep long-tail learning aims to train useful deep networks on practical, real-world imbalanced distributions, where most of the labels in the tail classes are associated with only a few samples. There is a large body of work on training discriminative models for visual recognition on long-tailed distributions. In contrast, we aim to train conditional generative adversarial networks, a class of image generation models, on long-tailed distributions. We find that, similar to recognition, state-of-the-art methods for image generation also suffer from performance degradation on tail classes. This degradation is mainly due to class-specific mode collapse for the tail classes, which we observe to be correlated with a spectral explosion of the conditioning parameter matrix. We propose a novel group spectral regularizer (GSR) that prevents this spectral explosion, alleviating mode collapse and leading to diverse and plausible image generation even for tail classes. We find that GSR combines effectively with existing augmentation and regularization techniques, leading to state-of-the-art image generation performance on long-tailed data. Extensive experiments demonstrate the efficacy of our regularizer on long-tailed datasets with different degrees of imbalance.
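Loosely, a group spectral penalty of this flavor can be sketched as below; the column-wise grouping of the conditioning matrix is an illustrative assumption, not the paper's exact scheme:

```python
import torch

def group_spectral_reg(cond_weight, num_groups=4):
    """Penalize the largest singular value of each column group of a
    conditioning parameter matrix (e.g. class-conditional gains).
    Illustrative sketch of a group spectral regularizer."""
    return sum(torch.linalg.matrix_norm(g, ord=2)   # spectral norm per group
               for g in cond_weight.chunk(num_groups, dim=1))
```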
We propose semantic region-adaptive normalization (SEAN), a simple but effective building block for Generative Adversarial Networks conditioned on segmentation masks that describe the semantic regions in the desired output image. Using SEAN normalization, we can build a network architecture that can control the style of each semantic region individually, e.g., we can specify one style reference image per region. SEAN is better suited to encode, transfer, and synthesize style than the best previous method in terms of reconstruction quality, variability, and visual quality. We evaluate SEAN on multiple datasets and report better quantitative metrics (e.g. FID, PSNR) than the current state of the art. SEAN also pushes the frontier of interactive image editing. We can interactively edit images by changing segmentation masks or the style for any given region. We can also interpolate styles from two reference images per region. Code: https://github.com/ZPdesu/SEAN.
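A simplified sketch of per-region style modulation in the spirit of SEAN (the real blocks, in the linked repo, also blend in parameters derived from the segmentation map itself):

```python
import torch
import torch.nn as nn

class RegionStyleNorm(nn.Module):
    """Normalize features, then apply a different learned scale/shift per
    semantic region. Simplified sketch, not the full SEAN block."""

    def __init__(self, style_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(style_dim, num_channels)
        self.to_beta = nn.Linear(style_dim, num_channels)

    def forward(self, x, mask, region_styles):
        # x: (B, C, H, W); mask: (B, R, H, W) one-hot region assignment;
        # region_styles: (B, R, style_dim), one style code per region.
        h = self.norm(x)
        gamma = self.to_gamma(region_styles)     # (B, R, C)
        beta = self.to_beta(region_styles)       # (B, R, C)
        # Scatter each region's scale/shift onto its pixels.
        gamma_map = torch.einsum('brhw,brc->bchw', mask, gamma)
        beta_map = torch.einsum('brhw,brc->bchw', mask, beta)
        return h * (1 + gamma_map) + beta_map
```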
In this work, we present a novel architecture for face age editing that can produce structural modifications while maintaining relevant details present in the original image. We disentangle the style and content of the input image, and propose a new decoder network that adopts a style-based strategy to combine the style and content representations of the input image while conditioning the output on the target age. We go beyond existing aging methods by allowing users to adjust the degree of structure preservation of the input image at inference time. To this purpose, we introduce a masking mechanism, the Custom Structure Preservation (CUSP) module, which distinguishes relevant regions in the input image from those that should be discarded. CUSP requires no additional supervision. Finally, our quantitative and qualitative analysis, including a user study, shows that our method outperforms prior art and demonstrates the effectiveness of our strategy for image editing and adjustable structure preservation. Code and pretrained models are available at https://github.com/guillermogogotre/cusp.