Current methods for image-to-image translation produce compelling results; however, the applied transformation is difficult to control, since existing mechanisms are often limited and non-intuitive. We propose ParGAN, a generalization of the cycle-consistent GAN framework to learn image transformations with simple and intuitive controls. The proposed generator takes as input both an image and a parametrization of the transformation. We train this network to preserve the content of the input image while ensuring that the result is consistent with the given parametrization. Our approach does not require paired data and can learn transformations across several tasks and datasets. We show how, with disjoint image domains and no annotated parametrization, our framework can create smooth interpolations as well as learn multiple transformations simultaneously.
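The abstract above does not give architectural details, so the following is only a minimal sketch of the core idea under assumed PyTorch conventions: the transformation parametrization is broadcast over the spatial grid and concatenated with the image channels, so the generator can condition every pixel on it. All names, layer sizes, and the parameter dimensionality are illustrative.

```python
import torch
import torch.nn as nn

class ParametrizedGenerator(nn.Module):
    """Toy generator conditioned on a transformation parametrization."""
    def __init__(self, img_channels=3, param_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + param_dim, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, image, params):
        b, _, h, w = image.shape
        # Broadcast the parameter vector into per-pixel conditioning maps.
        param_maps = params.view(b, -1, 1, 1).expand(b, params.shape[1], h, w)
        return self.net(torch.cat([image, param_maps], dim=1))

g = ParametrizedGenerator(param_dim=1)
out = g(torch.randn(2, 3, 64, 64), torch.tensor([[0.2], [0.8]]))  # two transformation strengths
```

Sweeping the scalar parameter at test time is what would produce the smooth interpolations mentioned above.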
CoMoGAN is a continuous GAN relying on the unsupervised reorganization of the target data on a functional manifold. To that end, we introduce a new Functional Instance Normalization layer and a residual mechanism that disentangle image content from its position on the target manifold. We rely on naive physics-inspired models to guide the training while allowing private model/translation features. CoMoGAN can be used with any GAN backbone and enables new types of image translation, such as cyclic image translation (e.g., timelapse generation) or detached linear translation. It outperforms the literature on all datasets. Our code is available at http://github.com/cv-rits/comogan.
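The abstract does not spell out the Functional Instance Normalization layer, so the snippet below is only a hedged sketch of the general mechanism: an instance-norm layer whose affine parameters are predicted from a scalar manifold coordinate `phi` (e.g., a time-of-day angle) by small MLPs. The MLP sizes and the class name are assumptions.

```python
import torch
import torch.nn as nn

class FunctionalInstanceNorm(nn.Module):
    """Instance norm whose scale/shift are functions of a manifold coordinate phi."""
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.gamma = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, channels))
        self.beta = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, channels))

    def forward(self, x, phi):
        # phi: (B, 1) position of the target on the functional manifold
        g = self.gamma(phi)[..., None, None]
        b = self.beta(phi)[..., None, None]
        return self.norm(x) * (1 + g) + b

fin = FunctionalInstanceNorm(64)
y = fin(torch.randn(2, 64, 32, 32), torch.tensor([[0.25], [0.75]]))
```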
Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains, without having to collect even a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains with diverse styles and shapes. Notably, many of these modifications are difficult or outright impossible to achieve with existing methods. We conduct extensive experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
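A common way to realize this kind of text-driven domain shift is a directional CLIP loss: the change between images from a frozen and a fine-tuned copy of the generator is aligned, in CLIP space, with the change between a source and a target text prompt. The sketch below uses OpenAI's `clip` package and assumes the image tensors are already resized and normalized for CLIP; it illustrates the idea rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def directional_clip_loss(frozen_imgs, trained_imgs, src_text, tgt_text):
    """1 - cosine similarity between the image-space and text-space edit directions."""
    with torch.no_grad():
        t = model.encode_text(clip.tokenize([src_text, tgt_text]).to(device))
        text_dir = F.normalize(t[1] - t[0], dim=-1)
    img_dir = model.encode_image(trained_imgs) - model.encode_image(frozen_imgs)
    img_dir = F.normalize(img_dir, dim=-1)
    return (1 - img_dir @ text_dir).mean()
```

Minimizing such a loss while only the fine-tuned generator's weights are trainable is what shifts the domain without any target images.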
Image-to-image translation (i2i) networks suffer from entanglement effects in the presence of physics-related phenomena in the target domain (such as occlusions, fog, etc.), which degrades translation quality, controllability, and variability. In this paper, we build upon a collection of simple physics models and present a comprehensive method for disentangling visual traits in target images, guiding the process with a physical model that renders some of the target traits while the remaining ones are learned. Because it allows explicit and interpretable outputs, our physical model (optimally regressed on the target) makes it possible to generate unseen scenarios in a controllable manner. We also extend our framework, showing its versatility with neural-guided disentanglement. The results show that our disentanglement strategies dramatically increase performance, both qualitatively and quantitatively, in several challenging image-translation scenarios.
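As an illustration of the kind of simple, differentiable physics model referred to above, the sketch below renders fog with the standard Koschmieder atmospheric-scattering model; composing such a renderer with a learned translation is one way to let the physical model produce some target traits while a network learns the rest. The parameter values and the depth input are placeholders.

```python
import torch

def add_fog(clear_img, depth, beta=0.05, airlight=0.8):
    """Koschmieder model: I = J * t + A * (1 - t), with transmission t = exp(-beta * depth)."""
    transmission = torch.exp(-beta * depth)        # (B, 1, H, W)
    return clear_img * transmission + airlight * (1 - transmission)

img = torch.rand(2, 3, 128, 128)                   # clear-weather image (dummy)
depth = torch.rand(2, 1, 128, 128) * 50.0          # per-pixel depth in meters (dummy)
foggy = add_fog(img, depth)
```

Because `beta` and `airlight` are explicit, interpretable parameters, they can be regressed on target data and then varied at inference time to generate unseen fog densities in a controllable way.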
Recent 3D-aware GANs rely on volumetric rendering techniques to disentangle the pose and appearance of objects, de facto generating entire 3D volumes rather than single-view 2D images from a latent code. Complex image editing tasks can be performed in standard 2D-based GANs (e.g., StyleGAN models) as manipulation of latent dimensions. However, to the best of our knowledge, similar properties have only been partially explored for 3D-aware GAN models. This work aims to fill this gap by showing the limitations of existing methods and proposing LatentSwap3D, a model-agnostic approach designed to enable attribute editing in the latent space of pre-trained 3D-aware GANs. We first identify the most relevant dimensions in the latent space of the model controlling the targeted attribute by relying on the feature importance ranking of a random forest classifier. Then, to apply the transformation, we swap the top-K most relevant latent dimensions of the image being edited with an image exhibiting the desired attribute. Despite its simplicity, LatentSwap3D provides remarkable semantic edits in a disentangled manner and outperforms alternative approaches both qualitatively and quantitatively. We demonstrate our semantic edit approach on various 3D-aware generative models such as pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D and VolumeGAN, and on diverse datasets, such as FFHQ, AFHQ, Cats, MetFaces, and CompCars. The project page can be found at https://enisimsar.github.io/latentswap3d/.
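The rank-and-swap step can be prototyped directly with scikit-learn, as sketched below; the latent codes, attribute labels, dimensionality, and K are placeholder values, and in practice the labels would come from an off-the-shelf attribute classifier run on the generated images.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

codes = np.random.randn(5000, 512)             # latent codes of generated images (dummy)
labels = np.random.randint(0, 2, size=5000)    # binary attribute labels, e.g. "smiling" (dummy)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(codes, labels)
top_k = np.argsort(forest.feature_importances_)[::-1][:20]   # most attribute-relevant dimensions

def swap_attribute(code_to_edit, reference_code, dims=top_k):
    """Copy the top-K attribute-relevant latent dimensions from a reference code
    exhibiting the desired attribute into the code being edited."""
    edited = code_to_edit.copy()
    edited[dims] = reference_code[dims]
    return edited

edited = swap_attribute(codes[0], codes[labels == 1][0])
```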
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We cover popular recent architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantic editing while preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Most image-to-image translation methods require a large number of training images, which restricts their applicability. We propose ManiFest, a framework for few-shot image translation that learns a context-aware representation of a target domain from only a few images. To enforce feature consistency, our framework learns a style manifold between the source and a proxy anchor domain (assumed to be composed of a large number of images). The learned manifold is interpolated and deformed towards the few-shot target domain via patch-based adversarial and feature-statistics alignment losses. All of these components are trained simultaneously in a single end-to-end loop. In addition to the general few-shot translation task, our approach can be conditioned on a single exemplar image to reproduce its specific style. Extensive experiments demonstrate the efficacy of ManiFest on multiple tasks, outperforming the state of the art on all metrics in both the general and the exemplar-based scenarios. Our code will be open source.
Image translation and manipulation have gained increasing attention with the rapid development of deep generative models. Although existing approaches have brought impressive results, they mainly operate in 2D space. In light of recent advances in NeRF-based 3D-aware generative models, we introduce a new task, Semantic-to-NeRF translation, which aims to reconstruct a 3D scene modelled by NeRF, conditioned on a single-view semantic mask as input. To kick off this novel task, we propose the Sem2NeRF framework. In particular, Sem2NeRF addresses this highly challenging task by encoding the semantic mask into the latent code that controls the 3D scene representation of a pre-trained decoder. To further improve the accuracy of the mapping, we integrate a new region-aware learning strategy into the design of both the encoder and the decoder. We verify the efficacy of the proposed Sem2NeRF and demonstrate that it outperforms several strong baselines on two benchmark datasets. Code and video are available at https://donydchen.github.io/sem2nerf/
An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise: these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
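The "steering" operation itself is a linear walk in latent space. The sketch below assumes a hypothetical pretrained `generator`; the direction is random here purely for illustration, whereas in the paper it would be learned so that it reproduces a specific transformation such as a camera shift.

```python
import torch
import torch.nn.functional as F

latent_dim = 512
z = torch.randn(8, latent_dim)                       # batch of latent codes
walk = F.normalize(torch.randn(latent_dim), dim=0)   # steering direction (illustrative)

def steer(z, walk, alpha):
    """Shift latent codes along the steering direction with strength alpha."""
    return z + alpha * walk

steps = [steer(z, walk, a) for a in torch.linspace(-3, 3, steps=7)]
# images = [generator(s) for s in steps]  # render the steered codes with a pretrained GAN
```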
Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any examples of corresponding image pairs. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT.
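A toy sketch of the content/style recombination described above (not the actual MUNIT architecture, which uses multi-layer encoders and AdaIN residual decoders): a content feature map extracted from the source image is decoded together with a style vector sampled from the target domain's style space, and resampling the style yields diverse translations.

```python
import torch
import torch.nn as nn

class TinyContentEncoder(nn.Module):
    """Stand-in for a domain-invariant content encoder."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 64, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

class TinyStyleDecoder(nn.Module):
    """Stand-in decoder: the style vector rescales the content features (AdaIN-like)."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.to_scale = nn.Linear(style_dim, 64)
        self.out = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, content, style):
        scale = self.to_scale(style)[..., None, None]
        return torch.tanh(self.out(content * (1 + scale)))

enc_a, dec_b = TinyContentEncoder(), TinyStyleDecoder()
x_a = torch.randn(4, 3, 64, 64)        # images from source domain A
content = enc_a(x_a)                   # domain-invariant content code
style_b = torch.randn(4, 8)            # random style code from target domain B
x_ab = dec_b(content, style_b)         # resampling style_b gives diverse outputs
```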
Adverse weather image translation belongs to the unsupervised image-to-image (I2I) translation task, which aims to transfer an adverse-condition domain (e.g., rainy night) to a standard domain (e.g., day). It is a challenging task because images from the adverse domain contain artifacts and insufficient information. Recently, many studies employing generative adversarial networks (GANs) have achieved notable success in I2I translation, but there are still limitations in applying them to adverse weather enhancement. A symmetric architecture based on bidirectional cycle-consistency loss is adopted as the standard framework for unsupervised domain transfer methods. However, it can lead to inferior translation results if the two domains have imbalanced information. To address this issue, we propose a novel GAN model, AU-GAN, which has an asymmetric architecture for adverse-domain translation. We insert the proposed feature transfer network (T-net) only in the normal-domain generator (i.e., rainy night → day) to enhance the encoded features of the adverse-domain image. In addition, we introduce asymmetric feature matching for disentanglement of the encoded features. Finally, we propose an uncertainty-aware cycle-consistency loss to address the regional uncertainty of cycle-reconstructed images. We demonstrate the effectiveness of our method through qualitative and quantitative comparisons with state-of-the-art models. Code is available at https://github.com/jgkwak95/AU-GAN.
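The abstract does not give the exact form of the uncertainty-aware cycle-consistency loss; one common heteroscedastic formulation (a sketch, not necessarily the one used in AU-GAN) down-weights the reconstruction error wherever a predicted per-pixel log-variance map says the network is uncertain, at the cost of a log-variance penalty.

```python
import torch

def uncertainty_aware_cycle_loss(x, x_rec, log_sigma):
    """L1 cycle loss attenuated by a predicted per-pixel uncertainty map."""
    return (torch.abs(x - x_rec) * torch.exp(-log_sigma) + log_sigma).mean()

x = torch.rand(1, 3, 64, 64)
x_rec = x + 0.1 * torch.randn_like(x)      # cycle-reconstructed image (dummy)
log_sigma = torch.zeros(1, 1, 64, 64)      # uncertainty map predicted by the network (dummy)
loss = uncertainty_aware_cycle_loss(x, x_rec, log_sigma)
```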
Despite significant advances in image-to-image (I2I) translation with generative adversarial networks (GANs), it remains challenging to effectively translate an image into a set of diverse images in multiple target domains using a single pair of generator and discriminator. Existing I2I translation methods adopt multiple domain-specific content encoders for different domains, where each domain-specific content encoder is trained only on images from the same domain. Nevertheless, we argue that the content (domain-invariant) features should be learned from images of all domains. Consequently, each domain-specific content encoder of existing schemes fails to extract domain-invariant features efficiently. To address this issue, we present a flexible and general SoloGAN model for multimodal I2I translation among multiple domains with unpaired data. In contrast to existing methods, SoloGAN uses a single projection discriminator with an additional auxiliary classifier and shares the encoder and generator across all domains. Consequently, SoloGAN can be trained effectively with images from all domains so that the domain-invariant content representation can be extracted efficiently. Qualitative and quantitative results on multiple datasets against several counterparts and variants of SoloGAN demonstrate the merits of the method, especially for challenging I2I translation datasets, i.e., those involving extreme shape variations or requiring complex backgrounds to be kept unchanged after translation. Furthermore, we demonstrate the contribution of each component through ablation studies.
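A sketch of what a single shared discriminator with a projection term and an auxiliary domain classifier could look like; the layer sizes and structure are illustrative rather than the paper's. The adversarial score combines an unconditional term with the projection of the features onto a learned domain embedding, and a second head predicts the domain label.

```python
import torch
import torch.nn as nn

class MultiDomainDiscriminator(nn.Module):
    """One discriminator for all domains: projection term + auxiliary classifier."""
    def __init__(self, num_domains, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(feat_dim, 1)
        self.domain_embed = nn.Embedding(num_domains, feat_dim)   # projection term
        self.aux_classifier = nn.Linear(feat_dim, num_domains)    # auxiliary classifier

    def forward(self, x, domain):
        h = self.backbone(x)
        adv = self.real_fake(h) + (h * self.domain_embed(domain)).sum(dim=1, keepdim=True)
        return adv, self.aux_classifier(h)

d = MultiDomainDiscriminator(num_domains=3)
adv_score, domain_logits = d(torch.randn(2, 3, 64, 64), torch.tensor([0, 2]))
```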
Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
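The invertibility constraint described above is usually enforced with two coupled cycles. The sketch below assumes hypothetical `G(x, z)` and `E(y) -> (mu, logvar)` interfaces and omits the adversarial terms, so it only shows the image-reconstruction, KL, and latent-recovery pieces; the dummy modules exist just to make the interfaces concrete.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bicycle_losses(G, E, x, y, z_dim=8):
    """Two coupled cycles tying the latent code and the output together."""
    # Direction 1: encode the ground-truth output, then reconstruct it (cVAE-style).
    mu, logvar = E(y)
    z_enc = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    img_recon = F.l1_loss(G(x, z_enc), y)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Direction 2: sample a code, generate, then recover the code (latent regression).
    z = torch.randn(x.size(0), z_dim, device=x.device)
    mu_fake, _ = E(G(x, z))
    latent_recon = F.l1_loss(mu_fake, z)
    return img_recon, kl, latent_recon

class DummyG(nn.Module):
    def forward(self, x, z):
        return x + z.mean(dim=1, keepdim=True)[..., None, None]

class DummyE(nn.Module):
    def forward(self, y):
        mu = y.flatten(1)[:, :8]
        return mu, torch.zeros_like(mu)

x = torch.rand(2, 3, 32, 32)
losses = bicycle_losses(DummyG(), DummyE(), x, x)
```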
We propose VecGAN, an image-to-image translation framework for facial attribute editing with interpretable latent directions. The facial attribute editing task faces the challenges of precisely editing an attribute with controllable strength while preserving the other attributes of an image. For this goal, we design attribute editing via latent space factorization and, for each attribute, learn a linear direction that is orthogonal to the others. The other component is the controllable strength of the change, a scalar value. In our framework, this scalar can either be sampled or encoded from a reference image by projection. Our work is inspired by latent space factorization works on fixed pretrained GANs. However, while those models cannot be trained end-to-end and struggle to edit encoded images precisely, VecGAN is trained end-to-end for the image translation task and succeeds in editing an attribute while preserving the others. Our extensive experiments show that VecGAN achieves significant improvements over the state of the art for both local and global edits.
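The editing mechanism reduces to moving along one attribute direction per edit in the translator's latent space. In the sketch below the orthonormal direction matrix is random purely for illustration (in the framework above it is learned end-to-end), and the projection shows how an edit strength could be read off a reference code.

```python
import torch

latent_dim, num_attrs = 512, 5
# Orthonormal rows, one direction per attribute (random stand-in for learned directions).
directions = torch.linalg.qr(torch.randn(latent_dim, num_attrs))[0].T

def edit(latent, attr_idx, alpha):
    """Move an encoded latent along one attribute direction with strength alpha."""
    return latent + alpha * directions[attr_idx]

def encode_strength(reference_latent, attr_idx):
    """Read an attribute strength off a reference latent by projection."""
    return reference_latent @ directions[attr_idx]

w = torch.randn(latent_dim)
w_edited = edit(w, attr_idx=2, alpha=encode_strength(torch.randn(latent_dim), 2))
```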
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
Generative adversarial networks (GANs) have recently introduced effective methods for performing image-to-image translation. These models can be applied to a variety of domains in image-to-image translation without changing any parameters. In this paper, we survey and analyze eight image-to-image generative adversarial networks: Pix2Pix, CycleGAN, CoGAN, StarGAN, MUNIT, StarGAN2, DA-GAN, and Self-Attention GAN. Each of these models presented state-of-the-art results and introduced new techniques for building image-to-image GANs. In addition to a survey of the models, we also survey the 18 datasets they were trained on and the 9 metrics they were evaluated on. Finally, we present the results of a controlled experiment on 6 of these models over a common set of metrics and datasets. The results were mixed and show that, on certain datasets, tasks, and metrics, some models outperform others. The last section of this paper discusses these results and establishes areas for future research. As researchers continue to innovate on new image-to-image GANs, it is important that they understand the existing methods, datasets, and metrics. This paper provides a comprehensive overview and discussion to help build this foundation.
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
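A compact sketch of the two ingredients the abstract highlights: a mapping network that turns z into an intermediate style code w, and an AdaIN-style layer in which w sets the per-channel scale and bias of a normalized feature map. This is simplified relative to the actual StyleGAN generator, which also injects per-pixel noise and applies resolution-specific styles progressively.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """z -> w: an MLP mapping the latent to an intermediate style space."""
    def __init__(self, dim=512, layers=8):
        super().__init__()
        blocks = []
        for _ in range(layers):
            blocks += [nn.Linear(dim, dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)

class AdaIN(nn.Module):
    """The style code controls per-channel scale and bias of a normalized feature map."""
    def __init__(self, channels, w_dim=512):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        self.affine = nn.Linear(w_dim, channels * 2)

    def forward(self, x, w):
        scale, bias = self.affine(w).chunk(2, dim=1)
        return self.norm(x) * (1 + scale[..., None, None]) + bias[..., None, None]

w = MappingNetwork()(torch.randn(4, 512))
styled = AdaIN(64)(torch.randn(4, 64, 32, 32), w)
```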
Novel view synthesis from a single image has recently achieved remarkable results, although the requirement of some form of 3D, pose, or multi-view supervision at training time limits deployment in practical scenarios. This work aims to relax these assumptions, enabling conditional generative models for novel view synthesis to be trained in a fully unsupervised manner. We first pre-train a purely generative decoder model using a 3D-aware GAN formulation, while simultaneously training an encoder network to invert the mapping from latent space to images. We then swap the encoder and decoder and train the network as a conditional GAN, with a mixture of an autoencoder-like objective and self-distillation. At test time, given a view of an object, our model first embeds the image content into a latent code and then generates novel views of it by keeping the code fixed and varying the pose. We test our framework on synthetic datasets such as ShapeNet and on unconstrained collections of natural images, where no competing method can be trained.