Figure 1. Multi-domain image-to-image translation results on the CelebA dataset via transferring knowledge learned from the RaFD dataset. The first and sixth columns show input images while the remaining columns are images generated by StarGAN. Note that the images are generated by a single generator network, and facial expression labels such as angry, happy, and fearful are from RaFD, not CelebA.
Generative adversarial networks (GANs) have recently introduced effective methods for performing image-to-image translation. These models can be applied to a variety of domains within image-to-image translation without changing any parameters. In this paper, we survey and analyze eight image-to-image generative adversarial networks: Pix2Pix, CycleGAN, CoGAN, StarGAN, MUNIT, StarGAN v2, DA-GAN, and Self-Attention GAN. Each of these models presents state-of-the-art results and introduces new techniques for building image-to-image GANs. In addition to surveying the models, we also survey the 18 datasets they were trained on and the 9 metrics they were evaluated on. Finally, we present the results of controlled experiments on six of these models over a common set of metrics and datasets. The results are mixed and show that, on certain datasets, tasks, and metrics, some models outperform others. The last section of this paper discusses these results and establishes areas for future research. As researchers continue to innovate new image-to-image GANs, it is important that they understand the existing methods, datasets, and metrics. This paper provides a comprehensive overview and discussion to help build this foundation.
Despite significant advances in image-to-image (I2I) translation with generative adversarial networks (GANs), it remains challenging to effectively translate an image into a set of diverse images in multiple target domains using a single pair of generator and discriminator. Existing I2I translation methods adopt multiple domain-specific content encoders for different domains, where each domain-specific content encoder is trained only on images from the same domain. However, we argue that the content (domain-invariant) features should be learned from images across all domains, so each domain-specific content encoder in existing schemes cannot extract domain-invariant features efficiently. To address this issue, we present a flexible and general SoloGAN model for multimodal I2I translation among multiple domains with unpaired data. In contrast to existing methods, SoloGAN uses a single projection discriminator with an additional auxiliary classifier, and shares the encoder and generator across all domains. Consequently, SoloGAN can be trained effectively with images from all domains, so that domain-invariant content representations can be extracted efficiently. Qualitative and quantitative results on multiple datasets, against several counterparts and variants of SoloGAN, demonstrate the merits of the method, especially for challenging I2I translation datasets, i.e., datasets involving extreme shape variations or that require keeping complex backgrounds unchanged after translation. Furthermore, we demonstrate the contribution of each component in SoloGAN through ablation studies.
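The central architectural choice in the abstract above is that a single content encoder and generator serve every domain, while one projection discriminator with an auxiliary classifier handles all domain labels. The PyTorch sketch below illustrates that sharing pattern only; the layer sizes, module names, and loss wiring are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the shared-component idea: one encoder/generator pair for all domains,
# one projection discriminator with an auxiliary domain classifier (illustrative only).
import torch
import torch.nn as nn

class SharedContentEncoder(nn.Module):
    """One encoder used for images from *all* domains."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class ProjectionDiscriminator(nn.Module):
    """Real/fake score via projection on a domain embedding, plus an
    auxiliary classifier predicting the domain label."""
    def __init__(self, num_domains, ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.linear = nn.Linear(ch * 2, 1)              # unconditional score
        self.embed = nn.Embedding(num_domains, ch * 2)  # projection term
        self.aux_cls = nn.Linear(ch * 2, num_domains)   # auxiliary classifier

    def forward(self, x, domain):
        h = self.features(x)
        score = self.linear(h) + (self.embed(domain) * h).sum(dim=1, keepdim=True)
        return score, self.aux_cls(h)

x = torch.randn(4, 3, 64, 64)
d = torch.randint(0, 3, (4,))
enc = SharedContentEncoder()
disc = ProjectionDiscriminator(num_domains=3)
content = enc(x)                   # same encoder regardless of source domain
score, domain_logits = disc(x, d)  # adversarial score + domain prediction
```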
Image-to-image translation (I2I) is a challenging computer vision problem used in numerous domains for multiple tasks. Recently, ophthalmology has become one of the major fields where applications of I2I are increasing rapidly. One such application is the generation of synthetic retinal optical coherence tomography (OCT) scans. Existing I2I methods require training multiple models to translate images from normal scans to a specific pathology, which limits their use due to the resulting complexity. To address this issue, we propose an unsupervised multi-domain I2I network with pre-trained style encoders that translates retinal OCT images in one domain to multiple other domains. We assume that an image decomposes into a domain-invariant content code and a domain-specific style code, and we pre-train these style codes. The performed experiments show that the proposed model outperforms state-of-the-art models such as MUNIT and CycleGAN in synthesizing diverse pathological scans.
The data-driven paradigm based on machine learning is becoming ubiquitous in image processing and communications. In particular, image-to-image (I2I) translation is a generic and widely used approach to image processing problems such as image synthesis, style transfer, and image restoration. At the same time, neural image compression has emerged as a data-driven alternative to traditional coding approaches in visual communications. In this paper, we study the combination of these two paradigms into a joint I2I compression and translation framework, focusing on multi-domain image synthesis. We first propose distributed I2I translation by integrating quantization and entropy coding into an I2I translation framework (i.e., i2iCodec). In practice, the image compression functionality (i.e., autoencoding) is also desirable and would otherwise require deploying a conventional image codec alongside. We therefore further propose a unified framework that allows both translation and autoencoding capabilities in a single codec. Adaptive residual blocks conditioned on the translation/compression mode provide flexible adaptation to the desired functionality. Experiments show promising results in both I2I translation and image compression using a single model.
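The unified codec described above hinges on residual blocks whose behavior is conditioned on whether the model is translating or compressing. The sketch below shows one plausible way to realize such mode conditioning (a learned per-mode channel-wise scale and shift); the abstract does not specify the exact mechanism, so the details here are assumptions.

```python
# Hedged sketch of a mode-conditioned residual block (illustrative, not the paper's code).
import torch
import torch.nn as nn

class ModeAdaptiveResBlock(nn.Module):
    def __init__(self, channels, num_modes=2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, 1, 1)
        self.conv2 = nn.Conv2d(channels, channels, 3, 1, 1)
        # One (scale, shift) pair per mode: 0 = compression/autoencoding, 1 = translation.
        self.scale = nn.Embedding(num_modes, channels)
        self.shift = nn.Embedding(num_modes, channels)
        nn.init.ones_(self.scale.weight)
        nn.init.zeros_(self.shift.weight)

    def forward(self, x, mode):
        s = self.scale(mode).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.shift(mode).unsqueeze(-1).unsqueeze(-1)
        h = torch.relu(self.conv1(x))
        h = self.conv2(h) * s + b       # modulate features by the requested mode
        return x + h                    # residual connection

block = ModeAdaptiveResBlock(channels=64)
feat = torch.randn(2, 64, 32, 32)
mode = torch.tensor([0, 1])             # per-sample mode selection
out = block(feat, mode)
print(out.shape)                        # torch.Size([2, 64, 32, 32])
```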
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose to use data-specific augmentation techniques. Despite the effectiveness, it is difficult for these methods to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using the convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure enforces the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching. For different image synthesis tasks, we train the score estimation network using different data. We do not require the tuning of the hyperparameters or modifications to the network architecture. The ScoreMix method effectively increases the diversity of data and reduces the overfitting problem. Moreover, it can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ScoreMix method achieve significant improvements.
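The two steps of ScoreMix described above (convex mixing of real samples, then refining the mix by minimizing the norm of an estimated score) can be sketched directly. In the snippet below, `score_net` is a stand-in for the paper's multi-scale score-matching network; the step count and learning rate are arbitrary illustrative values.

```python
# Hedged sketch of ScoreMix-style augmentation: mix, then pull toward the data manifold
# by minimizing the norm of the estimated score (gradient of the log-density).
import torch

def scoremix(x_a, x_b, score_net, steps=5, lr=0.1):
    lam = torch.rand(x_a.size(0), 1, 1, 1, device=x_a.device)
    x = (lam * x_a + (1.0 - lam) * x_b).detach().requires_grad_(True)  # convex combination
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = score_net(x)                        # estimate of grad_x log p(x)
        loss = score.flatten(1).norm(dim=1).mean()  # small norm ~ close to the data manifold
        loss.backward()
        opt.step()
    return x.detach()

# Toy usage with a dummy score network (illustrative only).
score_net = torch.nn.Conv2d(3, 3, 3, padding=1)
x_a, x_b = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
augmented = scoremix(x_a, x_b, score_net)
```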
Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any examples of corresponding image pairs. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at https://github.com/nvlabs/MUNIT.
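The translation step described above (encode content in the source domain, decode it with a style sampled from the target domain) can be summarized with placeholder modules. The sketch below concatenates a broadcast style vector instead of using AdaIN-based decoding, so it illustrates the content/style recombination only, not MUNIT's actual architecture.

```python
# Toy content/style recombination in the spirit of MUNIT (placeholder modules).
import torch
import torch.nn as nn

class TinyAutoEncoders(nn.Module):
    def __init__(self, style_dim=8, ch=32):
        super().__init__()
        self.content_enc = nn.Sequential(nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU())
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, style_dim),
        )
        self.decoder = nn.Sequential(nn.ConvTranspose2d(ch + style_dim, 3, 4, 2, 1), nn.Tanh())

    def decode(self, content, style):
        # Broadcast the style vector spatially and concatenate (AdaIN in the real model).
        s = style[:, :, None, None].expand(-1, -1, *content.shape[2:])
        return self.decoder(torch.cat([content, s], dim=1))

model_a, model_b = TinyAutoEncoders(), TinyAutoEncoders()  # one per domain
x_a = torch.randn(2, 3, 64, 64)
content_a = model_a.content_enc(x_a)         # domain-invariant content code
style_b = torch.randn(2, 8)                  # random style from domain B's prior
x_ab = model_b.decode(content_a, style_b)    # translated image in domain B
print(x_ab.shape)                            # torch.Size([2, 3, 64, 64])
```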
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
We propose VecGAN, an image-to-image translation framework for facial attribute editing with interpretable latent directions. The facial attribute editing task faces the challenges of editing an attribute precisely, with controllable strength, while preserving the other attributes of the image. Toward this goal, we design attribute editing via latent space factorization: for each attribute, we learn a linear direction that is orthogonal to the others. The other component is the controllable strength of the change, a scalar value. In our framework, this scalar can be either sampled or encoded from a reference image via projection. Our work is inspired by latent space factorization works on fixed pretrained GANs. However, while those models cannot be trained end-to-end and struggle to edit encoded images precisely, VecGAN is trained end-to-end for the image translation task and successfully edits an attribute while preserving the others. Our extensive experiments show that VecGAN achieves significant improvements over the state of the art for both local and global edits.
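The abstract's core mechanism, one learned linear latent direction per attribute plus a scalar strength, lends itself to a compact sketch. Below, the orthogonality constraint is approximated with a soft penalty and the latent dimensionality is an arbitrary assumption; this is not VecGAN's implementation.

```python
# Sketch of per-attribute linear directions with scalar edit strength (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeDirections(nn.Module):
    def __init__(self, latent_dim=256, num_attrs=5):
        super().__init__()
        self.directions = nn.Parameter(torch.randn(num_attrs, latent_dim))

    def edit(self, z, attr_idx, alpha):
        d = F.normalize(self.directions[attr_idx], dim=-1)   # unit-length direction
        return z + alpha[:, None] * d                        # shift latent by alpha * d

    def orthogonality_penalty(self):
        d = F.normalize(self.directions, dim=-1)
        gram = d @ d.t()
        off_diag = gram - torch.eye(d.size(0), device=d.device)
        return (off_diag ** 2).sum()                          # encourage orthogonal directions

dirs = AttributeDirections()
z = torch.randn(4, 256)                 # latent code of an encoded image
alpha = torch.full((4,), 0.8)           # strength: sampled, or projected from a reference image
z_edited = dirs.edit(z, attr_idx=2, alpha=alpha)
reg = dirs.orthogonality_penalty()      # added to the training loss
```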
Although modern image translation techniques can create photo-realistic synthetic images, they have limited style controllability and thus may suffer from translation errors. In this work, we show that the activation function is one of the crucial components controlling the direction of image synthesis. Specifically, we explicitly demonstrate that the slope parameter of a rectifier can change the data distribution and can be used independently to control the direction of translation. To improve style controllability, two simple but effective techniques are proposed: adaptive ReLU (AdaReLU) and a structure-adaptive function. AdaReLU can dynamically adjust its slope parameter according to the target style, and can be combined with adaptive instance normalization (AdaIN) to further increase controllability. Meanwhile, the structure-adaptive function enables rectifiers to manipulate the structure of feature maps more effectively. It is built on the proposed structure convolution (StruConv), an efficient convolutional module that decides which areas to activate based on the mean and variance specified by AdaIN. Extensive experiments show that the proposed techniques can greatly increase network controllability and output diversity in style-based image translation tasks.
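A minimal way to picture AdaReLU as described above is a leaky rectifier whose negative slope is predicted per channel from the style code. The sketch below follows that idea; the slope-predicting layer, the sigmoid squashing, and the tensor shapes are assumptions rather than the paper's exact formulation.

```python
# Sketch of a style-conditioned rectifier: the negative slope is predicted from the style code.
import torch
import torch.nn as nn

class AdaReLU(nn.Module):
    def __init__(self, style_dim, channels):
        super().__init__()
        self.to_slope = nn.Linear(style_dim, channels)  # style -> per-channel slope

    def forward(self, x, style):
        slope = torch.sigmoid(self.to_slope(style))[:, :, None, None]  # slopes in (0, 1)
        return torch.where(x >= 0, x, slope * x)        # leaky ReLU with a learned slope

act = AdaReLU(style_dim=8, channels=64)
feat = torch.randn(2, 64, 32, 32)
style = torch.randn(2, 8)
out = act(feat, style)   # same shape as feat, style-dependent negative slope
```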
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
Recent studies on multi-domain facial image translation have achieved impressive results. Existing methods generally provide the discriminator with an auxiliary classifier to impose domain translation. However, these methods neglect important information regarding domain distribution matching. To address this issue, we propose SwitchGAN, a switch generative adversarial network with a more adaptive discriminator structure and a matched generator, to perform delicate image translation among multiple domains. A feature-switching operation is proposed to achieve feature selection and fusion in our conditional modules. We demonstrate the effectiveness of our model. Furthermore, we introduce a new capability of our generator: attribute intensity control and content information extraction without tailored training. Experiments on the Morph, RaFD, and CelebA databases visually and quantitatively show that our extended SwitchGAN (i.e., Gated SwitchGAN) achieves better translation results than StarGAN, AttGAN, and STGAN. The attribute classification accuracy achieved with a trained ResNet-18 model and the FID score obtained with an ImageNet-pretrained Inception-v3 model also quantitatively demonstrate the superior performance of our model.
Our goal with this survey is to provide an overview of the state of the art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers that have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
We present a novel image inversion framework and a training pipeline to achieve high-fidelity image inversion with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between image reconstruction fidelity and image editing quality remains an open challenge. The low-rate latent spaces are limited in their expressive power for high-fidelity reconstruction. On the other hand, high-rate latent spaces result in degraded editing quality. In this work, to achieve high-fidelity inversion, we learn residual features in higher latent codes that lower latent codes were not able to encode. This enables preserving image details in reconstruction. To achieve high-quality editing, we learn how to transform the residual features to adapt to manipulations in latent codes. We train the framework to extract residual features and transform them via a novel architecture pipeline and cycle consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods. Qualitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
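The invertibility constraint described above can be illustrated with toy networks: an encoder maps the generator's output back to the latent code, and an L1 term ties the recovered latent to the sampled one. The sketch below shows only that consistency term, not the full set of training objectives.

```python
# Toy sketch of the latent-output invertibility term that discourages mode collapse.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(16 + latent_dim, 64), nn.ReLU(), nn.Linear(64, 16))   # toy generator
E = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, latent_dim))        # toy encoder

x = torch.randn(4, 16)                        # conditioning input (flattened for simplicity)
z = torch.randn(4, latent_dim)                # randomly sampled latent code
y = G(torch.cat([x, z], dim=1))               # generated output
z_rec = E(y)                                  # recover the latent from the output
latent_recon_loss = (z - z_rec).abs().mean()  # encourages the latent-to-output map to be invertible
```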
In this work, we are dedicated to text-guided image generation and propose a novel framework, i.e., CLIP2GAN, by leveraging CLIP model and StyleGAN. The key idea of our CLIP2GAN is to bridge the output feature embedding space of CLIP and the input latent space of StyleGAN, which is realized by introducing a mapping network. In the training stage, we encode an image with CLIP and map the output feature to a latent code, which is further used to reconstruct the image. In this way, the mapping network is optimized in a self-supervised learning way. In the inference stage, since CLIP can embed both image and text into a shared feature embedding space, we replace CLIP image encoder in the training architecture with CLIP text encoder, while keeping the following mapping network as well as StyleGAN model. As a result, we can flexibly input a text description to generate an image. Moreover, by simply adding mapped text features of an attribute to a mapped CLIP image feature, we can effectively edit the attribute to the image. Extensive experiments demonstrate the superior performance of our proposed CLIP2GAN compared to previous methods.
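The bridge described above is a small trainable mapping network between CLIP's embedding space and StyleGAN's latent space, trained by reconstructing the input image while both pretrained models stay frozen. In the sketch below, `clip_encode_image`, `clip_encode_text`, `stylegan_synthesis`, and `perceptual_loss` are placeholders for those frozen components, not real library calls.

```python
# Sketch of a CLIP-to-StyleGAN mapping network trained self-supervised by reconstruction.
import torch
import torch.nn as nn

CLIP_DIM, W_DIM = 512, 512

mapper = nn.Sequential(                     # the only trainable component
    nn.Linear(CLIP_DIM, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, W_DIM),
)

def training_step(image, clip_encode_image, stylegan_synthesis, perceptual_loss):
    with torch.no_grad():
        e = clip_encode_image(image)        # frozen CLIP image embedding
    w = mapper(e)                           # map to a StyleGAN latent code
    recon = stylegan_synthesis(w)           # frozen StyleGAN decoder
    return perceptual_loss(recon, image)    # self-supervised reconstruction objective

# At inference, the image encoder is swapped for CLIP's text encoder:
#   w = mapper(clip_encode_text("a smiling woman with glasses"))
#   image = stylegan_synthesis(w)
```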
Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale contrastive language-image pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains without having to collect even a single image. We show that, through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications would be difficult or outright impossible to achieve with existing methods. We conduct extensive experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
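The abstract describes text-only guidance for shifting a generator to a new domain but does not spell out the objective. The sketch below shows one simple way such guidance can be wired up, fine-tuning the generator to increase CLIP similarity between its samples and a target prompt; it illustrates the general idea only and is not claimed to be the paper's training loss. `clip_encode_image` and `clip_encode_text` stand in for a frozen pretrained CLIP model (the text encoder is assumed to return a (1, D) embedding).

```python
# Hedged sketch of text-guided generator fine-tuning with a global CLIP similarity loss.
import torch
import torch.nn.functional as F

def text_guided_step(generator, optimizer, clip_encode_image, clip_encode_text,
                     prompt="a sketch of a face", batch=8, z_dim=512):
    z = torch.randn(batch, z_dim)
    images = generator(z)                          # samples from the trainable generator
    img_emb = F.normalize(clip_encode_image(images), dim=-1)
    with torch.no_grad():
        txt_emb = F.normalize(clip_encode_text(prompt), dim=-1)   # shape (1, D)
    loss = (1.0 - img_emb @ txt_emb.t()).mean()    # pull samples toward the prompt in CLIP space
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```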
In this work, we propose TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions. The proposed method consists of three components: StyleGAN inversion module, visual-linguistic similarity learning, and instance-level optimization. The inversion module maps real images to the latent space of a well-trained StyleGAN. The visual-linguistic similarity learns the text-image matching by mapping the image and text into a common embedding space. The instance-level optimization is for identity preservation in manipulation. Our model can produce diverse and high-quality images with an unprecedented resolution of 1024². Using a control mechanism based on style-mixing, our TediGAN inherently supports image synthesis with multi-modal inputs, such as sketches or semantic labels, with or without instance guidance. To facilitate text-guided multimodal synthesis, we propose the Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images and corresponding semantic segmentation map, sketch, and textual descriptions. Extensive experiments on the introduced dataset demonstrate the superior performance of our proposed method. Code and data are available at https://github.com/weihaox/TediGAN.