Adaptation of Generative Adversarial Networks (GANs) aims to transfer a pre-trained GAN to a given domain with limited training data. In this paper, we focus on the one-shot case, which is more challenging and rarely explored in previous works. We argue that the adaptation from a source domain to a target domain can be decoupled into two parts: the transfer of global style (such as texture and color), and the emergence of new entities that do not belong to the source domain. While prior works mainly focus on style transfer, we propose a novel and concise framework (code: https://github.com/thevoidname/generalized-onerized-one-one-shot-gan-adaption) to address the generalized one-shot adaptation task for both style and entity transfer, in which a reference image and its binary entity mask are provided. Our core objective is to constrain the gap between the internal distributions of the reference and the synthesis via the sliced Wasserstein distance. To better achieve this, style fixation is first used to roughly obtain the exemplary style, and an auxiliary network is introduced into the original generator to disentangle entity and style transfer. In addition, to realize cross-domain correspondence, we propose a variational Laplacian regularization to constrain the smoothness of the adapted generator. Both quantitative and qualitative experiments demonstrate the effectiveness of our method in various scenarios.
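Since the abstract above centers on constraining the gap between internal feature distributions with the sliced Wasserstein distance, here is a minimal, hedged sketch of that distance in PyTorch; the feature shapes, the number of projections, and the equal sample counts are illustrative assumptions rather than the paper's exact setup.

```python
import torch

def sliced_wasserstein(feats_a: torch.Tensor, feats_b: torch.Tensor,
                       num_projections: int = 128) -> torch.Tensor:
    """Approximate SWD between two (N, C) feature sets of equal size N."""
    c = feats_a.shape[1]
    # Random unit directions used for 1-D projections.
    directions = torch.randn(c, num_projections, device=feats_a.device)
    directions = directions / directions.norm(dim=0, keepdim=True)
    # Project both sets and sort along the sample axis: sorting solves the
    # 1-D optimal transport problem in closed form.
    proj_a = (feats_a @ directions).sort(dim=0).values
    proj_b = (feats_b @ directions).sort(dim=0).values
    return (proj_a - proj_b).abs().mean()
```

In a one-shot setting, `feats_a` and `feats_b` could be flattened intermediate generator features of the reference image and of a synthesized image, respectively.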
We present a new method for one-shot domain adaptation. The input to our method is a trained GAN that can produce images in domain A and a single reference image I_B from domain B. The proposed algorithm can translate any output of the trained GAN from domain A to domain B. There are two main advantages of our method compared to the current state of the art: first, our solution achieves higher visual quality, e.g. by noticeably reducing overfitting. Second, our solution allows for more degrees of freedom in controlling the domain gap, i.e. which aspects of the image I_B are used to define domain B. Technically, we realize the new method by building on a pre-trained StyleGAN generator as the GAN and a pre-trained CLIP model for representing the domain gap. We propose several new regularizers for controlling the domain gap, which optimize the weights of the pre-trained StyleGAN generator to output images in domain B instead of domain A. The regularizers prevent the optimization from taking on too many attributes of the single reference image. Our results show significant visual improvements over the state of the art as well as multiple applications that highlight the improved control.
One-shot generative domain adaptation aims to transfer a pre-trained generator to a new domain using only one reference image. However, it remains very challenging for the adapted generator (i) to generate diverse images inherited from the pre-trained generator while (ii) faithfully acquiring the domain-specific attributes and styles of the reference image. In this paper, we propose a novel one-shot generative domain adaptation method, i.e., DiFa, for diverse generation and faithful adaptation. For global-level adaptation, we leverage the difference between the CLIP embedding of the reference image and the mean embedding of source images to constrain the target generator. For local-level adaptation, we introduce an attentive style loss that aligns each intermediate token of an adapted image with the corresponding token of the reference image. To facilitate diverse generation, a selective cross-domain consistency is introduced to select and retain domain-sharing attributes in the editing latent $\mathcal{W}+$ space, so as to inherit the diversity of the pre-trained generator. Extensive experiments show that our method outperforms the state of the art both quantitatively and qualitatively, especially for cases with large domain gaps. Moreover, our DiFa can be easily extended to zero-shot generative domain adaptation with appealing results. Code is available at https://github.com/1170300521/difa.
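As a rough illustration of the global-level idea described above (using the offset between the reference's CLIP embedding and the mean embedding of source images to steer the adapted generator), here is a hedged sketch of an image-based CLIP direction loss; `clip_model` is assumed to be an OpenAI CLIP model loaded via `clip.load`, and the one-to-one pairing of source and adapted images is an assumption for brevity.

```python
import torch
import torch.nn.functional as F

def clip_direction_loss(clip_model, src_images, adapted_images, ref_image):
    """All inputs are CLIP-preprocessed images; src/adapted are paired batches (N,3,224,224)."""
    with torch.no_grad():
        src_emb = clip_model.encode_image(src_images).float()
        ref_emb = clip_model.encode_image(ref_image).float()
    adapted_emb = clip_model.encode_image(adapted_images).float()

    # Target direction: from the mean source embedding to the reference embedding.
    target_dir = F.normalize(ref_emb.mean(0) - src_emb.mean(0), dim=-1)
    # Per-sample direction: from each source image to its adapted counterpart.
    sample_dir = F.normalize(adapted_emb - src_emb, dim=-1)
    # Encourage each sample's movement in CLIP space to align with the target direction.
    return (1.0 - sample_dir @ target_dir).mean()
```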
Can a generative model be trained to produce images from a specific domain, guided only by a text prompt, without seeing any image? In other words: can an image generator be trained "blindly"? Leveraging the semantic power of large-scale Contrastive Language-Image Pre-training (CLIP) models, we present a text-driven method that allows shifting a generative model to new domains without having to collect a single image. We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes. Notably, many of these modifications are difficult or outright impossible to reach with existing methods. We conduct an extensive set of experiments and comparisons across a wide range of domains. These demonstrate the effectiveness of our approach and show that our shifted models maintain the latent-space properties that make generative models appealing for downstream tasks.
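To make the text-driven setting above concrete, here is a small sketch of computing a text-defined domain direction in CLIP space; the prompts, the ViT-B/32 backbone, and how the direction is later used during fine-tuning are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    # Source-domain and target-domain prompts (illustrative).
    tokens = clip.tokenize(["photo", "sketch"]).to(device)
    source_emb, target_emb = clip_model.encode_text(tokens).float()
    text_direction = F.normalize(target_emb - source_emb, dim=-1)

# During fine-tuning, the CLIP-space offset of each generated image (adapted
# generator output minus frozen source generator output) would be encouraged
# to align with `text_direction`.
```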
Our goal with this survey is to provide an overview of the state-of-the-art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain referring to as few as just one target image. The main challenge is that, under limited supervision, it is extremely difficult to synthesize photo-realistic and highly diverse images while acquiring representative characters of the target. Different from existing approaches that adopt the vanilla fine-tuning strategy, we import two lightweight modules into the generator and the discriminator, respectively. Concretely, we introduce an attribute adaptor into the generator while freezing its original parameters, through which the generator can largely reuse the existing knowledge and hence maintain the synthesis quality and diversity. We then equip the well-learned discriminator backbone with an attribute classifier to ensure that the generator captures the corresponding characters from the reference. Furthermore, considering the poor diversity of the training data (i.e., as few as only one image), we propose to also constrain the diversity of the generative domain during training, alleviating the optimization difficulty. Our approach brings appealing results under various settings, substantially surpassing state-of-the-art alternatives, especially in terms of synthesis diversity. Noticeably, our method works well even with large domain gaps, and robustly converges within a few minutes for each experiment.
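The following is a hedged sketch of the "freeze the pre-trained generator, train only a lightweight module" recipe described above; the adaptor here is a simple per-channel scale-and-shift on an intermediate feature map, which is an assumption for illustration, not the paper's exact module design.

```python
import torch
import torch.nn as nn

class AttributeAdaptor(nn.Module):
    """Lightweight per-channel modulation applied to a frozen generator's features."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return feat * self.scale + self.shift

# Hypothetical usage: freeze the generator, optimize only the adaptor.
# for p in generator.parameters():
#     p.requires_grad_(False)
# adaptor = AttributeAdaptor(channels=512)
# optimizer = torch.optim.Adam(adaptor.parameters(), lr=1e-3)
```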
This paper introduces DCT-Net, a novel image translation architecture for few-shot portrait stylization. Given limited style exemplars ($\sim$100), the new architecture can produce high-quality style transfer results, with advanced ability to synthesize high-fidelity contents and strong generality to handle complicated scenes (e.g., occlusions and accessories). Moreover, it enables full-body image translation via one elegant evaluation network trained on partial observations (i.e., stylized heads). Few-shot learning based style transfer is challenging since the learned model can easily become overfitted to the target domain, due to the biased distribution formed by only a few training examples. This paper aims to handle the challenge by adopting the key idea of "calibrate first, translate later" and exploring an augmented global structure with locally-focused translation. Specifically, the proposed DCT-Net consists of three modules: a content adapter that borrows the powerful prior from source photos to calibrate the content distribution of target samples; a geometry expansion module that uses affine transformations to release spatially semantic constraints; and a texture translation module that leverages samples produced by the calibrated distribution to learn a fine-grained conversion. Experimental results demonstrate the proposed method's superiority in head stylization and its effectiveness for full-image translation with adaptive deformations.
Domain adaptation of GANs is a problem of fine-tuning the state-of-the-art GAN models (e.g. StyleGAN) pretrained on a large dataset to a specific domain with few samples (e.g. painting faces, sketches, etc.). While there are a great number of methods that tackle this problem in different ways, there are still many important questions that remain unanswered. In this paper, we provide a systematic and in-depth analysis of the domain adaptation problem of GANs, focusing on the StyleGAN model. First, we perform a detailed exploration of the most important parts of StyleGAN that are responsible for adapting the generator to a new domain depending on the similarity between the source and target domains. In particular, we show that affine layers of StyleGAN can be sufficient for fine-tuning to similar domains. Second, inspired by these findings, we investigate StyleSpace to utilize it for domain adaptation. We show that there exist directions in the StyleSpace that can adapt StyleGAN to new domains. Further, we examine these directions and discover their many surprising properties. Finally, we leverage our analysis and findings to deliver practical improvements and applications in such standard tasks as image-to-image translation and cross-domain morphing.
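As a concrete reading of the finding that the affine (style-modulation) layers can suffice for adapting to similar domains, here is a hedged sketch that restricts fine-tuning to parameters whose names contain "affine", following the naming used in common StyleGAN2 PyTorch ports; the name filter and learning rate are assumptions.

```python
import torch

def affine_only_optimizer(generator: torch.nn.Module, lr: float = 2e-3):
    """Freeze everything except the style-modulation (affine) layers and return an optimizer."""
    trainable = []
    for name, param in generator.named_parameters():
        if "affine" in name:  # adjust the filter to your StyleGAN implementation
            param.requires_grad_(True)
            trainable.append(param)
        else:
            param.requires_grad_(False)
    assert trainable, "no parameters matched; check the layer naming convention"
    return torch.optim.Adam(trainable, lr=lr)
```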
Inverting a Generative Adversarial Network (GAN) facilitates a wide range of image editing tasks using pre-trained generators. Existing methods typically employ the latent space of GANs as the inversion space, yet observe insufficient recovery of spatial details. In this work, we propose to involve the padding space of the generator to complement the latent space with spatial information. Concretely, we replace the constant padding (e.g., usually zeros) used in convolution layers with instance-aware coefficients. In this way, the inductive bias assumed in the pre-trained model can be appropriately adapted to fit each individual image. Through learning a carefully designed encoder, we manage to improve the inversion quality both qualitatively and quantitatively, outperforming existing alternatives. We then demonstrate that such a space extension barely affects the native GAN manifold, hence we can still reuse the prior knowledge learned by GANs for various downstream applications. Beyond the editing tasks explored in prior arts, our approach allows more flexible image manipulation, such as separate control of the face contour and facial details, and enables a novel editing manner in which users can customize their own manipulations highly efficiently.
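To illustrate the padding-space idea above (replacing constant zero padding with instance-aware coefficients), the sketch below wraps a convolution so that the border values come from a per-image tensor instead of zeros; how those coefficients would be predicted by an encoder is omitted, and the interface is an assumption.

```python
import torch
import torch.nn as nn

class InstancePaddedConv(nn.Module):
    """Convolution whose padding values are supplied per instance rather than being constant."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=0)

    def forward(self, x: torch.Tensor, pad_coeff: torch.Tensor) -> torch.Tensor:
        # pad_coeff: (N, C, H + 2*pad, W + 2*pad) instance-aware border values,
        # e.g. predicted by an encoder; the interior is overwritten by x.
        n, c, h, w = x.shape
        padded = pad_coeff.clone()
        padded[:, :, self.pad:self.pad + h, self.pad:self.pad + w] = x
        return self.conv(padded)
```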
Recently, large pre-trained models (e.g., BERT, StyleGAN, CLIP) have shown great knowledge transfer and generalization capability on various downstream tasks within their domains. Inspired by these efforts, in this paper we propose a unified model for open-domain image editing, focusing on color and tone adjustment of open-domain images while keeping their original content and structure. Our model learns a unified editing space that is more semantic, intuitive, and easy to manipulate than the operation space (e.g., contrast, brightness, color curves) used in many existing photo editing software. Our model belongs to the image-to-image translation framework, consisting of an image encoder and decoder, and is trained on pairs of images before and after editing to produce multimodal outputs. We show that, by inverting image pairs into the latent codes of the learned editing space, our model can be leveraged for various downstream editing tasks such as language-guided image editing, personalized editing, editing-style clustering, retrieval, etc. We extensively study the unique properties of the editing space in experiments and demonstrate superior performance on the aforementioned tasks.
We present Exe-GAN, a novel exemplar-guided facial inpainting framework using generative adversarial networks. Our approach can not only preserve the quality of the input facial image but also complete the image with exemplar-like facial attributes. We achieve this by simultaneously leveraging the global style of the input image, the stochastic style generated from a random latent code, and the exemplar style of the exemplar image. We introduce a novel attribute similarity metric to encourage the network to learn the style of facial attributes from the exemplar in a self-supervised way. To guarantee natural transitions across region boundaries, we introduce a novel spatial-variant gradient backpropagation technique that adjusts the loss gradients based on spatial location. Extensive evaluations and practical applications on the public CelebA-HQ and FFHQ datasets validate the superiority of Exe-GAN in terms of the visual quality of facial inpainting.
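As one hedged way to picture spatially adjusting loss gradients near the inpainted boundary, the sketch below scales the per-pixel loss by a location-dependent weight map before reduction, so the backward pass is re-weighted by position; the blur-based boundary weighting is an illustrative choice and not the paper's exact technique.

```python
import torch
import torch.nn.functional as F

def spatially_weighted_l1(pred, target, mask, kernel: int = 11):
    """mask: (N,1,H,W) with 1 inside the hole; transition regions get larger weights."""
    per_pixel = (pred - target).abs()
    # Blurring the binary mask yields intermediate values only around the hole border.
    blurred = F.avg_pool2d(mask, kernel, stride=1, padding=kernel // 2)
    boundary = blurred * (1.0 - blurred) * 4.0   # peaks at the mask boundary
    weights = 1.0 + boundary                     # up-weight the transition band
    return (per_pixel * weights).mean()
```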
Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details, and temporal inconsistency. In this work, we investigate the challenging problem of controllable high-resolution portrait video style transfer by introducing a novel VToonify framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder, to better preserve frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models, extending them to video toonification, and inherits the appealing features of these models for flexible style control over color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally consistent artistic portrait videos with flexible style controls.
We present a novel image inversion framework and a training pipeline to achieve high-fidelity image inversion with high-quality attribute editing. Inverting real images into StyleGAN's latent space is an extensively studied problem, yet the trade-off between the image reconstruction fidelity and image editing quality remains an open challenge. The low-rate latent spaces are limited in their expressive power for high-fidelity reconstruction. On the other hand, high-rate latent spaces result in degradation in editing quality. In this work, to achieve high-fidelity inversion, we learn residual features in higher latent codes that lower latent codes were not able to encode. This enables preserving image details in reconstruction. To achieve high-quality editing, we learn how to transform the residual features for adapting to manipulations in latent codes. We train the framework to extract residual features and transform them via a novel architecture pipeline and cycle consistency losses. We run extensive experiments and compare our method with state-of-the-art inversion methods. Qualitative metrics and visual comparisons show significant improvements. Code: https://github.com/hamzapehlivan/StyleRes
In this work, we are dedicated to text-guided image generation and propose a novel framework, i.e., CLIP2GAN, by leveraging CLIP model and StyleGAN. The key idea of our CLIP2GAN is to bridge the output feature embedding space of CLIP and the input latent space of StyleGAN, which is realized by introducing a mapping network. In the training stage, we encode an image with CLIP and map the output feature to a latent code, which is further used to reconstruct the image. In this way, the mapping network is optimized in a self-supervised learning way. In the inference stage, since CLIP can embed both image and text into a shared feature embedding space, we replace CLIP image encoder in the training architecture with CLIP text encoder, while keeping the following mapping network as well as StyleGAN model. As a result, we can flexibly input a text description to generate an image. Moreover, by simply adding mapped text features of an attribute to a mapped CLIP image feature, we can effectively edit the attribute to the image. Extensive experiments demonstrate the superior performance of our proposed CLIP2GAN compared to previous methods.
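A minimal sketch of the bridging idea described above: a small MLP maps a CLIP feature to a generator latent code, trained self-supervisedly by reconstruction, and at inference the image feature can be swapped for a text feature because CLIP embeds both modalities in a shared space. The dimensions and MLP design here are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ClipToLatentMapper(nn.Module):
    """Maps a CLIP embedding (512-d for ViT-B/32) to a generator latent code."""
    def __init__(self, clip_dim: int = 512, latent_dim: int = 512, n_layers: int = 4):
        super().__init__()
        layers = []
        for _ in range(n_layers - 1):
            layers += [nn.Linear(clip_dim, clip_dim), nn.LeakyReLU(0.2)]
        layers.append(nn.Linear(clip_dim, latent_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, clip_feature: torch.Tensor) -> torch.Tensor:
        return self.net(clip_feature)

# Hypothetical inference-time swap of the image encoder for the text encoder:
#   w = mapper(clip_model.encode_text(clip.tokenize(["a smiling face"]).to(device)).float())
#   image = stylegan_generator(w)
```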
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or leave users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) Dense Mapping Network (DMN) and 2) Editing Behavior Simulated Training (EBST). Specifically, DMN learns style mapping between a free-form user modified mask and a target image, enabling diverse generation results. EBST models the user editing behavior on the source mask, making the overall framework more robust to various manipulated inputs. Specifically, it introduces dual-editing consistency as the auxiliary supervision signal. To facilitate extensive studies, we construct a large-scale high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks: attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models, and dataset are available at https://github.com/switchablenorms/CelebAMask-HQ.
Generative Adversarial Networks (GANs) have received great attention due to their simple yet effective training mechanism and superior image generation quality. With the capability of generating photo-realistic high-resolution (e.g., $1024\times1024$) images, recent GAN models have greatly narrowed the gap between the generated images and the real ones. Therefore, many recent works show emerging interest in taking advantage of pre-trained GAN models by exploiting the well-behaved latent space and the learned GAN priors. In this paper, we briefly review recent progress on leveraging pre-trained large-scale GAN models from three aspects, i.e., 1) the training of large-scale generative adversarial networks, 2) exploring and understanding the pre-trained GAN models, and 3) leveraging these models for subsequent tasks such as image restoration and editing. More information about relevant methods and repositories can be found at https://github.com/csmliu/pretretaining-gans.
Generating images from a single sample, as a newly developing branch of image synthesis, has attracted extensive attention. In this paper, we formulate this problem as sampling from the conditional distribution of a single image, and propose a hierarchical framework that simplifies the learning of this complicated conditional distribution through the successive learning of the distributions of structure, semantics, and texture, making the processes of learning and generation comprehensible. On this basis, we design ExSinGAN, composed of three cascaded GANs, for learning an explainable generative model from a given image, where the cascaded GANs successively model the distributions of structure, semantics, and texture. ExSinGAN is learned not only from the internal patches of the given image, as previous works did, but also from an external prior obtained via a GAN inversion technique. Benefiting from the appropriate combination of internal and external information, ExSinGAN has more powerful generation capability and competitive generalization ability for image manipulation tasks compared with previous works.
最近,GaN反演方法与对比语言 - 图像预先绘制(CLIP)相结合,可以通过文本提示引导零拍摄图像操作。然而,由于GaN反转能力有限,它们对不同实物的不同实物的应用仍然困难。具体地,这些方法通常在与训练数据相比,改变对象标识或产生不需要的图像伪影的比较与新颖姿势,视图和高度可变内容重建具有新颖姿势,视图和高度可变内容的困难。为了减轻这些问题并实现真实图像的忠实操纵,我们提出了一种新的方法,Dumbused Clip,其使用扩散模型执行文本驱动的图像操纵。基于近期扩散模型的完整反转能力和高质量的图像生成功率,即使在看不见的域之间也成功地执行零拍摄图像操作。此外,我们提出了一种新颖的噪声组合方法,允许简单的多属性操作。与现有基线相比,广泛的实验和人类评估确认了我们的方法的稳健和卓越的操纵性能。
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies over extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures. The problem is ill-posed: innumerable compositions of shape and texture could be rendered to the current image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs but proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures the coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.