Facial expression recognition (FER) plays a significant role in ubiquitous computer vision applications. We revisit this problem from a new perspective, asking whether the image generation process can yield useful representations that improve FER performance, and propose a novel generative method based on the image inversion mechanism for the FER task, termed Inversion FER (IFER). In particular, we devise a novel Adversarial Style Inversion Transformer (ASIT) for IFER to comprehensively extract features of generated facial images. In addition, ASIT is equipped with an image inversion discriminator that measures the cosine similarity of semantic features between source and generated images, constrained by a distribution alignment loss. Finally, we introduce a feature modulation module to fuse the structural code and latent codes from ASIT for the downstream FER task. We extensively evaluate ASIT on facial datasets such as FFHQ and CelebA-HQ, showing that our approach achieves state-of-the-art facial inversion performance. IFER also achieves competitive results on facial expression recognition datasets such as RAF-DB, SFEW and AffectNet. The code and models are available at https://github.com/Talented-Q/IFER-master.
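A minimal sketch of the kind of cosine-similarity objective such an image-inversion discriminator could apply to semantic features; the feature dimensionality and the helper name `inversion_similarity_loss` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def inversion_similarity_loss(feat_src: torch.Tensor, feat_gen: torch.Tensor) -> torch.Tensor:
    """Encourage semantic features of source and generated images to align.

    feat_src, feat_gen: (batch, dim) feature vectors from a shared
    (hypothetical) backbone. Maximizing cosine similarity == minimizing 1 - cos.
    """
    cos = F.cosine_similarity(feat_src, feat_gen, dim=1)  # (batch,)
    return (1.0 - cos).mean()

# Toy usage with random features standing in for backbone outputs.
src = torch.randn(4, 512)
gen = torch.randn(4, 512)
print(inversion_similarity_loss(src, gen))
```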
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, and is a domain-specific image restoration problem in low-level computer vision. Early face restoration methods mainly use statistical priors and degradation models, which struggle to meet the requirements of real-world applications. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works systematically study deep learning-based face restoration methods. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore the techniques developed for FR, covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation of representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
We present a new perspective on image synthesis by regarding this task as a visual token generation problem. Different from existing paradigms that directly synthesize a full image from a single input (e.g., a latent code), the new formulation enables flexible local manipulation of different image regions, which makes it possible to learn content-aware and fine-grained style control for image synthesis. Specifically, it takes as input a sequence of latent tokens to predict the visual tokens for synthesizing an image. From this perspective, we propose a token-based generator (i.e., TokenGAN). In particular, TokenGAN takes two semantically different types of visual tokens as input, i.e., learned constant content tokens and style tokens from the latent space. Given a sequence of style tokens, TokenGAN is able to control image synthesis by assigning the styles to the content tokens with a Transformer. We conduct extensive experiments and show that the proposed TokenGAN achieves state-of-the-art results on several widely used image synthesis benchmarks, including FFHQ and LSUN Church, at different resolutions. In particular, the generator is able to synthesize high-fidelity images at 1024x1024 resolution, entirely free of convolutions.
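One plausible way a Transformer can assign styles to learned content tokens is cross-attention in which the content tokens query the style tokens. The sketch below assumes token counts, dimensions, and module names; it is not TokenGAN's actual code.

```python
import torch
import torch.nn as nn

class StyleToContentAttention(nn.Module):
    """Cross-attention that lets learned constant content tokens query style
    tokens, loosely mirroring the token-based style assignment described
    above. All dimensions and names are illustrative assumptions."""
    def __init__(self, dim: int = 256, num_heads: int = 4, num_content: int = 64):
        super().__init__()
        # Learned constant content tokens shared across images.
        self.content = nn.Parameter(torch.randn(1, num_content, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, style_tokens: torch.Tensor) -> torch.Tensor:
        # style_tokens: (batch, num_style, dim), e.g. mapped from a latent code.
        b = style_tokens.size(0)
        content = self.content.expand(b, -1, -1)
        styled, _ = self.attn(query=content, key=style_tokens, value=style_tokens)
        return self.norm(content + styled)  # (batch, num_content, dim)

tokens = StyleToContentAttention()(torch.randn(2, 16, 256))
print(tokens.shape)  # torch.Size([2, 64, 256])
```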
Inverting a Generative Adversarial Network (GAN) allows a wide range of image editing tasks to be performed with a pre-trained generator. Existing methods typically adopt the latent space of the GAN as the inversion space, yet observe insufficient recovery of spatial details. In this work, we propose to involve the padding space of the generator to complement the latent space with spatial information. Concretely, we replace the constant padding (e.g., usually zeros) used in convolution layers with instance-aware coefficients. In this way, the inductive bias assumed in the pre-trained model can be appropriately adapted to fit each individual image. By learning a carefully designed encoder, we manage to improve inversion quality both qualitatively and quantitatively, outperforming existing alternatives. We then demonstrate that such a space extension barely affects the native GAN manifold, so that we can still reuse the prior knowledge learned by GANs for various downstream applications. Beyond the editing tasks explored in prior art, our approach allows more flexible image manipulation, such as separate control of face contour and facial details, and enables a novel editing manner where users can customize their own manipulations efficiently.
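A toy rendering of the padding-space idea: a convolution whose border padding is an instance-aware coefficient predicted from the feature map instead of constant zeros. The coefficient predictor and the shapes are deliberate simplifications for illustration, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAwarePadConv(nn.Module):
    """3x3 conv whose border padding is a per-instance learned value rather
    than the usual constant zeros."""
    def __init__(self, cin: int, cout: int):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, kernel_size=3, padding=0)
        # Predict one padding coefficient per channel from global statistics.
        self.coef = nn.Linear(cin, cin)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, cin, h, w)
        stats = x.mean(dim=(2, 3))                     # (batch, cin)
        pad_val = self.coef(stats)[:, :, None, None]   # (batch, cin, 1, 1)
        # Zero-pad first, then overwrite the border with the learned value.
        xp = F.pad(x, (1, 1, 1, 1), value=0.0)
        mask = torch.zeros_like(xp)
        mask[:, :, [0, -1], :] = 1.0
        mask[:, :, :, [0, -1]] = 1.0
        xp = xp * (1 - mask) + pad_val * mask
        return self.conv(xp)

y = InstanceAwarePadConv(8, 16)(torch.randn(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 16, 32, 32])
```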
In recent years, facial semantic guidance (including facial landmarks, facial heatmaps, and facial parsing maps) and facial Generative Adversarial Networks (GANs) have been widely used in blind face restoration (BFR). Although existing BFR methods achieve good performance in ordinary cases, these solutions have limited resilience when applied to images with severe degradation and pose variations (e.g., looking right, looking left, laughing, etc. in real-world scenarios). In this work, we propose a well-designed blind face restoration network with a generative facial prior. The proposed network mainly consists of an asymmetric codec and a StyleGAN2 prior network. In the asymmetric codec, we adopt a mixed multi-path residual block (MMRB) to gradually extract the weak texture features of the input image, which allows better preservation of the original facial features and avoids excessive hallucination. The MMRB can also be plugged into other networks. Moreover, thanks to the rich and diverse facial priors of the StyleGAN2 model, we adopt a fine-tuning approach to flexibly restore natural and realistic facial details. In addition, a novel self-supervised training strategy is specially designed for the face restoration task to bring the distribution closer to the target and maintain training stability. Extensive experiments on synthetic and real-world datasets show that our model achieves superior performance on face restoration and face super-resolution tasks.
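A generic multi-path residual block along the lines described above: parallel convolutional branches with different receptive fields, fused and added back to the input. This is one plausible reading of "mixed multi-path residual block", not the authors' exact design.

```python
import torch
import torch.nn as nn

class MultiPathResidualBlock(nn.Module):
    """Parallel 1x1/3x3/5x5 branches, concatenated, fused by a 1x1 conv,
    and added back to the input through a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, 1)
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        paths = [self.act(b(x)) for b in (self.branch1, self.branch3, self.branch5)]
        return x + self.fuse(torch.cat(paths, dim=1))

out = MultiPathResidualBlock(32)(torch.randn(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```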
We present EXE-GAN, a novel exemplar-guided facial inpainting framework using generative adversarial networks. Our approach not only preserves the quality of the input facial image but also completes the image with exemplar-like facial attributes. We achieve this by simultaneously leveraging the global style of the input image, a stochastic style generated from a random latent code, and the exemplar style of the exemplar image. We introduce a novel attribute similarity metric to encourage the network to learn the style of facial attributes from the exemplar in a self-supervised manner. To guarantee natural transitions across region boundaries, we introduce a novel spatially variant gradient backpropagation technique to adjust the loss gradient according to spatial location. Extensive evaluations and practical applications on the public CelebA-HQ and FFHQ datasets validate the superiority of EXE-GAN in terms of the visual quality of facial inpainting.
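A minimal sketch of how spatially variant gradient backpropagation might be realized: a custom autograd function that acts as the identity in the forward pass but rescales the loss gradient with a per-pixel weight map in the backward pass. The weighting scheme is an assumption for illustration.

```python
import torch

class SpatialGradScale(torch.autograd.Function):
    """Identity forward; backward scales the incoming gradient by a
    per-pixel weight map, so different spatial locations receive
    differently weighted learning signals."""
    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(weight)
        return x

    @staticmethod
    def backward(ctx, grad_out):
        (weight,) = ctx.saved_tensors
        return grad_out * weight, None  # no gradient w.r.t. the weight map

img = torch.randn(1, 3, 8, 8, requires_grad=True)
# E.g., down-weight gradients inside the hole, full weight near boundaries.
w = torch.rand(1, 1, 8, 8)
out = SpatialGradScale.apply(img, w)
out.sum().backward()
print(torch.allclose(img.grad, w.expand_as(img)))  # True: grad scaled by w
```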
In this work, we propose TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions. The proposed method consists of three components: a StyleGAN inversion module, visual-linguistic similarity learning, and instance-level optimization. The inversion module maps real images to the latent space of a well-trained StyleGAN. The visual-linguistic similarity module learns text-image matching by mapping the image and text into a common embedding space. The instance-level optimization is for identity preservation in manipulation. Our model can produce diverse and high-quality images with an unprecedented resolution of 1024². Using a control mechanism based on style mixing, our TediGAN inherently supports image synthesis with multi-modal inputs, such as sketches or semantic labels, with or without instance guidance. To facilitate text-guided multi-modal synthesis, we propose Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images and corresponding semantic segmentation maps, sketches, and textual descriptions. Extensive experiments on the introduced dataset demonstrate the superior performance of our proposed method. Code and data are available at https://github.com/weihaox/TediGAN.
Image inpainting seeks a semantically consistent way to recover a corrupted image based on its unmasked content. Previous approaches typically reuse a well-trained GAN and then perform GAN inversion to generate realistic patches for the missing holes. However, ignoring the hard constraints in these algorithms may yield a gap between GAN inversion and image inpainting. To address this problem, in this paper we devise a novel GAN inversion model for image inpainting, dubbed InvertFill, which mainly consists of an encoder with a pre-modulation module and a GAN generator with an F&W+ latent space. In the encoder, the pre-modulation network leverages multi-scale structures to encode more discriminative semantics into the style vectors. To bridge the gap between GAN inversion and image inpainting, the F&W+ latent space is proposed to eliminate glaring color discrepancies and semantic inconsistencies. To reconstruct faithful and photorealistic images, a simple yet effective soft-update mean latent module is designed to capture more diverse in-domain patterns for synthesizing high-fidelity textures for large corruptions. Comprehensive experiments on four challenging datasets, including Places2, CelebA-HQ, MetFaces, and Scenery, demonstrate that our InvertFill outperforms advanced approaches qualitatively and quantitatively and supports the completion of out-of-domain images.
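One plausible reading of the soft-update mean latent module is an exponential moving average over latent codes observed during training; the momentum value and shapes below are assumptions.

```python
import torch

def soft_update_mean_latent(w_avg: torch.Tensor, w_batch: torch.Tensor,
                            momentum: float = 0.995) -> torch.Tensor:
    """Soft (EMA-style) update of a running mean latent code.

    w_avg: (dim,) current running mean; w_batch: (batch, dim) latents from
    the current step. Higher momentum keeps the mean more stable.
    """
    return momentum * w_avg + (1.0 - momentum) * w_batch.mean(dim=0)

w_avg = torch.zeros(512)
for _ in range(10):                      # e.g., one update per training step
    w_avg = soft_update_mean_latent(w_avg, torch.randn(8, 512))
print(w_avg.norm())
```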
Given its wide range of applications, many attempts have been made at the task of human face swapping. Although existing methods mostly rely on tedious network and loss designs, they still struggle with the information balance between the source and target faces and tend to produce visible artifacts. In this work, we introduce a concise and effective framework named StyleSwap. Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping, so that the generator's advantages can be adopted for optimizing identity similarity. We identify that, with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both the source and the target. Additionally, inspired by the ToRGB layers, a swapping-driven mask branch is further devised to improve information blending. Furthermore, the advantages of StyleGAN inversion can be adopted: in particular, a swapping-guided ID inversion strategy is proposed to optimize identity similarity. Extensive experiments validate that our framework generates high-quality face swapping results, outperforming state-of-the-art methods both qualitatively and quantitatively.
In this work, we are dedicated to text-guided image generation and propose a novel framework, i.e., CLIP2GAN, leveraging the CLIP model and StyleGAN. The key idea of our CLIP2GAN is to bridge the output feature embedding space of CLIP and the input latent space of StyleGAN, which is realized by introducing a mapping network. In the training stage, we encode an image with CLIP and map the output feature to a latent code, which is then used to reconstruct the image. In this way, the mapping network is optimized in a self-supervised manner. In the inference stage, since CLIP can embed both images and text into a shared feature embedding space, we replace the CLIP image encoder in the training architecture with the CLIP text encoder, while keeping the subsequent mapping network as well as the StyleGAN model. As a result, we can flexibly input a text description to generate an image. Moreover, by simply adding the mapped text features of an attribute to a mapped CLIP image feature, we can effectively edit that attribute in the image. Extensive experiments demonstrate the superior performance of our proposed CLIP2GAN compared to previous methods.
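A minimal sketch of the described bridge: an MLP mapping network from a CLIP-style embedding to a StyleGAN-style latent, with the attribute edit performed by adding a mapped text feature, as stated above. The embedding dimension, layer sizes, and the random stand-ins for CLIP outputs are assumptions.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """MLP bridging a CLIP-style embedding (512-d here, assumed) to a
    StyleGAN-style latent code; layer sizes are illustrative."""
    def __init__(self, clip_dim: int = 512, w_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(clip_dim, 1024), nn.ReLU(),
            nn.Linear(1024, w_dim),
        )

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        return self.net(e)

mapper = MappingNetwork()
img_emb = torch.randn(1, 512)   # stand-in for a CLIP image embedding
txt_emb = torch.randn(1, 512)   # stand-in for a CLIP embedding of "smiling"
w = mapper(img_emb)             # latent code reconstructing the image
w_edit = w + mapper(txt_emb)    # attribute edit by adding a mapped text feature
print(w_edit.shape)             # feed w / w_edit to a StyleGAN generator
```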
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradations and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides, at no extra cost, an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
Although generative facial priors and geometric priors have recently demonstrated high-quality results for blind face restoration, producing fine-grained facial details faithful to the inputs remains a challenging problem. Motivated by classical dictionary-based methods and recent vector quantization (VQ) techniques, we propose a VQ-based face restoration method, VQFR. VQFR takes advantage of high-quality low-level feature banks extracted from high-quality faces and can thus help recover realistic facial details. However, the simple application of the VQ codebook cannot achieve good results with faithful details and identity preservation. Therefore, we further introduce two special network designs. 1) We first investigate the compression patch size in the VQ codebook and find that a VQ codebook designed with an appropriate compression patch size is crucial for balancing quality and fidelity. 2) To further fuse low-level features from the input without "contaminating" the realistic details generated from the VQ codebook, we propose a parallel decoder consisting of a texture decoder and a main decoder. The two decoders then interact through a texture warping module with deformable convolutions. Equipped with the VQ codebook as a facial detail dictionary and the parallel decoder design, the proposed VQFR can largely enhance the restored quality of facial details while maintaining fidelity relative to previous methods.
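The generic VQ mechanism the method builds on can be sketched as a nearest-neighbour codebook lookup: each feature patch is replaced by its closest codebook entry. Codebook size and dimensions are assumptions; this is not VQFR's full architecture.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Standard nearest-neighbour VQ lookup: every input feature 'patch'
    is replaced by the closest entry of a learned codebook."""
    def __init__(self, num_codes: int = 1024, dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, n_patches, dim)
        d = torch.cdist(z, self.codebook.weight[None])  # (batch, n, num_codes)
        idx = d.argmin(dim=-1)                          # nearest code per patch
        return self.codebook(idx)                       # quantized features

q = VectorQuantizer()(torch.randn(2, 16, 256))
print(q.shape)  # torch.Size([2, 16, 256])
```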
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We cover popular recent architectures and discuss the key ideas that make them work, such as inversion, latent representations, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantic editing while preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.
Real-world image manipulation has achieved remarkable progress in recent years owing to the exploration and utilization of GAN latent spaces. GAN inversion is the first step in this pipeline, aiming to faithfully map a real image to a latent code. Unfortunately, most existing GAN inversion methods fail to meet at least one of the following three requirements: high reconstruction quality, editability, and fast inference. In this study, we present a novel two-stage strategy that satisfies all requirements simultaneously. In the first stage, we train an encoder to map the input image into the StyleGAN2 $\mathcal{W}$-space, which was shown to have excellent editability but lower reconstruction quality. In the second stage, we complement the reconstruction ability of the initial stage by leveraging a series of hypernetworks to recover the missing information during inversion. These two steps complement each other, yielding high reconstruction quality thanks to the hypernetwork branch and excellent editability due to the inversion in the $\mathcal{W}$-space. Our method is entirely encoder-based, resulting in extremely fast inference. Extensive experiments on two challenging datasets demonstrate the superiority of our method.
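A toy version of the hypernetwork idea: a small head predicts a residual offset for one convolution layer's weights from an image feature, recovering information the first-stage encoder missed. The shapes and the feature source are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightOffsetHyperNet(nn.Module):
    """Tiny hypernetwork: predicts a residual weight offset for one conv
    layer from an image feature, then applies the refined convolution."""
    def __init__(self, feat_dim: int, cout: int, cin: int, k: int = 3):
        super().__init__()
        self.shape = (cout, cin, k, k)
        self.head = nn.Linear(feat_dim, cout * cin * k * k)

    def forward(self, base_weight, feat, x):
        # feat: (feat_dim,) summary of what the encoder failed to capture.
        delta = self.head(feat).view(self.shape)
        return F.conv2d(x, base_weight + delta, padding=1)  # refined layer

hyper = WeightOffsetHyperNet(feat_dim=128, cout=16, cin=8)
w0 = torch.randn(16, 8, 3, 3)                 # frozen generator weight
y = hyper(w0, torch.randn(128), torch.randn(1, 8, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```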
Recent work has shown that a variety of semantics emerge in the latent space of Generative Adversarial Networks (GANs) when they are trained to synthesize images. However, it is difficult to use these learned semantics for real image editing. A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code. However, existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space. As a result, the reconstructed image cannot well support semantic editing through varying the inverted code. To solve this problem, we propose an in-domain GAN inversion approach, which not only faithfully reconstructs the input image but also ensures that the inverted code is semantically meaningful for editing. We first learn a novel domain-guided encoder to project a given image to the native latent space of GANs. We then propose domain-regularized optimization, which involves the encoder as a regularizer to fine-tune the code produced by the encoder and better recover the target image. Extensive experiments suggest that our inversion method achieves satisfying real image reconstruction and, more importantly, facilitates various image editing tasks, significantly outperforming the state of the art.
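A compact sketch of domain-regularized optimization as described: refine the encoder-initialized code with a pixel reconstruction term plus an encoder-as-regularizer term that keeps the code in the encoder's domain. The loss weighting, optimizer settings, and toy generator/encoder stand-ins are assumptions.

```python
import torch

def domain_regularized_optimize(G, E, x, steps: int = 100, lam: float = 2.0):
    """Refine an inverted code z: pixel reconstruction plus a regularizer
    that keeps z consistent with re-encoding the reconstruction."""
    z = E(x).detach().clone().requires_grad_(True)   # encoder initialization
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(steps):
        x_rec = G(z)
        loss = ((x_rec - x) ** 2).mean() + lam * ((E(x_rec) - z) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# Toy stand-ins so the sketch runs end to end.
G = torch.nn.Linear(64, 256)   # 'generator': code -> flattened image
E = torch.nn.Linear(256, 64)   # 'encoder': image -> code
z = domain_regularized_optimize(G, E, torch.randn(1, 256))
print(z.shape)  # torch.Size([1, 64])
```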
Face super-resolution (FSR), also known as face hallucination, which aims to enhance the resolution of low-resolution (LR) face images to generate high-resolution (HR) face images, is a domain-specific image super-resolution problem. Recently, FSR has received considerable attention and witnessed dazzling advances with the development of deep learning techniques. To date, few summaries of the studies on deep learning-based FSR are available. In this survey, we present a comprehensive review of deep learning-based FSR methods in a systematic manner. First, we summarize the problem formulation of FSR and introduce popular assessment metrics and loss functions. Second, we elaborate on the facial characteristics and popular datasets used in FSR. Third, we roughly categorize existing methods according to their utilization of facial characteristics. In each category, we start with a general description of the design principles, then present an overview of representative approaches, and then discuss their pros and cons. Fourth, we evaluate the performance of some state-of-the-art methods. Fifth, joint FSR and other tasks, as well as FSR-related applications, are briefly introduced. Finally, we envision the prospects of further technological advancement in this field. A curated list of papers and resources is available at https://github.com/junjun-jiang/face-hallucination-benchmark
Figure 1. The proposed pixel2style2pixel framework can be used to solve a wide variety of image-to-image translation tasks. Here we show results of pSp on StyleGAN inversion, multi-modal conditional image synthesis, facial frontalization, inpainting and super-resolution.
In this work, we investigate the problem of face reconstruction given a facial feature representation extracted from a blackbox face recognition engine. Indeed, this is a very challenging problem in practice due to the limitations of the abstracted information from the engine. We therefore introduce a new method named Attention-based Bijective Generative Adversarial Network in a Distillation framework (DAB-GAN) to synthesize a subject's face given his/her extracted face recognition features. Given any unconstrained facial features of a subject, DAB-GAN can reconstruct his/her face in high definition. The DAB-GAN method includes a novel attention-based generative structure with a newly defined Bijective Metric Learning approach. The framework first introduces a bijective metric so that distance measurement and the metric learning process can be directly adopted in the image domain for the image reconstruction task. The information from the blackbox face recognition engine is optimally exploited using a global distillation process. An attention-based generator is then presented for a highly robust generator to synthesize realistic faces with identity preservation. We have evaluated our method on challenging face recognition databases, i.e., CelebA, LFW, AgeDB, and CFP-FP, and consistently achieved state-of-the-art results. The advancement of DAB-GAN is also demonstrated in both image realism and identity preservation properties.
Over the years, 2D GANs have achieved great success in the generation of photorealistic portraits. However, they lack 3D understanding in the generation process, and thus suffer from multi-view inconsistency problems. To alleviate this issue, many 3D-aware GANs have been proposed and shown notable results, but 3D GANs struggle with editing semantic attributes. The controllability and interpretability of 3D GANs have not been much explored. In this work, we propose two solutions to overcome these weaknesses of 2D GANs and 3D-aware GANs. We first introduce a novel 3D-aware GAN, SURF-GAN, which is capable of discovering semantic attributes during training and controlling them in an unsupervised manner. After that, we inject the prior of SURF-GAN into StyleGAN to obtain a high-fidelity 3D-controllable generator. Unlike existing latent-based methods that allow implicit pose control, the proposed 3D-controllable StyleGAN enables explicit pose control over portrait generation. This distillation allows direct compatibility between 3D control and many StyleGAN-based techniques (e.g., inversion and stylization), and also brings an advantage in terms of computational resources. Our code is available at https://github.com/jgkwak95/surf-gan.
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing. While studies on extending 2D StyleGAN to 3D faces have emerged, a corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing. In this paper, we study the challenging problem of 3D GAN inversion, where a latent code is predicted given a single face image to faithfully recover its 3D shape and detailed texture. The problem is ill-posed: innumerable compositions of shape and texture could be rendered to match the current image. Furthermore, with the limited capacity of a global latent code, 2D inversion methods cannot preserve faithful shape and texture at the same time when applied to 3D models. To solve this problem, we devise an effective self-training scheme to constrain the learning of inversion. The learning is done efficiently without any real-world 2D-3D training pairs, but with proxy samples generated from a 3D GAN. In addition, apart from a global latent code that captures the coarse shape and texture information, we augment the generation network with a local branch, where pixel-aligned features are added to faithfully reconstruct face details. We further consider a new pipeline to perform 3D view-consistent editing. Extensive experiments show that our method outperforms state-of-the-art inversion methods in both shape and texture reconstruction quality. Code and data will be released.
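A minimal sketch of the proxy-sample self-training loop: draw latents, render images with the frozen generative model, and supervise the inversion encoder to recover the latents, so no real 2D-3D pairs are needed. The toy linear stand-ins for the renderer and encoder are assumptions.

```python
import torch
import torch.nn as nn

def self_training_step(G, encoder, opt, batch: int = 4, z_dim: int = 64):
    """One proxy-sample training step: sample latents, render with the
    frozen generator G, and train the encoder to invert its own outputs."""
    z = torch.randn(batch, z_dim)
    with torch.no_grad():
        imgs = G(z)                      # proxy 'renderings'
    loss = ((encoder(imgs) - z) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

G = nn.Linear(64, 256)                   # stand-in renderer: latent -> image
enc = nn.Linear(256, 64)                 # inversion encoder being trained
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
print(self_training_step(G, enc, opt))
```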