Iteratively creating pixel art character sprite sheets is essential to the game development process, but producing the final version containing the different poses and animation clips can require considerable effort. This paper investigates the use of conditional generative adversarial networks to help designers create such sprite sheets. We propose a Pix2Pix-based architecture that generates an image of a character facing a target direction (e.g., right) given an image of that character in a source pose (e.g., front). Experiments with small pixel art datasets yielded encouraging results, producing models with varying degrees of generalization that are sometimes able to generate images very close to the ground truth. We analyze the results through visual inspection and quantitatively with FID.
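The abstract gives no implementation details, but the described setup (a Pix2Pix-style network that maps a sprite in a source pose to the same character facing a target direction) can be sketched roughly as below. The network definitions, tensor shapes, and loss weights are illustrative assumptions, not the authors' code; real Pix2Pix uses a U-Net generator and a PatchGAN discriminator.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for a Pix2Pix-style setup: an image-to-image generator and a
# conditional discriminator that sees (source, candidate) pairs.
class TinyGenerator(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, src, candidate):
        return self.net(torch.cat([src, candidate], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # assumed L1 weight (the value used in the original Pix2Pix paper)

# One training step on a dummy batch: front-facing sprites -> right-facing sprites.
src = torch.rand(4, 3, 64, 64) * 2 - 1     # source pose (e.g., front)
target = torch.rand(4, 3, 64, 64) * 2 - 1  # ground-truth target pose (e.g., right)

# Discriminator step: real pairs vs. generated pairs.
with torch.no_grad():
    fake = G(src)
real_logits, fake_logits = D(src, target), D(src, fake)
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the ground truth.
fake = G(src)
fake_logits = D(src, fake)
g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + lambda_l1 * l1(fake, target)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```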
Robust simulators greatly reduce the need for real-world testing when training and evaluating autonomous vehicles. Data-driven simulators have flourished with recent advances in conditional generative adversarial networks (cGANs), which provide high-fidelity images. The main challenge is synthesizing photorealistic images while following imposed constraints. In this work, we propose to improve the quality of generated images by rethinking the discriminator architecture. We focus on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses. Building on successful cGAN models, we propose a new semantically-aware discriminator that better guides the generator. Our goal is to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine-grained adversarial reasoning. The achieved improvements are generic and can be applied to any architecture for conditional image synthesis. We demonstrate our method on scene, building, and human synthesis tasks across three different datasets. The code is available at https://github.com/vita-epfl/semdisc.
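The abstract describes a discriminator whose shared latent representation serves semantic segmentation, content reconstruction, and adversarial prediction at once. A rough sketch of such a multi-head discriminator is below; the encoder depth, channel sizes, and head design are assumptions for illustration and not the SemDisc architecture itself.

```python
import torch
import torch.nn as nn

class SemanticAwareDiscriminator(nn.Module):
    """Shared encoder with three heads: real/fake, semantic segmentation,
    and content reconstruction. All layer sizes are illustrative only."""
    def __init__(self, img_ch=3, n_classes=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(img_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # Adversarial head: patch-wise real/fake logits.
        self.adv_head = nn.Conv2d(128, 1, 3, padding=1)
        # Segmentation head: per-location class logits at feature resolution.
        self.seg_head = nn.Conv2d(128, n_classes, 3, padding=1)
        # Reconstruction head: decode back to the input image.
        self.rec_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, img_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, img):
        h = self.encoder(img)
        return self.adv_head(h), self.seg_head(h), self.rec_head(h)

# Each output can receive its own loss (adversarial, cross-entropy against the
# semantic map, L1 against the input image), so the shared features must encode
# semantics as well as appearance.
D = SemanticAwareDiscriminator()
img = torch.rand(2, 3, 128, 128)
adv, seg, rec = D(img)
print(adv.shape, seg.shape, rec.shape)  # (2,1,32,32), (2,20,32,32), (2,3,128,128)
```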
Image-based virtual try-on strives to transfer the appearance of a clothing item onto the image of a target person. Prior work has mostly focused on upper-body clothes (e.g., t-shirts, shirts, and tops) and neglected full-body or lower-body items. This shortcoming arises from one main factor: current publicly available datasets for image-based virtual try-on do not account for this variety, limiting progress in the field. To address this deficiency, we introduce Dress Code, which contains images of multi-category clothes. Dress Code is more than 3x larger than publicly available datasets for image-based virtual try-on and features high-resolution paired images (1024x768) with front-view, full-body reference models. To generate high-definition try-on images with high visual quality and rich details, we propose to learn fine-grained discriminating features. Specifically, we leverage a semantic-aware discriminator that makes predictions at the pixel level rather than at the image or patch level. Extensive experimental evaluation demonstrates that the proposed approach surpasses the baselines and state-of-the-art competitors in terms of both visual quality and quantitative results. The Dress Code dataset is publicly available at https://github.com/aimagelab/dress-code.
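The technical point in this abstract is the granularity of the discriminator's prediction: per pixel rather than per image or per patch. The sketch below only contrasts the output shapes of the three granularities; the layer choices are made up for illustration and do not reproduce the Dress Code discriminator.

```python
import torch
import torch.nn as nn

x = torch.rand(1, 3, 256, 192)  # a try-on image (batch, channels, H, W)

# Image-level discriminator: one real/fake score for the whole image.
image_level = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

# Patch-level (PatchGAN-style): one score per receptive-field patch.
patch_level = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

# Pixel-level: a score (and, in a semantic-aware variant, class logits) for
# every pixel of the input, obtained by decoding back to full resolution.
pixel_level = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
)

print(image_level(x).shape)  # torch.Size([1, 1])
print(patch_level(x).shape)  # torch.Size([1, 1, 64, 48])
print(pixel_level(x).shape)  # torch.Size([1, 1, 256, 192])
```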
We present VoloGAN, an adversarial domain adaptation network that translates synthetic RGB-D images of a high-quality 3D model of a person into RGB-D images that could have been captured with a consumer depth sensor. The system is especially useful for generating large amounts of training data for single-view 3D reconstruction algorithms that replicate real-world capture conditions, being able to imitate the style of different sensor types for the same high-end 3D model database. The network uses a CycleGAN framework with a U-Net generator architecture and a discriminator inspired by SIV-GAN. We use different optimizers and learning rate schedules to train the generator and the discriminator. We further construct a loss function that considers image channels individually and, among other metrics, evaluates structural similarity. We demonstrate that CycleGANs can be used for adversarial domain adaptation of synthetic 3D data in order to train a volumetric video generator model with only a few training samples.
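The abstract mentions a loss that treats image channels individually and includes a structural-similarity term. A rough per-channel loss of that flavor is sketched below using a simplified global SSIM (the standard SSIM is computed over local windows); the weights and the exact formulation are assumptions, not VoloGAN's loss.

```python
import torch

def global_ssim(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM over whole single-channel images in [0, 1]
    (the standard formulation uses local Gaussian windows)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(unbiased=False), b.var(unbiased=False)
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def per_channel_rgbd_loss(fake, real, l1_weight=1.0, ssim_weight=1.0):
    """Sum an L1 term and a (1 - SSIM) term separately for each channel of an
    RGB-D tensor of shape (B, 4, H, W), so the depth channel gets its own term."""
    loss = fake.new_zeros(())
    for c in range(fake.shape[1]):
        f, r = fake[:, c], real[:, c]
        loss = loss + l1_weight * (f - r).abs().mean()
        loss = loss + ssim_weight * (1 - global_ssim(f, r))
    return loss

fake = torch.rand(2, 4, 64, 64)  # generated RGB-D
real = torch.rand(2, 4, 64, 64)  # sensor-style RGB-D
print(per_channel_rgbd_loss(fake, real))
```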
Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing methods for font generation are often based on supervised learning and require a large amount of paired data, which is labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as the set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of the FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two reconstruction losses are adopted to constrain the domain-invariant characteristics between generated images and content images. Taking advantage of the FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model generates character images of higher quality than state-of-the-art methods.
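The FDSC described above predicts displacement (offset) maps and uses them to apply deformable convolution to low-level content features. A rough sketch of that mechanism with torchvision's deform_conv2d is below; the shapes and layer choices are illustrative assumptions rather than the DGFont++ implementation, and the offsets are predicted here from the content features themselves only to keep the sketch self-contained.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FDSCSketch(nn.Module):
    """Feature-deformation skip connection sketch: a small conv predicts
    per-location offsets, and those offsets drive a deformable 3x3 convolution
    over the low-level content feature map."""
    def __init__(self, in_ch=64, out_ch=64, k=3):
        super().__init__()
        # Predicts 2 (x, y) displacements per kernel sample per location.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)

    def forward(self, content_feat):
        offset = self.offset_pred(content_feat)            # (B, 2*k*k, H, W)
        return deform_conv2d(content_feat, offset, self.weight, padding=1)

fdsc = FDSCSketch()
low_level = torch.rand(2, 64, 32, 32)   # low-level content feature map
deformed = fdsc(low_level)              # same spatial size, spatially deformed sampling
print(deformed.shape)                   # torch.Size([2, 64, 32, 32])
```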
Cartoons are an important part of our entertainment culture. Though drawing a cartoon is not for everyone, creating one from an arrangement of basic geometric primitives that approximates the character is a fairly frequent technique in art. The key motivation behind this technique is that human bodies - as well as cartoon figures - can be broken down into various basic geometric primitives. Numerous tutorials demonstrate how to draw figures using an appropriate arrangement of fundamental shapes, thus assisting us in creating cartoon characters. This technique is very beneficial for teaching children how to draw cartoons. In this paper, we develop a tool - shape2toon - that aims to automate this approach by utilizing a generative adversarial network which takes an arrangement of geometric primitives (i.e. circles) and generates a cartoon figure (e.g. Mickey Mouse) depending on the given approximation. For this purpose, we created a dataset of geometrically represented cartoon characters. We apply an image-to-image translation technique to our dataset and report the results in this paper. The experimental results show that our system can generate cartoon characters from an input layout of geometric shapes. In addition, we demonstrate a web-based tool as a practical implication of our work.
We introduce CharacterGAN, a generative model that can be trained on only a few samples (8-15) of a given character. Our model generates novel poses based on keypoint locations, which can be modified in real time while providing interactive feedback, allowing for intuitive reposing and animation. Since we only have very limited training samples, one of the key challenges is how to handle (dis)occlusions, e.g. when a hand moves behind or in front of the body. To address this, we introduce a novel layering approach that splits the input keypoints into different layers which are processed independently. These layers represent different parts of the character and provide a strong implicit bias that helps to obtain realistic results even with strong (dis)occlusions. To combine the features of the individual layers, we use an adaptive scaling approach applied over all keypoints. Finally, we introduce a mask connectivity constraint to reduce distortion artifacts caused by keypoint configurations far outside the training distribution at test time. We show that our approach outperforms recent baselines and creates realistic animations for diverse characters. We also show that our model can handle discrete state changes, for example a profile facing to the left or to the right, that the different layers do indeed learn features specific to the keypoints assigned to them, and that our model scales to larger datasets when more data is available.
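A small sketch of the layering idea described above: keypoints are split into groups (layers), each group is rasterized into its own heatmap stack, and a scale derived from all keypoints adapts the maps before they are combined. The grouping, Gaussian rendering, and scaling rule here are illustrative assumptions, not CharacterGAN's code.

```python
import torch

def keypoints_to_heatmaps(kps, size=64, sigma=2.0):
    """Render each (x, y) keypoint (coordinates in [0, 1]) as a Gaussian heatmap."""
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    maps = []
    for x, y in kps:
        d2 = (xs - x * (size - 1)) ** 2 + (ys - y * (size - 1)) ** 2
        maps.append(torch.exp(-d2 / (2 * sigma ** 2)))
    return torch.stack(maps)            # (num_keypoints_in_layer, size, size)

# Keypoints split into layers that are processed independently, e.g. far arm /
# body / near arm, so (dis)occlusions can be represented by layer order.
layers = {
    "far_arm":  torch.tensor([[0.30, 0.40], [0.25, 0.60]]),
    "body":     torch.tensor([[0.50, 0.30], [0.50, 0.55], [0.50, 0.80]]),
    "near_arm": torch.tensor([[0.70, 0.40], [0.75, 0.60]]),
}

all_kps = torch.cat(list(layers.values()))
# Adaptive scaling over *all* keypoints: normalize by the overall pose extent.
extent = (all_kps.max(dim=0).values - all_kps.min(dim=0).values).norm()

layer_heatmaps = {
    name: keypoints_to_heatmaps(kps) / extent.clamp(min=1e-6)
    for name, kps in layers.items()
}
for name, hm in layer_heatmaps.items():
    print(name, hm.shape)   # each layer keeps its own (K_layer, 64, 64) stack
```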
Generating new fonts is a time-consuming and labor-intensive task, especially for a language with a huge number of characters such as Chinese. Various deep learning models have demonstrated the ability to efficiently generate new fonts from a few reference characters in the target style. This project aims to develop a few-shot cross-lingual font generator based on AGIS-Net and to improve on its reported performance metrics. Our approaches include redesigning the encoder and the loss function. We validate our method on multiple languages and datasets.
Automatic colorization of images without human intervention has been a topic of interest in the machine learning community for some time. Assigning colors to an image is a highly ill-posed problem because of its innately high degrees of freedom; given an image, there is often no single correct color combination. Besides colorization, another problem in image reconstruction is single-image super-resolution, which aims to transform low-resolution images into higher-resolution ones. This research aims to provide an automated approach for both tasks by focusing on a very specific class of images, namely astronomical images, and using generative adversarial networks (GANs). We explore the use of various models in two different color spaces, RGB and L*a*b*. Owing to the small dataset, we use transfer learning with a pre-trained ResNet-18 as the backbone, i.e. the encoder of a U-Net, which is further fine-tuned. The model produces visually appealing images that render high-resolution, colorized detail not present in the original images. We present our results by evaluating the GANs quantitatively with distance metrics such as the L1 and L2 distances in each color space across all channels to provide a comparative analysis. We use the Frechet Inception Distance (FID) to compare the distribution of the generated images with the distribution of the real images in order to assess the models' performance.
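The evaluation described above compares generated and ground-truth images channel by channel in both RGB and L*a*b* using L1 and L2 distances. A small sketch of such a metric computation is below; skimage handles the color conversion, and the image pair here is random data standing in for a generated/reference pair.

```python
import numpy as np
from skimage.color import rgb2lab

def per_channel_distances(a, b):
    """Return (L1, L2) distances per channel for two (H, W, 3) float arrays."""
    l1 = np.abs(a - b).mean(axis=(0, 1))
    l2 = np.sqrt(((a - b) ** 2).mean(axis=(0, 1)))
    return l1, l2

# Random stand-ins for a generated and a ground-truth image in [0, 1].
rng = np.random.default_rng(0)
generated = rng.random((128, 128, 3))
reference = rng.random((128, 128, 3))

# RGB-space distances: one value per R/G/B channel.
print(per_channel_distances(generated, reference))

# The same comparison in L*a*b* space: one value per L*/a*/b* channel.
print(per_channel_distances(rgb2lab(generated), rgb2lab(reference)))
```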
Generative adversarial networks (GANs) have recently introduced effective methods for performing image-to-image translation. These models can be applied to a variety of image-to-image translation domains without changing any parameters. In this paper, we survey and analyze eight image-to-image generative adversarial networks: Pix2Pix, CycleGAN, CoGAN, StarGAN, MUNIT, StarGAN2, DA-GAN, and Self-Attention GAN. Each of these models presented state-of-the-art results and introduced new techniques for building image-to-image GANs. In addition to the survey of models, we also survey the 18 datasets they were trained on and the 9 metrics they were evaluated on. Finally, we present the results of controlled experiments with 6 of these models on a common set of metrics and datasets. The results are mixed and show that, on certain datasets, tasks, and metrics, some models outperform others. The last section of this paper discusses these results and establishes areas for future research. As researchers continue to propose new image-to-image GANs, it is important that they gain a thorough understanding of the existing methods, datasets, and metrics. This paper provides a comprehensive overview and discussion to help build that foundation.
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. An appealing alternative is to render synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based model adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
Our goal with this survey is to provide an overview of the state of the art deep learning technologies for face generation and editing. We will cover popular latest architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing and preserving photo quality. We aim to provide an entry point into the field for readers that have basic knowledge about the field of deep learning and are looking for an accessible introduction and overview.
Compared to CNN-based classification, segmentation, or object detection, the goals and methods of generative networks are fundamentally different. Originally, they were not intended as tools for image analysis, but to generate natural-looking images. The adversarial training paradigm was proposed to stabilize generative methods and has proven highly successful, though it was by no means the first attempt. This chapter gives a basic introduction to the motivation behind generative adversarial networks (GANs) and traces the path to their success by abstracting the basic task and working mechanism and deriving the difficulties of early practical approaches. Methods for more stable training are presented, as are typical signs of poor convergence and their causes. Although the chapter focuses on GANs for image generation and image analysis, the adversarial training paradigm itself is not specific to images and also generalizes to tasks in image analysis. Example architectures for image semantic segmentation and anomaly detection are presented, before GANs are contrasted with further generative modeling approaches that have recently entered the scene. This allows a contextualized view of the limits, but also the benefits, of GANs.
Image-based virtual try-on techniques have shown great promise for enhancing the user-experience and improving customer satisfaction on fashion-oriented e-commerce platforms. However, existing techniques are currently still limited in the quality of the try-on results they are able to produce from input images of diverse characteristics. In this work, we propose a Context-Driven Virtual Try-On Network (C-VTON) that addresses these limitations and convincingly transfers selected clothing items to the target subjects even under challenging pose configurations and in the presence of self-occlusions. At the core of the C-VTON pipeline are: (i) a geometric matching procedure that efficiently aligns the target clothing with the pose of the person in the input images, and (ii) a powerful image generator that utilizes various types of contextual information when synthesizing the final try-on result. C-VTON is evaluated in rigorous experiments on the VITON and MPV datasets and in comparison to state-of-the-art techniques from the literature. Experimental results show that the proposed approach is able to produce photo-realistic and visually convincing results and significantly improves on the existing state-of-the-art.
Satellite images are becoming increasingly popular and easier to obtain thanks to the decreasing cost of the technology and the growing number of satellite launches. In addition to serving benevolent purposes, satellite data can also be used for malicious reasons such as misinformation. In fact, satellite images can easily be manipulated with general-purpose image editing tools. Moreover, with the proliferation of deep neural networks (DNNs) that can generate realistic synthetic images belonging to various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the state of the art (SOTA) on the generation and manipulation of satellite images. In particular, we focus both on the generation of synthetic satellite images from scratch and on the semantic manipulation of satellite images by means of image transfer techniques, including the transformation of images acquired from one type of sensor to another. We also describe the forensic detection techniques that have been investigated so far to classify and detect synthetic image forgeries. While we focus mainly on forensic techniques explicitly tailored to the detection of AI-generated synthetic content, we also review some methods designed for general splicing detection, which can in principle also be used to discover AI-manipulated images.
This paper aims at a comprehensive study of facial sketch synthesis (FSS). However, due to the high cost of obtaining hand-drawn sketch datasets, there is a lack of a complete benchmark for evaluating the development of FSS algorithms over the past decade. We therefore first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin tones, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should thus facilitate progress in FSS research. Second, we present the largest-scale FSS investigation by reviewing 139 classical methods, including 34 handcrafted-feature-based facial sketch synthesis approaches, 37 general neural style transfer methods, 43 deep image-to-image translation methods, and 35 image-to-sketch approaches. In addition, we report comprehensive experiments on 19 existing cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, namely facial-aware masking and style-vector expansion, FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we conclude with the lessons learned over the past years and point out several unsolved challenges. Our open-source code is available at https://github.com/dengpingfan/fsgan.
Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.
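The core mechanism in the text-to-image formulation above is conditioning: a learned text embedding is combined with the noise vector before the generator (and is also made available to the discriminator). The sketch below shows only that conditioning step, with made-up dimensions; it is an illustration of the idea, not the authors' architecture.

```python
import torch
import torch.nn as nn

noise_dim, text_dim, proj_dim = 100, 1024, 128

# Compress the sentence embedding, then concatenate it with the noise vector so
# the generator is conditioned on the text description.
text_proj = nn.Sequential(nn.Linear(text_dim, proj_dim), nn.LeakyReLU(0.2))
generator_input = nn.Linear(noise_dim + proj_dim, 4 * 4 * 512)  # seed of a deconv stack

z = torch.randn(8, noise_dim)           # noise batch
text_emb = torch.randn(8, text_dim)     # stand-in for pretrained text-encoder embeddings

cond = torch.cat([z, text_proj(text_emb)], dim=1)   # (8, 228)
feat = generator_input(cond).view(8, 512, 4, 4)     # reshaped input for upsampling layers
print(feat.shape)                                    # torch.Size([8, 512, 4, 4])
```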
[Figure 1 panels: Labels to Facade; BW to Color; Aerial to Map; Labels to Street Scene; Edges to Photo; Day to Night; each shown as an input/output pair.] Figure 1: Many problems in image processing, graphics, and vision involve translating an input image into a corresponding output image. These problems are often treated with application-specific algorithms, even though the setting is always the same: map pixels to pixels. Conditional adversarial nets are a general-purpose solution that appears to work well on a wide variety of these problems. Here we show results of the method on several. In each case we use the same architecture and objective, and simply train on different data.
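For reference, the conditional adversarial objective used by pix2pix combines a conditional GAN loss with an L1 reconstruction term (standard formulation; x is the input image, y the target, z the noise, and λ a weighting hyperparameter):

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big]
  + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big],
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z)\rVert_{1}\big],
\qquad
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G).
```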
As several industries move towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, and are thus immediately usable in downstream applications. Prior work on 3D generative modeling either lacks geometric details, is limited in the mesh topologies it can produce, typically does not support textures, or utilizes neural renderers in the synthesis process, which makes its use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.
Image-to-image translation is a recent trend in which an image is converted from one domain to another using generative adversarial networks (GANs). Existing GAN models are trained using only the input and output modalities of the translation. In this paper, we perform semantic-injected training of GAN models. Specifically, we train with the original input and output modalities, and additionally inject a few epochs of training for the translation from the input to a semantic map. We refer to the training that translates the input image to the target domain as the original training. Injecting semantic training into the original training improves the generalization capability of the trained GAN model. Moreover, it also better preserves categorical information in the generated images. The semantic map is used only at training time and is not required at test time. Experiments are conducted with state-of-the-art GAN models on the CityScapes and RGB-NIR stereo datasets. After injecting semantic training, we observe improved performance in terms of SSIM, FID, and KID compared to the original training.
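A rough sketch of the training schedule described above: most epochs train the generator on the original input-to-target translation, and a few "injected" epochs instead train it to translate the input to its semantic map, which is used only at training time. The epoch schedule, the stand-in model, and the losses below are placeholders, not the authors' setup (the adversarial terms are omitted for brevity).

```python
import torch
import torch.nn as nn

generator = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for an image-to-image GAN generator
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1 = nn.L1Loss()

num_epochs = 10
semantic_epochs = {3, 7}   # the few epochs where semantic training is injected

for epoch in range(num_epochs):
    # Dummy batch: input image, target-domain image, and a 3-channel color-coded
    # semantic map of the input (only needed during training).
    x = torch.rand(4, 3, 64, 64)
    y_target = torch.rand(4, 3, 64, 64)
    y_semantic = torch.rand(4, 3, 64, 64)

    # Original training: input -> target domain. Injected epochs: input -> semantic map.
    y = y_semantic if epoch in semantic_epochs else y_target

    loss = l1(generator(x), y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    print(f"epoch {epoch}: {'semantic' if epoch in semantic_epochs else 'original'} loss={loss.item():.3f}")
```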