Automated generation and user-driven authoring of realistic virtual terrains is in high demand for multimedia applications such as VR and gaming. The most common representation adopted for terrain is the Digital Elevation Model (DEM). Existing terrain authoring and modeling techniques have addressed some of these needs and can be broadly categorized as procedural modeling, simulation methods, and example-based methods. In this paper, we propose a novel realistic terrain authoring framework powered by a combination of a VAE and a conditional GAN model. Our framework is an example-based method that overcomes the limitations of existing methods by learning a latent space from a real-world terrain dataset. This latent space allows us to generate multiple variants of a terrain from a single input, as well as to interpolate between terrains, while keeping the generated terrains close to the real-world data distribution. We also developed an interactive tool that lets users generate diverse terrains with minimalist inputs. We perform thorough qualitative and quantitative analyses and provide comparisons with other SOTA methods. We intend to release our code/tool to the academic community.
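The latent-space interpolation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `decode` is a toy linear stand-in for the trained decoder, and the latent dimension and DEM resolution are arbitrary choices.

```python
import numpy as np

# Toy stand-in for the trained model: an 8-D latent space decoded to a
# 16x16 height map (DEM) by a random linear map. Hypothetical, for shape only.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16 * 16))

def decode(z):
    """Map an 8-D latent vector to a 16x16 height map (toy linear decoder)."""
    return (z @ W).reshape(16, 16)

def interpolate_terrains(z_a, z_b, steps=5):
    """Decode evenly spaced points on the line between two latent codes."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [decode((1 - a) * z_a + a * z_b) for a in alphas]

z_a, z_b = rng.standard_normal(8), rng.standard_normal(8)
dems = interpolate_terrains(z_a, z_b)  # 5 DEMs morphing from z_a to z_b
```

Because the interpolation happens in latent space rather than pixel space, intermediate terrains stay near the learned data distribution instead of being blurry pixel averages.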
Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
A sketch is a medium for conveying a visual scene from an individual's creative perspective. Adding color substantially enhances the overall expressiveness of a sketch. This paper proposes two methods that mimic human-drawn colored sketches by utilizing a contour drawing dataset. Our first method renders colored outline sketches by applying image-processing techniques aided by k-means color clustering. The second method uses a generative adversarial network to develop a model that can generate colored sketches from previously unobserved images. We evaluate the results obtained through quantitative and qualitative assessments.
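The k-means color-quantization step of the first method can be sketched as below. This is a minimal pure-NumPy illustration (the paper does not publish this exact code): it clusters the pixels of a reference image into k dominant colors that could then be used to fill regions of a contour sketch.

```python
import numpy as np

def kmeans_palette(pixels, k=3, iters=10, seed=0):
    """Cluster an (N, 3) array of RGB pixels into k centroid colors."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then recompute means.
        dists = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return centroids, labels

# Toy "reference image": two clearly separated color groups.
pixels = np.vstack([np.full((50, 3), 10.0), np.full((50, 3), 200.0)])
palette, labels = kmeans_palette(pixels, k=2)  # the 2 dominant colors
```

In practice the quantized palette keeps the colorization consistent with the reference image while the contour drawing supplies the structure.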
We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.
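The generator interface described above, mapping a low-dimensional probabilistic space to 3D objects, can be sketched as follows. A real 3D-GAN uses volumetric transposed convolutions over a 200-D latent space; here a toy random linear map plus a sigmoid stands in for the trained network, purely to illustrate the z-to-voxel-grid mapping.

```python
import numpy as np

# Toy "generator": sample a 200-D latent vector and map it to a voxel
# occupancy grid. Random weights stand in for the trained 3D-GAN generator.
rng = np.random.default_rng(1)
Z_DIM, RES = 200, 8
W = rng.standard_normal((Z_DIM, RES ** 3)) * 0.05

def generate_voxels(z):
    """Map a latent vector to an RES^3 grid of occupancy probabilities."""
    probs = 1.0 / (1.0 + np.exp(-(z @ W)))  # sigmoid -> values in (0, 1)
    return probs.reshape(RES, RES, RES)

voxels = generate_voxels(rng.standard_normal(Z_DIM))
```

Sampling different z vectors yields different objects without any reference image or CAD model, which is the property the abstract highlights.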
Generative models, an important family of statistical models, aim to learn the observed data distribution by generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have switched their attention from the 2D space to the 3D space, considering that 3D data aligns better with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which has an efficient representation (i.e., a pixel grid) by nature, representing 3D data poses far more challenges. Concretely, we would expect an ideal 3D representation to be capable of modeling shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet these requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion helps the community track the evolution of this field and sparks innovative ideas to advance this challenging task.
Morphable models are crucial for the statistical modeling of 3D faces. Previous works on morphable models mostly focused on large-scale facial geometry but ignored facial details. This paper augments morphable models by learning a Structure-aware Editable Morphable Model (SEMM). SEMM introduces a detail-structure representation based on the distance field of wrinkle lines, jointly modeled with detail displacements to establish better correspondences and enable intuitive manipulation of wrinkle structure. In addition, SEMM introduces two transformation modules that translate expression blendshape weights and age values into changes in the latent space, enabling effective semantic detail editing while preserving identity. Extensive experiments demonstrate that the proposed model compactly represents facial details, outperforms previous methods in expression animation both qualitatively and quantitatively, and achieves effective age editing and wrinkle-line editing of facial details. Code and model are available at https://github.com/gerwang/facial-detail-manipulation.
With the emergence of brain-imaging techniques and machine-learning tools, much effort has been devoted to building computational models that capture the encoding of visual information in the human brain. One of the most challenging brain-decoding tasks is the accurate reconstruction of perceived natural images from brain activity measured by functional magnetic resonance imaging (fMRI). In this work, we survey recent learning-based methods for natural image reconstruction from fMRI. We examine these methods in terms of architecture design, benchmark datasets, and evaluation metrics, and present a fair performance evaluation on standardized evaluation metrics. Finally, we discuss the strengths and limitations of existing studies and suggest potential future directions.
Synthesizing realistic images from text descriptions is a major challenge in computer vision. Current text-to-image synthesis methods fall short of producing high-resolution images that faithfully represent the text descriptors. Most existing studies rely on generative adversarial networks (GANs) or variational autoencoders (VAEs). GANs can generate sharper images but lack diversity in their outputs, whereas VAEs excel at producing diverse outputs but tend to generate blurry images. Considering the relative strengths of GANs and VAEs, we propose a new architecture combining a conditional VAE (CVAE) and a conditional GAN (CGAN) for synthesizing images conditioned on text descriptions. This study uses the conditional VAE as an initial generator to produce a high-level sketch of the text descriptor. This high-level sketch output from the first stage, together with the text descriptor, is used as input to the conditional GAN network. The second-stage GAN produces a 256x256 high-resolution image. The proposed architecture benefits from conditioning augmentation and residual blocks in the conditional GAN network to achieve its results. Multiple experiments were conducted on the CUB and Oxford-102 datasets, and the results of the proposed approach were compared with state-of-the-art techniques such as StackGAN. The experiments show that the proposed method generates high-resolution images conditioned on text descriptions and yields competitive results on both datasets based on the Inception and Frechet Inception Distance scores.
We present CoGS, a novel method for style-conditioned, sketch-driven synthesis of images. CoGS enables exploration of diverse appearance possibilities for a given sketched object, allowing decoupled control over the structure and the appearance of the output. Coarse-grained control over object structure and appearance is enabled via an input sketch and an exemplar "style" conditioning image, which a transformer-based sketch and style encoder maps to a discrete codebook representation. We map the codebook representation into a metric space, enabling fine-grained control over selection and interpolation between multiple synthesis options before generating the image via a vector-quantized GAN (VQGAN) decoder. Our framework thus unifies search and synthesis tasks, in that a sketch and style pair may be used to run an initial synthesis, which may then be refined by combining it with similar results from a search corpus to produce an image more closely matching the user's intent. We show that our model, trained on 125 object classes of our newly created Pseudosketches dataset, is capable of producing a diverse gamut of semantic content and appearance styles.
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
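The core idea of a Texture Field, a continuous function t(p, s) regressed by a neural network that assigns an RGB color to any 3D point p given a shape code s, can be sketched with a tiny MLP. The weights below are random stand-ins for a trained network; only the interface matches the description above.

```python
import numpy as np

# Random stand-in weights for a trained texture network.
rng = np.random.default_rng(2)
P_DIM, S_DIM, HID = 3, 16, 32
W1 = rng.standard_normal((P_DIM + S_DIM, HID)) * 0.1
W2 = rng.standard_normal((HID, 3)) * 0.1

def texture_field(points, shape_code):
    """Evaluate RGB colors at an (N, 3) batch of 3D points for one shape."""
    cond = np.tile(shape_code, (len(points), 1))
    h = np.maximum(np.concatenate([points, cond], axis=1) @ W1, 0.0)  # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # RGB values in (0, 1)

# Colors can be queried at arbitrary continuous points, independent of any
# mesh discretization or UV parameterization -- the key property claimed above.
pts = rng.uniform(-1.0, 1.0, size=(100, 3))
colors = texture_field(pts, rng.standard_normal(S_DIM))
```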
Learning disentangled representations with variational autoencoders (VAEs) is often attributed to the regularization component of the loss. In this work, we highlight the interaction between data and the reconstruction term of the loss as the main contributor to disentanglement in VAEs. We note that standard benchmark datasets are constructed in a way that favors learning representations that appear to be disentangled. We design an intuitive adversarial dataset that exploits this mechanism to break existing state-of-the-art disentanglement frameworks. Finally, we provide a solution that achieves disentanglement via a modification of the reconstruction loss, which affects how VAEs perceive distances between data points.
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
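The competitive process described above can be written out concretely: the discriminator maximizes log D(x) + log(1 - D(G(z))) over real samples x and generated samples G(z), while the generator (in the common non-saturating form) maximizes log D(G(z)). A minimal sketch of both losses:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: negated log D(x) + log(1 - D(G(z))), batch-averaged."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: maximize log D(G(z))."""
    return -np.mean(np.log(d_fake))

d_real = np.array([0.9, 0.8])  # discriminator scores on real samples
d_fake = np.array([0.1, 0.2])  # discriminator scores on generated samples
```

Training alternates gradient steps on these two losses; the backpropagation signals mentioned in the abstract are the gradients of exactly these quantities with respect to each network's parameters.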
Figure 1: We propose a generative adversarial framework for synthesizing 2048 × 1024 images from semantic label maps (lower left corner in (a)). Compared to previous work [5], our results express more natural textures and details. (a) Synthesized result. (b) Application: change label types. We can change labels in the original label map to create new scenes, like replacing trees with buildings. (c) Application: edit object appearance. Our framework also allows a user to edit the appearance of individual objects in the scene, e.g. changing the color of a car or the texture of a road. Please visit our website for more side-by-side comparisons as well as interactive editing demos.
Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or leave users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) Dense Mapping Network (DMN) and 2) Editing Behavior Simulated Training (EBST). Specifically, DMN learns style mapping between a free-form user modified mask and a target image, enabling diverse generation results. EBST models the user editing behavior on the source mask, making the overall framework more robust to various manipulated inputs. Specifically, it introduces dual-editing consistency as the auxiliary supervision signal. To facilitate extensive studies, we construct a large-scale high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks: attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models, and dataset are available at https://github.com/switchablenorms/CelebAMask-HQ.
Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to "fall off" the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user's scribbles 1 .
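The constrained-optimization idea above, keeping edits on the learned manifold, amounts to projecting an edited image onto the range of the generator. With a toy linear generator (an assumption for illustration; the paper optimizes over the latent code of a GAN), the projection has a closed form via least squares:

```python
import numpy as np

# Toy linear "generator": 4-D latent -> flattened 5x5 image. For a linear G
# the manifold projection is a least-squares problem; for a real GAN it is
# solved by iterative optimization over z.
rng = np.random.default_rng(5)
G = rng.standard_normal((4, 25))

def project_to_manifold(x_edit):
    """Return G(z*) where z* = argmin_z ||G(z) - x_edit||^2."""
    z_star, *_ = np.linalg.lstsq(G.T, x_edit.ravel(), rcond=None)
    return (G.T @ z_star).reshape(x_edit.shape)

x_edit = rng.standard_normal((5, 5))   # a (toy) user-edited image
x_proj = project_to_manifold(x_edit)   # nearest point on the learned manifold
```

Constraining every edit to pass through this projection is what prevents the result from "falling off" the manifold of natural images.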
Image-generating machine learning models are typically trained with loss functions based on distance in the image space. This often leads to over-smoothed results. We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric better reflects perceptually similarity of images and thus leads to better results. We show three applications: autoencoder training, a modification of a variational autoencoder, and inversion of deep convolutional networks. In all cases, the generated images look sharp and resemble natural images.
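The proposed metric can be sketched as follows: instead of a pixel-space distance ||x - y||^2, compare images in the feature space of a network phi. Here phi is a toy random feature extractor standing in for the deep-network features used by DeePSiM:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((64, 32)) * 0.1  # toy feature-extractor weights

def phi(image):
    """Map a flattened 8x8 image to a 32-D ReLU feature vector."""
    return np.maximum(image.ravel() @ W, 0.0)

def perceptual_loss(x, y):
    """Squared distance between feature representations, not pixels."""
    return float(np.sum((phi(x) - phi(y)) ** 2))

x = rng.standard_normal((8, 8))
```

Because the distance is measured in feature space, small pixel shifts that leave the features unchanged incur no penalty, which is what avoids the over-smoothing of pixel-space losses.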
Over the last few years, deep neural network approaches to inverse imaging problems have produced impressive results. In this paper, we consider the use of generative models within inverse-problem methods. The regularizers considered penalize images that are far from the range of a generative model that has learned to produce images similar to a training dataset. We name this family generative regularizers. The success of generative regularizers depends on the quality of the generative model, so we propose a set of desired criteria to assess generative models and guide future research. In our numerical experiments, we evaluate three common generative models, autoencoders, variational autoencoders, and generative adversarial networks, against our desired criteria. We also test three different generative regularizers on the inverse problems of deblurring, deconvolution, and tomography. We show that restricting solutions of the inverse problem to lie exactly in the range of a generative model can give good results, but that allowing small deviations from the range of the generator produces more consistent results.
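The generative-regularizer objective can be written out for a linear inverse problem y = Ax: a data-fit term plus a penalty on the distance from x to the range of a generator G. Both A and G below are toy linear stand-ins (an assumption for illustration; the paper uses trained nonlinear generators, for which the penalty is computed by optimizing over z):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 10))   # toy forward operator of the inverse problem
G = rng.standard_normal((3, 10))   # toy linear "generator": z in R^3 -> x in R^10

def objective(x, y, lam=1.0):
    """Data fit ||Ax - y||^2 plus lam times squared distance to range(G)."""
    data_fit = np.sum((A @ x - y) ** 2)
    z_star, *_ = np.linalg.lstsq(G.T, x, rcond=None)  # closed form for linear G
    reg = np.sum((G.T @ z_star - x) ** 2)
    return data_fit + lam * reg

x_in_range = G.T @ np.array([1.0, -2.0, 0.5])  # a point exactly in range(G)
y = A @ x_in_range
```

Taking lam to infinity recovers the hard constraint (solutions exactly in the generator's range); finite lam permits the small deviations that the abstract reports give more consistent results.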
Objectives. We generate artificial leaf images in an automated fashion via advanced deep learning (DL) techniques. Our goal is to provide a source of training samples for AI applications in modern crop management. Such applications require large amounts of data, and while leaf images are not truly scarce, image collection and annotation remain a very time-consuming process. Data scarcity can be addressed by augmentation techniques consisting of simple transformations of samples from a small dataset, but the richness of the augmented data is limited; this motivates the search for alternative approaches. Methods. Pursuing an approach based on DL generative models, we propose a Leaf-to-Leaf Translation (L2L) procedure structured in two steps. First, a residual variational autoencoder architecture generates synthetic leaf skeletons (leaf profile and veins) starting from companion binarized skeletons of real images. In the second step, we perform translation via a Pix2pix framework, which uses a conditional generative adversarial network to reproduce the leaf-blade coloration while preserving the shape and venation pattern. Results. The L2L procedure generates synthetic images of leaves with a realistic appearance. We address performance measurement both qualitatively and quantitatively; for the latter evaluation, we employ a DL anomaly-detection strategy that quantifies the degree of anomaly of synthetic leaves with respect to real samples. Conclusions. Generative DL approaches have the potential to become a new paradigm providing low-cost, meaningful synthetic samples for computer-aided applications. The present L2L approach represents a step toward this goal, being able to generate synthetic samples with a qualitative and quantitative resemblance to real leaves.
Generative neural radiance field (GNeRF) models, which extract implicit 3D representations from 2D images, have recently been shown to produce realistic images of rigid objects such as human faces or cars. However, they usually struggle to generate high-quality images of non-rigid objects such as the human body, which is of great interest for many computer graphics applications. This paper proposes a 3D-aware Semantic-Guided Generative model (3D-SGAN) for human image synthesis, which integrates a GNeRF with a texture generator. The former learns an implicit 3D representation of the human body and outputs a set of 2D semantic segmentation masks. The latter translates these semantic masks into a real image, adding a realistic texture to the human appearance. Without requiring additional 3D information, our model can learn 3D human representations with photo-realistic, controllable generation. Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
Satellite images are becoming increasingly popular and easier to obtain thanks to falling technological costs and the growing number of satellite launches. Besides serving benevolent purposes, satellite data can also be used for malicious reasons such as misinformation. In fact, satellite images can be easily manipulated by relying on general image-editing tools. Moreover, with the proliferation of deep neural networks (DNNs) that can generate realistic synthetic images belonging to various domains, additional threats related to the diffusion of synthetically generated satellite images are emerging. In this paper, we review the state of the art (SOTA) on the generation and manipulation of satellite images. In particular, we focus both on the generation of synthetic satellite imagery from scratch and on the semantic manipulation of satellite images by means of image-transfer technologies, including the transformation of images obtained from one type of sensor to another. We also describe the forensic detection techniques investigated so far to classify and detect synthetic image forgeries. While we focus mostly on forensic techniques explicitly tailored to the detection of AI-generated synthetic content, we also review some methods designed for general splicing detection, which can in principle also be used to spot AI-manipulated images.