This paper presents two important contributions to conditional generative adversarial networks (cGANs) to improve the wide variety of applications that exploit this architecture. The first main contribution is an analysis of cGANs showing that they are not explicitly conditioned: in particular, it is shown that the discriminator, and consequently the cGAN, does not automatically learn the conditioning between its inputs. The second contribution is a new method, called a contrario cGAN, that explicitly models conditionality in both parts of the adversarial architecture via a novel a contrario loss, which involves training the discriminator to learn unconditioned (adverse) examples. This leads to a novel data-augmentation approach for GANs (a contrario learning) that uses adverse examples to restrict the generator's search space to conditioned outputs. Extensive experiments are carried out to evaluate the conditionality of the discriminator by proposing a probability-distribution analysis. Comparisons with cGAN architectures on different applications show significant improvements in performance on well-known datasets, including semantic image synthesis, image segmentation, monocular depth prediction, and "single label"-to-image generation, using different metrics: Fréchet Inception Distance (FID), mean Intersection over Union (mIoU), root mean squared error in log scale (RMSE log), and number of statistically different bins (NDB).
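The a contrario idea described above can be sketched concretely: besides correctly matched (input, output) pairs labelled real, the discriminator is also shown deliberately mismatched pairs labelled fake, so it must learn the conditioning itself rather than output realism alone. A minimal illustrative sketch, where the pairing scheme and all names are hypothetical stand-ins, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def a_contrario_batch(conds, images):
    """Matched (condition, image) pairs are labelled real (1); randomly
    mismatched pairs are "a contrario" examples labelled fake (0)."""
    n = len(conds)
    perm = rng.permutation(n)
    while np.any(perm == np.arange(n)):   # avoid accidental correct matches
        perm = rng.permutation(n)
    matched = list(zip(conds, images))
    mismatched = list(zip(conds, [images[i] for i in perm]))
    pairs = matched + mismatched
    labels = np.concatenate([np.ones(n), np.zeros(n)])
    return pairs, labels

# Toy paired data: condition i belongs with image i.
conds = [("cond", i) for i in range(4)]
imgs = [("img", i) for i in range(4)]
pairs, labels = a_contrario_batch(conds, imgs)
```

A discriminator trained on such batches can no longer score a realistic but wrongly conditioned output as real, which is the restriction on the generator's search space that the abstract describes.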
Translated by Google Translate
Strong simulators can highly reduce the need for real-world testing when training and evaluating autonomous vehicles. Data-driven simulators have flourished with recent advances in conditional generative adversarial networks (cGANs), providing high-fidelity images. The main challenge is synthesizing photorealistic images while following given constraints. In this work, we propose to improve the quality of generated images by rethinking the discriminator architecture. The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses. Building on successful cGAN models, we propose a new semantically-aware discriminator that better guides the generator. We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation, content reconstruction, and coarse-to-fine-grained adversarial reasoning. The achieved improvements are generic and can be applied to any architecture for conditional image synthesis. We demonstrate the strength of our method on scene, building, and human synthesis tasks across three different datasets. The code is available at https://github.com/vita-epfl/semdisc.
Generative adversarial networks (GANs) produce high-quality images but are challenging to train. They need careful regularization, vast amounts of compute, and expensive hyper-parameter sweeps. We make significant headway on these issues by projecting generated and real samples into a fixed, pretrained feature space. Motivated by the finding that the discriminator cannot fully exploit features from deeper layers of the pretrained model, we propose a more effective strategy that mixes features across channels and resolutions. Our Projected GAN improves image quality, sample efficiency, and convergence speed. It is further compatible with resolutions of up to one megapixel and advances the state-of-the-art Fréchet Inception Distance (FID) on twenty-two benchmark datasets. Importantly, Projected GANs match the previously lowest FIDs up to 40 times faster, cutting the wall-clock time from 5 days to less than 3 hours given the same computational resources.
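The core mechanism above, training only a lightweight discriminator head on top of a frozen feature space, can be sketched in a few lines. Here a fixed random projection stands in for the pretrained feature network, and well-separated Gaussian blobs stand in for real and generated images; every shape, constant, and name is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_FEAT = 64, 16

# Frozen random projection standing in for a pretrained feature network.
W_frozen = rng.standard_normal((D_IN, D_FEAT)) / np.sqrt(D_IN)

def project(x):
    """Fixed (non-trainable) nonlinear feature space."""
    return np.maximum(x @ W_frozen, 0.0)

real = rng.standard_normal((32, D_IN)) + 2.0   # toy "real" samples
fake = rng.standard_normal((32, D_IN)) - 2.0   # toy "generated" samples

# Only a lightweight linear discriminator head is trained, on top of the
# frozen features, via logistic-loss gradient descent.
fr, ff = project(real), project(fake)          # features are fixed once
w = np.zeros(D_FEAT)
for _ in range(200):
    pr = 1.0 / (1.0 + np.exp(-(fr @ w)))       # D(real) probabilities
    pf = 1.0 / (1.0 + np.exp(-(ff @ w)))       # D(fake) probabilities
    grad = fr.T @ (pr - 1.0) / len(fr) + ff.T @ pf / len(ff)
    w -= 0.1 * grad

acc = 0.5 * np.mean(fr @ w > 0) + 0.5 * np.mean(ff @ w < 0)
```

The design point is that the expensive representation is never updated; only the small head `w` is, which is one source of the sample-efficiency and speed gains the abstract reports.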
Synthesizing realistic images from text descriptions is a major challenge in computer vision. Current text-to-image synthesis methods fall short of producing high-resolution images that represent the text descriptors well. Most existing studies rely on generative adversarial networks (GANs) or variational autoencoders (VAEs). GANs can produce sharper images but lack diversity in their outputs, whereas VAEs excel at producing diverse outputs, though the generated images are often blurry. Considering the relative strengths of GANs and VAEs, we propose a new network architecture combining a conditional VAE (CVAE) and a conditional GAN (CGAN) for synthesizing images conditioned on text descriptions. This study uses the conditional VAE as an initial generator to produce a high-level sketch of the text descriptor. This high-level sketch from the first stage, together with the text descriptor, is used as input to the conditional GAN network. The second-stage GAN produces 256x256 high-resolution images. The proposed architecture benefits from conditioning augmentation and residual blocks in the conditional GAN network to achieve its results. Multiple experiments were conducted using the CUB and Oxford-102 datasets, and the results of the proposed method were compared with state-of-the-art techniques such as StackGAN. The experiments show that the proposed method generates high-resolution images conditioned on text descriptions and produces competitive results on both datasets based on Inception and Fréchet Inception scores.
[Figure 1 panels, each showing an input/output pair: Labels to Street Scene, Labels to Facade, BW to Color, Aerial to Map, Day to Night, Edges to Photo.] Figure 1: Many problems in image processing, graphics, and vision involve translating an input image into a corresponding output image. These problems are often treated with application-specific algorithms, even though the setting is always the same: map pixels to pixels. Conditional adversarial nets are a general-purpose solution that appears to work well on a wide variety of these problems. Here we show results of the method on several. In each case we use the same architecture and objective, and simply train on different data.
We propose a generative adversarial network with multiple discriminators, where each discriminator specializes in distinguishing a subset of the real dataset. This approach facilitates learning a generator that coincides with the underlying data distribution, thereby mitigating the chronic mode-collapse problem. Taking inspiration from multiple-choice learning, we guide each discriminator to acquire expertise on a subset of the whole data, and allow the generator to automatically find reasonable correspondences between the latent and real data spaces without supervision of training examples or of the number of discriminators. Despite using multiple discriminators, the backbone networks are shared across them, so the increase in training cost is minimal. We demonstrate the effectiveness of our algorithm on standard datasets using multiple evaluation metrics.
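The multiple-choice routing described above can be illustrated with toy "expert" discriminators: each sample is handled by whichever discriminator currently scores it best, so experts specialize on modes of the data without any supervision. The scorers and data below are hypothetical stand-ins for real discriminator networks:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy "expert" discriminators, each scoring how well a sample fits
# its own region of the data; lower loss means more expert on that sample.
centers = np.array([-3.0, 3.0])

def disc_loss(x, k):
    return (x - centers[k]) ** 2

# Bimodal data; multiple-choice routing should send each mode to one expert.
data = np.concatenate([rng.normal(-3, 0.5, 50), rng.normal(3, 0.5, 50)])

losses = np.stack([disc_loss(data, k) for k in range(2)])   # shape (2, 100)
assignment = np.argmin(losses, axis=0)   # expert index chosen per sample
```

In a full implementation only the winning expert's loss would be backpropagated for each sample, which is what drives the specialization.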
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
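The competitive process this overview describes has a well-known closed-form optimum for the discriminator when the generator is held fixed: D*(x) = p_data(x) / (p_data(x) + p_g(x)). A toy numpy illustration, with two known 1-D Gaussians standing in for the data and model distributions:

```python
import numpy as np

def gauss_pdf(x, mu, sigma=1.0):
    """Density of a 1-D Gaussian."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def d_star(x, mu_data=2.0, mu_gen=-2.0):
    """Optimal discriminator for a fixed generator: p_data / (p_data + p_g)."""
    pd = gauss_pdf(x, mu_data)
    pg = gauss_pdf(x, mu_gen)
    return pd / (pd + pg)

xs = np.linspace(-5.0, 5.0, 101)
ds = d_star(xs)   # close to 1 where data dominates, 0 where the model does
```

When the generator matches the data distribution exactly, D* is 0.5 everywhere, which is the equilibrium the adversarial game aims for.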
In biomedical image analysis, the applicability of deep learning methods is directly impacted by the quantity of image data available, because deep learning models require large image datasets to provide high-level performance. Generative Adversarial Networks (GANs) have been widely utilized to address data limitations through the generation of synthetic biomedical images. GANs consist of two models: the generator, which learns how to produce synthetic images based on the feedback it receives, and the discriminator, which classifies an image as synthetic or real and provides feedback to the generator. Throughout the training process, a GAN can experience several technical challenges that impede the generation of suitable synthetic imagery. First, the mode collapse problem, whereby the generator either produces an identical image or produces a uniform image from distinct input features. Second, the non-convergence problem, whereby the gradient descent optimizer fails to reach a Nash equilibrium. Third, the vanishing gradient problem, whereby unstable training behavior occurs because the discriminator achieves optimal classification performance, resulting in no meaningful feedback being provided to the generator. These problems result in synthetic imagery that is blurry, unrealistic, and less diverse. To date, no survey article has outlined the impact of these technical challenges in the context of the biomedical imagery domain. This work presents a review and taxonomy based on solutions to the training problems of GANs in the biomedical imaging domain. This survey highlights important challenges and outlines future research directions about the training of GANs in the domain of biomedical imagery.
Generative adversarial networks (GANs) have recently been introduced as an effective approach to performing image-to-image translation. These models can be applied to various domains of image-to-image translation without changing any parameters. In this paper, we survey and analyze eight image-to-image generative adversarial networks: Pix2Pix, CycleGAN, CoGAN, StarGAN, MUNIT, StarGAN2, DA-GAN, and Self-Attention GAN. Each of these models has presented state-of-the-art results and introduced new techniques for building image-to-image GANs. In addition to surveying the models, we also survey the 18 datasets they were trained on and the 9 metrics they were evaluated on. Finally, we present the results of controlled experiments with 6 of these models on a common set of metrics and datasets. The results are mixed and show that on certain datasets, tasks, and metrics, some models outperform others. The final section of this paper discusses these results and establishes areas for future research. As researchers continue to innovate new image-to-image GANs, it is important for them to understand the existing methods, datasets, and metrics. This paper provides a comprehensive overview and discussion to help build this foundation.
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available. To facilitate GAN training, current methods propose to use data-specific augmentation techniques. Despite the effectiveness, it is difficult for these methods to scale to practical applications. In this work, we present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks. We first produce augmented samples using the convex combinations of the real samples. Then, we optimize the augmented samples by minimizing the norms of the data scores, i.e., the gradients of the log-density functions. This procedure enforces the augmented samples close to the data manifold. To estimate the scores, we train a deep estimation network with multi-scale score matching. For different image synthesis tasks, we train the score estimation network using different data. We do not require the tuning of the hyperparameters or modifications to the network architecture. The ScoreMix method effectively increases the diversity of data and reduces the overfitting problem. Moreover, it can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ScoreMix method achieve significant improvements.
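The two ScoreMix steps above (convex combinations of real samples, then minimizing the norms of the data scores) can be sketched with a distribution whose score is known analytically: for a standard normal, the score is s(x) = -x, so gradient descent on ||s(x)||^2 pulls augmented samples toward the data mode. In the paper the score comes from a learned multi-scale score-matching network; the Gaussian here is purely a stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)

def score(x):
    """Analytic score of a standard normal: grad log p(x) = -x."""
    return -x

real = rng.standard_normal((8, 2))
lam = rng.uniform(0.2, 0.8, (8, 1))
mixed = lam * real + (1.0 - lam) * real[::-1]   # convex combinations of reals

before = np.linalg.norm(score(mixed), axis=1).mean()

x = mixed.copy()
for _ in range(20):          # gradient descent on ||score(x)||^2 (= ||x||^2 here)
    x -= 0.1 * (2.0 * x)     # since score(x) = -x, the gradient is 2x

after = np.linalg.norm(score(x), axis=1).mean()
```

The score-norm of the optimized samples shrinks, which is the "enforce the augmented samples close to the data manifold" step in the abstract; with a learned score network the same update rule applies, only `score` changes.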
The goals and methods of generative networks are fundamentally different from those of CNNs for classification, segmentation, or object detection. They were not originally intended as tools for image analysis, but to generate natural-looking images. The adversarial training paradigm was proposed to stabilize generative methods and has proven highly successful, though it was by no means the first attempt. This chapter gives a basic introduction to the motivation behind generative adversarial networks (GANs) and traces the path of their success by abstracting the basic task and working mechanism and deriving the difficulties of early practical approaches. Methods for more stable training are shown, as are typical signs of poor convergence and their causes. Although this chapter focuses on GANs used for image generation and image analysis, the adversarial training paradigm itself is not specific to images and also generalizes to tasks in image analysis. Examples of well-known architectures for image semantic segmentation and anomaly detection are reviewed before GANs are contrasted with further generative modeling approaches that have recently entered the scene. This allows a contextualized view of the limitations, but also of the benefits, of GANs.
Training supervised image-synthesis models requires a critic that can compare two images: the generated result and the ground truth. Yet this basic functionality remains an open problem. A popular line of approaches uses the L1 (mean absolute error) loss, either in pixel space or in the feature space of a pretrained deep network. However, we observe that these losses tend to produce overly blurry and grey images, and other techniques, such as GANs, are needed to combat these artifacts. In this work, we introduce an information-theory-based approach to measuring similarity between two images. We argue that a good reconstruction should have high mutual information with the ground truth. This view enables learning a lightweight critic in a contrastive manner to "calibrate" a feature space, such that reconstructions of corresponding spatial patches are brought together while other patches are repulsed. We show that, when used as a drop-in replacement for the L1 loss, our formulation immediately boosts the perceptual realism of output images, with or without an additional GAN loss.
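The patch-contrastive idea above can be sketched as an InfoNCE-style loss: each reconstructed patch embedding should match its own ground-truth patch against all other patches in the image. The embeddings below are toy random vectors and the temperature is an illustrative assumption, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(5)

def info_nce(recon_feats, gt_feats, temp=0.1):
    """Cross-entropy of matching each reconstructed patch to its own
    ground-truth patch among all patches (cosine similarities as logits)."""
    r = recon_feats / np.linalg.norm(recon_feats, axis=1, keepdims=True)
    g = gt_feats / np.linalg.norm(gt_feats, axis=1, keepdims=True)
    logits = r @ g.T / temp                        # (N, N) patch similarities
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(logp)))          # correct patch = diagonal

gt = rng.standard_normal((16, 8))                   # ground-truth patch features
aligned = gt + 0.05 * rng.standard_normal((16, 8))  # a faithful reconstruction
shuffled = gt[rng.permutation(16)]                  # patches in the wrong places

loss_aligned = info_nce(aligned, gt)
loss_shuffled = info_nce(shuffled, gt)
```

A faithful reconstruction scores a much lower loss than a spatially scrambled one, which is exactly the signal an L1 loss in pixel space cannot provide.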
Recently, a series of algorithms have been explored for GAN compression, aiming to reduce the huge computational overhead and memory usage when deploying GANs on resource-constrained edge devices. However, most existing GAN compression work only focuses on how to compress the generator and fails to take the discriminator into account. In this work, we revisit the role of the discriminator in GAN compression and design a novel generator-discriminator cooperative compression scheme, termed GCC. In GCC, a selective-activation discriminator automatically selects and activates convolutional channels according to a local capacity constraint and a global coordination constraint, which helps maintain a Nash equilibrium with the lightweight generator during adversarial training and avoids mode collapse. The original generator and discriminator are also optimized from scratch as teacher models to progressively refine the pruned generator and the selective-activation discriminator. A novel online collaborative distillation scheme is designed to take full advantage of the intermediate features of the teacher generator and discriminator to further boost the performance of the lightweight generator. Extensive experiments on generation tasks with various GANs demonstrate the effectiveness and generalization of GCC. Among other results, GCC helps reduce 80% of the computational cost while maintaining comparable performance on image translation tasks. Our code and models are available at https://github.com/sjleo/gcc.
Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and for retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) digit classification (MNIST, SVHN and USPS datasets), (2) object recognition using the OFFICE dataset, and (3) domain adaptation from synthetic to real data. Our method achieves state-of-the-art performance in most experimental settings and is, by far, the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.
[Figure 1 panels: (a) synthesized result, with the input semantic label map in the lower left corner; (b) application: change label types; (c) application: edit object appearance.] Figure 1: We propose a generative adversarial framework for synthesizing 2048 × 1024 images from semantic label maps (lower left corner in (a)). Compared to previous work [5], our results express more natural textures and details. (b) We can change labels in the original label map to create new scenes, like replacing trees with buildings. (c) Our framework also allows a user to edit the appearance of individual objects in the scene, e.g. changing the color of a car or the texture of a road. Please visit our website for more side-by-side comparisons as well as interactive editing demos.
Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
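The invertibility constraint described above is typically realized by training an encoder E so that E(G(x, z)) recovers the injected latent code z, penalizing many-to-one (mode-collapsed) mappings. A toy linear sketch, where the generator, encoder, and all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

A = rng.standard_normal((4, 3))   # how the generator uses the input image
B = rng.standard_normal((4, 2))   # how the generator injects the latent code

def G(x, z):
    """Toy linear generator: output = A x + B z."""
    return A @ x + B @ z

def E(y, x):
    """Toy encoder recovering the latent code from the output and input."""
    return np.linalg.pinv(B) @ (y - A @ x)

x = rng.standard_normal(3)
z = rng.standard_normal(2)
z_rec = E(G(x, z), x)
loss_latent = float(np.linalg.norm(z_rec - z))   # the z-reconstruction penalty
```

If G collapsed all latent codes to one output, no encoder could drive this penalty to zero, which is why the constraint encourages diverse outputs.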
Image generation has raised tremendous attention in both academia and industry, especially for conditional and target-oriented image generation, such as criminal portrait sketching and fashion design. Although current studies have achieved preliminary results along this direction, they always focus on class labels as the condition, where spatial content is randomly generated from latent vectors. Edge details are usually blurred since spatial information is difficult to preserve. In light of this, we propose a novel Spatially Constrained Generative Adversarial Network (SCGAN), which decouples the spatial constraints from the latent vector and makes these constraints feasible as additional controllable signals. To enhance spatial controllability, a generator network is specially designed to progressively take semantic segmentations, latent vectors, and attribute-level labels as inputs. In addition, a segmentor network is constructed to impose spatial constraints on the generator. Experimentally, we provide both visual and quantitative results on the CelebA and DeepFashion datasets, and demonstrate that the proposed SCGAN is very effective at controlling spatial content as well as generating high-quality images.
Joint synthesis of images and segmentation masks with generative adversarial networks (GANs) holds promise for reducing the effort needed to collect image data with pixel-wise annotations. However, to learn high-fidelity image-mask synthesis, existing GAN approaches first require a pre-training phase demanding large amounts of image data, which limits their utilization in restricted image domains. In this work, we take a step toward reducing this limitation, introducing the task of one-shot image-mask synthesis. We aim to generate diverse images and their segmentation masks given only a single labelled example, and assume, in contrast to previous models, no access to any pre-training data. To this end, inspired by recent architectural developments of single-image GANs, we introduce the OSMIS model, which synthesizes segmentation masks that are precisely aligned with the generated images in the one-shot regime. Besides achieving high fidelity of the generated masks, OSMIS outperforms state-of-the-art single-image GAN models in image synthesis quality and diversity. In addition, despite not using any additional data, OSMIS demonstrates an impressive ability to serve as a source of useful data augmentation for one-shot segmentation applications, providing performance gains that are complementary to standard data-augmentation techniques. The code is available at https://github.com/boschresearch/One-shot-synthesis.
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images. Recent studies have shown that training GANs with limited data remains formidable, as discriminator overfitting is the underlying cause that impedes the generator's convergence. This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator. As an alternative to existing methods that rely on standard data augmentations or model regularization, APA alleviates overfitting by employing the generator itself to augment the real data distribution with generated images, which deceives the discriminator adaptively. Extensive experiments demonstrate the effectiveness of APA in improving synthesis quality in the low-data regime. We provide a theoretical analysis to examine the convergence and rationality of our new training strategy. APA is simple and effective. It can be seamlessly added to powerful contemporary GANs, such as StyleGAN2, with negligible computational cost.
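The adaptive mechanism above can be sketched as follows: with some probability p, real samples shown to the discriminator are replaced by generator outputs presented as "real", and p is adapted from an overfitting heuristic (how confidently the discriminator separates real data). The target value, step size, and all components below are toy stand-ins, not the paper's constants:

```python
import numpy as np

rng = np.random.default_rng(6)

def apa_batch(real, fake, p):
    """With probability p, replace a real sample shown to the discriminator
    by a generated one presented as real (a "pseudo-real" sample)."""
    mask = rng.random(len(real)) < p
    batch = real.copy()
    batch[mask] = fake[mask]
    return batch, mask

def update_p(p, lambda_r, target=0.6, step=0.01):
    """Adapt p from an overfitting heuristic lambda_r: raise p when the
    discriminator looks overconfident on real data, lower it otherwise."""
    return float(np.clip(p + step * np.sign(lambda_r - target), 0.0, 1.0))

real = np.ones((8, 4))    # toy real batch
fake = np.zeros((8, 4))   # toy generated batch

p = update_p(0.0, lambda_r=0.95)        # overconfident discriminator: p grows
batch, mask = apa_batch(real, fake, p=0.5)
```

Because the pseudo-real samples come from the generator itself, the trick needs no external augmentation pipeline, which is what makes it easy to bolt onto models like StyleGAN2.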