Structural topology optimization, which aims to find the optimal physical structure that maximizes mechanical performance, is vital in engineering design applications in aerospace, mechanical, and civil engineering. Generative adversarial networks (GANs) have recently emerged as a popular alternative to traditional iterative topology optimization methods. However, these models are often difficult to train, have limited generalizability, and, because they aim to mimic optimal topologies, neglect manufacturability and performance objectives such as mechanical compliance. We propose TopoDiff, a conditional diffusion-model-based architecture that performs performance-aware and manufacturability-aware topology optimization and overcomes these issues. Our model introduces a surrogate-model-based guidance strategy that actively steers generation toward structures with low compliance and good manufacturability. Our method significantly outperforms a state-of-the-art conditional GAN, reducing the average error on physical performance by a factor of eight and producing eleven times fewer infeasible samples. By introducing diffusion models to topology optimization, we show that conditional diffusion models also have merit in engineering design synthesis applications. Our work further suggests a general framework for engineering optimization problems using diffusion models together with external performance- and constraint-aware guidance.
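A minimal sketch of the kind of surrogate-based guidance the abstract describes, assuming a pretrained regressor `compliance_net(x_t, t)` that predicts compliance from a noisy topology; the regressor, its interface, and the guidance weight are hypothetical placeholders, not TopoDiff's released components.

```python
import torch

def surrogate_guided_mean(diffusion_mean, x_t, t, compliance_net, guidance_scale=1.0):
    """Shift the reverse-diffusion mean toward low predicted compliance.

    diffusion_mean: mean of p(x_{t-1} | x_t) from the conditional diffusion model
    compliance_net: hypothetical surrogate regressor predicting compliance from (x_t, t)
    """
    x_in = x_t.detach().requires_grad_(True)
    predicted_compliance = compliance_net(x_in, t).sum()      # lower is better
    grad = torch.autograd.grad(predicted_compliance, x_in)[0] # d compliance / d x_t
    # Step against the gradient of the predicted objective, analogous to classifier guidance.
    return diffusion_mean - guidance_scale * grad
```

A second surrogate (e.g. a manufacturability classifier) could contribute an additional gradient term in the same way.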
We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. We release our code at https://github.com/openai/guided-diffusion.
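The classifier guidance mentioned here shifts each reverse-step mean by the gradient of a noise-aware classifier, following the published rule mean + s · Σ · ∇ log p(y | x_t). A minimal sketch, where the classifier interface `classifier(x_t, t)` and the scale are user-supplied:

```python
import torch
import torch.nn.functional as F

def classifier_guided_mean(mean, variance, x_t, t, y, classifier, scale=1.0):
    """Classifier guidance: mean + scale * variance * grad_x log p(y | x_t, t)."""
    x_in = x_t.detach().requires_grad_(True)
    log_probs = F.log_softmax(classifier(x_in, t), dim=-1)
    selected = log_probs[torch.arange(y.shape[0]), y].sum()   # log p(y | x_t) for each sample
    grad = torch.autograd.grad(selected, x_in)[0]
    return mean + scale * variance * grad
```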
Denoising diffusion models represent a recent and emerging topic in computer vision, demonstrating remarkable results in the field of generative modeling. A diffusion model is a deep generative model based on two stages: a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data are gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, the model is tasked with recovering the original input data by learning to gradually reverse the diffusion process. Despite their known computational burden, i.e., the large number of steps involved in sampling, diffusion models are widely appreciated for the quality and diversity of the generated samples. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, covering both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational autoencoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. We then introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
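The forward-noising stage described in the survey has a closed form, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε. A small sketch of that step, with an illustrative linear beta schedule standing in for whatever schedule a given model uses:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # illustrative linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product \bar{alpha}_t

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    if noise is None:
        noise = torch.randn_like(x0)
    abar = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    return abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise

x0 = torch.randn(4, 3, 32, 32)                   # stand-in for a batch of images
x_t = q_sample(x0, torch.tensor([10, 100, 500, 999]))
```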
Classifier guidance is a recently introduced method for trading off mode coverage and sample fidelity in conditional diffusion models after training, in the same spirit as low-temperature sampling or truncation in other types of generative models. Classifier guidance combines the score estimate of a diffusion model with the gradient of an image classifier, and therefore requires training an image classifier separate from the diffusion model. It also raises the question of whether guidance can be performed without a classifier. We show that guidance can indeed be performed by a pure generative model without such a classifier: in what we call classifier-free guidance, we jointly train conditional and unconditional diffusion models, and we combine the resulting conditional and unconditional score estimates to attain a trade-off between sample quality and diversity similar to that obtained using classifier guidance.
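The combination of conditional and unconditional score estimates is usually implemented on the noise predictions. A minimal sketch, where `model` is assumed to accept an optional condition (a null condition giving the unconditional prediction) and `w` is the guidance weight:

```python
def classifier_free_eps(model, x_t, t, cond, w=3.0):
    """Classifier-free guidance: eps = (1 + w) * eps(x_t, cond) - w * eps(x_t, null)."""
    eps_cond = model(x_t, t, cond)    # conditional noise prediction
    eps_uncond = model(x_t, t, None)  # unconditional (null-token) prediction
    return (1.0 + w) * eps_cond - w * eps_uncond
```

Setting w = 0 recovers the purely conditional model; larger w trades diversity for fidelity.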
Denoising diffusion probabilistic models (DDPMs) are a recent family of generative models that achieve state-of-the-art results. To obtain class-conditional generation, it has been proposed to guide the diffusion process with gradients from a time-dependent classifier. While this idea is theoretically sound, deep-learning-based classifiers are notoriously susceptible to gradient-based adversarial attacks. Therefore, while traditional classifiers may achieve good accuracy scores, their gradients can be unreliable and may hinder the improvement of the generated results. Recent work has found that adversarially robust classifiers exhibit gradients that are aligned with human perception, and these can better guide the generative process toward semantically meaningful images. We exploit this observation by defining and training a time-dependent adversarially robust classifier and using it as guidance for a generative diffusion model. In experiments on the highly challenging and diverse ImageNet dataset, our scheme introduces more perceptible intermediate gradients, better alignment with theoretical findings, and improved generation results under several evaluation metrics. In addition, we conduct an opinion survey whose findings indicate that human raters prefer our method's results.
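One generic way to obtain a time-dependent adversarially robust classifier is standard PGD adversarial training on noisy inputs; the sketch below is an illustration of that recipe under assumed interfaces, not the authors' exact training procedure.

```python
import torch
import torch.nn.functional as F

def pgd_attack(classifier, x_t, t, y, eps=0.03, alpha=0.01, steps=5):
    """L_inf-bounded PGD perturbation of the noisy input x_t."""
    x_adv = x_t.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv, t), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x_t + (x_adv - x_t).clamp(-eps, eps)   # project back into the eps-ball
    return x_adv

def robust_training_step(classifier, optimizer, x_t, t, y):
    """Train the time-dependent classifier on adversarially perturbed noisy images."""
    x_adv = pgd_attack(classifier, x_t, t, y)
    loss = F.cross_entropy(classifier(x_adv, t), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At sampling time the robust classifier's gradient is used exactly as in standard classifier guidance.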
Digital art synthesis is receiving increasing attention in the multimedia community, as it is an effective way to engage the public with art. Current digital art synthesis methods usually use single-modality inputs as guidance, which limits the expressiveness of the model and the diversity of the generated results. To address this problem, we propose the Multimodal Guided Artwork Diffusion (MGAD) model, a diffusion-based digital artwork generation method that utilizes multimodal prompts as guidance to control a classifier-free diffusion model. In addition, the Contrastive Language-Image Pretraining (CLIP) model is used to unify the text and image modalities. Extensive experimental results on the quality and quantity of generated digital art paintings confirm the effectiveness of combining diffusion models with multimodal guidance. Code is available at https://github.com/haha-lisa/mgad-multimodal-guided-artwork-diffusion.
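A hedged sketch of multimodal guidance in the spirit of this abstract: gradients of CLIP similarity to a text prompt and to a reference image are combined and used to steer sampling. The `clip_image_embed` helper, the precomputed embeddings, and the weights are hypothetical stand-ins, not MGAD's actual interface.

```python
import torch
import torch.nn.functional as F

def multimodal_guidance_grad(x_pred, clip_image_embed, text_feat, image_feat,
                             w_text=1.0, w_image=0.5):
    """Gradient of a weighted CLIP-similarity objective w.r.t. the predicted image.

    x_pred: current estimate of the clean image (e.g. predicted x_0)
    clip_image_embed: hypothetical callable mapping images to CLIP embeddings
    text_feat / image_feat: precomputed CLIP embeddings of the prompt and reference image
    """
    x_in = x_pred.detach().requires_grad_(True)
    img_emb = F.normalize(clip_image_embed(x_in), dim=-1)
    sim_text = (img_emb * F.normalize(text_feat, dim=-1)).sum()
    sim_image = (img_emb * F.normalize(image_feat, dim=-1)).sum()
    objective = w_text * sim_text + w_image * sim_image
    return torch.autograd.grad(objective, x_in)[0]   # added (scaled) to the sampling mean
```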
Leveraging recent advances in deep learning, text-to-image generation models have lately attracted considerable public attention. Two of these models, DALL-E 2 and Imagen, have demonstrated that highly photorealistic images can be generated from a simple text description of an image. Based on a novel image generation approach called diffusion models, text-to-image models can produce many different types of high-resolution images, with human imagination being the only limit. However, these models require massive computational resources to train and are trained on huge datasets collected from the internet. In addition, neither the codebases nor the models have been released. This prevents the AI community from experimenting with these cutting-edge models, making the reproduction of their results complicated, if not impossible. In this paper, we aim to first review the different approaches and techniques used by these models, and then propose our own implementation of a text-to-image model. Largely based on DALL-E 2, we introduce some slight modifications to cope with the high computational cost involved. We thus have the opportunity to conduct experiments to understand the capabilities of these models, especially in a low-resource regime. In particular, we provide a more in-depth analysis than the authors of DALL-E 2, including an ablation study. In addition, diffusion models use so-called guidance methods to help the generation process. We introduce a new guidance method that can be used in conjunction with other guidance methods to improve image quality. Finally, the images generated by our model are of reasonably good quality, without having to bear the significant training cost of state-of-the-art text-to-image models.
Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However, oftentimes their results can be unrealistic with observable color shifts and textures. We believe that this issue results from the divergence between the probabilistic distribution learned by the model and the distribution of natural images. The delicate conditions gradually enlarge the divergence during each sampling timestep. To address this issue, we introduce a new method that brings the predicted samples to the training data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks. The improvements obtained by our method suggest that the priors can be incorporated as a general plugin for improving conditional diffusion models.
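A minimal sketch of the idea described here, assuming both models expose a noise-prediction interface: the conditional model's prediction is blended with a pretrained unconditional model's prediction at every sampling step, so the latter acts as a regularizer. The blending rule and weight below are illustrative assumptions; the paper's exact combination may differ.

```python
def regularized_eps(cond_model, uncond_model, x_t, t, cond, lam=0.2):
    """Blend conditional and pretrained unconditional noise predictions.

    The unconditional term pulls intermediate samples back toward the
    natural-image manifold, countering color shifts and texture artifacts
    introduced by the conditional model.
    """
    eps_cond = cond_model(x_t, t, cond)
    eps_uncond = uncond_model(x_t, t)
    return (1.0 - lam) * eps_cond + lam * eps_uncond
```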
A wide variety of deep generative models has been developed in the past decade. However, these models often struggle to simultaneously address three key requirements: high sample quality, mode coverage, and fast sampling. We call the challenge imposed by these requirements the generative learning trilemma, as existing models often trade some of them for others. In particular, denoising diffusion models have shown impressive sample quality and diversity, but their expensive sampling has so far prevented their application in many real-world settings. In this paper, we argue that slow sampling in these models is fundamentally attributable to the Gaussian assumption in the denoising step, which is justified only for small step sizes. To enable denoising with large steps, and thereby reduce the total number of denoising steps, we propose to model the denoising distribution with a complex multimodal distribution. We introduce denoising diffusion generative adversarial networks (denoising diffusion GANs), which model each denoising step using a multimodal conditional GAN. Through extensive evaluations, we show that denoising diffusion GANs obtain sample quality and diversity competitive with original diffusion models while being 2000× faster on the CIFAR-10 dataset. Compared to traditional GANs, our model exhibits better mode coverage and sample diversity. To the best of our knowledge, denoising diffusion GAN is the first model that reduces sampling cost in diffusion models enough to allow them to be applied to real-world applications cheaply. Project page and code: https://nvlabs.github.io/denoising-diffusion-gan
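The key mechanism described here is that each large denoising step is modeled by a conditional GAN generator that maps (x_t, t, z) to a clean estimate, after which x_{t−1} is drawn from the usual Gaussian posterior q(x_{t−1} | x_t, x̂_0). A sketch of that sampling step under assumed tensor shapes, schedule variables, and generator interface:

```python
import torch

def posterior_sample(x_t, x0_pred, t, betas, alphas_bar):
    """Sample x_{t-1} ~ q(x_{t-1} | x_t, x0_pred) for a DDPM-style schedule (t is an int)."""
    alphas = 1.0 - betas
    abar_t = alphas_bar[t]
    abar_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
    coef0 = betas[t] * abar_prev.sqrt() / (1.0 - abar_t)
    coeft = (1.0 - abar_prev) * alphas[t].sqrt() / (1.0 - abar_t)
    mean = coef0 * x0_pred + coeft * x_t
    var = betas[t] * (1.0 - abar_prev) / (1.0 - abar_t)
    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + var.sqrt() * noise

def ddgan_step(generator, x_t, t, betas, alphas_bar):
    """One large denoising step: a GAN generator predicts x_0 from (x_t, t, z)."""
    z = torch.randn(x_t.shape[0], 128, device=x_t.device)  # latent z makes the step multimodal
    x0_pred = generator(x_t, t, z)                          # hypothetical generator interface
    return posterior_sample(x_t, x0_pred, t, betas, alphas_bar)
```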
Denoising diffusion probabilistic models (DDPMs) can generate class-conditional images from prior noise to real data by introducing an independent noise-aware classifier that provides conditional gradient guidance at each time step of the denoising process. However, because the classifier can easily discriminate incompletely generated images that have only high-level structure, the gradient, a kind of class-information guidance, tends to vanish early, causing the conditional generation process to collapse into the unconditional one. To address this problem, we propose two simple but effective approaches from two perspectives. For the sampling procedure, we take the entropy of the predicted distribution as a measure of the degree of guidance vanishing and propose an entropy-aware scaling method to adaptively recover the conditional semantic guidance for each generated sample. For the training stage, we propose an entropy-aware optimization objective to alleviate overconfident predictions on noisy data. On ImageNet 1000 256×256, with our proposed sampling scheme and trained classifier, the pretrained conditional and unconditional DDPM models can achieve 10.89% (4.59 → 4.09) and 43.5% (12.00 → 6.78) FID improvements, respectively.
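A sketch of the entropy-aware scaling idea: the entropy of the classifier's predictive distribution at x_t indicates how far the class guidance has vanished, and the guidance scale is raised accordingly. The linear rescaling below is an assumption for illustration; the paper's exact function may differ.

```python
import torch
import torch.nn.functional as F

def entropy_aware_scale(logits, base_scale=1.0, max_scale=10.0):
    """Per-sample guidance scale from the normalized entropy of the classifier prediction.

    High entropy -> guidance has vanished -> increase the scale (up to max_scale).
    """
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[-1])))  # entropy of the uniform distribution
    ratio = (entropy / max_entropy).clamp(0.0, 1.0)                 # 0 = confident, 1 = uniform
    return base_scale + ratio * (max_scale - base_scale)
```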
Diffusion models are a class of deep generative models that have shown impressive results on various tasks and rest on a solid theoretical foundation. Although diffusion models achieve impressive sample synthesis quality and diversity compared with other state-of-the-art models, they still suffer from costly sampling procedures and suboptimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this paper, we present the first comprehensive review of existing variants of diffusion models. Specifically, we provide the first taxonomy of diffusion models, categorizing them into three types: sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also give a detailed introduction to the other five generative models (i.e., variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models) and clarify the connections between diffusion models and these generative models. We then conduct a thorough investigation of the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multimodal modeling, molecular graph generation, time series modeling, and adversarial purification. Furthermore, we propose new perspectives related to the development of this class of generative models.
Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique that trades off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and it often produces photorealistic samples. Samples from a 3.5-billion-parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators over those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing. We train a smaller model on a filtered dataset and release the code and weights at https://github.com/openai/glide-text2im.
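For the text-driven editing mentioned here, GLIDE fine-tunes the diffusion model on masked inputs; a common simplified illustration of mask-constrained editing, not GLIDE's exact procedure, is to re-inject the known pixels (re-noised to the current level) after every denoising step:

```python
def inpaint_step(x_prev, x_orig, mask, t, q_sample):
    """Keep known pixels consistent during text-driven editing.

    mask == 1 marks the region to be regenerated from the text prompt.
    q_sample(x0, t) draws x_t ~ q(x_t | x_0) as in the forward process
    (any forward-noising helper with this signature can be passed in).
    """
    known = q_sample(x_orig, t) if t > 0 else x_orig
    return mask * x_prev + (1.0 - mask) * known
```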
In recent years, generative models have undergone significant advancement due to the success of diffusion models. The success of these models is often attributed to their use of guidance techniques, such as classifier and classifier-free methods, which provide effective mechanisms to trade off fidelity and diversity. However, these methods are not capable of guiding a generated image to be aware of its geometric configuration, e.g., depth, which hinders the application of diffusion models to areas that require a certain level of depth awareness. To address this limitation, we propose a novel guidance approach for diffusion models that uses estimated depth information derived from the rich intermediate representations of diffusion models. To do this, we first present a label-efficient depth estimation framework using the internal representations of diffusion models. At the sampling phase, we utilize two guidance techniques to self-condition the generated image using the estimated depth map: the first uses pseudo-labeling, and the second uses a depth-domain diffusion prior. Experiments and extensive ablation studies demonstrate the effectiveness of our method in guiding diffusion models toward geometrically plausible image generation. The project page is available at https://ku-cvlab.github.io/DAG/.
In this paper, we propose a diffusion-based face swapping framework for the first time, called DiffFace, composed of ID-conditional DDPM training, sampling with facial guidance, and target-preserving blending. Specifically, in the training process, the ID-conditional DDPM is trained to generate face images with the desired identity. In the sampling process, we use off-the-shelf facial expert models to make the model transfer the source identity while faithfully preserving the target attributes. During this process, to preserve the background of the target image and obtain the desired face swapping result, we additionally propose a target-preserving blending strategy. It helps our model keep the attributes of the target face away from noise while transferring the source facial identity. In addition, without any re-training, our model can flexibly apply additional facial guidance and adaptively control the ID-attributes trade-off to achieve the desired results. To the best of our knowledge, this is the first approach that applies a diffusion model to the face-swapping task. Compared with previous GAN-based approaches, by taking advantage of the diffusion model for face swapping, DiffFace offers advantages such as training stability, high fidelity, sample diversity, and controllability. Extensive experiments show that DiffFace is comparable or superior to state-of-the-art methods on several standard face-swapping benchmarks.
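A sketch of the two ingredients named here: an off-the-shelf face-embedding network (the hypothetical `id_encoder`) provides a gradient that pushes the predicted face toward the source identity, and a target-preserving blend keeps background pixels from the target. Interfaces and weights are assumptions for illustration, not DiffFace's released code.

```python
import torch
import torch.nn.functional as F

def identity_guidance_grad(x0_pred, source_id_feat, id_encoder):
    """Gradient of cosine similarity between the predicted face and the source identity."""
    x_in = x0_pred.detach().requires_grad_(True)
    pred_feat = F.normalize(id_encoder(x_in), dim=-1)                 # hypothetical face-expert model
    sim = (pred_feat * F.normalize(source_id_feat, dim=-1)).sum()
    return torch.autograd.grad(sim, x_in)[0]                          # added (scaled) during sampling

def target_preserving_blend(x_gen, x_target_noised, face_mask):
    """Keep the target's background; only the masked face region comes from the model."""
    return face_mask * x_gen + (1.0 - face_mask) * x_target_noised
```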
Generating photos satisfying multiple constraints finds broad utility in the content creation industry. A key hurdle to accomplishing this task is the need for paired data consisting of all modalities (i.e., constraints) and their corresponding output. Moreover, existing methods need retraining using paired data across all modalities to introduce a new condition. This paper proposes a solution to this problem based on denoising diffusion probabilistic models (DDPMs). Our motivation for choosing diffusion models over other generative models comes from the flexible internal structure of diffusion models. Since each sampling step in the DDPM follows a Gaussian distribution, we show that there exists a closed-form solution for generating an image given various constraints. Our method can unite multiple diffusion models trained on multiple sub-tasks and conquer the combined task through our proposed sampling strategy. We also introduce a novel reliability parameter that allows using different off-the-shelf diffusion models trained across various datasets during sampling time alone to guide it to the desired outcome satisfying multiple constraints. We perform experiments on various standard multimodal tasks to demonstrate the effectiveness of our approach. More details can be found at https://nithin-gk.github.io/projectpages/Multidiff/index.html
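A sketch of the spirit of the closed-form combination described here: because each per-step transition is Gaussian, predictions from several off-the-shelf conditional models can be fused at every sampling step, with per-model reliability weights controlling how much each constraint contributes. The simple weighted fusion below is a stand-in for the paper's derivation, not its exact formula.

```python
def fuse_predictions(eps_uncond, eps_conds, reliabilities):
    """Combine an unconditional noise prediction with several conditional ones.

    eps_conds: list of noise predictions, one per constraint/modality
    reliabilities: list of non-negative weights expressing trust in each model
    """
    fused = eps_uncond.clone()
    for eps_c, r in zip(eps_conds, reliabilities):
        fused = fused + r * (eps_c - eps_uncond)   # each condition nudges the base prediction
    return fused
```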
We show that cascaded diffusion models are capable of generating high-fidelity images on the class-conditional ImageNet generation benchmark, without any assistance from auxiliary image classifiers to boost sample quality. A cascaded diffusion model comprises a pipeline of multiple diffusion models that generate images of increasing resolution, beginning with a standard diffusion model at the lowest resolution, followed by one or more super-resolution diffusion models that successively upsample the image and add higher-resolution details. We find that the sample quality of the cascading pipeline depends crucially on conditioning augmentation, our proposed method of applying data augmentation to the lower-resolution conditioning inputs of the super-resolution models. Our experiments show that conditioning augmentation prevents compounding error during sampling in a cascaded model, helping us train cascading pipelines that achieve FID scores of 3.52 at 128×128 and 4.88 at 256×256, and classification accuracy scores of 63.02% (top-1) and 84.06% (top-5) at 256×256, outperforming VQ-VAE-2.
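Conditioning augmentation, as described here, applies data augmentation, commonly Gaussian noise, to the low-resolution conditioning image before it is fed to the super-resolution stage. A minimal sketch with an illustrative augmentation level:

```python
import torch
import torch.nn.functional as F

def augment_conditioning(low_res, aug_level=0.1):
    """Gaussian conditioning augmentation for a super-resolution diffusion stage."""
    return low_res + aug_level * torch.randn_like(low_res)

def super_res_input(low_res, high_res_shape, aug_level=0.1):
    """Augment, then upsample the conditioning image to the target resolution."""
    cond = augment_conditioning(low_res, aug_level)
    return F.interpolate(cond, size=high_res_shape, mode="bilinear", align_corners=False)
```

The augmentation level itself can be conditioned on (or swept at sampling time), since the mismatch between clean training conditions and imperfect lower-stage samples is exactly the compounding error the technique targets.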
Controllable image synthesis models allow the creation of diverse images based on text instructions or guidance from an example image. Recently, denoising diffusion probabilistic models have been shown to generate more realistic images than prior methods and have been successfully demonstrated in unconditional and class-conditional settings. We explore fine-grained, continuous control of this model class and introduce a novel unified framework for semantic diffusion guidance, which allows either language or image guidance, or both. Guidance is injected into a pretrained unconditional diffusion model using the gradient of image-text or image matching scores. We explore CLIP-based text guidance, as well as both content- and style-based image guidance, in a unified form. Our text-guided synthesis approach can be applied to datasets without associated text annotations. We conduct experiments on the FFHQ and LSUN datasets and show results of fine-grained text-guided image synthesis, synthesis of images related to a style or content example image, and examples with both text and image guidance.
Denoising diffusion models hold great promise for generating diverse and realistic human motions. However, existing motion diffusion models largely disregard the laws of physics in the diffusion process and often generate physically-implausible motions with pronounced artifacts such as floating, foot sliding, and ground penetration. This seriously impacts the quality of generated motions and limits their real-world application. To address this issue, we present a novel physics-guided motion diffusion model (PhysDiff), which incorporates physical constraints into the diffusion process. Specifically, we propose a physics-based motion projection module that uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically-plausible motion. The projected motion is further used in the next diffusion step to guide the denoising diffusion process. Intuitively, the use of physics in our model iteratively pulls the motion toward a physically-plausible space. Experiments on large-scale human motion datasets show that our approach achieves state-of-the-art motion quality and improves physical plausibility drastically (>78% for all datasets).
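A pseudocode-style sketch of the projection loop described here: at selected diffusion steps the denoised motion is projected through a physics simulator via motion imitation, and the projected motion is used to continue the denoising process. The `simulate_imitation` call and the `diffusion.predict_x0` / `q_posterior_sample` helpers are hypothetical interfaces standing in for the paper's modules.

```python
def physdiff_sample(model, diffusion, x_T, project_steps, simulate_imitation):
    """Reverse diffusion with physics-based motion projection at selected steps.

    simulate_imitation(motion) -> physically-plausible motion tracked in a simulator
    (hypothetical interface for the physics-based projection module).
    """
    x_t = x_T
    for t in reversed(range(diffusion.num_timesteps)):
        x0_pred = diffusion.predict_x0(model, x_t, t)        # denoised motion estimate
        if t in project_steps:
            x0_pred = simulate_imitation(x0_pred)            # project onto physical motions
        x_t = diffusion.q_posterior_sample(x_t, x0_pred, t)  # continue denoising from x0_pred
    return x_t
```

Applying the projection only at a subset of steps keeps the simulator cost bounded while iteratively pulling the motion toward the physically plausible space.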
Diffusion Probabilistic Models (DPMs) have recently been employed for image deblurring. DPMs are trained via a stochastic denoising process that maps Gaussian noise to the high-quality image, conditioned on the concatenated blurry input. Despite their high-quality generated samples, image-conditioned Diffusion Probabilistic Models (icDPMs) rely on synthetic pairwise training data (in-domain), with potentially unclear robustness towards real-world unseen images (out-of-domain). In this work, we investigate the generalization ability of icDPMs in deblurring and propose a simple but effective guidance that significantly alleviates artifacts and improves out-of-distribution performance. In particular, we propose to first extract a multiscale domain-generalizable representation from the input image that removes domain-specific information while preserving the underlying image structure. The representation is then added to the feature maps of the conditional diffusion model as extra guidance that helps improve generalization. To benchmark, we focus on out-of-distribution performance by applying a single-dataset-trained model to three external and diverse test sets. The effectiveness of the proposed formulation is demonstrated by improvements over the standard icDPM, as well as state-of-the-art perceptual quality and competitive distortion metrics compared to existing methods.
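A sketch of the feature-level guidance described here: a multiscale representation of the blurry input is extracted and added to the conditional UNet's feature maps at matching resolutions. The encoder architecture and injection points below are assumptions for illustration, not the paper's exact network.

```python
import torch.nn as nn

class RepresentationEncoder(nn.Module):
    """Hypothetical encoder producing multiscale features of the blurry input."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        stages, in_ch = [], 3
        for ch in channels:
            stages.append(nn.Sequential(nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.SiLU()))
            in_ch = ch
        self.stages = nn.ModuleList(stages)

    def forward(self, blurry):
        feats, x = [], blurry
        for stage in self.stages:
            x = stage(x)
            feats.append(x)   # one feature map per UNet resolution level
        return feats

# Inside the diffusion UNet forward pass (illustrative): h = h + guidance_feats[level]
```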
Deep learning has shown great potential for generative tasks. Generative models are a class of models that can randomly generate observations according to certain implicit parameters. Recently, diffusion models have become a rising class of generative models owing to their generative capability, and tremendous achievements have been made. Beyond computer vision, speech generation, bioinformatics, and natural language processing, more applications remain to be explored in this field. However, diffusion models have the inherent drawback of a slow generation process, which has motivated many works on improvement. This survey summarizes the field of diffusion models. We first state the main problems addressed by two landmark works, DDPM and DSM. Then, we present a variety of advanced techniques for speeding up diffusion models: training schedules, training-free sampling, mixed modeling, and score and diffusion unification. Regarding existing models, we also provide benchmarks of FID scores and NLL at specific NFE. Moreover, applications of diffusion models are introduced, including computer vision, sequence modeling, audio, and AI for science. Finally, we summarize the field together with its limitations and future directions.