Diffusion models can be used as learned priors for solving various inverse problems. However, most existing approaches are restricted to linear inverse problems, limiting their applicability to more general cases. In this paper, we build upon Denoising Diffusion Restoration Models (DDRM) and propose a method for solving some non-linear inverse problems. We leverage the pseudo-inverse operator used in DDRM and generalize this concept to other measurement operators, which allows us to use pre-trained unconditional diffusion models for applications such as JPEG artifact correction. We empirically demonstrate the effectiveness of our approach across various quality factors, attaining performance levels comparable to state-of-the-art methods trained specifically for the JPEG restoration task.
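A minimal sketch of the range-space consistency idea behind this kind of generalization, using JPEG encode/decode as the measurement operator A and its decode path as an approximate pseudo-inverse A+; the denoiser producing the clean estimate, the guidance weight, and the array conventions are assumptions rather than the paper's exact scheme:

```python
# Hedged sketch: JPEG compression as a measurement operator A, with decode acting
# as an approximate pseudo-inverse A+, used for a DDRM-style consistency step.
# `x` and `y_decoded` are float arrays in [0, 1] with shape (H, W, 3).
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(x, quality):
    """Apply A followed by A+ : compress to JPEG and decode back to pixels."""
    img = Image.fromarray(np.clip(x * 255.0, 0, 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float32) / 255.0

def consistency_step(x0_hat, y_decoded, quality, weight=1.0):
    """Pull the content that JPEG preserves toward the observation, keeping the
    content JPEG discards from the current clean estimate x0_hat."""
    projected = jpeg_roundtrip(x0_hat, quality)       # A+ A x0_hat
    return x0_hat + weight * (y_decoded - projected)  # x0_hat + w (A+ y - A+ A x0_hat)
```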
Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However, oftentimes their results can be unrealistic with observable color shifts and textures. We believe that this issue results from the divergence between the probabilistic distribution learned by the model and the distribution of natural images. The delicate conditions gradually enlarge the divergence during each sampling timestep. To address this issue, we introduce a new method that brings the predicted samples to the training data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks. The improvements obtained by our method suggest that the priors can be incorporated as a general plugin for improving conditional diffusion models.
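An illustrative sketch (not the paper's exact rule) of using an unconditional prior to regularize each conditional sampling step by blending the two noise predictions; the blending weight `w` and the model interfaces are assumptions:

```python
# Illustrative sketch: blend a conditional model's noise prediction with a
# pretrained unconditional model's prediction at every sampling step, so the
# unconditional prior pulls samples back toward the natural-image manifold.
import torch

@torch.no_grad()
def regularized_eps(cond_model, uncond_model, x_t, t, cond, w=0.2):
    eps_cond = cond_model(x_t, t, cond)   # prediction from the conditional model
    eps_prior = uncond_model(x_t, t)      # prediction from the unconditional prior
    return (1.0 - w) * eps_cond + w * eps_prior
```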
Most existing Image Restoration (IR) models are task-specific, which cannot be generalized to different degradation operators. In this work, we propose the Denoising Diffusion Null-Space Model (DDNM), a novel zero-shot framework for arbitrary linear IR problems, including but not limited to image super-resolution, colorization, inpainting, compressed sensing, and deblurring. DDNM only needs a pre-trained off-the-shelf diffusion model as the generative prior, without any extra training or network modifications. By refining only the null-space contents during the reverse diffusion process, we can yield diverse results satisfying both data consistency and realness. We further propose an enhanced and robust version, dubbed DDNM+, to support noisy restoration and improve restoration quality for hard tasks. Our experiments on several IR tasks reveal that DDNM outperforms other state-of-the-art zero-shot IR methods. We also demonstrate that DDNM+ can solve complex real-world applications, e.g., old photo restoration.
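A minimal sketch of the null-space refinement x0 <- A+ y + (I - A+ A) x0, specialized here to inpainting, where A is element-wise masking and A+ equals A; the diffusion denoiser producing x0_hat is assumed to exist elsewhere:

```python
# Minimal sketch of DDNM's null-space refinement for the inpainting case.
import torch

def ddnm_refine(x0_hat, y, mask):
    """x0_hat: current clean estimate, y: masked observation, mask: 1 = observed."""
    range_part = mask * y               # A+ y               (data-consistent content)
    null_part = (1.0 - mask) * x0_hat   # (I - A+ A) x0_hat  (freely generated content)
    return range_part + null_part

# Usage inside a reverse-diffusion loop (schematic):
#   x0_hat = predict_x0(model, x_t, t)
#   x0_hat = ddnm_refine(x0_hat, y, mask)
#   x_{t-1} is then sampled from x0_hat with the usual DDPM/DDIM update.
```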
Image restoration under adverse weather conditions has been of significant interest for various computer vision applications. Recent successful methods rely on the current progress in deep neural network architecture designs (e.g., with vision transformers). Motivated by the recent progress achieved with state-of-the-art conditional generative models, we present a patch-based image restoration algorithm based on denoising diffusion probabilistic models. Our patch-based diffusion modeling approach enables size-agnostic image restoration by using a guided denoising process with smoothed noise estimates across overlapping patches during inference. We empirically evaluate our model on benchmark datasets for image desnowing, combined deraining and dehazing, and raindrop removal. We demonstrate that our approach achieves state-of-the-art performance on both weather-specific and multi-weather image restoration, and qualitatively show strong generalization to real-world test images.
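A hedged sketch of the size-agnostic sampling idea: at each reverse step, the full-image noise estimate is assembled by averaging per-patch predictions from overlapping windows. Patch size, stride, and the conditional denoiser interface below are assumptions:

```python
# Average per-patch noise predictions over overlapping windows (assumes H, W >= patch).
import torch

@torch.no_grad()
def patched_eps(model, x_t, cond, t, patch=64, stride=32):
    _, _, h, w = x_t.shape
    rows = list(range(0, h - patch + 1, stride))
    cols = list(range(0, w - patch + 1, stride))
    if rows[-1] != h - patch:
        rows.append(h - patch)   # make sure the bottom edge is covered
    if cols[-1] != w - patch:
        cols.append(w - patch)   # make sure the right edge is covered
    eps_sum = torch.zeros_like(x_t)
    counts = torch.zeros_like(x_t)
    for i in rows:
        for j in cols:
            sl = (slice(None), slice(None), slice(i, i + patch), slice(j, j + patch))
            eps_sum[sl] += model(x_t[sl], cond[sl], t)
            counts[sl] += 1.0
    return eps_sum / counts      # smoothed noise estimate for the whole image
```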
Diffusion models are a new class of generative models that have marked a milestone in high-quality image generation while relying on solid probabilistic principles. This makes them promising candidate models for neural image compression. This paper outlines an end-to-end optimized framework based on a conditional diffusion model. In addition to the latent variables inherent to the diffusion process, the model introduces an additional "content" latent variable to condition the denoising (reverse diffusion) process. Upon decoding, the diffusion process conditionally generates/reconstructs the image via ancestral sampling. Our experiments show that this approach outperforms one of the best-performing conventional image codecs (BPG) as well as a neural codec on two compression benchmarks, where we focus on the rate-perception trade-off. Qualitatively, our approach shows fewer decompression artifacts than classical approaches.
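A schematic of the codec structure described above, with an analysis transform producing a quantized content latent and a diffusion sampler conditioned on it at decode time; component names and the rounding-based quantizer are illustrative assumptions:

```python
# Schematic conditional-diffusion codec: encode a content latent, decode by
# running the reverse diffusion process conditioned on it.
import torch

def encode(analysis_net, x):
    z = analysis_net(x)                       # content latent describing the image
    return torch.round(z)                     # quantize before entropy coding (omitted)

@torch.no_grad()
def decode(diffusion_sampler, z_hat, shape):
    x = torch.randn(shape)                    # start the reverse process from noise
    return diffusion_sampler(x, cond=z_hat)   # ancestral sampling conditioned on z_hat
```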
Image deblurring is an ill-posed problem with multiple plausible solutions for a given input image. However, most existing methods produce a deterministic estimate of the clean image and are trained to minimize pixel-level distortion. These metrics are known to correlate poorly with human perception and often lead to unrealistic reconstructions. We present an alternative framework for blind deblurring based on conditional diffusion models. Unlike existing techniques, we train a stochastic sampler that refines the output of a deterministic predictor and is capable of producing a diverse set of plausible reconstructions for a given input. This leads to a significant improvement in perceptual quality over existing state-of-the-art methods across multiple standard benchmarks. Our predict-and-refine approach also enables much more efficient sampling compared to typical diffusion models. Combined with a carefully tuned network architecture and inference procedure, our method is competitive in terms of distortion metrics such as PSNR. These results demonstrate clear advantages of our diffusion-based approach and challenge the widely used strategy of producing a single deterministic reconstruction.
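A schematic of the predict-and-refine idea, assuming (as one plausible reading) that the diffusion model generates a residual on top of the deterministic predictor's output; the function names are placeholders:

```python
# Predict-and-refine sampling sketch: a deterministic predictor gives an initial
# estimate, and a conditional diffusion sampler adds a stochastic residual.
# Drawing several samples yields diverse plausible reconstructions.
import torch

@torch.no_grad()
def deblur(predictor, diffusion_sample, blurry):
    x_init = predictor(blurry)                                       # deterministic initial estimate
    residual = diffusion_sample(cond=torch.cat([blurry, x_init], dim=1))
    return x_init + residual                                         # stochastic refinement
```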
Recent deep learning methods have achieved promising results in image shadow removal. However, their restored images still suffer from unsatisfactory boundary artifacts, due to the lack of degradation prior embedding and the deficiency in modeling capacity. Our work addresses these issues by proposing a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal. In detail, we first propose a shadow degradation model, which inspires us to build a novel unrolling diffusion model, dubbed ShadowDiffusion. It remarkably improves the model's capacity in shadow removal via progressively refining the desired output with both degradation prior and diffusive generative prior, which by nature can serve as a new strong baseline for image restoration. Furthermore, ShadowDiffusion progressively refines the estimated shadow mask as an auxiliary task of the diffusion generator, which leads to more accurate and robust shadow-free image generation. We conduct extensive experiments on three popular public datasets, including ISTD, ISTD+, and SRD, to validate our method's effectiveness. Compared to the state-of-the-art methods, our model achieves a significant improvement in terms of PSNR, increasing from 31.69dB to 34.73dB on the SRD dataset.
Although many long-range imaging systems are designed to support extended-vision applications, a natural obstacle to their operation is degradation due to atmospheric turbulence. Atmospheric turbulence causes significant degradation of image quality by introducing blur and geometric distortion. In recent years, various deep learning-based single-image mitigation methods, including CNN-based and GAN inversion-based approaches, have been proposed in the literature that attempt to remove the distortion in the image. However, some of these methods are difficult to train and often fail to reconstruct facial features, producing unrealistic results, especially in the case of high turbulence. Denoising Diffusion Probabilistic Models (DDPMs) have recently gained some traction due to their stable training process and their ability to generate high-quality images. In this paper, we propose the first DDPM-based solution for the problem of atmospheric turbulence mitigation. We also propose a fast sampling technique for reducing the inference time of conditional DDPMs. Extensive experiments are conducted on synthetic and real-world data to show the significance of our model. To facilitate further research, all code and pretrained models will be made public after the review process.
Diffusion models have shown impressive image generation performance and have been used for various computer vision tasks. Unfortunately, image generation with diffusion models is very time-consuming, since it requires thousands of sampling steps. To address this problem, we present a novel pyramidal diffusion model that generates high-resolution images starting from coarser-resolution images, using a single score function trained with a positional embedding. This enables time-efficient sampling for image generation and also alleviates the small-batch-size problem when training with limited resources. Furthermore, we show that the proposed approach with a single score function can be effectively used for multi-scale super-resolution problems.
In recent years, denoising diffusion models have demonstrated outstanding image generation performance. The information on natural images captured by these models is useful for many image reconstruction applications, where the task is to restore a clean image from its degraded observations. In this work, we propose a conditional sampling scheme that exploits the prior learned by diffusion models while retaining agreement with the observations. We then combine it with a novel approach for adapting pretrained diffusion denoising networks to their input. We examine two adaptation strategies: the first uses only the degraded image, while the second, which we advocate, is performed using images that are ``nearest neighbors'' of the degraded image, retrieved from a diverse dataset using an off-the-shelf visual-language model. To evaluate our method, we test it on two state-of-the-art publicly available diffusion models, Stable Diffusion and Guided Diffusion. We show that our proposed `adaptive diffusion for image reconstruction' (ADIR) approach achieves a significant improvement in the super-resolution, deblurring, and text-based editing tasks.
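A hedged sketch of the retrieval-then-adapt idea: embed the degraded image with an off-the-shelf vision-language encoder, keep its nearest neighbors from an external dataset, and briefly fine-tune the pretrained denoiser on them with the standard epsilon-prediction loss. All names and hyperparameters below are assumptions:

```python
# Retrieval-based adaptation sketch: `embed` is any image encoder (e.g., CLIP's
# image tower) returning (N, d) features; `denoiser` is the pretrained network.
import torch
import torch.nn.functional as F

def retrieve_neighbors(embed, y, dataset_images, k=16):
    q = F.normalize(embed(y), dim=-1)                    # (1, d) query from degraded image
    keys = F.normalize(embed(dataset_images), dim=-1)    # (N, d) candidate embeddings
    scores = (keys @ q.t()).squeeze(1)                   # cosine similarities
    return dataset_images[scores.topk(k).indices]

def adapt(denoiser, neighbors, alphas_bar, steps=100, lr=1e-5):
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        x0 = neighbors[torch.randint(len(neighbors), (4,))]
        t = torch.randint(len(alphas_bar), (x0.shape[0],))
        eps = torch.randn_like(x0)
        a = alphas_bar[t].view(-1, 1, 1, 1)
        x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps        # forward-diffuse the neighbors
        loss = F.mse_loss(denoiser(x_t, t), eps)          # standard epsilon-prediction loss
        opt.zero_grad(); loss.backward(); opt.step()
```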
Modern surveillance systems perform person recognition using deep learning-based face verification networks. Most state-of-the-art face verification systems are trained using visible spectrum images. However, acquiring images in the visible spectrum is impractical in low-light and nighttime conditions, and images are often captured in an alternative domain such as the thermal infrared domain. Face verification for thermal images is therefore typically performed after first recovering the corresponding visible-domain images. This is a well-established problem commonly known as Thermal-to-Visible (T2V) image translation. In this paper, we propose a Denoising Diffusion Probabilistic Model (DDPM)-based solution for T2V translation of facial images. During training, the model learns the conditional distribution of visible facial images given their corresponding thermal images through the diffusion process. During inference, the visible-domain image is obtained by starting from Gaussian noise and repeatedly performing conditional denoising. The existing inference process of DDPMs is stochastic and time-consuming. Hence, we propose a novel inference strategy for speeding up the inference time of DDPMs, specifically for the T2V image translation problem. We achieve state-of-the-art results on multiple datasets. The code and pretrained models are publicly available at http://github.com/nithin-gk/t2v-ddpm
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days, and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows, for the first time, reaching a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes, and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs. Code is available at https://github.com/compvis/lattent-diffusion
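A minimal sketch of latent-space sampling: the reverse process runs on the compressed latents of a pretrained autoencoder and the result is decoded back to pixels. Here `vae`, `unet` (conditioned on `context` via cross-attention), the latent shape, and the deterministic DDIM-style update are stand-ins for the trained components:

```python
# Latent diffusion sampling sketch (deterministic DDIM-style update, eta = 0).
import torch

@torch.no_grad()
def ldm_sample(vae, unet, context, alphas_bar, shape=(1, 4, 64, 64)):
    z = torch.randn(shape)                                  # start from latent noise
    for t in reversed(range(len(alphas_bar))):
        a_t = alphas_bar[t]
        a_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        eps = unet(z, torch.tensor([t]), context)           # cross-attention conditioning
        z0 = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean latent
        z = a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps  # deterministic DDIM step
    return vae.decode(z)                                    # map latents back to pixels
```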
Diffusion models have recently been studied as powerful generative inverse problem solvers, owing to their high-quality reconstructions and the ease of combining them with existing iterative solvers. However, most works focus on solving simple linear inverse problems in noiseless settings, which significantly under-represents the complexity of real-world problems. In this work, we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via a Laplace approximation of posterior sampling. Interestingly, the resulting posterior sampling scheme is a blended version of diffusion sampling with a manifold-constrained gradient, without a strict measurement consistency projection step, yielding a more desirable generative path in noisy settings compared to previous studies. Our method demonstrates that diffusion models can incorporate various measurement noise statistics such as Gaussian and Poisson, and also efficiently handle noisy nonlinear inverse problems such as Fourier phase retrieval and non-uniform deblurring.
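A hedged sketch of the posterior-sampling correction: the usual reverse step is followed by a gradient step on the measurement residual ||y - A(x0_hat)||, computed through the denoiser's clean-image prediction. The step size `zeta`, the operator `A`, and the unconditional `ancestral_step` update are assumptions:

```python
# One guided reverse step for a noisy (non)linear operator A (Gaussian-noise case).
import torch

def dps_step(model, ancestral_step, A, y, x_t, t, alpha_bar_t, zeta=1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = model(x_t, t)
    x0_hat = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()  # Tweedie-style estimate
    residual = torch.linalg.vector_norm(y - A(x0_hat))                    # measurement mismatch
    grad = torch.autograd.grad(residual, x_t)[0]                          # manifold-constrained gradient
    x_prev = ancestral_step(x_t, eps, t)                                  # unconditional DDPM update
    return (x_prev - zeta * grad).detach()
```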
Deep MRI reconstruction is commonly performed with conditional models that map undersampled acquisitions as input to fully-sampled data as output. Conditional models perform de-aliasing under knowledge of the accelerated imaging operator, so they generalize poorly under domain shifts in the operator. Unconditional models are a powerful alternative that instead learn generative image priors to improve reliability against domain shifts. Given their high representational diversity and sample quality, recent diffusion models are particularly promising. Nevertheless, inference with a static image prior can lead to suboptimal performance. Here we propose AdaDiff, a novel MRI reconstruction method based on an adaptive diffusion prior. To enable efficient image sampling, an adversarial mapper is introduced that allows large diffusion steps. Reconstruction with the trained prior is performed in two stages: a rapid diffusion phase that produces an initial reconstruction, and an adaptation phase in which the diffusion prior is updated to minimize the reconstruction loss on the acquired k-space data. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing models in cross-domain tasks and achieves superior or on-par performance in within-domain tasks.
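An illustrative sketch of the adaptation phase only: the prior network is updated so that its clean-image prediction agrees with the acquired k-space samples. The single-coil Cartesian undersampling model, the one-step clean-image estimate, and the optimizer settings are simplifying assumptions:

```python
# Adaptation-phase sketch: fine-tune the diffusion prior against a k-space
# data-consistency loss. `y` is measured k-space, `mask` the sampling pattern,
# and `alpha_bar_t` a scalar tensor for the chosen noise level.
import torch

def kspace_loss(x, y, mask):
    """Compare the reconstruction x with measured k-space y at sampled locations."""
    k = torch.fft.fft2(x, norm="ortho")
    return (mask * (k - y)).abs().pow(2).mean()

def adapt_prior(denoiser, x_t, t, alpha_bar_t, y, mask, steps=50, lr=1e-5):
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        eps = denoiser(x_t, t)
        x0_hat = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
        loss = kspace_loss(x0_hat, y, mask)
        opt.zero_grad(); loss.backward(); opt.step()
    return denoiser
```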
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, the model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burden, i.e., the large number of steps involved in the sampling process. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
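For reference, the score-based SDE view that unifies the three frameworks mentioned above can be written as a forward noising SDE and its reverse-time counterpart driven by the score; DDPMs and noise-conditioned score networks arise as particular discretizations:

```latex
% Forward and reverse-time SDEs of the score-based framework.
\begin{align}
  \mathrm{d}\mathbf{x} &= f(\mathbf{x}, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}, \\
  \mathrm{d}\mathbf{x} &= \left[ f(\mathbf{x}, t) - g(t)^{2}\, \nabla_{\mathbf{x}} \log p_t(\mathbf{x}) \right] \mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}.
\end{align}
```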
Diffusion Probabilistic Models (DPMs) have recently been employed for image deblurring. DPMs are trained via a stochastic denoising process that maps Gaussian noise to the high-quality image, conditioned on the concatenated blurry input. Despite their high-quality generated samples, image-conditioned Diffusion Probabilistic Models (icDPM) rely on synthetic pairwise training data (in-domain), with potentially unclear robustness towards real-world unseen images (out-of-domain). In this work, we investigate the generalization ability of icDPMs in deblurring, and propose a simple but effective guidance to significantly alleviate artifacts, and improve the out-of-distribution performance. Particularly, we propose to first extract a multiscale domain-generalizable representation from the input image that removes domain-specific information while preserving the underlying image structure. The representation is then added into the feature maps of the conditional diffusion model as an extra guidance that helps improve generalization. To benchmark, we focus on out-of-distribution performance by applying a single-dataset trained model to three external and diverse test sets. The effectiveness of the proposed formulation is demonstrated by improvements over the standard icDPM, as well as state-of-the-art performance on perceptual quality and competitive distortion metrics compared to existing methods.
Free-form inpainting is the task of adding new content to an image in regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capability to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions toward the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: a Denoising Diffusion Probabilistic Model (DDPM)-based inpainting approach that is applicable even to extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both face and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art autoregressive and GAN-based approaches for at least five out of six mask distributions. GitHub repository: git.io/repaint
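A sketch of the conditioning rule: at every reverse step, the known pixels are re-sampled from the forward process applied to the original image and spliced with the model's generated pixels for the masked region. The DDPM reverse step itself and RePaint's resampling schedule are abstracted away here:

```python
# One RePaint-style conditioned reverse step (mask: 1 = observed pixels).
import torch

@torch.no_grad()
def repaint_step(reverse_step, x_t, x0_known, mask, t, alphas_bar):
    # Known pixels, forward-diffused to the noise level of step t-1.
    a_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
    x_known = a_prev.sqrt() * x0_known + (1 - a_prev).sqrt() * torch.randn_like(x0_known)
    # Unknown pixels, generated by the ordinary unconditional DDPM reverse step.
    x_unknown = reverse_step(x_t, t)
    return mask * x_known + (1.0 - mask) * x_unknown
```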
Standard diffusion models involve an image transform (adding Gaussian noise) and an image restoration operator that inverts this degradation. We observe that the generative behavior of diffusion models is not strongly dependent on the choice of image degradation, and in fact an entire family of generative models can be constructed by varying this choice. Even when using completely deterministic degradations (e.g., blur, masking, and more), the training and test-time update rules that underlie diffusion models can be readily generalized to create generative models. The success of these fully deterministic models calls into question the community's understanding of diffusion models, which relies on noise in either gradient Langevin dynamics or variational inference, and paves the way for generalized diffusion models that invert arbitrary processes. Our code is available at https://github.com/arpitbansal297/cold-diffusion-models
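A sketch of the generalized update rule x_{t-1} = x_t - D(x0_hat, t) + D(x0_hat, t-1) for a deterministic degradation D, illustrated here with Gaussian blur of increasing strength; the restoration network R is assumed to be trained already:

```python
# "Cold" sampling sketch with a deterministic blur degradation.
import torch
import torchvision.transforms.functional as TF

def D(x0, t, max_sigma=3.0, T=50):
    """Deterministic degradation: blur whose strength grows with t (t = 0 is identity)."""
    if t == 0:
        return x0
    return TF.gaussian_blur(x0, kernel_size=9, sigma=max_sigma * t / T)

@torch.no_grad()
def cold_sample(R, x_T, T=50):
    x = x_T                                   # start from the fully degraded image
    for t in range(T, 0, -1):
        x0_hat = R(x, t)                      # restoration network's clean estimate
        x = x - D(x0_hat, t, T=T) + D(x0_hat, t - 1, T=T)
    return x
```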
Score-based diffusion models have emerged as one of the most promising frameworks for deep generative modeling. In this work, we conduct a systematic comparison and theoretical analysis of different approaches to learning conditional probability distributions with score-based diffusion models. In particular, we prove results that provide a theoretical justification for one of the most successful estimators of the conditional score. Moreover, we introduce a multi-speed diffusion framework, which leads to a new estimator for the conditional score that performs on par with previous state-of-the-art approaches. Our theoretical and experimental findings are accompanied by an open-source library, MSDiff, which enables the application and further study of multi-speed diffusion models.
Denoising diffusion (score-based) generative models have recently achieved significant accomplishments in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data into noise and a backward denoising process for sampling data from noise. Unfortunately, the generation process of current denoising diffusion models is notoriously slow due to the lengthy iterative noise estimations, which rely on cumbersome neural networks. It prevents the diffusion models from being widely deployed, especially on edge devices. Previous works accelerate the generation process of diffusion model (DM) via finding shorter yet effective sampling trajectories. However, they overlook the cost of noise estimation with a heavy network in every iteration. In this work, we accelerate generation from the perspective of compressing the noise estimation network. Due to the difficulty of retraining DMs, we exclude mainstream training-aware compression paradigms and introduce post-training quantization (PTQ) into DM acceleration. However, the output distributions of noise estimation networks change with time-step, making previous PTQ methods fail in DMs since they are designed for single-time step scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three aspects: quantized operations, calibration dataset, and calibration metric. We summarize and use several observations derived from all-inclusive investigations to formulate our method, which especially targets the unique multi-time-step structure of DMs. Experimentally, our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner. Importantly, our method can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.
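A hedged sketch of the calibration idea: because the noise estimator's activation statistics drift with the timestep, calibration inputs are drawn across the whole sampling trajectory rather than from a single step, and per-tensor scales are then fit on them. The observer and rounding below are deliberately simplified stand-ins for a real PTQ backend:

```python
# Timestep-aware calibration sketch for 8-bit post-training quantization.
import torch

def collect_calibration_set(sampler, num_trajectories=8, stride=50):
    """Run the full-precision sampler and keep (x_t, t) pairs spread over all steps."""
    calib = []
    for _ in range(num_trajectories):
        for x_t, t in sampler():                 # sampler yields intermediate states
            if t % stride == 0:
                calib.append((x_t.detach(), t))
    return calib

def fit_scale(values, num_bits=8):
    """Symmetric per-tensor scale from the observed activation range."""
    max_abs = max(v.abs().max().item() for v in values)
    return max_abs / (2 ** (num_bits - 1) - 1)

def quantize(x, scale, num_bits=8):
    q = torch.clamp(torch.round(x / scale), -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1)
    return q * scale                             # simulated (fake-quant) 8-bit tensor
```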