In this paper, we study two challenging but less-touched problems in image restoration, namely, i) how to quantify the relationship between different image degradations and ii) how to improve the performance of a specific restoration task using the quantified relationship. To tackle the first challenge, a Degradation Relationship Index (DRI) is proposed to measure the degradation relationship, defined as the difference in the drop rate of the validation loss between two models, i.e., one trained using the anchor task only and the other trained using both the anchor and the auxiliary tasks. By quantifying the relationship between different degradations using DRI, we empirically observe that i) the degradation combination proportion is crucial to image restoration performance; in other words, only combinations with appropriate degradation proportions can improve the performance of the anchor restoration; ii) a positive DRI always predicts an improvement in image restoration performance. Based on these observations, we propose an adaptive Degradation Proportion Determination strategy (DPD), which can improve the performance of the anchor restoration task by using another restoration task as an auxiliary. Extensive experimental results verify the effectiveness of our method, taking image dehazing as the anchor task and denoising, desnowing, and deraining as the auxiliary tasks. The code will be released after acceptance.
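As a rough illustration of how DRI could be computed, the sketch below takes the drop rate to be the relative decrease of the anchor task's validation loss over a training interval and subtracts the anchor-only drop rate from the joint-training drop rate; the interval endpoints and logging scheme are assumptions for illustration, not the authors' precise formulation.

```python
def drop_rate(loss_start: float, loss_end: float) -> float:
    """Relative decrease of the validation loss over a training interval."""
    return (loss_start - loss_end) / loss_start


def degradation_relationship_index(anchor_only_losses, joint_losses):
    """DRI sketch: difference of validation-loss drop rates between the model
    trained on the anchor task only and the model trained on the anchor plus
    auxiliary tasks, both evaluated on the anchor task's validation set."""
    r_anchor = drop_rate(anchor_only_losses[0], anchor_only_losses[-1])
    r_joint = drop_rate(joint_losses[0], joint_losses[-1])
    return r_joint - r_anchor  # a positive value predicts the auxiliary task helps


# Toy example: the joint model's loss drops faster, so DRI > 0.
print(degradation_relationship_index([1.00, 0.62, 0.40], [1.00, 0.55, 0.33]))
```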
Image restoration under hazy weather conditions, known as single image dehazing, has been of significant interest for various computer vision applications. In recent years, deep learning-based methods have achieved success. However, existing image dehazing methods typically neglect the hierarchy of features in the neural network and fail to fully exploit their relationships. To this end, we propose an effective image dehazing method named Hierarchical Contrastive Dehazing (HCD), which is based on feature fusion and contrastive learning strategies. HCD consists of a hierarchical dehazing network (HDN) and a novel hierarchical contrastive loss (HCL). Specifically, the core design in the HDN is a Hierarchical Interaction Module, which utilizes multi-scale activation to revise the feature responses hierarchically. To cooperate with the training of HDN, we propose HCL, which performs contrastive learning on hierarchically paired exemplars, facilitating haze removal. Extensive experiments on public datasets, RESIDE, HazeRD, and DENSE-HAZE, demonstrate that HCD quantitatively outperforms the state-of-the-art methods in terms of PSNR and SSIM, and achieves better visual quality.
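To make the contrastive part concrete, here is a minimal sketch of a hierarchical contrastive term of the kind HCL describes: at each hierarchy level, the restored features are pulled toward the clean (positive) exemplar and pushed away from the hazy (negative) one via a distance ratio. The ratio form, the L1 distance, and the per-level weights are assumptions borrowed from common contrastive-regularization practice in dehazing, not necessarily the authors' exact loss.

```python
import torch
import torch.nn.functional as F


def hierarchical_contrastive_loss(feats_pred, feats_clean, feats_hazy, weights=None):
    """Per-level contrastive term: small when the restored features are close
    to the clean exemplar and far from the hazy exemplar at every level."""
    weights = weights or [1.0] * len(feats_pred)
    loss = 0.0
    for w, fp, fc, fh in zip(weights, feats_pred, feats_clean, feats_hazy):
        d_pos = F.l1_loss(fp, fc)          # distance to the positive (clean) exemplar
        d_neg = F.l1_loss(fp, fh) + 1e-7   # distance to the negative (hazy) exemplar
        loss = loss + w * d_pos / d_neg
    return loss


# Random tensors stand in for features extracted at three hierarchy levels.
fp = [torch.randn(1, 8, 32, 32) for _ in range(3)]
fc = [torch.randn(1, 8, 32, 32) for _ in range(3)]
fh = [torch.randn(1, 8, 32, 32) for _ in range(3)]
print(hierarchical_contrastive_loss(fp, fc, fh))
```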
Recently, deep learning-based image denoising methods have achieved promising performance on test data with the same distribution as the training set, where various denoising models have been learned from synthetic or collected real-world training data. However, their denoising performance remains limited when handling real-world noisy images. In this paper, we propose a simple yet effective Bayesian deep ensemble (BDE) method for real-world image denoising, in which several representative deep denoisers pre-trained with various training data settings are fused to improve robustness. The foundation of BDE is that real-world image noise is highly signal-dependent, and the heterogeneous noise in a real-world noisy image can be separately handled by different denoisers. In particular, we take the well-trained CBDNet, NBNet, HINet, Uformer, and GMSNet into the denoiser pool, and adopt a U-Net to predict pixel-wise weighting maps for fusing these denoisers. Instead of solely learning pixel-wise weighting maps, a Bayesian deep learning strategy is introduced to predict both the weighting uncertainty and the weighting maps, by which the prediction variance can be modeled to improve robustness on real-world noisy images. Extensive experiments show that real-world noise can be better removed by fusing existing denoisers than by training a big denoiser at an expensive cost. On the DND dataset, our BDE achieves a +0.28 dB PSNR gain over the state-of-the-art denoising method. Moreover, we note that our BDE denoiser based on different Gaussian noise levels outperforms the state-of-the-art CBDNet when applied to real-world noisy images. Furthermore, our BDE can be extended to other image restoration tasks, yielding +0.30 dB, +0.18 dB, and +0.12 dB PSNR gains on benchmark datasets for image deblurring, image deraining, and single image super-resolution, respectively.
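The fusion step can be sketched as follows: a small weighting network looks at the noisy input and the outputs of the pooled denoisers, predicts a per-pixel weight map for each denoiser, and forms the fused result as a weighted sum. The tiny convolutional network below is a stand-in for the paper's U-Net, and the Bayesian uncertainty head is omitted; both are simplifying assumptions.

```python
import torch
import torch.nn as nn


class FusionNet(nn.Module):
    """Stand-in weighting network: predicts one weight map per denoiser and
    fuses the denoiser outputs with a per-pixel softmax-weighted sum."""

    def __init__(self, num_denoisers: int, channels: int = 3):
        super().__init__()
        in_ch = channels * (num_denoisers + 1)  # noisy image + each denoised result
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_denoisers, 3, padding=1),
        )

    def forward(self, noisy, denoised_list):
        x = torch.cat([noisy] + denoised_list, dim=1)
        weights = torch.softmax(self.net(x), dim=1)   # (B, K, H, W), sums to 1 per pixel
        stacked = torch.stack(denoised_list, dim=1)   # (B, K, C, H, W)
        return (weights.unsqueeze(2) * stacked).sum(dim=1)


# In practice the tensors below would be outputs of pre-trained denoisers such as
# CBDNet, NBNet, or HINet applied to the same noisy image.
noisy = torch.rand(1, 3, 64, 64)
denoised = [torch.rand(1, 3, 64, 64) for _ in range(3)]
print(FusionNet(num_denoisers=3)(noisy, denoised).shape)
```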
The recent success of learning-based image deraining and denoising has mainly relied on well-designed neural network architectures and large labeled datasets. However, we find that current image deraining and denoising methods make low utilization of images. To alleviate the reliance on large labeled datasets, we propose a task-driven image rain and noise removal method (TRNR) based on an introduced patch analysis strategy. The patch analysis strategy provides image patches with various spatial and statistical properties for training and has been verified to increase the utilization of images. Furthermore, the patch analysis strategy motivates us to consider learning image rain and noise removal in a task-driven rather than data-driven manner. We therefore introduce N-frequency-K-shot learning tasks for TRNR. Each N-frequency-K-shot learning task is built on a tiny dataset containing NK image patches sampled by the patch analysis strategy. TRNR enables neural networks to learn from abundant N-frequency-K-shot learning tasks rather than from abundant data. To verify the effectiveness of TRNR, we build a shallow Multi-Scale Residual Network (MSResNet) with about 0.9M parameters to learn image rain removal, and use a simple ResNet with about 1.2M parameters, named DNNet, for blind Gaussian noise removal, with only a few images (e.g., 20.0% of the Rain100H training set). Experimental results demonstrate that TRNR enables MSResNet to learn better from fewer images. Moreover, MSResNet and DNNet trained with TRNR achieve better performance than most recent deep learning methods trained in a data-driven manner on large labeled datasets. These experimental results confirm the effectiveness and superiority of the proposed TRNR. The code for TRNR will be released soon.
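One way the N-frequency-K-shot task construction could look in code is sketched below: patches are ranked by a crude statistic (here, patch variance as a proxy for frequency content and texture complexity), split into N groups, and K patches are drawn from each group to form one tiny N*K-patch task. The grouping statistic and group count are illustrative assumptions about the patch analysis strategy, not the paper's exact procedure.

```python
import numpy as np


def sample_nk_task(patches, n_groups: int = 4, k_shot: int = 2, rng=None):
    """Assemble one N-frequency-K-shot task: rank patches by variance, split
    the ranking into N groups, then draw K patches from each group."""
    rng = rng or np.random.default_rng()
    energies = np.array([p.var() for p in patches])   # crude frequency/texture proxy
    order = np.argsort(energies)
    groups = np.array_split(order, n_groups)          # N groups, low to high energy
    task = [patches[i] for g in groups
            for i in rng.choice(g, size=k_shot, replace=False)]
    return task                                       # N * K patches for one task


# 64 random 32x32 arrays stand in for patches cut from a handful of training images.
patches = [np.random.rand(32, 32) for _ in range(64)]
print(len(sample_nk_task(patches, n_groups=4, k_shot=2)))  # 8 = N * K
```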
Under-display cameras (UDC) have been widely exploited to help smartphones realize full-screen displays. However, as the screen inevitably affects the light propagation process, images captured by UDC systems usually contain flare, haze, blur, and noise. In particular, flare and blur in UDC images can severely degrade the user experience in high dynamic range (HDR) scenes. In this paper, we propose a new deep model, namely UDC-UNet, to address the UDC image restoration problem with a known point spread function (PSF) in HDR scenes. On the premise that the PSF of the UDC system is known, we treat UDC image restoration as a non-blind image restoration problem and propose a novel learning-based approach. Our network consists of three parts, including a U-shaped base network that utilizes multi-scale information, a condition branch to perform spatially variant modulation, and a kernel branch to provide the prior knowledge of the given PSF. According to the characteristics of HDR data, we additionally design a tone-mapping loss to stabilize network optimization and achieve better visual quality. Experimental results show that the proposed UDC-UNet outperforms the state-of-the-art methods in both quantitative and qualitative comparisons. Our approach won second place in the UDC image restoration track of the MIPI Challenge. The code will be publicly available.
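A tone-mapping loss of the kind mentioned above can be sketched in a few lines: both the prediction and the target are passed through a compressive tone-mapping curve before taking an L1 distance, so that very bright HDR regions do not dominate the objective. The curve T(x) = x / (x + q) with q = 0.25 is one common choice and is used here as an assumption, not necessarily the authors' exact function.

```python
import torch
import torch.nn.functional as F


def tone_mapping_loss(pred: torch.Tensor, target: torch.Tensor, q: float = 0.25):
    """L1 loss computed in a tone-mapped domain to stabilize HDR training."""
    tone_map = lambda x: x / (x + q)
    return F.l1_loss(tone_map(pred), tone_map(target))


# Linear-domain HDR tensors whose values may well exceed 1.
pred = torch.rand(1, 3, 64, 64) * 4.0
target = torch.rand(1, 3, 64, 64) * 4.0
print(tone_mapping_loss(pred, target).item())
```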
Image restoration tasks have achieved tremendous performance improvements with the rapid advancement of deep neural networks. However, most prevalent deep learning models perform inference statically, ignoring that different images have varying restoration difficulties and lightly degraded images can be well restored by slimmer subnetworks. To this end, we propose a new solution pipeline dubbed ClassPruning that utilizes networks with different capabilities to process images with varying restoration difficulties. In particular, we use a lightweight classifier to identify the image restoration difficulty, and then the sparse subnetworks with different capabilities can be sampled based on predicted difficulty by performing dynamic N:M fine-grained structured pruning on base restoration networks. We further propose a novel training strategy along with two additional loss terms to stabilize training and improve performance. Experiments demonstrate that ClassPruning can help existing methods save approximately 40% FLOPs while maintaining performance.
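The N:M fine-grained structured pruning used to sample subnetworks can be sketched as a simple masking operation: within every group of M consecutive weights, only the N entries with the largest magnitude are kept. The mapping from the classifier's predicted difficulty to a particular (N, M) pattern below is an illustrative assumption.

```python
import torch


def nm_prune_mask(weight: torch.Tensor, n: int, m: int) -> torch.Tensor:
    """Binary N:M sparsity mask: keep the N largest-magnitude weights inside
    every group of M consecutive weights."""
    out_ch = weight.shape[0]
    flat = weight.reshape(out_ch, -1, m)                  # (out, groups, M)
    idx = flat.abs().topk(n, dim=-1).indices
    mask = torch.zeros_like(flat).scatter_(-1, idx, 1.0)
    return mask.reshape_as(weight)


# Hypothetical rule: a lighter 2:4 subnetwork for easy images, 3:4 for hard ones.
w = torch.randn(16, 32, 3, 3)            # conv weight; 32 * 9 is divisible by 4
difficulty = 0.8                         # e.g. predicted by the lightweight classifier
n, m = (3, 4) if difficulty > 0.5 else (2, 4)
sparse_w = w * nm_prune_mask(w, n, m)
print(sparse_w.ne(0).float().mean().item())   # fraction of kept weights ≈ n / m
```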
Blind image restoration (IR) is a common yet challenging problem in computer vision. Classical model-based methods and recent deep learning (DL)-based methods represent two different methodologies for this problem, each with its own merits and drawbacks. In this paper, we propose a novel blind image restoration method, aiming to integrate the advantages of both. Specifically, we construct a general Bayesian generative model for blind IR, which explicitly depicts the degradation process. In this proposed model, a pixel-wise non-i.i.d. Gaussian distribution is employed to fit the image noise. It is more flexible than the simple i.i.d. Gaussian or Laplacian distributions adopted in most conventional methods for handling the more complicated noise types contained in image degradation. To solve the model, we design a variational inference algorithm in which all the expected posterior distributions are parameterized as deep neural networks to increase their model capability. Notably, this inference algorithm induces a unified framework that jointly deals with the tasks of degradation estimation and image restoration. Furthermore, the degradation information estimated in the former task is utilized to guide the latter IR process. Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over the current state-of-the-art methods.
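The role of the pixel-wise non-i.i.d. Gaussian noise model can be illustrated with a generic per-pixel Gaussian negative log-likelihood, where the network predicts a spatially varying (log-)variance map alongside the restored image. This is only a sketch of the modeling idea; the paper's full variational inference algorithm with deep-network-parameterized posteriors is not reproduced here.

```python
import torch


def gaussian_nll(pred, target, log_var):
    """Per-pixel Gaussian negative log-likelihood with a spatially varying
    variance map, i.e. a non-i.i.d. Gaussian noise model."""
    return 0.5 * (log_var + (pred - target) ** 2 / log_var.exp()).mean()


# A restoration network would output both `pred` and `log_var` for each pixel.
pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
log_var = torch.zeros(1, 3, 64, 64)   # unit variance everywhere, as a trivial example
print(gaussian_nll(pred, target, log_var).item())
```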
Deconvolution is a widely used strategy to mitigate the blurring and noisy degradation of hyperspectral images (HSI) generated by the acquisition devices. This issue is usually addressed by solving an ill-posed inverse problem. While investigating proper image priors can enhance the deconvolution performance, it is not trivial to handcraft a powerful regularizer and to set the regularization parameters. To address these issues, in this paper we introduce a tuning-free Plug-and-Play (PnP) algorithm for HSI deconvolution. Specifically, we use the alternating direction method of multipliers (ADMM) to decompose the optimization problem into two iterative sub-problems. A flexible blind 3D denoising network (B3DDN) is designed to learn deep priors and to solve the denoising sub-problem with different noise levels. A measure of 3D residual whiteness is then investigated to adjust the penalty parameters when solving the quadratic sub-problems, as well as a stopping criterion. Experimental results on both simulated and real-world data with ground-truth demonstrate the superiority of the proposed method.
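The ADMM splitting described above can be sketched compactly: the data sub-problem is a closed-form quadratic solve (done in the Fourier domain for a convolutional blur), and the prior sub-problem is handled by plugging in a denoiser. The 2D setting, the fixed penalty, and the median-filter "denoiser" below are simplifying assumptions standing in for the paper's 3D HSI formulation, the B3DDN prior, and the whiteness-based parameter adjustment.

```python
import numpy as np
from scipy.ndimage import median_filter


def pnp_admm_deconv(y, psf, rho=0.5, iters=20):
    """Plug-and-Play ADMM sketch for deconvolution:
    x-step: FFT-based quadratic solve with the blur operator;
    z-step: plug-in denoiser applied to x + u;
    u-step: dual (scaled Lagrange multiplier) update."""
    H = np.fft.fft2(psf, s=y.shape)
    Ht_y = np.conj(H) * np.fft.fft2(y)
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        rhs = Ht_y + rho * np.fft.fft2(z - u)
        x = np.real(np.fft.ifft2(rhs / (np.abs(H) ** 2 + rho)))
        z = median_filter(x + u, size=3)   # stand-in for the learned denoising prior
        u = u + x - z
    return x


# Blur a random image with a 5x5 box PSF (circular convolution), then deconvolve.
img = np.random.rand(64, 64)
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
print(pnp_admm_deconv(blurred, psf).shape)
```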
Real-world image denoising is a practical image restoration problem that aims to obtain clean images from noisy inputs captured in the wild. Recently, the Vision Transformer (ViT) has exhibited a strong ability to capture long-range dependencies, and many researchers have attempted to apply ViT to image denoising tasks. However, a real-world image is an isolated frame, which forces ViT to build long-range dependencies on internal patches: the image is divided into patches, which disarranges the noise pattern and gradient continuity. In this paper, we propose to resolve this issue with a continuous Wavelet Sliding Transformer that builds frequency correspondences under real-world scenes, called DnSwin. Specifically, we first extract the bottom features from noisy input images using a CNN encoder. The key of DnSwin is to separate high-frequency and low-frequency information from the features and to build frequency dependencies. To this end, we propose the Wavelet Sliding Window Transformer, which utilizes the discrete wavelet transform, self-attention, and the inverse discrete wavelet transform to extract deep features. Finally, we reconstruct the deep features into denoised images using a CNN decoder. Both quantitative and qualitative evaluations on real-world denoising benchmarks demonstrate that the proposed DnSwin performs favorably against state-of-the-art methods.
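The wavelet split at the heart of this design can be illustrated with an off-the-shelf discrete wavelet transform: one level of the 2D DWT separates a map into a low-frequency band and three high-frequency bands, which could each be processed (e.g., by windowed self-attention) and then recombined with the inverse transform. Applying pywt to a single-channel array is a simplified stand-in for operating on deep feature maps inside the network.

```python
import numpy as np
import pywt

# One-level 2D DWT: LL is the low-frequency band; (LH, HL, HH) are high-frequency bands.
x = np.random.rand(64, 64).astype(np.float32)
ll, (lh, hl, hh) = pywt.dwt2(x, "haar")

# ...each band would be processed separately here (e.g., windowed self-attention)...

# The inverse DWT reconstructs the full-resolution map from the four sub-bands.
x_rec = pywt.idwt2((ll, (lh, hl, hh)), "haar")
print(ll.shape, x_rec.shape, np.allclose(x, x_rec, atol=1e-5))
```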
Blind Face Restoration (BFR) aims to recover high-quality face images from low-quality ones and usually resorts to facial priors to improve restoration performance. However, current methods still suffer from two major difficulties: 1) how to derive a powerful network architecture without extensive hand-tuning; and 2) how to capture complementary information from multiple facial priors in one network to improve restoration performance. To this end, we propose a Face Restoration Searching Network (FRSNet) to adaptively search for a suitable feature extraction architecture within our specified search space, which can directly contribute to the restoration quality. On the basis of FRSNet, we further design a Multiple Facial Prior Searching Network (MFPSNet) with a multi-prior learning scheme. MFPSNet optimally extracts information from diverse facial priors and fuses the information into image features, ensuring that both external guidance and internal features are preserved. In this way, MFPSNet takes full advantage of semantic-level (parsing maps), geometric-level (facial heatmaps), reference-level (facial dictionaries), and pixel-level (degraded image) information, and thus generates faithful and realistic images. Quantitative and qualitative experiments show that MFPSNet performs favorably against state-of-the-art BFR methods on both synthetic and real-world datasets. The code is publicly available at: https://github.com/yyj1ang/mfpsnet.
Neural Architecture Search (NAS) for automatically finding the optimal network architecture has shown some success with competitive performance in various computer vision tasks. However, NAS in general requires a tremendous amount of computation. Thus, reducing computational cost has emerged as an important issue. Most of the attempts so far have been based on manual approaches, and the architectures developed from such efforts often dwell in the balance between network optimality and search cost. Additionally, recent NAS methods for image restoration generally do not consider dynamic operations that may transform the dimensions of feature maps because of the dimensionality mismatch in tensor calculations. This can greatly limit NAS in its search for the optimal network structure. To address these issues, we re-frame the optimal search problem by focusing at the component block level. Previous work has shown that an effective denoising block can be connected in series to further improve network performance. By focusing at the block level, the search space of reinforcement learning becomes significantly smaller and the evaluation process can be conducted more rapidly. In addition, we integrate an innovative dimension-matching module for dealing with the spatial and channel-wise mismatch that may occur in the optimal design search. This allows much flexibility in optimal network search within the cell block. With these modules, we then employ reinforcement learning to search for an optimal image denoising network at the module level. The computational efficiency of our proposed Denoising Prior Neural Architecture Search (DPNAS) was demonstrated by having it complete an optimal architecture search for an image restoration task in just one day with a single GPU.
Constructing high-quality character image datasets is challenging because real-world images are often affected by image degradation. There are limitations when applying current image restoration methods to such real-world character images, since (i) the noise in character images differs from that in general images; and (ii) real-world character images usually contain more complex image degradation, e.g., mixed noise at different noise levels. To address these problems, we propose a real-world character restoration network (RCRN) to effectively restore degraded character images, where character skeleton information and scale-ensemble feature extraction are utilized to obtain better restoration performance. The proposed method consists of a skeleton extractor (SENet) and a character image restorer (CIRNet). SENet aims to preserve the structural consistency of the character and normalize complex noise. CIRNet then reconstructs clean images from degraded character images and their skeletons. Due to the lack of benchmarks for real-world character image restoration, we construct a dataset containing 1,606 character images with real-world degradation to evaluate the effectiveness of the proposed method. Experimental results show that RCRN outperforms state-of-the-art methods both quantitatively and qualitatively.
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates real-world degradation, upon which the channel splitting vector is generated as the input for an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces with computation-free inference. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and the PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.
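The residual learning idea is small enough to sketch directly: the network predicts the noise map, and the clean estimate is the noisy input minus that prediction. The depth and width below are deliberately reduced for brevity; the original DnCNN uses on the order of 17-20 convolutional layers with batch normalization.

```python
import torch
import torch.nn as nn


class TinyDnCNN(nn.Module):
    """Reduced DnCNN-style denoiser: Conv+ReLU, a few Conv+BN+ReLU blocks, and
    a final Conv that predicts the residual (noise) map."""

    def __init__(self, channels: int = 3, features: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # residual learning: subtract predicted noise


# Training would minimize the MSE between this output and the clean image.
print(TinyDnCNN()(torch.rand(1, 3, 64, 64)).shape)
```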
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. The early face restoration methods mainly use statistical priors and degradation models, which are difficult to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works systematically study deep learning-based face restoration methods. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of face images. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore the techniques developed for the FR task, covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation of representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
Diffusion Probabilistic Models (DPMs) have recently been employed for image deblurring. DPMs are trained via a stochastic denoising process that maps Gaussian noise to the high-quality image, conditioned on the concatenated blurry input. Despite their high-quality generated samples, image-conditioned Diffusion Probabilistic Models (icDPM) rely on synthetic pairwise training data (in-domain), with potentially unclear robustness towards real-world unseen images (out-of-domain). In this work, we investigate the generalization ability of icDPMs in deblurring, and propose a simple but effective guidance to significantly alleviate artifacts and improve out-of-distribution performance. Particularly, we propose to first extract a multiscale domain-generalizable representation from the input image that removes domain-specific information while preserving the underlying image structure. The representation is then added into the feature maps of the conditional diffusion model as extra guidance that helps improve generalization. To benchmark, we focus on out-of-distribution performance by applying a single-dataset-trained model to three external and diverse test sets. The effectiveness of the proposed formulation is demonstrated by improvements over the standard icDPM, as well as state-of-the-art perceptual quality and competitive distortion metrics compared to existing methods.
The lack of large-scale noisy-clean image pairs restricts the deployment of supervised denoising methods in actual applications. While existing unsupervised methods are able to learn image denoising without ground-truth clean images, they either show poor performance or work under impractical settings (e.g., paired noisy images). In this paper, we present a practical unsupervised image denoising method that achieves state-of-the-art denoising performance. Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising. It iteratively performs two steps: (1) constructing a noisier-noisy dataset with random noise from the noise model; (2) training a model on the noisier-noisy dataset and using the trained model to refine the noisy images to obtain the targets used in the next round. We further approximate our full iterative method with a fast algorithm for more efficient training while keeping its original high performance. Experiments on real-world, synthetic, and correlated noise show that our proposed unsupervised denoising approach has superior performance over existing unsupervised methods and competitive performance with supervised methods. In addition, we argue that existing denoising datasets are of low quality and contain only a small number of scenes. To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset, SenseNoise-500, that contains 500 real-life scenes. The dataset can serve as a strong benchmark for better evaluating raw image denoising. The code and dataset will be released at https://github.com/zhangyi-3/idr
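The two-step iteration can be sketched as a short training loop: each round adds fresh noise from the noise model to the current targets to build a noisier-noisy pair, trains the model on that pair, and then applies the trained model to the original noisy images to refine the targets for the next round. The Gaussian noise model, the tiny network, and the training budget below are placeholders; the paper's fast approximation of the full iteration is not shown.

```python
import torch
import torch.nn as nn


def noise_model(x, sigma=0.1):
    """Stand-in noise model; in practice it should match the real camera noise."""
    return x + sigma * torch.randn_like(x)


def train_one_round(model, inputs, targets, steps=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        opt.step()
    return model


noisy = torch.rand(8, 3, 32, 32)      # single noisy observations, no clean ground truth
targets = noisy.clone()               # round-0 targets are the noisy images themselves
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))

for _ in range(3):                                    # iterative refinement
    noisier = noise_model(targets)                    # (1) build the noisier-noisy pair
    model = train_one_round(model, noisier, targets)
    with torch.no_grad():
        targets = model(noisy)                        # (2) refined targets for next round

print(targets.shape)   # pseudo-clean targets after the final round
```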
Although current deep learning-based methods have gained promising performance in the blind single image super-resolution (SISR) task, most of them mainly focus on heuristically constructing diverse network architectures and place less emphasis on explicitly embedding the physical generation mechanism between blur kernels and high-resolution (HR) images. To alleviate this issue, we propose a model-driven deep neural network, called KXNet, for blind SISR. Specifically, to solve the classical SISR model, we propose a simple yet effective iterative algorithm. Then, by unfolding the involved iterative steps into the corresponding network modules, we naturally construct the KXNet. The main specificity of the proposed KXNet is that the entire learning process is fully and reasonably integrated with the inherent physical mechanism underlying this SISR task. Thus, the learned blur kernel has a clear physical pattern, and the mutual iterative process between the blur kernel and the HR image can well guide the KXNet to evolve in the correct direction. Extensive experiments on synthetic and real data clearly demonstrate the superior accuracy and generality of our method beyond the current representative state-of-the-art blind SISR methods. The code is available at: https://github.com/jiahong-fu/kxnet.
Existing video denoising methods typically assume that noisy videos are degraded from clean videos by adding Gaussian noise. However, deep models trained under such a degradation assumption will inevitably give rise to poor performance on real videos due to degradation mismatch. Although some studies attempt to train deep models on noisy and noise-free video pairs captured by cameras, such models can only work well for specific cameras and do not generalize well to other videos. In this paper, we propose to lift this limitation and focus on the problem of general real video denoising, with the aim of generalizing to unseen real-world videos. We tackle this problem by first investigating the common behaviors of video noise and observing two important characteristics: 1) downscaling helps reduce the noise level in the spatial space; and 2) the information from adjacent frames helps remove the noise of the current frame in the temporal space. Motivated by these two observations, we propose a multi-scale recurrent architecture that makes full use of the above two characteristics. Second, we propose a synthetic real-noise degradation model that randomly shuffles different noise types to train the denoising model. With a synthesized and enriched degradation space, our degradation model can help bridge the distribution gap between training data and real-world data. Extensive experiments demonstrate that, compared with existing methods, our proposed method achieves state-of-the-art performance and better generalization ability on both synthetic Gaussian denoising and practical real video denoising.
Deraining is an important and fundamental computer vision task, aiming to remove the rain streaks and accumulations in an image or video captured under rainy conditions. Existing deraining methods usually make heuristic assumptions about the rain model, which compels them to employ complex optimization or iterative refinement for high recovery quality. This, however, leads to time-consuming methods and affects the effectiveness in handling rain patterns that deviate from the assumptions. In this paper, we propose a simple yet efficient deraining method by formulating deraining as a predictive filtering problem without complex rain model assumptions. Specifically, we identify spatially-variant predictive filtering (SPFilt) that adaptively predicts proper kernels via a deep network to filter different individual pixels. Since the filtering can be implemented via well-accelerated convolution, our method can be significantly efficient. We further propose EfDeRain+, which contains three main contributions to address residual rain traces, multi-scale, and diverse rain patterns without harming efficiency. First, we propose uncertainty-aware cascaded predictive filtering (UC-PFilt), which can identify the difficulties of reconstructing clean pixels via the predicted kernels and remove the residual rain traces effectively. Second, we design weight-sharing multi-scale dilated filtering (WS-MS-DFilt) to handle multi-scale rain streaks without harming efficiency. Third, to eliminate the gap across diverse rain patterns, we propose a novel data augmentation method (i.e., RainMix) to train our deep models. By combining all contributions with sophisticated analysis of different variants, our final method outperforms baseline methods on four single-image deraining datasets and one video deraining dataset, in terms of both recovery quality and speed.
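The core predictive filtering operation can be sketched compactly: a small network predicts a k x k kernel for every pixel, and each output pixel is the kernel-weighted sum of its local neighborhood, implemented with unfold so it runs as ordinary tensor operations. The network size and kernel size are illustrative assumptions; the uncertainty-aware cascade and multi-scale dilated variants are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPFilter(nn.Module):
    """Spatially-variant predictive filtering sketch: predict a per-pixel k*k
    kernel and apply it to that pixel's neighborhood."""

    def __init__(self, channels: int = 3, k: int = 5):
        super().__init__()
        self.k = k
        self.kernel_net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, k * k, 3, padding=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = torch.softmax(self.kernel_net(x), dim=1)       # (B, k*k, H, W)
        patches = F.unfold(x, self.k, padding=self.k // 2)       # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)       # filtered image


# Filtering a (stand-in) rainy image with per-pixel predicted kernels.
print(SPFilter()(torch.rand(1, 3, 64, 64)).shape)
```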