Recently, deep learning based image denoising methods have achieved promising performance on test data that share the same distribution as the training set, where various denoising models have been learned from synthetic or collected real-world training data. However, when handling real-world noisy images, denoising performance remains limited. In this paper, we propose a simple yet effective Bayesian deep ensemble (BDE) method for real-world image denoising, in which several representative deep denoisers pre-trained with different training data settings can be fused to improve robustness. The rationale behind BDE is that real-world image noise is highly signal-dependent, and the heterogeneous noise within a real-world noisy image can be better handled by different denoisers. In particular, we take well-trained CBDNet, NBNet, HINet, Uformer and GMSNet as the denoiser pool, and adopt a U-Net to predict pixel-wise weighting maps for fusing these denoisers. Instead of solely learning pixel-wise weighting maps, a Bayesian deep learning strategy is introduced to predict the weighting uncertainty together with the weighting maps, by which prediction variance can be modeled to improve robustness on real-world noisy images. Extensive experiments show that real-world noise can be better removed by fusing existing denoisers than by training a single big denoiser at a costly expense. On the DND dataset, our BDE achieves a +0.28 dB PSNR gain over state-of-the-art denoising methods. Moreover, we note that a BDE denoiser built upon denoisers trained at different Gaussian noise levels outperforms the state-of-the-art CBDNet when applied to real-world noisy images. Furthermore, our BDE can be extended to other image restoration tasks, yielding +0.30 dB, +0.18 dB and +0.12 dB PSNR gains on benchmark datasets for image deblurring, image deraining and single image super-resolution, respectively.
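As a rough illustration of the fusion step described above, the sketch below fuses the outputs of several frozen denoisers with a pixel-wise softmax weighting map. The function name, tensor shapes and the weight-predictor interface are assumptions, and the Bayesian uncertainty head of BDE is not reproduced here.

```python
import torch
import torch.nn.functional as F

def fuse_denoisers(noisy, denoisers, weight_net):
    """Fuse the outputs of several pre-trained denoisers (hypothetical interface).

    noisy:      (B, 3, H, W) real-world noisy image
    denoisers:  list of K frozen denoising networks
    weight_net: network predicting one logit map per denoiser, output (B, K, H, W)
    """
    with torch.no_grad():
        outs = torch.stack([d(noisy) for d in denoisers], dim=1)  # (B, K, 3, H, W)
    logits = weight_net(noisy)                     # (B, K, H, W)
    w = F.softmax(logits, dim=1).unsqueeze(2)      # per-pixel weights over the K denoisers
    return (w * outs).sum(dim=1)                   # weighted fusion, (B, 3, H, W)
```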
Blind image restoration (IR) is a common yet challenging problem in computer vision. Classical model-based methods and recent deep learning (DL) based methods represent two different approaches to this problem, each with its own merits and drawbacks. In this paper, we propose a novel blind image restoration method that aims to integrate the advantages of both. Specifically, we construct a general Bayesian generative model for blind IR that explicitly depicts the degradation process. In the proposed model, a pixel-wise non-i.i.d. Gaussian distribution is employed to fit the image noise. It is more flexible than the simple i.i.d. Gaussian or Laplacian distributions adopted in most conventional methods, and thus can handle the more complicated noise types contained in image degradation. To solve the model, we design a variational inference algorithm in which all the expected posterior distributions are parameterized as deep neural networks to increase their model capacity. Notably, this inference algorithm induces a unified framework that jointly handles the tasks of degradation estimation and image restoration. Furthermore, the degradation information estimated in the former task is utilized to guide the latter IR process. Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance compared with current state-of-the-art methods.
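To make the non-i.i.d. noise assumption concrete, here is a minimal sketch of a per-pixel Gaussian negative log-likelihood with a spatially varying variance map; the function name and interface are assumptions, and the paper's full variational objective (including its KL terms) is not reproduced.

```python
import torch

def noniid_gaussian_nll(noisy, restored, log_var):
    """Pixel-wise Gaussian NLL with a spatially varying variance map
    (the constant 0.5*log(2*pi) term is dropped)."""
    var = log_var.exp()
    return 0.5 * (log_var + (noisy - restored) ** 2 / var).mean()

# toy usage with random tensors standing in for network outputs
noisy = torch.randn(1, 3, 32, 32)
restored = torch.randn(1, 3, 32, 32)
log_var = torch.zeros(1, 3, 32, 32)   # predicted per-pixel log-variance map
loss = noniid_gaussian_nll(noisy, restored, log_var)
```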
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks such as Gaussian denoising, single image super-resolution and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
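A minimal sketch of the residual-learning idea: the network predicts the noise, and the denoised image is obtained by subtracting that prediction from the input. This toy model is far shallower than the actual 17- to 20-layer DnCNN and is for illustration only.

```python
import torch.nn as nn

class TinyDnCNN(nn.Module):
    """A reduced DnCNN-style residual denoiser (illustrative sketch)."""
    def __init__(self, channels=1, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.body(noisy)   # the network estimates the noise
        return noisy - residual       # residual learning: clean = noisy - noise
```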
Deep learning based hyperspectral image (HSI) restoration methods have gained great popularity for their remarkable performance, but they often require expensive network retraining whenever the details of the task change. In this paper, we propose to restore HSIs in a unified approach with an effective plug-and-play method, which can jointly retain the flexibility of optimization-based methods and exploit the powerful representation ability of deep neural networks. Specifically, we first develop a new deep HSI denoiser that leverages gated recurrent units, short- and long-term skip connections, and an augmented noise level map to better exploit the abundant spatio-spectral information within HSIs. This leads to state-of-the-art performance on HSI denoising under both Gaussian and complex noise settings. Then, the proposed denoiser is inserted into a plug-and-play framework as a powerful prior for handling various HSI restoration tasks. Through extensive experiments on HSI super-resolution, compressed sensing and inpainting, we demonstrate that our approach often achieves performance that is competitive with, or even superior to, state-of-the-art methods trained specifically for each task.
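The plug-and-play idea can be sketched as an alternating scheme in which a data sub-problem is handled with gradient steps and the prior sub-problem is a single call to the pre-trained denoiser. This is a generic half-quadratic-splitting sketch with assumed interfaces (forward_op, adjoint_op, denoiser) and hyper-parameters, not the exact solver used in the paper.

```python
import torch

def plug_and_play_hqs(y, forward_op, adjoint_op, denoiser, rho=0.5, steps=20, lr=1.0):
    """Generic plug-and-play loop: alternate a data-fidelity gradient step
    with one call to the plugged-in denoiser acting as the prior."""
    x = adjoint_op(y).clone()
    z = x.clone()
    for _ in range(steps):
        # data sub-problem: gradient step on ||A x - y||^2 + rho ||x - z||^2
        grad = adjoint_op(forward_op(x) - y) + rho * (x - z)
        x = x - lr * grad
        # prior sub-problem: one pass through the pre-trained denoiser
        with torch.no_grad():
            z = denoiser(x)
    return z
```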
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. The early face restoration methods mainly use statistical priors and degradation models, which are difficult to meet the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, there are few works that study deep learning-based face restoration methods systematically. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of the face image. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore developed techniques in the task of FR covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation on representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
Real-world image denoising is a practical image restoration problem that aims to obtain clean images from in-the-wild noisy inputs. Recently, the Vision Transformer (ViT) has exhibited a strong ability to capture long-range dependencies, and many researchers have attempted to apply ViT to image denoising tasks. However, a real-world image is an isolated frame, which makes the ViT build long-range dependencies only over internal patches: dividing the image into patches disarranges the noise pattern and breaks gradient continuity. In this paper, we propose to address this issue with a continuous wavelet sliding transformer that builds frequency correspondences under real-world scenes, called DnSwin. Specifically, we first extract bottom features from the noisy input image with a CNN encoder. The key to DnSwin is separating high-frequency and low-frequency information from the features and building frequency dependencies. To this end, we propose a wavelet sliding-window transformer that utilizes the discrete wavelet transform, self-attention and the inverse discrete wavelet transform to extract deep features. Finally, we reconstruct the deep features into denoised images using a CNN decoder. Both quantitative and qualitative evaluations on real-world denoising benchmarks demonstrate that the proposed DnSwin performs favorably against state-of-the-art methods.
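The following toy example uses a single-level discrete wavelet transform to separate low- and high-frequency content and then reconstruct the input, analogous to the DWT/IDWT steps inside DnSwin's wavelet sliding-window transformer. Note that DnSwin applies these operations to learned deep features with self-attention in between, which is not reproduced here.

```python
import numpy as np
import pywt

def split_frequencies(image, wavelet="haar"):
    """Single-level 2-D DWT: returns the low-frequency band and the
    (horizontal, vertical, diagonal) high-frequency bands."""
    low, highs = pywt.dwt2(image, wavelet)
    return low, highs

def merge_frequencies(low, highs, wavelet="haar"):
    """Inverse DWT reconstructing the image from the frequency bands."""
    return pywt.idwt2((low, highs), wavelet)

img = np.random.rand(64, 64).astype(np.float32)
low, highs = split_frequencies(img)
rec = merge_frequencies(low, highs)
assert np.allclose(rec, img, atol=1e-5)   # the split is perfectly invertible
```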
Learning a generalized prior for natural image restoration is an important yet challenging task. Early methods mostly involved handcrafted priors, including normalized sparsity, the L_0 gradient, the dark channel prior, etc. Recently, deep neural networks have been used to learn various image priors, but their generalization is not guaranteed. In this paper, we propose a novel approach that embeds a task-agnostic prior into a transformer. Our approach, named Task-Agnostic Prior Embedding (TAPE), consists of two stages, namely task-agnostic pre-training and task-specific fine-tuning, where the first stage embeds prior knowledge about natural images into the transformer and the second stage extracts this knowledge to assist downstream image restoration. Experiments on various types of degradation validate the effectiveness of TAPE. The image restoration performance in terms of PSNR is improved by as much as 1.45 dB, even outperforming task-specific algorithms. More importantly, TAPE shows the ability to disentangle generalized image priors from degraded images, which enjoy favorable transferability to unknown downstream tasks.
This paper presents a new variational inference framework for image restoration and a convolutional neural network (CNN) structure that can solve the restoration problems described by the proposed framework. Earlier CNN-based image restoration methods primarily focused on network architecture design or training strategies under non-blind scenarios where the degradation models are known or assumed. To move a step closer to real-world applications, CNNs have also been blindly trained on whole datasets including diverse degradations. However, the conditional distribution of a high-quality image given a diversely degraded one is too complicated to be learned by a single CNN. Therefore, some methods also provide additional prior information to train the CNN. Unlike previous approaches, we focus more on the restoration objective from a Bayesian perspective and on how to reformulate it. Specifically, our method relaxes the original posterior inference problem into better manageable sub-problems and thus behaves like a divide-and-conquer scheme. As a result, the proposed framework boosts the performance of several restoration problems compared with previous frameworks. Specifically, our method delivers state-of-the-art performance on Gaussian denoising, real-world noise reduction, blind image super-resolution and JPEG compression artifact reduction.
Deep convolutional neural networks (CNNs) with attention mechanisms have achieved great success in dynamic scene deblurring. In most of these networks, only the features refined by attention maps are passed to the next layer, and the attention maps of different layers are isolated from one another, which does not fully exploit the attention information from different layers in the CNN. To address this issue, we introduce a new continuous cross-layer attention transmission (CCLAT) mechanism that can exploit the hierarchical attention information from all convolutional layers. Based on the CCLAT mechanism, we use a very simple attention module to construct a novel residual dense attention fusion block (RDAFB). In RDAFB, the attention map inferred from the output of the preceding RDAFB and those of each internal layer are directly connected to the subsequent ones, realizing the CCLAT mechanism. Taking RDAFB as the building block, we design an effective architecture named RDAFNet for dynamic scene deblurring. Experiments on benchmark datasets show that the proposed model outperforms state-of-the-art deblurring methods and demonstrate the effectiveness of the CCLAT mechanism. The source code is available at: https://github.com/xjmz6/rdafnet.
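A toy reading of cross-layer attention transmission: each block receives the attention map inferred by the previous block and fuses it with its own features when computing a new attention map. The module below is an assumed, much simpler stand-in for the actual RDAFB, intended only to illustrate how attention can be propagated across blocks.

```python
import torch
import torch.nn as nn

class SimpleAttnBlock(nn.Module):
    """Toy block that reuses the previous block's attention map."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.attn = nn.Sequential(nn.Conv2d(ch + 1, 1, 1), nn.Sigmoid())

    def forward(self, x, prev_attn):
        feat = torch.relu(self.conv(x))
        attn = self.attn(torch.cat([feat, prev_attn], dim=1))  # fuse earlier attention
        return feat * attn + x, attn

blocks = nn.ModuleList([SimpleAttnBlock(32) for _ in range(4)])
x = torch.randn(1, 32, 64, 64)
attn = torch.ones(1, 1, 64, 64)          # initial (uniform) attention map
for blk in blocks:
    x, attn = blk(x, attn)               # attention is transmitted block to block
```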
Image deraining is an important and fundamental computer vision task that aims to remove the rain streaks and accumulation from an image or video captured on a rainy day. Existing deraining methods usually make heuristic assumptions about the rain model, which forces them to adopt complex optimization or iterative refinement to attain high recovery quality. This, however, leads to time-consuming methods and affects their effectiveness in handling rain patterns that deviate from the assumptions. In this paper, we propose a simple yet efficient deraining method by formulating deraining as a predictive filtering problem without complex rain model assumptions. Specifically, we identify spatially-variant predictive filtering (SPFilt), which adaptively predicts proper kernels via a deep network to filter different individual pixels. Since the filtering can be implemented via well-accelerated convolutions, our method can be significantly efficient. We further propose EfDeRain+, which contains three main contributions to address residual rain traces, multi-scale rain streaks and diverse rain patterns without harming efficiency. First, we propose uncertainty-aware cascaded predictive filtering (UC-PFilt), which can identify the difficulty of reconstructing clean pixels via the predicted kernels and remove residual rain traces effectively. Second, we design weight-sharing multi-scale dilated filtering (WS-MS-DFilt) to handle multi-scale rain streaks without harming efficiency. Third, to bridge the gap across diverse rain patterns, we propose a novel data augmentation method (i.e., RainMix) to train our deep models. By combining all contributions with a comprehensive analysis of different variants, our final method outperforms baseline methods on four single-image deraining datasets and one video deraining dataset in terms of both recovery quality and speed.
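The core of predictive filtering can be sketched as follows: a network (omitted here) predicts one k×k kernel per pixel, and each output pixel is the inner product of its predicted kernel with its local neighborhood, implemented with unfold so it maps onto fast convolution-like primitives. The function name and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def apply_predicted_kernels(image, kernels, k=3):
    """Spatially-variant filtering: each pixel gets its own k x k kernel.

    image:   (B, C, H, W)
    kernels: (B, k*k, H, W), e.g. softmax-normalized along dim=1
    """
    B, C, H, W = image.shape
    patches = F.unfold(image, k, padding=k // 2)        # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H, W)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)  # (B, C, H, W)

img = torch.rand(1, 3, 64, 64)
kernels = torch.softmax(torch.rand(1, 9, 64, 64), dim=1)  # one 3x3 kernel per pixel
derained = apply_predicted_kernels(img, kernels, k=3)
```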
Recently, convolutional neural networks (CNNs) have been widely used for image denoising. Existing methods benefit from residual learning and achieve high performance. Much research has focused on optimizing the network architecture of CNNs while ignoring the limitations of residual learning. This paper identifies two such limitations. One is that residual learning focuses on estimating the noise and thus neglects image information. The other is that image self-similarity is not effectively considered. This paper proposes a compositional denoising network (CDN), whose image information path (IIP) and noise estimation path (NEP) address these two problems, respectively. The IIP is trained in an image-to-image manner to extract image information. For the NEP, image self-similarity is exploited from the training perspective: this similarity-based training constrains the NEP to output similar estimated noise distributions for different image patches corrupted by a specific kind of noise. Finally, the image information and the noise distribution information are comprehensively combined for image denoising. Experiments show that CDN achieves state-of-the-art results on both synthetic and real-world image denoising. Our code will be released at https://github.com/jiahongz/cdn.
Burst super-resolution (SR) offers the possibility of restoring rich details from low-quality images. However, since low-resolution (LR) images in practical applications suffer multiple complicated and unknown degradations, existing networks designed for non-blind settings (e.g., bicubic downsampling) usually incur a severe performance drop when recovering high-resolution (HR) images. Moreover, handling multiple misaligned noisy raw inputs is also challenging. In this paper, we address the problem of reconstructing HR images from raw burst sequences acquired by modern handheld devices. The central idea is a kernel-guided strategy that solves burst SR in two steps: kernel modeling and HR restoration. The former estimates burst kernels from the raw inputs, while the latter predicts the super-resolved image based on the estimated kernels. In addition, we introduce a kernel-aware deformable alignment module that can effectively align the raw images by taking the blurriness prior into account. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method achieves state-of-the-art performance on the burst SR problem.
Existing video denoising methods typically assume that noisy videos are degraded from clean ones by adding Gaussian noise. However, deep models trained under this degradation assumption inevitably perform poorly on real videos due to degradation mismatch. Although some studies attempt to train deep models on noisy and noise-free video pairs captured by cameras, such models only work well for the specific cameras used and generalize poorly to other videos. In this paper, we propose to lift this limitation and focus on the problem of general real video denoising, with the goal of generalizing to unseen real-world videos. We tackle this problem by first investigating the common behaviors of video noise and observing two important characteristics: 1) downscaling helps reduce the noise level in the spatial domain; and 2) the information from adjacent frames helps remove the noise of the current frame in the temporal domain. Motivated by these two observations, we propose a multi-scale recurrent architecture that makes full use of both characteristics. Second, we propose a synthetic real-noise degradation model that randomly shuffles different noise types to train the denoising model. With a synthesized and enriched degradation space, our degradation model helps bridge the distribution gap between training data and real-world data. Extensive experiments demonstrate that, compared with existing methods, our proposed method achieves state-of-the-art performance and better generalization ability on both synthetic Gaussian denoising and practical real video denoising.
As a common weather, rain streaks adversely degrade the image quality. Hence, removing rains from an image has become an important issue in the field. To handle such an ill-posed single image deraining task, in this paper, we specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks and has clear interpretability. In specific, we first establish a RCD model for representing rain streaks and utilize the proximal gradient descent technique to design an iterative algorithm only containing simple operators for solving the model. By unfolding it, we then build the RCDNet in which every network module has clear physical meanings and corresponds to each operation involved in the algorithm. This good interpretability greatly facilitates an easy visualization and analysis on what happens inside the network and why it works well in inference process. Moreover, taking into account the domain gap issue in real scenarios, we further design a novel dynamic RCDNet, where the rain kernels can be dynamically inferred corresponding to input rainy images and then help shrink the space for rain layer estimation with few rain maps so as to ensure a fine generalization performance in the inconsistent scenarios of rain types between training and testing data. By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted, faithfully characterizing the features of both rain and clean background layers, and thus naturally lead to better deraining performance. Comprehensive experiments substantiate the superiority of our method, especially on its well generality to diverse testing scenarios and good interpretability for all its modules. Code is available in \emph{\url{https://github.com/hongwang01/DRCDNet}}.
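The rain convolutional dictionary model above can be sketched as synthesizing the rain layer by convolving a small set of rain kernels with their corresponding rain maps and summing, so a rainy image decomposes as background plus rain layer. The learned proximal operators of the unfolded algorithm are not shown, and the shapes below are assumptions.

```python
import torch
import torch.nn.functional as F

def rain_convolutional_dictionary(rain_maps, rain_kernels):
    """Synthesize the rain layer R = sum_k C_k * M_k from rain maps M and
    small rain kernels C (the generative model that RCDNet unfolds).

    rain_maps:    (B, K, H, W)
    rain_kernels: (K, 1, s, s)
    """
    K, _, s, _ = rain_kernels.shape
    # depthwise conv: each rain map is convolved with its own kernel, then summed
    rain = F.conv2d(rain_maps, rain_kernels, padding=s // 2, groups=K)
    return rain.sum(dim=1, keepdim=True)     # (B, 1, H, W) rain layer

rain_maps = torch.relu(torch.randn(1, 4, 64, 64))   # K = 4 rain maps
rain_kernels = torch.rand(4, 1, 9, 9)               # one 9x9 kernel per map
rain_layer = rain_convolutional_dictionary(rain_maps, rain_kernels)
# a rainy observation then decomposes as O = B + R (clean background plus rain layer)
```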
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates the real-world degradations, upon which the channel splitting vector is generated as the input for an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve the RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces with computation-free inference. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach can hardly maintain its performance in practical scenarios where the degradation process is unknown. Despite existing blind SR methods proposed to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
Guided filtering is a fundamental tool in computer vision and computer graphics, which aims to transfer structural information from a guidance image to a target image. Most existing methods construct filter kernels from the guidance alone, without considering the mutual dependency between the guidance and the target. However, since there typically exist significantly different edges in the two images, simply transferring all structural information from the guidance to the target leads to various artifacts. To cope with this problem, we propose an effective framework named deep attentional guided image filtering, whose filtering process can fully integrate the complementary information contained in both images. Specifically, we propose an attentional kernel learning module to generate dual sets of filter kernels from the guidance and the target, respectively, and then adaptively combine them by modeling the pixel-wise dependency between the two images. Meanwhile, we propose a multi-scale guided image filtering module to progressively generate the filtering result with the constructed kernels in a coarse-to-fine manner. Correspondingly, a multi-scale fusion strategy is introduced to reuse the intermediate results of the coarse-to-fine process. Extensive experiments show that the proposed framework compares favorably with state-of-the-art methods in a wide range of guided image filtering applications, such as guided super-resolution, cross-modality restoration, texture removal and semantic segmentation.
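An illustrative reading of the dual-kernel idea: one set of per-pixel kernels is built from the guidance, another from the target, and a pixel-wise gate (produced by the attentional module, omitted here) mixes them; the interface and shapes are assumptions. The combined kernels could then be applied per pixel in the same way as in the predictive-filtering sketch shown earlier in this list.

```python
import torch

def combine_guidance_target_kernels(kernels_guidance, kernels_target, gate):
    """Mix per-pixel kernels from the guidance and the target branches
    with a pixel-wise gate in [0, 1].

    kernels_guidance, kernels_target: (B, k*k, H, W)
    gate:                             (B, 1, H, W)
    """
    return gate * kernels_guidance + (1 - gate) * kernels_target

kg = torch.rand(1, 9, 32, 32)     # kernels built from the guidance image
kt = torch.rand(1, 9, 32, 32)     # kernels built from the target image
gate = torch.rand(1, 1, 32, 32)   # modeled pixel-wise dependency between the two
kernels = combine_guidance_target_kernels(kg, kt, gate)
```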
Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation within non-overlapping windows. However, each group of tokens is always drawn from a dense area of the image. This is considered a dense attention strategy since the interactions of tokens are restrained to dense regions. Obviously, this strategy could result in restricted receptive fields. To address this issue, we propose Attention Retractable Transformer (ART) for image restoration, which presents both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact and thus provides a wider receptive field. Furthermore, the alternating application of dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention on the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. We also provide code and models at the website https://github.com/gladzhang/ART.
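The contrast between dense and sparse attention can be illustrated by how tokens are grouped before ordinary self-attention is applied: dense groups take non-overlapping local windows, whereas sparse groups take tokens sampled at a fixed stride so each group spans distant image locations. The grouping below is a sketch that assumes H and W are divisible by the window size/stride; the attention computation itself is omitted.

```python
import torch

def dense_windows(tokens, H, W, win=8):
    """Group tokens into non-overlapping local windows. tokens: (B, H*W, C)."""
    B, _, C = tokens.shape
    x = tokens.view(B, H // win, win, W // win, win, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, win * win, C)

def sparse_groups(tokens, H, W, stride=8):
    """Group tokens sampled at a fixed stride, so each group covers the
    whole image and yields a wide receptive field."""
    B, _, C = tokens.shape
    x = tokens.view(B, H // stride, stride, W // stride, stride, C)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, (H // stride) * (W // stride), C)

tokens = torch.randn(2, 64 * 64, 96)        # flattened tokens of a 64x64 feature map
dense = dense_windows(tokens, 64, 64)       # (128, 64, 96): local 8x8 windows
sparse = sparse_groups(tokens, 64, 64)      # (128, 64, 96): strided, image-spanning groups
```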
In various learning-based image restoration tasks, such as image denoising and image super-resolution, degradation representations are widely used to model the degradation process and handle complicated degradation patterns. However, they are less explored in learning-based image deblurring, as blur kernel estimation does not perform well in real-world challenging cases. We argue that degradation representations are particularly necessary for image deblurring, since blur patterns usually show much larger variations than noise patterns or high-frequency textures. In this paper, we propose a framework to learn spatially adaptive degradation representations of blurry images. A novel joint image reblurring and deblurring learning process is presented to improve the expressiveness of the degradation representations. To make the learned degradation representations effective in both reblurring and deblurring, we propose a multi-scale degradation injection network (MSDI-Net) to integrate them into neural networks. With this integration, MSDI-Net can adaptively handle various and complicated blurry patterns. Experiments on the GoPro and RealBlur datasets demonstrate that our proposed deblurring framework with the learned degradation representations outperforms state-of-the-art methods with appealing improvements. The code is released at https://github.com/dasongli1/learning_degradation.
While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models are prone to overfitting to the simplified AWGN model, which deviates severely from the complicated real-world noise model. In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with a more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and the in-camera signal processing pipeline are considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy to conveniently rectify the denoising result, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of the noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-art methods in terms of quantitative metrics and visual quality. The code has been made available at https://github.com/GuoShi28/CBDNet.
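The asymmetric learning on the noise-level map can be sketched as a weighted squared error in which under-estimation is penalized more heavily than over-estimation; the exact weighting constant and loss form used by CBDNet are not reproduced here, and alpha below is an illustrative value.

```python
import torch

def asymmetric_noise_loss(sigma_pred, sigma_gt, alpha=0.3):
    """Penalize under-estimation of the noise level (sigma_pred < sigma_gt)
    with weight 1 - alpha, and over-estimation with weight alpha (alpha < 0.5
    therefore suppresses under-estimation)."""
    diff = sigma_pred - sigma_gt
    weight = torch.where(diff < 0, 1 - alpha, alpha)
    return (weight * diff ** 2).mean()

sigma_pred = torch.rand(1, 1, 32, 32)   # estimated per-pixel noise level map
sigma_gt = torch.rand(1, 1, 32, 32)     # reference noise level map
loss = asymmetric_noise_loss(sigma_pred, sigma_gt)
```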