In single image deblurring, the "coarse-to-fine" scheme, i.e., gradually restoring the sharp image at different resolutions in a pyramid, is very successful in both traditional optimization-based methods and recent neural-network-based approaches. In this paper, we investigate this strategy and propose a Scale-recurrent Network (SRN-DeblurNet) for this deblurring task. Compared with the many recent learning-based approaches in [25], it has a simpler network structure, a smaller number of parameters, and is easier to train. We evaluate our method on large-scale deblurring datasets with complex motion. Results show that our method can produce better quality results than state-of-the-art methods, both quantitatively and qualitatively.
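The coarse-to-fine scheme described above can be sketched in a few lines: restore the coarsest scale first, then feed the upsampled estimate into the next finer scale. This is a minimal NumPy illustration with a placeholder restorer and an assumed three-level pyramid, not the SRN-DeblurNet architecture itself:

```python
import numpy as np

def downsample(img, factor):
    """Build a pyramid level by average-pooling with an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample2(img):
    """Nearest-neighbour 2x upsampling."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def coarse_to_fine(blurry, restore, n_scales=3):
    """Run `restore(scaled_input, previous_estimate)` from coarse to fine."""
    estimate = None
    for s in reversed(range(n_scales)):          # s = 2 (coarsest) ... 0 (finest)
        scaled = downsample(blurry, 2 ** s) if s > 0 else blurry
        prev = upsample2(estimate) if estimate is not None else scaled
        estimate = restore(scaled, prev)
    return estimate

blurry = np.random.rand(64, 64)
# Placeholder restorer: blend the current scale with the upsampled estimate.
result = coarse_to_fine(blurry, lambda x, p: 0.5 * (x + p))
```

In the recurrent variant, `restore` would additionally carry a hidden state across scales; the loop structure stays the same.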
Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine-learning-based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where the blur kernel is difficult to approximate or parameterize (e.g., object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and the corresponding ground-truth sharp images obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.
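The multi-scale loss mentioned above can be illustrated as a sum of per-level reconstruction errors over image pyramids. The 2x average pooling and the uniform level weights below are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def avg_pool2(img):
    """Halve resolution by 2x2 average pooling."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(
        h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_loss(pred, target, n_scales=3, weights=(1.0, 1.0, 1.0)):
    """Sum of per-scale MSE between the prediction and ground-truth pyramids."""
    total = 0.0
    for k in range(n_scales):
        total += weights[k] * np.mean((pred - target) ** 2)
        pred, target = avg_pool2(pred), avg_pool2(target)
    return total

sharp = np.random.rand(32, 32)
noisy = sharp + 0.1          # constant offset -> MSE of 0.01 at every scale
loss = multiscale_loss(noisy, sharp)
```

Supervising every pyramid level in this way is what forces the intermediate (coarse) outputs to be plausible images rather than arbitrary features.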
Deep convolutional neural networks (CNNs) with attention mechanisms have achieved great success in dynamic scene deblurring. In most of these networks, only the features refined by the attention maps are passed to the next layer, and the attention maps of different layers are separated from each other, which does not fully exploit the attention information from different layers of the CNN. To address this problem, we introduce a new continuous cross-layer attention transmission (CCLAT) mechanism that can exploit the hierarchical attention information from all convolutional layers. Based on the CCLAT mechanism, we use a very simple attention module to construct a novel residual dense attention fusion block (RDAFB). In RDAFB, the attention maps inferred from the outputs of the preceding RDAFB and of each layer are directly connected to subsequent ones, leading to the CCLAT mechanism. Taking RDAFB as the building block, we design an effective architecture for dynamic scene deblurring named RDAFNet. Experiments on benchmark datasets show that the proposed model outperforms state-of-the-art deblurring methods and demonstrate the effectiveness of the CCLAT mechanism. The source code is available at: https://github.com/xjmz6/rdafnet.
Although deep end-to-end learning methods have shown their superiority in removing non-uniform motion blur, there still exist major challenges with the current multi-scale and scale-recurrent models: 1) deconvolution/upsampling operations in the coarse-to-fine scheme result in expensive runtime; 2) simply increasing the model depth with finer-scale levels cannot improve the quality of deblurring. To tackle the above problems, we present a deep hierarchical multi-patch network inspired by Spatial Pyramid Matching to deal with blurry images via a fine-to-coarse hierarchical representation. To deal with the performance saturation w.r.t. depth, we propose a stacked version of our multi-patch model. Our proposed basic multi-patch model achieves state-of-the-art performance on the GoPro dataset while enjoying a 40× faster runtime compared to current multi-scale methods. With 30 ms to process an image at 1280×720 resolution, it is the first real-time deep motion deblurring model for 720p images at 30 fps. For stacked networks, significant improvements (over 1.2 dB) are achieved on the GoPro dataset by increasing the network depth. Moreover, by varying the depth of the stacked model, one can adapt the performance and runtime of the same network for different application scenarios.
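A rough sketch of the fine-to-coarse multi-patch idea: the image is cut into four, then two, then one patch(es), and each level's output is stitched back and combined with the input for the next, coarser level. The per-patch `net` below is a stand-in placeholder, not the actual encoder-decoder used at each level:

```python
import numpy as np

def split(img, rows, cols):
    """Cut an image into a rows x cols grid of patches."""
    h, w = img.shape
    return [img[i * h // rows:(i + 1) * h // rows,
                j * w // cols:(j + 1) * w // cols]
            for i in range(rows) for j in range(cols)]

def stitch(patches, rows, cols):
    """Reassemble a rows x cols grid of patches into one image."""
    return np.block([[patches[i * cols + j] for j in range(cols)]
                     for i in range(rows)])

def multi_patch(blurry, net):
    """Process patch grids fine-to-coarse, feeding each level's output onward."""
    feat = np.zeros_like(blurry)
    for rows, cols in [(2, 2), (2, 1), (1, 1)]:   # fine -> coarse
        outs = [net(p) for p in split(blurry + feat, rows, cols)]
        feat = stitch(outs, rows, cols)
    return feat

blurry = np.random.rand(32, 32)
out = multi_patch(blurry, lambda p: p * 0.9)       # toy linear "network"
```

Because every level works at full resolution on patches, no deconvolution/upsampling is needed, which is the source of the runtime advantage claimed above.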
Most existing deep-learning-based single image dynamic scene blind deblurring (SIDSBD) methods usually design deep networks to directly remove the spatially variant motion blur from one input motion-blurred image, without blur kernel estimation. In this paper, inspired by the projective motion path blur (PMPB) model and deformable convolution, we propose a novel constrained deformable convolutional network (CDCN) for efficient single image dynamic scene blind deblurring, which simultaneously achieves accurate spatially variant motion blur kernel estimation and high-quality image restoration from only one observed motion-blurred image. In our proposed CDCN, we first construct a novel multi-scale, multi-level, multi-input, multi-output (MSML-MIMO) encoder-decoder architecture to strengthen the feature extraction ability. Second, unlike methods that use multiple consecutive frames, a novel constrained deformable convolution reblurring (CDCR) strategy is proposed, in which deformable convolution is first applied to the blurred features of the single input motion-blurred image to learn the sampling points of the motion blur kernel for each pixel, analogous to the estimation of the motion density function of camera shake in the PMPB model; then a novel PMPB-based reblurring loss function is proposed to constrain the learned sampling points to converge, which makes the learned sampling points match the local motion trajectory of each pixel and promotes the accuracy of the spatially variant motion blur kernel estimation.
In the literature, coarse-to-fine or scale-recurrent approaches, i.e., progressively restoring a clean image from its low-resolution versions, have been successfully used for single image deblurring. However, a major disadvantage of existing methods is the need for paired data, i.e., sharp-blur image pairs of the same scene, whose acquisition is a complicated and cumbersome procedure. Moreover, due to strong supervision on loss functions, pre-trained models of such networks are strongly biased toward the blur experienced during training and tend to give sub-optimal performance when confronted with new blur kernels at inference time. To address the above issues, we propose unsupervised domain-specific deblurring using a scale-adaptive attention module (SAAM). Our network does not require supervised pairs for training, and the deblurring mechanism is primarily guided by adversarial loss, thus making our network suitable for a distribution of blur functions. Given a blurred input image, different resolutions of the same image are used in our model during training, and SAAM allows for effective flow of information across the resolutions. For network training at a specific scale, SAAM attends to lower-scale features as a function of the current scale. Different ablation studies show that our coarse-to-fine mechanism outperforms end-to-end unsupervised models, and that SAAM attends better than the attention models used in the literature. Qualitative and quantitative comparisons (on no-reference metrics) show that our method outperforms existing unsupervised methods.
In various learning-based image restoration tasks, such as image denoising and image super-resolution, degradation representations are widely used to model the degradation process and handle complicated degradation patterns. However, they are less explored in learning-based image deblurring, as blur kernel estimation cannot perform well in real-world challenging cases. We argue that degradation representations are particularly necessary for image deblurring, since blur patterns typically show much larger variations than noise patterns or high-frequency textures. In this paper, we propose a framework to learn spatially adaptive degradation representations of blurry images. A novel joint image reblurring and deblurring learning process is presented to improve the expressiveness of the degradation representations. To make the learned degradation representations effective in reblurring and deblurring, we propose a multi-scale degradation injection network (MSDI-Net) to integrate them into the neural networks. With this integration, MSDI-Net can adaptively handle various and complicated blurry patterns. Experiments on the GoPro and RealBlur datasets show that our proposed deblurring framework with the learned degradation representations outperforms state-of-the-art methods with appealing improvements. The code is released at https://github.com/dasongli1/learning_degradation.
Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result, the best performing methods rely on the alignment of nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high-level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high-frame-rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
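The supervision strategy described above, synthesizing a blurry frame by averaging consecutive sharp frames from a high-frame-rate clip while keeping the centre frame as ground truth, can be sketched as follows. The toy "clip" (a bright bar sliding one pixel per frame) and the window size are assumptions for illustration:

```python
import numpy as np

def synthesize_blur(frames):
    """Average a window of sharp frames to mimic a long exposure."""
    blurry = np.mean(frames, axis=0)
    sharp_gt = frames[len(frames) // 2]        # centre frame as ground truth
    return blurry, sharp_gt

# Toy high-fps clip: a vertical bar sliding one pixel per frame.
frames = []
for t in range(7):
    f = np.zeros((8, 8))
    f[:, t] = 1.0                              # bar at column t
    frames.append(f)

blurry, sharp = synthesize_blur(np.stack(frames))
```

The sliding bar is smeared uniformly over seven columns in `blurry`, while `sharp` keeps it at a single column, exactly the kind of (blurry, sharp) pair used for supervision.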
Multi-Scale and U-shaped Networks are widely used in various image restoration problems, including deblurring. Keeping in mind the wide range of applications, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block called NFResblock. It consists of a Fast Fourier Transform layer and a series of modified Non-Linear Activation Free Blocks. Based on these architectures and additions, we introduce NFResnet and NFResnet+, which are modified multi-scale and U-Net architectures, respectively. We also use three different loss functions to train these architectures: Charbonnier Loss, Edge Loss, and Frequency Reconstruction Loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values.
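Of the three losses listed above, the Charbonnier loss has a particularly compact form: a smooth L1 variant, sqrt((pred - target)^2 + eps^2), averaged over pixels. The epsilon value below is an assumed typical setting, not necessarily the one used in the paper:

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Smooth L1 penalty: sqrt((pred - target)^2 + eps^2), averaged over pixels."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

zero = np.zeros((4, 4))
one = np.ones((4, 4))
```

Near zero error the penalty behaves quadratically (thanks to eps), while for large errors it grows linearly like L1, which makes it less sensitive to outliers than plain MSE.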
In this paper, we examine the problem of real-world image deblurring and take into account two key factors for improving the performance of deep image deblurring models, namely training data synthesis and network architecture design. Deblurring models trained on existing synthetic datasets perform poorly on real blurry images due to domain shift. To reduce the domain gap between the synthetic and real domains, we propose a novel realistic blur synthesis pipeline to simulate the camera imaging process. As a result of our proposed synthesis method, existing deblurring models can be made more robust to handle real-world blur. Furthermore, we develop an effective deblurring model that simultaneously captures non-local dependencies and local context in the feature domain. In particular, we introduce multi-path transformer modules into the UNet architecture for rich multi-scale feature learning. Comprehensive experiments on three real-world datasets demonstrate that the proposed deblurring model performs better than state-of-the-art methods.
This paper tackles the problem of motion deblurring of dynamic scenes. Although end-to-end fully convolutional designs have recently advanced the state of the art in non-uniform motion deblurring, their performance-complexity trade-off is still sub-optimal. Existing approaches achieve a large receptive field by increasing the number of generic convolution layers and the kernel size, but this comes at the expense of an increase in model size and inference time. In this work, we propose an efficient pixel-adaptive and feature-attentive design for handling large blur variations across different spatial locations, and process each test image adaptively. We also propose an effective content-aware global-local filtering module that significantly improves performance by considering not only global dependencies but also by dynamically exploiting neighboring pixel information. We use a patch-hierarchical attentive architecture composed of the above module that implicitly discovers the spatial variations in the blur present in the input image and, in turn, performs local and global modulation of intermediate features. Extensive qualitative and quantitative comparisons with prior art on deblurring benchmarks demonstrate that our design offers significant improvements over the state-of-the-art in accuracy as well as speed.
Night photography typically suffers from both low-light and blurring issues due to the dim environment and the long exposures commonly used. While existing light enhancement and deblurring methods could deal with each problem individually, a cascade of such methods cannot work harmoniously to cope with the joint degradation of visibility and texture. Training an end-to-end network is also infeasible, as no paired data is available to characterize the coexistence of low light and blur. We address the problem by introducing a novel data synthesis pipeline that models realistic low-light blur degradation. With the pipeline, we present the first large-scale dataset for joint low-light enhancement and deblurring. The dataset, LOL-Blur, contains 12,000 low-blur/normal-sharp pairs with diverse darkness and motion blur in different scenarios. We further present an effective network, named LEDNet, to perform joint low-light enhancement and deblurring. Our network is unique as it is specially designed to consider the synergy between the two interconnected tasks. Both the proposed dataset and network provide a foundation for this challenging joint task. Extensive experiments demonstrate the effectiveness of our method on both synthetic and real-world datasets.
Blur artifacts can seriously degrade the visual quality of images, and numerous deblurring methods have been proposed for specific scenarios. However, in most real-world images, blur is caused by different factors, e.g., motion and defocus. In this paper, we address how different deblurring methods perform on general types of blur. For an in-depth performance evaluation, we construct a new large-scale multi-cause image deblurring dataset (called MC-Blur), including real-world and synthesized blurry images with mixed factors of blur. The images in the proposed MC-Blur dataset are collected using different techniques: convolving ultra-high-definition (UHD) sharp images with large kernels, averaging sharp images captured by a 1000-fps high-speed camera, adding defocus to images, and capturing real-world blurred images with various camera models. These results provide an overview of the advantages and disadvantages of current deblurring methods. Furthermore, we propose a new baseline model adapted to multiple causes of blur. By incorporating different weights for features at different levels, the proposed network derives more powerful features, with larger weights assigned to more important levels, thereby enhancing the feature representation. Extensive experimental results on the new dataset demonstrate the effectiveness of the proposed model for multi-cause blur scenarios.
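One of the synthesis routes listed above, convolving sharp images with large kernels, can be illustrated with a toy uniform box kernel; the dataset itself uses varied large kernels on UHD images, so the kernel choice and sizes here are assumptions:

```python
import numpy as np

def convolve_blur(img, k):
    """'Valid' 2D convolution of img with a k x k uniform (box) kernel."""
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()
    return out

sharp = np.random.rand(16, 16)
blurred = convolve_blur(sharp, 5)      # each output pixel averages a 5x5 patch
```

A larger `k` smears detail over a wider neighbourhood, mimicking a heavier blur; real kernels would be non-uniform (e.g., motion trajectories) rather than a box.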
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process and misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against state-of-the-art single image super-resolution approaches.
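The layer attention idea described above can be sketched as a softmax over a layer-by-layer correlation matrix that reweights each layer's features. Dimensions are toy-sized and the exact normalization and residual form are assumptions, not the HAN implementation:

```python
import numpy as np

def layer_attention(features):
    """features: (N, C, H, W) stack of N intermediate feature maps."""
    n = features.shape[0]
    flat = features.reshape(n, -1)                 # (N, C*H*W)
    corr = flat @ flat.T                           # (N, N) layer-layer correlations
    attn = np.exp(corr - corr.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)        # row-wise softmax over layers
    out = (attn @ flat).reshape(features.shape)    # reweighted hierarchy
    return out + features                          # residual connection

feats = np.random.rand(4, 2, 8, 8)                 # 4 layers of 2x8x8 features
out = layer_attention(feats)
```

The key departure from per-layer channel attention is that `attn` couples all layers: each output layer is a mixture of every layer's features, weighted by their correlation.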
Image restoration is the task of recovering a clean image from its degraded version. In most cases, the degradation is spatially varying, and it requires the restoration network to both localize and restore the affected regions. In this paper, we present a new approach suitable for handling the image-specific and spatially varying nature of degradation in images affected by practically occurring artifacts such as blur and rain streaks. Unlike existing methods that directly learn a mapping between the degraded and clean images, we decompose the restoration task into two stages: degradation localization and degraded-region-guided restoration. Our premise is to use the auxiliary task of degradation mask prediction to guide the restoration process. We demonstrate that a model trained on this auxiliary task contains vital region knowledge, which can be exploited to guide the training of the restoration network via an attentive knowledge distillation technique. Further, we propose mask-guided convolution and a global context aggregation module that focuses the restoration on degraded regions. The effectiveness of the proposed approach is demonstrated by achieving significant improvements over strong baselines.
Image restoration tasks demand a complex balance between spatial details and high-level contextualized information while recovering images. In this paper, we propose a novel synergistic design that can optimally balance these competing goals. Our main proposal is a multi-stage architecture that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps. Specifically, our model first learns the contextualized features using encoder-decoder architectures and later combines them with a high-resolution branch that retains local information. At each stage, we introduce a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight the local features. A key ingredient in such a multi-stage architecture is the information exchange between different stages. To this end, we propose a two-faceted approach where the information is not only exchanged sequentially from early to late stages, but lateral connections between feature processing blocks also exist to avoid any loss of information. The resulting tightly interlinked multi-stage architecture, named MPRNet, delivers strong performance gains on ten datasets across a range of tasks including image deraining, deblurring, and denoising. The source code and pre-trained models are available at https://github.com/swz30/MPRNet.
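The supervised attention step described above can be sketched as follows: a stage's features first predict a restored image, and a per-pixel attention map derived from that image then reweights the features passed to the next stage. The `to_image`/`to_attn` lambdas below are hypothetical placeholders standing in for the paper's learned 1×1 convolutions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def supervised_attention(features, blurry, to_image, to_attn):
    """Reweight stage features using an attention map from the stage's own output."""
    restored = to_image(features) + blurry         # stage-level image prediction
    attn = sigmoid(to_attn(restored))              # per-pixel attention in (0, 1)
    refined = features * attn + features           # attention-modulated features
    return refined, restored

feats = np.random.rand(4, 16, 16)                  # (C, H, W) stage features
blurry = np.random.rand(16, 16)
refeats, restored = supervised_attention(
    feats, blurry,
    to_image=lambda f: f.mean(axis=0),             # placeholder channel mix
    to_attn=lambda img: img - 0.5)                 # placeholder attention logits
```

Because `restored` is directly supervised against the ground truth during training, the attention map is implicitly forced to reflect where the current stage is already accurate.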
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding that of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules from conventional residual networks. The performance is further improved by expanding the model size while stabilizing the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images at different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and proved their effectiveness by winning the NTIRE 2017 Super-Resolution Challenge [26].
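The module removal mentioned above, dropping batch normalization from the residual blocks and scaling the residual branch before the skip addition, can be sketched with toy channel-mixing weights standing in for the two convolutions; the 0.1 scaling factor here is the commonly cited choice for large models, taken as an assumption:

```python
import numpy as np

def residual_block(x, w1, w2, scale=0.1):
    """x: (C, H, W); w1, w2: (C, C) channel-mixing weights (stand-in for convs)."""
    h = np.einsum('oc,chw->ohw', w1, x)    # first "conv"
    h = np.maximum(h, 0.0)                 # ReLU; no batch normalization
    h = np.einsum('oc,chw->ohw', w2, h)    # second "conv"
    return x + scale * h                   # scaled residual skip connection

x = np.random.rand(8, 4, 4)
w = np.eye(8)                              # identity weights for a sanity check
y = residual_block(x, w, w)                # reduces to x + 0.1 * x
```

Removing normalization keeps the feature range tied to the image range, and the small residual scale keeps very deep stacks of such blocks stable without it.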
The success of state-of-the-art video deblurring methods stems mainly from the implicit or explicit estimation of alignment among adjacent frames for latent video restoration. However, due to the influence of blur, estimating alignment information from blurred adjacent frames is not a trivial task. Inaccurate estimation will interfere with the restoration of subsequent frames. Instead of estimating alignment information, we propose a simple and effective deep Recurrent Neural Network with Multi-scale Bidirectional Propagation (RNN-MBP) to effectively propagate and gather information from unaligned neighboring frames for better video deblurring. Specifically, we build a Multi-scale Bidirectional Propagation (MBP) module with two U-Net RNN cells, which can directly exploit the inter-frame information from unaligned neighboring hidden states by integrating them at different scales. Moreover, to better evaluate the proposed algorithm and existing state-of-the-art methods on real-world blurry scenes, we also create a Real-World Blurry Video Dataset (RBVD) with a well-designed Digital Video Acquisition system (DVA) and use it as the training and evaluation dataset. Extensive experimental results demonstrate that the proposed RBVD dataset effectively improves the performance of existing algorithms on real-world blurry videos, and that the proposed algorithm performs favorably against state-of-the-art methods on three typical benchmarks. The code is available at https://github.com/XJTU-CVLAB-LOWLEVEL/RNN-MBP.
Existing methods for video interpolation rely heavily on deep convolutional neural networks and thus suffer from their intrinsic limitations, such as content-agnostic kernel weights and a restricted receptive field. To address these issues, we propose a Transformer-based video interpolation framework that allows content-aware aggregation weights and considers long-range dependencies with the self-attention operations. To avoid the high computational cost of global self-attention, we introduce the concept of local attention into video interpolation and extend it to the spatial-temporal domain. Furthermore, we propose a space-time separation strategy to save memory usage, which also improves performance. In addition, we develop a multi-scale frame synthesis scheme to fully realize the potential of Transformers. Extensive experiments demonstrate that the proposed model performs favorably against state-of-the-art methods, both quantitatively and qualitatively, on a variety of benchmark datasets.