Classical image denoising methods exploit the principle of non-local self-similarity to recover image content from noisy observations. Current state-of-the-art methods instead use deep convolutional neural networks (CNNs) to effectively learn the mapping from noisy to clean images. Deep denoising CNNs exhibit high learning capacity and can integrate non-local information thanks to the large receptive fields produced by their many hidden layers. However, deep networks are also computationally complex and require large amounts of training data. To address these issues, this study aims to empower self-organized operational neural networks (Self-ONNs) with a novel neuron model that can achieve similar or better denoising performance with compact and shallow models. Recently, the concept of super neurons has been introduced, which augments the non-linear transformation of generative neurons with non-localized kernel locations to achieve an enlarged receptive field. This is a key achievement toward removing the need for deep network configurations. Since the integration of non-local information is known to benefit denoising, in this work we investigate the use of super neurons for both synthetic and real-world image denoising. We also discuss the practical issues of implementing the super neuron model on GPUs and propose a trade-off between the heterogeneity of non-localized operations and computational complexity. Our results demonstrate that, with the same width and depth, Self-ONNs with super neurons provide a significant boost in denoising performance over networks with generative and convolutional neurons. Moreover, the results show that Self-ONNs with super neurons can achieve competitive and superior performance, respectively, against well-known deep CNN denoisers on synthetic and real-world denoising.
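The non-localized-kernel idea behind super neurons can be illustrated in a few lines of numpy. This is a minimal sketch, not the paper's implementation: the function names (`conv2d_valid`, `super_neuron`) and the fixed integer shift are illustrative assumptions (in the actual model the shifts are learnable).

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2D correlation with 'valid' padding."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def super_neuron(x, kernel, shift):
    """Apply the kernel at a spatial offset from the usual centered
    location: a non-localized kernel location enlarges the effective
    receptive field without enlarging the kernel itself."""
    dy, dx = shift
    shifted = np.roll(np.roll(x, -dy, axis=0), -dx, axis=1)
    return conv2d_valid(shifted, kernel)
```

With `shift=(0, 0)` the super neuron degenerates to an ordinary convolutional (or, with a non-linear nodal term, generative) neuron, which is the comparison the abstract draws.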
translated by Google Translate
Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single-image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks but also be efficiently implemented by benefiting from GPU computing.
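The residual-learning strategy above can be sketched in numpy: the network predicts the noise map v ≈ y − x, and the clean estimate is recovered as x̂ = y − v. The `toy_residual_predictor` below is a hypothetical stand-in (a simple high-pass estimate) for the trained CNN, used only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_denoise(noisy, predict_residual):
    """Residual learning: the model outputs the residual (noise) map,
    and the clean estimate is x_hat = y - v."""
    return noisy - predict_residual(noisy)

def toy_residual_predictor(y):
    """Illustrative stand-in for a trained DnCNN: 'predict' the noise
    as the high-pass part of y (y minus a 3x3 box-filter smoothing)."""
    pad = np.pad(y, 1, mode="edge")
    smooth = sum(pad[i:i + y.shape[0], j:j + y.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return y - smooth

clean = np.zeros((8, 8))
noisy = clean + 0.1 * rng.standard_normal((8, 8))
denoised = residual_denoise(noisy, toy_residual_predictor)
```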
Image denoising remains a challenging problem in many computer-vision subdomains. Recent studies have shown significant improvements in the supervised setting. However, a few challenges, such as spatial fidelity and cartoon-like smoothness, remain unresolved or decisively overlooked. Our study proposes a simple yet efficient architecture for the denoising problem that addresses the aforementioned issues. The proposed architecture revisits the concept of modular concatenation instead of long and deeper cascaded connections to recover a cleaner approximation of the given image. We find that different modules can capture versatile representations, and a concatenated representation creates a richer subspace for low-level image restoration. The proposed architecture's number of parameters remains smaller than that of most previous networks, and it still achieves significant improvements over the current state-of-the-art networks.
Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, these two kinds of methods have their respective merits and drawbacks: model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming, requiring sophisticated priors for good performance; meanwhile, discriminative learning methods have fast testing speed, but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, a denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration brings considerable advantages when the denoiser is obtained via discriminative learning. However, the study of integration with a fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization methods to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but also can be used as a prior to deliver good performance for various low-level vision applications.
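The variable-splitting integration can be sketched as a half-quadratic-splitting loop in which the denoiser occupies the prior step. Everything here is a toy assumption for illustration: `box_denoiser` stands in for the learned CNN denoiser, and the data term is a simple binary-masking (inpainting-style) degradation rather than deblurring.

```python
import numpy as np

def box_denoiser(z):
    """Stand-in for a learned CNN denoiser: a 3x3 box filter."""
    pad = np.pad(z, 1, mode="edge")
    return sum(pad[i:i + z.shape[0], j:j + z.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def plug_and_play_hqs(y, mask, denoiser, mu=0.5, iters=20):
    """Half-quadratic splitting for y = mask * x: alternate a
    closed-form data-fidelity step with a denoiser step that plays
    the role of the image prior."""
    x = y.copy()
    z = y.copy()
    for _ in range(iters):
        # data step: argmin_x ||mask*x - y||^2 + mu*||x - z||^2
        # (mask is binary, so mask**2 == mask in the denominator)
        x = (mask * y + mu * z) / (mask + mu)
        # prior step: the plugged-in denoiser
        z = denoiser(x)
    return z
```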
Real-world image denoising is a practical image restoration problem that aims to obtain clean images from in-the-wild noisy inputs. Recently, the Vision Transformer (ViT) has exhibited a strong ability to capture long-range dependencies, and many researchers have attempted to apply ViT to image denoising tasks. However, a real-world image is an isolated frame that makes the ViT build long-range dependencies on internal patches, which divides the image into patches and disarranges noise patterns and gradient continuity. In this paper, we propose to resolve this issue with a continuous wavelet sliding transformer that builds frequency correspondences in the real world, called DnSwin. Specifically, we first extract bottom features from noisy input images using a CNN encoder. The key to DnSwin is separating high-frequency and low-frequency information from the features and building frequency dependencies. To this end, we propose a wavelet sliding-window transformer that utilizes the discrete wavelet transform, self-attention, and the inverse discrete wavelet transform to extract deep features. Finally, we reconstruct the deep features into a denoised image using a CNN decoder. Both quantitative and qualitative evaluations on real-world denoising benchmarks demonstrate that the proposed DnSwin performs favorably against the state-of-the-art methods.
Many image processing networks apply a single set of static convolutional kernels across the entire input image, which is sub-optimal for natural images, as they often consist of heterogeneous visual patterns. Recent work in classification, segmentation, and image restoration has demonstrated that dynamic kernels outperform static kernels at modeling local image statistics. However, these works often adopt per-pixel convolution kernels, which introduce high memory and computation costs. To achieve spatially-varying processing without significant overhead, we present Malleable Convolution (MalleConv), an efficient variant of dynamic convolution. The weights of MalleConv are dynamically produced by an efficient predictor network capable of generating content-dependent outputs at specific spatial locations. Unlike previous works, MalleConv generates a much smaller set of spatially-varying kernels from the input, which enlarges the network's receptive field and significantly reduces computation and memory costs. These kernels are then applied to full-resolution feature maps through an efficient slice-and-conv operator with minimal memory overhead. We further build an efficient denoising network using MalleConv, coined MalleNet. It achieves high-quality results without a very deep architecture; for example, it is 8.91$\times$ faster than the best denoising algorithm (SwinIR) while maintaining similar performance. We also show that a single MalleConv added to a standard convolution-based backbone can contribute to significantly reducing the computation cost or boosting image quality at a similar cost. Project page: https://yifanjiang.net/malleconv.html
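The difference from per-pixel dynamic convolution can be sketched as follows: instead of one kernel per pixel, a coarse grid of predicted kernels is applied tile by tile. This is a minimal illustrative sketch of the spatially-varying application step only; the name `tilewise_dynamic_conv` and the tile mechanics are assumptions, and the real slice-and-conv operator is an efficient fused implementation rather than Python loops.

```python
import numpy as np

def tilewise_dynamic_conv(x, kernels, tile):
    """Apply a coarse grid of predicted 3x3 kernels: each `tile`-sized
    region of `x` is filtered with its own kernel (kernels[ti, tj]),
    rather than predicting one kernel per pixel."""
    h, w = x.shape
    out = np.zeros_like(x)
    pad = np.pad(x, 1, mode="edge")
    for ti in range(h // tile):
        for tj in range(w // tile):
            k = kernels[ti, tj]
            for i in range(ti * tile, (ti + 1) * tile):
                for j in range(tj * tile, (tj + 1) * tile):
                    out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out
```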
Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubically downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to nonblindly deal with multiple degradations. To address these issues, we propose a general framework with a dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications.
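The dimensionality stretching strategy can be sketched in a few lines: a low-dimensional degradation code (e.g., PCA coefficients of the blur kernel) and the noise level are each stretched into constant maps of the LR image's spatial size and stacked with it as extra input channels. The function name and the example code values below are illustrative assumptions.

```python
import numpy as np

def stretch_degradation(lr, kernel_code, noise_level):
    """Dimensionality stretching (sketch): stretch each scalar of the
    degradation code and the noise level into an H x W constant map,
    then stack them with the LR image as input channels."""
    h, w = lr.shape
    maps = [lr[None]]
    for c in kernel_code:                      # e.g. PCA coefficients
        maps.append(np.full((1, h, w), c))
    maps.append(np.full((1, h, w), noise_level))
    return np.concatenate(maps, axis=0)        # shape: (1 + t + 1, H, W)
```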
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models, such as diffusion (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of trends in the field. This review ultimately aims to help researchers push the boundaries of DL applied to SR.
Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, superresolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
Blind image restoration (IR) is a common yet challenging problem in computer vision. Classical model-based methods and recent deep learning (DL)-based methods represent two different methodologies for this problem, each with its own merits and drawbacks. In this paper, we propose a novel blind image restoration method, aiming to integrate both their merits. Specifically, we construct a general Bayesian generative model for blind IR, which explicitly depicts the degradation process. In this proposed model, a pixel-wise non-i.i.d. Gaussian distribution is employed to fit the image noise. It is more flexible than the simple i.i.d. Gaussian or Laplacian distributions adopted in most conventional methods for handling the more complicated noise types contained in image degradation. To solve the model, we design a variational inference algorithm in which all the expected posterior distributions are parameterized as deep neural networks to increase their model capability. Notably, such an inference algorithm induces a unified framework to jointly deal with the tasks of degradation estimation and image restoration. Further, the degradation information estimated in the former task is utilized to guide the latter IR process. Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
Image super-resolution (SR) serves as a fundamental tool for the processing and transmission of multimedia data. Recently, Transformer-based models have achieved competitive performance in image SR. They divide images into fixed-size patches and apply self-attention on these patches to model long-range dependencies among pixels. However, this architectural design originated from high-level vision tasks and lacks design guidelines informed by SR knowledge. In this paper, we aim to design a new attention block whose insights come from the interpretation of the Local Attribution Map (LAM) for SR networks. Specifically, LAM presents a hierarchical importance map where the most important pixels are located in a fine area of a patch and some less important pixels are spread in a coarse area of the whole image. To access pixels in the coarse area, instead of using a very large patch size, we propose a lightweight Global Pixel Access (GPA) module that applies cross-attention with the most similar patch in an image. In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details. In addition, a Cascaded Patch Division (CPD) strategy is proposed to enhance the perceptual quality of recovered images. Extensive experiments suggest that our method outperforms state-of-the-art lightweight SR methods by a large margin. Code is available at https://github.com/passerer/HPINet.
Recently, convolutional neural networks (CNNs) have been widely used for image denoising. Existing methods benefit from residual learning and achieve high performance. Much research has focused on optimizing the network architectures of CNNs but has ignored the limitations of residual learning. This paper suggests two such limitations. One is that residual learning focuses on estimating noise, thus overlooking image information. The other is that image self-similarity is not effectively considered. This paper proposes a compositional denoising network (CDN), whose image information path (IIP) and noise estimation path (NEP) address these two problems, respectively. The IIP is trained with an image-to-image scheme to extract image information. For the NEP, image self-similarity is utilized from a training perspective. This similarity-based training method constrains the NEP to output similar estimated noise distributions for different image patches corrupted by a specific kind of noise. Finally, image information and noise distribution information are comprehensively considered for image denoising. Experiments show that CDN achieves state-of-the-art results on synthetic and real-world image denoising. Our code will be released at https://github.com/jiahongz/cdn.
Image super-resolution (SR) is one of the vital image processing methods that improve the resolution of an image in the field of computer vision. In the last two decades, significant progress has been made in the field of super-resolution, especially by utilizing deep learning methods. This survey is an effort to provide a detailed review of recent progress in single-image super-resolution from the perspective of deep learning, while also informing about the initial classical methods used for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, supervised-learning-based methods, unsupervised-learning-based methods, and domain-specific SR methods. We also introduce the problem of SR to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep-learning-based approaches to SR are evaluated using a reference dataset. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multiscale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, this survey concludes with future directions and trends in SR and open problems in SR to be addressed by researchers.
Objective: Despite numerous studies proposed for audio restoration in the literature, most of them focus on an isolated restoration problem such as denoising or dereverberation, ignoring other artifacts. Moreover, assuming a noisy or reverberant environment with a limited number of fixed signal-to-distortion ratio (SDR) levels is a common practice. However, real-world audio is often corrupted by a blend of artifacts such as reverberation, sensor noise, and background audio mixture with varying types, severities, and durations. In this study, we propose a novel approach for blind restoration of real-world audio signals by Operational Generative Adversarial Networks (Op-GANs) with temporal and spectral objective metrics to enhance the quality of the restored audio signal regardless of the type and severity of each artifact corrupting it. Methods: 1D Operational GANs are used with a generative neuron model optimized for blind restoration of any corrupted audio signal. Results: The proposed approach has been evaluated extensively over the benchmark TIMIT-RAR (speech) and GTZAN-RAR (non-speech) datasets corrupted with a random blend of artifacts, each with a random severity, to mimic real-world audio signals. Average SDR improvements of over 7.2 dB and 4.9 dB are achieved, respectively, which are substantial when compared with the baseline methods. Significance: This is a pioneer study in blind audio restoration with the unique capability of direct (time-domain) restoration of real-world audio whilst achieving an unprecedented level of performance for a wide SDR range and artifact types. Conclusion: 1D Op-GANs can achieve robust and computationally effective real-world audio restoration with significantly improved performance. The source codes and the generated real-world audio datasets are shared publicly with the research community in a dedicated GitHub repository.
Deep neural networks provide unprecedented performance gains in many real world problems in signal and image processing. Despite these gains, future development and practical deployment of deep networks is hindered by their blackbox nature, i.e., lack of interpretability, and by the need for very large training sets. An emerging technique called algorithm unrolling or unfolding offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are used widely in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention and is rapidly growing both in theoretic investigations and practical applications. The growing popularity of unrolled deep networks is due in part to their potential in developing efficient, high-performance and yet interpretable network architectures from reasonable size training sets. In this article, we review algorithm unrolling for signal and image processing. We extensively cover popular techniques for algorithm unrolling in various domains of signal and image processing including imaging, vision and recognition, and speech processing. By reviewing previous works, we reveal the connections between iterative algorithms and neural networks and present recent theoretical results. Finally, we provide a discussion on current limitations of unrolling and suggest possible future research directions.
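The sparse-coding origin of unrolling mentioned above can be sketched concretely: each "layer" of the unrolled network is one ISTA iteration, and in LISTA the per-iteration matrices and thresholds become learnable parameters. The sketch below is the plain (untrained) unrolled iteration under that assumption.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(y, A, lam=0.1, layers=50):
    """Unrolled ISTA for min_z 0.5*||y - A z||^2 + lam*||z||_1.
    Each layer computes z <- soft(z + (1/L) A^T (y - A z), lam/L);
    in LISTA these matrices and thresholds are trained per layer."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad
    z = np.zeros(A.shape[1])
    for _ in range(layers):
        z = soft_threshold(z + A.T @ (y - A @ z) / L, lam / L)
    return z
```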
Pre-training has marked numerous states of the art in high-level computer vision, but few attempts have been made to investigate how pre-training acts in image processing systems. In this paper, we present an in-depth study of image pre-training. To conduct this study on solid ground with practical value in mind, we first propose a generic, cost-effective Transformer-based framework for image processing. It yields highly competitive performance across a range of low-level tasks, though under constrained parameters and computational complexity. Then, based on this framework, we design a whole set of principled evaluation tools to seriously and comprehensively diagnose image pre-training in different tasks and uncover its effects on internal network representations. We find that pre-training plays strikingly different roles in low-level tasks. For example, pre-training introduces more local information to higher layers in super-resolution (SR), yielding significant performance gains, while pre-training hardly affects internal feature representations in denoising, resulting in modest gains. Further, we explore different pre-training methods, revealing that multi-task pre-training is more effective and data-efficient. All codes and models will be released at https://github.com/fenglinglwb/edt.
Deep convolutional neural networks (CNNs) have recently reached state-of-the-art handwritten text recognition (HTR) performance. However, recent research has shown that the learning performance of typical CNNs is limited, since they are homogeneous networks with a simple (linear) neuron model. Because of their heterogeneous network structure incorporating non-linear neurons, operational neural networks (ONNs) were recently proposed to address this drawback. Self-ONNs are self-organized variants of ONNs with generative neuron models that can generate any non-linear function using a Taylor approximation. In this study, to improve the state-of-the-art performance level in HTR, 2D self-organized operational (Self-ONN) layers are proposed at the core of a novel network model. Moreover, deformable convolutions, which have recently been shown to better tackle variations in writing style, are utilized in this study. Results on the IAM English dataset and the HADARA80P Arabic dataset show that the proposed model with self-organized operational layers significantly improves the character error rate (CER) and word error rate (WER). Compared with its counterpart CNNs, Self-ONNs reduce CER and WER by 1.2% and 3.4% on HADARA80P, and by 0.199% and 1.244% on the IAM dataset. Results on the benchmark IAM demonstrate that the proposed model with the operational layers of Self-ONNs outperforms recent deep CNN models by a significant margin, while the use of Self-ONNs with deformable convolutions demonstrates exceptional results.
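The generative-neuron idea referenced above (a nodal transformation learned via Taylor approximation) reduces to a one-liner for a single input. This is a minimal sketch, not the layer implementation; the function name is illustrative.

```python
import numpy as np

def generative_neuron(x, w):
    """Generative neuron (sketch): replace the fixed linear kernel with
    a truncated Taylor (Maclaurin) series, out = sum_q w_q * x**(q+1),
    so the nodal non-linearity itself is learned via the weights w."""
    return sum(wq * x ** (q + 1) for q, wq in enumerate(w))
```

With `w = [1.0]` the neuron degenerates to an ordinary linear (convolutional) neuron; higher-order terms let it approximate arbitrary non-linear nodal functions.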
We propose an image super-resolution (SR) method using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing the recursion depth can improve performance without introducing new parameters for additional convolutions. Despite these advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding/vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive supervision and skip connections. Our method outperforms previous methods by a large margin.
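The parameter-sharing recursion can be sketched in numpy: the same 3x3 kernel is applied repeatedly, so the receptive field grows with depth while the parameter count stays fixed, and every recursion depth emits a prediction (the hook for recursive supervision). The function name, the `tanh` non-linearity, and the single-channel setting are illustrative assumptions.

```python
import numpy as np

def recursive_layer(x, w, recursions=16):
    """DRCN-style recursion (sketch): apply the SAME 3x3 kernel `w` at
    every recursion; collect each depth's output so that every depth
    can be supervised (recursive supervision)."""
    outputs = []
    h = x
    for _ in range(recursions):
        pad = np.pad(h, 1, mode="edge")
        h = np.tanh(sum(pad[i:i + x.shape[0], j:j + x.shape[1]] * w[i, j]
                        for i in range(3) for j in range(3)))
        outputs.append(h)
    return outputs
```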
Recent progress on Transformer and multi-layer perceptron (MLP) models has provided new network architectural designs for computer vision tasks. Although these models have proven effective in many vision tasks such as image recognition, challenges remain in adapting them to low-level vision. The inflexibility in supporting high-resolution images and the limitations of local attention are perhaps the main bottlenecks for using Transformers and MLPs in image restoration. In this work, we present a multi-axis MLP-based architecture, called MAXIM, that can serve as an efficient and flexible general-purpose vision backbone for image processing tasks. MAXIM uses a UNet-shaped hierarchical structure and supports long-range interactions enabled by spatially-gated MLPs. Specifically, MAXIM contains two MLP-based building blocks: a multi-axis gated MLP that allows for efficient and scalable spatial mixing of local and global visual cues, and a cross-gating block, an alternative to cross-attention, which accounts for cross-feature conditioning. Both of these modules are exclusively based on MLPs, but also benefit from being both global and "fully-convolutional", two properties that are desirable for image processing. Our extensive experimental results show that the proposed MAXIM model achieves state-of-the-art performance on more than ten benchmarks across a range of image processing tasks, including denoising, deblurring, deraining, dehazing, and enhancement, while requiring fewer or comparable numbers of parameters and FLOPs than competitive models.
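The spatial gating that gives these MLP blocks their long-range interactions can be sketched in one function: split the channels into two halves and gate one half with a spatial (token-axis) projection of the other. This is a minimal single-axis sketch under assumed shapes; the actual multi-axis block applies such gating along multiple axes with learned projections.

```python
import numpy as np

def spatial_gating(x, w):
    """Spatially-gated MLP unit (sketch): for token matrix x of shape
    (tokens, channels), split channels into halves u and v, project v
    across the token axis with w, and use the result to gate u."""
    u, v = np.split(x, 2, axis=-1)      # each (tokens, channels // 2)
    return u * (w @ v)                  # w: (tokens, tokens) spatial mix
```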
Deep convolutional neural networks (CNNs) are used for image denoising via automatically mining accurate structural information. However, most existing CNNs depend on enlarging the depth of the designed networks to obtain better denoising performance, which may cause training difficulties. In this paper, we propose a multi-stage image denoising CNN with wavelet transform (MWDCNN) consisting of three stages, i.e., a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB). The DCB uses dynamic convolution to dynamically adjust the parameters of several convolutions to make a trade-off between denoising performance and computational cost. The WEB uses a combination of a signal processing technique (i.e., wavelet transform) and discriminative learning to suppress noise and recover more detailed information in image denoising. To further remove redundant features, the RB is used to refine the obtained features so as to improve the features used for reconstructing clean images via an improved residual dense architecture. Experimental results show that the proposed MWDCNN outperforms some popular denoising methods in terms of quantitative and qualitative analyses. Code is available at https://github.com/hellloxiaotian/mwdcnn.
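The signal-processing half of such a wavelet block can be sketched with a one-level Haar transform plus soft shrinkage of the detail sub-bands. This is a classical wavelet-shrinkage sketch under an assumed orthonormal-up-to-scale Haar convention, not the paper's learned WEB; all function names are illustrative.

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar wavelet transform (averaging form)."""
    a = (x[0::2] + x[1::2]) / 2.0            # row averages
    d = (x[0::2] - x[1::2]) / 2.0            # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w))
    d = np.zeros((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[0::2], x[1::2] = a + d, a - d
    return x

def wavelet_shrink_denoise(noisy, thresh):
    """Transform, soft-threshold the detail sub-bands, invert."""
    ll, lh, hl, hh = haar2d(noisy)
    shrink = lambda v: np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)
    return ihaar2d(ll, shrink(lh), shrink(hl), shrink(hh))
```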