Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and obtained remarkable performance. However, most of the existing CNN-based SISR methods mainly focus on wider or deeper architecture design, neglecting to explore the feature correlations of intermediate layers, hence hindering the representational power of CNNs. To address this issue, in this paper, we propose a second-order attention network (SAN) for more powerful feature expression and feature correlation learning. Specifically, a novel trainable second-order channel attention (SOCA) module is developed to adaptively rescale the channel-wise features by using second-order feature statistics for more discriminative representations. Furthermore, we present a non-locally enhanced residual group (NLRG) structure, which not only incorporates non-local operations to capture long-distance spatial contextual information, but also contains repeated local-source residual attention groups (LSRAG) to learn increasingly abstract feature representations. Experimental results demonstrate the superiority of our SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.
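As a rough illustration of the second-order channel attention idea, the PyTorch sketch below pools a channel descriptor from the covariance matrix of the features rather than from a plain global average. The module layout and the omission of the paper's covariance normalization (matrix square root) are simplifying assumptions, not the authors' released SOCA code.

```python
# Minimal sketch of second-order channel attention (assumed, simplified).
import torch
import torch.nn as nn

class SecondOrderChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.view(b, c, h * w)                    # flatten spatial positions
        feat = feat - feat.mean(dim=2, keepdim=True)  # center per channel
        cov = torch.bmm(feat, feat.transpose(1, 2)) / (h * w - 1)  # (b, c, c) covariance
        desc = cov.mean(dim=2).view(b, c, 1, 1)       # second-order channel descriptor
        return x * self.fc(desc)                      # rescale channel-wise features

x = torch.randn(2, 64, 24, 24)
print(SecondOrderChannelAttention(64)(x).shape)       # torch.Size([2, 64, 24, 24])
```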
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
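For reference, a minimal sketch (assumed, not the authors' released code) of the channel attention and residual block pattern described above: global average pooling squeezes a channel descriptor, a small bottleneck predicts per-channel scales, and a short skip connection wraps the block.

```python
# Sketch of squeeze-and-excitation style channel attention inside a residual block.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # squeeze: global average pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.body(x)                          # rescale channel-wise features

class ResidualChannelAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)                          # short skip connection

print(ResidualChannelAttentionBlock(64)(torch.randn(1, 64, 32, 32)).shape)
```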
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
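A rough sketch of the layer attention idea, under the assumption that hierarchical features from N blocks are stacked along an extra "layer" dimension; pairwise correlations between layers are turned into softmax weights that re-emphasize the stacked features. The learnable residual scale is an illustrative detail, not necessarily the paper's exact formulation.

```python
# Assumed sketch of a layer attention module over stacked hierarchical features.
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))      # learnable residual scale

    def forward(self, feats):                          # feats: (B, N, C, H, W)
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                    # (B, N, C*H*W)
        attn = torch.softmax(torch.bmm(flat, flat.transpose(1, 2)), dim=-1)  # (B, N, N)
        out = torch.bmm(attn, flat).view(b, n, c, h, w)
        return self.scale * out + feats                # emphasize correlated layers

feats = torch.randn(1, 8, 64, 16, 16)                  # features from 8 intermediate blocks
print(LayerAttention()(feats).shape)
```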
A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of the wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.
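A compact sketch of a residual dense block as described above: densely connected convolutions, 1x1 local feature fusion over all preceding features, and a local residual connection. The layer count and growth rate are illustrative choices, not the paper's exact configuration.

```python
# Assumed sketch of a residual dense block (RDB) with local feature fusion.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # local feature fusion: 1x1 conv over the concatenation of all preceding features
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
        return x + self.fusion(torch.cat(feats, dim=1))    # local residual learning

print(ResidualDenseBlock()(torch.randn(1, 64, 32, 32)).shape)
```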
By exploiting large-kernel decomposition and attention mechanisms, convolutional neural networks (CNNs) can compete with Transformer-based methods on many high-level computer vision tasks. However, owing to the advantage of long-range modeling, Transformers with self-attention still dominate low-level vision, including super-resolution tasks. In this paper, we propose a CNN-based multi-scale attention network (MAN), which consists of multi-scale large kernel attention (MLKA) and a gated spatial attention unit (GSAU), to improve the performance of convolutional SR networks. In our MLKA, we rectify LKA with multi-scale and gating schemes to obtain abundant attention maps at various granularity levels, thereby jointly aggregating global and local information and avoiding potential blocking artifacts. In GSAU, we integrate a gate mechanism and spatial attention to remove unnecessary linear layers and aggregate informative spatial context. To confirm the effectiveness of our designs, we evaluate MAN with multiple complexities by simply stacking different numbers of MLKA and GSAU blocks. Experimental results show that our MAN can achieve varied trade-offs between state-of-the-art performance and computation. Code is available at https://github.com/icandle/man.
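As a hedged illustration of the large-kernel-attention-plus-gate idea, the sketch below reduces MLKA/GSAU to a single-scale branch: a decomposed large kernel (depth-wise, dilated depth-wise, point-wise convolutions) produces an attention map, and a 1x1 gate modulates it spatially. The exact multi-scale grouping and gating arrangement of the paper are not reproduced here.

```python
# Assumed single-scale sketch of gated large kernel attention.
import torch
import torch.nn as nn

class GatedLargeKernelAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # large-kernel decomposition: depth-wise conv + dilated depth-wise conv + point-wise conv
        self.lka = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2, groups=channels),
            nn.Conv2d(channels, channels, 7, padding=9, dilation=3, groups=channels),
            nn.Conv2d(channels, channels, 1),
        )
        self.gate = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.lka(x)                                  # attention map from a large effective kernel
        return x * attn * torch.sigmoid(self.gate(x))       # gated spatial modulation

print(GatedLargeKernelAttention(48)(torch.randn(1, 48, 32, 32)).shape)
```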
Single image super-resolution (SISR) is an ill-posed problem that aims to obtain a high-resolution (HR) output from a low-resolution (LR) input, during which extra high-frequency information should be added to improve perceptual quality. Existing SISR works mainly operate in the spatial domain by minimizing the mean squared reconstruction error. Despite high peak signal-to-noise ratio (PSNR) results, it is difficult to determine whether the model correctly adds the desired high-frequency details. Some residual-based structures have been proposed to guide the model toward high-frequency features implicitly. However, since the interpretability of spatial-domain metrics is limited, how to verify the fidelity of these artificial details remains a problem. In this paper, we propose an intuitive pipeline from the frequency-domain perspective to solve this problem. Inspired by existing frequency-domain works, we convert images into discrete cosine transform (DCT) blocks and then reorganize them to obtain DCT feature maps, which serve as the input and target of our model. A specialized pipeline is designed accordingly, and we further propose a frequency loss function that fits the nature of the frequency-domain task. Our SISR method in the frequency domain can explicitly learn high-frequency information, providing fidelity and good perceptual quality for SR images. We further observe that our model can be merged with other spatial super-resolution models to improve the quality of their original SR outputs.
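A hedged sketch of the block-DCT front end described above: each 8x8 block of a grayscale image is DCT-transformed and its 64 coefficients are laid out as 64 channels of a downscaled feature map. The block size and channel layout are assumptions for illustration, not the paper's exact preprocessing.

```python
# Assumed sketch: convert an image into block-DCT feature maps.
import numpy as np
from scipy.fft import dctn

def image_to_dct_feature_map(img: np.ndarray, block: int = 8) -> np.ndarray:
    """img: (H, W) grayscale, H and W divisible by `block`.
    Returns a (block*block, H//block, W//block) feature map."""
    h, w = img.shape
    feat = np.empty((block * block, h // block, w // block), dtype=np.float32)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
            feat[:, i // block, j // block] = coeffs.ravel()   # 64 frequencies -> 64 channels
    return feat

img = np.random.rand(64, 64).astype(np.float32)
print(image_to_dct_feature_map(img).shape)   # (64, 8, 8)
```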
Convolutional neural networks have enabled remarkable progress in single image super-resolution (SISR) over the past decade. Among recent advances in SISR, attention mechanisms are crucial for high-performance SR models. However, it is still unclear why the attention mechanism works in SISR. In this work, we attempt to quantify and visualize attention mechanisms in SISR and show that not all attention modules are equally beneficial. We then propose the attention in attention network (A$^2$N) for more efficient and accurate SISR. Specifically, A$^2$N consists of a non-attention branch and a coupled attention branch. A dynamic attention module is proposed to generate weights for these two branches to suppress unwanted attention adjustments dynamically, where the weights change adaptively according to the input features. This allows attention modules to specialize on examples where they are beneficial, without incurring penalties otherwise, thus greatly improving the capacity of the attention network with only a small parameter overhead. Experimental results show that our final model A$^2$N achieves superior trade-off performance compared with state-of-the-art networks of similar size. Code is available at https://github.com/haoyuc/a2n.
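A minimal sketch of the attention-in-attention idea: an attention branch and a non-attention branch share a convolutional body, and a small dynamic module predicts input-dependent mixing weights for the two. The branch structure and weighting details are simplified assumptions, not the released A$^2$N code.

```python
# Assumed sketch of a block mixing attention and non-attention branches with dynamic weights.
import torch
import torch.nn as nn

class AttentionInAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Conv2d(channels, channels, 3, padding=1)     # shared branch body
        self.attn = nn.Sequential(                                  # attention branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.dynamic = nn.Sequential(                               # predicts 2 mixing weights
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 2),
        )

    def forward(self, x):
        w = torch.softmax(self.dynamic(x), dim=1)                   # (B, 2), input-dependent
        w_plain = w[:, 0].view(-1, 1, 1, 1)
        w_attn = w[:, 1].view(-1, 1, 1, 1)
        plain = self.body(x)                                        # non-attention branch
        attended = plain * self.attn(x)                             # attention branch
        return x + w_plain * plain + w_attn * attended

print(AttentionInAttentionBlock(40)(torch.randn(2, 40, 24, 24)).shape)
```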
Single image super-resolution (SISR), as a traditional ill-posed inverse problem, has been greatly revitalized by the recent development of convolutional neural networks (CNNs). These CNN-based methods generally map a low-resolution image to its corresponding high-resolution version with sophisticated network structures and loss functions, showing impressive performance. This paper provides a new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization. A novel iterative super-resolution network (ISRN) is proposed on top of iterative optimization. We first analyze the observation model of the image SR problem and motivate a feasible solution by mimicking and fusing each iteration in a more general and effective manner. Considering the drawbacks of batch normalization, we propose a feature normalization (F-Norm, FN) method to regulate the features in the network. Furthermore, a novel block with FN, termed FNB, is developed to improve the network representation. A residual-in-residual structure is proposed to form a very deep network, in which FNBs are fused with long skip connections for better information delivery and a more stable training phase. Extensive experimental results on testing benchmarks with bicubic (BI) degradation show that our ISRN not only recovers more structural information but also achieves competitive or better PSNR/SSIM results with much fewer parameters compared with other works. In addition to BI, we also simulate real-world degradations with blurring (BD) and noise (DN). Both ISRN and its extension ISRN+ perform better than other methods under the BD and DN degradation models.
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods have been proposed to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from the perspective of image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
As an ill-posed problem, single image super-resolution (SISR) has been widely studied in recent years. The main task of SISR is to recover the information lost in the degradation procedure. According to the Nyquist sampling theorem, the degradation leads to aliasing effects and makes the correct textures of a low-resolution (LR) image hard to restore. In practice, there exist correlations and self-similarities among the neighboring patches of natural images. This paper takes self-similarity into account and proposes a hierarchical image super-resolution network (HSRNet) to suppress the influence of aliasing. We consider the SISR problem from an optimization perspective and propose an iterative solution pattern based on the half-quadratic splitting (HQS) method. To explore local texture priors, we design a hierarchical exploration block (HEB) that progressively enlarges the receptive field. In addition, a multi-level spatial attention (MSA) module is designed to obtain the relations of neighboring features and enhance high-frequency information, which plays a key role in the visual experience. Experimental results show that HSRNet achieves better quantitative and visual performance than other works and relieves aliasing more effectively.
Deep convolutional neural networks have been shown to be effective for SISR in recent years. On the one hand, residual connections and dense connections have been widely used to ease forward information and backward gradient flows and to boost performance. However, current methods use residual connections and dense connections separately in most network layers, in a sub-optimal way. On the other hand, although various networks and methods have been designed to improve computational efficiency, save parameters, or utilize the training data of multiple scale factors to boost performance, performing super-resolution in the HR space either incurs a high computational cost or cannot share parameters between models of different scale factors to save parameters and inference time. To address these challenges, we propose an efficient single image super-resolution network using dual-path connections with multiple scale learning, named EMSRDPN. By introducing dual-path connections into EMSRDPN, residual connections and dense connections are used in an integrated way in most network layers. Dual-path connections have the benefits of both reusing common features via residual connections and exploring new features via dense connections, so as to learn good representations for SISR. To exploit the feature correlations of multiple scale factors, EMSRDPN shares all network units in the LR space among different scale factors to learn shared features, and uses only a separate reconstruction unit for each scale factor. This allows the training data of multiple scale factors to help each other and further boost performance, while saving parameters and supporting shared inference across multiple scale factors to improve efficiency. Experiments show that EMSRDPN achieves better performance and comparable or better parameter and inference efficiency compared with SOTA methods.
CNNs with strong learning abilities are widely chosen to resolve the super-resolution problem. However, CNNs rely on deeper network architectures to improve image super-resolution performance, which may increase computational cost. In this paper, we propose an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture, which fully fuses deep and wide channel features to extract more accurate low-frequency information via the correlations of different channels in single image super-resolution (SISR). A signal enhancement operation in ESRGCNN is also useful for inheriting longer-range contextual information to resolve long-term dependency. An adaptive upsampling operation is incorporated into the CNN to obtain an image super-resolution model that handles low-resolution images of different sizes. Extensive experiments report that our ESRGCNN surpasses the state of the art in terms of SISR performance, complexity, execution speed, image quality evaluation, and visual effect. Code is available at https://github.com/hellloxiaotian/esrgcnn.
Transformer-based methods have achieved impressive image restoration performance compared with CNN-based methods due to their ability to model long-range dependencies. However, advanced methods like SwinIR adopt window-based, local attention strategies to balance performance and computational overhead, which restricts employing a large receptive field to capture global information and establish long-range dependencies in the early layers. To further improve the efficiency of capturing global information, in this work we propose SwinFIR, which extends SwinIR by replacing components with a fast Fourier convolution (FFC) component whose receptive field covers the entire image. We also revisit other advanced techniques, i.e., data augmentation, pre-training, and feature ensemble, to improve image reconstruction. Our feature ensemble method substantially enhances model performance without increasing training or testing time. We apply our algorithm to multiple popular large-scale benchmarks and achieve state-of-the-art performance compared with existing methods. For example, our SwinFIR achieves a PSNR of 32.83 dB on the Manga109 dataset, which is 0.8 dB higher than the state-of-the-art SwinIR method.
Recent improvements in convolutional neural network (CNN)-based single image super-resolution (SISR) methods rely heavily on crafting network architectures rather than on finding suitable training algorithms beyond simply minimizing a regression loss. Adapting knowledge distillation (KD) can open a way for further improvements to SISR and is also beneficial in terms of model efficiency. KD is a model compression method that improves the performance of deep neural networks (DNNs) without using additional parameters at test time. It has recently been gaining attention for providing a better capacity-performance trade-off. In this paper, we propose a novel feature distillation (FD) method suitable for SISR. We show the limitations of FitNet-based FD methods, which suffer in the SISR task, and propose modifying the existing FD algorithm to focus on local feature information. In addition, we propose a teacher-student-difference-based soft feature attention method that selectively focuses on specific pixel locations to extract feature information. We call our method local selective feature distillation (LSFD) and verify that it outperforms conventional FD methods on SISR problems.
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research field. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion (DDPM) and Transformer-based SR models. We critically discuss contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also provide several visualizations of the models and methods throughout each chapter to facilitate a global understanding of trends in the field. This review ultimately aims at helping researchers push the boundaries of DL applied to SR.
Recently, deep-learning-based super-resolution methods have achieved promising performance, but they mainly focus on training a single generalized deep network by feeding it numerous samples. Intuitively, however, each image has its own representation, and an adaptive model is expected. For this issue, we propose a novel image-specific convolutional kernel modulation (IKM) that exploits the global contextual information of an image or feature to generate attention weights for adaptively modulating the convolutional kernels, which outperforms vanilla convolution and several existing attention mechanisms when embedded into state-of-the-art architectures without any additional parameters. In particular, to optimize IKM in mini-batch training, we introduce an image-specific optimization (ISO) algorithm that is more effective than conventional mini-batch SGD optimization. Furthermore, we investigate the effect of IKM on state-of-the-art architectures and exploit a new backbone with U-style residual learning and hourglass dense block learning, termed the U-Hourglass Dense Network (U-HDN), which theoretically and experimentally maximizes the effectiveness of IKM. Extensive experiments on single image super-resolution show that the proposed method achieves superior performance over existing methods. Code is available at github.com/yuanfeihuang/ikm.
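One possible reading of image-specific kernel modulation is sketched below, under the assumption that global context yields per-output-channel weights that rescale the convolution kernel separately for each image in the batch; the released IKM implementation may differ in how the modulation is computed and applied.

```python
# Assumed sketch: per-image modulation of a shared convolution kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageSpecificKernelModulation(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.modulator = nn.Sequential(            # global context -> kernel modulation weights
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, out_ch), nn.Sigmoid(),
        )
        self.pad = k // 2

    def forward(self, x):                          # x: (B, C_in, H, W)
        mods = self.modulator(x)                   # (B, C_out)
        outs = []
        for img, m in zip(x, mods):                # modulate the kernel per image
            w = self.weight * m.view(-1, 1, 1, 1)
            outs.append(F.conv2d(img.unsqueeze(0), w, padding=self.pad))
        return torch.cat(outs, dim=0)

print(ImageSpecificKernelModulation(32, 32)(torch.randn(2, 32, 16, 16)).shape)
```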
Image restoration is a long-standing low-level vision problem that aims to restore high-quality images from lowquality images (e.g., downscaled, noisy and compressed images). While state-of-the-art image restoration methods are based on convolutional neural networks, few attempts have been made with Transformers which show impressive performance on high-level vision tasks. In this paper, we propose a strong baseline model SwinIR for image restoration based on the Swin Transformer. SwinIR consists of three parts: shallow feature extraction, deep feature extraction and high-quality image reconstruction. In particular, the deep feature extraction module is composed of several residual Swin Transformer blocks (RSTB), each of which has several Swin Transformer layers together with a residual connection. We conduct experiments on three representative tasks: image super-resolution (including classical, lightweight and real-world image super-resolution), image denoising (including grayscale and color image denoising) and JPEG compression artifact reduction. Experimental results demonstrate that SwinIR outperforms state-of-the-art methods on different tasks by up to 0.14∼0.45dB, while the total number of parameters can be reduced by up to 67%.
Recently, Transformer-based image restoration networks have achieved promising improvements over convolutional neural networks due to parameter-independent global interactions. To lower computational cost, existing works generally limit self-attention computation within non-overlapping windows. However, each group of tokens is always drawn from a dense area of the image. This is considered a dense attention strategy, since the interactions of tokens are restricted to dense regions. Obviously, this strategy can result in restricted receptive fields. To address this issue, we propose the Attention Retractable Transformer (ART) for image restoration, which presents both dense and sparse attention modules in the network. The sparse attention module allows tokens from sparse areas to interact and thus provides a wider receptive field. Furthermore, the alternating application of dense and sparse attention modules greatly enhances the representation ability of the Transformer while providing retractable attention on the input image. We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks. Experimental results validate that our proposed ART outperforms state-of-the-art methods on various benchmark datasets both quantitatively and visually. We also provide code and models at https://github.com/gladzhang/ART.
Existing convolutional neural network (CNN) based image super-resolution (SR) methods have achieved impressive performance on the bicubic kernel, which is not valid for handling unknown degradations in real-world applications. Recent blind SR methods suggest reconstructing SR images relying on blur kernel estimation. However, their results still exhibit visible artifacts and detail distortion due to estimation errors. To alleviate these problems, in this paper we propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without incorporating a blur kernel prior for blind SR. Specifically, in our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures. The DSMM consists of two components: a detail restoration unit (DRU) and a structure modulation unit (SMU). The former aims at regressing the intermediate HR detail reconstruction from LR structural contexts, and the latter performs structural context modulation conditioned on the learned detail maps in both the HR and LR spaces. Besides, we use the output of DSMM as the hidden state and design our DSSR architecture from a recurrent convolutional neural network (RCNN) view. In this way, the network can alternately optimize the image details and structural contexts, achieving co-optimization across time. Moreover, equipped with the recurrent connection, our DSSR allows low- and high-level feature representations to complement each other by observing previous HR details and contexts at every unrolling time. Extensive experiments on synthetic datasets and real-world images demonstrate that our method achieves the state of the art against existing methods. The source code can be found at https://github.com/Arcananana/DSSR.
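A rough, assumed sketch of the detail-structure interaction described above: a detail restoration unit (DRU) regresses detail features from structural context, and a structure modulation unit (SMU) rescales and shifts the structural features conditioned on those details. The SFT-style scale-and-shift modulation and the single-scale layout are assumptions for illustration, not the released DSSR code.

```python
# Assumed sketch of a detail-structure modulation module (DSMM).
import torch
import torch.nn as nn

class DSMM(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.dru = nn.Sequential(                      # detail restoration unit
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # structure modulation unit: per-pixel scale and shift predicted from the details
        self.to_scale = nn.Conv2d(ch, ch, 3, padding=1)
        self.to_shift = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, structure):
        detail = self.dru(structure)                   # details regressed from structural context
        modulated = structure * torch.sigmoid(self.to_scale(detail)) + self.to_shift(detail)
        return detail, modulated                       # both streams feed the next recurrent step

d, s = DSMM()(torch.randn(1, 64, 32, 32))
print(d.shape, s.shape)
```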
Image super-resolution (SR) is one of the important image processing methods for improving image resolution in the field of computer vision. Significant progress has been made in super-resolution over the past two decades, especially with the use of deep learning methods. This survey provides a detailed review of recent progress in single image super-resolution from a deep learning perspective, while also covering the initial classical methods for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, learning-based methods, unsupervised-learning-based methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep-learning-based approaches are evaluated using reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multi-scale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR and open problems for researchers to address.