Recently, deep-learning-based super-resolution methods have achieved good performance, but they mainly focus on training a single generalized deep network by feeding it a large number of samples. Intuitively, however, each image has its own representation, and an adaptive model is expected. To address this issue, we propose a novel image-specific convolutional kernel modulation (IKM), which exploits the global contextual information of an image or feature to generate attention weights that appropriately modulate the convolution kernels; it outperforms vanilla convolution and several existing attention mechanisms when embedded into state-of-the-art architectures without any additional parameters. In particular, to optimize IKM under mini-batch training, we introduce an image-specific optimization (ISO) algorithm that is more effective than conventional mini-batch SGD. Furthermore, we investigate the effect of IKM on state-of-the-art architectures and exploit a new backbone with U-style residual learning and hourglass dense block learning, termed the U-Hourglass Dense Network (U-HDN), which is shown both theoretically and experimentally to maximize the effectiveness of IKM. Extensive experiments on single image super-resolution demonstrate that the proposed method achieves superior performance over existing methods. Code is available at github.com/yuanfeihuang/ikm.
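To make the modulation idea concrete, here is a minimal sketch of a convolution whose kernel is rescaled per image by weights derived from global context. The parameter-free weight rule and all names are illustrative assumptions, not the authors' IKM formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IKMConv2d(nn.Module):
    """Convolution whose kernel is rescaled per image by global-context weights (sketch)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        self.pad = kernel_size // 2

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        g = x.mean(dim=(2, 3))                     # (B, C) global context descriptor
        # parameter-free attention weights from the normalized global context (assumed rule)
        scale = torch.sigmoid((g - g.mean(1, keepdim=True)) / (g.std(1, keepdim=True) + 1e-6))
        kernel = self.weight.unsqueeze(0) * scale.view(b, c, 1, 1, 1)   # per-image kernels
        kernel = kernel.reshape(b * c, c, *self.weight.shape[-2:])
        # grouped convolution applies each image's modulated kernel to that image only
        out = F.conv2d(x.reshape(1, b * c, h, w), kernel, padding=self.pad, groups=b)
        return out.reshape(b, c, h, w)
```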
Relying heavily on iterative estimation or optimization of the degradation model from scratch, existing blind super-resolution (SR) methods are usually time-consuming and less efficient, since the estimation of the degradation starts from a blind initialization and lacks an interpretable degradation prior. To address this, this paper proposes a transitive learning method for blind SR that uses an end-to-end network without any additional iterations at inference, and explores an effective representation of unknown degradations. First, we analyze and demonstrate the transitivity of degradations as interpretable prior information to indirectly infer unknown degradation models, including the widely used additive and convolutive degradations. We then propose a novel transitive learning method for blind super-resolution (TLSR), which handles unknown degradations by adaptively inferring a transitive transformation function without any iterative operations at inference. Specifically, the end-to-end TLSR network consists of a degree-of-transitivity (DoT) estimation network, an identity feature extraction network, and a transitive learning module. Quantitative and qualitative evaluations on blind SR tasks demonstrate that the proposed TLSR achieves superior performance with lower complexity than state-of-the-art blind SR methods. The code is available at github.com/yuanfeihuang/tlsr.
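A minimal sketch of the transitive-learning idea, under the assumption that the estimated degree of transitivity acts as a per-image coefficient blending two SR branches trained for two anchor degradations; the module names and the blending rule are illustrative, not the TLSR architecture itself:

```python
import torch
import torch.nn as nn

class DoTEstimator(nn.Module):
    """Predict a degree of transitivity in [0, 1] from the LR input (sketch)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, lr):
        return self.net(lr)                        # (B, 1)

class TransitiveSR(nn.Module):
    """Blend two SR branches according to the estimated degree of transitivity."""
    def __init__(self, branch_a: nn.Module, branch_b: nn.Module):
        super().__init__()
        self.dot = DoTEstimator()
        self.branch_a, self.branch_b = branch_a, branch_b

    def forward(self, lr):
        t = self.dot(lr).view(-1, 1, 1, 1)         # per-image coefficient
        return t * self.branch_a(lr) + (1 - t) * self.branch_b(lr)
```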
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
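As a rough illustration of the channel attention and short-skip design described above, the sketch below shows a squeeze-and-excitation-style channel attention module inside a residual block. Layer widths and the reduction ratio are illustrative assumptions, not the exact RCAN configuration:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global pooling -> bottleneck -> sigmoid weights -> channel-wise rescaling."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze spatial dimensions
            nn.Conv2d(channels, channels // reduction, 1),  # channel bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())                                   # per-channel weights

    def forward(self, x):
        return x * self.body(x)

class ResidualChannelAttentionBlock(nn.Module):
    """conv-ReLU-conv, then channel attention, plus a short skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        return x + self.ca(self.conv(x))
```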
A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and to stabilize the training of the wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.
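A small residual dense block in the spirit described above: densely connected convolutions, 1x1 local feature fusion, and local residual learning. The growth rate and depth are illustrative choices, not the published RDN settings:

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # local feature fusion: 1x1 conv over the concatenation of all intermediate states
        self.lff = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
        return x + self.lff(torch.cat(feats, dim=1))       # local residual learning
```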
By exploiting large-kernel decomposition and attention mechanisms, convolutional neural networks (CNNs) can compete with Transformer-based methods in many high-level computer vision tasks. However, owing to the advantage of long-range modeling, Transformers with self-attention still dominate low-level vision, including super-resolution tasks. In this paper, we propose a CNN-based multi-scale attention network (MAN), which consists of multi-scale large kernel attention (MLKA) and a gated spatial attention unit (GSAU), to improve the performance of convolutional SR networks. In our MLKA, we rectify LKA with multi-scale and gate schemes to obtain abundant attention maps at various granularity levels, thereby jointly aggregating global and local information and avoiding potential blocking artifacts. In the GSAU, we integrate a gate mechanism and spatial attention to remove unnecessary linear layers and aggregate informative spatial context. To confirm the effectiveness of our designs, we evaluate MAN with multiple complexities by simply stacking different numbers of MLKA and GSAU modules. Experimental results show that our MAN achieves various trade-offs between state-of-the-art performance and computation. Code is available at https://github.com/icandle/man.
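A sketch of decomposed large-kernel attention with a simple gate, assuming the common depth-wise + dilated depth-wise + point-wise factorization; the multi-scale grouping of MLKA is omitted and the gating branch is an illustrative assumption:

```python
import torch.nn as nn

class GatedLargeKernelAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)   # large effective kernel
        self.pw = nn.Conv2d(channels, channels, 1)
        self.gate = nn.Conv2d(channels, channels, 1)                # gating branch

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))                 # large receptive field
        return x * attn * self.gate(x).sigmoid()                    # gated attention map
```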
Convolutional neural networks have enabled remarkable progress in single image super-resolution (SISR) over the past decade. Among recent advances in SISR, attention mechanisms have been crucial for high-performance SR models. However, it remains unclear why and how attention mechanisms work in SISR. In this work, we attempt to quantify and visualize attention mechanisms in SISR and show that not all attention modules are equally beneficial. We then propose attention in attention networks (A$^2$N) for more efficient and accurate SISR. Specifically, A$^2$N consists of a non-attention branch and a coupled attention branch. A dynamic attention module is proposed to generate weights for these two branches, dynamically suppressing unwanted attention adjustments; the weights change adaptively according to the input features. This allows the attention modules to specialize in beneficial examples without penalizing the rest, greatly improving the capacity of the attention network with little parameter overhead. Experimental results demonstrate that our final model, A$^2$N, achieves a superior performance trade-off compared with state-of-the-art networks of similar size. Code is available at https://github.com/haoyuc/a2n.
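The sketch below illustrates the two-branch idea: a non-attention branch and an attention branch mixed by dynamic, input-dependent weights. The weighting head and branch bodies are illustrative assumptions rather than the exact A$^2$N design:

```python
import torch
import torch.nn as nn

class AttentionInAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.plain = nn.Conv2d(channels, channels, 3, padding=1)    # non-attention branch
        self.attn = nn.Sequential(                                  # attention branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid())
        # dynamic module: predicts one weight per branch from global context
        self.dynamic = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2))

    def forward(self, x):
        w = torch.softmax(self.dynamic(x), dim=1)                   # (B, 2) branch weights
        w0 = w[:, 0].view(-1, 1, 1, 1)
        w1 = w[:, 1].view(-1, 1, 1, 1)
        return w0 * self.plain(x) + w1 * (x * self.attn(x))
```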
Restoring low-resolution (LR) images to super-resolution (SR) images with correct and clear details is challenging. Existing deep learning works almost neglect the inherent structural information of images, which plays an important role in the visual perception of SR results. In this paper, we design a hierarchical feature exploitation network to probe and preserve structural information in a multi-scale feature fusion manner. First, we propose a cross convolution, built upon traditional edge detectors, to localize and represent edge features. Then, cross convolution blocks (CCBs) are designed with feature normalization and channel attention to consider the inherent correlations of features. Finally, we leverage a multi-scale feature fusion group (MFFG) to embed the cross convolution blocks and progressively develop the relations of structural features at different scales in a hierarchical way, yielding a lightweight structure-preserving network named Cross-SRN. Experimental results demonstrate that Cross-SRN achieves competitive or superior restoration performance against state-of-the-art methods with accurate and clear structural details. Moreover, we define a criterion to select images with rich structural textures. The proposed Cross-SRN outperforms state-of-the-art methods on the selected benchmark, which demonstrates that our network has a significant advantage in preserving edges.
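A sketch of a cross convolution assembled from two strip convolutions laid out like a traditional edge detector, one horizontal and one vertical, whose responses are summed; kernel sizes and the fusion rule are illustrative assumptions:

```python
import torch.nn as nn

class CrossConv(nn.Module):
    def __init__(self, in_channels, out_channels, k=3):
        super().__init__()
        self.horizontal = nn.Conv2d(in_channels, out_channels, (1, k),
                                    padding=(0, k // 2))
        self.vertical = nn.Conv2d(in_channels, out_channels, (k, 1),
                                  padding=(k // 2, 0))

    def forward(self, x):
        # the cross-shaped receptive field emphasizes edge-like structures
        return self.horizontal(x) + self.vertical(x)
```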
Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and obtained remarkable performance. However, most of the existing CNN-based SISR methods mainly focus on wider or deeper architecture design, neglecting to explore the feature correlations of intermediate layers, hence hindering the representational power of CNNs. To address this issue, in this paper, we propose a second-order attention network (SAN) for more powerful feature expression and feature correlation learning. Specifically, a novel trainable second-order channel attention (SOCA) module is developed to adaptively rescale the channel-wise features by using second-order feature statistics for more discriminative representations. Furthermore, we present a non-locally enhanced residual group (NLRG) structure, which not only incorporates non-local operations to capture long-distance spatial contextual information, but also contains repeated local-source residual attention groups (LSRAG) to learn increasingly abstract feature representations. Experimental results demonstrate the superiority of our SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.
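A simplified illustration of second-order channel attention: channel weights are derived from second-order statistics (a channel covariance matrix) instead of plain average pooling. The covariance normalization used in SOCA is omitted, and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class SecondOrderChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        feat = x.reshape(b, c, h * w)
        feat = feat - feat.mean(dim=2, keepdim=True)
        cov = feat @ feat.transpose(1, 2) / (h * w - 1)    # (B, C, C) channel covariance
        stat = cov.mean(dim=2).view(b, c, 1, 1)            # second-order channel descriptor
        return x * self.fc(stat)                           # rescale channel-wise features
```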
With the recent massive development of convolutional neural networks, a large number of CNN-based single image super-resolution methods have been proposed for practical deployment on edge devices. However, most existing methods focus on one specific aspect: network design or loss design, which makes it difficult to minimize the model size. To address this issue, we bring together block design, architecture search, and loss design to obtain a more efficient SR structure. In this paper, we propose an edge-enhanced feature distillation network, named EFDN, to preserve high-frequency information under constrained resources. In detail, we build an edge-enhanced convolution block based on existing re-parameterization methods. Meanwhile, we propose an edge-enhanced gradient loss to calibrate the training of the re-parameterized path. Experimental results show that our edge-enhancement strategy preserves edges and significantly improves the final restoration quality. Code is available at https://github.com/icandle/efdn.
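A sketch of a gradient (edge) loss in the spirit of an edge-enhanced gradient loss: Sobel responses of the SR output are matched to those of the HR target with an L1 penalty. The exact loss formulation in EFDN may differ; this is only an assumed instantiation:

```python
import torch
import torch.nn.functional as F

def sobel_gradients(img):
    """Return horizontal and vertical Sobel responses, applied per channel."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = img.shape[1]
    gx = F.conv2d(img, kx.expand(c, 1, 3, 3), padding=1, groups=c)
    gy = F.conv2d(img, ky.expand(c, 1, 3, 3), padding=1, groups=c)
    return gx, gy

def edge_gradient_loss(sr, hr):
    """L1 distance between the Sobel gradients of the SR output and the HR target."""
    sr_gx, sr_gy = sobel_gradients(sr)
    hr_gx, hr_gy = sobel_gradients(hr)
    return F.l1_loss(sr_gx, hr_gx) + F.l1_loss(sr_gy, hr_gy)
```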
With the development of deep learning, single image super-resolution (SISR) has achieved significant breakthroughs. Recently, methods based on global feature interaction have been proposed to improve the performance of SISR networks. However, the ability of neurons to adjust their responses dynamically according to the context is neglected. To address this issue, we propose a lightweight Cross-receptive Focused Inference Network (CFIN), a hybrid network composed of a convolutional neural network (CNN) and a Transformer. Specifically, a novel Cross-receptive Field Guided Transformer (CFGT) is designed to adaptively modify the network weights by using modulated convolution kernels combined with local representative semantic information. In addition, a CNN-based Cross-scale Information Aggregation Module (CIAM) is proposed to make the model focus better on potentially useful information and improve the efficiency of the Transformer stage. Extensive experiments show that our proposed CFIN is a lightweight and efficient SISR model that achieves a good balance between computational cost and model performance.
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
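The sketch below illustrates the layer attention idea: features collected from N blocks are stacked, an N x N affinity matrix among layers is turned into attention weights, and the stacked features are re-weighted. Shapes follow the description above, but the exact HAN formulation may differ in its details:

```python
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))      # learnable residual scale

    def forward(self, feats):                          # feats: (B, N, C, H, W)
        b, n, c, h, w = feats.shape
        flat = feats.reshape(b, n, -1)                 # (B, N, C*H*W)
        corr = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)   # (B, N, N) layer affinities
        out = (corr @ flat).reshape(b, n, c, h, w)     # re-weight layer-wise features
        return feats + self.scale * out
```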
Image super-resolution (SR) serves as a fundamental tool for the processing and transmission of multimedia data. Recently, Transformer-based models have achieved competitive performance in image SR. They divide images into fixed-size patches and apply self-attention on these patches to model long-range dependencies among pixels. However, this architectural design originated in high-level vision tasks and lacks design guidelines drawn from SR knowledge. In this paper, we aim to design a new attention block whose insights come from the interpretation of the Local Attribution Map (LAM) for SR networks. Specifically, LAM presents a hierarchical importance map where the most important pixels are located in a fine area of a patch and some less important pixels are spread in a coarse area of the whole image. To access pixels in the coarse area, instead of using a very large patch size, we propose a lightweight Global Pixel Access (GPA) module that applies cross-attention with the most similar patch in an image. In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details. In addition, a Cascaded Patch Division (CPD) strategy is proposed to enhance the perceptual quality of recovered images. Extensive experiments suggest that our method outperforms state-of-the-art lightweight SR methods by a large margin. Code is available at https://github.com/passerer/HPINet.
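As a rough sketch of intra-patch self-attention only (the cross-attention global pixel access module is not shown): pixels inside each non-overlapping patch attend to each other with standard multi-head attention. Patch size, head count, and the residual connection are illustrative assumptions:

```python
import torch
import torch.nn as nn

class IntraPatchSelfAttention(nn.Module):
    def __init__(self, channels, patch=8, heads=4):
        super().__init__()
        self.patch = patch
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):                              # x: (B, C, H, W), H and W divisible by patch
        b, c, h, w = x.shape
        p = self.patch
        # split into (B * num_patches, p*p, C) token sequences
        tokens = (x.reshape(b, c, h // p, p, w // p, p)
                   .permute(0, 2, 4, 3, 5, 1)
                   .reshape(-1, p * p, c))
        out, _ = self.attn(tokens, tokens, tokens)     # self-attention within each patch
        out = (out.reshape(b, h // p, w // p, p, p, c)
                  .permute(0, 5, 1, 3, 2, 4)
                  .reshape(b, c, h, w))
        return x + out
```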
CNNs with strong learning abilities are widely chosen to address super-resolution problems. However, CNNs depend on deeper network architectures to improve image super-resolution performance, which may increase the computational cost. In this paper, we propose a heterogeneous group SR CNN (HGSRCNN) that exploits different types of structural information to obtain high-quality images. Specifically, each heterogeneous group block (HGB) of HGSRCNN adopts a heterogeneous architecture containing a symmetric group convolution block and a complementary convolution block in a parallel way to enhance the internal and external relations of different channels and to facilitate obtaining richer types of information. To prevent redundant features, a refinement block with signal enhancement in a serial way is designed to filter out useless information. To prevent the loss of original information, a multi-level enhancement mechanism guides the CNN toward a symmetric architecture to promote the expressive ability of HGSRCNN. In addition, a parallel up-sampling mechanism is developed to train a blind SR model. Extensive experiments show that the proposed HGSRCNN achieves excellent SR performance in terms of both quantitative and qualitative analysis. Code can be accessed at https://github.com/hellloxiaotian/hgsrcnn.
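A sketch of a heterogeneous block in the spirit of the parallel pairing described above: a group-convolution branch and an ordinary convolution branch run in parallel and their outputs are fused. Branch depths, the group count, and the fusion layer are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HeterogeneousBlock(nn.Module):
    def __init__(self, channels, groups=4):
        super().__init__()
        self.group_branch = nn.Sequential(                  # group-convolution branch
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups))
        self.plain_branch = nn.Sequential(                  # complementary ordinary branch
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)    # merge the two branches

    def forward(self, x):
        merged = torch.cat([self.group_branch(x), self.plain_branch(x)], dim=1)
        return x + self.fuse(merged)
```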
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods attempt to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
CNNs with strong learning abilities are widely chosen to resolve super-resolution problems. However, CNNs rely on deeper network architectures to improve the performance of image super-resolution, which may increase computational cost. In this paper, we propose an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture that fully fuses deep and wide channel features to extract more accurate low-frequency information, in terms of the correlations of different channels, for single image super-resolution (SISR). A signal enhancement operation in ESRGCNN is also useful for inheriting longer-range contextual information to resolve long-term dependencies. An adaptive up-sampling operation is integrated into the CNN to obtain an image super-resolution model that handles low-resolution images of different sizes. Extensive experiments report that our ESRGCNN surpasses the state of the art in terms of SISR performance, complexity, execution speed, image quality assessment, and visual quality. Code is available at https://github.com/hellloxiaotian/esrgcnn.
Deep convolutional neural networks have proven effective for SISR in recent years. On the one hand, residual connections and dense connections have been widely used to facilitate forward information and backward gradient flow and thus improve performance. However, current methods use residual connections and dense connections separately in most network layers, in a sub-optimal way. On the other hand, although various networks and methods have been designed to improve computational efficiency, save parameters, or utilize training data of multiple scale factors to boost one another's performance, performing super-resolution in HR space incurs a high computational cost, or parameters cannot be shared between models of different scale factors to save parameters and inference time. To address these challenges, we propose an efficient single image super-resolution network using dual-path connections with multiple scale learning, named EMSRDPN. By introducing dual-path connections into EMSRDPN, residual connections and dense connections are used in an integrated way in most network layers. Dual-path connections have the benefit of both reusing common features via residual connections and exploring new features via dense connections to learn good representations for SISR. To utilize the feature correlations of multiple scale factors, EMSRDPN shares all network units in LR space across different scale factors to learn shared features, and uses only a separate reconstruction unit for each scale factor, which can utilize the training data of multiple scale factors to help one another boost performance, while saving parameters and supporting shared inference for multiple scale factors to improve efficiency. Experiments show that EMSRDPN achieves better performance and comparable or better parameter and inference efficiency than SOTA methods.
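A minimal sketch of a dual-path unit in the generic dual-path-network spirit: one convolution produces both a residual update (added to the residual path) and new features (concatenated onto the dense path). Channel sizes and the single-convolution body are illustrative assumptions, not the EMSRDPN building block:

```python
import torch
import torch.nn as nn

class DualPathUnit(nn.Module):
    def __init__(self, res_channels=64, dense_channels=64, growth=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(res_channels + dense_channels, res_channels + growth, 3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, res, dense):
        out = self.body(torch.cat([res, dense], dim=1))
        res_update, new_feats = torch.split(
            out, [res.shape[1], out.shape[1] - res.shape[1]], dim=1)
        # residual path reuses common features; dense path grows by `growth` new channels
        return res + res_update, torch.cat([dense, new_feats], dim=1)
```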
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models, such as diffusion-based (DDPM) and Transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout the chapters to facilitate a global understanding of the trends in the field. Ultimately, this review aims to help researchers push the boundaries of DL applied to SR.
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning technology. However, the existing methods usually require a large computational cost. Meanwhile, the activation function causes some features of the intermediate layers to be lost. It is therefore a challenge to make the model lightweight while reducing the impact of intermediate feature loss on the reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the above problems. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature Shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units into the WDIB, and effectively cross-fuse features of different finenesses through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in the CNN model, we introduce the Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN can achieve a good balance between performance and efficiency, and is more conducive to downstream tasks for solving problems in low-pixel scenarios.
Joint super-resolution and inverse tone-mapping (joint SR-ITM) aims to increase the resolution and dynamic range of low-resolution, standard dynamic range images. Existing methods mainly resort to image decomposition techniques with multi-branch network architectures. However, the rigid decomposition employed by these methods largely restricts their power on diverse images. To exploit its potential power, in this paper we generalize the decomposition mechanism from the image domain to the broader feature domain. To this end, we propose a lightweight Feature Decomposition Aggregation Network (FDAN). In particular, we design a Feature Decomposition Block (FDB) that achieves learnable separation of feature details and contrast. By cascading FDBs, we build a hierarchical feature decomposition group for powerful multi-level feature decomposition. Moreover, we collect a new benchmark dataset for joint SR-ITM, i.e., SRITM-4K, which is large-scale and provides versatile scenarios for sufficient model training and evaluation. Experimental results on two benchmark datasets show that our FDAN is efficient and outperforms previous methods on joint SR-ITM. The code and dataset will be publicly released.
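A sketch of a learnable feature decomposition block under a simple assumption: a base-estimation branch predicts a contrast-like component and the residual is treated as the detail component, each refined by its own convolution. The split rule and layer choices are illustrative, not the FDB design:

```python
import torch.nn as nn

class FeatureDecompositionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.base_filter = nn.Sequential(              # learnable base-estimation branch
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, 1))
        self.refine_base = nn.Conv2d(channels, channels, 3, padding=1)
        self.refine_detail = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        base = self.base_filter(x)                     # contrast / base component
        detail = x - base                              # detail component
        return self.refine_base(base) + self.refine_detail(detail) + x
```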
Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks; recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms the state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.
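The sketch below shows the core recursive-residual idea: one residual unit's weights are reused across several recursions, and every recursion adds the block input back, so depth grows while the parameter count stays fixed. Unit width, recursion count, and the omission of batch normalization are illustrative assumptions:

```python
import torch.nn as nn

class RecursiveResidualBlock(nn.Module):
    def __init__(self, channels=64, recursions=9):
        super().__init__()
        self.recursions = recursions
        self.unit = nn.Sequential(                      # weights shared across recursions
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        out = x
        for _ in range(self.recursions):
            out = x + self.unit(out)                    # local residual, shared weights
        return out
```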