Restoring a low-resolution (LR) image to a super-resolution (SR) image with correct and clear details is challenging. Existing deep learning works largely neglect the inherent structural information of images, which plays an important role in the visual perception of SR results. In this paper, we design a hierarchical feature exploitation network to probe and preserve structural information in a multi-scale feature fusion manner. First, we propose a cross convolution built upon traditional edge detectors to localize and represent edge features. Then, cross convolution blocks (CCBs) are designed with feature normalization and channel attention to consider the inherent correlations of features. Finally, we leverage a multi-scale feature fusion group (MFFG) to embed the cross convolution blocks and exploit the relations of structural features at different scales hierarchically, yielding a lightweight structure-preserving network named Cross-SRN. Experimental results demonstrate that Cross-SRN achieves competitive or superior restoration performance against state-of-the-art methods, with accurate and clear structural details. Moreover, we set a criterion to select images with rich structural textures. The proposed Cross-SRN outperforms state-of-the-art methods on the selected benchmark, which demonstrates that our network has a significant advantage in preserving edges.
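To make the cross convolution concrete, here is a minimal PyTorch-style sketch under the assumption that it pairs horizontal and vertical strip kernels in the spirit of a classic edge detector; the class name, kernel sizes, and fusion by summation are illustrative, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CrossConv(nn.Module):
    """Illustrative cross convolution: parallel 1x3 (horizontal) and 3x1
    (vertical) convolutions whose responses are summed, mimicking the two
    orthogonal directions of a traditional edge detector."""
    def __init__(self, channels):
        super().__init__()
        self.horizontal = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.vertical = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))

    def forward(self, x):
        return self.horizontal(x) + self.vertical(x)

x = torch.randn(1, 64, 48, 48)
print(CrossConv(64)(x).shape)  # torch.Size([1, 64, 48, 48]) -- spatial size preserved
```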
As a severely ill-posed problem, single image super-resolution (SISR) has been widely studied in recent years. The main task of SISR is to recover the information loss caused by the degradation procedure. According to the Nyquist sampling theory, the degradation leads to aliasing effects and makes it hard to restore the correct textures of low-resolution (LR) images. In practice, there are correlations and self-similarities among adjacent patches in natural images. This paper considers this self-similarity and proposes a hierarchical image super-resolution network (HSRNet) to suppress the influence of aliasing. We consider the SISR problem from an optimization perspective and propose an iterative solution pattern based on the half-quadratic splitting (HQS) method. To explore the local image texture prior, we design a hierarchical exploration block (HEB) that progressively enlarges the receptive field. Furthermore, a multi-level spatial attention (MSA) is devised to capture the relations of adjacent features and enhance high-frequency information, which plays a key role in visual experience. Experimental results show that HSRNet achieves better quantitative and visual performance than other works and remits the aliasing more effectively.
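As background for the HQS-based iterative solution, the following is a generic half-quadratic splitting of a standard SR objective, written under the usual blur-and-downsample observation model; the exact data term and prior used by HSRNet may differ. An auxiliary variable $z$ decouples the data fidelity from the prior $\Phi$, and the two sub-problems are solved alternately:

$$\hat{x} = \arg\min_{x}\ \tfrac{1}{2}\left\| y - (x \otimes k)\downarrow_{s} \right\|^{2} + \lambda \Phi(x)$$

$$x_{t+1} = \arg\min_{x}\ \left\| y - (x \otimes k)\downarrow_{s} \right\|^{2} + \mu \left\| x - z_{t} \right\|^{2}, \qquad z_{t+1} = \arg\min_{z}\ \tfrac{\mu}{2} \left\| z - x_{t+1} \right\|^{2} + \lambda \Phi(z)$$

The $x$-step is a quadratic data-fidelity update and the $z$-step is a proximal (denoising-like) update on the prior; learning-based solvers typically replace the prior sub-problem with trainable modules.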
Single image super-resolution (SISR), as a traditional ill-conditioned inverse problem, has been greatly revitalized by the recent development of convolutional neural networks (CNNs). These CNN-based methods generally map a low-resolution image to its corresponding high-resolution version with sophisticated network structures and loss functions, showing impressive performance. This paper provides a new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization. A novel iterative super-resolution network (ISRN) is proposed on top of iterative optimization. We first analyze the observation model of the image SR problem, motivating a feasible solution that mimics and fuses each iteration in a more general and efficient manner. Considering the drawbacks of batch normalization, we propose a feature normalization (F-Norm, FN) method to regulate the features in the network. Furthermore, a novel block with FN, termed FNB, is developed to improve the network representation. A residual-in-residual structure is proposed to form a very deep network, in which FNBs are connected with long skip connections for better information delivery and a more stable training phase. Extensive experimental results on testing benchmarks with bicubic (BI) degradation show that our ISRN not only recovers more structural information but also achieves competitive or better PSNR/SSIM results with much fewer parameters compared with other works. In addition to BI, we also simulate real-world degradation with blur-downscale (BD) and downscale-noise (DN) models. Both ISRN and its extension ISRN+ achieve better results than other works under the BD and DN degradation models.
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
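A minimal PyTorch-style sketch of such a channel attention unit follows (an illustration of the squeeze-and-rescale idea described above; the reduction ratio and layer choices are assumptions, not the authors' exact code):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention sketch: global average pooling squeezes each channel,
    a small bottleneck models channel interdependencies, and the resulting
    per-channel weights rescale the input feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # B x C x 1 x 1
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),   # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),   # channel excitation
            nn.Sigmoid(),                                     # weights in (0, 1)
        )

    def forward(self, x):
        return x * self.body(self.pool(x))                   # rescale channel-wise
```

Inside a residual block, the rescaled features are added back to the block input, so low-frequency content can still bypass the attention path through the skip connection.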
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process and misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against state-of-the-art single image super-resolution approaches.
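For intuition, the following is a hedged sketch of a layer attention module: features from $N$ layers are flattened, an $N \times N$ correlation matrix is computed among them, and the softmax-normalized correlations re-weight the layer features before a scaled residual addition. The exact projections and normalization used by HAN may differ.

```python
import torch
import torch.nn as nn

class LayerAttention(nn.Module):
    """Layer attention sketch: model correlations among hierarchical features
    from N layers and use them to emphasize the most informative layers."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1))   # learnable residual scale

    def forward(self, features):                    # features: B x N x C x H x W
        b, n, c, h, w = features.shape
        flat = features.view(b, n, -1)                               # B x N x (C*H*W)
        corr = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)    # B x N x N
        out = (corr @ flat).view(b, n, c, h, w)                      # re-weighted features
        return features + self.scale * out

feats = torch.randn(2, 4, 64, 32, 32)   # 4 layers of 64-channel features
print(LayerAttention()(feats).shape)    # torch.Size([2, 4, 64, 32, 32])
```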
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods have been proposed to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based component decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilize the training of the wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.
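A compact PyTorch-style sketch of a residual dense block is given below (the number of layers and growth rate are placeholders): every 3x3 convolution receives the concatenation of the block input and all preceding layer outputs, a 1x1 convolution performs local feature fusion, and the block input is added back as local residual learning.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Residual dense block sketch: densely connected convolutions, local
    feature fusion (1x1 conv), and local residual learning."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(num_layers)
        )
        # local feature fusion: squeeze the dense concatenation back to `channels`
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))   # local residual learning
```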
Convolutional neural networks have enabled remarkable advances in single image super-resolution (SISR) over the last decade. Among recent advances in SISR, attention mechanisms have become crucial for high-performance SR models. However, it is still unclear why and how the attention mechanism works in SISR. In this work, we attempt to quantify and visualize attention mechanisms in SISR and show that not all attention modules are equally beneficial. We then propose an attention-in-attention network (A$^2$N) for more efficient and accurate SISR. Specifically, A$^2$N consists of a non-attention branch and a coupled attention branch. A dynamic attention module is proposed to generate weights for these two branches so as to dynamically suppress unwanted attention adjustments, where the weights change adaptively according to the input features. This allows attention modules to specialize in beneficial cases without penalties otherwise, and thus greatly improves the capacity of the attention network with little parameter overhead. Experimental results demonstrate that our final model A$^2$N achieves a superior trade-off compared with state-of-the-art networks of similar size. Code is available at https://github.com/haoyuc/a2n.
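The following hedged sketch illustrates the attention-in-attention idea (branch internals and layer choices are placeholders, not the released A$^2$N code): a dynamic module predicts one weight per branch from the input, so the attention adjustment can be suppressed when it is not beneficial.

```python
import torch
import torch.nn as nn

class AttentionInAttention(nn.Module):
    """Attention-in-attention sketch: dynamically weighted mix of a
    non-attention branch and an attention branch."""
    def __init__(self, channels):
        super().__init__()
        self.non_attention = nn.Conv2d(channels, channels, 3, padding=1)
        self.attention = nn.Sequential(                 # simple attention mask branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )
        self.dynamic = nn.Sequential(                   # predicts one weight per branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 2, 1),
        )

    def forward(self, x):
        w = torch.softmax(self.dynamic(x), dim=1)       # B x 2 x 1 x 1, sums to 1
        w_non, w_att = w[:, 0:1], w[:, 1:2]
        return w_non * self.non_attention(x) + w_att * (x * self.attention(x))
```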
With the development of deep learning, single image super-resolution (SISR) has achieved significant breakthroughs. Recently, methods that improve the performance of SISR networks based on global feature interactions have been proposed. However, the ability of neurons to dynamically adjust their function in response to the context is neglected. To address this issue, we propose a lightweight cross-receptive focused inference network (CFIN), a hybrid network composed of a convolutional neural network (CNN) and a Transformer. Specifically, a novel cross-receptive field guided Transformer (CFGT) is designed to adaptively modify the network weights by using modulated convolution kernels combined with local representative semantic information. In addition, a CNN-based cross-scale information aggregation module (CIAM) is proposed to make the model focus better on potentially useful information and improve the efficiency of the Transformer stage. Extensive experiments show that our proposed CFIN is a lightweight and efficient SISR model that achieves a good balance between computational cost and model performance.
By exploiting large-kernel decomposition and attention mechanisms, convolutional neural networks (CNNs) can compete with Transformer-based methods in many high-level computer vision tasks. However, owing to the advantage of long-range modeling, Transformers with self-attention still dominate low-level vision, including the super-resolution task. In this paper, we propose a CNN-based multi-scale attention network (MAN), which consists of multi-scale large kernel attention (MLKA) and a gated spatial attention unit (GSAU), to improve the performance of convolutional SR networks. Within our MLKA, we rectify LKA with multi-scale and gate schemes to obtain abundant attention maps at various granularity levels, thereby jointly aggregating global and local information and avoiding potential blocking artifacts. In GSAU, we integrate a gate mechanism and spatial attention to remove the unnecessary linear layers and aggregate informative spatial context. To confirm the effectiveness of our designs, we evaluate MAN with multiple complexities by simply stacking different numbers of MLKA and GSAU modules. Experimental results illustrate that our MAN can achieve varied trade-offs between state-of-the-art performance and computation. Code is available at https://github.com/icandle/man.
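For context, here is a sketch of a single decomposed large-kernel attention branch in PyTorch style; the 5/7-dilated-3 kernel decomposition is a common choice and an assumption here, and the multi-scale and gated variants described above build on this basic pattern.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Decomposed large-kernel attention sketch: depth-wise conv + dilated
    depth-wise conv + point-wise conv produce an attention map that
    multiplies the input, approximating a very large receptive field."""
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    dilation=3, groups=channels)   # effective 19x19 kernel
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn
```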
Joint super-resolution and inverse tone-mapping (SR-ITM) aims to improve the visual quality of videos that have quality deficiencies in resolution and dynamic range. The problem arises when a 4K high dynamic range (HDR) TV is used to watch a low-resolution standard dynamic range (LR SDR) video. Previous methods that rely on learning local information typically fall short in preserving color conformity and long-range structural similarity, resulting in unnatural color transitions and texture artifacts. To tackle these challenges, we propose a global priors guided modulation network (GPGMNet) for joint SR-ITM. In particular, we design a global priors extraction module (GPEM) to extract color conformity and structural similarity priors that are beneficial to the ITM and SR tasks, respectively. To further exploit the global priors and preserve spatial information, we devise multiple global priors guided spatial-wise modulation blocks (GSMBs) with a few parameters for intermediate feature modulation, in which the modulation parameters are generated from the shared global priors and the spatial feature maps produced by a spatial pyramid convolution block (SPCB). With these elaborate designs, GPGMNet can achieve higher visual quality with lower computational complexity. Extensive experiments demonstrate that our proposed GPGMNet is superior to state-of-the-art methods. Specifically, our proposed model exceeds the state of the art by 0.64 dB in PSNR, with 69\% fewer parameters and a 3.1$\times$ speedup. The code will be released soon.
Joint super-resolution and inverse tone-mapping (joint SR-ITM) aims to increase the resolution and dynamic range of low-resolution, standard dynamic range images. Recent methods mainly resort to image decomposition techniques with multi-branch network architectures. However, the rigid decomposition employed by these methods largely restricts their power on diverse images. To exploit the potential power of decomposition, in this paper we generalize the decomposition mechanism from the image domain to the broader feature domain. To this end, we propose a lightweight feature decomposition aggregation network (FDAN). In particular, we design a feature decomposition block (FDB) that achieves a learnable separation of feature details and contrasts. By cascading FDBs, we build a hierarchical feature decomposition group for powerful multi-level feature decomposition. Moreover, we collect a new benchmark dataset for joint SR-ITM, \ie, SRITM-4K, which is large-scale and provides versatile scenarios for sufficient model training and evaluation. Experimental results on two benchmark datasets demonstrate that our FDAN is efficient and outperforms previous methods for joint SR-ITM. Our code and dataset will be publicly released.
With the recent large-scale development of convolutional neural networks, numerous lightweight CNN-based image super-resolution methods have been proposed for practical deployment on edge devices. However, most existing methods focus on one specific aspect, either the network or the loss design, which makes it difficult to minimize the model size. To address this issue, we jointly consider block design, architecture search, and loss design to obtain a more efficient SR structure. In this paper, we propose an edge-enhanced feature distillation network, named EFDN, to preserve high-frequency information under constrained resources. In detail, we build an edge-enhanced convolution block based on existing re-parameterization methods. Meanwhile, we propose an edge-enhanced gradient loss to calibrate the training of the re-parameterized path. Experimental results show that our edge-enhanced strategies preserve edges and significantly improve the final restoration quality. Code is available at https://github.com/icandle/efdn.
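One plausible instantiation of an edge-oriented gradient loss is sketched below (the exact EFDN loss may differ): Sobel gradient magnitudes of the restored and ground-truth images are compared with an L1 penalty, so errors along edges are penalized explicitly.

```python
import torch
import torch.nn.functional as F

def gradient_loss(sr, hr):
    """Edge-oriented gradient loss sketch: L1 distance between Sobel
    gradient magnitudes of the SR output and the HR ground truth."""
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]], device=sr.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    c = sr.shape[1]
    sobel_x = sobel_x.repeat(c, 1, 1, 1)    # one depth-wise kernel per channel
    sobel_y = sobel_y.repeat(c, 1, 1, 1)

    def grad_mag(img):
        gx = F.conv2d(img, sobel_x, padding=1, groups=c)
        gy = F.conv2d(img, sobel_y, padding=1, groups=c)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

    return F.l1_loss(grad_mag(sr), grad_mag(hr))

sr, hr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(gradient_loss(sr, hr))   # scalar loss value
```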
Image super-resolution (SR) serves as a fundamental tool for the processing and transmission of multimedia data. Recently, Transformer-based models have achieved competitive performance in image SR. They divide images into fixed-size patches and apply self-attention on these patches to model long-range dependencies among pixels. However, this architecture design originated from high-level vision tasks and lacks design guidelines informed by SR-specific knowledge. In this paper, we aim to design a new attention block whose insights come from the interpretation of the Local Attribution Map (LAM) for SR networks. Specifically, LAM presents a hierarchical importance map where the most important pixels are located in a fine area of a patch and some less important pixels are spread in a coarse area of the whole image. To access pixels in the coarse area, instead of using a very large patch size, we propose a lightweight Global Pixel Access (GPA) module that applies cross-attention with the most similar patch in an image. In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details. In addition, a Cascaded Patch Division (CPD) strategy is proposed to enhance the perceptual quality of recovered images. Extensive experiments suggest that our method outperforms state-of-the-art lightweight SR methods by a large margin. Code is available at https://github.com/passerer/HPINet.
Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning technology. However, existing methods usually require a large computational cost. Meanwhile, the activation function causes some features of the intermediate layers to be lost. Therefore, it is a challenge to make the model lightweight while reducing the impact of intermediate feature loss on the reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate the above problems. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fuse features of different finenesses through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in CNN models, we introduce the Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN can achieve a good balance between performance and efficiency, and is more conducive to downstream tasks solving problems in low-pixel scenarios.
CNNs with strong learning abilities are widely chosen to resolve the super-resolution problem. However, CNNs rely on deeper network architectures to improve image super-resolution performance, which may increase the computational cost. In this paper, we present an enhanced super-resolution group CNN (ESRGCNN) with a shallow architecture that fully fuses deep and wide channel features to extract more accurate low-frequency information, in terms of the correlations of different channels, for single image super-resolution (SISR). A signal enhancement operation in ESRGCNN is also useful for inheriting longer-range contextual information to resolve long-term dependencies. An adaptive up-sampling operation is incorporated into the CNN to obtain an image super-resolution model that handles low-resolution images of different sizes. Extensive experiments report that our ESRGCNN surpasses the state of the art in terms of SISR performance, complexity, execution speed, image quality evaluation, and visual effects. Code is available at https://github.com/hellloxiaotian/esrgcnn.
Recently, deep-learning-based super-resolution methods have achieved promising performance, but they mainly focus on training a single generalized deep network by feeding it numerous samples. Intuitively, however, each image has its own representation and is expected to obtain an adaptive model. For this issue, we propose a novel image-specific convolutional kernel modulation (IKM) that exploits the global contextual information of an image or feature to generate attention weights for adaptively modulating the convolutional kernels, which outperforms vanilla convolution and several existing attention mechanisms when embedded into state-of-the-art architectures without any additional parameters. In particular, to optimize IKM in mini-batch training, we introduce an image-specific optimization (ISO) algorithm that is more effective than conventional mini-batch SGD optimization. Furthermore, we investigate the effect of IKM on state-of-the-art architectures and exploit a new backbone with U-style residual learning and hourglass dense block learning, termed the U-hourglass dense network (U-HDN), which theoretically and experimentally maximizes the effectiveness of IKM. Extensive experiments on single image super-resolution show that the proposed method achieves superior performance over existing methods. Code is available at github.com/yuanfeihuang/ikm.
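To illustrate the kernel-modulation idea, the simplified single-image sketch below rescales a shared convolution kernel with factors derived from the global context of the input feature; it is an illustration only, not the authors' formulation or the ISO optimizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelModulatedConv(nn.Module):
    """Image-specific kernel modulation sketch: global context produces one
    factor per output channel that rescales a shared 3x3 kernel before the
    convolution is applied (handles a single image for simplicity)."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)
        self.context = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                                  # x: 1 x C x H x W
        assert x.size(0) == 1, "sketch assumes a single image"
        scale = self.context(x).view(-1, 1, 1, 1)          # C x 1 x 1 x 1
        kernel = self.weight * scale                       # per-output-channel modulation
        return F.conv2d(x, kernel, padding=1)
```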
Image super-resolution (SR) is a technique to recover lost high-frequency information in low-resolution (LR) images. Spatial-domain information has been widely exploited to implement image SR, so a new trend is to involve frequency-domain information in SR tasks. Besides, image SR is typically application-oriented, and various computer vision tasks call for arbitrary magnification of images. Therefore, in this paper, we study image features in the frequency domain to design a novel scale-arbitrary image SR network. First, we statistically analyze LR-HR image pairs of several datasets under different scale factors and find that the high-frequency spectra of different images under different scale factors suffer from different degrees of degradation, but the valid low-frequency spectra tend to be retained within a certain distribution range. Then, based on this finding, we devise an adaptive scale-aware feature division mechanism using deep reinforcement learning, which can accurately and adaptively divide the frequency spectrum into the low-frequency part to be retained and the high-frequency one to be recovered. Finally, we design a scale-aware feature recovery module to capture and fuse multi-level features for reconstructing the high-frequency spectrum at arbitrary scale factors. Extensive experiments on public datasets show the superiority of our method compared with state-of-the-art methods.
Convolutional neural networks (CNNs) have obtained remarkable performance via deep architectures. However, these CNNs often achieve poor robustness for image super-resolution (SR) under complex scenes. In this paper, we present a heterogeneous group SR CNN (HGSRCNN) that leverages structure information of different types to obtain high-quality images. Specifically, each heterogeneous group block (HGB) of HGSRCNN uses a heterogeneous architecture containing a symmetric group convolutional block and a complementary convolutional block in a parallel way to enhance the internal and external relations of different channels and facilitate richer structural information of different types. To prevent redundancy in the obtained features, a refinement block with signal enhancement in a serial way is designed to filter out useless information. To prevent the loss of original information, a multi-level enhancement mechanism guides the CNN toward a symmetric architecture to promote the expressive ability of HGSRCNN. Besides, a parallel up-sampling mechanism is developed to train a blind SR model. Extensive experiments illustrate that the proposed HGSRCNN achieves excellent SR performance in terms of both quantitative and qualitative analysis. Code can be accessed at https://github.com/hellloxiaotian/hgsrcnn.
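A loose sketch of the parallel heterogeneous design is given below (purely illustrative; the real HGB contains additional components such as the serial refinement block): a grouped-convolution branch emphasizes relations within channel groups while a plain convolution branch captures cross-channel relations, and the two responses are fused.

```python
import torch
import torch.nn as nn

class HeterogeneousGroupBlock(nn.Module):
    """Heterogeneous block sketch: parallel group conv and plain conv
    branches, fused by a 1x1 convolution."""
    def __init__(self, channels=64, groups=4):
        super().__init__()
        self.group_branch = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.plain_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.fuse(self.act(self.group_branch(x) + self.plain_branch(x)))
```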
Super-resolving the magnetic resonance (MR) image of a target contrast under the guidance of a corresponding auxiliary contrast, which provides additional anatomical information, is a new and effective solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate different contrasts directly, ignoring their relationships in different clues, e.g., in the high- and low-intensity regions. In this study, we propose a separable attention network (comprising high-intensity priority attention and low-intensity separation attention), named SANet. Our SANet can explore the high- and low-intensity regions in the "forward" and "reverse" directions with the help of the auxiliary contrast, while learning clearer anatomical structure and edge information for the SR of the target-contrast MR image. SANet offers three appealing benefits: (1) It is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict high- and low-intensity regions, diverting more attention to refining any uncertain details between these regions and correcting the fine areas in the reconstructed results. (2) A multi-stage integration module is proposed to learn the responses of multi-contrast fusion at multiple stages, obtain the dependencies between the fused representations, and boost their representation ability. (3) Extensive experiments against various state-of-the-art multi-contrast SR methods on the FastMRI and clinical \textit{in vivo} datasets demonstrate the superiority of our model.