Recently, deep neural networks (DNNs) have achieved significant success in real-world image super-resolution (SR). However, adversarial image samples with quasi-imperceptible noise can threaten deep-learning-based SR models. In this paper, we propose a robust deep-learning framework for real-world SR that randomly erases potential adversarial noise in the frequency domain of the input images or features. The rationale is that, on the SR task, clean images or features exhibit a different pattern from attacked ones in the frequency domain. Observing that existing adversarial attacks usually add high-frequency noise to input images, we introduce a novel random frequency mask module that blocks, in a stochastic manner, the high-frequency components that may contain harmful perturbations. Since frequency masking may not only destroy adversarial perturbations but also affect the sharp details in clean images, we further develop an adversarial-sample classifier based on the frequency domain of images to determine whether the proposed mask module should be applied. Based on the above ideas, we design a novel real-world image SR framework that combines the proposed frequency mask module and adversarial classifier with an existing super-resolution backbone network. Experiments show that our proposed method is more insensitive to adversarial attacks and delivers more stable SR results than existing models and defenses.
translated by Google Translate
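The random high-frequency masking described in the abstract above can be sketched with a plain 2-D FFT, independent of any particular SR backbone. This is a minimal illustrative assumption, not the paper's actual module: the `cutoff` radius, drop probability, and function name are all invented here for demonstration.

```python
import numpy as np

def random_frequency_mask(img, cutoff=0.25, drop_prob=0.5, seed=0):
    """Randomly zero out high-frequency FFT coefficients of a 2-D image.

    Low frequencies (within `cutoff` of the spectrum center) are always kept;
    each high-frequency bin is dropped with probability `drop_prob`.
    """
    rng = np.random.default_rng(seed)
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized distance of every bin from the DC component.
    dist = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    keep = (dist <= cutoff) | (rng.random((h, w)) > drop_prob)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * keep)))
```

A constant image has only a DC component, so the mask leaves it unchanged; for natural images the random mask suppresses part of the high-frequency band where such attacks typically hide perturbations.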
Current deep image super-resolution (SR) approaches attempt to restore high-resolution images from downsampled images degraded under the assumption of simple Gaussian kernels and additive noise. However, such simple image-processing operations represent only a crude approximation of the real-world process that reduces image resolution. In this paper, we propose a more realistic treatment of the real-world image SR problem by introducing a new Kernel Adversarial Learning Super-Resolution (KASR) framework to model the degradation process. In the proposed framework, degradation kernels and noise are adaptively modeled rather than explicitly specified. Moreover, we also propose an iterative supervision process and a high-frequency selective objective to further boost the model's SR reconstruction accuracy. Extensive experiments validate the effectiveness of the proposed framework on real-world datasets.
Burst super-resolution (SR) makes it possible to restore rich details from a sequence of low-quality images. However, since the low-resolution (LR) images in practical applications suffer from multiple complicated and unknown degradations, existing non-blind (e.g., bicubic) networks usually exhibit a severe performance drop when recovering high-resolution (HR) images. Moreover, handling multiple misaligned noisy raw inputs is also challenging. In this paper, we address the problem of reconstructing HR images from raw burst sequences acquired by modern handheld devices. The central idea is a kernel-guided strategy that solves burst SR in two steps: kernel modeling and HR restoration. The former estimates burst kernels from the raw inputs, while the latter predicts the super-resolved image based on the estimated kernels. In addition, we introduce a kernel-aware deformable alignment module that can effectively align the raw images by taking the blurriness prior into consideration. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method achieves state-of-the-art performance on the burst SR problem.
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process and misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
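The layer attention idea in the HAN abstract above — reweighting a stack of layer-wise feature maps by their pairwise correlations — can be illustrated with a minimal NumPy sketch. The scaling, softmax normalization, residual connection, and all names here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def layer_attention(features):
    """Reweight N stacked layer feature maps (N, C, H, W) by their correlations."""
    n = features.shape[0]
    flat = features.reshape(n, -1)
    logits = flat @ flat.T / flat.shape[1]       # (N, N) layer-wise correlation
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    out = (attn @ flat).reshape(features.shape)  # attention-weighted mixture of layers
    return features + out                        # residual connection
```

Each layer's output thus becomes a correlation-weighted mixture of all layers, which is the "holistic interdependency among layers" the abstract refers to.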
Reference-based Super-resolution (RefSR) approaches have recently been proposed to overcome the ill-posed problem of image super-resolution by providing additional information from a high-resolution image. Multi-reference super-resolution extends this approach by allowing more information to be incorporated. This paper proposes a 2-step-weighting posterior fusion approach to combine the outputs of RefSR models with multiple references. Extensive experiments on the CUFED5 dataset demonstrate that the proposed methods can be applied to various state-of-the-art RefSR models to get a consistent improvement in image quality.
Blind super-resolution (SR) aims to recover high-quality visual textures from a low-resolution (LR) image, which is typically degraded by a downsampling blur kernel and additive noise. This task is extremely difficult due to the challenges of complicated real-world image degradations. Existing SR approaches either assume a predefined blur kernel or fixed noise, which limits these approaches in challenging cases. In this paper, we propose a Degradation-guided Meta-restoration network for blind Super-Resolution (DMSR) that facilitates image restoration for real-world cases. DMSR consists of a degradation extractor and meta-restoration modules. The extractor estimates the degradations in LR inputs and guides the meta-restoration modules to predict restoration parameters for different degradations. DMSR is jointly optimized by a novel degradation consistency loss and reconstruction losses. Through such optimization, DMSR outperforms SOTA by a large margin on three widely-used benchmarks. A user study including 16 subjects further validates the superiority of DMSR in real-world blind SR tasks.
Video Super-Resolution (VSR) aims to restore high-resolution (HR) videos from low-resolution (LR) videos. Existing VSR techniques usually recover HR frames by extracting pertinent textures from nearby frames with known degradation processes. Despite significant progress, grand challenges remain in effectively extracting and transmitting high-quality textures from highly degraded low-quality sequences, e.g., with blur, additive noise, and compression artifacts. In this work, we propose a novel Frequency-Transformer (FTVSR) for handling low-quality videos, which carries out self-attention in a combined space-time-frequency domain. First, video frames are split into patches and each patch is transformed into spectral maps in which each channel represents a frequency band. This permits fine-grained self-attention on each frequency band, so that real visual texture can be distinguished from artifacts. Second, a novel dual frequency attention (DFA) mechanism is proposed to capture both global and local frequency relations, which can handle different complicated degradation processes in real-world scenarios. Third, we explore different self-attention schemes for video processing in the frequency domain and discover that a ``divided attention'', which conducts a joint space-frequency attention before applying temporal-frequency attention, leads to the best video enhancement quality. Extensive experiments on three widely-used VSR datasets show that FTVSR outperforms state-of-the-art methods on different low-quality videos with clear visual margins. Code and pre-trained models are available at https://github.com/researchmm/FTVSR.
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods have been proposed to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
Deep Convolutional Neural Networks (DCNNs) have exhibited impressive performance on image super-resolution tasks. However, these deep learning-based super-resolution methods perform poorly in real-world super-resolution tasks, where the paired high-resolution and low-resolution images are unavailable and the low-resolution images are degraded by complicated and unknown kernels. To break these limitations, we propose the Unsupervised Bi-directional Cycle Domain Transfer Learning-based Generative Adversarial Network (UBCDTL-GAN), which consists of an Unsupervised Bi-directional Cycle Domain Transfer Network (UBCDTN) and the Semantic Encoder guided Super Resolution Network (SESRN). First, the UBCDTN is able to produce an approximated real-like LR image through transferring the LR image from an artificially degraded domain to the real-world LR image domain. Second, the SESRN has the ability to super-resolve the approximated real-like LR image to a photo-realistic HR image. Extensive experiments on unpaired real-world image benchmark datasets demonstrate that the proposed method achieves superior performance compared to state-of-the-art methods.
Recent years have witnessed great advances of deep neural networks (DNNs) in light field (LF) image super-resolution (SR). However, existing DNN-based LF image SR methods are developed on a single fixed degradation (e.g., bicubic downsampling), and thus cannot be applied to super-resolve real LF images with diverse degradations. In this paper, we propose the first method to handle LF image SR with multiple degradations. In our method, a practical LF degradation model is developed to approximate the degradation process of real LF images. Then, a degradation-adaptive network (LF-DAnet) is designed to incorporate the degradation prior into the SR process. By training on LF images with multiple synthetic degradations, our method can learn to adapt to different degradations while incorporating both spatial and angular information. Extensive experiments on both synthetically degraded and real-world LFs demonstrate the effectiveness of our method. Compared with existing state-of-the-art single and LF image SR methods, our method achieves superior SR performance under a wide range of degradations, and generalizes better to real LF images. Code and models are available at https://github.com/yingqianwang/lf-danet.
Recently, the vulnerability of deep image classification models to adversarial attacks has been investigated. However, such an issue has not been thoroughly studied for image-to-image tasks that take an input image and generate an output image (e.g., colorization, denoising, deblurring, etc.). This paper presents a comprehensive investigation into the vulnerability of deep image-to-image models to adversarial attacks. For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints, such as output quality degradation due to attacks, transferability of adversarial examples across different tasks, and characteristics of the perturbations. We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks largely depends on various factors such as the attack method and the task objective. In addition, we analyze the effectiveness of conventional defense methods used for classification models in improving the robustness of image-to-image models.
Existing convolutional neural network (CNN) based image super-resolution (SR) methods have achieved impressive performance on the bicubic kernel, which is not valid for handling the unknown degradations in real-world applications. Recent blind SR methods propose to reconstruct SR images relying on blur kernel estimation. However, their results still exhibit visible artifacts and detail distortion due to estimation errors. To alleviate these problems, in this paper, we propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without incorporating a blur kernel prior for blind SR. Specifically, in our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures. The DSMM consists of two components: a detail restoration unit (DRU) and a structure modulation unit (SMU). The former aims at regressing the intermediate HR detail reconstruction from LR structural contexts, and the latter performs structural context modulation conditioned on the learned detail maps at both HR and LR spaces. Besides, we use the output of DSMM as the hidden state and design our DSSR architecture from a recurrent convolutional neural network (RCNN) view. In this way, the network can alternately optimize the image details and structural contexts, achieving co-optimization across time. Moreover, equipped with the recurrent connection, our DSSR allows low- and high-level feature representations to complement each other by observing previous HR details and contexts at every unrolling time step. Extensive experiments on synthetic datasets and real-world images demonstrate that our method achieves state-of-the-art performance against existing methods. The source code can be found at https://github.com/Arcananana/DSSR.
Current learning-based single image super-resolution (SISR) algorithms underperform on real data due to the deviation in the assumed degradation process. Conventional degradation processes consider applying blur, noise, and downsampling (typically bicubic downsampling) on high-resolution (HR) images to synthesize low-resolution (LR) counterparts. However, few works on degradation modeling have taken the physical aspects of the optical imaging system into consideration. In this paper, we analyze the imaging system optically and exploit the characteristics of real-world LR-HR pairs in the spatial frequency domain. By considering optics and sensor degradation, we formulate a real-world physics-inspired degradation model; the physical degradation of an imaging system is modeled as a low-pass filter, whose cutoff frequency is determined by the object distance, the focal length of the lens, and the pixel size of the image sensor. In particular, we propose to use a convolutional neural network (CNN) to learn the cutoff frequency of the real-world degradation process. The learned network is then applied to synthesize LR images from unpaired HR images. The synthetic HR-LR image pairs are later used to train an SISR network. We evaluate the effectiveness and generalization capability of the proposed degradation model on real-world images captured by different imaging systems. Experimental results show that the SISR network trained with our synthetic data performs favorably against networks trained with traditional degradation models. Moreover, our results are comparable to those obtained by the same network trained with real-world LR-HR pairs, which are challenging to acquire in real scenes.
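The degradation model in the abstract above — an ideal low-pass filter followed by downsampling — can be sketched as follows. In the paper's formulation the cutoff frequency would be learned by a CNN from object distance, focal length, and pixel size; here it is simply passed in as a parameter, and the function name and subsampling scheme are illustrative assumptions:

```python
import numpy as np

def optical_lowpass_degrade(hr, cutoff, scale=2):
    """Model optical/sensor degradation as an ideal low-pass filter, then downsample.

    `cutoff` is the normalized cutoff frequency (0..~0.7) of the imaging system.
    """
    F = np.fft.fftshift(np.fft.fft2(hr))
    h, w = hr.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized distance of every frequency bin from the DC component.
    dist = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    lr_full = np.real(np.fft.ifft2(np.fft.ifftshift(F * (dist <= cutoff))))
    return lr_full[::scale, ::scale]  # simple subsampling to the LR grid
```

Lowering `cutoff` mimics a larger object distance or coarser sensor pixels: progressively more fine detail is removed before the image ever reaches the downsampler.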
Hyperspectral image (HSI) super-resolution without additional auxiliary images remains a constant challenge due to the high-dimensional spectral patterns, where learning an effective spatial and spectral representation is a fundamental issue. Recently, implicit neural representations (INRs) have been making strides as a novel and effective representation, especially in reconstruction tasks. Therefore, in this work, we propose a novel INR-based HSI reconstruction model, which represents an HSI by a continuous function mapping spatial coordinates to their corresponding spectral radiance values. In particular, as a specific implementation of INR, the parameters of the parametric model are predicted by a hypernetwork that operates on the features extracted by a convolutional network. This makes the continuous function map spatial coordinates to pixel values in a content-aware manner. Moreover, periodic spatial encoding is deeply integrated with the reconstruction procedure, which makes our model capable of recovering finer high-frequency details. To verify the efficacy of our model, we conduct experiments on three HSI datasets (CAVE, NUS, and NTIRE2018). Experimental results show that the proposed model can achieve competitive reconstruction performance compared with state-of-the-art methods. Furthermore, we provide an ablation study on the effect of individual components of our model. We hope this paper can serve as a reference for future research.
Single image super-resolution (SISR) is an ill-posed problem that aims to obtain a high-resolution (HR) output from a low-resolution (LR) input, during which extra high-frequency information is supposed to be added to improve the perceptual quality. Existing SISR works mainly operate in the spatial domain by minimizing the mean squared reconstruction error. Despite the high peak signal-to-noise ratio (PSNR) results, it is difficult to determine whether the model correctly adds the desired high-frequency details. Some residual-based structures have been proposed to guide the model toward high-frequency features implicitly. However, since the interpretability of spatial-domain metrics is limited, how to verify the fidelity of those artificial details remains a problem. In this paper, we propose an intuitive pipeline from the frequency-domain perspective to solve this problem. Inspired by existing frequency-domain works, we convert images into discrete cosine transform (DCT) blocks and then reform them to obtain DCT feature maps, which serve as the input and target of our model. A specialized pipeline is designed accordingly, and we further propose a frequency loss function that fits the nature of tasks in the frequency domain. Our SISR method in the frequency domain can learn the high-frequency information explicitly, providing fidelity and good perceptual quality for the SR images. We further observe that our model can be merged with other spatial super-resolution models to enhance the quality of their original SR outputs.
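The "convert images into DCT blocks and reform them into DCT feature maps" step above can be sketched with a standard 8×8 block DCT, the same layout JPEG uses. This is a hedged illustration of the general technique, not the paper's exact reform: the block size, channel ordering, and function names are assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    M = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def block_dct_features(img, block=8):
    """Reform an (h, w) image into block*block DCT coefficient maps.

    Each (u, v) frequency of every block becomes one channel of an
    (h//block, w//block) spatial map, i.e. a frequency-domain feature map.
    """
    h, w = img.shape
    D = dct_matrix(block)
    blocks = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = D @ blocks @ D.T  # 2-D DCT of every block (batched matmul)
    return coeffs.transpose(2, 3, 0, 1).reshape(block * block, h // block, w // block)
```

Channel 0 collects the DC (average brightness) of each block; the remaining channels isolate progressively higher frequencies, which is what lets a frequency-domain loss supervise high-frequency detail explicitly.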
Reference-based image super-resolution (RefSR) is a promising SR branch and has shown great potential in overcoming the limitations of single image super-resolution. While previous state-of-the-art RefSR methods mainly focus on improving the efficacy and robustness of reference feature transfer, it is generally overlooked that a well-reconstructed SR image should enable better SR reconstruction for its similar LR images when it is used as a reference. Therefore, in this work, we propose a reciprocal learning framework that can appropriately leverage this fact to reinforce the learning of a RefSR network. Besides, we deliberately design a progressive feature alignment and selection module for further improving the RefSR task. The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection in a progressive manner, so that more precise reference features can be transferred into the input features and the network capability is enhanced. Our reciprocal learning paradigm is model-agnostic and can be applied to arbitrary RefSR models. We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm. Furthermore, our proposed model together with the reciprocal learning strategy sets new state-of-the-art performances on multiple benchmarks.
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
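The channel attention mechanism described above follows the familiar squeeze-and-excitation pattern: global average pooling, a bottleneck of two projections, a sigmoid gate, and per-channel rescaling. A minimal NumPy sketch of that pattern (the weight shapes and function name are assumptions, not RCAN's code):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention over (C, H, W) features.

    w1: (C//r, C) channel-downscaling weights; w2: (C, C//r) upscaling weights.
    """
    pooled = feat.mean(axis=(1, 2))                # global average pooling -> (C,)
    hidden = np.maximum(w1 @ pooled, 0.0)          # bottleneck projection + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # per-channel gate in (0, 1)
    return feat * scale[:, None, None]             # rescale each channel
```

The learned gate lets channels carrying high-frequency information be amplified relative to the abundant low-frequency channels, which is exactly the imbalance the abstract identifies.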
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates the real-world degradation, upon which the channel splitting vector is generated as the input for an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve the RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces at no extra inference cost. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
In this paper, we present D2C-SR, a novel framework for the task of real-world image super-resolution. As an ill-posed problem, the key challenge for super-resolution related tasks is that there can be multiple predictions for a given low-resolution input. Most classical deep-learning-based approaches ignore this fundamental fact and lack explicit modeling of the underlying high-frequency distribution, which leads to blurred results. Recently, some GAN-based or learned super-resolution space methods can generate simulated textures, but they do not promise the accuracy of the textures and come with low quantitative performance. Rethinking both, we learn the distribution of the underlying high-frequency details in a discrete form and propose a two-stage pipeline: a divergence stage followed by a convergence stage. At the divergence stage, we propose a tree-based structured deep network as our divergence backbone. A divergence loss is proposed to encourage the results generated by the tree-based network to diverge into possible high-frequency representations, which is our way of discretely modeling the underlying high-frequency distribution. At the convergence stage, we assign spatial weights to fuse these divergent predictions and obtain the final output with more accurate details. Our method provides a convenient end-to-end manner for inference. We conduct evaluations on several real-world benchmarks, including a newly proposed D2CRealSR dataset with an x8 scaling factor. Our experiments demonstrate that D2C-SR achieves better accuracy and visual improvements against state-of-the-art methods with a significantly smaller parameter count, and our D2C structure can also be applied as a generalized structure to some other methods to obtain improvements. Our code and dataset are available at https://github.com/megvii-research/d2c-sr
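The convergence stage above fuses several divergent predictions with spatial weights. A minimal sketch of one plausible realization — a per-pixel softmax over K candidate outputs — is shown below; in D2C-SR the weights are learned, whereas here they are simply passed in, and the function name is an assumption:

```python
import numpy as np

def weighted_fusion(preds, weights):
    """Fuse K divergent predictions (K, H, W) with per-pixel softmax weights (K, H, W)."""
    w = np.exp(weights - weights.max(axis=0, keepdims=True))  # stable softmax
    w /= w.sum(axis=0, keepdims=True)   # normalize over the K predictions
    return (preds * w).sum(axis=0)      # per-pixel weighted average -> (H, W)
```

With uniform weights this reduces to plain averaging; a trained weight network can instead pick, pixel by pixel, whichever branch produced the most plausible high-frequency detail.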
In this paper, we present a medical attention denoising super-resolution generative adversarial network (AID-SRGAN) for radiographic image super-resolution. First, we propose a medical practical degradation model that considers various degradation factors beyond downsampling. To the best of our knowledge, this is the first composite degradation model proposed for radiographic images. Furthermore, we propose AID-SRGAN, which can simultaneously denoise and generate high-resolution (HR) radiographs. In this model, we introduce an attention mechanism into the denoising module to make it more robust to complicated degradations. Finally, the SR module reconstructs the HR radiographs using the "clean" low-resolution (LR) radiographs. In addition, we propose a separate-joint training approach to train the model, and extensive experiments are conducted to show that the proposed method is superior to its counterparts. For example, our proposed method achieves a PSNR of 31.90 with a scaling factor of $4\times$, which is $7.05\%$ higher than that of the recent work SPSR [16]. Our dataset and code will be made available at: https://github.com/yongsongh/aidsrgan-miccai2022.