Recent image degradation estimation methods have enabled single-image super-resolution (SR) approaches to better upsample real-world images. Among these methods, explicit kernel estimation approaches have demonstrated unprecedented performance at handling unknown degradations. Nonetheless, a number of limitations constrain their efficacy when used by downstream SR models. Specifically, this family of methods yields i) excessive inference time due to long per-image adaptation times and ii) inferior image fidelity due to kernel mismatch. In this work, we introduce a learning-to-learn approach that meta-learns from the information contained in a distribution of images, thereby enabling significantly faster adaptation to new images with substantially improved performance in both kernel estimation and image fidelity. Specifically, we meta-train a kernel-generating GAN, named MetaKernelGAN, on a range of tasks, such that when a new image is presented, the generator starts from an informed kernel estimate and the discriminator starts with a strong capability to distinguish between patch distributions. Compared with state-of-the-art methods, our experiments show that MetaKernelGAN better estimates the magnitude and covariance of the kernel, leading to state-of-the-art blind SR results within a similar computational regime when combined with a non-blind SR model. Through supervised learning of an unsupervised learner, our method maintains the generalizability of the unsupervised learner, improves the optimization stability of kernel estimation, and hence image adaptation, and leads to faster inference, with a speedup of 14.24x to 102.1x over existing methods.
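The core mechanism here, meta-learning an initialization so that per-image GAN adaptation converges in far fewer steps, can be illustrated with a first-order (Reptile-style) sketch. This is an assumed simplification, not MetaKernelGAN itself: the generator architecture, the per-image objective, and all hyperparameters below are placeholders, and the real method also meta-trains a patch discriminator with an adversarial loss.

```python
# Minimal Reptile-style sketch of meta-learning a per-image adapter (illustrative only).
import copy
import torch
import torch.nn as nn

class KernelGenerator(nn.Module):
    """Placeholder generator that downsamples an image by 2x with learned convs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 3, 5, stride=2, padding=2),  # stride-2 => x2 downscale
        )

    def forward(self, x):
        return self.net(x)

def adaptation_loss(generator, image):
    """Placeholder per-image objective; KernelGAN-style methods instead use an
    adversarial loss between patches of the generated LR image and the input."""
    lr_hat = generator(image)
    ref = torch.nn.functional.interpolate(image, scale_factor=0.5, mode="bicubic",
                                          align_corners=False)
    return torch.nn.functional.l1_loss(lr_hat, ref)

def meta_train(meta_gen, images, meta_lr=0.1, inner_lr=1e-3, inner_steps=5):
    for image in images:                      # one "task" per training image
        fast = copy.deepcopy(meta_gen)        # clone current meta-parameters
        opt = torch.optim.Adam(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):          # short per-image adaptation
            opt.zero_grad()
            adaptation_loss(fast, image).backward()
            opt.step()
        # Reptile update: move meta-parameters toward the adapted ones
        with torch.no_grad():
            for p_meta, p_fast in zip(meta_gen.parameters(), fast.parameters()):
                p_meta += meta_lr * (p_fast - p_meta)

meta_generator = KernelGenerator()
dummy_images = [torch.rand(1, 3, 64, 64) for _ in range(4)]
meta_train(meta_generator, dummy_images)
```

At test time, the meta-trained generator would be adapted for only a few steps on the new image, which is where the claimed speedup over from-scratch adaptation comes from.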
Although single-image super-resolution (SISR) methods have achieved great success on single degradations, they still suffer performance drops in practical scenarios with multiple degradation effects. Recently, some blind and non-blind models for multiple degradations have been explored. However, these methods usually degrade significantly under distribution shifts between training and test data. To this end, we propose, for the first time, a conditional meta-network framework (named CMDSR), which helps the SR framework learn how to adapt to changes in the input distribution. We use the proposed ConditionNet to extract the degradation prior at the task level, which is then used to adapt the parameters of the basic SR network (BaseNet). Specifically, the ConditionNet of our framework first learns the degradation prior from a support set, which is composed of a series of degraded image patches from the same task. The adaptive BaseNet then rapidly shifts its parameters according to the conditional features. Moreover, to better extract the degradation prior, we propose a task contrastive loss that shortens the intra-task distance and enlarges the cross-task distance between task-level features. Without predefined degradation maps, our blind framework can conduct a single parameter update to produce considerable SR results. Extensive experiments demonstrate the effectiveness of CMDSR over a variety of blind, and even non-blind, methods. The flexible BaseNet structure also reveals that CMDSR can serve as a general framework for a large class of SISR models.
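The task contrastive loss described above pulls degradation features of patches from the same task together while pushing different tasks apart. A minimal sketch of one way to write such a loss is given below; the hinge-style formulation, margin, and feature shapes are assumptions rather than the paper's exact definition.

```python
import torch

def task_contrastive_loss(task_feats, margin=1.0):
    """task_feats: tensor of shape (T, N, D) -- T tasks, N support-patch features
    per task, D feature dims. Intra-task distances are minimized, inter-task
    centroid distances are pushed beyond a margin (simplified formulation)."""
    T, N, D = task_feats.shape
    centers = task_feats.mean(dim=1)                       # (T, D) task centroids
    intra = (task_feats - centers.unsqueeze(1)).pow(2).sum(-1).mean()
    pair_d = torch.cdist(centers, centers)                 # (T, T) centroid distances
    mask = ~torch.eye(T, dtype=torch.bool)
    inter = torch.clamp(margin - pair_d[mask], min=0).pow(2).mean()
    return intra + inter

# toy usage: 4 tasks, 8 support patches each, 128-d degradation features
loss = task_contrastive_loss(torch.randn(4, 8, 128))
```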
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on low-resolution (LR) images with known degradations. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods have been proposed to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from the perspective of image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the restoration of HR image details and structure. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based component decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
Although current deep learning-based methods have achieved promising performance on the blind single-image super-resolution (SISR) task, most of them mainly focus on heuristically constructing diverse network architectures and put less emphasis on explicitly embedding the physical generation mechanism between the blur kernel and the high-resolution (HR) image. To alleviate this issue, we propose a model-driven deep neural network, called KXNet, for blind SISR. Specifically, to solve the classical SISR model, we propose a simple yet effective iterative algorithm. Then, by unfolding the involved iterative steps into corresponding network modules, we naturally construct KXNet. The main specificity of the proposed KXNet is that the entire learning process is fully and reasonably integrated with the inherent physical mechanism underlying this SISR task. Thus, the learned blur kernel has a clear physical pattern, and the mutually iterative process between the blur kernel and the HR image can soundly guide KXNet to evolve in the right direction. Extensive experiments on synthetic and real data demonstrate the superior accuracy and generality of our method beyond current representative state-of-the-art blind SISR methods. Code is available at: \url{https://github.com/jiahong-fu/kxnet}.
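To make the unfolding idea concrete, the sketch below shows a generic deep-unrolling skeleton in which each stage alternately refines a blur-kernel estimate and an HR-image estimate. It is only a loose illustration under simplifying assumptions (flattened kernel, crude conditioning), not the actual KXNet modules or update equations.

```python
import torch
import torch.nn as nn

class UnrolledBlindSR(nn.Module):
    """Generic unrolled estimator: each stage refines a flattened blur-kernel
    estimate and an HR-image estimate, loosely mirroring an iterative algorithm."""
    def __init__(self, stages=4, ksize=11, feat=32):
        super().__init__()
        self.ksize = ksize
        self.kernel_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(feat + ksize * ksize, 128), nn.ReLU(),
                          nn.Linear(128, ksize * ksize))
            for _ in range(stages)])
        self.image_nets = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3 + 1, feat, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(feat, 3, 3, padding=1))
            for _ in range(stages)])
        self.pool = nn.Sequential(nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))

    def forward(self, lr):
        b = lr.size(0)
        hr = nn.functional.interpolate(lr, scale_factor=2, mode="bicubic",
                                       align_corners=False)
        kernel = torch.full((b, self.ksize ** 2), 1.0 / self.ksize ** 2,
                            device=lr.device)            # flat (uniform) kernel init
        for k_net, x_net in zip(self.kernel_nets, self.image_nets):
            ctx = self.pool(hr).flatten(1)               # global image context
            kernel = kernel + k_net(torch.cat([ctx, kernel], dim=1))
            kmap = kernel.mean(dim=1, keepdim=True)      # crude scalar summary
            kmap = kmap.view(b, 1, 1, 1).expand(-1, 1, hr.size(2), hr.size(3))
            hr = hr + x_net(torch.cat([hr, kmap], dim=1))
        return hr, kernel.view(b, self.ksize, self.ksize)

sr, k = UnrolledBlindSR()(torch.rand(2, 3, 24, 24))
```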
Relying heavily on iterative estimation or optimization of degradation models from scratch, existing blind super-resolution (SR) methods are usually time-consuming and less efficient, since the estimation of degradation proceeds from a blind initialization and lacks interpretable degradation priors. To address this, this paper proposes a transitive learning method for blind SR that uses an end-to-end network without any extra iterations in inference, and explores an effective representation of unknown degradations. First, we analyze and demonstrate the transitivity of degradations as interpretable prior information for indirectly inferring unknown degradation models, covering the widely used additive and convolutive degradations. We then propose a novel transitive learning method for blind super-resolution (TLSR), which tackles unknown degradations by adaptively inferring a transitive transformation function, without any iterative operations at inference. Specifically, the end-to-end TLSR network consists of a degree-of-transitivity (DoT) estimation network, a homogeneous feature extraction network, and a transitive learning module. Quantitative and qualitative evaluations on blind SR tasks show that the proposed TLSR achieves superior performance with lower complexity than state-of-the-art blind SR methods. The code is available at github.com/yuanfeihuang/tlsr.
Blind image super-resolution (SR) aims to recover high-quality visual textures from a low-resolution (LR) image, which is typically degraded by a downsampling blur kernel and additive noise. This task is extremely difficult due to the challenges of complicated image degradations in real-world scenarios. Existing SR methods either assume a predefined blur kernel or fixed noise, which limits these methods in challenging cases. In this paper, we propose a degradation-guided meta-restoration network for blind super-resolution (DMSR), which facilitates image restoration for real cases. DMSR consists of a degradation extractor and meta-restoration modules. The extractor estimates the degradations in the LR input and guides the meta-restoration modules to predict the corresponding restoration parameters. DMSR is jointly optimized by a novel degradation consistency loss and reconstruction losses. Through such optimization, DMSR outperforms SOTA methods by a large margin on three widely used benchmarks. A user study including 16 subjects further validates the superiority of DMSR on real-world blind SR tasks.
Existing convolutional neural network (CNN)-based image super-resolution (SR) methods have achieved impressive performance on the bicubic kernel, which does not hold for unknown degradations in real-world applications. Recent blind SR methods propose to reconstruct SR images relying on blur kernel estimation. However, their results still exhibit visible artifacts and detail distortion due to estimation errors. To alleviate these problems, in this paper, we propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without incorporating a blur kernel prior for blind SR. Specifically, in our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures. The DSMM consists of two components: a detail restoration unit (DRU) and a structure modulation unit (SMU). The former aims at regressing the intermediate HR detail reconstruction from LR structural contexts, and the latter performs structural context modulation conditioned on the learned detail maps in both the HR and LR spaces. Besides, we use the output of the DSMM as the hidden state and design our DSSR architecture from a recurrent convolutional neural network (RCNN) view. In this way, the network can alternately optimize the image details and structural contexts, achieving co-optimization across time. Moreover, equipped with the recurrent connection, our DSSR allows low- and high-level feature representations to complement each other by observing previous HR details and contexts at every unrolling step. Extensive experiments on synthetic datasets and real-world images demonstrate that our method achieves state-of-the-art performance against existing methods. The source code can be found at https://github.com/Arcananana/DSSR.
Image super-resolution (SR) is one of the important image processing methods for improving image resolution in the field of computer vision. In the last two decades, significant progress has been made in the field of super-resolution, especially through the use of deep learning methods. This survey provides a detailed review of recent progress in single-image super-resolution from the perspective of deep learning, while also covering the initial classical methods for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, learning-based methods, unsupervised-learning-based methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges. Deep learning-based methods are evaluated using reference datasets. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multi-scale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR, and open problems for researchers to address.
Most existing CNN-based super-resolution (SR) methods assume that the degradation is known (\eg, bicubic). These methods suffer a severe performance drop when the degradation differs from their assumption. Therefore, some methods attempt to train SR networks with complex combinations of multiple degradations to cover the real degradation space. To adapt to multiple unknown degradations, introducing an explicit degradation estimator can in fact facilitate SR performance. However, previous explicit degradation estimation methods usually predict Gaussian blur kernels under the supervision of ground-truth blur kernels, and estimation errors may lead to SR failure. It is therefore necessary to design a method that can extract implicit, discriminative degradation representations. To this end, we propose a Meta-Learning based Region Degradation Aware SR Network (MRDA), consisting of a Meta-Learning Network (MLN), a Degradation Extraction Network (DEN), and a Region Degradation Aware SR Network (RDAN). To handle the lack of ground-truth degradations, we use the MLN to rapidly adapt to a specific complex degradation after a few iterations and to extract implicit degradation information. Subsequently, a teacher network MRDA$_{T}$ is designed to further exploit the degradation information extracted by the MLN for SR. However, the MLN requires iterating on paired low-resolution (LR) and corresponding high-resolution (HR) images, which are unavailable at the inference stage. Therefore, we adopt knowledge distillation (KD) so that the student network learns to directly extract the same implicit degradation representation (IDR) as the teacher from LR images.
Typical methods for blind image super-resolution (SR) handle unknown degradations by directly estimating them or learning degradation representations in a latent space. A potential limitation of these methods is that they assume the unknown degradations can be simulated by the integration of various handcrafted degradations (e.g., bicubic downsampling), which is not necessarily true. Real-world degradations can lie beyond the simulation scope of handcrafted degradations; these are referred to as novel degradations. In this work, we propose to learn a latent degradation space that can be generalized from handcrafted (base) degradations to novel degradations. The representations of novel degradations obtained in this latent space are then leveraged to generate degraded images consistent with the novel degradations, so as to compose paired training data for the SR model. Furthermore, we perform variational inference to match the posterior of degradations in the latent representation space with a prior distribution (e.g., a Gaussian distribution). Consequently, we are able to sample more high-quality representations for a novel degradation to augment the training data of the SR model. We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness and advantages of our method for blind super-resolution with novel degradations.
This paper proposes a new variational inference framework for image restoration and a convolutional neural network (CNN) structure that can solve the restoration problems described by the proposed framework. Earlier CNN-based image restoration methods mainly focused on network architecture design or training strategies under non-blind settings, where the degradation model is known or assumed. As a step closer to real-world applications, CNNs have also been trained blindly on whole datasets that include diverse degradations. However, the conditional distribution of high-quality images given diversely degraded ones is too complicated to be learned by a single CNN. Hence, there are also methods that provide additional prior information to train CNNs. Unlike previous approaches, we focus more on the restoration objective from a Bayesian perspective and on how to reformulate that objective. Specifically, our method relaxes the original posterior inference problem into more manageable sub-problems and thus behaves like a divide-and-conquer scheme. As a result, the proposed framework boosts performance on several restoration problems compared with previous frameworks. Specifically, our method delivers state-of-the-art performance on Gaussian denoising, real-world noise reduction, blind image super-resolution, and JPEG compression artifact reduction.
Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce "Zero-Shot" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows it to perform SR on real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.
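The test-time training recipe of "Zero-Shot" SR can be summarized in a short sketch: build LR/HR training pairs from the input image alone by downscaling it, fit a small image-specific CNN on those pairs, then apply the network to the input at the target scale. The sketch below is a simplified reading of that procedure (no data augmentation, gradual scaling, or back-projection, and an assumed residual architecture).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_shot_sr(lr_img, scale=2, steps=200, lr_rate=1e-3):
    """lr_img: (1, 3, H, W) test image. Trains a tiny image-specific CNN on pairs
    built only from lr_img itself, then upsamples lr_img by `scale`."""
    net = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr_rate)
    # The "HR" target is the test image itself; its downscaled "LR son" is the input.
    son = F.interpolate(lr_img, scale_factor=1.0 / scale, mode="bicubic",
                        align_corners=False)
    son_up = F.interpolate(son, size=lr_img.shape[-2:], mode="bicubic",
                           align_corners=False)
    for _ in range(steps):
        opt.zero_grad()
        pred = son_up + net(son_up)              # residual learning on the self-pair
        F.l1_loss(pred, lr_img).backward()
        opt.step()
    # Apply the image-specific network to the original input at the target scale.
    test_up = F.interpolate(lr_img, scale_factor=scale, mode="bicubic",
                            align_corners=False)
    with torch.no_grad():
        return test_up + net(test_up)

sr = zero_shot_sr(torch.rand(1, 3, 48, 48))
```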
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing more flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also provide several visualizations of the models and methods throughout each chapter to facilitate a global understanding of the trends in the field. Ultimately, this review aims to help researchers push the boundaries of DL applied to SR.
Although recent model-based studies on blind single-image super-resolution (SISR) have achieved tremendous success, most of them do not adequately consider the image degradation. First, they always assume that image noise obeys an independent and identically distributed (i.i.d.) Gaussian or Laplacian distribution, which largely underestimates the complexity of real noise. Second, previously commonly-used kernel priors (e.g., normalization, sparsity) are insufficient to guarantee a rational kernel solution, thereby degenerating the performance of the subsequent SISR task. To address the above issues, this paper proposes a model-based blind SISR method under a probabilistic framework, which elaborately models the image degradation from the perspectives of noise and blur kernel. Specifically, instead of the traditional i.i.d. noise assumption, a patch-based non-i.i.d. noise model is proposed to handle complicated real noise, which is expected to increase the degrees of freedom of the model for noise representation. As for the blur kernel, we newly construct a concise yet effective kernel generator and plug it into the proposed blind SISR method as an explicit kernel prior (EKP). To solve the proposed model, a theoretically grounded Monte Carlo EM algorithm is specifically designed. Comprehensive experiments demonstrate the superiority of our method over current state-of-the-art techniques on both synthetic and real datasets.
Despite their success on benchmark datasets, most advanced face super-resolution models perform poorly in real scenarios due to the significant domain gap between real images and synthetic training pairs. To tackle this problem, we propose a novel domain-adaptive degradation network for face super-resolution in the wild. This degradation network predicts a flow field along with an intermediate low-resolution image. The degraded counterpart is then generated by warping the intermediate image. With a preference for capturing motion blur, such a model performs better at preserving identity consistency between the original image and its degraded counterpart. We further propose a self-conditioned block for the super-resolution network. This block takes the input image as a conditional term to effectively utilize facial structure information, eliminating the dependence on explicit priors, e.g., facial landmarks or boundaries. Our model achieves state-of-the-art performance on CelebA and real-world face datasets. The former demonstrates the powerful generative ability of the proposed architecture, while the latter shows good identity consistency and perceptual quality on real-world images.
The key challenge of real-world image super-resolution (SR) is to recover missing details in low-resolution (LR) images with complex unknown degradations (e.g., downsampling, noise, and compression). Most previous works restore such missing details in image space. To cope with the high diversity of natural images, they either rely on unstable GANs that are difficult to train and prone to artifacts, or resort to explicit references from high-resolution (HR) images that are usually unavailable. In this work, we propose Feature Matching SR (FeMaSR), which restores realistic HR images in a much more compact feature space. Unlike image-space methods, our FeMaSR restores HR images by matching distorted LR image {\it features} to their distortion-free HR counterparts in our pretrained HR priors, and decoding the matched features to obtain realistic HR images. Specifically, our HR priors contain a discrete feature codebook and its associated decoder, which are pretrained on HR images with a Vector-Quantized Generative Adversarial Network (VQGAN). Notably, we incorporate a novel semantic regularization into the VQGAN to improve the quality of the reconstructed images. For the feature matching, we first extract LR features with an LR encoder and then follow a simple nearest-neighbour strategy to match them with the pretrained codebook. In particular, we equip the LR encoder with residual shortcut connections to the decoder, which is crucial for optimizing the feature matching loss and also helps to compensate for possible feature matching errors. Experimental results show that our method produces more realistic HR images than previous methods. Code is released at \url{https://github.com/chaofengc/femasr}.
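The nearest-neighbour matching step against a pretrained codebook can be sketched as follows. This is an assumed, generic vector-quantization snippet (with a straight-through gradient), not the released FeMaSR code.

```python
import torch

def nearest_codebook_match(feats, codebook):
    """feats: (B, C, H, W) encoder features; codebook: (K, C) learned entries.
    Returns quantized features of the same shape (straight-through gradient)."""
    B, C, H, W = feats.shape
    flat = feats.permute(0, 2, 3, 1).reshape(-1, C)        # (B*H*W, C)
    dists = torch.cdist(flat, codebook)                    # (B*H*W, K)
    idx = dists.argmin(dim=1)                              # nearest entry per vector
    quant = codebook[idx].reshape(B, H, W, C).permute(0, 3, 1, 2)
    # straight-through: forward uses quantized values, backward passes gradients
    return feats + (quant - feats).detach()

q = nearest_codebook_match(torch.randn(2, 256, 16, 16), torch.randn(1024, 256))
```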
Real-world image super-resolution (RISR) has received increased focus for improving the quality of SR images under unknown complex degradation. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates the real-world degradation, upon which the channel splitting vector is generated as the input for an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces with computation-free inference. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to nonblindly deal with multiple degradations. To address these issues, we propose a general framework with dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications.
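The dimensionality stretching strategy can be illustrated roughly as below: the blur kernel is projected to a short code (assumed here to come from a precomputed PCA basis), concatenated with the noise level, and stretched into per-pixel degradation maps that are stacked with the LR image as the network input. Dimensions and the projection matrix are placeholders.

```python
import torch

def stretch_degradation(lr_img, blur_kernel, noise_level, pca_basis):
    """lr_img: (B, 3, H, W); blur_kernel: (B, k*k) flattened kernels;
    noise_level: (B,) sigma values; pca_basis: (k*k, t) projection matrix
    (assumed precomputed from a kernel set). Returns the stretched network
    input of shape (B, 3 + t + 1, H, W)."""
    B, _, H, W = lr_img.shape
    kernel_code = blur_kernel @ pca_basis                            # (B, t) reduced kernel
    deg = torch.cat([kernel_code, noise_level.unsqueeze(1)], dim=1)  # (B, t+1)
    deg_maps = deg.view(B, -1, 1, 1).expand(-1, -1, H, W)            # stretch to H x W maps
    return torch.cat([lr_img, deg_maps], dim=1)

x = stretch_degradation(torch.rand(2, 3, 32, 32), torch.rand(2, 15 * 15),
                        torch.tensor([0.01, 0.05]), torch.randn(15 * 15, 15))
```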
Blind image super-resolution (Blind-SR) aims to recover a high-resolution (HR) image from its corresponding low-resolution (LR) input image with unknown degradations. Most of the existing works design an explicit degradation estimator for each degradation to guide SR. However, it is infeasible to provide concrete labels for the many possible degradation combinations (\eg, blur, noise, JPEG compression) to supervise the degradation estimator training. In addition, these special designs for certain degradations, such as blur, impede the models from generalizing to handle different degradations. To this end, it is necessary to design an implicit degradation estimator that can extract discriminative degradation representations for all degradations without relying on the supervision of degradation ground-truth. In this paper, we propose a Knowledge Distillation based Blind-SR network (KDSR). It consists of a knowledge distillation based implicit degradation estimator network (KD-IDE) and an efficient SR network. To learn the KDSR model, we first train a teacher network, KD-IDE$_{T}$: it takes paired HR and LR patches as inputs and is optimized jointly with the SR network. Then, we further train a student network KD-IDE$_{S}$, which takes only LR images as input and learns to extract the same implicit degradation representation (IDR) as KD-IDE$_{T}$. In addition, to fully use the extracted IDR, we design a simple, strong, and efficient IDR-based dynamic convolution residual block (IDR-DCRB) to build the SR network. We conduct extensive experiments under classic and real-world degradation settings. The results show that KDSR achieves SOTA performance and can generalize to various degradation processes. The source code and pre-trained models will be released.
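The teacher-student step, distilling the implicit degradation representation (IDR) so the student needs only the LR input, can be sketched as follows. The encoder architecture, feature dimension, and training snippet are assumptions for illustration; in KDSR the teacher is first optimized jointly with the SR network.

```python
import torch
import torch.nn as nn

class DegradationEncoder(nn.Module):
    def __init__(self, in_ch, dim=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, padding=1), nn.AdaptiveAvgPool2d(1))

    def forward(self, x):
        return self.body(x).flatten(1)                    # (B, dim) IDR

teacher = DegradationEncoder(in_ch=6)   # takes HR and LR stacked along channels
student = DegradationEncoder(in_ch=3)   # takes the LR patch only
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

hr = torch.rand(4, 3, 64, 64)
lr = nn.functional.interpolate(hr, scale_factor=0.5, mode="bicubic",
                               align_corners=False)
lr_up = nn.functional.interpolate(lr, size=hr.shape[-2:], mode="bicubic",
                                  align_corners=False)

with torch.no_grad():                                     # frozen teacher IDR
    idr_t = teacher(torch.cat([hr, lr_up], dim=1))
idr_s = student(lr)                                       # student sees LR only
loss = nn.functional.l1_loss(idr_s, idr_t)                # distillation loss
loss.backward()
opt.step()
```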
Super-resolution is an ill-posed problem, where the ground-truth high-resolution image represents only one possibility in the space of plausible solutions. Yet, the dominant paradigm is to employ pixel-wise losses, such as L_1, which drive the prediction towards a blurry average. When combined with adversarial losses, this leads to fundamentally conflicting objectives, which degrades the final quality. We address this issue by revisiting the L_1 loss and showing that it corresponds to a one-layer conditional flow. Inspired by this relation, we explore general flows as a fidelity-based alternative to the L_1 objective. We demonstrate that the flexibility of deeper flows leads to better visual quality and consistency when combined with adversarial losses. We conduct extensive user studies on three datasets and scale factors, where our approach is shown to outperform state-of-the-art methods for photo-realistic super-resolution. Code and trained models are available at: git.io/adflow
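The stated correspondence between the L_1 loss and a one-layer conditional flow follows from a standard likelihood argument, summarized here with assumed notation (f_\theta(x) is the SR prediction, b a fixed Laplace scale): minimizing L_1 is, up to constants, maximum likelihood under a conditional Laplace density, i.e., a flow consisting of a single shift layer.

```latex
p_\theta(y \mid x) \;=\; \prod_i \frac{1}{2b}\exp\!\Big(-\frac{|y_i - f_\theta(x)_i|}{b}\Big)
\quad\Longrightarrow\quad
-\log p_\theta(y \mid x) \;=\; \frac{1}{b}\,\big\|y - f_\theta(x)\big\|_1 \;+\; \mathrm{const}.
```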