Self-similarity based super-resolution (SR) algorithms are able to produce visually pleasing results without extensive training on external databases. Such algorithms exploit the statistical prior that patches in a natural image tend to recur within and across scales of the same image. However, the internal dictionary obtained from the given image may not always be sufficiently expressive to cover the textural appearance variations in the scene. In this paper, we extend self-similarity based SR to overcome this drawback. We expand the internal patch search space by allowing geometric variations. We do so by explicitly localizing planes in the scene and using the detected perspective geometry to guide the patch search process. We also incorporate additional affine transformations to accommodate local shape variations. We propose a compositional model to simultaneously handle both types of transformations. We extensively evaluate the performance in both urban and natural scenes. Even without using any external training databases, we achieve significantly superior results on urban scenes, while maintaining comparable performance on natural scenes as other state-of-the-art SR algorithms.
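The cross-scale self-similarity prior at the heart of this family of methods can be sketched in a few lines of NumPy: search a downscaled copy of the input image for the patch that best matches a query patch. This is a brute-force illustration with hand-picked sizes, not the paper's perspective/affine-augmented search:

```python
import numpy as np

def downscale(img, s=2):
    """Average-pool the image by factor s to build a coarser scale."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    return img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def best_internal_match(img, patch, stride=1):
    """Brute-force search for the best-matching patch in a downscaled
    copy of the same image (the cross-scale self-similarity prior)."""
    coarse = downscale(img)
    p = patch.shape[0]
    best, best_err = None, np.inf
    for i in range(0, coarse.shape[0] - p + 1, stride):
        for j in range(0, coarse.shape[1] - p + 1, stride):
            cand = coarse[i:i + p, j:j + p]
            err = np.sum((cand - patch) ** 2)
            if err < best_err:
                best, best_err = (i, j), err
    return best, best_err
```

The paper enlarges exactly this search space by additionally warping candidate patches with perspective and affine transformations before comparison.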
Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image superresolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
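A minimal NumPy sketch of two ingredients named above: the robust Charbonnier loss, and one progressive residual-reconstruction step, with nearest-neighbor upsampling standing in for the learned transposed convolution:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Robust Charbonnier loss, a differentiable variant of L1:
    sqrt((x - y)^2 + eps^2), averaged over all pixels."""
    diff = pred - target
    return np.mean(np.sqrt(diff * diff + eps * eps))

def reconstruct_level(coarse, residual):
    """One LapSRN-style pyramid step: upsample the coarse image 2x
    (nearest-neighbor here; the network learns a transposed
    convolution) and add the predicted high-frequency residual."""
    up = coarse.repeat(2, axis=0).repeat(2, axis=1)
    return up + residual
```

Deep supervision means the Charbonnier loss is applied to the output of every pyramid level, not only the final one.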
Recent methods for single image super-resolution (SISR) have shown outstanding performance in producing high-resolution (HR) images from low-resolution (LR) images. However, most of these methods demonstrate their strength on synthetically generated LR images, and their generalizability to real-world images is often unsatisfactory. In this paper, we pay attention to two well-known strategies developed for robust super-resolution (SR), i.e., reference-based SR (RefSR) and zero-shot SR (ZSSR), and propose an integrated solution called reference-based zero-shot SR (RZSR). Following the principle of ZSSR, we train an image-specific SR network at test time using training samples extracted only from the input image itself. To advance ZSSR, we obtain reference image patches with rich textures and high-frequency details, which are also extracted only from the input image using cross-scale matching. To this end, we construct an internal reference dataset using depth information and retrieve reference image patches from the dataset. Using LR patches and their corresponding HR reference patches, we train a RefSR network embodied with a non-local attention module. Experimental results demonstrate the superiority of the proposed RZSR compared to previous ZSSR methods, and its robustness to unseen images compared to other fully supervised SISR methods.
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image superresolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.
This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low resolution (LR) and high resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation); (ii) an accurate estimation of the patches by their nearest neighbors (weight computation); (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time.
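The local-embedding step can be sketched as the standard LLE-style constrained least squares; the paper's nonnegative variant additionally constrains the weights to be nonnegative (e.g. via NNLS), which this sketch omits:

```python
import numpy as np

def ne_weights(x, neighbors, reg=1e-6):
    """Least-squares weights that reconstruct feature x from its
    K nearest neighbors (rows of `neighbors`), constrained to sum
    to 1 -- the local-embedding step of neighbor-embedding SR."""
    G = neighbors - x                        # shift to local frame
    C = G @ G.T                              # local Gram matrix
    C += reg * np.trace(C) * np.eye(len(C))  # regularize near-singular C
    w = np.linalg.solve(C, np.ones(len(C)))
    return w / w.sum()

def ne_reconstruct(x, lr_neigh, hr_neigh):
    """Apply the LR-space weights to the paired HR neighbors,
    assuming the local LR embedding is preserved in HR space."""
    w = ne_weights(x, lr_neigh)
    return w @ hr_neigh
```

The HR patch is thus synthesized with the weights computed entirely in LR feature space, which is the embedding-preservation assumption stated in the abstract.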
Over the past few decades, many attempts have been made to solve the problem of recovering a high-resolution (HR) facial image from its corresponding low-resolution (LR) counterpart, a task commonly referred to as face hallucination. Despite the impressive performance achieved by position-patch and deep learning-based methods, most of these techniques still fail to recover identity-specific features of faces. The former group of algorithms often produces blurry and over-smoothed outputs in the presence of higher levels of degradation, whereas the latter generates faces that sometimes in no way resemble the individual in the input image. In this paper, a novel face super-resolution approach is introduced, in which the hallucinated face is forced to lie in a subspace spanned by the available training faces. Therefore, in contrast to most existing face hallucination techniques, due to this face subspace prior the reconstruction is performed to recover the facial features of a specific person rather than merely increasing quantitative image scores. Furthermore, inspired by recent advances in the field of 3D face reconstruction, an efficient 3D dictionary alignment scheme is also presented, through which the algorithm becomes capable of handling low-resolution faces taken under uncontrolled conditions. In extensive experiments conducted on several well-known face datasets, the proposed algorithm shows remarkable performance by generating detailed and close-to-ground-truth results, outperforming state-of-the-art face hallucination algorithms by significant margins in both quantitative and qualitative evaluations.
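The face-subspace prior amounts to constraining the hallucinated face to the span of the training faces, e.g. by projecting onto a PCA basis built from them. A NumPy sketch with made-up dimensions, illustrative only:

```python
import numpy as np

def face_subspace(train_faces, k):
    """PCA basis of the subspace spanned by the training faces
    (rows = vectorized faces); keeps the top-k components."""
    mean = train_faces.mean(axis=0)
    _, _, vt = np.linalg.svd(train_faces - mean, full_matrices=False)
    return mean, vt[:k]

def project_to_subspace(face, mean, basis):
    """Force a (hallucinated) face to lie in the training-face
    subspace by orthogonal projection onto the PCA basis."""
    coeff = (face - mean) @ basis.T
    return mean + coeff @ basis
```

In the paper this constraint shapes the reconstruction itself rather than being a post-hoc projection, but the subspace idea is the same.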
Despite achieving remarkable progress in recent years, single image super-resolution methods have been developed with several limitations. Specifically, they are trained on fixed content domains with certain degradations (whether synthetic or real), and the priors they learn are prone to overfitting the training configuration. Therefore, it remains unclear how well they generalize to novel domains such as drone top-view data. Nonetheless, pairing drones with proper image super-resolution is of great value: it would enable drones to fly higher while covering larger areas and maintaining high image quality. To answer these questions and pave the way towards drone image super-resolution, we explore this application with a particular focus on the single-image case. We propose a novel drone image dataset, with scenes captured at low and high resolutions and across a span of altitudes. Our results show that off-the-shelf state-of-the-art networks witness a significant performance drop on this different domain. We additionally show that simple fine-tuning, as well as incorporating altitude awareness into the network's architecture, both improve the reconstruction performance.
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, including a high-resolution camera surrounded by multiple low-resolution cameras. The performance of existing methods is still limited, as they produce either blurry results on plain textured areas or distortions around depth discontinuous boundaries. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation, while the other module warps another intermediate estimation, which maintains the high-frequency textures, by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations adaptively via the learned attention maps, leading to the final high-resolution LF image with satisfactory results on both plain textured areas and depth discontinuous boundaries. Besides, to promote the effectiveness of our method trained with simulated hybrid data on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy. Extensive experiments on both real and simulated hybrid data demonstrate the significant superiority of our approach over state-of-the-art ones. To the best of our knowledge, this is the first end-to-end deep learning method for LF reconstruction from a real hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
In practice, images can contain different amounts of noise in different color channels, which is not acknowledged by existing super-resolution approaches. In this paper, we propose to super-resolve noisy color images by paying special attention to the color channels. The noise statistics are blindly estimated from the input low-resolution image and are used to assign different weights to different color channels in the data cost. The implicit low-rank structure of visual data is enforced via nuclear norm minimization in association with adaptive weights, which is added as a regularization term to the cost. Additionally, multi-scale details of the image are added to the model through another regularization term that involves projection onto a PCA basis, constructed using similar patches extracted across different scales of the input image. The results demonstrate the super-resolving capability of the approach in real scenarios.
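The core regularizer here, weighted nuclear norm minimization, has a closed-form proximal step: soft-threshold each singular value by its own weight. A NumPy sketch (the weights in the paper are derived from the estimated per-channel noise statistics; here they are arbitrary inputs):

```python
import numpy as np

def weighted_svt(M, weights):
    """Proximal step of weighted nuclear norm minimization:
    shrink each singular value of M by its own weight, keeping
    the result nonnegative (weighted singular value thresholding)."""
    u, s, vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - weights, 0.0)   # weighted soft-thresholding
    return (u * s) @ vt                # rebuild the low-rank estimate
```

Small singular values, which mostly carry noise, are suppressed or zeroed, while large structure-carrying ones survive nearly intact.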
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
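Texture statistics of the kind matched by such texture-synthesis losses are commonly summarized by Gram matrices of deep feature maps (Gatys-style); a NumPy sketch of that statistic, not the paper's exact loss:

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a (C, H, W) feature map,
    the spatially-invariant texture statistic used in Gatys-style
    texture synthesis."""
    c = features.shape[0]
    f = features.reshape(c, -1)
    return f @ f.T / f.shape[1]

def texture_loss(feat_sr, feat_hr):
    """Match texture statistics instead of per-pixel values."""
    g_sr, g_hr = gram_matrix(feat_sr), gram_matrix(feat_hr)
    return np.mean((g_sr - g_hr) ** 2)
```

Because the Gram matrix sums over all spatial positions, the loss is invariant to where a texture appears, which is exactly why it rewards realistic texture rather than pixel-accurate alignment.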
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge [26].
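The module-removal idea above refers to EDSR's residual block with batch normalization stripped out and a residual scaling factor added for training stability. A single-channel NumPy sketch with a hand-rolled "same" 3x3 cross-correlation (the real network uses multi-channel learned convolutions):

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 cross-correlation on a single-channel image,
    with zero padding."""
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def edsr_block(x, w1, w2, scale=0.1):
    """EDSR-style residual block: conv-ReLU-conv with no batch
    normalization, a residual scaling factor, and an identity skip."""
    h = np.maximum(conv3x3(x, w1), 0.0)   # ReLU
    return x + scale * conv3x3(h, w2)     # scaled residual + skip
```

Removing batch normalization saves memory and, as the abstract notes, lets the model grow wider; the small residual scale keeps training of the enlarged model stable.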
With the advent of deep learning (DL), super-resolution (SR) has also become a thriving research field. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of the trends in the field. This review ultimately aims to help researchers push the boundaries of DL applied to SR.
Since the first success by Dong et al., deep-learning-based approaches have dominated the field of single image super-resolution, replacing all the handcrafted image processing steps of traditional sparse-coding-based methods with a deep neural network. In contrast to sparse-coding-based methods, which explicitly create high-/low-resolution dictionaries, the dictionaries in deep-learning-based methods are implicitly acquired as a nonlinear combination of multiple convolutions. One disadvantage of deep-learning-based methods is that their performance degrades for images that differ from the training dataset (out-of-domain images). We propose an end-to-end super-resolution network with a deep dictionary (SRDD), in which a high-resolution dictionary is explicitly learned without sacrificing the advantages of deep learning. Extensive experiments show that explicit learning of a high-resolution dictionary makes the network more robust for out-of-domain test images while maintaining the performance on in-domain test images.
The feed-forward architectures of recently proposed deep super-resolution networks learn representations of low-resolution inputs, and the non-linear mapping from those to high-resolution output. However, this approach does not fully address the mutual dependencies of low- and high-resolution images. We propose Deep Back-Projection Networks (DBPN), which exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage. We construct mutually connected up- and down-sampling stages, each of which represents different types of image degradation and high-resolution components. We show that extending this idea to allow concatenation of features across up- and down-sampling stages (Dense DBPN) allows us to further improve super-resolution, yielding superior results and in particular establishing new state-of-the-art results for large scaling factors such as 8× across multiple datasets.
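The error-feedback mechanism unrolled by DBPN goes back to classic iterative back-projection, which is easy to state in NumPy; average-pool down-projection and nearest-neighbor up-projection stand in here for the learned layers:

```python
import numpy as np

def down(x, s=2):
    """Down-projection: average-pool by factor s."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def up(x, s=2):
    """Up-projection: nearest-neighbor upsampling by factor s."""
    return x.repeat(s, axis=0).repeat(s, axis=1)

def back_projection(lr, steps=20, lam=1.0):
    """Classic iterative back-projection: repeatedly project the
    current HR estimate down, compare with the observed LR image,
    and feed the projection error back up to correct the estimate."""
    hr = up(lr)
    for _ in range(steps):
        err = lr - down(hr)        # projection error at LR scale
        hr = hr + lam * up(err)    # error feedback at HR scale
    return hr
```

DBPN replaces these fixed operators with learned up- and down-sampling stages and stacks many such error-feedback units.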
Despite that convolutional neural networks (CNN) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures in comparison to state-of-the-art SRGAN [27] and EnhanceNet [38].
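The SFT layer itself is a spatially varying affine modulation of a feature map. A NumPy sketch in which a 1x1 linear map (a stand-in for the paper's small condition network) derives the scale and shift maps from segmentation probability maps:

```python
import numpy as np

def sft_params(seg_probs, w_gamma, w_beta):
    """Derive per-pixel scale/shift maps from (K, H, W) segmentation
    probability maps via a 1x1 linear mapping with (C, K) weights;
    a stand-in for the learned condition network."""
    gamma = np.tensordot(w_gamma, seg_probs, axes=1)  # (C, H, W)
    beta = np.tensordot(w_beta, seg_probs, axes=1)    # (C, H, W)
    return gamma, beta

def sft(features, gamma, beta):
    """Spatial Feature Transform: spatial-wise affine modulation
    of a (C, H, W) feature map."""
    return gamma * features + beta
```

With gamma fixed to ones and beta to zeros the layer is the identity, which is why SFT layers can be dropped into an SR network and trained end-to-end with the same loss.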
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods attempt to solve this problem using blur kernel estimation, the perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image from image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, the mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
Existing convolutional neural network (CNN) based image super-resolution (SR) methods have achieved impressive performance on the bicubic kernel, which is not valid for handling unknown degradations in real-world applications. Recent blind SR methods propose to reconstruct SR images relying on blur kernel estimation. However, their results still contain visible artifacts and detail distortion due to the estimation errors. To alleviate these problems, in this paper, we propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternating optimization without incorporating a blur kernel prior for blind SR. Specifically, in our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures. The DSMM consists of two components: a detail restoration unit (DRU) and a structure modulation unit (SMU). The former aims at regressing the intermediate HR detail reconstruction from LR structural contexts, and the latter performs structural context modulation conditioned on the learned detail maps at both HR and LR spaces. Besides, we use the output of DSMM as the hidden state and design our DSSR architecture from a recurrent convolutional neural network (RCNN) view. In this way, the network can alternately optimize the image details and structural contexts, achieving co-optimization across time. Moreover, equipped with the recurrent connection, our DSSR allows low- and high-level feature representations to complement each other by observing previous HR details and contexts at every unrolling time. Extensive experiments on synthetic datasets and real-world images demonstrate that our method achieves state-of-the-art performance compared with existing methods. The source code can be found at https://github.com/Arcananana/DSSR.
Current learning-based single image super-resolution (SISR) algorithms underperform on real data due to the deviation of the assumed degradation process from that of real imaging systems. Conventional degradation processes consider applying blur, noise, and downsampling (typically bicubic downsampling) to high-resolution (HR) images to synthesize low-resolution (LR) counterparts. However, few works on degradation modeling have taken the physical aspects of the optical imaging system into account. In this paper, we analyze the imaging system optically and explore the characteristics of real LR-HR pairs in the spatial frequency domain. Considering the optics and sensor degradation, we formulate a realistic physics-inspired degradation model: the physical degradation of an imaging system is modeled as a low-pass filter whose cutoff frequency is determined by the object distance, the focal length of the lens, and the pixel size of the image sensor. In particular, we propose to use a convolutional neural network (CNN) to learn the cutoff frequency of the real-world degradation process. The learned network is then applied to synthesize LR images from unpaired HR images, and the synthesized HR-LR image pairs are later used to train an SISR network. We evaluate the effectiveness and generalization capability of the proposed degradation model on real-world images captured by different imaging systems. Experimental results show that the SISR network trained with our synthetic data performs favorably against networks trained with traditional degradation models. Moreover, our results are comparable to those obtained by the same network trained with real-world LR-HR pairs, which are challenging to obtain in real scenes.
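The physical degradation described above, a low-pass filter whose cutoff is set by object distance, focal length, and sensor pixel size, can be sketched as an ideal low-pass in the frequency domain. In this sketch the cutoff is hand-set, whereas the paper learns it with a CNN:

```python
import numpy as np

def optical_lowpass(img, cutoff):
    """Degradation as an ideal low-pass filter in the frequency
    domain; `cutoff` is in cycles/pixel (0..~0.707) and plays the
    role of the optics/sensor-determined cutoff frequency."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff  # pass-band mask
    return np.real(np.fft.ifft2(F * mask))
```

Downsampling the filtered image would then complete a synthetic LR sample for training.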
The latest multi-view multimedia applications struggle between high-resolution (HR) visual experience and storage or bandwidth constraints. Therefore, this paper proposes a multi-view image super-resolution (MVISR) task, which aims to increase the resolution of multi-view images captured from the same scene. One solution is to apply image or video super-resolution (SR) methods to reconstruct HR results from the low-resolution (LR) input view. However, these methods cannot handle large-angle transformations between views or leverage the information in all the multi-view images. To address these problems, we propose MVSRnet, which uses geometry information to extract sharp details from all the LR multi-views to support the SR of the LR input view. Specifically, the proposed geometry-aware reference synthesis module in MVSRnet uses geometry information and all the multi-view LR images to synthesize pixel-aligned HR reference images. Then, the proposed dynamic high-frequency search network fully exploits the high-frequency textural details in the reference images for SR. Extensive experiments on several benchmarks show that our method significantly improves over state-of-the-art approaches.