High-resolution satellite imagery has proven useful for a broad range of tasks, including measurement of global population, local economic livelihoods, and biodiversity, among many others. Unfortunately, high-resolution imagery is both infrequently collected and expensive to purchase, making it hard to efficiently and effectively scale these downstream tasks over both time and space. We propose a new conditional pixel synthesis model that uses abundant, low-cost, low-resolution imagery to generate accurate high-resolution imagery at locations and times for which it is unavailable. We show that our model attains photo-realistic sample quality and outperforms competing baselines on a key downstream task - object counting - particularly in geographic locations where conditions on the ground are changing rapidly.
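To make the setup concrete, here is a minimal sketch, assuming a per-pixel conditional decoder; the class and layer choices are our own illustration, not the paper's published architecture. Each high-resolution pixel is predicted from bilinearly sampled low-resolution context plus a learned location/time embedding.

```python
# Hypothetical sketch of conditional pixel synthesis (names are ours, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalPixelSynthesizer(nn.Module):
    """Predicts each HR pixel from LR context plus a location/time embedding."""
    def __init__(self, lr_channels=3, cond_dim=16, hidden=128):
        super().__init__()
        self.encoder = nn.Conv2d(lr_channels, hidden, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(
            nn.Linear(hidden + 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB for one HR pixel
        )

    def forward(self, lr, coords, cond):
        # lr: (B, C, h, w) low-res image; coords: (B, N, 2) HR pixel coords in [-1, 1]
        # cond: (B, cond_dim) embedding of acquisition location and time
        feats = self.encoder(lr)                                    # (B, hidden, h, w)
        sampled = F.grid_sample(feats, coords.unsqueeze(1),         # bilinear LR context
                                align_corners=False).squeeze(2)     # (B, hidden, N)
        sampled = sampled.permute(0, 2, 1)                          # (B, N, hidden)
        cond = cond.unsqueeze(1).expand(-1, coords.size(1), -1)     # broadcast over pixels
        return self.mlp(torch.cat([sampled, coords, cond], dim=-1)) # (B, N, 3)

model = ConditionalPixelSynthesizer()
rgb = model(torch.rand(2, 3, 32, 32), torch.rand(2, 1024, 2) * 2 - 1, torch.rand(2, 16))
print(rgb.shape)  # torch.Size([2, 1024, 3])
```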
High-resolution remote sensing imagery is used in a broad range of tasks, including detection and classification of objects. High-resolution imagery is expensive, however, while lower-resolution imagery is often freely available and can be used by the public for a range of social-good applications. To that end, we curate a multi-spectral multi-image super-resolution dataset using PlanetScope imagery from the SpaceNet 7 challenge as the high-resolution reference and multiple Sentinel-2 revisits of the same scenes as the low-resolution imagery. We present the first results of applying multi-image super-resolution (MISR) to multi-spectral remote sensing imagery. We also introduce a radiometric consistency module into the MISR model to preserve the high radiometric resolution of the Sentinel-2 sensor. We show that MISR outperforms single-image super-resolution and other baselines on a range of image fidelity metrics. Additionally, we conduct the first assessment of the utility of multi-image super-resolution on building delineation, showing that leveraging multiple images leads to better performance on these downstream tasks.
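A minimal MISR sketch under our own assumptions (the mean-fusion rule, layer sizes, and the radiometric penalty below are illustrative stand-ins for the paper's modules): encode each Sentinel-2 revisit with a shared CNN, fuse the encodings, upsample, and penalize drift in per-band radiometry against the LR input.

```python
# Assumed MISR design for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMISR(nn.Module):
    def __init__(self, bands=4, hidden=64, scale=3):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(bands, hidden, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU())
        self.decode = nn.Sequential(nn.Conv2d(hidden, bands * scale ** 2, 3, padding=1),
                                    nn.PixelShuffle(scale))

    def forward(self, revisits):
        # revisits: (B, K, bands, h, w) -> fuse the K encodings by their mean
        b, k, c, h, w = revisits.shape
        feats = self.encode(revisits.view(b * k, c, h, w)).view(b, k, -1, h, w)
        return self.decode(feats.mean(dim=1))  # (B, bands, h*scale, w*scale)

def radiometric_consistency(sr, lr_ref, scale=3):
    # Penalize drift of per-band radiometry between the SR output and an LR input.
    down = F.avg_pool2d(sr, scale)
    return F.l1_loss(down.mean(dim=(2, 3)), lr_ref.mean(dim=(2, 3)))

lr = torch.rand(2, 8, 4, 32, 32)              # 8 revisits of a 4-band tile
sr = SimpleMISR()(lr)
loss = F.l1_loss(sr, torch.rand_like(sr)) + 0.1 * radiometric_consistency(sr, lr[:, 0])
```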
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis from mostly low-resolution (LR) inputs. Our method is built upon Neural Radiance Fields (NeRF), which predicts per-point density and color with a multi-layer perceptron. While NeRF can produce images at arbitrary scales, it struggles at resolutions beyond those of the observed images. Our key insight is that NeRF carries a local prior: the prediction for a 3D point can be propagated to nearby regions and remain accurate. We first exploit this with a supersampling strategy that shoots multiple rays at each image pixel, enforcing a multi-view constraint at the sub-pixel level. We then show that NeRF-SR can further boost the performance of supersampling with a refinement network that leverages the estimated depth to hallucinate details from related patches on an HR reference image. Experiments demonstrate that NeRF-SR generates high-quality results for novel view synthesis at HR on both synthetic and real-world datasets.
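The supersampling strategy can be illustrated as follows; this is our own minimal version using a standard NeRF-style pinhole convention (camera looking down -z), not code from the paper: each pixel shoots an s x s grid of sub-pixel rays whose rendered colors average to the observed LR pixel.

```python
# Sketch of the supersampling idea: many sub-pixel rays per LR pixel.
import torch

def subpixel_rays(H, W, focal, s=2):
    """Return ray directions of shape (H, W, s*s, 3) for a pinhole camera."""
    # Sub-pixel offsets covering each pixel footprint uniformly.
    off = (torch.arange(s) + 0.5) / s                      # e.g. [0.25, 0.75] for s=2
    oy, ox = torch.meshgrid(off, off, indexing="ij")
    i, j = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    x = (j[..., None] + ox.flatten() - W / 2) / focal      # (H, W, s*s)
    y = -(i[..., None] + oy.flatten() - H / 2) / focal
    return torch.stack([x, y, -torch.ones_like(x)], dim=-1)

dirs = subpixel_rays(100, 100, focal=120.0, s=2)
print(dirs.shape)                 # torch.Size([100, 100, 4, 3])
# A rendered LR pixel is then the mean over its 4 sub-pixel ray colors,
# which is what ties the LR observation to sub-pixel multi-view constraints.
```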
Despite the plethora of successful super-resolution reconstruction (SRR) models applied to natural images, their application to remote sensing imagery tends to produce poor results. Remote sensing imagery is often more complex than natural imagery: it has the particularity of lower resolution, it contains noise, and it often depicts large textured surfaces. As a result, applying non-specialized SRR models to remote sensing imagery yields artifacts and poor reconstructions. To address these problems, this paper proposes an architecture inspired by previous research, introducing a novel approach for forcing an SRR model to output realistic remote sensing images: instead of relying on feature-space similarities as the perceptual loss, it considers pixel-level information inferred from the normalized digital surface model (nDSM) of the image. This strategy allows better-informed updates during training, sourced from a task (elevation map inference) that is closely related to remote sensing. The nDSM auxiliary information is not required at inference time, however, so the model super-resolves images without any additional data besides the low-resolution input. We evaluate our model on two remotely sensed datasets of different spatial resolutions that also contain DSM pairs of the images: the DFC2018 dataset and a dataset containing the national LiDAR fly-over of Luxembourg. Based on visual inspection, the inferred super-resolved images exhibit particularly superior quality. In particular, the results for the high-resolution DFC2018 dataset are realistic and almost indistinguishable from the ground truth images.
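A hedged sketch of the described training signal (the function and module names are ours; `elevation_net` stands in for whatever pre-trained nDSM regressor the authors use): a frozen elevation-inference network maps the super-resolved output to an nDSM estimate, which is compared against the reference nDSM alongside the usual pixel loss.

```python
# Illustrative training loss only; weights and architectures are assumptions.
import torch
import torch.nn.functional as F

def srr_training_loss(sr_model, elevation_net, lr_img, hr_img, ndsm, w_elev=0.1):
    # elevation_net is assumed pre-trained; freeze it so only sr_model is updated.
    for p in elevation_net.parameters():
        p.requires_grad_(False)
    sr = sr_model(lr_img)
    pixel_loss = F.l1_loss(sr, hr_img)
    elev_loss = F.l1_loss(elevation_net(sr), ndsm)  # pixel-level nDSM supervision
    return pixel_loss + w_elev * elev_loss

# Tiny stand-ins to show the call; real models would replace both networks.
sr_model = torch.nn.Sequential(torch.nn.Upsample(scale_factor=2),
                               torch.nn.Conv2d(3, 3, 3, padding=1))
elevation_net = torch.nn.Conv2d(3, 1, 3, padding=1)   # stand-in nDSM regressor
loss = srr_training_loss(sr_model, elevation_net, torch.rand(1, 3, 32, 32),
                         torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```

Note that the elevation branch appears only in the loss, which matches the claim that no nDSM is needed at inference time.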
Generative models operate at fixed resolution, even though natural images come in a variety of sizes. As high-resolution details are downsampled away and low-resolution images are discarded altogether, precious supervision is lost. We argue that every pixel matters and create datasets of variable-size images collected at their native resolutions. To take advantage of varied-size data, we introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions. First, conditioning the generator on a target scale allows us to generate higher-resolution images than previously possible, without adding layers to the model. Second, by conditioning on continuous coordinates, we can sample patches that still obey a consistent global layout, which also allows for scalable training at higher resolutions. Controlled FFHQ experiments show that our method takes better advantage of multi-resolution training data than discrete multi-scale approaches, achieving better FID scores and cleaner high-frequency details. We also train on other natural image domains including churches, mountains, and birds, and demonstrate arbitrary-scale synthesis with coherent global layouts and realistic local details, going beyond 2K resolution in our experiments. Our project page is available at https://chail.github.io/anyres-gan/.
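The data side of continuous-scale training might look like the following sketch (the scale range, patch size, and coordinate convention are our assumptions): draw a random target scale, resample the native-resolution photo to it, and crop a fixed-size patch whose normalized global coordinates condition the generator.

```python
# Sketch of continuous-scale patch sampling; details are illustrative.
import random
import torch
import torch.nn.functional as F

def sample_scaled_patch(img, patch=64, min_scale=0.25, max_scale=1.0):
    # img: (1, C, H, W) tensor at its native resolution
    s = random.uniform(min_scale, max_scale)
    scaled = F.interpolate(img, scale_factor=s, mode="bilinear", align_corners=False)
    _, _, H, W = scaled.shape
    top = random.randint(0, H - patch)
    left = random.randint(0, W - patch)
    crop = scaled[:, :, top:top + patch, left:left + patch]
    # Normalized global coordinates of the patch: conditioning on these is what
    # lets patches generated at every scale agree on one consistent layout.
    coords = torch.tensor([top / H, left / W, (top + patch) / H, (left + patch) / W])
    return crop, s, coords

crop, s, coords = sample_scaled_patch(torch.rand(1, 3, 512, 512))
```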
Image super-resolution (SR) is one of the vital image processing techniques for improving the resolution of images in the field of computer vision. In the last two decades, significant progress has been made in super-resolution, especially through the use of deep learning methods. This survey provides a detailed account of recent progress in single-image super-resolution from a deep learning perspective, while also covering the initial classical methods used for image super-resolution. The survey classifies image SR methods into four categories, i.e., classical methods, supervised learning-based methods, unsupervised learning-based methods, and domain-specific SR methods. We also introduce the SR problem to provide intuition about image quality metrics, available reference datasets, and SR challenges. The deep learning-based approaches are evaluated using a reference dataset. Some of the reviewed state-of-the-art image SR methods include the enhanced deep SR network (EDSR), cycle-in-cycle GAN (CinCGAN), multiscale residual network (MSRN), meta residual dense network (Meta-RDN), recurrent back-projection network (RBPN), second-order attention network (SAN), SR feedback network (SRFBN), and the wavelet-based residual attention network (WRAN). Finally, the survey concludes with future directions and trends in SR, and open problems for researchers to address.
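For intuition on the fidelity metrics such surveys rely on, here is a minimal PSNR helper for images in [0, 1], following the standard definition PSNR = 10 log10(MAX^2 / MSE):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

hr = np.random.rand(64, 64, 3)
sr = np.clip(hr + np.random.normal(0, 0.05, hr.shape), 0, 1)
print(f"PSNR: {psnr(hr, sr):.2f} dB")
```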
Although deep learning has enabled a huge leap forward in image inpainting, current methods are often unable to synthesize realistic high-frequency details. In this paper, we propose applying super-resolution to the coarsely reconstructed output, refining it at high resolution, and then downscaling the output to the original resolution. By introducing high-resolution images to the refinement network, our framework is able to reconstruct finer details that are usually smoothed out due to spectral bias - the tendency of neural networks to reconstruct low frequencies better than high frequencies. To assist training the refinement network on large upscaled holes, we propose a progressive learning technique in which the size of the missing regions increases as training progresses. Our zoom-in, refine, and zoom-out strategy, combined with high-resolution supervision and progressive learning, constitutes a framework-agnostic approach for enhancing high-frequency details that can be applied to any CNN-based inpainting method. We provide qualitative and quantitative evaluations along with an ablation analysis to show the effectiveness of our approach. This seemingly simple yet powerful approach outperforms state-of-the-art inpainting methods. Our code is available at https://github.com/google/zoom-to-inpaint
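The zoom-in, refine, zoom-out strategy reduces to a short pipeline; in this sketch `coarse_net` and `refine_net` are placeholders for any CNN inpainter and the refinement model, and the final compositing step is our own simplification.

```python
# Schematic pipeline only; both networks are assumed to be provided.
import torch
import torch.nn.functional as F

def zoom_to_inpaint(image, mask, coarse_net, refine_net, zoom=2):
    # image: (B, 3, H, W) with holes where mask == 1; mask: (B, 1, H, W)
    coarse = coarse_net(image, mask)                        # low-res coarse fill
    up = F.interpolate(coarse, scale_factor=zoom,
                       mode="bilinear", align_corners=False)
    up_mask = F.interpolate(mask, scale_factor=zoom, mode="nearest")
    refined = refine_net(up, up_mask)                       # add HF detail at high res
    out = F.interpolate(refined, scale_factor=1 / zoom,
                        mode="bilinear", align_corners=False)  # back to original size
    return image * (1 - mask) + out * mask                  # keep known pixels intact
```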
Face super-resolution (FSR), also known as face hallucination, which aims at enhancing the resolution of low-resolution (LR) face images to generate high-resolution (HR) face images, is a domain-specific image super-resolution problem. Recently, FSR has received considerable attention and witnessed dazzling advances with the development of deep learning techniques. To date, few summaries of the studies on deep learning-based FSR are available. In this survey, we present a comprehensive review of deep learning-based FSR methods in a systematic manner. First, we summarize the problem formulation of FSR and introduce popular assessment metrics and loss functions. Second, we elaborate on the facial characteristics and popular datasets used in FSR. Third, we roughly categorize existing methods according to their utilization of facial characteristics. In each category, we start with a general description of the design principles, present an overview of representative approaches, and then discuss their pros and cons. Fourth, we evaluate the performance of some state-of-the-art methods. Fifth, joint FSR and other tasks, as well as FSR-related applications, are roughly introduced. Finally, we envision the prospects of further technological advancement in this field. A curated list of papers and resources is available at \url{https://github.com/junjun-jiang/face-hallucination-benchmark}
With the advent of deep learning (DL), super-resolution (SR) has become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent advances and examine state-of-the-art models such as diffusion-based (DDPM) and transformer-based SR models. We present a critical discussion of contemporary strategies used in SR and identify promising yet unexplored research directions. We complement previous surveys by incorporating the latest developments in the field, such as uncertainty-driven losses, wavelet networks, neural architecture search, novel normalization methods, and the latest evaluation techniques. We also include several visualizations of the models and methods throughout each chapter to facilitate a global understanding of the trends in the field. This review ultimately aims at helping researchers push the boundaries of DL applied to SR.
Neural volumetric representations have become a widely adopted model for radiance fields in 3D scenes. These representations are fully implicit or hybrid function approximators of the instantaneous volumetric radiance in a scene, which are typically learned from multi-view captures of the scene. We investigate the new task of neural volume super-resolution - rendering high-resolution views corresponding to a scene captured at low resolution. To this end, we propose a neural super-resolution network that operates directly on the volumetric representation of the scene. This approach allows us to exploit an advantage of operating in the volumetric domain, namely the ability to guarantee consistent super-resolution across different viewing directions. To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes. This allows us to super-resolve the 3D scene representation by applying 2D convolutional networks on the 2D feature planes. We validate the proposed method's capability of super-resolving multi-view consistent views both quantitatively and qualitatively on a diverse set of unseen 3D scenes, demonstrating a significant advantage over existing approaches.
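The appeal of the plane-based representation is that super-resolving the whole volume reduces to 2D convolution. Below is a simplified sketch; the plane factorization and the SR head are our illustration, not the paper's exact network.

```python
# Toy plane-based volume SR: upsample each 2D feature plane with one 2D CNN.
import torch
import torch.nn as nn

class PlaneSR(nn.Module):
    def __init__(self, feat_dim=32, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, planes):
        # planes: list of (1, feat_dim, R, R) tensors, e.g. the xy, xz, yz planes.
        # Super-resolving each plane upsamples the factored volume, and because
        # every view queries the same planes, the result is view-consistent.
        return [self.net(p) for p in planes]

planes = [torch.rand(1, 32, 128, 128) for _ in range(3)]
hi_res_planes = PlaneSR()(planes)          # each plane is now (1, 32, 256, 256)
```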
Learning continuous image representations is recently gaining popularity for image super-resolution (SR) because of its ability to reconstruct high-resolution images with arbitrary scales from low-resolution inputs. Existing methods mostly ensemble nearby features to predict the new pixel at any queried coordinate in the SR image. Such a local ensemble suffers from some limitations: i) it has no learnable parameters and it neglects the similarity of the visual features; ii) it has a limited receptive field and cannot ensemble relevant features in a large field which are important in an image; iii) it inherently has a gap with real camera imaging since it only depends on the coordinate. To address these issues, this paper proposes a continuous implicit attention-in-attention network, called CiaoSR. We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features. Furthermore, we embed a scale-aware attention in this implicit attention network to exploit additional non-local information. Extensive experiments on benchmark datasets demonstrate CiaoSR significantly outperforms the existing single image super resolution (SISR) methods with the same backbone. In addition, the proposed method also achieves the state-of-the-art performance on the arbitrary-scale SR task. The effectiveness of the method is also demonstrated on the real-world SR setting. More importantly, CiaoSR can be flexibly integrated into any backbone to improve the SR performance.
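A toy version of the attention-based local ensemble follows (not CiaoSR's full attention-in-attention design, and without its scale-aware branch): the weights over nearby features are predicted by attention rather than fixed by coordinate distance, making the ensemble learnable and content-aware.

```python
# Minimal learnable local ensemble via attention; shapes are illustrative.
import torch
import torch.nn as nn

class LocalAttentionEnsemble(nn.Module):
    def __init__(self, feat_dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        self.to_rgb = nn.Linear(feat_dim, 3)

    def forward(self, query_feat, neighbor_feats):
        # query_feat: (B, N, D) feature at each queried HR coordinate
        # neighbor_feats: (B, N, K, D) the K nearby LR features per query
        b, n, k, d = neighbor_feats.shape
        q = query_feat.reshape(b * n, 1, d)
        kv = neighbor_feats.reshape(b * n, k, d)
        fused, _ = self.attn(q, kv, kv)          # learned ensemble weights
        return self.to_rgb(fused.reshape(b, n, d))

rgb = LocalAttentionEnsemble()(torch.rand(2, 100, 64), torch.rand(2, 100, 4, 64))
```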
The latest multi-view multimedia applications struggle between high-resolution (HR) visual experience and storage or bandwidth constraints. Therefore, this paper proposes a multi-view image super-resolution (MVISR) task, which aims to increase the resolution of multi-view images captured from the same scene. One solution is to apply image or video super-resolution (SR) methods to reconstruct HR results from the low-resolution (LR) input view. However, these methods cannot handle large angular transformations between views or exploit the information in all multi-view images. To address these problems, we propose MVSRnet, which uses geometry information to extract sharp details from all LR multi-view images to support the SR of the LR input view. Specifically, the proposed Geometry-Aware Reference Synthesis module in MVSRnet uses geometry information and all multi-view LR images to synthesize pixel-aligned HR reference images. Then, the proposed Dynamic High-Frequency Search network fully exploits the high-frequency texture details in the reference images for SR. Extensive experiments on several benchmarks show that our method significantly improves over state-of-the-art approaches.
The recent success of NeRF and other related implicit neural representation methods has opened a new path for continuous image representation, where pixel values no longer need to be looked up from stored discrete 2D arrays but can be inferred from a neural network model over a continuous spatial domain. Although the recent work LIIF has demonstrated that such an approach can achieve good performance on arbitrary-scale super-resolution tasks, its upscaled images frequently show structural distortion due to inaccurate prediction of high-frequency textures. In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions, in which we deeply integrate spatial coordinates and periodic encoding with the implicit neural representation. Through extensive experiments and ablation studies, we show that spatial encoding is a missing key toward the next stage of high-performing implicit image functions. Our UltraSR sets new state-of-the-art performance on the DIV2K benchmark under all super-resolution scales compared with previous state-of-the-art methods. UltraSR also achieves superior performance on other standard benchmark datasets, outperforming prior works in almost all experiments.
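A sketch of periodic spatial encoding for an implicit image function; UltraSR's exact embedding may differ, so this shows the standard sinusoidal form with octave-spaced frequencies.

```python
# Standard sinusoidal coordinate encoding, concatenated with the raw coordinates.
import torch

def periodic_encode(coords, num_freqs=8):
    # coords: (N, 2) in [-1, 1]; returns (N, 2 + 4 * num_freqs)
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi      # octave-spaced frequencies
    angles = coords[..., None] * freqs                     # (N, 2, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return torch.cat([coords, enc.flatten(start_dim=-2)], dim=-1)

xy = torch.rand(1024, 2) * 2 - 1
print(periodic_encode(xy).shape)   # torch.Size([1024, 34])
```

Feeding these encoded coordinates (rather than the raw ones) to the implicit function is what lets an MLP represent high-frequency textures, countering its bias toward low frequencies.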
Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR), which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.
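The texture-matching idea is commonly realized as a Gram-matrix loss over deep features; this method combines such a term with adversarial and perceptual losses, and `feature_extractor` below stands in for a pre-trained VGG slice.

```python
# Gram-matrix texture loss sketch; the feature extractor is assumed given.
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # feats: (B, C, H, W) -> channel co-occurrence statistics (B, C, C)
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(feature_extractor, sr, hr):
    # Match texture statistics instead of exact pixels, so the network is free
    # to synthesize plausible high-frequency detail rather than blur it away.
    return F.mse_loss(gram_matrix(feature_extractor(sr)),
                      gram_matrix(feature_extractor(hr)))
```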
The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require training on databases of LR-HR image pairs for supervised learning). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee that our outputs are realistic. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show extensive experimental results demonstrating the efficacy of our approach in the domain of face super-resolution (also known as face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
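Stripped to essentials, the downscaling loss is an optimization over a latent code z such that the generator's output downscales to the LR input. In this sketch the generator, step counts, and the average-pool downscaling operator are placeholders; the spherical latent constraint PULSE derives from high-dimensional Gaussians is approximated here by renormalizing z.

```python
# Schematic latent-space search; generator is any pre-trained image GAN.
import torch
import torch.nn.functional as F

def pulse_search(generator, lr_img, latent_dim=512, steps=200, lr=0.1, scale=8):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sr = generator(z)                                  # (1, 3, H, W), realistic
        down = F.avg_pool2d(sr, scale)                     # simple downscaling operator
        loss = F.mse_loss(down, lr_img)                    # the "downscaling loss"
        loss.backward()
        opt.step()
        with torch.no_grad():                              # stay on the Gaussian shell
            z *= (latent_dim ** 0.5) / z.norm()
    return generator(z).detach()
```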
In this paper, we present a novel and effective framework, named 4K-NeRF, to pursue high fidelity view synthesis on the challenging scenarios of ultra high resolutions, building on the methodology of neural radiance fields (NeRF). The rendering procedure of NeRF-based methods typically relies on a pixel-wise manner in which rays (or pixels) are treated independently in both the training and inference phases, limiting its representational ability to describe subtle details, especially when lifting to an extremely high resolution. We address the issue by better exploring ray correlation for enhancing high-frequency details, benefiting from the use of geometry-aware local context. Particularly, we use the view-consistent encoder to model geometric information effectively in a lower resolution space and recover fine details through the view-consistent decoder, conditioned on ray features and depths estimated by the encoder. Joint training with patch-based sampling further facilitates our method incorporating supervision from perception-oriented regularization beyond the pixel-wise loss. Quantitative and qualitative comparisons with modern NeRF methods demonstrate that our method can significantly boost rendering quality while retaining high-frequency details, achieving state-of-the-art visual quality on the 4K ultra-high-resolution scenario. Code available at \url{https://github.com/frozoul/4K-NeRF}
High-resolution retinal optical coherence tomography angiography (OCTA) is important for the quantification and analysis of the retinal vasculature. However, the resolution of OCTA images is inversely proportional to the field of view at the same sampling frequency, which is not conducive to clinicians analyzing larger vascular areas. In this paper, we propose a novel sparse-based domain adaptation super-resolution network (SASR) to reconstruct realistic 6x6 mm2 low-resolution (LR) OCTA images into high-resolution (HR) representations. To be more specific, we first perform a simple degradation of the 3x3 mm2 high-resolution (HR) image to obtain a synthetic LR image. An efficient registration method is then employed to register the synthetic LR with its corresponding 3x3 mm2 image region within the 6x6 mm2 image to obtain the cropped realistic LR image. We then propose a multi-level super-resolution model for the fully supervised reconstruction of the synthetic data, guiding the reconstruction of the realistic LR images through a generative-adversarial strategy that allows the synthetic and realistic LR images to be unified in the feature domain. Finally, a novel sparse edge-aware loss is designed to dynamically optimize the vessel edge structure. Extensive experiments on two OCTA datasets demonstrate that our method performs better than state-of-the-art super-resolution reconstruction methods. In addition, we have investigated the performance of the reconstruction results on retinal structure segmentation, which further validates the effectiveness of our approach.
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, including a high-resolution camera surrounded by multiple low-resolution cameras. The performance of existing methods is still limited, as they produce either blurry results on plain textured areas or distortions around depth discontinuous boundaries. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation, while the other module warps another intermediate estimation, which maintains the high-frequency textures, by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations adaptively via the learned attention maps, leading to the final high-resolution LF image with satisfactory results on both plain textured areas and depth discontinuous boundaries. Besides, to promote the effectiveness of our method trained with simulated hybrid data on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy. Extensive experiments on both real and simulated hybrid data demonstrate the significant superiority of our approach over state-of-the-art ones. To the best of our knowledge, this is the first end-to-end deep learning method for LF reconstruction from a real hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
Recently, deep models have established SOTA performance for low-resolution image inpainting, but they lack fidelity at resolutions associated with modern cameras such as 4K or more, and for large holes. We contribute an inpainting benchmark dataset of photos at 4K and above, representative of modern sensors. We demonstrate a novel framework that combines deep learning and traditional methods. We use an existing deep inpainting model, LaMa, to fill the hole plausibly; establish three guide images consisting of structure, segmentation, and depth; and apply a multiply-guided PatchMatch to produce eight candidate upsampled inpainted images. Next, we feed all candidate inpaintings through a novel curation module that chooses a good inpainting by column summation on an 8x8 antisymmetric pairwise preference matrix. Our framework's results are overwhelmingly preferred by users over 8 strong baselines, with improvements of quantitative metrics up to 7.4 over the best baseline LaMa, and our technique, when paired with 4 different SOTA inpainting backbones, improves each such that ours is overwhelmingly preferred by users over a strong super-resolution baseline.
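The curation step can be read as a small voting scheme over the eight candidates. The sketch below encodes our reading of it: random scores stand in for the learned pairwise comparator, and the sign convention of the matrix is our assumption.

```python
# Toy pairwise-preference curation over 8 candidate inpaintings.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(size=(8, 8))
P = raw - raw.T                 # antisymmetric: P[i, j] = -P[j, i], "i beats j"
col_sums = P.sum(axis=0)        # total preference of all rivals over each candidate j
best = int(np.argmin(col_sums)) # the least-beaten candidate wins
print(f"selected candidate: {best}")
```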
We show that pre-trained generative adversarial networks (GANs) such as StyleGAN and BigGAN can be used as a latent bank to improve the performance of image super-resolution. While most existing perceptual-oriented approaches attempt to generate realistic outputs through learning with adversarial loss, our method, Generative LatEnt bANk (GLEAN), goes beyond existing practice by directly leveraging the rich and diverse priors encapsulated in a pre-trained GAN. But unlike prevalent GAN inversion methods that require expensive image-specific optimization at runtime, our approach only needs a single forward pass for restoration. GLEAN can be easily incorporated into a simple encoder-bank-decoder architecture with multi-resolution skip connections. Employing priors from different generative models allows GLEAN to be applied to diverse categories (e.g., human faces, cats, buildings, and cars). We further present a lightweight version of GLEAN, named LightGLEAN, which retains only the critical components of GLEAN. Notably, LightGLEAN consists of only 21% of the parameters and 35% of the FLOPs while achieving comparable image quality. We extend our method to different tasks, including image colorization and blind image restoration, and extensive experiments show that our proposed models perform favorably in comparison to existing methods. Codes and models are available at https://github.com/open-mmlab/mmediting
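A skeleton of the encoder-bank-decoder layout (a simplification; GLEAN's actual modules ship with MMEditing): the encoder produces latent codes plus multi-resolution skip features, the frozen pre-trained GAN acts as the latent bank, and the decoder fuses both in a single forward pass, with no per-image optimization.

```python
# Structural skeleton only; the three submodules are assumed to be provided.
import torch
import torch.nn as nn

class EncoderBankDecoder(nn.Module):
    def __init__(self, encoder, gan_bank, decoder):
        super().__init__()
        self.encoder, self.bank, self.decoder = encoder, gan_bank, decoder
        for p in self.bank.parameters():          # the latent bank stays frozen
            p.requires_grad_(False)

    def forward(self, lr):
        latent, skips = self.encoder(lr)          # latent codes + multi-res features
        bank_feats = self.bank(latent)            # rich priors from the pre-trained GAN
        return self.decoder(bank_feats, skips)    # one forward pass, no GAN inversion
```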