We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis from mostly low-resolution (LR) inputs. Our method builds on Neural Radiance Fields (NeRF), which predicts per-point density and color with a multilayer perceptron. Although NeRF can produce images at arbitrary scales, it struggles at resolutions beyond those of the observed images. Our key insight is that NeRF has a local prior, meaning that predictions made at a 3D point can be propagated to nearby regions and remain accurate. We first exploit this with a supersampling strategy that shoots multiple rays at each image pixel, enforcing multi-view constraints at the sub-pixel level. We then show that NeRF-SR can further boost the performance of supersampling with a refinement network that leverages the estimated depth to hallucinate details from related patches on an HR reference image. Experimental results demonstrate that NeRF-SR generates high-quality results for novel view synthesis at HR on both synthetic and real-world datasets.
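A minimal sketch of the supersampling strategy just described: shoot an s x s grid of sub-pixel rays for each LR pixel, render each with NeRF, and average them before comparing against the observed LR color. `get_ray` and `render_ray` are hypothetical placeholders for a camera model and a NeRF renderer, not the paper's actual API.

```python
import torch

def supersampled_pixel(get_ray, render_ray, i, j, s=2):
    """Average s*s sub-pixel ray colors for LR pixel (i, j)."""
    colors = []
    for di in range(s):
        for dj in range(s):
            u = i + (di + 0.5) / s   # sub-pixel sample inside the pixel footprint
            v = j + (dj + 0.5) / s
            ray_o, ray_d = get_ray(u, v)
            colors.append(render_ray(ray_o, ray_d))
    # Supervising this average with the LR pixel enforces the sub-pixel
    # multi-view constraint the abstract refers to.
    return torch.stack(colors).mean(dim=0)
```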
In this paper, we present a novel and effective framework, named 4K-NeRF, to pursue high-fidelity view synthesis in the challenging scenario of ultra-high resolutions, building on the methodology of neural radiance fields (NeRF). The rendering procedure of NeRF-based methods typically relies on a pixel-wise manner in which rays (or pixels) are treated independently in both the training and inference phases, limiting its representational ability to describe subtle details, especially when lifting to an extremely high resolution. We address the issue by better exploring ray correlation to enhance high-frequency details, benefiting from the use of geometry-aware local context. In particular, we use a view-consistent encoder to model geometric information effectively in a lower-resolution space and recover fine details through a view-consistent decoder, conditioned on ray features and depths estimated by the encoder. Joint training with patch-based sampling further facilitates our method in incorporating supervision from perception-oriented regularization beyond a pixel-wise loss. Quantitative and qualitative comparisons with modern NeRF methods demonstrate that our method can significantly boost rendering quality and retain high-frequency details, achieving state-of-the-art visual quality in the 4K ultra-high-resolution scenario. Code available at \url{https://github.com/frozoul/4K-NeRF}
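A minimal sketch of patch-based supervision that goes beyond a pixel-wise loss, as described above: combine MSE on sampled patches with a perceptual term, here the off-the-shelf LPIPS metric. The weighting `lam` and the choice of VGG features are assumptions, not the paper's exact recipe.

```python
import torch
import lpips

lpips_fn = lpips.LPIPS(net='vgg')  # perceptual distance in VGG feature space

def patch_loss(pred_patch, gt_patch, lam=0.1):
    # pred_patch / gt_patch: (B, 3, H, W) patches scaled to [-1, 1], as LPIPS expects.
    mse = ((pred_patch - gt_patch) ** 2).mean()
    return mse + lam * lpips_fn(pred_patch, gt_patch).mean()
```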
We present HRF-Net, a novel view synthesis method based on holistic radiance fields that renders novel views from a set of sparse inputs. Recent generalizing view synthesis methods also leverage radiance fields, but their rendering speed is not real-time. There are existing methods that can train and render novel views efficiently, but they cannot generalize to unseen scenes. Our method addresses the problem of real-time rendering for generalizing view synthesis and consists of two main stages: a holistic radiance field predictor and a convolution-based neural renderer. This architecture not only infers consistent scene geometry from the implicit neural field but also renders new views efficiently on a single GPU. We first train HRF-Net on multiple 3D scenes of the DTU dataset, and the network can produce plausible novel views of unseen real and synthetic data using only photometric losses. Moreover, our method can leverage a dense set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while still maintaining the high-speed rendering of the pre-trained model. Experimental results show that HRF-Net outperforms state-of-the-art neural rendering methods on various synthetic and real datasets.
Neural volumetric representations have become a widely adopted model for radiance fields in 3D scenes. These representations are fully implicit or hybrid function approximators of the instantaneous volumetric radiance in a scene, which are typically learned from multi-view captures of the scene. We investigate the new task of neural volume super-resolution - rendering high-resolution views corresponding to a scene captured at low resolution. To this end, we propose a neural super-resolution network that operates directly on the volumetric representation of the scene. This approach allows us to exploit an advantage of operating in the volumetric domain, namely the ability to guarantee consistent super-resolution across different viewing directions. To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes. This allows us to super-resolve the 3D scene representation by applying 2D convolutional networks on the 2D feature planes. We validate the proposed method's capability of super-resolving multi-view consistent views both quantitatively and qualitatively on a diverse set of unseen 3D scenes, demonstrating a significant advantage over existing approaches.
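A minimal sketch of the core idea above: store the scene as axis-aligned 2D feature planes and super-resolve each plane with an ordinary 2D CNN, so that every view rendered from the upsampled representation stays consistent. The channel count, layer layout, and triplane convention are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PlaneSR(nn.Module):
    """Super-resolve the 2D feature planes of a triplane scene representation."""
    def __init__(self, channels=32, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode='bilinear'),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, planes):
        # planes: list of three (1, C, H, W) tensors (XY, XZ, YZ planes).
        # The same 2D network upsamples each plane; consistency across views
        # follows because all views are rendered from the same 3D representation.
        return [self.net(p) for p in planes]
```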
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses, including a high-resolution camera surrounded by multiple low-resolution cameras. The performance of existing methods is still limited, as they produce either blurry results on plain textured areas or distortions around depth discontinuous boundaries. To tackle this challenge, we propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input from two complementary and parallel perspectives. Specifically, one module regresses a spatially consistent intermediate estimation by learning a deep multidimensional and cross-domain feature representation, while the other module warps another intermediate estimation, which maintains the high-frequency textures, by propagating the information of the high-resolution view. We finally leverage the advantages of the two intermediate estimations adaptively via the learned attention maps, leading to the final high-resolution LF image with satisfactory results on both plain textured areas and depth discontinuous boundaries. Besides, to promote the effectiveness of our method trained with simulated hybrid data on real hybrid data captured by a hybrid LF imaging system, we carefully design the network architecture and the training strategy. Extensive experiments on both real and simulated hybrid data demonstrate the significant superiority of our approach over state-of-the-art ones. To the best of our knowledge, this is the first end-to-end deep learning method for LF reconstruction from a real hybrid input. We believe our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
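A minimal sketch of the adaptive fusion step described above: blend the regression-based and warping-based intermediate estimations with a learned per-pixel attention map. The attention network `attn_net` is a placeholder for whatever module produces the learned maps in the paper.

```python
import torch

def fuse(est_regress, est_warp, attn_net):
    # est_regress / est_warp: (B, C, H, W) intermediate HR estimations.
    attn = torch.sigmoid(attn_net(torch.cat([est_regress, est_warp], dim=1)))
    # Attention favors the warped estimate where high-frequency texture is
    # reliable and the regressed estimate elsewhere.
    return attn * est_warp + (1.0 - attn) * est_regress
```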
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multiview posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
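A minimal sketch of the classic differentiable volume rendering the abstract refers to: alpha-composite per-sample colors and densities along a ray.

```python
import torch

def composite(rgb, sigma, deltas):
    # rgb: (N, 3) sample colors; sigma: (N,) densities; deltas: (N,) segment lengths.
    alpha = 1.0 - torch.exp(-sigma * deltas)                      # per-segment opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)             # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])   # T_i = prod_{j<i}(1 - a_j)
    weights = alpha * trans                                       # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)                    # composited pixel color
```

Because every step is differentiable, gradients flow from the pixel loss back to the per-sample predictions, which is what allows training from posed images alone.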
Classical light field rendering for novel view synthesis can accurately reproduce view-dependent effects such as reflection, refraction, and translucency, but requires a dense view sampling of the scene. Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects. We introduce a model that combines the strengths and mitigates the limitations of these two directions. By operating on a four-dimensional representation of the light field, our model learns to represent view-dependent effects accurately. By enforcing geometric constraints during training and inference, the scene geometry is implicitly learned from a sparse set of views. Concretely, we introduce a two-stage transformer-based model that first aggregates features along epipolar lines, then aggregates features across reference views to produce the color of a target ray. Our model outperforms the state-of-the-art on multiple forward-facing and 360° datasets, with larger margins on scenes with severe view-dependent variations.
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. With HDR-NeRF, we are able to generate both novel HDR views and novel LDR views under different exposures. The key to our method is to model the physical imaging process, which dictates how the radiance of a scene point becomes a pixel value in an LDR image, with two implicit functions: a radiance field and a tone mapper. The radiance field encodes the scene radiance (whose values vary from 0 to +∞) and outputs the density and radiance of a ray given the corresponding ray origin and ray direction. The tone mapper models the mapping process by which a ray hitting the camera sensor becomes a pixel value; the color of a ray is predicted by feeding its radiance and the corresponding exposure time into the tone mapper. We use classic volume rendering techniques to project the output radiance, colors, and densities into HDR and LDR images, while using only the input LDR images as supervision. We collect a new forward-facing HDR dataset to evaluate the proposed method. Experimental results on synthetic and real-world scenes validate that our method can not only accurately control the exposure of synthesized views but also render views with a high dynamic range.
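A minimal sketch of the tone mapper described above, assuming (as the abstract states) that it maps ray radiance plus exposure time to an LDR color. Working in the log domain makes exposure an additive shift; the network width and per-channel treatment are assumptions.

```python
import torch
import torch.nn as nn

class ToneMapper(nn.Module):
    """Implicit camera response: (radiance, exposure time) -> LDR value."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())  # LDR values in [0, 1]

    def forward(self, radiance, exposure_t):
        # radiance: (N, 3) HDR ray radiance; exposure_t: tensor broadcastable
        # with radiance. ln(e * dt) = ln(e) + ln(dt), so exposure shifts the input.
        x = torch.log(radiance.clamp_min(1e-6)) + torch.log(exposure_t)
        return self.mlp(x.unsqueeze(-1)).squeeze(-1)  # applied per channel
```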
Neural Radiance Fields (NeRF) have recently gained considerable attention for 3D scene reconstruction and novel view synthesis due to their remarkable synthesis quality. However, image blur caused by defocus or motion, which often occurs when capturing scenes in the wild, significantly degrades the reconstruction quality. To address this problem, we propose Deblur-NeRF, the first method that can recover a sharp NeRF from blurry inputs. We adopt an analysis-by-synthesis approach that reconstructs blurry views by simulating the blurring process, thus making NeRF robust to blurry inputs. The core of this simulation is a novel Deformable Sparse Kernel (DSK) module that models spatially-varying blur kernels by deforming a canonical sparse kernel at each spatial location. The ray origin of each kernel point is jointly optimized, inspired by the physical blurring process. This module is parameterized as an MLP, enabling it to generalize to various blur types. Jointly optimizing the NeRF and the DSK module allows us to recover a sharp NeRF. We demonstrate that our method can be used on both camera motion blur and defocus blur: the two most common types of blur in real scenes. Evaluation results on synthetic and real-world data show that our method outperforms several baselines. The synthetic and real datasets, along with the source code, will be made publicly available to facilitate future research.
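A minimal sketch of the blur simulation described above: blend the colors of a few jittered "kernel" rays with learned weights to reproduce a blurry observation. Here `kernel_net` (predicting offsets and weights) and `render_ray` are placeholders standing in for the paper's DSK module and NeRF renderer; the real module is more elaborate.

```python
import torch

def blurry_pixel(render_ray, kernel_net, ray_o, ray_d, n_pts=5):
    offsets, w = kernel_net(ray_o, ray_d)          # (n_pts, 3) origins, (n_pts,) logits
    w = torch.softmax(w, dim=0)                    # convex combination weights
    colors = torch.stack([render_ray(ray_o + offsets[k], ray_d)
                          for k in range(n_pts)])  # sharp color per kernel ray
    return (w[:, None] * colors).sum(dim=0)        # composed blurry color
```

Supervising this composed color against the blurry input pixel forces the underlying NeRF itself to stay sharp.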
Modern multi-view multimedia applications struggle between high-resolution (HR) visual experience and storage or bandwidth constraints. Therefore, this paper proposes a Multi-View Image Super-Resolution (MVISR) task. It aims to increase the resolution of multi-view images captured from the same scene. One solution is to apply image or video super-resolution (SR) methods to reconstruct HR results from the low-resolution (LR) input views. However, these methods cannot handle large angular transformations between views and fail to exploit the information in all the multi-view images. To address these problems, we propose MVSRnet, which uses geometry information to extract sharp details from all the LR multi-view images to support the SR of the LR input view. Specifically, the proposed Geometry-Aware Reference Synthesis module in MVSRnet uses geometry information and all the multi-view LR images to synthesize pixel-aligned HR reference images. Then, the proposed Dynamic High-Frequency Search network fully exploits the high-frequency textural details in the SR reference images. Extensive experiments on several benchmarks show that our method significantly improves over the state-of-the-art.
Despite the rapid development of Neural Radiance Fields (NeRF), the necessity of dense view coverage largely prohibits their wider application. While several recent works have attempted to address this issue, they either operate with sparse views (still, a handful of them) or on simple objects/scenes. In this work, we consider a more ambitious task: training a neural radiance field by "looking only once", i.e., using only a single view, and on realistic, complex visual scenes. To achieve this goal, we present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations. Specifically, SinNeRF constructs a semi-supervised learning process in which we introduce and propagate geometry pseudo labels and semantic pseudo labels to guide the progressive training process. Extensive experiments are conducted on complex scene benchmarks, including the NeRF synthetic dataset, the Local Light Field Fusion dataset, and the DTU dataset. We show that even without pre-training on multi-view datasets, SinNeRF can yield photo-realistic novel view synthesis results. Under the single-image setting, SinNeRF significantly outperforms the current state-of-the-art NeRF baselines in all cases. Project page: https://vita-group.github.io/sinnerf/
Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with densely captured input images. However, the geometry of NeRF is extremely under-constrained given sparse views, leading to significant degradation of novel view synthesis quality. Inspired by self-supervised depth estimation methods, we propose StructNeRF, a solution for novel view synthesis of indoor scenes with sparse inputs. StructNeRF leverages the structural hints naturally embedded in multi-view inputs to handle the unconstrained geometry problem in NeRF. Specifically, it tackles textured and non-textured regions separately: a patch-based multi-view consistent photometric loss is proposed to constrain the geometry of textured regions, while non-textured regions are explicitly restricted to be 3D-consistent planes. Through the dense self-supervised depth constraints, our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data. Extensive experiments on several real-world datasets demonstrate that StructNeRF surpasses state-of-the-art methods for indoor scenes with sparse inputs, both quantitatively and qualitatively.
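A minimal, self-contained sketch of a patch-based multi-view photometric constraint in the spirit of the method above: use the depth rendered by NeRF to reproject a patch from a source view into a target view and penalize the color difference. Pinhole intrinsics `K` and the relative pose `(R, t)` are assumed known; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def photometric_patch_loss(depth, pix, K, R, t, img_tgt, patch_rgb):
    # depth: (N,) rendered depths; pix: (N, 2) source pixel coords (x, y);
    # img_tgt: (1, 3, H, W) target image; patch_rgb: (N, 3) source patch colors.
    homo = torch.cat([pix, torch.ones(pix.shape[0], 1)], dim=1)   # homogeneous pixels
    pts = depth[:, None] * (torch.inverse(K) @ homo.T).T          # lift to 3D (camera frame)
    pts = (R @ pts.T).T + t                                       # move into target camera
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]                               # perspective divide
    H, W = img_tgt.shape[-2:]
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,               # normalize for grid_sample
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
    warped = F.grid_sample(img_tgt, grid[None, :, None, :], align_corners=True)
    return (warped[0, :, :, 0].T - patch_rgb).abs().mean()        # L1 photometric error
```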
Neural rendering has received tremendous attention since the advent of Neural Radiance Fields (NeRF), and has pushed the state-of-the-art in novel view synthesis considerably. The recent focus has been on models that overfit to a single scene, and the few attempts at learning models that can synthesize novel views of unseen scenes mostly consist of combining deep convolutional features with a NeRF-like model. We propose a different paradigm, where no deep features and no NeRF-like volume rendering are needed. Our method is capable of predicting the color of a target ray in a novel scene directly from a collection of patches sampled from the scene. We first leverage epipolar geometry to extract patches along the epipolar lines of each reference view. Each patch is linearly projected into a 1D feature vector, and a sequence of transformers processes the collection. For positional encoding, we parameterize rays as in light field representations, with the crucial difference that coordinates are canonicalized with respect to the target ray, which makes our method independent of the reference frame and improves generalization. We show that our approach outperforms the state-of-the-art in novel view synthesis of unseen scenes even when trained with considerably less data than prior work.
Recent efforts in Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by utilizing implicit neural representations to represent 3D scenes. Due to the volumetric rendering process, the inference speed of NeRF is extremely slow, limiting the application scenarios of NeRF on resource-constrained hardware, such as mobile devices. Many works have been conducted to reduce the latency of running NeRF models. However, most of them still require a high-end GPU for acceleration or extra storage memory, neither of which is available on mobile devices. Another emerging direction utilizes the neural light field (NeLF) for speedup, as only one forward pass is performed on a ray to predict the pixel color. Nevertheless, to reach a rendering quality similar to NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real-time on mobile devices for neural rendering. We follow the setting of NeLF to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, saving $15\times \sim 24\times$ storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., $18.04$ms (iPhone 13) for rendering one $1008\times756$ image of real 3D scenes. Additionally, we achieve similar image quality as NeRF and better quality than MobileNeRF (PSNR $26.15$ vs. $25.91$ on the real-world forward-facing dataset).
Figure 1: We propose D-NeRF, a method for synthesizing novel views, at an arbitrary point in time, of dynamic scenes with complex non-rigid geometries. We optimize an underlying deformable volumetric function from a sparse set of input monocular views without the need of ground-truth geometry nor multi-view images. The figure shows two scenes under variable points of view and time instances synthesised by the proposed model.
Owing to their simplicity and state-of-the-art performance, Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis. Although NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its performance drops significantly when this number is reduced. We observe that most artifacts in sparse-input scenarios are caused by errors in the estimated scene geometry and by divergent behavior at the beginning of training. We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and by annealing the ray sampling space during training. We additionally use a normalizing flow model to regularize the color of unobserved viewpoints. Our model outperforms not only other methods that optimize over a single scene, but in many cases also conditional models that are extensively pre-trained on large multi-view datasets.
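A minimal sketch of patch-based geometry regularization in the spirit of the method above: penalize depth variation inside small patches rendered from unobserved viewpoints, which discourages the floating artifacts caused by erroneous geometry. The squared-difference form is an assumption; the paper's exact loss may differ.

```python
import torch

def depth_smoothness_loss(depth_patch):
    # depth_patch: (P, P) expected depths rendered for an unobserved viewpoint.
    dx = depth_patch[:, 1:] - depth_patch[:, :-1]   # horizontal neighbor differences
    dy = depth_patch[1:, :] - depth_patch[:-1, :]   # vertical neighbor differences
    return (dx ** 2).mean() + (dy ** 2).mean()      # encourage piecewise-smooth depth
```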
We explore a new strategy for novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray to the color of its target pixel. The network is conditioned on local ray features generated by a coarse volumetric rendering from an explicit 3D feature volume, which is constructed from the input images using a 3D ConvNet. Our method achieves performance competitive with state-of-the-art neural radiance field based approaches on both synthetic and real MVS data, while offering a roughly 100x rendering speed-up.
We present a method to synthesize novel views from a single $360^\circ$ panorama image based on neural radiance fields (NeRF). Prior studies in similar settings rely on the neighborhood interpolation capability of multi-layer perceptrons to complete missing regions caused by occlusion, which leads to artifacts in their predictions. We propose 360FusionNeRF, a semi-supervised learning framework in which we introduce geometric supervision and semantic consistency to guide the progressive training process. First, the input image is re-projected to $360^\circ$ images, and auxiliary depth maps are extracted at other camera positions. In addition to the NeRF color guidance, the depth supervision improves the geometry of the synthesized views. Furthermore, we introduce a semantic consistency loss that encourages realistic renderings of novel views. We extract these semantic features using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on a large collection of diverse 2D photographs mined from the web with natural language supervision. Experiments indicate that our proposed method can produce plausible completions of unobserved regions while preserving the features of the scene. When trained across various scenes, 360FusionNeRF consistently achieves state-of-the-art performance when transferring to the synthetic Structured3D dataset (PSNR ~5%, SSIM ~3%, LPIPS ~13%), the real-world Matterport3D dataset (SSIM ~3%, LPIPS ~9%), and the Replica360 dataset (PSNR ~8%, SSIM ~2%, LPIPS ~18%).
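A minimal sketch of a CLIP-based semantic consistency loss as described above, using the open-source CLIP package (https://github.com/openai/CLIP): pull the rendered novel view toward a reference view in CLIP feature space. The cosine-distance form and feature choice are assumptions; the paper's exact loss may differ.

```python
import torch
import clip

model, preprocess = clip.load("ViT-B/32", device="cpu")

def semantic_consistency_loss(rendered_pil, reference_pil):
    # Inputs are PIL images; `preprocess` resizes/normalizes them for CLIP.
    batch = torch.stack([preprocess(rendered_pil), preprocess(reference_pil)])
    feats = model.encode_image(batch)
    feats = feats / feats.norm(dim=-1, keepdim=True)   # unit-normalize embeddings
    return 1.0 - (feats[0] * feats[1]).sum()           # cosine distance
```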
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
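A minimal sketch of the empty-space skipping that sparse voxel structures enable: march along the ray but keep only samples that fall inside voxels flagged as occupied. Here `occupied` is a hypothetical `{(i, j, k): bool}` map standing in for the paper's sparse voxel octree.

```python
import numpy as np

def sample_points(ray_o, ray_d, occupied, voxel_size, t_max, step):
    """Return ray depths whose points lie in occupied voxels."""
    ts = []
    for t in np.arange(0.0, t_max, step):
        p = ray_o + t * ray_d
        idx = tuple(np.floor(p / voxel_size).astype(int))
        if occupied.get(idx, False):   # skip voxels with no relevant scene content
            ts.append(t)
    return np.asarray(ts)              # depths at which to evaluate the field
```

Because most of a scene is empty, discarding unoccupied voxels before network evaluation is where the reported speed-up largely comes from.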
We introduce a scalable framework for novel view synthesis from RGB-D images with largely incomplete scene coverage. While generative neural approaches have demonstrated spectacular results on 2D images, they have not yet achieved similar photorealistic results in combination with scene completion, where a spatial 3D scene understanding is essential. To this end, we propose a generative pipeline that operates on a grid-based neural scene representation, completing unobserved scene parts via a learned distribution of scenes in a 2.5D-3D-2.5D manner. We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to infer the missing regions. Photorealistic image sequences can finally be obtained via consistency-aware differentiable rendering. Comprehensive experiments show that the graphical outputs of our method outperform the state of the art, especially within unobserved scene parts.