Volume rendering with neural fields has shown great promise for capturing and synthesizing novel views of 3D scenes. However, this type of approach requires querying the volume network at multiple points along each viewing ray in order to render an image, resulting in very slow rendering times. In this paper, we present a method that overcomes this limitation by learning a direct mapping from camera rays to the locations along the ray that are most likely to influence the pixel's final appearance. Using this approach, we are able to render, train, and fine-tune a volumetrically rendered neural field model faster than standard approaches. Unlike existing methods, our approach works with general volumes and can be trained end-to-end.
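To make the idea above concrete, here is a minimal Python sketch (not the paper's code; the network size, depth range, and names such as `RayToSamplesNet` are illustrative assumptions) of a network that maps a ray directly to the sample depths at which the radiance field is then queried:

```python
# Minimal sketch (hypothetical names): an MLP that maps a ray directly to the
# depths along it that matter most, so the radiance field is queried only there.
import torch
import torch.nn as nn

class RayToSamplesNet(nn.Module):
    def __init__(self, num_samples=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_samples),
        )

    def forward(self, origins, directions, near=2.0, far=6.0):
        # Predict normalized offsets in [0, 1] and map them to depths in [near, far].
        raw = self.mlp(torch.cat([origins, directions], dim=-1))
        t = torch.sort(torch.sigmoid(raw), dim=-1).values
        return near + (far - near) * t            # (num_rays, num_samples) depths

rays_o = torch.zeros(1024, 3)
rays_d = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
depths = RayToSamplesNet()(rays_o, rays_d)        # query the radiance field only at these depths
points = rays_o[:, None, :] + depths[..., None] * rays_d[:, None, :]
```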
Neural scene representations, such as Neural Radiance Fields (NeRF), are based on training a multilayer perceptron (MLP) using a set of color images with known poses. An increasing number of devices now produce RGB-D (color + depth) information, which is important for a wide range of tasks. Therefore, the aim of this paper is to investigate what improvements can be made to these promising implicit representations by incorporating depth information alongside the color images. In particular, the recently proposed mip-NeRF approach, which uses conical frustums instead of rays for volume rendering, allows one to account for the varying area of a pixel with distance from the camera center. The proposed method also models depth uncertainty. This allows addressing major limitations of NeRF-based approaches, including improving the accuracy of geometry, reducing artifacts, and providing faster training and shorter prediction times. Experiments are carried out on well-known benchmark scenes, and comparisons show improved accuracy in scene geometry and photometric reconstruction while reducing the training time by a factor of 3-5.
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800×800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: https://alexyu.net/plenoctrees.
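As a concrete illustration of the spherical-harmonic factorization described above, the following Python sketch evaluates a degree-2 real SH basis and dots it with per-leaf coefficients to obtain a view-dependent color; the coefficient layout and sigmoid activation are assumptions for illustration, not the released PlenOctree code:

```python
# Minimal sketch of SH-based view-dependent color: each leaf stores density plus
# 9 SH coefficients per color channel, and the color for a viewing direction is
# a dot product with the real SH basis (degree 2). Constants are the standard
# real spherical-harmonic values.
import numpy as np

def sh_basis_deg2(d):
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def leaf_color(sh_coeffs, view_dir):
    # sh_coeffs: (3, 9) coefficients for RGB stored in an octree leaf.
    view_dir = view_dir / np.linalg.norm(view_dir)
    rgb = sh_coeffs @ sh_basis_deg2(view_dir)      # (3,)
    return 1.0 / (1.0 + np.exp(-rgb))              # sigmoid keeps color in [0, 1]

coeffs = np.random.randn(3, 9) * 0.1
print(leaf_color(coeffs, np.array([0.0, 0.0, 1.0])))
```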
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations. However, rendering images with this new paradigm is slow, since an accurate quadrature of the volume rendering equation requires a large number of samples for each ray. Previous work has mainly focused on speeding up the network evaluation associated with each sample point, e.g., by caching radiance values in explicit spatial data structures, but this comes at the cost of model compactness. In this paper, we propose a novel dual-network architecture that takes an orthogonal direction by learning how best to reduce the number of required samples. To this end, we split our network into a sampling network and a shading network, which are trained jointly. Our training scheme uses fixed sample positions along each ray and incrementally introduces sparsity throughout training, achieving high quality even at low sample counts. After fine-tuning for a target sample count, the resulting compact neural representation can be rendered in real time. Our experiments show that our approach outperforms concurrent compact neural representations in terms of quality and frame rate, and performs on par with highly efficient hybrid representations. Code and supplementary material are available at https://thomasneff.github.io/adanerf.
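The dual-network idea can be illustrated with a short Python sketch; the layer sizes, threshold, and function names below are hypothetical and only show how a sampling network can gate which fixed sample positions reach the shading network:

```python
# Illustrative sketch (not the paper's code): a sampling network scores fixed
# sample positions along each ray, and only samples whose score passes a
# threshold are forwarded to the shading network.
import torch
import torch.nn as nn

n_fixed = 64                        # fixed candidate positions per ray
sample_net = nn.Sequential(nn.Linear(6, 128), nn.ReLU(), nn.Linear(128, n_fixed))
shade_net = nn.Sequential(nn.Linear(3 + 3, 128), nn.ReLU(), nn.Linear(128, 4))

def render_ray(o, d, near=2.0, far=6.0, threshold=0.05):
    t = torch.linspace(near, far, n_fixed)
    scores = torch.sigmoid(sample_net(torch.cat([o, d])))       # importance per position
    keep = scores > threshold                                    # sparsity grows during training
    pts = o + t[keep, None] * d
    out = shade_net(torch.cat([pts, d.expand_as(pts)], dim=-1))  # density + RGB per kept sample
    return out, scores

out, scores = render_ray(torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```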
NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
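A minimal Python sketch of the divide-and-conquer routing described above (grid resolution, MLP width, and class name are illustrative, not the paper's configuration):

```python
# Sketch: the scene bounding box is split into a coarse grid, each cell owns a
# tiny MLP, and a query point is routed to the MLP of the cell containing it.
import torch
import torch.nn as nn

class TinyMLPGrid(nn.Module):
    def __init__(self, res=8, hidden=32):
        super().__init__()
        self.res = res
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))
            for _ in range(res ** 3)
        )

    def cell_index(self, pts):
        # Map points in [-1, 1]^3 to a flat cell index.
        ijk = ((pts + 1.0) * 0.5 * self.res).long().clamp(0, self.res - 1)
        return (ijk[:, 0] * self.res + ijk[:, 1]) * self.res + ijk[:, 2]

    def forward(self, pts):
        out = torch.empty(pts.shape[0], 4)
        idx = self.cell_index(pts)
        for cell in idx.unique():                  # each point only touches its own tiny MLP
            mask = idx == cell
            out[mask] = self.mlps[int(cell)](pts[mask])
        return out                                 # per-point density + RGB

field = TinyMLPGrid()
print(field(torch.rand(8, 3) * 2 - 1).shape)       # torch.Size([8, 4])
```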
Neural Radiance Fields (NeRF) have shown great potential for representing 3D scenes and synthesizing novel views, but the computational overhead of NeRF at the inference stage remains heavy. To lighten the burden, we look into the coarse-to-fine, hierarchical sampling procedure of NeRF and point out that the coarse stage can be replaced by a lightweight module which we name a neural sample field. The proposed sample field maps rays into sample distributions, which can be transformed into point coordinates and fed into the radiance field for volume rendering. The overall framework is named NeuSample. We conduct experiments on Realistic Synthetic 360° and Real Forward-Facing, two popular sets of 3D scenes, and show that NeuSample achieves better rendering quality than NeRF while enjoying faster inference speed. NeuSample can be further compressed with the proposed sample field extraction method towards a better trade-off between quality and speed.
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
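The volume rendering step described above corresponds to the standard quadrature below; this small numpy sketch composites per-sample densities and colors into a pixel color:

```python
# Worked sketch of the classic volume-rendering quadrature along one ray:
#   alpha_i = 1 - exp(-sigma_i * delta_i),  T_i = prod_{j<i}(1 - alpha_j),
#   C = sum_i T_i * alpha_i * c_i
import numpy as np

def composite(sigmas, colors, t_vals):
    # sigmas: (N,), colors: (N, 3), t_vals: (N,) sample depths along the ray.
    deltas = np.diff(t_vals, append=1e10)          # spacing to the next sample
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors                        # (3,) pixel color

t = np.linspace(2.0, 6.0, 64)
sigma = np.random.rand(64)
rgb = np.random.rand(64, 3)
print(composite(sigma, rgb, t))
```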
While Neural Radiance Fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle with "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multiview posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
Neural implicit 3D representations have emerged as a powerful paradigm for reconstructing surfaces from multiview images and synthesizing novel views. Unfortunately, existing methods such as DVR or IDR require accurate per-pixel object masks as supervision. At the same time, neural radiance fields have revolutionized novel view synthesis. However, NeRF's estimated volume density does not admit accurate surface reconstruction. Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering using the same model. This unified perspective enables novel, more efficient sampling procedures and the ability to reconstruct accurate surfaces without input masks. We compare our method on the DTU, BlendedMVS, and a synthetic indoor dataset. Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Different from previous methods that require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering from a sparser set of training images. The network can optionally predict per-ray depths, thereby enabling applications such as auto-refocusing. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
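A short numpy sketch of the two-plane parameterization described above (the plane placement at z = 0 and z = 1 is an arbitrary choice for illustration):

```python
# Sketch: a ray is reduced to the 4D coordinate (u, v, s, t) given by its
# intersections with two fixed parallel planes, and a network maps that 4D
# coordinate to a color.
import numpy as np

def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    # Intersect the ray with the planes z = z_uv and z = z_st.
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return np.array([u, v, s, t])         # the 4D input of the light-field network

ray_o = np.array([0.1, 0.2, -1.0])
ray_d = np.array([0.0, 0.1, 1.0])
print(two_plane_coords(ray_o, ray_d))     # color = lightfield_mlp(two_plane_coords(...))
```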
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
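The empty-space skipping can be illustrated as follows; this sketch uses a dense boolean occupancy grid as a stand-in for the sparse voxel octree, and all sizes are illustrative:

```python
# Sketch of skipping empty space during ray marching: the implicit field is only
# evaluated at step points whose voxel is flagged as occupied.
import numpy as np

res = 32
occupancy = np.zeros((res, res, res), dtype=bool)
occupancy[12:20, 12:20, 12:20] = True            # toy scene content

def march(origin, direction, near=0.0, far=2.0, step=0.01):
    kept = []
    for t in np.arange(near, far, step):
        p = origin + t * direction               # point in [0, 1]^3
        ijk = np.clip((p * res).astype(int), 0, res - 1)
        if occupancy[tuple(ijk)]:                # evaluate the field only here
            kept.append(t)
    return np.array(kept)

ts = march(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]))
print(len(ts), "of", int(2.0 / 0.01), "steps need a network query")
```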
Learning radiance fields has shown remarkable results for novel view synthesis. The learning procedure usually costs a lot of time, which motivates the latest methods to speed it up by learning without neural networks or by using more efficient data structures. However, these specially designed approaches do not work for most radiance-field-based methods. To resolve this issue, we introduce a general strategy to speed up the learning procedure for almost all radiance-field-based methods. Our key idea is to reduce redundancy by shooting much fewer rays in the multi-view volume rendering procedure, which is the basis of almost all radiance-field-based methods. We find that shooting rays at pixels with dramatic color changes not only significantly reduces the training burden but also barely affects the accuracy of the learned radiance fields. In addition, we adaptively subdivide each view into a quadtree according to the average rendering error in each node of the tree, which lets us dynamically shoot more rays in more complex regions with larger rendering errors. We evaluate our method with different radiance-field-based methods under widely used benchmarks. Experimental results show that our method achieves accuracy comparable to the state of the art with much faster training.
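A minimal Python sketch of the adaptive quadtree allocation described above (the error threshold, minimum tile size, and ray budget are illustrative assumptions):

```python
# Sketch: image tiles with a high average rendering error are subdivided, and the
# per-tile ray budget follows the error, so more rays land in complex regions.
import numpy as np

def build_quadtree(error_map, x0, y0, size, threshold=0.05, min_size=8):
    tile_err = error_map[y0:y0 + size, x0:x0 + size].mean()
    if tile_err > threshold and size > min_size:
        h = size // 2
        return [leaf
                for dy in (0, h) for dx in (0, h)
                for leaf in build_quadtree(error_map, x0 + dx, y0 + dy, h,
                                           threshold, min_size)]
    return [(x0, y0, size, tile_err)]            # a leaf: (x, y, size, mean error)

def sample_rays(leaves, total_rays=4096):
    errs = np.array([e for *_, e in leaves]) + 1e-8
    budget = (errs / errs.sum() * total_rays).astype(int)
    pixels = []
    for (x0, y0, size, _), n in zip(leaves, budget):
        xs = np.random.randint(x0, x0 + size, n)
        ys = np.random.randint(y0, y0 + size, n)
        pixels.append(np.stack([xs, ys], axis=1))
    return np.concatenate(pixels)

error_map = np.random.rand(128, 128) * 0.1
error_map[32:64, 32:64] += 0.3                   # a region that renders poorly
leaves = build_quadtree(error_map, 0, 0, 128)
print(len(leaves), "leaves,", sample_rays(leaves).shape[0], "rays")
```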
We propose to tackle the multi-view photometric stereo problem using an extension of Neural Radiance Fields (NeRF) conditioned on light source direction. The geometric part of our neural representation predicts surface normal direction, allowing us to reason about local surface reflectance. The appearance part of our neural representation is decomposed into a neural bidirectional reflectance function (BRDF), learnt as part of the fitting process, and a shadow prediction network (conditioned on light source direction), allowing us to model the apparent BRDF. This balance of learnt components with inductive biases based on physical image formation models allows us to extrapolate far from the light source and viewer directions observed during training. We demonstrate our approach on a multi-view photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.
Neural Radiance Fields (NeRF) have recently achieved impressive results in novel view synthesis. However, previous NeRF works mainly focus on object-centric scenarios. In this work, we propose 360ROAM, a novel scene-level NeRF system that can synthesize images of large-scale indoor scenes in real time and support VR roaming. Our system first builds an omnidirectional neural radiance field, 360NeRF, from multiple input 360° images. We then progressively estimate a 3D probabilistic occupancy map, which represents the scene geometry in the form of spatial density. Skipping empty space and upsampling occupied voxels essentially allows us to accelerate volume rendering with 360NeRF in a geometry-aware manner. Furthermore, we use an adaptive dividing and warping strategy to slim and fine-tune the radiance fields for further improvement. A floorplan of the scene extracted from the occupancy map can provide guidance for ray sampling and facilitate a realistic roaming experience. To show the efficacy of our system, we collect a dataset of 360° images from various scenes and conduct extensive experiments. Quantitative and qualitative comparisons against baselines illustrate our superior performance in novel view synthesis for complex indoor scenes.
Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to synthesize images of 3D scenes from novel viewpoints. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons whose textures represent binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with a feature at every pixel, which is interpreted by a small, view-dependent MLP running in a fragment shader to produce the final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
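The deferred-shading step can be sketched on the CPU as follows; in practice the small MLP runs in a fragment shader, and the feature width and layer sizes below are placeholders:

```python
# CPU sketch: rasterization yields an 8-channel feature per pixel (from the
# polygon textures), and a tiny MLP turns feature + view direction into the
# final color.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8 + 3, 16)), np.zeros(16)    # weights baked into the shader
W2, b2 = rng.normal(size=(16, 3)), np.zeros(3)

def shade(feature_buffer, view_dir):
    # feature_buffer: (H, W, 8) rasterized features; view_dir: (H, W, 3).
    x = np.concatenate([feature_buffer, view_dir], axis=-1)
    h = np.maximum(x @ W1 + b1, 0.0)                    # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))         # per-pixel RGB in [0, 1]

features = rng.random((4, 4, 8))
views = np.broadcast_to(np.array([0.0, 0.0, 1.0]), (4, 4, 3))
print(shade(features, views).shape)                     # (4, 4, 3)
```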
Neural Radiance Fields (NeRF) encode a scene into a neural representation that enables photo-realistic rendering of novel views. However, a successful reconstruction from RGB images requires a large number of input views captured under static conditions - typically up to a few hundred images for room-size scenes. Our method aims to synthesize novel views of whole rooms from an order of magnitude fewer images. To this end, we leverage dense depth priors to constrain the NeRF optimization. First, we take advantage of the sparse depth data that is freely available from the structure-from-motion (SfM) preprocessing step used to estimate camera poses. Second, we use depth completion to convert these sparse points into dense depth maps and uncertainty estimates, which are used to guide the NeRF optimization. Our method enables data-efficient novel view synthesis on challenging indoor scenes, using as few as 18 images for an entire scene.
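A hedged sketch of how such a depth prior with uncertainty might enter the objective; the exact loss used by the paper may differ:

```python
# Sketch (assumed loss form): alongside the usual RGB loss, rendered depth is
# pulled toward the completed depth map, down-weighted where the depth estimate
# is uncertain.
import torch

def depth_guided_loss(rgb_pred, rgb_gt, depth_pred, depth_prior, depth_std,
                      lambda_depth=0.1):
    rgb_loss = ((rgb_pred - rgb_gt) ** 2).mean()
    depth_loss = (((depth_pred - depth_prior) / depth_std.clamp(min=1e-3)) ** 2).mean()
    return rgb_loss + lambda_depth * depth_loss

loss = depth_guided_loss(torch.rand(1024, 3), torch.rand(1024, 3),
                         torch.rand(1024), torch.rand(1024) + 2.0,
                         torch.full((1024,), 0.2))
print(loss.item())
```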
Neural scene representations have recently produced visually impressive results for 3D scenes; however, their study and progress have mainly been limited to the visualization of virtual models in computer graphics and computer vision, without explicitly accounting for sensor and pose uncertainty. Using such novel scene representations in robotics applications, however, requires accounting for this uncertainty in the neural map. The aim of this paper is therefore to propose a novel method for training probabilistic neural scene representations with uncertain training data, which enables the inclusion of these representations in robotics applications. Acquiring images with cameras or depth sensors involves inherent uncertainty, and furthermore, the camera poses used to learn the 3D model are also imperfect. If these measurements are used for training without taking their uncertainty into account, the resulting model is non-optimal, and the resulting scene representations may contain artifacts such as blur and uneven geometry. In this work, the problem of integrating uncertainty into the learning process is investigated by focusing on training with uncertain information in a probabilistic manner. The proposed method explicitly augments the training likelihood with an uncertainty term, such that the learnt probability distribution of the network is minimized with respect to the training uncertainty. It is shown that, in addition to more precise and consistent geometry, this also leads to more accurate image rendering quality. Validation is performed on both synthetic and real datasets, showing that the proposed method outperforms state-of-the-art approaches. The results show that the proposed method is capable of rendering novel high-quality views even when the training data is limited.
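One common way to realize an uncertainty-augmented likelihood is a per-pixel Gaussian negative log-likelihood, sketched below; this illustrates the principle rather than the paper's exact formulation:

```python
# Sketch: each observed pixel carries a measurement standard deviation, and the
# loss is the Gaussian negative log-likelihood rather than plain MSE, so
# uncertain observations pull the network less.
import torch

def nll_loss(pred, target, meas_std):
    var = meas_std ** 2 + 1e-6
    return (0.5 * (pred - target) ** 2 / var + 0.5 * torch.log(var)).mean()

pred = torch.rand(2048, 3, requires_grad=True)
target = torch.rand(2048, 3)
meas_std = torch.full((2048, 1), 0.05)           # per-pixel sensor uncertainty
nll_loss(pred, target, meas_std).backward()
```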
Virtual content creation and interaction play an important role in modern 3D applications such as AR and VR. Recovering detailed 3D models from real scenes can significantly expand the scope of such applications, and has been studied for decades in the computer vision and computer graphics communities. We propose Vox-Surf, a voxel-based implicit surface representation. Vox-Surf divides the space into finite bounded voxels. Each voxel stores geometry and appearance information at its corner vertices. Benefiting from the sparsity inherited from the voxel representation, Vox-Surf is suitable for almost any scenario and can easily be trained from multi-view images. We utilize a progressive training procedure to gradually extract important voxels for further optimization, so that only valid voxels are preserved, which greatly reduces the number of sampling points and increases rendering speed. The fine voxels can also be regarded as bounding volumes for collision detection. Experiments show that the Vox-Surf representation can learn delicate surface details and accurate colors with less memory and faster rendering speed than other methods. We also show that Vox-Surf can be more practical in scene editing and AR applications.
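Reading features stored at voxel corner vertices amounts to a trilinear blend, sketched below with an illustrative feature width:

```python
# Sketch: a query point inside a voxel gets the trilinear blend of the features
# stored at the voxel's eight corner vertices.
import numpy as np

def trilinear(corner_feats, frac):
    # corner_feats: (2, 2, 2, C) features at the voxel's 8 corners,
    # frac: (3,) local coordinates of the query point in [0, 1]^3.
    fx, fy, fz = frac
    wx = np.array([1 - fx, fx])
    wy = np.array([1 - fy, fy])
    wz = np.array([1 - fz, fz])
    w = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
    return (w[..., None] * corner_feats).sum(axis=(0, 1, 2))

corners = np.random.rand(2, 2, 2, 16)
print(trilinear(corners, np.array([0.3, 0.5, 0.9])).shape)    # (16,)
```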
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene leading to prohibitive reconstruction time. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. The experiments on the DTU, the NeRF Synthetics, the ScanNet and the Tanks and Temples datasets demonstrate Point-NeRF can surpass the existing methods and achieve the state-of-the-art results.
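The point-feature aggregation can be sketched as follows; the neighborhood radius, feature width, and decoder are placeholders rather than Point-NeRF's actual design:

```python
# Sketch: for each shading location along a ray, the features of nearby neural
# points are combined with inverse-distance weights before being decoded into
# density and color.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))

def aggregate(query, point_xyz, point_feat, radius=0.1):
    # query: (3,), point_xyz: (P, 3), point_feat: (P, 32)
    d = torch.linalg.norm(point_xyz - query, dim=-1)
    near = d < radius
    if not near.any():
        return torch.zeros(4)                    # empty space: zero density
    w = 1.0 / (d[near] + 1e-6)
    feat = (w[:, None] * point_feat[near]).sum(0) / w.sum()
    return decoder(feat)                         # density + RGB at the query point

cloud_xyz = torch.rand(10000, 3)
cloud_feat = torch.randn(10000, 32)
print(aggregate(torch.tensor([0.5, 0.5, 0.5]), cloud_xyz, cloud_feat))
```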