We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis. Plenoxels represent a scene as a sparse 3D grid with spherical harmonics. This representation can be optimized from calibrated images via gradient methods and regularization, without any neural components. On standard benchmark tasks, Plenoxels are optimized two orders of magnitude faster than Neural Radiance Fields with no loss in visual quality.
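The abstract does not say which regularizer is used; below is a minimal NumPy sketch of one plausible choice for a grid representation, a total-variation penalty on neighboring voxel coefficients (grid shape and channel count are illustrative, not the paper's configuration).

```python
import numpy as np

def tv_regularizer(grid: np.ndarray) -> float:
    """Total-variation penalty over a dense voxel grid of shape (X, Y, Z, C).

    Penalizes squared differences between neighboring voxels along each spatial
    axis, encouraging smooth density / spherical-harmonic coefficients.
    """
    dx = grid[1:, :, :, :] - grid[:-1, :, :, :]
    dy = grid[:, 1:, :, :] - grid[:, :-1, :, :]
    dz = grid[:, :, 1:, :] - grid[:, :, :-1, :]
    return float((dx ** 2).mean() + (dy ** 2).mean() + (dz ** 2).mean())

# Toy usage: a random 16^3 grid with one density channel and 27 SH channels per voxel.
grid = np.random.randn(16, 16, 16, 28).astype(np.float32)
print("TV loss:", tv_regularizer(grid))
```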
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800×800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: https://alexyu.net/plenoctrees.
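As a rough illustration of the spherical-harmonic appearance factorization (a sketch under standard real-SH conventions; the degree and output activation here are illustrative, not the paper's code): a stored set of SH coefficients per color channel is turned into view-dependent radiance by evaluating the closed-form basis at the query direction.

```python
import numpy as np

def sh_basis(d: np.ndarray) -> np.ndarray:
    """Real spherical-harmonic basis up to degree 2 (9 values) for a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,                        # l=0
        0.488603 * y,                    # l=1, m=-1
        0.488603 * z,                    # l=1, m= 0
        0.488603 * x,                    # l=1, m= 1
        1.092548 * x * y,                # l=2, m=-2
        1.092548 * y * z,                # l=2, m=-1
        0.315392 * (3.0 * z * z - 1.0),  # l=2, m= 0
        1.092548 * x * z,                # l=2, m= 1
        0.546274 * (x * x - y * y),      # l=2, m= 2
    ])

def radiance(sh_coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """View-dependent RGB from per-channel SH coefficients of shape (3, 9)."""
    d = view_dir / np.linalg.norm(view_dir)
    rgb = sh_coeffs @ sh_basis(d)             # weighted sum of basis functions
    return 1.0 / (1.0 + np.exp(-rgb))         # sigmoid keeps color in [0, 1]

coeffs = np.random.randn(3, 9) * 0.1          # one leaf's stored appearance
print(radiance(coeffs, np.array([0.0, 0.0, 1.0])))
```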
We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images that capture the scene with known poses. This task, often applied to novel view synthesis, was recently revolutionized by Neural Radiance Fields (NeRF) for its state-of-the-art quality and flexibility. However, NeRF and its variants require lengthy training times ranging from hours to days for a single scene. In contrast, our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes on a single GPU. We adopt a representation consisting of a density voxel grid for scene geometry and a feature voxel grid with a shallow network for complex view-dependent appearance. Modeling with explicit and discretized volume representations is not new, but we propose two simple yet non-trivial techniques that contribute to fast convergence and high-quality output. First, we introduce post-activation interpolation on voxel density, which can produce sharp surfaces at lower grid resolutions. Second, direct voxel density optimization is prone to suboptimal geometry solutions, so we robustify the optimization process by imposing several priors. Finally, evaluation on five inward-facing benchmarks shows that our method matches, if not surpasses, NeRF's quality, yet it only takes about 15 minutes to train a new scene from scratch.
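A toy NumPy sketch of the post-activation idea as we read it (raw values, step size, and the softplus activation are illustrative): interpolating raw voxel values and activating afterwards can produce a sharp opacity transition inside a single cell, whereas activating first and interpolating the results yields a linear, blurrier ramp.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def alpha(density, step=0.5):
    """Opacity of a ray segment of (toy) length `step` for an activated density."""
    return 1.0 - np.exp(-density * step)

# Two neighboring voxels storing *raw* (pre-activation) density values.
raw_a, raw_b = -4.0, 6.0
t = np.linspace(0.0, 1.0, 5)   # interpolation weights along the voxel edge

# Pre-activation: activate at the voxel centers, then interpolate the alphas.
pre = (1 - t) * alpha(softplus(raw_a)) + t * alpha(softplus(raw_b))

# Post-activation (sketch of the paper's choice): interpolate raw values first, then activate.
post = alpha(softplus((1 - t) * raw_a + t * raw_b))

print("pre-activation :", np.round(pre, 3))   # linear ramp
print("post-activation:", np.round(post, 3))  # stays low, then rises sharply
```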
Neural Radiance Fields (NeRF) is a popular method for data-driven 3D reconstruction. Given its simplicity and high-quality rendering, many NeRF applications are being developed. However, a major limitation of NeRF is its slow speed. Many attempts have been made to speed up NeRF training and inference, including intricate code-level optimization and caching, the use of sophisticated data structures, and amortization through multi-task and meta learning. In this work, we revisit the basic building blocks of NeRF through the lens of classic techniques that predate it. We propose Voxel-Accelerated NeRF (VaxNeRF), which integrates NeRF with the visual hull, a classic 3D reconstruction technique that only requires binary foreground-background pixel labels per image. The visual hull, which can be optimized in about 10 seconds, provides a coarse in-out field separation that lets NeRF omit a substantial number of network evaluations. We provide a clean, fully-pythonic, JAX-based implementation on the popular JaxNeRF codebase, consisting of only about 30 lines of code changes and a modular visual hull subroutine, and achieve about a 2-8x learning speedup on top of the highly performant JaxNeRF baseline with zero degradation in rendering quality. With sufficient compute, this effectively brings full NeRF training down from hours to 30 minutes. We hope that VaxNeRF -- a careful combination of a classic technique with the deep method that arguably replaced it -- can empower and accelerate new NeRF extensions and applications with its simplicity, portability, and reliable performance gains. Code is available at https://github.com/naruya/vaxnerf.
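A minimal sketch of the visual-hull carving step, assuming simple pinhole cameras with 3x4 projection matrices and voxels in front of every camera (names and shapes are illustrative, not VaxNeRF's API): voxels whose projections fall outside any foreground mask are discarded, so NeRF evaluations for samples in carved-away space can be skipped.

```python
import numpy as np

def visual_hull(voxel_centers, masks, projections):
    """Keep voxels whose projection lands on foreground in every view.

    voxel_centers: (N, 3) world-space points
    masks:         list of (H, W) boolean foreground masks
    projections:   list of (3, 4) camera projection matrices
    Returns a boolean array marking voxels inside the visual hull.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])  # (N, 4)
    for mask, P in zip(masks, projections):
        uvw = homog @ P.T                                   # (N, 3) homogeneous pixel coords
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        fg = np.zeros(len(voxel_centers), dtype=bool)
        fg[inside] = mask[v[inside], u[inside]]
        keep &= fg                                          # carve away voxels outside this silhouette
    return keep
```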
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
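For reference, a NumPy version of the classic volume-rendering quadrature the abstract refers to (the standard discretization, not the paper's code): each sample's color is weighted by its opacity and by the transmittance accumulated along the ray.

```python
import numpy as np

def composite(densities, colors, deltas):
    """Numerical volume rendering along one ray.

    densities: (N,) sigma at each sample
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) distance between adjacent samples
    Returns C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i.
    """
    alphas = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # T_i, transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy ray with 64 samples.
n = 64
print(composite(np.random.rand(n), np.random.rand(n, 3), np.full(n, 0.02)))
```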
Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to synthesize images of 3D scenes from novel viewpoints. However, they rely on specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons whose textures encode binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with a feature vector at every pixel, which is interpreted by a small, view-dependent MLP running in a fragment shader to produce the final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
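A toy NumPy sketch of the deferred-shading step described above (layer sizes and weights are placeholders; in the actual system the small MLP runs in a fragment shader): the rasterizer fills a per-pixel feature buffer from the polygon textures, and a tiny MLP maps each feature plus view direction to the final color.

```python
import numpy as np

def tiny_mlp(features, view_dirs, w1, b1, w2, b2):
    """Deferred shading pass: per-pixel features + view direction -> RGB.

    features:  (H, W, F) feature G-buffer produced by rasterizing textured polygons
    view_dirs: (H, W, 3) per-pixel unit view directions
    w1, b1, w2, b2: weights of a 2-layer MLP (illustrative sizes)
    """
    x = np.concatenate([features, view_dirs], axis=-1)    # (H, W, F+3)
    h = np.maximum(x @ w1 + b1, 0.0)                      # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))           # sigmoid -> colors in [0, 1]

H, W, F, hidden = 4, 4, 8, 16
rng = np.random.default_rng(0)
rgb = tiny_mlp(rng.normal(size=(H, W, F)), rng.normal(size=(H, W, 3)),
               rng.normal(size=(F + 3, hidden)), np.zeros(hidden),
               rng.normal(size=(hidden, 3)), np.zeros(3))
print(rgb.shape)  # (4, 4, 3)
```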
We present TensoRF, a novel approach to model and reconstruct radiance fields. Unlike NeRF that purely uses MLPs, we model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features. Our central idea is to factorize the 4D scene tensor into multiple compact low-rank tensor components. We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF. To further boost performance, we introduce a novel vector-matrix (VM) decomposition that relaxes the low-rank constraints for two modes of a tensor and factorizes tensors into compact vector and matrix factors. Beyond superior rendering quality, our models with CP and VM decompositions lead to a significantly lower memory footprint in comparison to previous and concurrent works that directly optimize per-voxel features. Experimentally, we demonstrate that TensoRF with CP decomposition achieves fast reconstruction (<30 min) with better rendering quality and even a smaller model size (<4 MB) compared to NeRF. Moreover, TensoRF with VM decomposition further boosts rendering quality and outperforms previous state-of-the-art methods, while reducing the reconstruction time (<10 min) and retaining a compact model size (<75 MB).
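A small NumPy sketch of the vector-matrix (VM) decomposition for a single 3D tensor (rank and resolutions are illustrative): each component pairs a vector along one axis with a matrix over the remaining two axes.

```python
import numpy as np

def vm_reconstruct(vx, Myz, vy, Mxz, vz, Mxy):
    """Rebuild a dense (X, Y, Z) tensor from a vector-matrix (VM) decomposition.

    Each of the R components pairs a vector along one axis with a matrix over
    the other two axes:  T = sum_r vx_r (x) Myz_r + vy_r (x) Mxz_r + vz_r (x) Mxy_r.
    vx: (R, X), Myz: (R, Y, Z), and analogously for the other two modes.
    """
    t = np.einsum('rx,ryz->xyz', vx, Myz)
    t += np.einsum('ry,rxz->xyz', vy, Mxz)
    t += np.einsum('rz,rxy->xyz', vz, Mxy)
    return t

R, X, Y, Z = 4, 32, 32, 32
rng = np.random.default_rng(0)
dense = vm_reconstruct(rng.normal(size=(R, X)), rng.normal(size=(R, Y, Z)),
                       rng.normal(size=(R, Y)), rng.normal(size=(R, X, Z)),
                       rng.normal(size=(R, Z)), rng.normal(size=(R, X, Y)))
print(dense.shape)  # (32, 32, 32)
```

In practice the dense tensor would never be materialized; per-sample values are obtained by interpolating the factors directly, which is what keeps the memory footprint small.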
DIVeR builds on the key ideas of NeRF and its variants -- density models and volume rendering -- to learn 3D object models that can be rendered realistically from small numbers of images. In contrast to all previous NeRF methods, DIVeR uses deterministic rather than stochastic estimates of the volume rendering integral. DIVeR's representation is a voxel-based field of features. To compute the volume rendering integral, a ray is broken into intervals, one per voxel; the components of the volume rendering integral are estimated from the features of each interval using an MLP, and the components are aggregated. As a result, DIVeR can render thin translucent structures that are missed by other integrators. Furthermore, DIVeR's representation has relatively exposed semantics compared to other such methods -- moving feature vectors around in voxel space results in natural edits. Extensive qualitative and quantitative comparisons to current state-of-the-art methods show that DIVeR produces models that (1) render at or above state-of-the-art quality, (2) are very small without being baked, (3) render very fast without being baked, and (4) can be edited in natural ways.
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene leading to prohibitive reconstruction time. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. The experiments on the DTU, the NeRF Synthetics, the ScanNet and the Tanks and Temples datasets demonstrate Point-NeRF can surpass the existing methods and achieve the state-of-the-art results.
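Roughly sketched below (an assumed inverse-distance weighting, not Point-NeRF's exact operator, which additionally processes each neighbor with MLPs): a shading point on a ray gathers the neural features of nearby points, and points outside a search radius contribute nothing.

```python
import numpy as np

def aggregate(query, point_xyz, point_feat, radius=0.1, eps=1e-8):
    """Inverse-distance-weighted aggregation of neural point features near a shading point.

    query:      (3,) shading location on a ray
    point_xyz:  (N, 3) neural point positions
    point_feat: (N, F) per-point neural features
    Returns an (F,) aggregated feature, or zeros when no point lies within `radius`.
    """
    d = np.linalg.norm(point_xyz - query, axis=1)
    near = d < radius
    if not near.any():
        return np.zeros(point_feat.shape[1])   # empty space: nothing to shade here
    w = 1.0 / (d[near] + eps)
    w /= w.sum()
    return w @ point_feat[near]

print(aggregate(np.zeros(3), np.random.rand(100, 3) - 0.5, np.random.rand(100, 8)))
```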
We propose high dynamic range (HDR) radiance fields, HDR-Plenoxels, that learn a plenoptic function of 3D HDR radiance, geometry information, and the varying camera settings inherent in 2D low dynamic range (LDR) images. Our voxel-based volume rendering pipeline reconstructs HDR radiance fields end-to-end from only multi-view LDR images taken under varying camera settings, and converges quickly. To handle the variety of cameras found in the real world, we introduce a tone mapping module that models the digital in-camera imaging pipeline (ISP) and disentangles radiometric settings. Our tone mapping module lets us render each novel view while controlling its radiometric settings. Finally, we build a multi-view dataset with varying camera conditions that fits our problem setting. Our experiments show that HDR-Plenoxels can express detailed, high-quality HDR novel views from LDR images captured with various cameras.
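A minimal sketch of what a tone-mapping module could look like, assuming a simple per-view exposure/white-balance gain followed by a gamma-style response curve; the paper's module models the full in-camera ISP, so this only illustrates the HDR-to-LDR direction.

```python
import numpy as np

def tone_map(hdr_rgb, exposure=1.0, white_balance=(1.0, 1.0, 1.0), gamma=2.2):
    """Map linear HDR radiance to an LDR image under per-view radiometric settings.

    hdr_rgb: (..., 3) linear HDR radiance from the voxel renderer
    exposure / white_balance: per-view gains; gamma: simple response curve.
    """
    x = hdr_rgb * exposure * np.asarray(white_balance)
    x = np.clip(x, 0.0, None) ** (1.0 / gamma)   # toy camera response curve
    return np.clip(x, 0.0, 1.0)                   # LDR range before quantization

print(tone_map(np.array([[0.2, 1.5, 4.0]]), exposure=0.5))
```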
Recently, Neural Radiance Fields (NeRF) have been revolutionizing novel view synthesis (NVS) with their superior performance. However, NeRF and its variants generally require a lengthy per-scene training procedure in which a multi-layer perceptron (MLP) is fitted to the captured images. To remedy this challenge, voxel-grid representations have been proposed to significantly speed up training. However, these existing methods can only deal with static scenes. How to develop an efficient and accurate dynamic view synthesis method remains an open problem. Extending methods for static scenes to dynamic scenes is not straightforward, because both the scene geometry and appearance change over time. In this paper, building on recent advances in voxel-grid optimization, we propose a fast deformable radiance field method to handle dynamic scenes. Our method consists of two modules. The first module adopts a deformation grid to store 3D dynamic features, and a lightweight MLP that uses the interpolated features to map a 3D point in observation space to canonical space. The second module contains a density grid and a color grid to model the geometry and appearance of the scene. Occlusion is explicitly modeled to further improve rendering quality. Experimental results show that our method achieves performance comparable to D-NeRF using only 20 minutes of training, which is more than 70x faster than D-NeRF, clearly demonstrating the efficiency of the proposed method.
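A rough sketch of the first module as described (grid resolution, feature size, and the offset parameterization are assumptions): a feature looked up from the deformation grid at the observed point, together with the time, feeds a lightweight MLP that maps the point back to canonical space, where the density and color grids are then queried.

```python
import numpy as np

def grid_feature(grid, p):
    """Nearest-neighbor lookup in a (D, H, W, C) feature grid at normalized p in [0, 1]^3
    (trilinear interpolation would be used in practice)."""
    idx = np.clip((p * (np.array(grid.shape[:3]) - 1)).round().astype(int), 0, None)
    return grid[idx[0], idx[1], idx[2]]

def to_canonical(p_obs, t, deform_grid, mlp):
    """Map an observation-space point at time t back to canonical space via a predicted offset."""
    feat = grid_feature(deform_grid, p_obs)              # dynamic feature at the observed point
    offset = mlp(np.concatenate([p_obs, [t], feat]))     # lightweight MLP -> 3D offset
    return p_obs + offset                                # canonical point; query density/color grids here

deform_grid = np.zeros((8, 8, 8, 4), dtype=np.float32)
toy_mlp = lambda v: 0.01 * v[:3]                         # placeholder for the trained MLP
print(to_canonical(np.array([0.3, 0.5, 0.7]), 0.25, deform_grid, toy_mlp))
```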
Neural surface reconstruction aims to reconstruct accurate 3D surfaces from multi-view images. Previous methods based on neural volume rendering mostly train fully implicit models, which require hours of training for a single scene. Recent efforts explore explicit volumetric representations, which substantially accelerate the optimization process by memorizing significant information in learnable voxel grids. However, these voxel-based methods often struggle to reconstruct fine-grained geometry. Through empirical studies, we found that high-quality surface reconstruction hinges on two key factors: the ability to construct a coherent shape, and precise modeling of the color-geometry dependency. In particular, the latter is the key to accurately reconstructing fine details. Inspired by these findings, we develop Voxurf, a voxel-based approach for efficient and accurate neural surface reconstruction, which consists of two stages: 1) leverage a learnable feature grid to construct the color field and obtain a coherent coarse shape, and 2) refine the detailed geometry with a dual color network that captures the precise color-geometry dependency. We further introduce a hierarchical geometry feature to enable information sharing across voxels. Our experiments show that Voxurf achieves high efficiency and high quality at the same time. On the DTU benchmark, Voxurf achieves higher reconstruction quality with a 20x training speedup compared to state-of-the-art methods.
We propose a differentiable rendering algorithm for efficient novel view synthesis. By departing from volume-based representations in favor of a learned point representation, we improve on existing methods by orders of magnitude in memory and runtime, in both training and inference. The method begins with a uniformly sampled random point cloud and learns per-point position and view-dependent appearance, using a differentiable splat-based renderer to evolve the model to match a set of input images. Our method is up to 300x faster than NeRF in both training and inference, with only a marginal sacrifice in quality, while using less than 10 MB of memory for a static scene. For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at a near-interactive rate, while maintaining high image quality and temporal coherence even without imposing any temporal-coherency regularizers.
Learning radiance fields has shown remarkable results for novel view synthesis. The learning procedure usually costs lots of time, which motivates the latest methods to speed it up by learning without neural networks or by using more efficient data structures. However, these specially designed approaches do not work for most radiance-field-based methods. To resolve this issue, we introduce a general strategy to speed up the learning procedure for almost all radiance-field-based methods. Our key idea is to reduce redundancy by shooting much fewer rays in the multi-view volume rendering procedure, which underlies almost all radiance-field-based methods. We find that shooting rays at pixels with dramatic color change not only significantly reduces the training burden but also barely affects the accuracy of the learned radiance fields. In addition, we adaptively subdivide each view into a quadtree according to the average rendering error in each node of the tree, which lets us dynamically shoot more rays into more complex regions with larger rendering errors. We evaluate our method with different radiance-field-based methods under widely used benchmarks. Experimental results show that our method achieves comparable accuracy to the state of the art with much faster training.
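A small sketch of the adaptive quadtree subdivision described above (the error threshold and minimum node size are illustrative): regions whose mean rendering error stays low become leaves early, while high-error regions are split further and would receive more rays during training.

```python
import numpy as np

def subdivide(err_map, x0, y0, x1, y1, thresh, min_size=16, leaves=None):
    """Recursively split an image region into a quadtree based on mean rendering error.

    err_map: (H, W) per-pixel rendering error from previous training iterations.
    Returns a list of leaf regions (x0, y0, x1, y1, mean_error); more rays would
    be sampled inside leaves with larger mean error.
    """
    if leaves is None:
        leaves = []
    mean_err = err_map[y0:y1, x0:x1].mean()
    if mean_err < thresh or min(x1 - x0, y1 - y0) <= min_size:
        leaves.append((x0, y0, x1, y1, float(mean_err)))
        return leaves
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
    for a, b, c, d in [(x0, y0, xm, ym), (xm, y0, x1, ym), (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        subdivide(err_map, a, b, c, d, thresh, min_size, leaves)
    return leaves

err = np.random.rand(128, 128) ** 4          # toy error map: mostly low error
print(len(subdivide(err, 0, 0, 128, 128, thresh=0.1)))
```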
NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
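A toy sketch of the divide-and-conquer routing (network sizes and the uniform grid layout are illustrative, not the paper's configuration): the scene volume is covered by a regular grid of tiny MLPs, and each query point is evaluated only by the MLP owning its cell.

```python
import numpy as np

class TinyMLPGrid:
    """Route each 3D query to the tiny MLP responsible for its grid cell."""

    def __init__(self, resolution=16, in_dim=3, hidden=32, out_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        n = resolution ** 3
        self.res = resolution
        # One small weight set per cell instead of one large shared MLP.
        self.w1 = rng.normal(0, 0.1, (n, in_dim, hidden))
        self.w2 = rng.normal(0, 0.1, (n, hidden, out_dim))

    def cell_index(self, p):
        """Points p in [0, 1]^3, shape (N, 3) -> flat index of each point's cell."""
        ijk = np.minimum((p * self.res).astype(int), self.res - 1)
        return (ijk[:, 0] * self.res + ijk[:, 1]) * self.res + ijk[:, 2]

    def query(self, p):
        """Evaluate density + RGB for points p of shape (N, 3) with the owning tiny MLPs."""
        idx = self.cell_index(p)
        h = np.maximum(np.einsum('ni,nih->nh', p, self.w1[idx]), 0.0)
        return np.einsum('nh,nho->no', h, self.w2[idx])

net = TinyMLPGrid()
print(net.query(np.random.rand(5, 3)).shape)  # (5, 4): density + RGB per sample
```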
In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to the corresponding color values. We train a deep, fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous methods that require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying each ray's color directly from the network, thus enabling high-quality light field rendering from a sparser set of training images. The network can optionally predict per-ray depth, enabling applications such as auto refocusing. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
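A minimal sketch of the two-plane parameterization: each ray is intersected with two parallel planes, and the four intersection coordinates (u, v, s, t) form the 4D input that the fully connected network maps to a color (the plane placement here is arbitrary).

```python
import numpy as np

def two_plane_coords(origins, dirs, z_uv=0.0, z_st=1.0):
    """Two-plane light-field parameterization of rays.

    origins, dirs: (N, 3) ray origins and (unnormalized) directions, not parallel to the planes.
    Returns (N, 4) coordinates (u, v, s, t): the xy intersections with the planes
    z = z_uv and z = z_st, which the light-field network maps to RGB.
    """
    t_uv = (z_uv - origins[:, 2]) / dirs[:, 2]
    t_st = (z_st - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_uv[:, None] * dirs[:, :2]
    st = origins[:, :2] + t_st[:, None] * dirs[:, :2]
    return np.concatenate([uv, st], axis=1)

rays_o = np.array([[0.0, 0.0, -1.0]])
rays_d = np.array([[0.1, 0.2, 1.0]])
print(two_plane_coords(rays_o, rays_d))  # 4D input for the light-field MLP
```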
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstruct high-quality radiance fields from blurry images. While current state-of-the-art (SoTA) scene reconstruction methods achieve photorealistic renderings from clean source views, their performance suffers when the source views are affected by blur, which is commonly observed for images captured in the wild. Previous deblurring methods either do not account for 3D geometry or are computationally intensive. To address these issues, PDRF adopts a progressively deblurring scheme in radiance field modeling that accurately models blur by incorporating 3D scene context. PDRF further uses an efficient importance sampling scheme, which leads to fast scene optimization. Specifically, PDRF proposes a coarse ray renderer to quickly estimate voxel density and features; a fine voxel renderer is then used to achieve high-quality ray tracing. We perform extensive experiments and show that PDRF is 15x faster than the previous SoTA while achieving better performance on both synthetic and real scenes.
Figure 1: (a) Each pixel in a NeX multiplane image consists of an alpha transparency value, a base color k_0, and view-dependent reflectance coefficients k_1 ... k_n; a linear combination of these coefficients and the neural basis functions learned from a network produces the final color value. (b, c) Synthesized images that can be rendered in real time with view-dependent effects such as the reflection on the silver spoon.
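Written out, the per-pixel color model the caption describes is (notation follows the caption; H_n denotes the learned basis functions evaluated at the viewing direction):

```latex
C(\mathbf{v}) \;=\; \mathbf{k}_0 \;+\; \sum_{n=1}^{N} \mathbf{k}_n \, H_n(\mathbf{v})
```

where k_0 is the view-independent base color, k_1, ..., k_N are the per-pixel reflectance coefficients, and H_n(v) are global basis functions of the viewing direction v predicted by the neural network.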
Recent advances in Neural Radiance Fields (NeRFs) treat the problem of novel view synthesis as Sparse Radiance Field (SRF) optimization using sparse voxels for efficient and fast rendering (Plenoxels, InstantNGP). In order to leverage machine learning and adoption of SRFs as a 3D representation, we present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis consisting of $\sim$ 17 million images rendered from nearly 40,000 shapes at high resolution (400×400 pixels). The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis and includes more than one million 3D-optimized radiance fields with multiple voxel resolutions. Furthermore, we propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only few views. This is done by using the densely collected SPARF dataset and 3D sparse convolutions. SuRFNet employs partial SRFs from few/one images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields that can be rendered from novel views. Our approach achieves state-of-the-art results in the task of unconstrained novel view synthesis based on few views on ShapeNet as compared to recent baselines. The SPARF dataset will be made public with the code and models on the project website https://abdullahamdi.com/sparf/.