Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance, and structures this function using a collection of spatially-varying scene properties. We show that, together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
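The central reparameterization is small enough to illustrate directly. Below is a minimal sketch (my own illustration, not the authors' released code) of querying appearance with the view direction reflected about the surface normal, which is the change that makes glossy appearance easier for a directional MLP to fit.

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflect a unit view direction about a unit surface normal."""
    return 2.0 * np.dot(view_dir, normal) * normal - view_dir

# direction from the surface point toward the camera, and a tilted normal
view_dir = np.array([0.0, 0.0, 1.0])
normal = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
reflected = reflect(view_dir, normal)   # fed to the directional MLP instead of view_dir
```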
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
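Since the abstract leans on volume rendering being differentiable, here is a minimal sketch (my own illustration under the usual quadrature assumptions, not the released code) of how per-sample densities and colors along one camera ray are composited into a pixel color.

```python
import numpy as np

def composite_ray(densities, colors, t_vals):
    """densities: (N,), colors: (N, 3), t_vals: (N,) sample distances along the ray."""
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)   # spacing between samples
    alphas = 1.0 - np.exp(-densities * deltas)           # opacity of each segment
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas[:-1])])
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)       # expected color along the ray

# toy usage: 64 samples along one ray
t = np.linspace(2.0, 6.0, 64)
sigma = np.random.rand(64)
rgb = np.random.rand(64, 3)
pixel = composite_ray(sigma, rgb, t)
```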
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of the object illuminated by one unknown lighting condition. This enables rendering novel views of the object under arbitrary environment lighting and editing the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation, and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and bidirectional reflectance distribution functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models for free-viewpoint relighting in this challenging and under-constrained capture setup for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep-learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
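To make the factorization concrete, here is a minimal sketch (an assumed, Lambertian-only illustration, not NeRFactor's code) of how per-point albedo, a surface normal, and per-light visibility combine into outgoing radiance by summing over discretized light directions; solid-angle weighting is omitted for brevity.

```python
import numpy as np

def shade(albedo, normal, light_dirs, light_rgb, visibility):
    """albedo (3,), normal (3,), light_dirs (L, 3) unit vectors,
    light_rgb (L, 3) incoming radiance, visibility (L,) in [0, 1]."""
    cos = np.clip(light_dirs @ normal, 0.0, None)        # n . w_i, clamped to front-facing
    brdf = albedo / np.pi                                 # Lambertian BRDF
    return brdf * (light_rgb * (visibility * cos)[:, None]).sum(axis=0)

L = 128
dirs = np.random.randn(L, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rgb = shade(np.array([0.6, 0.5, 0.4]), np.array([0.0, 0.0, 1.0]),
            dirs, np.random.rand(L, 3), np.random.rand(L))
```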
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions. Our method represents the scene as a continuous volumetric function parameterized as MLPs whose inputs are a 3D location and whose outputs are the following scene properties at that input location: volume density, surface normal, material parameters, distance to the first surface intersection in any direction, and visibility of the external environment in any direction. Together, these allow us to render novel views of the object under arbitrary lighting, including indirect illumination effects. The predicted visibility and surface intersection fields are critical to our model's ability to simulate direct and indirect illumination during training, because the brute-force techniques used by prior work are intractable for lighting conditions outside of controlled setups with a single light. Our method outperforms alternative approaches for recovering relightable 3D scene representations, and performs well in complex lighting settings that have posed a significant challenge to prior work.
We propose to tackle the multi-view photometric stereo problem using an extension of Neural Radiance Fields (NeRFs) conditioned on light source direction. The geometric part of our neural representation predicts surface normal direction, allowing us to reason about local surface reflectance. The appearance part of our neural representation is decomposed into a neural bidirectional reflectance function (BRDF), learned as part of the fitting process, and a shadow prediction network (conditioned on light source direction), allowing us to model the apparent BRDF. This balance of learned components and inductive biases rooted in a physical image formation model allows us to extrapolate far from the light source and viewer directions observed during training. We demonstrate our approach on a multi-view photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.
(Figure panels: Multi-View Images; NeRD Volume; Decomposed BRDF with base color, metallic, roughness, and normal maps; Relighting & View Synthesis; Textured Mesh.) Figure 1: Neural Reflectance Decomposition for Relighting. We encode multiple views of an object under varying or fixed illumination into the NeRD volume. We decompose each given image into geometry, spatially-varying BRDF parameters and a rough approximation of the incident illumination in a globally consistent manner. We then extract a relightable textured mesh that can be re-rendered under novel illumination conditions in real time.
We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs. We build on Neural Radiance Fields (NeRF), which uses the weights of a multilayer perceptron to model the density and color of a scene as a function of 3D coordinates. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. We introduce a series of extensions to NeRF to address these issues, thereby enabling accurate reconstructions from unstructured image collections taken from the internet. We apply our system, dubbed NeRF-W, to internet photo collections of famous landmarks, and demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (à la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 17% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset that we present. Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22× faster.
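As a concrete illustration of "representing the scene at a continuously-valued scale", here is a minimal sketch (my paraphrase of the integrated positional encoding idea, not the released code; the frequency count and Gaussian variances are placeholder values): features of a Gaussian-shaped region are attenuated at frequencies the region cannot resolve, which is what suppresses aliasing.

```python
import numpy as np

def integrated_pos_enc(mu, var, num_freqs=4):
    """mu, var: (3,) mean and per-axis variance of a Gaussian fit to a conical frustum."""
    feats = []
    for l in range(num_freqs):
        scale = 2.0 ** l
        damp = np.exp(-0.5 * (scale ** 2) * var)   # expected attenuation under the Gaussian
        feats.append(np.sin(scale * mu) * damp)
        feats.append(np.cos(scale * mu) * damp)
    return np.concatenate(feats)

enc = integrated_pos_enc(np.array([0.3, -0.1, 0.8]), np.array([0.01, 0.01, 0.04]))
```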
Although neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle with "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges posed by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.
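A minimal sketch of the non-linear scene parameterization mentioned above (the published contraction formula, written out as an illustration rather than the authors' code): points inside the unit ball are left unchanged, and distant points are contracted into a shell of radius 2, so an unbounded scene fits in a bounded domain.

```python
import numpy as np

def contract(x):
    """Map an unbounded 3D point into a ball of radius 2."""
    norm = np.linalg.norm(x)
    if norm <= 1.0:
        return x
    return (2.0 - 1.0 / norm) * (x / norm)

near = contract(np.array([0.2, 0.3, 0.1]))   # unchanged
far = contract(np.array([50.0, 0.0, 0.0]))   # mapped to radius ~1.98
```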
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis results. However, they are slow to render, requiring hundreds of network evaluations per pixel to approximate a volume rendering integral. Baking NeRFs into explicit data structures enables efficient rendering, but results in a large increase in memory footprint and, in many cases, a reduction in quality. In this paper, we propose a novel neural light field representation that, in contrast, is compact and directly predicts integrated radiance along rays. Our method supports rendering with a single network evaluation per pixel for small-baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel. At the core of our approach is a ray-space embedding network that maps the 4D ray-space manifold into an intermediate, interpolable latent space. Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset. In addition, for forward-facing scenes with sparser inputs, we achieve results that are competitive with NeRF-based approaches in terms of quality, while providing a better speed/quality/memory trade-off with far fewer network evaluations.
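To make "directly predicts integrated radiance along rays" concrete, here is a minimal sketch (illustrative only; the layer sizes and the two-plane ray parameterization are my assumptions, not the paper's architecture) of a light-field network that maps a 4D ray coordinate to a color with one evaluation per ray.

```python
import torch
from torch import nn

class TinyLightField(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        # ray-space embedding followed by a color head
        self.embed = nn.Sequential(nn.Linear(4, embed_dim), nn.ReLU(),
                                   nn.Linear(embed_dim, embed_dim))
        self.color = nn.Sequential(nn.ReLU(), nn.Linear(embed_dim, 3), nn.Sigmoid())

    def forward(self, rays_uvst):
        # rays_uvst: (N, 4) two-plane coordinates (u, v, s, t) of each ray
        return self.color(self.embed(rays_uvst))

model = TinyLightField()
rgb = model(torch.rand(8, 4))   # one network evaluation per ray
```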
We propose a method for accurate 3D reconstruction. We build on the strengths of recent advances in neural reconstruction and rendering, such as Neural Radiance Fields (NeRF). A major shortcoming of such approaches is that they fail to reconstruct any part of the object that is not clearly visible in the training images, which is often the case for in-the-wild images and videos. When evidence is missing, structural priors such as symmetry can be used to complete the missing information. However, exploiting such priors in neural rendering is highly non-trivial: while geometry and non-reflective materials may be symmetric, shadows and reflections from the surrounding scene generally are not. To address this, we apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo, and reflectivity. We evaluate our method on the recently introduced CO3D dataset, focusing on the car category due to the challenge of reconstructing highly reflective materials. We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel view images.
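As an illustration of a soft symmetry constraint, here is a minimal sketch (my own construction; the paper's actual loss may differ) that penalizes disagreement between density/material predictions at a point and at its mirror image across an assumed symmetry plane, while leaving view-dependent appearance unconstrained.

```python
import numpy as np

def mirror_x(points):
    """Reflect points across the x = 0 symmetry plane."""
    m = points.copy()
    m[:, 0] *= -1.0
    return m

def symmetry_loss(field, points):
    """field: callable mapping (N, 3) points to (N, C) density/material values."""
    return np.mean((field(points) - field(mirror_x(points))) ** 2)

# toy usage: a field that happens to be symmetric gives zero loss
field = lambda p: np.linalg.norm(p, axis=1, keepdims=True)
loss = symmetry_loss(field, np.random.randn(32, 3))
```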
We present an efficient method for joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be used in any traditional graphics engine unmodified. We leverage recent work on differentiable rendering, coordinate-based networks that compactly represent volumetric texturing, and differentiable marching tetrahedra to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments show our extracted models used in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers).
Figure 1: (a) Each pixel in a NeX multiplane image consists of an alpha transparency value, a base color k0, and view-dependent reflectance coefficients k1…kn. A linear combination of these coefficients and basis functions learned from a neural network produces the final color value. (b, c) show our synthesized images that can be rendered in real time with view-dependent effects such as the reflection on the silver spoon.
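A minimal sketch (my reading of the caption, not the NeX implementation) of the per-pixel color model: a view-independent base color k0 plus reflectance coefficients k1…kn weighted by view-dependent basis-function values produced by the network.

```python
import numpy as np

def nex_pixel_color(k0, coeffs, basis_values):
    """k0: (3,) base color, coeffs: (n, 3) reflectance coefficients k1..kn,
    basis_values: (n,) basis functions evaluated at the current viewing direction."""
    return k0 + (basis_values[:, None] * coeffs).sum(axis=0)

k0 = np.array([0.2, 0.2, 0.25])
coeffs = np.random.randn(8, 3) * 0.05    # k1..k8
basis = np.random.rand(8)                # H1..H8 (would come from the learned network)
rgb = np.clip(nex_pixel_color(k0, coeffs, basis), 0.0, 1.0)
```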
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multiview posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800×800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: https://alexyu.net/plenoctrees
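To show what "predicting a spherical harmonic representation of radiance" buys, here is a minimal sketch (the standard real SH basis up to degree 1, not the PlenOctrees code) of recovering a view-dependent color from stored coefficients with no network evaluation at render time.

```python
import numpy as np

SH_C0 = 0.282095   # constants of the real spherical-harmonic basis, degrees 0 and 1
SH_C1 = 0.488603

def sh_color(coeffs, view_dir):
    """coeffs: (4, 3) SH coefficients per color channel; view_dir: unit vector (3,)."""
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return 1.0 / (1.0 + np.exp(-(basis @ coeffs)))   # sigmoid keeps color in [0, 1]

rgb = sh_color(np.random.randn(4, 3) * 0.3, np.array([0.0, 0.0, 1.0]))
```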
Classical light field rendering for novel view synthesis can accurately reproduce view-dependent effects such as reflection, refraction, and translucency, but requires a dense view sampling of the scene. Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects. We introduce a model that combines the strengths and mitigates the limitations of these two directions. By operating on a four-dimensional representation of the light field, our model learns to represent view-dependent effects accurately. By enforcing geometric constraints during training and inference, the scene geometry is implicitly learned from a sparse set of views. Concretely, we introduce a two-stage transformer-based model that first aggregates features along epipolar lines, then aggregates features across reference views to produce the color of a target ray. Our model outperforms the state of the art on multiple forward-facing and 360° datasets, with larger margins on scenes with severe view-dependent variation.
Recent advances in differentiable rendering have enabled high-quality reconstruction of 3D scenes from multi-view images. Most methods rely on simple rendering algorithms: pre-filtered direct lighting or learned representations of irradiance. We show that a more realistic shading model, incorporating ray tracing and Monte Carlo integration, substantially improves the decomposition into shape, materials, and lighting. Unfortunately, Monte Carlo integration provides estimates with significant noise, even at large sample counts, which makes gradient-based inverse rendering very challenging. To address this, we incorporate multiple importance sampling and denoising into a novel inverse rendering pipeline. This substantially improves convergence and enables gradient-based optimization at low sample counts. We present an efficient method to jointly reconstruct geometry (explicit triangle meshes), materials, and lighting, which substantially improves material and light separation compared to previous work. We argue that denoising can become an integral part of high-quality inverse rendering pipelines.
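The multiple importance sampling mentioned above relies on standard weighting; here is a minimal sketch (the textbook balance heuristic, not the paper's renderer) of combining one light sample and one BRDF sample of the same integrand so that neither strategy's high-variance regions dominate.

```python
def balance_heuristic(pdf_a, pdf_b):
    """MIS weight for a sample drawn from strategy A when strategy B is the alternative."""
    return pdf_a / (pdf_a + pdf_b)

def mis_estimate(f_light, pdf_light_l, pdf_brdf_l, f_brdf, pdf_light_b, pdf_brdf_b):
    """Combine a light-sampled and a BRDF-sampled evaluation of the same integrand."""
    w_l = balance_heuristic(pdf_light_l, pdf_brdf_l)
    w_b = balance_heuristic(pdf_brdf_b, pdf_light_b)
    return w_l * f_light / pdf_light_l + w_b * f_brdf / pdf_brdf_b

# toy numbers: one light sample and one BRDF sample for a single pixel
estimate = mis_estimate(f_light=0.8, pdf_light_l=2.0, pdf_brdf_l=0.5,
                        f_brdf=0.3, pdf_light_b=0.1, pdf_brdf_b=1.5)
```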
We present a novel method for acquiring object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. This enables various object-centric rendering applications such as novel view synthesis, relighting, and harmonized background composition from challenging in-the-wild inputs. Using a multi-stage approach that extends neural radiance fields, we first infer the surface geometry and refine the coarsely estimated initial camera parameters, while leveraging coarse foreground object masks to improve training efficiency and geometry quality. We also introduce a robust normal estimation technique that eliminates the effect of geometric noise while retaining important details. Finally, we extract surface material properties and environment lighting, represented in spherical harmonics, with extensions to handle transient elements such as sharp shadows. The combination of these components yields a highly modular and efficient object acquisition framework. Extensive evaluations and comparisons demonstrate the advantages of our approach in capturing high-quality geometry and appearance properties useful for rendering applications.
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
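A minimal sketch (illustration only, not the NSVF implementation) of the acceleration idea: ray samples are kept only if their enclosing voxel is flagged as occupied, so empty space contributes no network queries.

```python
import numpy as np

def sample_in_occupied_voxels(t_vals, points, occupancy, voxel_size):
    """Keep only samples whose enclosing voxel is marked occupied.
    points: (N, 3) sample positions; occupancy: set of occupied voxel index tuples."""
    keep = np.array([tuple(np.floor(p / voxel_size).astype(int)) in occupancy
                     for p in points])
    return t_vals[keep], points[keep]

# toy usage: only the two voxels nearest the origin contain scene content
occ = {(0, 0, 0), (0, 0, 1)}
t = np.linspace(0.0, 2.0, 8)
pts = np.stack([np.zeros(8), np.zeros(8), t], axis=1)
t_kept, pts_kept = sample_in_occupied_voxels(t, pts, occ, voxel_size=1.0)
```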
We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis. Plenoxels represent a scene as a sparse 3D grid with spherical harmonics. This representation can be optimized from calibrated images via gradient methods and regularization, without any neural components. On standard benchmark tasks, Plenoxels are optimized two orders of magnitude faster than Neural Radiance Fields, with no loss in visual quality.
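A minimal sketch (a generic trilinear lookup, not the Plenoxels CUDA kernels) of fetching density and spherical-harmonic coefficients from a voxel grid at a continuous point; because every operation here is differentiable with respect to the grid values, the grid can be fit directly by gradient descent without a neural network.

```python
import numpy as np

def trilinear(grid, p):
    """grid: (X, Y, Z, C) voxel values; p: continuous point in voxel coordinates."""
    i0 = np.floor(p).astype(int)
    f = p - i0
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - f[0]) if dx == 0 else f[0]) * \
                    ((1 - f[1]) if dy == 0 else f[1]) * \
                    ((1 - f[2]) if dz == 0 else f[2])
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out

grid = np.random.rand(16, 16, 16, 28)          # e.g. 1 density + 27 SH coefficients
val = trilinear(grid, np.array([3.2, 7.8, 9.5]))
```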