Figure 1 (NeX): (a) Each pixel in a NeX multiplane image consists of an alpha transparency value, a base color k_0, and view-dependent reflectance coefficients k_1, ..., k_n. A linear combination of these coefficients with neural basis functions learned by a network produces the final color value. (b, c) Synthesized images rendered in real time with view-dependent effects such as the reflection on the silver spoon.
In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt the two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values, and train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous methods that require dense view sampling to reliably render novel views, our method renders novel views by sampling rays and querying the color of each ray directly from the network, thus enabling high-quality light field rendering from a sparser set of training images. The network can optionally predict per-ray depth, enabling applications such as automatic refocusing. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
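To make the two-plane formulation concrete, here is a minimal NumPy sketch of the idea: a ray is intersected with two parallel planes to obtain its (u, v, s, t) coordinates, which a small fully connected network then maps to a color. The plane depths, network size, and random weights are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def two_plane_params(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with two parallel planes (z = z_uv and z = z_st)
    to obtain its 4D light field coordinates (u, v, s, t).
    Assumes direction[2] != 0 (the ray is not parallel to the planes)."""
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return np.array([u, v, s, t])

# Stand-in for the trained fully connected network: random weights here.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 4)), np.zeros(64)
W2, b2 = rng.normal(size=(3, 64)), np.zeros(3)

def light_field_color(ray_uvst):
    """Map 4D ray coordinates to an RGB color with a tiny MLP."""
    h = np.maximum(W1 @ ray_uvst + b1, 0.0)         # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))     # sigmoid -> [0, 1] RGB

ray = two_plane_params(np.array([0.0, 0.0, -1.0]), np.array([0.1, 0.0, 1.0]))
print(light_field_color(ray))
```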
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (nonconvolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
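The volume rendering step here is the standard emission-absorption quadrature, which is easy to sketch. Below is a minimal NumPy version over toy inputs; it composites per-sample densities and colors along one ray as C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, with transmittance T_i accumulated from the earlier samples.

```python
import numpy as np

def volume_render(sigmas, colors, t_vals):
    """Composite per-sample densities and colors along a ray with the
    standard quadrature. The last sample is dropped for simplicity."""
    deltas = np.diff(t_vals)                      # distances between samples
    alpha = 1.0 - np.exp(-sigmas[:-1] * deltas)   # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T_i
    weights = trans * alpha
    return (weights[:, None] * colors[:-1]).sum(axis=0)

# Toy example: 8 samples along one ray.
t = np.linspace(2.0, 6.0, 8)
sig = np.full(8, 0.5)
col = np.tile([0.8, 0.2, 0.1], (8, 1))
print(volume_render(sig, col, t))
```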
Classical light field rendering for novel view synthesis can accurately reproduce view-dependent effects such as reflection, refraction, and translucency, but requires a dense view sampling of the scene. Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects. We introduce a model that combines the strengths and mitigates the limitations of these two directions. By operating on a four-dimensional representation of the light field, our model learns to represent view-dependent effects accurately. By enforcing geometric constraints during training and inference, the scene geometry is implicitly learned from a sparse set of views. Concretely, we introduce a two-stage transformer-based model that first aggregates features along epipolar lines, then aggregates features across reference views to produce the color of a target ray. Our model outperforms the state of the art on multiple forward-facing and 360° datasets, with larger margins on scenes with severe view-dependent variations.
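The two-stage aggregation pattern can be sketched with plain dot-product attention. This is not the paper's transformer architecture, only a shape-level illustration with random stand-in features: stage one pools features sampled along each reference view's epipolar line, and stage two pools the resulting per-view features to decode a target-ray color.

```python
import numpy as np

def attend(query, keys, values):
    """Dot-product attention pooling: softmax(q . k) weighted sum of values."""
    logits = keys @ query / np.sqrt(query.size)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ values

rng = np.random.default_rng(1)
n_views, n_epi, d = 4, 16, 32
epi_feats = rng.normal(size=(n_views, n_epi, d))  # features on epipolar lines
ray_query = rng.normal(size=d)                    # embedding of the target ray

# Stage 1: per reference view, pool features sampled along the epipolar line.
view_feats = np.stack([attend(ray_query, f, f) for f in epi_feats])

# Stage 2: pool the per-view features across reference views, then decode RGB.
pooled = attend(ray_query, view_feats, view_feats)
W_rgb = rng.normal(size=(3, d)) * 0.1
rgb = 1.0 / (1.0 + np.exp(-(W_rgb @ pooled)))     # sigmoid -> [0, 1]
print(rgb)
```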
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multiview posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that, together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
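The core reparameterization is compact: rather than conditioning outgoing radiance on the raw view direction, Ref-NeRF conditions it on the view direction reflected about the local surface normal. A minimal sketch of that reflection (the normal and view vector here are arbitrary examples):

```python
import numpy as np

def reflect(w_o, n):
    """Reflect the outgoing (toward-camera) direction w_o about the surface
    normal n: w_r = 2 (w_o . n) n - w_o. Both inputs are unit vectors."""
    return 2.0 * np.dot(w_o, n) * n - w_o

n = np.array([0.0, 0.0, 1.0])      # surface normal
w_o = np.array([0.6, 0.0, 0.8])    # unit vector toward the camera
print(reflect(w_o, n))             # -> [-0.6, 0.0, 0.8]
```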
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800×800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: https://alexyu.net/plenoctrees.
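A sketch of the spherical harmonic factorization: each octree leaf stores SH coefficients per color channel, and view-dependent color is recovered by evaluating the SH basis in the viewing direction. For brevity this uses only a degree-1 basis (4 coefficients); the paper uses higher orders, and the stored coefficients below are random stand-ins.

```python
import numpy as np

def sh_basis_deg1(d):
    """Real spherical harmonic basis values up to degree 1 for a unit
    direction d = (x, y, z)."""
    x, y, z = d
    return np.array([0.282095,        # Y_0^0
                     0.488603 * y,    # Y_1^-1
                     0.488603 * z,    # Y_1^0
                     0.488603 * x])   # Y_1^1

def sh_color(coeffs, d):
    """View-dependent RGB from stored SH coefficients, shape (3, 4)."""
    rgb = coeffs @ sh_basis_deg1(d)
    return 1.0 / (1.0 + np.exp(-rgb))  # sigmoid keeps color in [0, 1]

rng = np.random.default_rng(2)
leaf_coeffs = rng.normal(size=(3, 4))  # what a PlenOctree leaf might store
view = np.array([0.0, 0.6, 0.8])       # unit viewing direction
print(sh_color(leaf_coeffs, view))
```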
Novel view synthesis of static scenes has achieved remarkable progress in producing photo-realistic results. However, key challenges remain for the immersive rendering of dynamic content. For example, one of the seminal image-based rendering frameworks, the multiplane image (MPI), produces high novel-view synthesis quality for static scenes but struggles to model dynamic parts. Moreover, modeling dynamic variations with MPIs can require huge storage and long inference times, which hinders their application in real-time scenarios. In this paper, we propose a novel Temporal-MPI representation that compactly encodes the rich 3D and dynamic variation information of an entire video. Thanks to the highly compact and expressive latent basis and coefficients learned jointly, novel views at arbitrary time instances can be rendered in real time with high visual quality. We show that, given comparable memory consumption, our proposed Temporal-MPI framework generates a time-instance MPI in only 0.002 seconds, up to 3000 times faster, with 3 dB higher average view-synthesis PSNR compared with other state-of-the-art dynamic-scene modeling frameworks.
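The basis/coefficient factorization at the heart of this representation can be sketched in a few lines: a shared temporal basis is learned once, and each MPI pixel stores only K coefficients, so reconstructing a plane at any frame is a single matrix product. Shapes and values below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Per-pixel coefficients for a tiny 4x4 MPI plane, K temporal basis vectors
# sampled at T frames.
rng = np.random.default_rng(3)
K, T, H, W = 5, 30, 4, 4
basis = rng.normal(size=(K, T))       # shared temporal basis, one row per k
coeffs = rng.normal(size=(H, W, K))   # per-pixel coefficients

def plane_at_time(frame):
    """Reconstruct one MPI plane at a given frame as a linear combination
    of the learned temporal basis: value(t) = sum_k coeff_k * basis_k(t)."""
    return coeffs @ basis[:, frame]   # (H, W)

print(plane_at_time(frame=12))
```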
We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs. We build on Neural Radiance Fields (NeRF), which uses the weights of a multilayer perceptron to model the density and color of a scene as a function of 3D coordinates. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. We introduce a series of extensions to NeRF to address these issues, thereby enabling accurate reconstructions from unstructured image collections taken from the internet. We apply our system, dubbed NeRF-W, to internet photo collections of famous landmarks, and demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
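One of NeRF-W's extensions, per-image appearance embeddings, is easy to sketch: the color head is conditioned on a learned latent code for each training image, so variable illumination can be absorbed by the code rather than corrupting the shared geometry. The code dimensions and random weights below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)
n_images, d_app = 10, 8
app_codes = rng.normal(size=(n_images, d_app))  # learned per-image embeddings
W = rng.normal(size=(3, 3 + d_app)) * 0.3       # stand-in color head

def color(point_feat, image_id):
    """Condition the predicted color on a per-image appearance embedding so
    that illumination changes are explained by the code, not the geometry."""
    x = np.concatenate([point_feat, app_codes[image_id]])
    return 1.0 / (1.0 + np.exp(-(W @ x)))

feat = np.array([0.2, -0.1, 0.4])
print(color(feat, image_id=0), color(feat, image_id=1))  # same point, two looks
```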
We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis. Plenoxels represent a scene as a sparse 3D grid with spherical harmonics. This representation can be optimized from calibrated images via gradient methods and regularization, without any neural components. On standard benchmark tasks, Plenoxels are optimized two orders of magnitude faster than Neural Radiance Fields with no loss in visual quality.
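The differentiable lookup that makes this possible is plain trilinear interpolation of values stored at voxel corners. A minimal sketch, using a small dense grid as a stand-in for the sparse structure:

```python
import numpy as np

def trilinear(grid, p):
    """Trilinearly interpolate a value grid at continuous point p.
    Plenoxels apply the same interpolation to density and SH coefficients
    stored at sparse voxel corners; a dense grid is used here for brevity."""
    i, j, k = np.floor(p).astype(int)
    fx, fy, fz = p - np.floor(p)
    c = grid[i:i+2, j:j+2, k:k+2]
    c = c[0] * (1 - fx) + c[1] * fx       # lerp along x
    c = c[0] * (1 - fy) + c[1] * fy       # lerp along y
    return c[0] * (1 - fz) + c[1] * fz    # lerp along z

rng = np.random.default_rng(4)
density = rng.uniform(size=(8, 8, 8))
print(trilinear(density, np.array([3.2, 4.7, 1.5])))
```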
Figure 1 (NeRD): Neural Reflectance Decomposition for Relighting. We encode multiple views of an object under varying or fixed illumination into the NeRD volume. Each given image is decomposed into geometry, spatially-varying BRDF parameters (base color, metallic, roughness, normal), and a rough approximation of the incident illumination in a globally consistent manner. We then extract a relightable textured mesh that can be re-rendered under novel illumination conditions in real time.
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of an object illuminated by one unknown lighting condition. This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and bidirectional reflectance distribution functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor recovers convincing 3D models for free-viewpoint relighting in this challenging and under-constrained capture setup, for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
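The re-rendering loss is built on a discretized rendering integral over incident light directions, gated by per-direction visibility. A toy sketch of that sum, with a Lambertian BRDF standing in for NeRFactor's learned BRDF and random stand-ins for the light probe and visibility field:

```python
import numpy as np

def render_point(albedo, normal, light_dirs, light_rgb, visibility, d_omega):
    """One-point approximation of the rendering integral:
    L_o = sum_i V(w_i) * L_i(w_i) * f * max(n . w_i, 0) * dw,
    with a Lambertian BRDF f = albedo / pi as a stand-in."""
    cos = np.clip(light_dirs @ normal, 0.0, None)       # (n_lights,)
    f = albedo / np.pi                                  # (3,)
    return ((visibility * cos)[:, None] * light_rgb).sum(0) * f * d_omega

rng = np.random.default_rng(5)
n_lights = 64
dirs = rng.normal(size=(n_lights, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)     # unit directions
env = rng.uniform(size=(n_lights, 3))                   # light probe RGB
vis = rng.integers(0, 2, n_lights).astype(float)        # hard visibility
print(render_point(np.array([0.7, 0.4, 0.2]),
                   np.array([0.0, 0.0, 1.0]), dirs, env, vis,
                   d_omega=4 * np.pi / n_lights))
```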
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. Using HDR-NeRF, we are able to generate both novel HDR views and novel LDR views under different exposures. The key to our method is to model the physical imaging process, in which the radiance of a scene point is transformed into a pixel value in the LDR image by two implicit functions: a radiance field and a tone mapper. The radiance field encodes the scene radiance (with values varying from 0 to +∞) and outputs the density and radiance of a ray given the corresponding ray origin and ray direction. The tone mapper models the process by which a ray hitting the camera sensor becomes a pixel value: the color of a ray is predicted by feeding its radiance and the corresponding exposure time into the tone mapper. We use the classic volume rendering technique to project the output radiance, colors, and densities into HDR and LDR images, using only the input LDR images as supervision. We collect a new forward-facing HDR dataset to evaluate the proposed method. Experimental results on synthetic and real-world scenes validate that our method can not only accurately control the exposure of synthesized views but also render views with a high dynamic range.
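The tone-mapping step can be sketched independently of the radiance field: the LDR value is a monotone camera response applied to the log product of radiance and exposure time. HDR-NeRF learns this response as an MLP; the fixed piecewise-linear curve below is only a stand-in, showing how one radiance yields different pixel values under different exposures.

```python
import numpy as np

def tone_map(log_radiance, exposure_time, crf_knots, crf_values):
    """Map scene radiance e and exposure dt to an LDR value via a camera
    response curve applied to ln(e * dt) = ln(e) + ln(dt)."""
    x = log_radiance + np.log(exposure_time)
    return np.interp(x, crf_knots, crf_values)   # piecewise-linear response

# Illustrative monotone response curve over log-exposure values.
knots = np.linspace(-6.0, 2.0, 9)
values = np.linspace(0.0, 1.0, 9) ** 0.45        # gamma-like shape

for dt in (0.25, 1.0, 4.0):                      # same radiance, 3 exposures
    print(dt, tone_map(np.log(0.5), dt, knots, values))
```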
We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. This enables various object-centric rendering applications such as novel view synthesis, relighting, and harmonized background composition from challenging in-the-wild input. Using a multi-stage approach that extends neural radiance fields, we first infer the surface geometry and refine the coarsely estimated initial camera parameters, while leveraging coarse foreground object masks to improve training efficiency and geometry quality. We also introduce a robust normal estimation technique that eliminates the effect of geometric noise while retaining crucial details. Finally, we extract surface material properties and ambient illumination, represented in spherical harmonics, with extensions that handle transient elements such as sharp shadows. The combination of these components results in a highly modular and efficient object acquisition framework. Extensive evaluations and comparisons demonstrate the advantage of our approach in capturing high-quality geometry and appearance properties useful for rendering applications.
Neural radiance fields (NeRF) have recently achieved impressive results in novel view synthesis. However, previous NeRF works mainly focus on object-centric scenarios. In this work, we propose 360ROAM, a novel scene-level NeRF system that can synthesize images of large indoor scenes in real time and support VR roaming. Our system first builds an omnidirectional neural radiance field, 360NeRF, from multiple input 360° images. We then progressively estimate a 3D probabilistic occupancy map that represents the scene geometry in the form of spatial density. Skipping empty space and upsampling occupied voxels essentially allows us to accelerate volume rendering with 360NeRF in a geometry-aware fashion. Furthermore, we use an adaptive partitioning and warping strategy to slim and fine-tune the radiance fields for further improvement. A floorplan of the scene extracted from the occupancy map can provide guidance for ray sampling and facilitate a realistic roaming experience. To show the efficacy of our system, we collect a 360° image dataset of various scenes and conduct extensive experiments. Quantitative and qualitative comparisons against baselines illustrate our superior performance in novel view synthesis for complex indoor scenes.
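The geometry-aware acceleration reduces to a simple filter during ray marching: sample points whose voxels in the probabilistic occupancy map fall below a threshold are skipped, and only the survivors are evaluated by the radiance field. A minimal sketch with a random stand-in occupancy grid:

```python
import numpy as np

def occupied_samples(origin, direction, occ_grid, t_vals, thresh=0.5):
    """Keep only the ray samples whose voxel in a probabilistic occupancy
    grid exceeds `thresh`; only these would be sent to the radiance field."""
    pts = origin + t_vals[:, None] * direction               # (n, 3)
    idx = np.clip(pts.astype(int), 0, np.array(occ_grid.shape) - 1)
    keep = occ_grid[idx[:, 0], idx[:, 1], idx[:, 2]] > thresh
    return t_vals[keep]

rng = np.random.default_rng(6)
occ = (rng.uniform(size=(16, 16, 16)) > 0.8).astype(float)   # mostly empty
ts = np.linspace(0.0, 15.0, 128)
kept = occupied_samples(np.zeros(3), np.array([0.57, 0.57, 0.57]), occ, ts)
print(f"evaluating {kept.size} of {ts.size} samples")
```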
We propose to tackle the multi-view photometric stereo problem using an extension of neural radiance fields (NeRF) conditioned on light source direction. The geometric part of our neural representation predicts surface normal directions, allowing us to reason about local surface reflectance. The appearance part of our neural representation is decomposed into a neural bidirectional reflectance distribution function (BRDF), learned as part of the fitting process, and a shadow prediction network (conditioned on light source direction), allowing us to model the apparent BRDF. This balance of learned components and inductive biases based on physical image formation models allows us to extrapolate far from the light source and viewer directions observed during training. We demonstrate our approach on a multi-view photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.
Neural radiance fields (NeRFs) produce state-of-the-art view synthesis results, but they are slow to render, requiring hundreds of network evaluations per pixel to approximate a volume rendering integral. Baking NeRFs into explicit data structures enables efficient rendering but results in a large increase in memory footprint and, in many cases, a quality reduction. In this paper, we propose a novel neural light field representation that, by contrast, is compact and directly predicts integrated radiance along rays. Our method supports rendering with a single network evaluation per pixel for small-baseline light field datasets and can also be applied to larger baselines with only a few evaluations per pixel. At the core of our approach is a ray-space embedding network that maps the 4D ray-space manifold into an intermediate, interpolable latent space. Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset. In addition, for forward-facing scenes with sparser inputs, we achieve results that are competitive with NeRF-based approaches in terms of quality while providing a better speed/quality/memory trade-off with far fewer network evaluations.
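The single-evaluation pipeline can be sketched end to end: the 4D ray coordinates are first mapped by an embedding network into a latent space designed to interpolate well, and a decoder then predicts the integrated radiance directly, with no per-sample marching. Random matrices below stand in for both trained networks.

```python
import numpy as np

rng = np.random.default_rng(8)
W_embed = rng.normal(size=(32, 4)) * 0.5   # ray-space embedding (stand-in)
W_rgb = rng.normal(size=(3, 32)) * 0.2     # color head (stand-in)

def render_ray(uvst):
    """One network evaluation per ray: embed the 4D ray coordinates into a
    latent space, then decode the integrated radiance directly."""
    latent = np.tanh(W_embed @ uvst)                   # embedding step
    return 1.0 / (1.0 + np.exp(-(W_rgb @ latent)))     # RGB in [0, 1]

print(render_ray(np.array([0.1, -0.2, 0.15, -0.1])))
```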
We revisit NPBG, a popular approach to novel view synthesis that introduced the now-ubiquitous point-based neural rendering paradigm. We are particularly interested in data-efficient learning with fast view synthesis. We achieve this through a view-dependent rasterization of mesh point descriptors, in addition to a foreground/background scene rendering split and an improved loss. By training on only one scene, we outperform NPBG trained on ScanNet and then fine-tuned on the scene. We are also competitive with the state-of-the-art method SVS, which has been trained on the full datasets (DTU, Tanks and Temples) and then fine-tuned on the scene, despite its deeper neural renderer.
We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray to its target pixel color. The network is conditioned on local ray features produced by a coarse volumetric rendering of an explicit 3D feature volume, which is built from the input images using a 3D ConvNet. Our method achieves competitive performance against state-of-the-art neural radiance field based methods on synthetic and real MVS data, while offering a 100x rendering speed-up.