Recent advances in neural radiance fields have enabled the high-fidelity 3D reconstruction of complex scenes for novel view synthesis. However, it remains underexplored how the appearance of such representations can be efficiently edited while maintaining photorealism. In this work, we present PaletteNeRF, a novel method for photorealistic appearance editing of neural radiance fields (NeRF) based on 3D color decomposition. Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases (i.e., 3D segmentations defined by a group of NeRF-type functions) that are shared across the scene. While our palette-based bases are view-independent, we also predict a view-dependent function to capture the color residual (e.g., specular shading). During training, we jointly optimize the basis functions and the color palettes, and we also introduce novel regularizers to encourage the spatial coherence of the decomposition. Our method allows users to efficiently edit the appearance of the 3D scene by modifying the color palettes. We also extend our framework with compressed semantic features for semantic-aware appearance editing. We demonstrate that our technique is superior to baseline methods both quantitatively and qualitatively for appearance editing of complex real-world scenes.
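Read concretely, the decomposition above amounts to the following per-point color model (the notation is my own shorthand, not the paper's):

```latex
% Hedged sketch of the palette decomposition; all symbols are my own shorthand.
% c(x, d): outgoing color of point x viewed from direction d
% w_i(x):  view-independent blending weight of palette basis i at x
% p_i:     palette color of basis i, shared across the scene
% r(x, d): view-dependent residual (e.g., specular shading)
\[
  c(\mathbf{x}, \mathbf{d}) \;\approx\; \sum_{i=1}^{P} w_i(\mathbf{x})\,\mathbf{p}_i \;+\; r(\mathbf{x}, \mathbf{d})
\]
```

Editing then reduces to changing the palette colors p_i while the learned weights and residual stay fixed.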
Neural Radiance Field (NeRF) is a powerful tool to faithfully generate novel views for scenes with only sparse captured images. Despite its strong capability for representing 3D scenes and their appearance, its editing ability is very limited. In this paper, we propose a simple but effective extension of vanilla NeRF, named PaletteNeRF, to enable efficient color editing on NeRF-represented scenes. Motivated by recent palette-based image decomposition works, we approximate each pixel color as a sum of palette colors modulated by additive weights. Instead of predicting pixel colors as in vanilla NeRFs, our method predicts additive weights. The underlying NeRF backbone could also be replaced with more recent NeRF models such as KiloNeRF to achieve real-time editing. Experimental results demonstrate that our method achieves efficient, view-consistent, and artifact-free color editing on a wide range of NeRF-represented scenes.
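To make the additive-weight formulation concrete, here is a minimal numpy sketch (hypothetical names, not the paper's code) of recoloring a rendered view by swapping one palette entry, which is possible because the network predicts weights rather than final pixel colors:

```python
import numpy as np

def recolor(weight_maps, palette):
    """weight_maps: (H, W, P) additive palette weights predicted per pixel;
    palette: (P, 3) RGB palette colors. Each pixel is the weighted sum of
    palette colors, so editing the palette recolors the view consistently."""
    return np.einsum('hwp,pc->hwc', weight_maps, palette)

# toy edit: swap the third palette color and re-composite
H, W, P = 4, 4, 4
weight_maps = np.random.rand(H, W, P)
weight_maps /= weight_maps.sum(-1, keepdims=True)   # normalized for the demo
palette = np.random.rand(P, 3)
edited_palette = palette.copy()
edited_palette[2] = [1.0, 0.5, 0.0]                  # turn the third base orange
before = recolor(weight_maps, palette)
after = recolor(weight_maps, edited_palette)
```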
We present a novel approach to acquiring object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. This enables various object-centric rendering applications such as novel-view synthesis, relighting, and harmonized background composition from challenging in-the-wild inputs. Using a multi-stage approach that extends neural radiance fields, we first infer the surface geometry and refine the coarsely estimated initial camera parameters, while leveraging coarse foreground object masks to improve training efficiency and geometry quality. We also introduce a robust normal estimation technique that eliminates the effect of geometric noise while retaining crucial details. Finally, we extract surface material properties and ambient illumination, represented in spherical harmonics, with extensions to handle transient elements such as sharp shadows. The combination of these components results in a highly modular and efficient object acquisition framework. Extensive evaluations and comparisons demonstrate the advantage of our approach in capturing high-quality geometry and appearance properties useful for rendering applications.
Neural Radiance Fields (NeRFs) encode the radiance in a scene parameterized by the scene's plenoptic function. This is achieved by using an MLP together with a mapping to a higher-dimensional space, and has been proven to capture scenes with a great level of detail. Naturally, the same parameterization can be used to encode additional properties of the scene, beyond just its radiance. A particularly interesting property in this regard is the semantic decomposition of the scene. We introduce a novel technique for semantic soft decomposition of neural radiance fields (named SSDNeRF) which jointly encodes semantic signals in combination with radiance signals of a scene. Our approach provides a soft decomposition of the scene into semantic parts, enabling us to correctly encode multiple semantic classes blending along the same direction -- an impossible feat for existing methods. Not only does this lead to a detailed, 3D semantic representation of the scene, but we also show that the regularizing effects of the MLP used for encoding help to improve the semantic representation. We show state-of-the-art segmentation and reconstruction results on a dataset of common objects and demonstrate how the proposed approach can be applied for high quality temporally consistent video editing and re-compositing on a dataset of casually captured selfie videos.
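A rough sketch of how one ray could carry both signals (my own simplification; the actual SSDNeRF architecture is not reproduced here): the same volume-rendering weights that composite color also composite per-sample class probabilities, yielding a soft semantic label per ray.

```python
import numpy as np

def composite_ray(sigmas, colors, sem_logits, deltas):
    """Volume-render color and soft semantic labels with shared weights.
    sigmas:     (S,)    densities at S samples along the ray
    colors:     (S, 3)  radiance at the samples
    sem_logits: (S, K)  per-sample logits over K semantic classes
    deltas:     (S,)    distances between consecutive samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                          # standard NeRF ray weights
    rgb = (weights[:, None] * colors).sum(0)
    probs = np.exp(sem_logits) / np.exp(sem_logits).sum(-1, keepdims=True)
    semantics = (weights[:, None] * probs).sum(0)     # soft class mixture per ray
    return rgb, semantics

# toy ray with 8 samples and 5 semantic classes
S, K = 8, 5
rgb, sem = composite_ray(np.random.rand(S), np.random.rand(S, 3),
                         np.random.randn(S, K), np.full(S, 0.1))
```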
We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs. We build on Neural Radiance Fields (NeRF), which uses the weights of a multilayer perceptron to model the density and color of a scene as a function of 3D coordinates. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. We introduce a series of extensions to NeRF to address these issues, thereby enabling accurate reconstructions from unstructured image collections taken from the internet. We apply our system, dubbed NeRF-W, to internet photo collections of famous landmarks, and demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of the object illuminated by one unknown lighting condition. This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and Bidirectional Reflectance Distribution Functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor recovers convincing 3D models in this challenging and under-constrained capture setting for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep-learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
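A simplified single-point shading step in the spirit of this factorization (my own sketch with assumed array shapes, not NeRFactor's code): outgoing radiance is a visibility-masked sum over incident light directions, weighted by a diffuse-plus-learned BRDF and the cosine foreshortening term.

```python
import numpy as np

def render_point(albedo, normal, visibility, light_dirs, light_rgb, brdf_spec):
    """Shade one surface point under distant environment lighting.
    albedo:     (3,)   diffuse albedo at the point
    normal:     (3,)   unit surface normal
    visibility: (L,)   0/1 (or soft) light visibility per light direction
    light_dirs: (L, 3) unit incident light directions
    light_rgb:  (L, 3) radiance per environment sample (solid-angle weights folded in)
    brdf_spec:  (L, 3) non-diffuse BRDF value per direction (data-driven in the paper)
    """
    cos = np.clip(light_dirs @ normal, 0.0, None)            # (L,)
    brdf = albedo / np.pi + brdf_spec                         # (L, 3)
    contrib = visibility[:, None] * light_rgb * brdf * cos[:, None]
    return contrib.sum(0)                                     # outgoing radiance

# toy usage with random light directions
L = 16
d = np.random.randn(L, 3)
d /= np.linalg.norm(d, axis=1, keepdims=True)
out = render_point(np.array([0.6, 0.4, 0.3]), np.array([0.0, 0.0, 1.0]),
                   np.ones(L), d, np.full((L, 3), 0.1), np.zeros((L, 3)))
```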
Implicit radiance functions have emerged as a powerful scene representation for reconstructing and rendering photorealistic views of a 3D scene. However, these representations are poorly editable. On the other hand, explicit representations such as polygonal meshes allow easy editing but are not well suited to reconstructing accurate details of dynamic human heads, such as fine facial features, hair, teeth, and eyes. In this work, we present Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods. NeP is capable of photorealistic rendering while allowing fine-grained editing of the scene geometry and appearance. We first disentangle geometry and appearance by parameterizing the 3D geometry into a 2D texture space. We enable geometric editability by introducing an explicit linear deformation layer. The deformation is controlled by a set of sparse key points, which can be explicitly and intuitively displaced to edit the geometry. For appearance, we develop a hybrid 2D texture consisting of an explicit texture map for easy editing and implicit view- and time-dependent residuals to model temporal and view variations. We compare our method against several reconstruction and editing baselines. The results show that NeP achieves almost the same rendering accuracy while maintaining high editability.
This paper presents a Progressively-connected Light Field network (ProLiF) for novel view synthesis of complex forward-facing scenes. ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. Directly learning a neural light field from images has difficulty rendering multi-view consistent images because it is unaware of the underlying 3D geometry. To address this problem, we propose a progressive training scheme and regularization losses to infer the underlying geometry during training, both of which enforce multi-view consistency and thus greatly improve the rendering quality. Experiments show that our method achieves significantly better rendering quality than vanilla neural light fields and is comparable to NeRF-like rendering methods on the challenging LLFF dataset and the Shiny Object dataset. Moreover, we demonstrate better compatibility with LPIPS loss to achieve robustness to varying light conditions and with CLIP loss to control the rendering style of the scene. Project page: https://totoro97.github.io/projects/prolif.
Figure 1. Given a monocular image sequence, NR-NeRF reconstructs a single canonical neural radiance field to represent geometry and appearance, and a per-time-step deformation field. We can render the scene into a novel spatio-temporal camera trajectory that significantly differs from the input trajectory. NR-NeRF also learns rigidity scores and correspondences without direct supervision on either. We can use the rigidity scores to remove the foreground, we can supersample along the time dimension, and we can exaggerate or dampen motion.
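A minimal sketch (hypothetical function names) of the canonical-plus-deformation idea in the caption: observed points are bent into the canonical frame before querying a single static radiance field, with a rigidity score damping the offsets of rigid regions.

```python
import numpy as np

def query_deformed(x, t, deform_fn, rigidity_fn, canonical_nerf):
    """Query a canonical NeRF through a per-time-step deformation (a sketch,
    not NR-NeRF's code).
    x:              (N, 3) sample points in the observed frame at time t
    deform_fn:      maps (x, t) -> (N, 3) offsets toward the canonical frame
    rigidity_fn:    maps x -> (N, 1) rigidity score in [0, 1] (1 = rigid background)
    canonical_nerf: maps canonical points -> (density, color)
    """
    offsets = deform_fn(x, t)
    rigidity = rigidity_fn(x)
    x_canonical = x + (1.0 - rigidity) * offsets   # rigid points barely move
    return canonical_nerf(x_canonical)

# toy usage with dummy fields
pts = np.random.rand(5, 3)
density, color = query_deformed(
    pts, t=0.3,
    deform_fn=lambda x, t: 0.1 * np.sin(t) * np.ones_like(x),
    rigidity_fn=lambda x: 0.8 * np.ones((x.shape[0], 1)),
    canonical_nerf=lambda x: (np.ones(x.shape[0]), np.zeros((x.shape[0], 3))),
)
```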
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that, together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
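The core reparameterization can be stated in a few lines (a hedged sketch in my notation): instead of conditioning outgoing radiance on the raw view direction, Ref-NeRF conditions it on that direction reflected about the predicted surface normal.

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflect the unit direction pointing from the surface toward the camera
    about the unit surface normal; this reflected direction, rather than the
    raw view direction, is what the directional MLP is conditioned on."""
    return 2.0 * (view_dir @ normal) * normal - view_dir

wo = np.array([0.0, 0.0, 1.0])                 # direction toward the camera
n = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)   # predicted surface normal
wr = reflect(wo, n)                            # input to the directional branch
```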
In this paper, we study the problem of 3D scene geometry decomposition and manipulation from 2D views. By leveraging recent implicit neural representation techniques, particularly the appealing neural radiance fields, we introduce an object field component to learn unique codes for all individual objects in 3D space from only 2D supervision. The key to this component is a series of carefully designed loss functions that enable every 3D point, especially in non-occupied space, to be effectively optimized even without 3D labels. In addition, we introduce an inverse query algorithm to freely manipulate any specified 3D object shape in the learned scene representation. Notably, our manipulation algorithm can explicitly tackle key issues such as object collisions and visual occlusions. Our method, called DM-NeRF, is among the first to simultaneously reconstruct, decompose, manipulate, and render complex 3D scenes in a single pipeline. Extensive experiments on three datasets clearly show that our method can accurately decompose all 3D objects from 2D views, allowing any object of interest to be freely manipulated in 3D space, including translation, rotation, size adjustment, and deformation.
We propose Few-shot Dynamic Neural Radiance Fields (FDNeRF), the first NeRF-based method capable of reconstructing and editing the expressions of 3D faces from a small number of dynamic images. Unlike existing dynamic NeRFs, which require dense images as input and can only model a single identity, our method enables face reconstruction across different persons from few-shot inputs. Compared to state-of-the-art few-shot NeRFs designed for modeling static scenes, the proposed FDNeRF accepts view-inconsistent dynamic inputs and supports arbitrary facial expression editing, i.e., producing faces with novel expressions beyond the input ones. To handle the inconsistencies among dynamic inputs, we introduce a well-designed conditional feature warping (CFW) module that performs expression-conditioned warping in 2D feature space, which is also identity-adaptive and 3D-constrained. As a result, features of different expressions are transformed into those of the target expression. We then construct a radiance field based on these view-consistent features and use volume rendering to synthesize novel views of the modeled faces. Extensive experiments with quantitative and qualitative evaluations demonstrate that our method outperforms existing dynamic and few-shot NeRFs on both 3D face reconstruction and expression editing tasks. Our code and models will be made available upon acceptance.
We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. Using HDR-NeRF, we are able to generate both novel HDR views and novel LDR views under different exposures. The key to our method is to model the physical imaging process, which dictates how the radiance of a scene point is converted into a pixel value in an LDR image, with two implicit functions: a radiance field and a tone mapper. The radiance field encodes the scene radiance (whose values range from 0 to +infinity) and outputs the density and radiance of a ray given the corresponding ray origin and ray direction. The tone mapper models the mapping by which a ray hitting the camera sensor becomes a pixel value: the color of a ray is predicted by feeding its radiance and the corresponding exposure time into the tone mapper. We use classic volume rendering techniques to project the output radiance, colors, and densities into HDR and LDR images, while using only the input LDR images as supervision. We collect a new forward-facing HDR dataset to evaluate the proposed method. Experimental results on synthetic and real-world scenes validate that our method can not only accurately control the exposure of synthesized views but also render views with a high dynamic range.
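A toy version of the two implicit functions described above (a sketch only; a fixed clamp-and-gamma curve stands in for the learned tone-mapper MLP): composited HDR radiance is scaled by the exposure time and pushed through a tone curve to produce an LDR pixel.

```python
import numpy as np

def ldr_pixel(hdr_radiance, exposure_time, gamma=2.2):
    """hdr_radiance: (3,) composited scene radiance for a ray (0..inf);
    exposure_time: shutter time. In HDR-NeRF the tone mapper is a learned
    implicit function; here a clamp-and-gamma curve stands in for it."""
    exposed = hdr_radiance * exposure_time
    return np.clip(exposed, 0.0, 1.0) ** (1.0 / gamma)

radiance = np.array([0.9, 2.5, 0.1])        # HDR values may exceed 1
short_exp = ldr_pixel(radiance, 0.1)        # darker LDR view
long_exp = ldr_pixel(radiance, 1.0)         # brighter, partially saturated view
```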
3D scene photorealistic stylization aims to generate photorealistic images from arbitrary novel views according to a given style image, while ensuring consistency when rendering from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized scenes by combining the features of the style image with the multi-view images used to train the 3D scene. However, these methods generate novel-view images that contain objectionable artifacts. Moreover, they cannot achieve universal photorealistic stylization of a 3D scene: the 3D scene representation network based on the neural radiance field has to be retrained for every new style image. We propose a novel 3D scene photorealistic style transfer framework to address these issues. It can realize photorealistic 3D scene style transfer with a 2D style image. We first pre-train a 2D photorealistic style transfer network, which can perform photorealistic style transfer between any given content image and style image. We then use voxel features to optimize the 3D scene and obtain a geometric representation of the scene. Finally, we jointly optimize a hypernetwork to realize photorealistic style transfer of the scene for arbitrary style images. In the transfer stage, we use the pre-trained 2D photorealistic network to constrain the photorealistic style across different views and different style images in the 3D scene. Experimental results show that our method not only achieves 3D photorealistic style transfer for arbitrary style images but also outperforms existing methods in terms of visual quality and consistency. Project page: https://semchan.github.io/upst_nerf.
This paper presents a stylized novel view synthesis method. Applying state-of-the-art stylization methods to novel views frame by frame often causes jittering artifacts due to the lack of cross-view consistency. Therefore, this paper investigates 3D scene stylization, which provides a strong inductive bias for consistent novel view synthesis. Specifically, we adopt the emerging neural radiance fields (NeRF) as our chosen 3D scene representation, for their capability to render high-quality novel views of a variety of scenes. However, as rendering a novel view from a NeRF requires a large number of samples, training a stylized NeRF requires a large amount of GPU memory that goes beyond off-the-shelf GPU capacity. We introduce a new training method that addresses this problem by alternating the NeRF and stylization optimization steps. Such a method enables us to make full use of our hardware memory capacity to both generate images at higher resolution and adopt more expressive image style transfer methods. Our experiments show that our method produces stylized NeRFs for a wide range of content, including indoor, outdoor, and dynamic scenes, and synthesizes high-quality novel views with cross-view consistency.
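The alternating schedule reads roughly like the loop below (structure only; `nerf.fit`, `nerf.render`, and `stylizer` are assumed interfaces, not the authors' API): the radiance field is fit to the current targets, the training views are re-rendered and re-stylized without backpropagating through the volume renderer, and the two steps repeat.

```python
def alternating_stylization(nerf, stylizer, views, rounds=10, nerf_steps=1000):
    """Sketch of alternating NeRF fitting and image stylization.
    views: list of (image, pose) pairs; nerf and stylizer are assumed objects."""
    targets = [img for img, _ in views]
    poses = [pose for _, pose in views]
    for _ in range(rounds):
        # 1) fit the radiance field to the current (stylized) target images
        nerf.fit(targets, poses, steps=nerf_steps)
        # 2) re-render each training view and push it toward the style,
        #    without backpropagating through the volume renderer
        targets = [stylizer(nerf.render(pose)) for pose in poses]
    return nerf
```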
Neural volume rendering enables photo-realistic renderings of a human performer in free viewpoint, a critical task in immersive VR/AR applications. However, this practice is severely limited by the high computational cost of the rendering process. To solve this problem, we propose UV Volumes, a new approach that can render an editable free-view video of a human performer in real time. It separates the high-frequency (i.e., non-smooth) human appearance from the 3D volume and encodes it into 2D neural texture stacks (NTS). The smooth UV volumes allow much smaller and shallower neural networks to obtain densities and texture coordinates in 3D, while detailed appearance is captured in the 2D NTS. For editability, the mapping between the parameterized human model and the smooth texture coordinates allows better generalization to novel poses and shapes. Furthermore, the use of NTS enables interesting applications such as retexturing. Extensive experiments on the CMU Panoptic, ZJU MoCap, and H36M datasets show that our model can render 960 * 540 images at 30 fps on average, with photorealism comparable to state-of-the-art methods. The project and supplementary materials are available at https://github.com/fanegg/uv-volumes.
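A skeletal version of the two-stage lookup described above (hypothetical names; a plain texture image stands in for the neural texture stack): a small 3D field returns density and UV coordinates, and color comes from sampling the 2D texture at those coordinates.

```python
import numpy as np

def shade_samples(uv_volume, neural_texture, pts):
    """uv_volume: maps (N, 3) points -> (density (N,), uv (N, 2) in [0, 1));
    neural_texture: (H, W, 3) texture image standing in for the 2D neural
    texture stack; pts: (N, 3) sample points. Returns density and color."""
    density, uv = uv_volume(pts)
    h, w, _ = neural_texture.shape
    rows = (uv[:, 1] * (h - 1)).astype(int)          # nearest-neighbour lookup
    cols = (uv[:, 0] * (w - 1)).astype(int)
    color = neural_texture[rows, cols]
    return density, color

# toy usage with a random texture and a dummy UV volume
tex = np.random.rand(64, 64, 3)
vol = lambda p: (np.ones(p.shape[0]), np.abs(p[:, :2]) % 1.0)
density, color = shade_samples(vol, tex, np.random.randn(8, 3))
```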
Figure 1: Neural Reflectance Decomposition for Relighting. We encode multiple views of an object under varying or fixed illumination into the NeRD volume. We decompose each given image into geometry, spatially-varying BRDF parameters, and a rough approximation of the incident illumination in a globally consistent manner. We then extract a relightable textured mesh that can be re-rendered under novel illumination conditions in real time. (Figure panels: multi-view images; NeRD volume; decomposed BRDF with basecolor, metallic, roughness, and normal maps; relighting and view synthesis; textured mesh.)
We present NeRFEditor, an efficient learning framework for 3D scene editing, which takes a video captured over 360° as input and outputs a high-quality, identity-preserving stylized 3D scene. Our method supports diverse types of editing such as guided by reference images, text prompts, and user interactions. We achieve this by encouraging a pre-trained StyleGAN model and a NeRF model to learn from each other mutually. Specifically, we use a NeRF model to generate numerous image-angle pairs to train an adjustor, which can adjust the StyleGAN latent code to generate high-fidelity stylized images for any given angle. To extrapolate editing to GAN out-of-domain views, we devise another module that is trained in a self-supervised learning manner. This module maps novel-view images to the hidden space of StyleGAN, which allows StyleGAN to generate stylized images on novel views. These two modules together produce guided images in 360° views to fine-tune a NeRF for stylization, for which we propose a stable fine-tuning strategy. Experiments show that NeRFEditor outperforms prior work on benchmark and real-world scenes with better editability, fidelity, and identity preservation.
Neural radiance fields (NeRF) achieve highly photo-realistic novel-view synthesis, but editing the scenes modeled by NeRF-based methods is challenging, especially for dynamic scenes. We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes and even support topological changes. Given an image sequence from a single camera as input, our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points. End-users can then edit the scene by simply dragging the key points to desired new positions. To achieve this, we propose a scene analysis method to detect and initialize key points by considering the dynamics in the scene, and a weighted key-point strategy to model topologically varying dynamics by joint optimization of key points and weights. Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence. Experiments demonstrate that our method achieves high-quality editing on various dynamic scenes and outperforms the state of the art. We will release our code and captured data.
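One way to read the weighted key-point strategy (my own simplification, not the paper's exact formulation) is as a distance-weighted blend of key-point offsets applied to every query point, so that dragging a key point warps the nearby region of the field; a sketch follows.

```python
import numpy as np

def keypoint_warp(pts, keypoints, new_keypoints, sigma=0.1):
    """Displace query points by distance-weighted key-point offsets.
    pts:           (N, 3) query points
    keypoints:     (K, 3) detected surface key points
    new_keypoints: (K, 3) key points after the user drags them
    """
    offsets = new_keypoints - keypoints                              # (K, 3)
    d2 = ((pts[:, None, :] - keypoints[None, :, :]) ** 2).sum(-1)    # (N, K)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w = w / (w.sum(-1, keepdims=True) + 1e-8)                        # blend weights
    return pts + w @ offsets

# toy edit: drag a single key point 0.2 units along x
new_pts = keypoint_warp(np.random.rand(100, 3),
                        keypoints=np.array([[0.5, 0.5, 0.5]]),
                        new_keypoints=np.array([[0.7, 0.5, 0.5]]))
```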
In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt the two-plane parameterization of the light field, where each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Different from previous methods that require dense view sampling to reliably render novel views, our method can render novel views by sampling rays and querying the color of each ray from the network directly, thus enabling high-quality light field rendering from a sparser set of training images. The network can optionally predict per-ray depth, thereby enabling applications such as auto-refocus. Our novel view synthesis results are comparable to the state of the art and even superior in some challenging scenes with refractions and reflections. We achieve this while maintaining an interactive frame rate and a small memory footprint.
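A compact sketch of the two-plane parameterization mentioned above (plane depths and names are assumptions): each ray is intersected with two parallel planes, and the four intersection coordinates form the 4D input to the fully connected color network.

```python
import numpy as np

def two_plane_coords(origins, dirs, z_uv=0.0, z_st=1.0):
    """Map rays to 4D light-field coordinates (u, v, s, t).
    origins: (N, 3) ray origins;  dirs: (N, 3) ray directions (dz != 0).
    z_uv, z_st: depths of the two parameterization planes (assumed values)."""
    t_uv = (z_uv - origins[:, 2]) / dirs[:, 2]
    t_st = (z_st - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_uv[:, None] * dirs[:, :2]
    st = origins[:, :2] + t_st[:, None] * dirs[:, :2]
    return np.concatenate([uv, st], axis=1)    # (N, 4) fed to the color MLP

# toy usage: two rays shot from behind the first plane
rays_o = np.zeros((2, 3)) + [0.0, 0.0, -1.0]
rays_d = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
coords = two_plane_coords(rays_o, rays_d)
```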