We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images. To model the illumination of a scene, existing inverse rendering works either completely ignore the indirect illumination or model it by coarse approximations, leading to sub-optimal illumination, geometry, and material prediction of the scene. In this work, we propose a physics-based illumination model that explicitly traces the incoming indirect lights at each surface point based on interreflection, followed by estimating each identified indirect light through an efficient neural network. Furthermore, we utilize Leibniz's integral rule to resolve the non-differentiability in the proposed illumination model caused by one type of environment light -- the tangent lights. As a result, the proposed interreflection-aware illumination model can be learned end-to-end together with geometry and materials estimation. As a by-product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed method performs favorably against existing inverse rendering methods on novel view synthesis and inverse rendering.
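To make the interreflection idea concrete, here is a minimal sketch of one-bounce, interreflection-aware shading. It is illustrative only, not the authors' implementation: the surface intersector and the indirect-radiance network below are hypothetical stand-ins for the traced interreflections and the efficient neural network the abstract describes.

```python
# Illustrative sketch: split incoming light into a direct environment term and an
# indirect term traced from interreflecting surfaces (stand-in components).
import numpy as np

def environment_light(d):
    """Toy distant environment light (assumption: a single sky-like term)."""
    return np.array([0.8, 0.9, 1.0]) * max(d[2], 0.0)

def trace_surface(origin, direction):
    """Stand-in intersector: returns (hit_point, hit_normal) or None."""
    return None  # in this toy setup every ray escapes to the environment

def indirect_radiance_net(hit_point, direction):
    """Stand-in for the neural network predicting radiance leaving a secondary
    surface point toward the shading point."""
    return np.array([0.1, 0.1, 0.1])

def shade(x, n, albedo, n_samples=64, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of outgoing radiance with a diffuse BRDF."""
    L_o = np.zeros(3)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, n) < 0:
            d = -d                              # uniform hemisphere sample, pdf = 1/(2*pi)
        hit = trace_surface(x, d)
        L_i = environment_light(d) if hit is None else indirect_radiance_net(hit[0], -d)
        L_o += (albedo / np.pi) * L_i * np.dot(d, n) * (2 * np.pi)
    return L_o / n_samples

print(shade(np.zeros(3), np.array([0.0, 0.0, 1.0]), albedo=np.array([0.7, 0.5, 0.3])))
```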
Given a set of images of a scene, re-rendering this scene from novel viewpoints and lighting conditions is an important and challenging problem in computer vision and graphics. On the one hand, most existing works in computer vision impose many assumptions on the image formation process (e.g., direct lighting and predefined materials) to make scene parameter estimation tractable. On the other hand, mature computer graphics tools allow modeling of complex photo-realistic light transport given all the scene parameters. Combining these approaches, we propose a method for relighting a scene under novel viewpoints by learning a neural precomputed radiance transfer function, which implicitly handles global illumination effects using novel environment maps. Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition. To disambiguate the task during training, we tightly integrate a differentiable path tracer in the training process and propose a combination of a synthesized OLAT loss and a real image loss. Results show that the recovered disentanglement of scene parameters improves on the current state of the art and, consequently, our re-rendering results are also more realistic and accurate.
We present a method to edit complex indoor lighting from a single image, given its depth and light source segmentation masks. This is an extremely challenging problem that requires modeling complex light transport and disentangling HDR lighting from materials and geometry using only a partial LDR observation of the scene. We tackle this problem with two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions. We use physically-based indoor light representations that allow intuitive editing, and infer both visible and invisible light sources. Our neural rendering framework combines physically-based direct illumination and shadow rendering with deep networks that approximate global illumination. It can capture challenging lighting effects such as soft shadows, directional lighting, specular materials, and interreflections. Previous single-image inverse rendering methods usually entangle scene lighting and geometry and only support applications such as object insertion. Instead, by combining parametric 3D lighting estimation with neural scene rendering, we demonstrate the first automatic method to achieve full scene relighting from a single image, including light source insertion, removal, and replacement. All source code and data will be publicly released.
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions. Our method represents the scene as a continuous volumetric function parameterized as MLPs whose inputs are a 3D location and whose outputs are the following scene properties at that input location: volume density, surface normal, material parameters, distance to the first surface intersection in any direction, and visibility of the external environment in any direction. Together, these allow us to render novel views of the object under arbitrary lighting, including indirect illumination effects. The predicted visibility and surface intersection fields are critical to our model's ability to simulate direct and indirect illumination during training, because the brute-force techniques used by prior work are intractable for lighting conditions outside of controlled setups with a single light. Our method outperforms alternative approaches for recovering relightable 3D scene representations, and performs well in complex lighting settings that have posed a significant challenge to prior work.
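A brief sketch of why the learned visibility field matters for this kind of model: one network query per light direction replaces marching a shadow ray through the volume, which is what makes simulating direct (and, with the distance-to-surface field, one-bounce indirect) illumination tractable during training. The visibility MLP below is a hypothetical stand-in, not the paper's architecture.

```python
# Minimal sketch: diffuse direct lighting gated by a learned visibility field.
import numpy as np

def visibility_field(x, light_dir):
    """Stand-in MLP: fraction of light arriving at x from direction light_dir."""
    return 1.0

def direct_lighting(x, normal, albedo, light_dirs, light_radiance):
    """Sum light contributions; each shadow query is a single field evaluation."""
    total = np.zeros(3)
    for l, L in zip(light_dirs, light_radiance):
        cos = max(np.dot(normal, l), 0.0)
        total += visibility_field(x, l) * (albedo / np.pi) * L * cos
    return total

x, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
lights = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)]
radiance = [np.array([1.0, 1.0, 1.0]), np.array([0.5, 0.5, 0.4])]
print(direct_lighting(x, n, np.array([0.6, 0.6, 0.6]), lights, radiance))
```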
Recent neural rendering methods have demonstrated accurate view interpolation by predicting volumetric density and color with a neural network. While such volumetric representations can be supervised on static and dynamic scenes, existing methods implicitly bake the complete scene light transport into a single neural network for a given scene, including surface modeling, bidirectional scattering distribution functions (BSDFs), and indirect illumination effects. In contrast to traditional rendering pipelines, this prohibits changing surface reflectance or illumination, or composing other objects into the scene. In this work, we explicitly model the light transport between scene surfaces, relying on traditional integration schemes and the rendering equation to reconstruct the scene. The proposed method allows BSDF recovery under unknown lighting conditions and supports classic light transport such as path tracing. By learning decomposed transport with the surface representations established in conventional rendering, the method naturally facilitates editing of shape, reflectance, lighting, and scene composition. The method outperforms neural baselines for relighting under known lighting conditions and produces realistic reconstructions for relit and edited scenes. We validate the proposed approach for scene editing, relighting, and reflectance estimation learned from synthetic and captured views on a subset of captured neural-rendering datasets.
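For reference, the rendering equation that such surface-based light transport methods integrate against is the standard one; at a surface point $\mathbf{x}$ with normal $\mathbf{n}$:

```latex
L_o(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o)
\;+\; \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\,
(\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
```

Here $f_r$ is the BSDF/BRDF being recovered, $L_i$ the incoming radiance (possibly itself the outgoing radiance of another surface), and $L_e$ any emission.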
Human relighting is a highly desirable yet challenging task. Existing works either require expensive one-light-at-a-time (OLAT) data captured with a light stage, or cannot freely change the viewpoint of the rendered body. In this work, we propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos under unknown illumination. Our key insight is that the geometry and reflectance of the human body can be decomposed into a set of neural fields of normal, occlusion, diffuse, and specular maps. These neural fields are further integrated into reflectance-aware, physically based rendering, where each vertex in the neural field absorbs and reflects light from the environment. The whole framework can be learned from videos in a self-supervised manner, with physics-informed priors designed for regularization. Extensive experiments on both real and synthetic datasets demonstrate that our framework is capable of relighting dynamic human actors under free viewpoints.
We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds. This enables various object-centric rendering applications such as novel view synthesis, relighting, and harmonized background composition from challenging in-the-wild input. Using a multi-stage approach that extends neural radiance fields, we first infer the surface geometry and refine the coarsely estimated initial camera parameters, while leveraging coarse foreground object masks to improve training efficiency and geometry quality. We also introduce a robust normal estimation technique that eliminates the effect of geometric noise while retaining crucial details. Finally, we extract surface material properties and ambient illumination, represented in spherical harmonics, with extensions that handle transient elements such as sharp shadows. The combination of these components results in a highly modular and efficient object acquisition framework. Extensive evaluations and comparisons demonstrate the advantages of our approach in capturing high-quality geometry and appearance properties useful for rendering applications.
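As a hedged aside on the spherical-harmonic lighting mentioned above: diffuse irradiance under a 2nd-order SH environment can be evaluated in closed form (Ramamoorthi & Hanrahan's irradiance environment maps). The sketch below uses that standard formula; the lighting coefficients themselves are made-up placeholders, not values from the paper.

```python
# Evaluate diffuse irradiance from order-2 SH lighting coefficients.
import numpy as np

def sh_irradiance(normal, L):
    """L: (9, 3) SH coefficients per color channel, ordered
    [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22]."""
    x, y, z = normal
    c1, c2, c3, c4, c5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708
    return (c1 * L[8] * (x * x - y * y) + c3 * L[6] * z * z + c4 * L[0]
            - c5 * L[6]
            + 2 * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
            + 2 * c2 * (L[3] * x + L[1] * y + L[2] * z))

rng = np.random.default_rng(0)
L = rng.uniform(0.0, 0.3, size=(9, 3))       # placeholder lighting coefficients
n = np.array([0.0, 0.0, 1.0])
albedo = np.array([0.7, 0.6, 0.5])
print(albedo / np.pi * sh_irradiance(n, L))  # diffuse shading under SH lighting
```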
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of the object illuminated by one unknown lighting condition. This enables rendering novel views of the object under arbitrary environment lighting and editing the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation, and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and bidirectional reflectance distribution functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models for free-viewpoint relighting in this challenging and underconstrained capture setup, for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state of the art across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
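The following is a minimal, assumed-form sketch of the kind of re-rendering term such a factorization relies on: shade a surface point over a discretized light probe using per-direction visibility, a diffuse albedo, and a (here trivial) specular BRDF. All quantities below are toy placeholders standing in for the learned 3D neural fields.

```python
# Sum the rendering equation over discretized light-probe directions.
import numpy as np

def render_point(albedo, normal, visibility, light_dirs, light_rgb, solid_angles,
                 specular_brdf=lambda wi, wo, n: 0.0, wo=np.array([0.0, 0.0, 1.0])):
    L_o = np.zeros(3)
    for wi, L, v, dw in zip(light_dirs, light_rgb, visibility, solid_angles):
        cos = max(np.dot(normal, wi), 0.0)
        brdf = albedo / np.pi + specular_brdf(wi, wo, normal)
        L_o += v * brdf * L * cos * dw
    return L_o

# Toy probe: three directions standing in for, e.g., a 16x32 environment map.
dirs = [np.array([0.0, 0.0, 1.0]),
        np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0),
        np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)]
rgb = [np.array([1.0, 0.9, 0.8])] * 3
vis = [1.0, 0.3, 1.0]              # shadowing as predicted by a visibility field
dw = [4 * np.pi / 3] * 3           # crude equal solid angles for the toy probe
print(render_point(np.array([0.5, 0.4, 0.3]), np.array([0.0, 0.0, 1.0]),
                   vis, dirs, rgb, dw))
```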
Figure 1: Neural Reflectance Decomposition for Relighting. We encode multiple views of an object under varying or fixed illumination into the NeRD volume. We decompose each given image into geometry, spatially-varying BRDF parameters and a rough approximation of the incident illumination in a globally consistent manner. We then extract a relightable textured mesh that can be re-rendered under novel illumination conditions in real-time.
We present a method for accurate 3D reconstruction of objects. We build on the strengths of recent advances in neural reconstruction and rendering, such as neural radiance fields (NeRF). A major shortcoming of such methods is that they fail to reconstruct any part of the object that is not clearly visible in the training images, which is often the case for in-the-wild images and videos. When evidence is lacking, structural priors such as symmetry can be used to complete the missing information. However, exploiting such priors in neural rendering is highly non-trivial: while geometry and non-reflective materials may be symmetric, shadows and reflections from the surrounding scene are in general not. To address this, we apply a soft symmetry constraint to the 3D geometry and material properties, having factored appearance into lighting, albedo, and reflectance. We evaluate our method on the recently introduced CO3D dataset, focusing on the car category due to the challenge of reconstructing highly reflective materials. We show that it can reconstruct unobserved regions with high fidelity and render high-quality novel view images.
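A hedged sketch of what a soft symmetry regularizer of this flavor can look like: penalize differences between the geometry/material fields evaluated at sampled points and at their mirror images across an assumed symmetry plane, while leaving view-dependent shading unconstrained. The `field` function is a hypothetical stand-in, and the x = 0 symmetry plane is an assumption of this toy example.

```python
# Soft symmetry loss on a stand-in geometry/material field.
import numpy as np

def mirror_x(points):
    """Reflect points across the x = 0 plane (assumed symmetry plane)."""
    mirrored = points.copy()
    mirrored[:, 0] *= -1.0
    return mirrored

def field(points):
    """Stand-in learned field returning, e.g., an SDF value and an albedo."""
    sdf = np.linalg.norm(points, axis=1, keepdims=True) - 1.0
    albedo = np.full((len(points), 3), 0.5)
    return np.concatenate([sdf, albedo], axis=1)

def soft_symmetry_loss(points, weight=0.1):
    """L1 difference between field values at points and at their mirrors."""
    return weight * np.abs(field(points) - field(mirror_x(points))).mean()

pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(1024, 3))
print(soft_symmetry_loss(pts))
```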
Traditional multi-view photometric stereo (MVPS) methods are often composed of multiple disjoint stages, resulting in noticeable accumulated errors. In this paper, we present a neural inverse rendering method for MVPS based on implicit representations. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. It first uses the multi-light images to estimate per-view surface normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel view rendering, relighting, and material editing. Experiments on both synthetic and real datasets demonstrate that our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods. Our code and model can be found at https://ywq.github.io/psnerf.
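One way such normal regularization can be written (an assumed form, not necessarily the paper's exact loss) is a cosine-similarity penalty between the radiance-field normals and the per-view photometric-stereo normals:

```python
# Angular disagreement between two sets of unit normals.
import numpy as np

def normal_consistency_loss(nerf_normals, ps_normals):
    """Mean (1 - cosine similarity); both arrays are (N, 3) unit vectors."""
    return np.mean(1.0 - np.sum(nerf_normals * ps_normals, axis=1))

rng = np.random.default_rng(0)
n1 = rng.normal(size=(100, 3))
n1 /= np.linalg.norm(n1, axis=1, keepdims=True)
n2 = n1 + 0.05 * rng.normal(size=(100, 3))          # slightly perturbed estimate
n2 /= np.linalg.norm(n2, axis=1, keepdims=True)
print(normal_consistency_loss(n1, n2))
```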
We present a multi-view inverse rendering method for large-scale real-world indoor scenes that reconstructs global illumination and physically-reasonable SVBRDFs. Unlike previous representations, where the global illumination of large scenes is simplified as multiple environment maps, we propose a compact representation called Texture-based Lighting (TBL). It consists of 3D meshes and HDR textures, and efficiently models direct and infinite-bounce indirect lighting of the entire large scene. Based on TBL, we further propose a hybrid lighting representation with precomputed irradiance, which significantly improves the efficiency and alleviates the rendering noise in the material optimization. To physically disentangle the ambiguity between materials, we propose a three-stage material optimization strategy based on the priors of semantic segmentation and room segmentation. Extensive experiments show that the proposed method outperforms the state of the art quantitatively and qualitatively, and enables physically-reasonable mixed-reality applications such as material editing, editable novel view synthesis and relighting. The project page is at https://lzleejean.github.io/TexIR.
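As a hedged sketch of the precomputed-irradiance idea (not TexIR's actual pipeline), the snippet below estimates diffuse irradiance at a surface point by Monte Carlo integration over the hemisphere; in a hybrid representation of this kind, such values would be cached so the material optimization avoids re-tracing the lighting at every step. The incoming-radiance function is a toy stand-in.

```python
# Precompute diffuse irradiance E(x) = integral of L_i(x, w) (n.w) dw.
import numpy as np
rng = np.random.default_rng(0)

def incoming_radiance(x, d):
    """Stand-in for radiance arriving at x from direction d (e.g. looked up
    from HDR textures on the scene mesh)."""
    return np.array([1.0, 0.9, 0.8]) * max(d[2], 0.0)

def precompute_irradiance(x, normal, n_samples=512):
    total = np.zeros(3)
    for _ in range(n_samples):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        if np.dot(d, normal) < 0:
            d = -d                               # uniform hemisphere sample, pdf = 1/(2*pi)
        total += incoming_radiance(x, d) * np.dot(d, normal) * (2 * np.pi)
    return total / n_samples

irradiance = precompute_irradiance(np.zeros(3), np.array([0.0, 0.0, 1.0]))
albedo = np.array([0.5, 0.5, 0.5])
print(albedo / np.pi * irradiance)   # diffuse shading from the cached irradiance
```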
Neural implicit representations have shown their effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images. However, most approaches focus on holistic scene representation while ignoring the individual objects inside it, thereby limiting potential downstream applications. In order to learn object-compositional representations, a few works incorporate 2D semantic maps as a cue during training to grasp the differences between objects. But they neglect the strong connection between object geometry and instance semantic information, which leads to inaccurate modeling of individual instances. This paper proposes a novel framework, ObjectSDF, to build an object-compositional neural implicit representation with high fidelity in 3D reconstruction and object representation. Observing the ambiguity of conventional volume rendering pipelines, we model the scene by combining the signed distance functions (SDFs) of individual objects to exert explicit surface constraints. The key to distinguishing different instances is to revisit the strong association between an individual object's SDF and its semantic label. In particular, we convert the semantic information into a function of the object SDFs and develop a unified and compact representation for the scene and the objects. Experimental results show the superiority of the ObjectSDF framework in representing both the holistic object-compositional scene and the individual instances. Code can be found at https://qianyiwu.github.io/objectsdf/
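To illustrate the composition idea, here is a toy sketch roughly in the spirit of the abstract (not the paper's exact formulation): the scene SDF is the minimum over per-object SDFs, and per-point semantic weights are derived from those SDFs so the closest object dominates the label. The softmax mapping and the two spheres are assumptions of this example.

```python
# Compose a scene SDF from per-object SDFs and derive semantics from them.
import numpy as np

def sphere_sdf(points, center, radius):
    return np.linalg.norm(points - center, axis=-1) - radius

def scene_and_semantics(points, object_sdfs, sharpness=10.0):
    """Scene SDF = min over object SDFs; semantics = softmax over -sharpness*SDF."""
    d = np.stack([sdf(points) for sdf in object_sdfs], axis=-1)   # (N, K)
    scene_sdf = d.min(axis=-1)
    logits = -sharpness * d
    semantics = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return scene_sdf, semantics

objects = [lambda p: sphere_sdf(p, np.array([-0.5, 0.0, 0.0]), 0.4),
           lambda p: sphere_sdf(p, np.array([0.6, 0.0, 0.0]), 0.5)]
pts = np.array([[-0.5, 0.0, 0.0], [0.6, 0.0, 0.0], [2.0, 0.0, 0.0]])
sdf, sem = scene_and_semantics(pts, objects)
print(sdf, sem, sep="\n")
```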
Reflections on glossy objects contain valuable and hidden information about the surrounding environment. By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field-of-view and from seemingly impossible vantage points, e.g. from reflections on the human eye. However, this task is challenging because reflections depend jointly on object geometry, material properties, the 3D environment, and the observer viewing direction. Our approach converts glossy objects with unknown geometry into radiance-field cameras to image the world from the object's perspective. Our key insight is to convert the object surface into a virtual sensor that captures cast reflections as a 2D projection of the 5D environment radiance field visible to the object. We show that recovering the environment radiance fields enables depth and radiance estimation from the object to its surroundings in addition to beyond field-of-view novel-view synthesis, i.e. rendering of novel views that are only directly-visible to the glossy object present in the scene, but not the observer. Moreover, using the radiance field we can image around occluders caused by close-by objects in the scene. Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
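The geometric core of treating a glossy surface as a virtual sensor is the mirror reflection of the viewing direction about the surface normal: each observed surface point, paired with its reflected direction, defines a ray that probes the surrounding radiance field. The sketch below shows only that mapping; everything else in the method (the learned geometry and the 5D environment field) is omitted.

```python
# Turn observed surface points, normals and view directions into reflection rays.
import numpy as np

def reflection_rays(points, normals, view_dirs):
    """view_dirs: unit vectors from camera toward each surface point.
    Returns ray origins and mirrored directions r = v - 2 (v . n) n."""
    dots = np.sum(view_dirs * normals, axis=1, keepdims=True)
    reflected = view_dirs - 2.0 * dots * normals
    return points, reflected / np.linalg.norm(reflected, axis=1, keepdims=True)

p = np.array([[0.0, 0.0, 0.0]])
n = np.array([[0.0, 0.0, 1.0]])
v = np.array([[0.0, 0.7071, -0.7071]])   # camera looking down onto the surface
origins, dirs = reflection_rays(p, n, v)
print(origins, dirs)   # the reflected ray leaves the surface away from the camera
```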
We present an efficient method for joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering, coordinate-based networks to compactly represent volumetric texturing, and differentiable marching tetrahedra to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split-sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments show our extracted models used in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers).
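For intuition on the split-sum approximation mentioned above, the specular integral over incoming light is factored into (a) a prefiltered environment lookup around the reflection direction and (b) an environment BRDF term. The sketch below is a crude, hedged stand-in for both factors (a toy environment, a noise-widened reflection lobe, and a constant scale/bias in place of the usual precomputed lookup table), meant only to show the shape of the factorization.

```python
# Crude split-sum specular shading with toy stand-ins for both factors.
import numpy as np
rng = np.random.default_rng(0)

def env_light(d):
    return np.array([0.9, 0.8, 1.0]) * max(d[2], 0.0) + 0.05   # toy sky light

def split_sum_specular(reflect_dir, roughness, f0, n_samples=256):
    # (a) prefiltered incoming light: average environment radiance over a cone of
    #     directions around the reflection vector, widened with roughness.
    dirs = reflect_dir + roughness * rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    prefiltered = np.mean([env_light(d) for d in dirs], axis=0)
    # (b) environment BRDF term: a constant (scale, bias) standing in for the
    #     usual precomputed 2D lookup over roughness and viewing angle.
    scale, bias = 0.8, 0.1
    return prefiltered * (f0 * scale + bias)

print(split_sum_specular(np.array([0.0, 0.0, 1.0]), roughness=0.3,
                         f0=np.array([0.04, 0.04, 0.04])))
```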
Expanding an existing tourist photo from a partially captured scene to a full scene is one of the desired experiences for photography applications. Although photo extrapolation has been well studied, it is much more challenging to extrapolate a photo (i.e., a selfie) from a narrow field of view to a wider one while maintaining a similar visual style. In this paper, we propose a factorized neural re-rendering model to produce photorealistic novel views from cluttered outdoor Internet photo collections, which enables applications including controllable scene re-rendering, photo extrapolation, and even extrapolated 3D photo generation. Specifically, we first develop a novel factorized re-rendering pipeline to handle the ambiguity in the decomposition of geometry, appearance, and illumination. We also propose a composited training strategy to tackle unexpected occlusions in Internet images. Moreover, to enhance photo-realism when extrapolating tourist photos, we propose a novel realism augmentation process to complement appearance details, which automatically propagates texture details from the narrow captured photo to the extrapolated neural rendered image. Experiments and photo editing examples on outdoor scenes demonstrate the superior performance of our proposed method in both photo-realism and downstream applications.
Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at \url{https://jingsenzhu.github.io/invrend}.
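To make the importance-sampling point concrete, here is a minimal sketch of Monte Carlo diffuse shading with cosine-weighted hemisphere sampling: drawing directions proportional to cos(theta) cancels the cosine term in the estimator and keeps its variance low. The environment function is a toy stand-in for the predicted spatially-varying lighting, not anything from the paper.

```python
# Cosine-weighted importance-sampled diffuse shading.
import numpy as np
rng = np.random.default_rng(0)

def incident_light(d):
    return np.array([1.0, 0.95, 0.9]) * max(d[2], 0.0)      # toy overhead light

def cosine_sample(count):
    """Cosine-weighted hemisphere samples (assumes normal = (0, 0, 1) for brevity;
    a real renderer rotates the samples into the local shading frame)."""
    u1, u2 = rng.uniform(size=count), rng.uniform(size=count)
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)], axis=1)

def shade_diffuse(albedo, n_samples=128):
    dirs = cosine_sample(n_samples)
    # Estimator: f * L * cos / pdf with f = albedo/pi and pdf = cos/pi, so the
    # cosine and the 1/pi cancel and only the sampled radiance remains.
    radiance = np.mean([incident_light(d) for d in dirs], axis=0)
    return albedo * radiance

print(shade_diffuse(np.array([0.6, 0.5, 0.4])))
```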
We propose to tackle the multi-view photometric stereo problem using an extension of Neural Radiance Fields (NeRF) conditioned on light source direction. The geometric part of our neural representation predicts surface normal directions, allowing us to reason about local surface reflectance. The appearance part of our neural representation is decomposed into a neural bidirectional reflectance distribution function (BRDF), learned as part of the fitting process, and a shadow prediction network (conditioned on light source direction), allowing us to model the apparent BRDF. This balance of learned components with inductive biases based on physical image formation models allows us to extrapolate far from the light source and viewer directions observed during training. We demonstrate our approach on a multi-view photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.
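A minimal sketch of the assumed structure (both "networks" below are hypothetical placeholders, not the paper's models): the rendered color at a surface point is a learned BRDF modulated by a light-direction-conditioned shadow prediction and the foreshortening term.

```python
# Apparent radiance = light * shadow(light_dir) * BRDF * max(0, n.l).
import numpy as np

def neural_brdf(x, light_dir, view_dir, normal):
    """Stand-in for the learned BRDF; here a constant Lambertian value."""
    return np.array([0.6, 0.5, 0.4]) / np.pi

def shadow_net(x, light_dir):
    """Stand-in for the shadow prediction network, conditioned on light direction."""
    return 1.0 if light_dir[2] > 0.2 else 0.3

def apparent_radiance(x, normal, light_dir, view_dir, light_intensity=1.0):
    cos = max(np.dot(normal, light_dir), 0.0)
    return light_intensity * shadow_net(x, light_dir) * \
        neural_brdf(x, light_dir, view_dir, normal) * cos

l = np.array([0.3, 0.0, 1.0])
l /= np.linalg.norm(l)
print(apparent_radiance(np.zeros(3), np.array([0.0, 0.0, 1.0]), l,
                        np.array([0.0, 0.0, 1.0])))
```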
Recovering the geometry of a human head from a single image, while factorizing materials and illumination, is a severely ill-posed problem that requires prior information to be solved. Methods based on 3D Morphable Models (3DMMs), and their combination with differentiable renderers, have shown promising results. However, the expressiveness of 3DMMs is limited, and they typically yield over-smoothed and identity-agnostic 3D shapes restricted to the face region. Recently, highly accurate full-head reconstructions have been obtained with neural fields that parameterize the geometry using multilayer perceptrons. The versatility of these representations has also proved effective for disentangling geometry, materials, and lighting. However, such methods require dozens of input images. In this paper, we introduce SIRA, a method that, from a single image, reconstructs human head avatars with high-fidelity geometry and factorized lights and surface materials. Our key ingredients are two data-driven statistical models based on neural fields that resolve the ambiguities of single-view 3D surface reconstruction and appearance decomposition. Experiments show that SIRA obtains state-of-the-art results in 3D head reconstruction, while successfully disentangling the global illumination and the diffuse and specular albedos. Furthermore, our reconstructions are amenable to physically-based appearance editing and head model relighting.
We propose an end-to-end inverse rendering pipeline called SupeRVol that allows us to recover 3D shape and material parameters from a set of color images in a super-resolution manner. To this end, we represent both the bidirectional reflectance distribution function (BRDF) and the signed distance function (SDF) by multi-layer perceptrons. In order to obtain both the surface shape and its reflectance properties, we revert to a differentiable volume renderer with a physically based illumination model that allows us to decouple reflectance and lighting. This physical model takes into account the effect of the camera's point spread function, thereby enabling a reconstruction of shape and material in super-resolution quality. Experimental validation confirms that SupeRVol achieves state-of-the-art performance in terms of inverse rendering quality. It generates reconstructions that are sharper than the individual input images, making this method ideally suited for 3D modeling from low-resolution imagery.
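A hedged sketch of the super-resolution image-formation idea: a (here toy) high-resolution rendering is blurred by the camera's point spread function and downsampled before being compared with the observed low-resolution photo, so the optimization can recover detail finer than any single input. The Gaussian PSF and its width are assumptions of this example.

```python
# Blur a high-resolution rendering with a PSF, downsample, and compare.
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def apply_psf_and_downsample(image, psf, factor=2):
    """Convolve each channel with the PSF, then take every `factor`-th pixel."""
    pad = psf.shape[0] // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(image)
    for dy in range(psf.shape[0]):
        for dx in range(psf.shape[1]):
            blurred += psf[dy, dx] * padded[dy:dy + image.shape[0],
                                            dx:dx + image.shape[1]]
    return blurred[::factor, ::factor]

hi_res = np.random.default_rng(0).uniform(size=(8, 8, 3))   # stand-in rendering
lo_res_pred = apply_psf_and_downsample(hi_res, gaussian_psf())
lo_res_obs = np.zeros_like(lo_res_pred)                      # stand-in observation
print(np.mean((lo_res_pred - lo_res_obs) ** 2))              # reconstruction loss
```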