Computer vision applications have heavily relied on the linear combination of Lambertian diffuse and microfacet specular reflection models for representing reflected radiance, which turns out to be physically incompatible and limited in applicability. In this paper, we derive a novel analytical reflectance model, which we refer to as the Fresnel Microfacet BRDF (FMBRDF) model, that is physically accurate and generalizes to various real-world surfaces. Our key idea is to model the Fresnel reflection and transmission of the surface microgeometry with a collection of oriented mirror facets, for both body and surface reflections. We carefully derive the Fresnel reflection and transmission for each microfacet as well as the light transport between them in the subsurface. This physically-grounded modeling also allows us to express the polarimetric behavior of reflected light in addition to its radiometric behavior. That is, FMBRDF unifies not only body and surface reflections but also light reflection in radiometry and polarization and represents them in a single model. Experimental results demonstrate its effectiveness in accuracy, expressive power, and image-based estimation.
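For reference, the conventional model that FMBRDF moves beyond can be sketched as a Lambertian diffuse term added to a single microfacet specular lobe. The sketch below uses a Schlick Fresnel approximation and a GGX facet distribution purely as illustrative choices; the function names and parameters are ours, not part of the paper.

```python
import numpy as np

def schlick_fresnel(cos_theta, f0):
    """Schlick approximation to Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def ggx_ndf(n_dot_h, alpha):
    """GGX distribution of microfacet orientations."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def diffuse_plus_microfacet(n, l, v, albedo, f0=0.04, alpha=0.3):
    """Conventional linear combination the paper argues is physically limited:
    Lambertian body reflection + microfacet surface reflection
    (shadowing-masking term omitted for brevity)."""
    h = (l + v) / np.linalg.norm(l + v)            # half vector
    n_dot_l = max(np.dot(n, l), 1e-6)
    n_dot_v = max(np.dot(n, v), 1e-6)
    spec = (schlick_fresnel(np.dot(v, h), f0) *
            ggx_ndf(np.dot(n, h), alpha)) / (4.0 * n_dot_l * n_dot_v)
    return albedo / np.pi + spec                   # BRDF value
```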
Ellipsometry techniques allow measuring the polarization information of materials, but require precise rotations of optical components with different light and sensor configurations. This results in cumbersome capture devices, carefully calibrated under lab conditions, and very long acquisition times, usually on the order of a few days per object. Recent techniques allow capturing polarimetric reflectance information, but they are either limited to a single view, or cover all view directions but are restricted to spherical objects made of a single homogeneous material. We present sparse ellipsometry, a portable polarimetric acquisition method that captures both the polarimetric SVBRDF and the 3D shape simultaneously. Our handheld device consists of off-the-shelf, fixed optical components. The total acquisition time per object is on the order of twenty minutes, rather than days. We develop a complete polarimetric SVBRDF model that includes diffuse and specular components as well as single scattering, and devise a novel polarimetric inverse rendering algorithm with data augmentation of specular reflection samples via a generative model. Our results show strong agreement with a recent ground-truth dataset of polarimetric BRDFs captured from real-world objects.
Time-of-flight (ToF) sensors provide an imaging modality fueling applications including LiDAR for autonomous driving, robotics, and augmented reality. Conventional ToF imaging methods estimate depth by sending pulses of light into a scene and measuring the time of flight of the first-arriving photons that are reflected directly from the scene surface without any temporal delay. As such, all photons following this first response are typically considered unwanted noise. In this paper, we depart from the principle of using only first-arriving photons and propose an all-photon ToF imaging method that combines a temporal-polarimetric analysis of both first- and late-arriving photons, which carry rich scene information about geometry and material. To this end, we propose a novel temporal-polarimetric reflectance model, an efficient capture method, and a reconstruction method that exploits the temporal-polarimetric changes of light reflected by surface and subsurface reflection. The proposed all-photon polarimetric ToF imaging method allows acquiring the depth, surface normals, and material parameters of a scene by exploiting all photons captured by the system, whereas conventional ToF imaging obtains only coarse depth from the first-arriving photons. We validate our method in simulation and experimentally with a prototype.
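As a baseline for contrast, conventional ToF depth estimation from the first-arriving photons reduces to halving the round-trip path length; a minimal sketch (variable names ours):

```python
C = 299_792_458.0  # speed of light, m/s

def depth_from_first_return(round_trip_time_s):
    """Conventional ToF depth: half the distance light travels during the
    round trip of the first-arriving photons."""
    return 0.5 * C * round_trip_time_s

# e.g. a 20 ns round trip corresponds to roughly 3 m of depth
```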
The vast majority of Shape-from-Polarization (SfP) methods work under the oversimplified assumption of using orthographic cameras. Indeed, it is still not well understood how to project the Stokes vectors when the incoming rays are not orthogonal to the image plane. We try to answer this question by presenting a geometric model describing how a general projective camera captures the light polarization state. Based on the optical properties of a tilted polarizer, our model is implemented as a pre-processing operation acting on raw images, followed by a per-pixel rotation of the reconstructed normal field. In this way, all the existing SfP methods assuming orthographic cameras can behave as if they were designed for projective ones. Moreover, our model is consistent with state-of-the-art forward and inverse renderers (like Mitsuba3 and ART), intrinsically enforces physical constraints among the captured channels, and handles demosaicing of DoFP sensors. Experiments on existing and new datasets demonstrate the accuracy of the model when applied to commercially available polarimetric cameras.
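As background for the per-pixel handling of polarization frames, a Stokes vector can be re-expressed in a rotated reference frame with the standard Mueller rotation matrix. The snippet below shows only that textbook operation, assumed here as a building block; the paper's actual tilted-polarizer pre-processing is more involved.

```python
import numpy as np

def mueller_rotation(theta):
    """Mueller matrix rotating the polarization reference frame by theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]], dtype=float)

def rotate_stokes(stokes, theta):
    """Re-express a Stokes vector [S0, S1, S2, S3] in a frame rotated by theta."""
    return mueller_rotation(theta) @ np.asarray(stokes, dtype=float)
```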
In this work, we propose a novel method for the detailed reconstruction of transparent objects by exploiting polarization cues. Most existing methods typically lack sufficient constraints and suffer from over-smoothing. We therefore introduce polarization information as a complementary cue. We implicitly represent the object's geometry as a neural network, while polarimetric rendering is able to render the object's polarization images from a given shape and illumination configuration. Directly comparing the rendered polarization images with real-world captured images would introduce additional errors because of the transmission through transparent objects. To address this issue, the concept of reflection percentage, which represents the proportion of the reflected component, is introduced. The reflection percentage is computed by a ray tracer and is then used to weight the polarization loss. We construct a polarization dataset for multi-view transparent shape reconstruction to validate our method. Experimental results show that our method is able to recover detailed shapes and improve the reconstruction quality of transparent objects. Our dataset and code will be publicly available at https://github.com/shaomq2187/transpir.
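A schematic sketch of the weighting idea described above, assuming an L1 photometric penalty (the exact loss used in the paper may differ): each pixel's polarization error is scaled by the ray-traced reflection percentage, so pixels dominated by transmission contribute less.

```python
import numpy as np

def weighted_polarization_loss(rendered_pol, captured_pol, reflection_percentage):
    """Per-pixel polarization error, down-weighted where transmission dominates.

    rendered_pol, captured_pol: per-pixel polarization observations
    reflection_percentage: per-pixel ratio of reflected to total radiance, from a ray tracer
    """
    per_pixel = np.abs(rendered_pol - captured_pol)
    return np.mean(reflection_percentage * per_pixel)
```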
We present methods for using specular multi-return lidar returns to detect and map specular surfaces, which may be invisible to conventional lidar systems that rely on direct single-bounce returns. We derive expressions that relate the time and direction of arrival of these multi-bounce returns to scattering points on the specular surface, and then use these expressions to formulate techniques for retrieving the specular geometry when the scene is scanned by a single beam or illuminated with a multi-beam flash. We also consider the special case of transparent specular surfaces, for which surface reflections can be mixed with light scattered from objects behind the surface.
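One standard geometric building block for reasoning about such mirror bounces is the virtual image: the path length of a bounce off a planar specular surface to a scatterer equals the straight-line distance to the scatterer's mirror image. A minimal sketch of that reflection (our own illustration, not the paper's derivation):

```python
import numpy as np

def mirror_point(p, plane_point, plane_normal):
    """Virtual image of point p reflected across a planar mirror defined by a
    point on the plane and its normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n
```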
Intrinsic imaging, or intrinsic image decomposition, has traditionally been described as the problem of decomposing an image into two layers: a reflectance layer, the albedo of the material; and a shading layer, produced by the interaction between light and geometry. Deep learning techniques have been widely applied in recent years to improve the accuracy of these separations. In this survey, we overview those results in the context of well-known intrinsic image datasets and the relevant metrics used in the literature, and discuss their suitability for predicting a desirable intrinsic image decomposition. Although the Lambertian assumption is still the foundation of many methods, we show that there is growing awareness of the potential of more sophisticated, physically principled components of the image formation process, that is, optically accurate material models and geometry, and more complete inverse light transport estimation. We classify these methods according to the type of decomposition, considering the priors and models used, as well as the learning architecture and methodology driving the decomposition process. We also provide insights on future research directions, given the recent advances in neural, inverse, and differentiable rendering techniques.
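The classical two-layer model referred to throughout can be stated compactly: the observed image is the per-pixel product of reflectance and shading, which also makes the scale ambiguity of the inverse problem apparent. A minimal sketch (names ours):

```python
import numpy as np

def compose(reflectance, shading):
    """Classical intrinsic image model: image = reflectance * shading, per pixel."""
    return reflectance * shading

# The inverse problem is ill-posed: for a given image I, any scalar k > 0
# yields another valid decomposition (k * R, S / k).
```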
Current polarization-based 3D reconstruction methods, including those in the well-established shape-from-polarization literature, are all developed under the orthographic projection assumption. However, this assumption does not hold for a large field of view, and it can cause significant reconstruction errors in methods that rely on it. To address this problem, we present the perspective phase angle (PPA) model, which is applicable to perspective cameras. Compared with the orthographic model, the proposed PPA model accurately describes the relationship between the polarization phase angle and the surface normal under perspective projection. In addition, the PPA model makes it possible to estimate surface normals from only a single-view phase angle map, and it does not suffer from the so-called π-ambiguity problem. Experiments on real data show that the PPA model is more accurate than the orthographic model for surface normal estimation with a perspective camera.
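For context, the orthographic baseline that the PPA model replaces constrains the normal azimuth by the measured phase angle only up to the π-ambiguity (for diffuse-dominant polarization). A minimal sketch of that baseline relation follows, with the caveat that the PPA model itself instead depends on the per-pixel viewing ray:

```python
import numpy as np

def orthographic_azimuth_candidates(aop):
    """Orthographic baseline: for diffuse polarization the normal azimuth equals
    the angle of polarization up to the pi-ambiguity."""
    return np.array([aop, aop + np.pi]) % (2.0 * np.pi)
```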
A polarization camera has great potential for 3D reconstruction since the angle of polarization (AoP) and the degree of polarization (DoP) of reflected light are related to an object's surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color-polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric rendering errors and polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose a novel polarimetric cost function that enables an effective constraint on the estimated surface normal of each vertex, while considering four possible ambiguous azimuth angles revealed from the AoP measurement. The weight for the polarimetric cost is effectively determined based on the DoP measurement, which is regarded as the reliability of polarimetric information. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific surface material and lighting condition.
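A schematic sketch of an ambiguity-aware cost of the kind described above: the azimuth of the estimated vertex normal is compared against the four azimuth candidates compatible with the AoP measurement, taking the minimum and weighting by the DoP. The exact functional form in the paper may differ; this is an illustrative assumption.

```python
import numpy as np

def polarimetric_cost(normal_azimuth, aop, dop):
    """Minimum squared angular distance between the estimated azimuth and the four
    azimuth candidates implied by the AoP, weighted by the DoP reliability."""
    candidates = aop + np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
    diff = np.angle(np.exp(1j * (normal_azimuth - candidates)))  # wrap to (-pi, pi]
    return dop * np.min(diff ** 2)
```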
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction, where the object shape, object reflectance, and lighting directions are all unknown. This is an extremely difficult task, and the challenge is further compounded by the presence of the well-known generalized bas-relief (GBR) ambiguity in photometric stereo. Previous methods to resolve this ambiguity either rely on an overly simplified reflectance model or assume a special light distribution. We propose a new method that jointly optimizes object shape, light directions, and light intensities under general surface and lighting assumptions. Specularities are explicitly exploited to resolve uncalibrated photometric stereo through a neural inverse rendering process. We gradually fit specularities from shiny to rough using novel progressive specular bases. Our method leverages a physically-based rendering equation by minimizing the reconstruction error on a per-object basis. It demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
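For reference, the GBR ambiguity mentioned above is the three-parameter family of transformations under which a Lambertian integrable surface and its directional lights produce identical images; a standard sketch of the transform (parameter names follow the usual convention):

```python
import numpy as np

def gbr_matrix(mu, nu, lam):
    """Generalized bas-relief transform G; without extra cues, surfaces and lights
    can only be recovered up to this 3-parameter family."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [mu,  nu,  lam]])

# If scaled normals transform as b' = G^{-T} b and lights as s' = G s,
# the rendered intensities are unchanged: b'^T s' = b^T G^{-1} G s = b^T s.
```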
Specularity prediction is essential to many computer vision applications, giving important visual cues usable in Augmented Reality (AR), Simultaneous Localisation and Mapping (SLAM), 3D reconstruction and material modeling. However, it is a challenging task requiring extensive information about the scene, including the camera pose, the geometry of the scene, the light sources and the material properties. Our previous work addressed this task by creating an explicit model using an ellipsoid whose projection fits the specularity image contours for a given camera pose. These ellipsoid-based approaches belong to a family of models called JOint-LIght MAterial Specularity (JOLIMAS), which we have gradually improved by removing assumptions on the scene geometry. However, our most recent approach is still limited to uniformly curved surfaces. This paper generalises JOLIMAS to any surface geometry while improving the quality of specularity prediction, without sacrificing computational performance. The proposed method establishes a link between surface curvature and specularity shape in order to lift the geometric assumptions made in previous work. Contrary to previous work, our new model is built from a physics-based local illumination model, namely Torrance-Sparrow, providing an improved reconstruction. Specularity prediction using our new model is tested against the most recent JOLIMAS version on both synthetic and real sequences with objects of various general shapes. Our method outperforms previous approaches in specularity prediction, including the real-time setup, as shown in the supplementary videos.
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that, together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
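The central reparameterization described above conditions the directional network on the reflection of the view direction about the surface normal rather than on the view direction itself; the standard reflection formula is sketched below (variable names ours):

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflection of the (unit) outgoing view direction about the surface normal:
    w_r = 2 (w_o . n) n - w_o."""
    w_o = view_dir / np.linalg.norm(view_dir)
    n = normal / np.linalg.norm(normal)
    return 2.0 * np.dot(w_o, n) * n - w_o
```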
We introduce a novel multi-view stereo (MVS) method that simultaneously recovers not only per-pixel depth but also surface normals, together with the reflectance of textureless, complex non-Lambertian surfaces captured under known but natural illumination. Our key idea is to formulate MVS as an end-to-end learnable network, which we call NLMVS-Net, that seamlessly integrates radiometric cues to leverage surface normals as view-independent surface features for learned cost volume construction and filtering. It first estimates surface normals as pixel-wise probability densities for each view with a novel shape-from-shading network. These per-pixel surface normal densities and the input multi-view images are then fed into a novel cost volume filtering network that learns to recover per-pixel depth and surface normals. The reflectance is also estimated by alternating with the geometric reconstruction. Extensive quantitative evaluations on newly established synthetic and real-world datasets show that NLMVS-Net can robustly and accurately recover the shape and reflectance of complex objects in natural settings.
Multispectral photometric stereo (MPS) aims at recovering the surface normal of a scene from a single-shot multispectral image captured under multispectral illuminations. Existing MPS methods adopt the Lambertian reflectance model to make the problem tractable, but it greatly limits their application to real-world surfaces. In this paper, we propose a deep neural network named NeuralMPS to solve the MPS problem under general non-Lambertian spectral reflectances. Specifically, we present a spectral reflectance decomposition (SRD) model to disentangle the spectral reflectance into geometric components and spectral components. With this decomposition, we show that the MPS problem for surfaces with a uniform material is equivalent to conventional photometric stereo (CPS) with unknown light intensities. In this way, NeuralMPS reduces the difficulty of the non-Lambertian MPS problem by leveraging the well-studied non-Lambertian CPS methods. Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method.
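As background for the stated equivalence, the Lambertian image formation model assumed by existing MPS methods factors each spectral channel into a geometric component (the shading term) and per-channel spectral components (light intensity and albedo); a minimal per-pixel sketch under C spectrally multiplexed directional lights (names ours):

```python
import numpy as np

def lambertian_mps_pixel(normal, lights, light_intensities, spectral_albedo):
    """Per-channel measurement m_c = e_c * r_c * max(n . l_c, 0) for one pixel.

    lights:            (C, 3) unit light directions, one per spectral channel
    light_intensities: (C,)   per-channel intensities e_c
    spectral_albedo:   (C,)   per-channel albedo r_c
    """
    shading = np.clip(lights @ normal, 0.0, None)          # (C,) geometric component
    return light_intensities * spectral_albedo * shading   # (C,) spectral components
```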
Reflections on glossy objects contain valuable and hidden information about the surrounding environment. By converting these objects into cameras, we can unlock exciting applications, including imaging beyond the camera's field-of-view and from seemingly impossible vantage points, e.g. from reflections on the human eye. However, this task is challenging because reflections depend jointly on object geometry, material properties, the 3D environment, and the observer viewing direction. Our approach converts glossy objects with unknown geometry into radiance-field cameras to image the world from the object's perspective. Our key insight is to convert the object surface into a virtual sensor that captures cast reflections as a 2D projection of the 5D environment radiance field visible to the object. We show that recovering the environment radiance fields enables depth and radiance estimation from the object to its surroundings in addition to beyond field-of-view novel-view synthesis, i.e. rendering of novel views that are only directly-visible to the glossy object present in the scene, but not the observer. Moreover, using the radiance field we can image around occluders caused by close-by objects in the scene. Our method is trained end-to-end on multi-view images of the object and jointly estimates object geometry, diffuse radiance, and the 5D environment radiance field.
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of the object illuminated by a single unknown lighting condition. This enables the rendering of novel views of the object under arbitrary environment lighting and the editing of the object's material properties. The key to our approach, which we call Neural Radiance Factorization (NeRFactor), is to distill the volumetric geometry of a Neural Radiance Field (NeRF) [Mildenhall et al. 2020] representation of the object into a surface representation, and then jointly refine the geometry while solving for the spatially-varying reflectance and environment lighting. Specifically, NeRFactor recovers 3D neural fields of surface normals, light visibility, albedo, and bidirectional reflectance distribution functions (BRDFs) without any supervision, using only a re-rendering loss, simple smoothness priors, and a data-driven BRDF prior learned from real-world BRDF measurements. By explicitly modeling light visibility, NeRFactor is able to separate shadows from albedo and synthesize realistic soft or hard shadows under arbitrary lighting conditions. NeRFactor is able to recover convincing 3D models in this challenging capture setting for both synthetic and real scenes. Qualitative and quantitative experiments show that NeRFactor outperforms classic and deep learning-based state-of-the-art methods across various tasks. Our videos, code, and data are available at people.csail.mit.edu/xiuming/projects/nerfactor/.
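A simplified sketch of the kind of one-bounce re-rendering that such a loss evaluates: outgoing radiance is a visibility-masked, cosine-weighted sum over sampled incident light directions, with the BRDF split into a Lambertian albedo term plus a specular term. This is our own schematic, not NeRFactor's implementation (solid-angle weights omitted).

```python
import numpy as np

def render_pixel(albedo, specular_brdf, normal, light_dirs, light_radiance, visibility):
    """One-bounce outgoing radiance at a surface point.

    light_dirs:     (N, 3) unit directions of environment light samples
    light_radiance: (N, 3) incoming radiance per sample
    visibility:     (N,)   0/1 light visibility per sample
    specular_brdf:  (N, 3) specular BRDF value per sample
    """
    cosines = np.clip(light_dirs @ normal, 0.0, None)       # (N,)
    brdf = albedo / np.pi + specular_brdf                    # (N, 3)
    weights = (visibility * cosines)[:, None]                # (N, 1)
    return (brdf * light_radiance * weights).sum(axis=0)     # (3,) RGB radiance
```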
Uncalibrated photometric stereo (UPS) is challenging due to the inherent ambiguity brought by the unknown light. Existing solutions alleviate the ambiguity either by explicitly associating reflectance with light conditions or by resolving light conditions in a supervised manner. This paper establishes an implicit relation between light clues and light estimation and solves UPS in an unsupervised manner. The key idea is to represent the reflectance as four neural intrinsic fields, i.e., position, light, specular, and shadow, where the neural light field is implicitly associated with the light clues of specular reflectance and cast shadows. The unsupervised, joint optimization of the neural intrinsic fields is free from training data bias and accumulated error, and fully exploits all observed pixel values for UPS. Our method achieves an advantage over state-of-the-art UPS methods on public and self-collected datasets, under both regular and challenging setups. The code will be released soon.
We propose to tackle the multi-view photometric stereo problem using an extension of Neural Radiance Fields (NeRFs), conditioned on light source direction. The geometric part of our neural representation predicts surface normal direction, allowing us to reason about local surface reflectance. The appearance part of our neural representation is decomposed into a neural bidirectional reflectance function (BRDF), learned as part of the fitting process, and a shadow prediction network (conditioned on light source direction), allowing us to model the apparent BRDF. This balance of learned components with inductive biases based on physical image formation models allows us to extrapolate far from the light source and viewer directions observed during training. We demonstrate our approach on a multi-view photometric stereo benchmark and show that competitive performance can be obtained with the neural density representation of a NeRF.
Physically based rendering of complex scenes can be prohibitively costly with a potentially unbounded and uneven distribution of complexity across the rendered image. The goal of an ideal level of detail (LoD) method is to make rendering costs independent of the 3D scene complexity, while preserving the appearance of the scene. However, current prefiltering LoD methods are limited in the appearances they can support due to their reliance on approximate models and other heuristics. We propose the first comprehensive multi-scale LoD framework for prefiltering 3D environments with complex geometry and materials (e.g., the Disney BRDF), while maintaining the appearance with respect to the ray-traced reference. Using a multi-scale hierarchy of the scene, we perform a data-driven prefiltering step to obtain an appearance phase function and directional coverage mask at each scale. At the heart of our approach is a novel neural representation that encodes this information into a compact latent form that is easy to decode inside a physically based renderer. Once a scene is baked out, our method requires no original geometry, materials, or textures at render time. We demonstrate that our approach compares favorably to state-of-the-art prefiltering methods and achieves considerable savings in memory for complex scenes.
We address the problem of modeling light scattering in homogeneous translucent materials and estimating their scattering parameters. The scattering phase function is one such parameter, and it influences the distribution of scattered radiation. It is the most complex and challenging parameter to model in practice, and empirical phase functions are commonly used. Empirical phase functions, such as the Henyey-Greenstein (HG) phase function, are typically used but are limited to a specific range of scattering materials. This limitation raises concerns for inverse rendering problems, where the target material is generally unknown. In such cases, a more general phase function is preferred. Although such a general phase function exists in a basis such as Legendre polynomials [Fowler 1983], inverse rendering with this phase function is not straightforward. This is because the basis polynomials may be negative somewhere, whereas a phase function cannot be. This study proposes a novel general phase function that avoids this issue, along with an inverse rendering application using this phase function. The proposed phase function was positively evaluated on a wide range of materials modeled with Mie scattering theory. Scattering parameter estimation with the proposed phase function was evaluated through simulations and real-world experiments.
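For concreteness, the empirical Henyey-Greenstein phase function mentioned above and a truncated Legendre-basis expansion are sketched below; the latter illustrates the negativity issue, since a truncated series with arbitrary coefficients can dip below zero while a physical phase function cannot (the coefficient convention here is an assumption).

```python
import numpy as np
from numpy.polynomial import legendre

def henyey_greenstein(cos_theta, g):
    """HG phase function, normalized over the sphere; g in (-1, 1) is the anisotropy."""
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def legendre_phase(cos_theta, coeffs):
    """Truncated Legendre expansion p(cos_theta) = sum_l c_l P_l(cos_theta) / (4*pi).
    With arbitrary coefficients the sum can become negative, which is the issue the
    proposed phase function is designed to avoid."""
    return legendre.legval(cos_theta, coeffs) / (4.0 * np.pi)
```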