Neural implicit 3D representations have emerged as a powerful paradigm for reconstructing surfaces from multi-view images and synthesizing novel views. Unfortunately, existing methods such as DVR or IDR require accurate per-pixel object masks as supervision. At the same time, neural radiance fields have revolutionized novel view synthesis. However, NeRF's estimated volume density does not admit accurate surface reconstruction. Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering using the same model. This unified perspective enables novel, more efficient sampling procedures and the ability to reconstruct accurate surfaces without input masks. We compare our method on DTU, BlendedMVS, and a synthetic indoor dataset. Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
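As a point of reference for this unified formulation, occupancy-based alpha compositing replaces NeRF's density-derived alpha with the occupancy value itself (a schematic reading of the abstract; the notation below is assumed, not quoted from the paper):

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} o(\mathbf{x}_i)\, \prod_{j<i} \big(1 - o(\mathbf{x}_j)\big)\, c(\mathbf{x}_i, \mathbf{d}),$$

where o(x) in [0, 1] is the predicted occupancy and c the predicted color. When o hardens into a step function, only the first occupied sample contributes, and the same model reduces to surface rendering.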
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision, easily get trapped in local minima, and therefore struggle with reconstructing objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF and its variants, use volume rendering to produce a neural scene representation with robust optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult, because the representation does not carry sufficient surface constraints. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e., bias) in surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
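For context, the bias-free weighting described above is commonly written as follows (a sketch; s is the sigmoid sharpness, f the SDF, and p(t) a point along the ray, with notation assumed):

$$\Phi_s(x) = \frac{1}{1+e^{-sx}}, \qquad \rho(t) = \max\!\left( \frac{-\frac{\mathrm{d}\Phi_s}{\mathrm{d}t}\big(f(\mathbf{p}(t))\big)}{\Phi_s\big(f(\mathbf{p}(t))\big)},\; 0 \right),$$

so the rendering weight w(t) = T(t)ρ(t), with T(t) the accumulated transmittance, peaks at the zero crossing of the SDF along the ray rather than in front of it.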
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision, which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
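The implicit-differentiation insight can be summarized in one identity (notation assumed: ray origin r_0, direction w, predicted surface depth d̂, implicit field f_θ with level set τ): differentiating the level-set condition along the fixed ray gives the depth gradient analytically,

$$f_\theta(\hat{\mathbf{p}}) = \tau,\quad \hat{\mathbf{p}} = \mathbf{r}_0 + \hat{d}\,\mathbf{w} \;\Longrightarrow\; \frac{\partial \hat{d}}{\partial \theta} = -\Big(\nabla_{\mathbf{p}} f_\theta(\hat{\mathbf{p}}) \cdot \mathbf{w}\Big)^{-1}\, \frac{\partial f_\theta(\hat{\mathbf{p}})}{\partial \theta},$$

so no explicit mesh or discretization is needed to backpropagate through the rendered depth.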
Neural volume rendering has recently become increasingly popular due to its success in synthesizing novel views of a scene from a sparse set of input images. So far, the geometry learned by neural volume rendering techniques has been modeled with a generic density function. Furthermore, the geometry itself is extracted using an arbitrary level set of the density function, leading to noisy, often low-fidelity reconstructions. The goal of this paper is to improve geometry representation and reconstruction in neural volume rendering. We achieve this by modeling the volume density as a function of the geometry. This is in contrast to previous work that models the geometry as a function of the volume density. In more detail, we define the volume density function as the Laplace cumulative distribution function (CDF) applied to a signed distance function (SDF) representation. This simple density representation has three benefits: (i) it provides a useful inductive bias for the geometry learned in the neural volume rendering process; (ii) it facilitates a bound on the opacity approximation error, leading to accurate sampling of the viewing rays. Accurate sampling is important to provide a precise coupling of geometry and radiance; (iii) it allows efficient unsupervised disentanglement of shape and appearance in volume rendering. Applying this new density representation to challenging multi-view scene datasets produces high-quality geometry reconstructions that outperform relevant baselines. Furthermore, switching shape and appearance between scenes is possible thanks to the disentanglement of the two.
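Concretely, the density parameterization described above takes the form (with α, β > 0 learnable and d_Ω the signed distance, positive outside the object):

$$\sigma(\mathbf{x}) = \alpha\, \Psi_\beta\big(-d_\Omega(\mathbf{x})\big), \qquad \Psi_\beta(s) = \begin{cases} \frac{1}{2} \exp\!\big(\tfrac{s}{\beta}\big), & s \le 0,\\[2pt] 1 - \frac{1}{2}\exp\!\big(-\tfrac{s}{\beta}\big), & s > 0, \end{cases}$$

i.e. the density is approximately α inside the shape, decays smoothly across the boundary at a rate controlled by β, and vanishes far outside.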
Neural implicit surfaces have become an important technique for multi-view 3D reconstruction, but their accuracy remains limited. In this paper, we argue that this stems from the difficulty of learning and rendering high-frequency textures with neural networks. We therefore propose to add a direct photo-consistency term across the different views to the standard neural rendering optimization. Intuitively, we optimize the implicit geometry so that it warps the views onto each other in a consistent way. We demonstrate that two elements are key to the success of this approach: (i) warping entire patches, using the predicted occupancy and normals of the 3D points along each ray, and measuring their similarity with a robust structural similarity measure (SSIM); (ii) handling visibility and occlusion so that incorrect warps are not given too much importance while still encouraging the reconstruction to be as complete as possible. We evaluate our approach, called NeuralWarp, on the standard DTU and EPFL benchmarks and show that it outperforms state-of-the-art unsupervised implicit surface reconstruction by over 20% on both datasets.
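The patch warp can be realized with the classic plane-induced homography; the sketch below states the standard formula under assumed notation (relative pose (R, t) from view i to view j, intrinsics K_i and K_j, and the local plane through the 3D point x with predicted normal n):

$$H_{i\to j} = K_j \left( R + \frac{\mathbf{t}\,\mathbf{n}^\top}{d} \right) K_i^{-1}, \qquad d = \mathbf{n}^\top \mathbf{x},$$

so an entire patch around the projection of x in view i is mapped into view j and compared with SSIM, rather than comparing single pixels.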
Obtaining high-quality 3D reconstructions of room-scale scenes is crucial for upcoming AR or VR applications. These range from mixed-reality applications for teleconferencing, virtual measuring, and virtual room planning, to robotic applications. While volumetric view synthesis methods that use neural radiance fields (NeRF) show promise in reproducing the appearance of an object or scene, they do not reconstruct an actual surface. Density-based volumetric surface representations lead to artifacts when a surface is extracted with Marching Cubes, since during optimization densities are accumulated along rays and are not used at a single sample point in isolation. We instead propose to represent the surface with an implicit function (a truncated signed distance function). We show how to incorporate this representation into the NeRF framework, and extend it to use depth measurements from a commodity RGB-D sensor such as a Kinect. In addition, we propose a pose and camera refinement technique that improves the overall reconstruction quality. In contrast to concurrent work on integrating depth priors into NeRF, which focuses on novel view synthesis, our approach is able to produce high-quality, metrically accurate 3D reconstructions.
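A minimal sketch of how a truncated SDF can drive the rendering weights while absorbing RGB-D depth supervision is shown below. All names and the specific bell-shaped weighting are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def tsdf_to_weights(sdf, trunc=0.05, sharpness=50.0):
    """Bell-shaped ray weights that peak at the zero crossing of a
    truncated SDF (an illustrative stand-in for an SDF-to-density map)."""
    s = torch.clamp(sdf, -trunc, trunc)
    w = torch.sigmoid(sharpness * s) * torch.sigmoid(-sharpness * s)
    return w / (w.sum(dim=-1, keepdim=True) + 1e-8)  # normalize per ray

def rgbd_losses(weights, colors, z_vals, gt_rgb, gt_depth, lambda_d=0.1):
    rgb = (weights[..., None] * colors).sum(dim=-2)  # composited color
    depth = (weights * z_vals).sum(dim=-1)           # expected ray depth
    valid = gt_depth > 0                             # sensor depth may be missing
    loss_rgb = ((rgb - gt_rgb) ** 2).mean()
    loss_depth = (depth[valid] - gt_depth[valid]).abs().mean()
    return loss_rgb + lambda_d * loss_depth
```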
[Figure 1, from https://video-nerf.github.io] Our method takes a single casually captured video as input and learns a space-time neural irradiance field. (Top) Sample frames from the input video. (Middle) Novel-view images rendered from textured meshes constructed from depth maps. (Bottom) Our results rendered from the proposed space-time neural irradiance field.
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
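The classic volume rendering quadrature referenced above is, in NeRF's notation:

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \big(1 - e^{-\sigma_i \delta_i}\big)\, \mathbf{c}_i, \qquad T_i = \exp\!\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big), \qquad \delta_i = t_{i+1} - t_i,$$

where σ_i and c_i are the density and color predicted at the i-th sample along the ray; every term is differentiable, which is what allows training from posed images alone.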
Reconstructing 3D indoor scenes from 2D images is an important task in many computer vision and graphics applications. A major challenge in this task is that the texture-less regions in typical indoor scenes make it difficult for existing methods to produce satisfactory reconstruction results. We propose a new method, named NeuRIS, for high-quality reconstruction of indoor scenes. The key idea of NeuRIS is to integrate estimated normals of the indoor scene as a prior in a neural rendering framework for reconstructing large texture-less shapes and, importantly, to do this in an adaptive manner so that irregular shapes with fine details can also be reconstructed well. Specifically, we evaluate the faithfulness of the normal priors on the fly by checking the multi-view consistency of the reconstruction during the optimization process. Only the normal priors accepted as faithful are used for 3D reconstruction, which typically happens in regions of smooth shapes, possibly with weak texture. For regions with small objects or thin structures, however, the normal priors are usually unreliable, and we rely only on the visual features of the input images, since such regions typically contain relatively rich visual features (e.g., shading variations and boundary contours). Extensive experiments show that NeuRIS significantly outperforms the state-of-the-art methods in terms of reconstruction quality.
In this work we address the challenging problem of multiview 3D surface reconstruction. We introduce a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera. The geometry is represented as a zero level-set of a neural network, while the neural renderer, derived from the rendering equation, is capable of (implicitly) modeling a wide set of lighting conditions and materials. We trained our network on real world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset. We found our model to produce state of the art 3D surface reconstructions with high fidelity, resolution and detail.
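A key ingredient in this kind of pipeline is making the ray-surface intersection differentiable with respect to the network parameters θ. One standard construction (stated here schematically, with x_0 the intersection found by sphere tracing along direction v, and the gradient term treated as a constant) is:

$$\mathbf{x}(\theta) = \mathbf{x}_0 - \frac{\mathbf{v}}{\nabla_{\mathbf{x}} f(\mathbf{x}_0;\theta_0)\cdot\mathbf{v}}\, f(\mathbf{x}_0;\theta),$$

which agrees with the true intersection point to first order and lets gradients flow from the rendered color back into the geometry.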
Virtual content creation and interaction play an important role in modern 3D applications such as AR and VR. Recovering detailed 3D models from real scenes can significantly expand the scope of these applications and has been studied for decades in the computer vision and computer graphics communities. We propose Vox-Surf, a voxel-based implicit surface representation. Vox-Surf divides the space into a finite set of voxels, each of which stores geometry and appearance information at its corner vertices. Thanks to the sparsity inherited from the voxel representation, Vox-Surf is suitable for almost any scenario and can be easily trained from multi-view images. We employ a progressive training procedure that gradually extracts the important voxels for further optimization, so that only valid voxels are preserved; this greatly reduces the number of sample points and increases rendering speed. The fine voxels can also be treated as bounding volumes for collision detection. Experiments show that the Vox-Surf representation can learn fine surface details and accurate colors with less memory and faster rendering than other methods. We also show that Vox-Surf can be more practical for scene editing and AR applications.
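Since each voxel stores features at its corner vertices, a query inside the voxel is answered by trilinear interpolation; a minimal sketch (the array layout and names are assumptions for illustration):

```python
import numpy as np

def trilinear(corner_feats, u, v, w):
    """Interpolate (8, C) corner features at local coordinates
    (u, v, w) in [0, 1]^3; corners ordered by binary index xyz,
    z varying fastest: 000, 001, 010, 011, 100, 101, 110, 111."""
    c = corner_feats.reshape(2, 2, 2, -1)
    c = c[0] * (1 - u) + c[1] * u  # collapse x
    c = c[0] * (1 - v) + c[1] * v  # collapse y
    c = c[0] * (1 - w) + c[1] * w  # collapse z
    return c                       # (C,) interpolated feature
```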
Neural implicit representations have shown their effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images. However, most approaches focus on holistic scene representation while ignoring the individual objects inside it, which limits potential downstream applications. To learn an object-compositional representation, a few works use 2D semantic maps as cues during training to grasp the differences between objects. But they neglect the strong connection between object geometry and instance semantic information, which leads to inaccurate modeling of individual instances. This paper proposes a novel framework, ObjectSDF, to build object-compositional neural implicit representations with high fidelity in both 3D reconstruction and object representation. Observing the ambiguity of the conventional volume rendering pipeline, we model the scene by combining the signed distance functions (SDFs) of individual objects to exert explicit surface constraints. The key to distinguishing different instances is revisiting the strong association between an individual object's SDF and its semantic label. In particular, we convert the semantic information into a function of the object SDF and develop a unified and compact representation for the scene and its objects. Experimental results show the superiority of the ObjectSDF framework in representing both the holistic object-compositional scene and the individual instances. Code can be found at https://qianyiwu.github.io/objectsdf/
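Schematically, combining per-object SDFs into a scene while tying semantics to geometry can be written as follows (the min is the standard union operation for SDFs; the exact semantic mapping g used in the paper may differ from this assumed monotone form):

$$f_{\text{scene}}(\mathbf{x}) = \min_i f_i(\mathbf{x}), \qquad s_i(\mathbf{x}) = g\big(-f_i(\mathbf{x})\big),$$

so the semantic response for object i is large precisely where its SDF is small or negative, making rendered semantic maps a supervision signal that acts directly on each object's geometry.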
We present neural radiance fields for the rendering and temporal (4D) reconstruction of humans captured by sparse cameras or even from monocular video. Our approach combines ideas from neural scene representation, novel view synthesis, and implicit statistical geometric human representations, coupled using novel loss functions. Instead of learning a radiance field with a uniform occupancy prior, we constrain it with a structured implicit human body model represented using signed distance functions. This allows us to robustly fuse information from sparse views and to generalize well beyond the poses and views observed in training. Moreover, we apply geometric constraints to co-learn the structure of the observed subject, including both body and clothing, and to regularize the radiance field toward geometrically plausible solutions. Extensive experiments on multiple datasets demonstrate the robustness and accuracy of our method, its generalization capability significantly beyond a range of poses and views, and its statistical extrapolation beyond the observed shapes.
We present ResNeRF, a novel geometry-guided two-stage framework for indoor scene novel view synthesis. Motivated by the observation that good geometry greatly boosts the performance of novel view synthesis, and to avoid the geometry ambiguity issue, we propose to characterize the density distribution of the scene as a base density estimated from the scene geometry plus a residual density parameterized by that geometry. In the first stage, we focus on geometry reconstruction based on an SDF representation, which yields a good geometric surface of the scene as well as a sharp density. In the second stage, the residual density is learned on top of the SDF from the first stage to encode more details about the appearance. In this way, our method can better learn the density distribution with the geometry prior for high-fidelity novel view synthesis while preserving the 3D structures. Experiments on large-scale indoor scenes with many less-observed and textureless areas show that, given the good 3D surface, our method achieves state-of-the-art performance for novel view synthesis.
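Read schematically, the two-stage density decomposition amounts to (an assumed formalization of the abstract, not a quoted equation):

$$\sigma(\mathbf{x}) = \sigma_{\text{base}}\big(f(\mathbf{x})\big) + \Delta\sigma(\mathbf{x}),$$

where f is the SDF learned in stage one, σ_base maps it to a sharp density (e.g., in the style of NeuS or VolSDF), and the residual Δσ learned in stage two encodes the remaining appearance detail.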
With the success of neural volume rendering in novel view synthesis, neural implicit reconstruction with volume rendering has become popular. However, most methods optimize per-scene functions and are unable to generalize to novel scenes. We introduce VolRecon, a generalizable implicit reconstruction method with a Signed Ray Distance Function (SRDF). To reconstruct with fine details and little noise, we combine projection features, aggregated from multi-view features with a view transformer, and volume features interpolated from a coarse global feature volume. A ray transformer computes SRDF values for all the samples along a ray to estimate the surface location; these values are then used for volume rendering of color and depth. Extensive experiments on DTU and ETH3D demonstrate the effectiveness and generalization ability of our method. On DTU, our method outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable quality to MVSNet in full view reconstruction. Besides, our method shows good generalization ability on the large-scale ETH3D benchmark. Project page: https://fangjinhuawang.github.io/VolRecon.
The advent of generative radiance fields has significantly promoted the development of 3D-aware image synthesis. The cumulative rendering process in radiance fields makes these generative models much easier to train, since gradients are distributed over the entire volume, but it leads to diffuse object surfaces. Occupancy representations, in contrast to radiance fields, can inherently ensure deterministic surfaces. However, if we directly apply an occupancy representation to a generative model, it will only receive sparse gradients located on the object surface during training and will eventually suffer from convergence problems. In this paper, we propose Generative Occupancy Fields (GOF), a novel model based on generative radiance fields that can learn compact object surfaces without impeding training convergence. The key insight of GOF is a dedicated transition from the cumulative rendering of radiance fields to rendering with only the surface points, as the learned surface becomes increasingly accurate. In this way, GOF combines the advantages of the two representations in a unified framework. In practice, this training-time transition from radiance fields toward an occupancy representation is achieved in GOF by gradually shrinking the sampling region in the rendering process from the entire volume to a minimal neighboring region around the surface. Through comprehensive experiments on multiple datasets, we demonstrate that GOF can synthesize high-quality images with 3D consistency while simultaneously learning compact and smooth object surfaces. Code, models, and demo videos are available at https://shedontsui.github.io/projects/gof
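A minimal sketch of the shrinking sampling region described above (the linear schedule and the minimum band width are assumptions, not the paper's exact choices):

```python
import numpy as np

def sample_depths(t_near, t_far, t_surface, step, max_steps, n=32):
    """Shrink the per-ray sampling interval from the full volume
    [t_near, t_far] toward a narrow band around the predicted
    surface depth t_surface as training progresses."""
    k = min(step / max_steps, 1.0)
    half = 0.5 * (t_far - t_near)
    radius = (1.0 - k) * half + k * 0.05 * half  # full volume -> narrow band
    lo = max(t_near, t_surface - radius)
    hi = min(t_far, t_surface + radius)
    return np.linspace(lo, hi, n)
```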
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multiview posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
In this paper, we address the problem of multi-view 3D shape reconstruction. While recent differentiable rendering approaches associated with implicit shape representations have provided breakthrough performance, they are still computationally heavy and often lack precision in the estimated geometry. To overcome these limitations, we investigate a new computational approach built on a novel shape representation that is volumetric, as in recent differentiable rendering approaches, but parameterized with depth maps to better materialize the shape surface. The shape energy associated with this representation evaluates 3D geometry given color images; it requires no appearance prediction, yet still benefits from volumetric integration when optimized. In practice, we propose an implicit shape representation, the SRDF, based on signed distances that we parameterize by depths along camera rays. The associated shape energy considers the agreement between depth-prediction consistency and photometric consistency at 3D locations within the volumetric representation. Various photo-consistency priors can be accounted for, from a median-based baseline to learned criteria based on learned features. The approach retains the pixel accuracy of depth maps and is parallelizable. Our experiments on standard datasets show that it provides state-of-the-art results with respect to recent approaches with implicit shape representations as well as traditional multi-view stereo methods.
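The shape energy can be sketched as follows (a schematic formalization of the abstract; the symbols are assumed):

$$E\big(\{D_c\}\big) = \sum_{\mathbf{x}} \Big[ C_{\text{depth}}\big(\mathbf{x}; \{D_c\}\big) + \lambda\, C_{\text{photo}}\big(\mathbf{x}; \{I_c\}\big) \Big],$$

where the 3D samples x lie along camera rays at depths induced by the depth maps D_c, C_depth scores the agreement of the signed ray distances implied by the different depth maps, and C_photo scores the photo-consistency of the projections of x into the images I_c.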
Neural rendering can be used to reconstruct implicit representations of shapes without 3D supervision. However, current neural surface reconstruction methods have difficulty learning high-frequency details of shapes, and the reconstructed shapes are therefore often over-smoothed. We propose a novel method to improve the quality of surface reconstruction in neural rendering. We follow recent work in modeling the surface as a signed distance field. First, we provide a derivation that analyzes the relationship between the signed distance function, the volume density, the transparency function, and the weighting function used in the volume rendering equation. Second, we observe that attempting to jointly encode high-frequency and low-frequency components in a single signed distance function leads to unstable optimization. We propose to decompose the signed distance function into a base function and a displacement function, together with a coarse-to-fine strategy that gradually increases the high-frequency details. Finally, we propose an adaptive strategy that enables the optimization to focus on improving certain regions near the surface where the signed distance field exhibits artifacts. Our qualitative and quantitative results show that our method can reconstruct high-frequency surface details and achieve better surface reconstruction quality than the current state of the art. Code will be released at https://github.com/yiqun-wang/HFS.
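The decomposition can be written compactly (schematic; the paper's exact composition of the two fields may include additional terms):

$$f(\mathbf{x}) = f_b(\mathbf{x}) + f_d(\mathbf{x}),$$

where the base function f_b, limited to low-frequency positional encodings, captures the coarse shape, and the displacement function f_d is progressively granted higher-frequency encodings so that detail is added only once the base is stable.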
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene, leading to prohibitive reconstruction time. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be fine-tuned to surpass the visual quality of NeRF with 30x faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. Experiments on the DTU, NeRF Synthetic, ScanNet, and Tanks and Temples datasets demonstrate that Point-NeRF can surpass the existing methods and achieve state-of-the-art results.
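A minimal sketch of the neural-point aggregation step (schematic; Point-NeRF additionally processes features with per-point MLPs before and after this weighted pooling, and the names here are illustrative):

```python
import torch

def aggregate_point_features(query, pts, feats, k=8, eps=1e-8):
    """Blend the features of the k neural points nearest to a shading
    point using inverse-distance weights."""
    d2 = ((pts - query[None, :]) ** 2).sum(dim=-1)  # (P,) squared distances
    d2k, idx = torch.topk(d2, k, largest=False)     # k nearest neural points
    w = 1.0 / (d2k.sqrt() + eps)
    w = w / w.sum()
    return (w[:, None] * feats[idx]).sum(dim=0)     # (C,) blended feature
```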