We present ResNeRF, a novel geometry-guided two-stage framework for novel view synthesis of indoor scenes. Observing that good geometry greatly boosts the performance of novel view synthesis, and to avoid the geometry ambiguity issue, we propose to characterize the density distribution of a scene with a base density estimated from the scene geometry and a residual density parameterized by that geometry. In the first stage, we focus on geometry reconstruction based on an SDF representation, which yields a good surface of the scene as well as a sharp density. In the second stage, the residual density is learned on top of the SDF from the first stage to encode more details about the appearance. In this way, our method can better learn the density distribution with the geometry prior for high-fidelity novel view synthesis while preserving the 3D structures. Experiments on large-scale indoor scenes with many less-observed and textureless areas show that, with the good 3D surface, our method achieves state-of-the-art performance for novel view synthesis.
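The ResNeRF abstract does not spell out its exact density parameterization. As a minimal sketch of the base-plus-residual idea, the following assumes a VolSDF-style Laplace-CDF mapping from signed distance to base density; the names `base_density`, `total_density`, `alpha`, and `beta` are illustrative, not from the paper:

```python
import math

def base_density(sdf, alpha=1.0, beta=0.1):
    """VolSDF-style base density: alpha times the Laplace CDF of the negated
    signed distance. Near zero outside the surface, close to alpha inside;
    beta controls how sharply density concentrates at the surface."""
    x = -sdf
    if x <= 0:
        psi = 0.5 * math.exp(x / beta)
    else:
        psi = 1.0 - 0.5 * math.exp(-x / beta)
    return alpha * psi

def total_density(sdf, residual, alpha=1.0, beta=0.1):
    """Second-stage density: the base density from the stage-1 SDF plus a
    learned, non-negative residual that encodes extra appearance detail."""
    return base_density(sdf, alpha, beta) + max(residual, 0.0)
```

Because the base term already concentrates density at the reconstructed surface, the residual only needs to model what the sharp geometry misses.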
Reconstructing 3D indoor scenes from 2D images is an important task in many computer vision and graphics applications. A major challenge in this task is that the texture-less areas in typical indoor scenes make it difficult for existing methods to produce satisfactory reconstruction results. We present a new method, named NeuRIS, for high-quality reconstruction of indoor scenes. The key idea of NeuRIS is to integrate estimated normals of the indoor scene as a prior in a neural rendering framework for reconstructing large texture-less shapes and, importantly, to do this in an adaptive manner so that irregular shapes with fine details can also be reconstructed well. Specifically, we evaluate the faithfulness of the normal priors on the fly by checking the multi-view consistency of the reconstruction during optimization. Only the normal priors accepted as faithful are used for 3D reconstruction, which typically happens in regions of smooth shapes that may have weak texture. For regions with small objects or thin structures, however, the normal priors are usually unreliable, and we can rely only on the visual features of the input images, since such regions typically contain relatively rich visual features (e.g., shading variations and boundary contours). Extensive experiments show that NeuRIS significantly outperforms state-of-the-art methods in terms of reconstruction quality.
Neural implicit representations have shown their effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images. However, most approaches focus on holistic scene representation while neglecting the individual objects inside it, thereby limiting potential downstream applications. In order to learn object-compositional representations, a few works incorporate 2D semantic maps as a cue in training to grasp the difference between objects. But they neglect the strong connection between object geometry and instance semantic information, which leads to inaccurate modeling of individual instances. This paper proposes a novel framework, ObjectSDF, to build an object-compositional neural implicit representation with high fidelity in 3D reconstruction and object representation. Observing the ambiguity of the conventional volume rendering pipeline, we model the scene by combining the signed distance functions (SDFs) of individual objects to exert explicit surface constraints. The key to distinguishing different instances is to revisit the strong association between an individual object's SDF and its semantic label. In particular, we convert the semantic information into a function of the object SDF and develop a unified and compact representation for scenes and objects. Experimental results show the superiority of the ObjectSDF framework in representing both the holistic object-compositional scene and individual instances. Code can be found at https://qianyiwu.github.io/objectsdf/
Neural surface reconstruction aims to reconstruct accurate 3D surfaces based on multi-view images. Previous methods based on neural volume rendering mostly train fully implicit models, which require hours of training for a single scene. Recent efforts explore explicit volumetric representations, which substantially accelerate the optimization process by memorizing significant information in learnable voxel grids. However, these voxel-based methods often struggle to reconstruct fine-grained geometry. Through empirical studies, we found that high-quality surface reconstruction hinges on two key factors: the capability of constructing a coherent shape and the precise modeling of color-geometry dependency. In particular, the latter is the key to the accurate reconstruction of fine details. Inspired by these findings, we develop Voxurf, a voxel-based approach for efficient and accurate neural surface reconstruction, which consists of two stages: 1) leverage a learnable feature grid to construct the color field and obtain a coherent coarse shape, and 2) refine detailed geometry with a dual-color network that captures the precise color-geometry dependency. We further introduce a hierarchical geometry feature to enable information sharing across voxels. Our experiments show that Voxurf achieves high efficiency and high quality at the same time. On the DTU benchmark, Voxurf achieves higher reconstruction quality with a 20x training speedup compared with state-of-the-art methods.
Neural implicit 3D representations have emerged as a powerful paradigm for reconstructing surfaces from multi-view images and synthesizing novel views. Unfortunately, existing methods such as DVR or IDR require accurate per-pixel object masks as supervision. At the same time, neural radiance fields have revolutionized novel view synthesis. However, NeRF's estimated volume density does not admit accurate surface reconstruction. Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering with the same model. This unified perspective enables novel, more efficient sampling procedures and the ability to reconstruct accurate surfaces without input masks. We compare our method on the DTU and BlendedMVS datasets and a synthetic indoor dataset. Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
We present DD-NeRF, a novel generalizable implicit field for representing human body geometry and appearance from arbitrary input views. The core contribution is a double-diffusion mechanism that leverages a sparse convolutional neural network to build two volumes representing the human body at different levels: a coarse volume that exploits an unclothed deformable mesh to provide large-scale geometric guidance, and a detail volume that learns intricate geometry from local image features. We also employ a transformer network to aggregate image features and raw pixels across views to compute the final high-fidelity radiance field. Experiments on various datasets show that the proposed approach outperforms previous works in both geometry reconstruction and novel view synthesis quality.
In this paper, we present an efficient and robust deep learning solution for novel view synthesis of complex scenes. In our approach, a 3D scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color when reaching the image plane. For efficient novel view rendering, we adopt a two-plane parameterization of the light field, so that each ray is characterized by a 4D parameter. We then formulate the light field as a 4D function that maps 4D coordinates to corresponding color values. We train a deep fully connected network to optimize this implicit function and memorize the 3D scene. The scene-specific model is then used to synthesize novel views. Unlike previous methods that require dense view sampling to reliably render novel views, our method renders a novel view by sampling its rays and querying the color of each ray directly from the network, thus enabling high-quality light field rendering from a sparser set of training images. The network can optionally predict per-ray depth, enabling applications such as automatic refocusing. Our novel view synthesis results are comparable to the state of the art, and even superior in some challenging scenes with refraction and reflection. We achieve this while maintaining an interactive frame rate and a small memory footprint.
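The two-plane parameterization and the 4D-coordinates-to-color network described above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: `TinyLightFieldMLP` uses random weights where the real model would be trained per scene, and the plane positions `z0`, `z1` are arbitrary choices:

```python
import numpy as np

def two_plane_coords(origin, direction, z0=0.0, z1=1.0):
    """Intersect a ray with the planes z=z0 and z=z1 to obtain the 4D
    light-field coordinates (u, v, s, t) that index the network."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    t0 = (z0 - o[2]) / d[2]          # parameter where the ray hits z=z0
    t1 = (z1 - o[2]) / d[2]          # parameter where the ray hits z=z1
    u, v = o[:2] + t0 * d[:2]
    s, t = o[:2] + t1 * d[:2]
    return np.array([u, v, s, t])

class TinyLightFieldMLP:
    """Toy stand-in for the deep fully connected network: (u,v,s,t) -> RGB."""
    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (4, hidden))
        self.w2 = rng.normal(0.0, 0.5, (hidden, 3))

    def __call__(self, uvst):
        h = np.maximum(uvst @ self.w1, 0.0)            # ReLU hidden layer
        return 1.0 / (1.0 + np.exp(-(h @ self.w2)))    # sigmoid -> RGB in [0,1]
```

Rendering a novel view then amounts to generating one `(u, v, s, t)` tuple per pixel and querying the network, with no ray marching through a volume.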
With the success of neural volume rendering in novel view synthesis, neural implicit reconstruction with volume rendering has become popular. However, most methods optimize per-scene functions and are unable to generalize to novel scenes. We introduce VolRecon, a generalizable implicit reconstruction method with a Signed Ray Distance Function (SRDF). To reconstruct with fine details and little noise, we combine projection features, aggregated from multi-view features with a view transformer, and volume features interpolated from a coarse global feature volume. A ray transformer computes SRDF values of all the samples along a ray to estimate the surface location; these values are then used for volume rendering of color and depth. Extensive experiments on DTU and ETH3D demonstrate the effectiveness and generalization ability of our method. On DTU, our method outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable quality to MVSNet in full view reconstruction. Besides, our method shows good generalization ability on the large-scale ETH3D benchmark. Project page: https://fangjinhuawang.github.io/VolRecon.
We present a new neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF and its variants, use volume rendering to produce a neural scene representation with robust optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e., bias) for surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
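The core of the NeuS formulation is how a segment's opacity is derived from the SDF. A minimal sketch of the discrete per-segment opacity, assuming a logistic CDF with inverse scale `s` as the S-density (the names and the constant `s=10.0` are illustrative):

```python
import math

def logistic_cdf(x, s=10.0):
    """Phi_s: the sigmoid with inverse standard deviation s, applied to SDF values."""
    return 1.0 / (1.0 + math.exp(-s * x))

def neus_alpha(sdf_i, sdf_next, s=10.0):
    """Discrete opacity of one ray segment in NeuS-style volume rendering:
    the relative decrease of Phi_s(sdf) between two consecutive samples,
    clamped to be non-negative."""
    phi_i = logistic_cdf(sdf_i, s)
    phi_next = logistic_cdf(sdf_next, s)
    return max((phi_i - phi_next) / max(phi_i, 1e-8), 0.0)
```

A segment that crosses the zero-level set (SDF changing sign from positive to negative) receives high opacity, while segments far from the surface contribute almost nothing, which is what concentrates rendering weight at the surface.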
We present a novel neural surface reconstruction method called NeuralRoom for reconstructing room-sized indoor scenes directly from a set of 2D images. Recently, implicit neural representations have become a promising way to reconstruct surfaces from multi-view images due to their high-quality results and simplicity. However, implicit neural representations usually cannot reconstruct indoor scenes well because they suffer from severe shape-radiance ambiguity. We assume that an indoor scene consists of texture-rich regions and flat texture-less regions. In texture-rich regions, multi-view stereo can obtain accurate results. In flat areas, normal estimation networks usually produce good normal estimates. Based on the above observations, we reduce the possible spatial variation range of implicit neural surfaces with reliable geometric priors to alleviate shape-radiance ambiguity. Specifically, we use multi-view stereo results to limit the NeuralRoom optimization space and then use reliable geometric priors to guide NeuralRoom training. NeuralRoom then produces a neural scene representation that can render images consistent with the input training images. In addition, we propose a smoothing method called the perturbation-residual restriction to improve the accuracy and completeness of flat regions, which assumes that the sampling points on a local surface should have the same normal and a similar distance to the observation center. Experiments on the ScanNet dataset show that our method can reconstruct the texture-less areas of indoor scenes while maintaining the accuracy of details. We also apply NeuralRoom to more advanced multi-view reconstruction algorithms and significantly improve their reconstruction quality.
Representing visual signals with implicit representations (e.g., coordinate-based deep networks) has prevailed in many vision tasks. This work explores a new intriguing direction: training a stylized implicit representation, using a generalized approach that can be applied to various 2D and 3D scenarios. We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representations, neural radiance fields, and signed distance functions. Our solution is a unified Implicit Neural Stylization framework, dubbed INS. In contrast to vanilla implicit representations, INS decouples the ordinary implicit function into a style implicit module and a content implicit module, in order to separately encode the representations from the style image and the input scene. An amalgamation module is then applied to aggregate this information and synthesize the stylized output. To regularize the geometry in 3D scenes, we propose a novel self-distillation geometry consistency loss which preserves the geometric fidelity of the stylized scenes. Comprehensive experiments are conducted on multiple task settings, including novel view synthesis of complex scenes, stylization of implicit surfaces, and fitting images with MLPs. We further demonstrate that the learned representation is continuous not only spatially but also style-wise, leading to effortless interpolation between different styles and generating images with new mixed styles. Please refer to the videos on our project page for more view synthesis results: https://zhiwenfan.github.io/ins.
Virtual content creation and interaction play an important role in modern 3D applications such as AR and VR. Recovering detailed 3D models from real scenes can significantly expand the scope of such applications and has been studied for decades in the computer vision and computer graphics communities. We propose Vox-Surf, a voxel-based implicit surface representation. Our Vox-Surf divides the space into finite bounded voxels, where each voxel stores geometry and appearance information in its corner vertices. Thanks to the sparsity inherited from the voxel representation, Vox-Surf is suitable for almost any scenario and can be trained easily from multi-view images. We leverage a progressive training procedure to gradually extract important voxels for further optimization, so that only valid voxels are preserved, which greatly reduces the number of sampled points and increases rendering speed. The fine voxels can also serve as bounding volumes for collision detection. Experiments show that the Vox-Surf representation can learn delicate surface details and accurate colors with less memory and faster rendering speed than other methods. We also show that Vox-Surf can be more practical in scene editing and AR applications.
Neural Radiance Fields (NeRF) have revolutionized free-viewpoint rendering tasks and achieved impressive results. However, efficiency and accuracy problems hinder their wide application. To address these issues, we propose a Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy to perform real-time novel view rendering and unsupervised depth estimation on unseen scenes without per-scene optimization. Distinct from most existing generalized NeRFs, our framework infers unseen scenes at both the pixel scale and the geometry scale with only a few input images. More specifically, our method learns common attributes of novel-view synthesis with an encoder-decoder structure and a point-level learnable multi-view feature fusion module that helps avoid occlusion. To preserve scene characteristics in the generalized model, we introduce an unsupervised depth estimation module to derive the coarse geometry, narrow the ray sampling interval down to the proximity space of the estimated surface, and sample at the position of maximum expectation, constituting the Geometry-Aware Dynamic Sampling strategy (GADS). Moreover, we introduce a Multi-level Semantic Consistency loss (MSC) to assist more informative representation learning. Extensive experiments on indoor and outdoor datasets show that, compared with state-of-the-art generalized NeRF methods, GARF reduces samples by more than 25% while improving rendering quality and 3D geometry estimation.
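The interval-narrowing step of a geometry-aware sampling strategy like GADS can be illustrated with a small sketch. This is a simplified stand-in, not the paper's method: it only shows how an estimated surface depth shrinks the per-ray sampling range, and `margin`, `n_samples`, and uniform placement are illustrative assumptions:

```python
import numpy as np

def geometry_aware_samples(t_near, t_far, depth_est, n_samples=8, margin=0.5):
    """Narrow the per-ray sampling interval from the full [t_near, t_far]
    range down to a band around the coarsely estimated surface depth, then
    place samples uniformly inside that band."""
    lo = max(t_near, depth_est - margin)
    hi = min(t_far, depth_est + margin)
    return np.linspace(lo, hi, n_samples)
```

Because the band around the estimated surface is much shorter than the full near-far range, far fewer samples are needed per ray, which is where the reported sample reduction comes from.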
Recent methods for neural surface representation and rendering, for example NeuS, have demonstrated remarkably high-quality reconstruction of static scenes. However, the training of NeuS takes an extremely long time (8 hours), which makes it almost impossible to apply it to dynamic scenes with thousands of frames. We propose a fast neural surface reconstruction approach, called NeuS2, which achieves two orders of magnitude of acceleration without compromising reconstruction quality. To accelerate the training process, we integrate multi-resolution hash encodings into a neural surface representation and implement our whole algorithm in CUDA. We also present a lightweight calculation of second-order derivatives tailored to our networks (i.e., ReLU-based MLPs), which achieves a factor-of-two speedup. To further stabilize training, a progressive learning strategy is proposed to optimize multi-resolution hash encodings from coarse to fine. In addition, we extend our method to reconstruct dynamic scenes with an incremental training strategy. Our experiments on various datasets demonstrate that NeuS2 significantly outperforms the state-of-the-art methods in both surface reconstruction accuracy and training speed. The video is available at https://vcai.mpi-inf.mpg.de/projects/NeuS2/ .
Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with densely captured input images. However, the geometry of NeRF is extremely under-constrained given sparse views, resulting in significant degradation of novel view synthesis quality. Inspired by self-supervised depth estimation methods, we propose StructNeRF, a solution to novel view synthesis for indoor scenes with sparse inputs. StructNeRF leverages the structural hints naturally embedded in multi-view inputs to handle the unconstrained geometry problem in NeRF. Specifically, it tackles textured and non-textured regions separately: a patch-based multi-view consistent photometric loss is proposed to constrain the geometry of textured regions, while non-textured ones are explicitly restricted to be 3D-consistent planes. Through the dense self-supervised depth constraints, our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data. Extensive experiments on several real-world datasets demonstrate that StructNeRF surpasses state-of-the-art methods for indoor scenes with sparse inputs, both quantitatively and qualitatively.
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state of the art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: https://github.com/facebookresearch/NSVF.
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation. The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints by imposing an entropy constraint on the density along each ray. In addition, to alleviate the potential degenerate issue when all training images are acquired from almost redundant viewpoints, we further incorporate a spatial smoothness constraint into the estimated images by restricting the information gain between rays from a pair of slightly different viewpoints. The main idea of our algorithm is to make reconstructed scenes compact along individual rays and consistent across nearby rays. The proposed regularizers can be plugged into most of the existing neural volume rendering techniques based on NeRF in a straightforward way. Despite its simplicity, we achieve consistently improved performance by large margins compared to existing neural view synthesis methods on multiple standard benchmarks. Our project website is available at http://cvlab.snu.ac.kr/research/infonerf.
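The per-ray entropy constraint described above can be sketched in a few lines: normalize the sampled densities along a ray into a discrete distribution and penalize its Shannon entropy. This is a simplified illustration of the idea, not the paper's exact loss (which also handles low-density rays and the pairwise information-gain term):

```python
import numpy as np

def ray_entropy(densities, eps=1e-10):
    """Shannon entropy of the normalized density distribution along one ray.
    Minimizing this pushes the density mass to concentrate at a single
    surface crossing instead of spreading along the ray."""
    d = np.asarray(densities, dtype=float)
    p = d / (d.sum() + eps)                    # normalize to a distribution
    return float(-(p * np.log(p + eps)).sum())
```

A ray whose density is concentrated at one sample has near-zero entropy, while a ray with mass smeared uniformly over n samples has entropy log(n), so the regularizer directly rewards compactness along the ray.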
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per scene, leading to prohibitive reconstruction time. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30x faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. Experiments on the DTU, NeRF Synthetic, ScanNet, and Tanks and Temples datasets demonstrate that Point-NeRF can surpass the existing methods and achieve state-of-the-art results.
This paper addresses the challenge of reconstructing an animatable human model from a multi-view video. Some recent works have proposed to decompose a non-rigidly deforming scene into a canonical neural radiance field and a set of deformation fields that map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images. However, they represent the deformation field as a translational vector field or SE(3) field, which makes the optimization highly under-constrained. Moreover, these representations cannot be explicitly controlled by input motions. Instead, we introduce a pose-driven deformation field based on the linear blend skinning algorithm, which combines a blend weight field and the 3D human skeleton to produce observation-to-canonical correspondences. Since 3D human skeletons are more observable, they can regularize the learning of the deformation field. Moreover, the pose-driven deformation field can be controlled by input skeletal motions to generate new deformation fields to animate the canonical human model. Experiments show that our approach significantly outperforms recent human modeling methods. The code is available at https://zju3dv.github.io/animatable_nerf/.
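The linear blend skinning step at the heart of the pose-driven deformation field can be sketched as follows. This is the textbook LBS operation, not the paper's full pipeline (which learns the blend weight field jointly with the radiance field); the per-bone rigid transforms and weights here are illustrative inputs:

```python
import numpy as np

def linear_blend_skinning(point, weights, transforms):
    """Map a 3D point through a blend-weighted combination of per-bone rigid
    transforms. Each transform is a (rotation R, translation t) pair; the
    weights are the point's skinning weights over the bones."""
    point = np.asarray(point, dtype=float)
    out = np.zeros(3)
    for w, (R, t) in zip(weights, transforms):
        out += w * (np.asarray(R) @ point + np.asarray(t))
    return out
```

Given observation-to-canonical bone transforms derived from the tracked skeleton, this maps observed points into the canonical space; swapping in transforms for a new skeletal pose animates the canonical model.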
In this work, we develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint human body synthesis under settings with sparse camera views. Although existing NeRF-based methods can synthesize rather realistic details of the human body, they tend to produce poor results when the input has self-occlusion, especially for unseen humans under sparse views. Moreover, these methods often require a large number of sampling points for rendering, which leads to low efficiency and limits their real-world applicability. To address these challenges, we propose a Geometry-guided Progressive NeRF (GP-NeRF). In particular, to better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach that utilizes the estimated geometry prior to integrate the incomplete information from the input views and to construct a complete geometry volume for the target human body. Meanwhile, to achieve higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline, which leverages the geometric feature volume and the predicted density values to progressively reduce the number of sampling points and speed up the rendering process. Experiments on the ZJU-MoCap and THuman datasets show that our method significantly outperforms the state of the art across multiple generalization settings, while the time cost is reduced by more than 70% via our efficient progressive rendering pipeline.