We present a learnt system for multi-view stereopsis. In contrast to recent learning-based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming to geometric constraints, enabling reconstruction from far fewer images (even a single image) than required by classical approaches, as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches and recent learning-based methods.
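A minimal sketch of the unprojection step described above, assuming a pinhole camera and a regular voxel grid (the function and tensor names are ours, not the paper's): each voxel center is projected into the image and bilinearly samples a feature, which is differentiable end-to-end.

```python
# Hedged sketch of differentiable feature unprojection (not the authors' code).
import torch
import torch.nn.functional as F

def unproject_features(feat, K, R, t, voxel_centers):
    """feat: [1,C,H,W] image features; K: [3,3] intrinsics; R: [3,3], t: [3]
    world-to-camera pose; voxel_centers: [Nx,Ny,Nz,3] world coordinates."""
    _, C, H, W = feat.shape
    Nx, Ny, Nz, _ = voxel_centers.shape
    pts = voxel_centers.reshape(-1, 3)              # [N,3] world points
    cam = pts @ R.T + t                             # world -> camera coordinates
    pix = cam @ K.T                                 # camera -> homogeneous pixels
    uv = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)   # perspective divide
    # normalize to [-1,1] for grid_sample (differentiable bilinear sampling)
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    grid = grid.view(1, 1, -1, 2)
    sampled = F.grid_sample(feat, grid, align_corners=True)  # [1,C,1,N]
    return sampled.view(C, Nx, Ny, Nz)              # per-voxel feature volume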
In this work, we address the lack of 3D understanding of generative neural networks by introducing a persistent 3D feature embedding for view synthesis. To this end, we propose DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry. At its core, our approach is based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure. Our approach combines insights from 3D geometric computer vision with recent advances in learning image-to-image mappings based on adversarial loss functions. DeepVoxels is supervised, without requiring a 3D reconstruction of the scene, using a 2D re-rendering loss and enforces perspective and multi-view geometry in a principled manner. We apply our persistent 3D scene representation to the problem of novel view synthesis demonstrating high-quality results for a variety of challenging scenes.
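Reading the persistent embedding back out is the dual operation; a minimal sketch under assumed names (the grid layout and bounds are ours): sample points along camera rays trilinearly interpolate the learned 3D feature grid before a 2D rendering network produces the image.

```python
# Hedged sketch: trilinear sampling of a persistent 3D feature volume along rays.
import torch
import torch.nn.functional as F

def sample_voxel_features(volume, ray_points, grid_min, grid_max):
    """volume: [1,C,D,H,W] learned feature grid; ray_points: [R,S,3] world-space
    samples (R rays, S samples each); grid_min/grid_max: [3] grid bounds."""
    # normalize world coordinates into grid_sample's [-1,1] cube
    norm = 2 * (ray_points - grid_min) / (grid_max - grid_min) - 1
    grid = norm.view(1, 1, norm.shape[0], norm.shape[1], 3)
    feats = F.grid_sample(volume, grid, align_corners=True)  # [1,C,1,R,S]
    return feats.squeeze(0).squeeze(1)                       # [C,R,S] per-sample features
```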
We present an end-to-end deep learning architecture for depth map inference from multi-view images. In the network, we first extract deep visual image features, and then build the 3D cost volume upon the reference camera frustum via differentiable homography warping. Next, we apply 3D convolutions to regularize and regress the initial depth map, which is then refined with the reference image to generate the final output. Our framework flexibly adapts to arbitrary N-view inputs using a variance-based cost metric that maps multiple features into one cost feature. The proposed MVSNet is demonstrated on the large-scale indoor DTU dataset. With simple post-processing, our method not only significantly outperforms previous state-of-the-art methods, but is also several times faster at runtime. We also evaluate MVSNet on the complex outdoor Tanks and Temples dataset, where our method ranked first as of April 18, 2018 without any fine-tuning, showing the strong generalization ability of MVSNet.
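The variance-based cost metric is compact enough to write out: given N feature volumes warped onto the reference frustum, the cost is C = (1/N) Σᵢ (Vᵢ − V̄)², so identical features across views yield zero cost regardless of N. A minimal sketch:

```python
# Hedged sketch of the variance-based cost metric (tensor layout is ours).
import torch

def variance_cost(warped_feats):
    """warped_feats: [N,C,D,H,W] feature volumes from N views, already warped
    onto D depth planes of the reference camera frustum."""
    mean = warped_feats.mean(dim=0, keepdim=True)
    # population variance over views; identical features across views -> zero cost
    return ((warped_feats - mean) ** 2).mean(dim=0)   # [C,D,H,W] cost volume
```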
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
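The analytic depth gradient the abstract alludes to follows directly from implicit differentiation. A sketch of the derivation, with fθ the implicit field, τ the surface level, and r(d) = r₀ + d·w a camera ray whose surface depth is d̂:

```latex
f_\theta\big(r_0 + \hat{d}\,w\big) = \tau
\;\;\Rightarrow\;\;
\frac{\partial f_\theta}{\partial \theta}
  + \big(\nabla_{p} f_\theta \cdot w\big)\,\frac{\partial \hat{d}}{\partial \theta} = 0
\;\;\Rightarrow\;\;
\frac{\partial \hat{d}}{\partial \theta}
  = -\big(\nabla_{p} f_\theta \cdot w\big)^{-1}\,
    \frac{\partial f_\theta}{\partial \theta}
```

Depth gradients therefore require only the field's parameter gradient and one directional derivative along the ray, with no discretized volume in the loop.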
In this paper, we address the problem of multi-view 3D shape reconstruction. While recent differentiable rendering approaches associated with implicit shape representations have provided breakthrough performance, they are still computationally heavy and often lack precision on the estimated geometries. To overcome these limitations, we investigate a new computational approach built on a novel volumetric shape representation, as in recent differentiable rendering approaches, but parameterized with depth maps to better materialize the shape surface. The shape energy associated with this representation evaluates 3D geometry given color images and does not require appearance prediction, yet still benefits from volumetric integration when optimized. In practice, we propose an implicit shape representation, the SRDF, based on signed distances that we parameterize by depths along camera rays. The associated shape energy considers the agreement between depth-prediction consistency and photometric consistency, evaluated at 3D locations within the volumetric representation. Various photo-consistency priors can be accounted for, from a median-based baseline to more elaborate criteria such as learned functions. The approach retains pixel-level accuracy through the depth maps and is parallelizable. Our experiments on standard datasets show that it provides state-of-the-art results with respect to recent approaches with implicit shape representations as well as to traditional multi-view stereo methods.
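As a rough formalization of the ray-wise parameterization described above (the notation is ours, not the paper's): for a camera ray r whose depth map assigns surface depth D_r, a 3D sample at depth z along that ray gets the signed ray distance

```latex
\mathrm{SRDF}_r(z) \;=\; D_r - z
```

which is positive in front of the surface, zero on it, and negative behind. The shape energy then scores, at each volumetric sample, how consistently the SRDFs induced by the different views' depth maps agree with each other and with the photo-consistency of the corresponding 3D point.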
where the highest resolution is required, using facial performance capture as a case in point.
Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D-structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
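A minimal sketch of this style of learned ray marching, with hyperparameters and the positivity constraint on step lengths as our own assumptions: a scene MLP maps world coordinates to features, and a recurrent stepper decides how far to advance along each ray toward the surface.

```python
# Hedged sketch of SRN-style differentiable ray marching (not the authors' code).
import torch
import torch.nn as nn

class RayMarcher(nn.Module):
    def __init__(self, phi, feat_dim, hidden=16, steps=10):
        super().__init__()
        self.phi = phi                          # scene MLP: [N,3] -> [N,feat_dim]
        self.cell = nn.LSTMCell(feat_dim, hidden)
        self.to_step = nn.Linear(hidden, 1)
        self.steps = steps

    def forward(self, origins, dirs, d_init=0.05):
        d = torch.full((origins.shape[0], 1), d_init)
        state = None
        for _ in range(self.steps):
            x = origins + d * dirs              # current 3D point on each ray
            feat = self.phi(x)
            state = self.cell(feat) if state is None else self.cell(feat, state)
            # assumed: relu keeps the march monotone toward the surface
            d = d + torch.relu(self.to_step(state[0]))
        return origins + d * dirs, d            # estimated intersection and depth
```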
We present 3DVNet, a novel multi-view stereo (MVS) depth-prediction method that combines the advantages of depth-based and volumetric MVS approaches. Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions, resulting in highly accurate predictions that agree on the underlying scene geometry. Unlike existing depth-prediction techniques, our method uses a volumetric 3D convolutional neural network (CNN) that operates in world space on all depth maps jointly. The network can therefore learn meaningful scene-level priors. Furthermore, unlike existing volumetric MVS techniques, our 3D CNN operates on a feature-augmented point cloud, allowing for effective aggregation of multi-view information and flexible iterative refinement of depth maps. Experimental results show that our method exceeds state-of-the-art accuracy in both depth-prediction and 3D-reconstruction metrics on the ScanNet dataset, as well as on a selection of scenes from the TUM-RGBD and ICL-NUIM datasets. This shows that our method is both effective and generalizes to new settings.
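The refinement loop can be pictured as follows; this is a hedged sketch under our own names (backproject and scene_net are hypothetical stand-ins for the paper's components, not its API):

```python
# Hedged sketch of iterative depth refinement over a feature-augmented point cloud.
def refine_depths(depths, feats, cameras, backproject, scene_net, iters=3):
    for _ in range(iters):
        # build one world-space point cloud, with per-point image features,
        # from the current depth maps of all views jointly
        points, point_feats = backproject(depths, feats, cameras)
        # the scene-level network predicts per-view depth corrections
        residuals = scene_net(points, point_feats, cameras)
        depths = [d + r for d, r in zip(depths, residuals)]
    return depths
```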
Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods has emerged that performs reconstruction directly in a final 3D volumetric feature space. While these methods show impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we instead go back to the traditional route and show how focusing on high-quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully designed 2D CNN that exploits strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume, which allows informed depth-plane scoring. Our method achieves a significant lead over the current state of the art for depth estimation and for 3D reconstruction on ScanNet and 7-Scenes, yet still allows online real-time low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon
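A hedged sketch of the second contribution, with the exact metadata channels as our own assumption (the paper uses a richer set): matching scores are feature dot products per depth plane, and geometric metadata is concatenated per plane before scoring.

```python
# Hedged sketch of a plane-sweep cost volume with per-plane metadata channels.
import torch

def metadata_cost_volume(ref_feat, src_feats_warped, ray_dirs, depth_planes):
    """ref_feat: [C,H,W]; src_feats_warped: [N,D,C,H,W] source-view features
    warped to D depth planes; ray_dirs: [3,H,W]; depth_planes: [D]."""
    dot = (ref_feat.unsqueeze(0).unsqueeze(0) * src_feats_warped).sum(2)  # [N,D,H,W]
    matching = dot.mean(0)                                   # average over source views
    D, H, W = matching.shape
    depth_meta = depth_planes.view(D, 1, 1).expand(D, H, W)  # plane-depth channel
    ray_meta = ray_dirs[2:3].expand(D, H, W)                 # e.g. ray z-component
    return torch.stack([matching, depth_meta, ray_meta], 1)  # [D,3,H,W] for plane scoring
```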
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multi-view posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
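The classic volume rendering used here is fully differentiable because it is just an alpha-compositing sum; a minimal sketch (tensor layout is ours):

```python
# Hedged sketch of differentiable volume rendering along rays.
import torch

def volume_render(sigmas, colors, deltas):
    """sigmas: [R,S] densities; colors: [R,S,3]; deltas: [R,S] distances
    between consecutive samples along each of R rays (S samples per ray)."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)               # opacity per sample
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)      # transmittance so far
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alphas * trans                                 # compositing weights
    return (weights.unsqueeze(-1) * colors).sum(dim=1)       # [R,3] pixel colors
```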
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, combining ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways of injecting learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
Humans can perceive a scene in 3D from just a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to efficiently interact with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model which takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those viewpoints. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes, without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture the semantics and geometry of novel scenes with diverse layouts, object types, and shapes.
In visual computing, 3D geometry is represented in many different forms, including meshes, point clouds, voxel grids, level sets, and depth images. Each representation is suited to different tasks, making the transformation of one representation into another (a forward map) an important and common problem. We propose Omnidirectional Distance Fields (ODFs), a new 3D shape representation that encodes geometry by storing the depth to the object's surface from any 3D position along any viewing direction. Since rays are the fundamental unit of an ODF, it can easily be converted to and from common 3D representations such as meshes and point clouds. Unlike level-set methods, which are restricted to representing closed surfaces, ODFs are unsigned and can therefore model open surfaces (e.g., garments). We demonstrate that ODFs can be effectively learned with a neural network (NeuralODF) despite the inherent discontinuities at occlusion boundaries. We also introduce efficient forward-mapping algorithms for converting ODFs to and from common 3D representations. Specifically, we introduce an efficient Jumping Cubes algorithm for generating meshes from ODFs. Experiments show that NeuralODF can learn to capture high-quality shape by overfitting to a single object, and can also learn to generalize over common shape categories.
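The interface this representation exposes is worth spelling out; a minimal sketch under assumed names: one query of the learned field returns the distance at which a ray first hits the surface, so a surface sample is recovered with a single evaluation.

```python
# Hedged sketch of querying an ODF for surface points (names are ours).
import torch

def odf_surface_points(odf, positions, directions):
    """odf: callable [N,6] -> [N,1] predicted depth to the surface along each
    ray; positions, directions: [N,3], directions assumed unit-norm."""
    depth = odf(torch.cat([positions, directions], dim=-1))  # [N,1] ray depths
    return positions + depth * directions                    # [N,3] surface samples
```

Because the field is unsigned, nothing in this interface requires the surface to be closed, which is what allows open geometry such as garments.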
With the success of neural volume rendering in novel view synthesis, neural implicit reconstruction with volume rendering has become popular. However, most methods optimize per-scene functions and are unable to generalize to novel scenes. We introduce VolRecon, a generalizable implicit reconstruction method with Signed Ray Distance Function (SRDF). To reconstruct with fine details and little noise, we combine projection features, aggregated from multi-view features with a view transformer, and volume features interpolated from a coarse global feature volume. A ray transformer computes SRDF values of all the samples along a ray to estimate the surface location, which are used for volume rendering of color and depth. Extensive experiments on DTU and ETH3D demonstrate the effectiveness and generalization ability of our method. On DTU, our method outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves quality comparable to MVSNet in full view reconstruction. Besides, our method shows good generalization ability on the large-scale ETH3D benchmark. Project page: https://fangjinhuawang.github.io/VolRecon.
Neural implicit surfaces have become an important technique for multi-view 3D reconstruction, but their accuracy remains limited. In this paper, we argue that this comes from the difficulty of learning and rendering high-frequency textures with neural networks. We thus propose to add to the standard neural rendering optimization a direct photo-consistency term across the different views. Intuitively, we optimize the implicit geometry so that it warps the views onto each other in a consistent way. We demonstrate that two elements are key to the success of such an approach: (i) warping entire patches, using the predicted occupancy and normals of the 3D points along each ray, and measuring their similarity with a robust structural similarity measure; (ii) handling visibility and occlusion in such a way that incorrect warps are not given too much importance while encouraging the reconstruction to be as complete as possible. We evaluate our approach, dubbed NeuralWarp, on the standard DTU and EPFL benchmarks and show that it outperforms state-of-the-art unsupervised implicit surface reconstructions by over 20% on both datasets.
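A hedged sketch of the photo-consistency term, with the project/sample/ssim helpers as hypothetical stand-ins (the paper additionally warps full patches via the surface tangent plane and handles occlusion, which this sketch omits):

```python
# Hedged sketch of patch-warping photo-consistency between two views.
def patch_warp_loss(surface_pts, ref_patch, src_img, src_cam, project, sample, ssim):
    uv = project(surface_pts, src_cam)    # [P,2] pixel coords in the source view
    warped = sample(src_img, uv)          # colors the current geometry predicts
    return 1.0 - ssim(warped, ref_patch)  # low loss iff views warp consistently
```

The gradient of this loss flows through the 3D surface points, i.e. through the implicit geometry, which is what distinguishes it from a purely rendered color loss.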
We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.
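The core of the ray consistency term can be made concrete: per-voxel occupancy probabilities along a ray induce a distribution over where the ray terminates, and the loss is the expected cost of the termination events. A minimal sketch (tensor layout is ours):

```python
# Hedged sketch of differentiable ray consistency from occupancy probabilities.
import torch

def ray_consistency_loss(occ, event_costs):
    """occ: [R,V] occupancy probabilities of the voxels each ray crosses,
    ordered near to far; event_costs: [R,V+1] cost of terminating at each
    voxel, with the last column the cost of escaping the grid."""
    free = torch.cumprod(1.0 - occ, dim=1)                 # prob. of passing voxels
    prev_free = torch.cat([torch.ones_like(occ[:, :1]), free[:, :-1]], 1)
    # q_i = p_i * prod_{j<i}(1 - p_j); escape prob. = prod_j(1 - p_j)
    q = torch.cat([occ * prev_free, free[:, -1:]], 1)      # [R,V+1] termination dist.
    return (q * event_costs).sum(1).mean()                 # expected event cost
```

Swapping the event costs is what lets the same formulation consume masks, depth, color, or semantics as supervision.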
We present a transformer-based neural network architecture for multi-object 3D reconstruction from RGB videos. It relies on two alternative ways of representing its knowledge: as a global 3D grid of features, and as an array of view-specific 2D grids. We progressively exchange information between the two with a dedicated bidirectional attention mechanism. We exploit knowledge about the image formation process to significantly sparsify the attention weight matrix, making our architecture feasible in terms of both memory and computation. We attach a DETR-style head on top of the 3D feature grid to detect the objects in the scene and to predict their 3D pose and 3D shape. Compared to previous methods, our architecture is single-stage, end-to-end trainable, and can reason holistically about a scene from multiple video frames without requiring a brittle tracking step. We evaluate our method on the challenging Scan2CAD dataset, where we outperform (1) recent state-of-the-art methods for 3D object pose estimation from RGB videos, and (2) a strong alternative method that combines multi-view stereo with RGB-D CAD alignment. We plan to release our source code.
Finding accurate correspondences among different views is the Achilles' heel of unsupervised multi-view stereo (MVS). Existing methods build on the assumption that corresponding pixels share similar photometric features. However, multi-view images in real scenarios observe non-Lambertian surfaces and experience occlusions. In this work, we propose a novel approach with neural rendering (RC-MVSNet) to resolve such ambiguities in the correspondences between views. Specifically, we impose a depth rendering consistency loss to constrain the geometric features close to the object surface and alleviate occlusions. Concurrently, we introduce a reference-view synthesis loss to generate consistent supervision, even for non-Lambertian surfaces. Extensive experiments on the DTU and Tanks & Temples benchmarks demonstrate that our RC-MVSNet approach achieves state-of-the-art performance among unsupervised MVS frameworks and competitive performance against many supervised methods. The code is released at https://github.com/BOESE0601/RC-MVSNET
Deep learning has recently demonstrated its excellent performance for multi-view stereo (MVS). However, one major limitation of current learned MVS approaches is scalability: the memory-consuming cost volume regularization makes learned MVS hard to apply to high-resolution scenes. In this paper, we introduce a scalable multi-view stereo framework based on the recurrent neural network. Instead of regularizing the entire 3D cost volume in one go, the proposed Recurrent Multi-view Stereo Network (R-MVSNet) sequentially regularizes the 2D cost maps along the depth direction via the gated recurrent unit (GRU). This dramatically reduces memory consumption and makes high-resolution reconstruction feasible. We first show the state-of-the-art performance achieved by the proposed R-MVSNet on the recent MVS benchmarks. Then, we further demonstrate the scalability of the proposed method on several large-scale scenarios, where previous learned approaches often fail due to the memory constraint. Code is available at https://github.com/YoYo000/MVSNet.
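The memory argument is easy to see in code; a hedged sketch of a convolutional GRU sweeping the cost maps plane by plane (the layer sizes are ours, and the paper stacks several such GRUs): only one hidden map is alive at a time, instead of the full 3D volume.

```python
# Hedged sketch of sequential cost regularization along the depth direction.
import torch
import torch.nn as nn

class DepthSweepGRU(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.zr = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.h_tilde = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, cost_maps):
        """cost_maps: [D,C,H,W], one 2D cost map per depth plane."""
        D, C, H, W = cost_maps.shape
        h = torch.zeros(1, C, H, W)
        out = []
        for d in range(D):                    # sweep front to back over planes
            x = cost_maps[d:d + 1]
            z, r = torch.sigmoid(self.zr(torch.cat([x, h], 1))).chunk(2, 1)
            h_new = torch.tanh(self.h_tilde(torch.cat([x, r * h], 1)))
            h = (1 - z) * h + z * h_new       # gated update; O(1) memory in D
            out.append(h)
        return torch.cat(out, 0)              # [D,C,H,W] regularized cost maps
```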
We present HRF-Net, a novel view synthesis method based on holistic radiance fields that renders novel views using a set of sparse input views. Recent generalizing view synthesis methods also leverage radiance fields, but their rendering speed is not real-time. There are existing methods that can train and render novel views efficiently, but they cannot generalize to unseen scenes. Our method addresses the problem of real-time rendering for generalizing view synthesis and consists of two main stages: a holistic radiance field predictor and a convolution-based neural renderer. This architecture not only infers consistent scene geometry from implicit neural fields but also renders new views efficiently on a single GPU. We first train HRF-Net on multiple 3D scenes of the DTU dataset, and the network can then produce plausible novel views of unseen real and synthetic data using only photometric losses. Moreover, our method can leverage a denser set of reference images of a single scene to produce accurate novel views without relying on additional explicit representations, while still maintaining the high-speed rendering of the pre-trained model. Experimental results show that HRF-Net outperforms state-of-the-art neural rendering methods on various synthetic and real datasets.