We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation that allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations, e.g., foreground masks, depth, color images, and semantics, as supervision for learning single-view 3D prediction. We present an empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.
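To make the ray consistency idea concrete, the following is a minimal sketch (not the authors' implementation; tensor names and the event-cost interface are assumptions): given the occupancy probabilities of the voxels a ray crosses, the probability that the ray terminates in voxel i is that voxel's occupancy times the probability of passing through all earlier voxels, and the consistency term is the expected cost over these termination events.

```python
import torch

def ray_consistency_loss(occ, event_cost):
    """Expected cost of one ray under per-voxel occupancy probabilities.

    occ        : (N,) occupancy probabilities of the N voxels crossed by the
                 ray, ordered from the camera outwards.
    event_cost : (N + 1,) cost of terminating in each voxel, with the last
                 entry the cost of the ray escaping without hitting anything.
    """
    pass_through = torch.cumprod(1.0 - occ, dim=0)        # P(ray passes voxels 0..i)
    prev_pass = torch.cat([occ.new_ones(1), pass_through[:-1]])
    p_terminate = torch.cat([occ * prev_pass,             # P(ray stops in voxel i)
                             pass_through[-1:]])          # P(ray escapes)
    return (p_terminate * event_cost).sum()               # differentiable in occ
```

For a foreground-mask observation, for instance, event_cost could be zero for termination events consistent with the pixel's label and one otherwise; gradients then flow to the occupancies through the termination distribution.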
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision, which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
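The implicit-differentiation insight can be stated compactly. Writing a camera ray as r(d) = r_0 + d w and the implicit field as f_θ with surface level τ, the predicted depth d̂ is defined by the surface condition, and differentiating that identity gives the depth gradient in closed form (a sketch in our own notation, not the paper's exact derivation):

```latex
f_\theta(\hat{p}) = \tau , \qquad \hat{p} = r_0 + \hat{d}\,w
\quad\Longrightarrow\quad
0 = \frac{\partial f_\theta(\hat{p})}{\partial \theta}
  + \nabla_{p} f_\theta(\hat{p}) \cdot w \,\frac{\partial \hat{d}}{\partial \theta}
\quad\Longrightarrow\quad
\frac{\partial \hat{d}}{\partial \theta}
  = -\left(\nabla_{p} f_\theta(\hat{p}) \cdot w\right)^{-1}
     \frac{\partial f_\theta(\hat{p})}{\partial \theta} .
```

The gradient thus needs only the field value and its spatial gradient at the surface point, with no intermediate volumetric results to store.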
We present a learnt system for multi-view stereopsis. In contrast to recent learning-based methods for 3D reconstruction, we leverage the underlying 3D geometry of the problem through feature projection and unprojection along viewing rays. By formulating these operations in a differentiable manner, we are able to learn the system end-to-end for the task of metric 3D reconstruction. End-to-end learning allows us to jointly reason about shape priors while conforming to geometric constraints, enabling reconstruction from far fewer images (even a single image) than required by classical approaches, as well as completion of unseen surfaces. We thoroughly evaluate our approach on the ShapeNet dataset and demonstrate the benefits over classical approaches and recent learning-based methods.
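The differentiable unprojection the abstract refers to can be sketched as follows (a minimal, hypothetical version; function names and tensor shapes are assumptions): each voxel center is projected into the image with the camera matrices, and 2D features are bilinearly sampled onto the 3D grid, keeping the whole operation differentiable.

```python
import torch
import torch.nn.functional as F

def unproject_features(feat2d, voxel_xyz, K, Rt, image_size):
    """Lift 2D CNN features onto a voxel grid along viewing rays.

    feat2d     : (1, C, H, W) image feature map.
    voxel_xyz  : (V, 3) voxel centers in world coordinates.
    K, Rt      : (3, 3) intrinsics and (3, 4) extrinsics.
    image_size : (H, W) of the feature map, for normalizing coordinates.
    """
    ones = torch.ones(voxel_xyz.shape[0], 1)
    cam = (K @ (Rt @ torch.cat([voxel_xyz, ones], dim=1).T)).T  # (V, 3)
    uv = cam[:, :2] / cam[:, 2:3].clamp(min=1e-6)               # perspective divide
    H, W = image_size
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,             # map to [-1, 1]
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    grid = grid.view(1, 1, -1, 2)
    feat3d = F.grid_sample(feat2d, grid, align_corners=True)    # (1, C, 1, V)
    return feat3d.view(feat2d.shape[1], -1).T                   # (V, C)
```

Projection of grid features back into views uses the same correspondences in reverse, which is what allows the system to be trained end-to-end.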
Humans can perceive a scene in 3D from only a few of its 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to efficiently interact with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model that takes as input a few RGB images of a new scene and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those views. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes, without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture the semantics and geometry of novel scenes with diverse layouts, object types, and shapes.
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways of injecting learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel-viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we also cover neural scene representations for modeling non-rigidly deforming objects...
Understanding the 3D world without supervision is currently a major challenge in computer vision, as the annotations required to supervise deep networks for tasks in this domain are expensive to obtain on a large scale. In this paper, we address the problem of unsupervised viewpoint estimation. We formulate this as a self-supervised learning task, where image reconstruction provides the supervision needed to predict the camera viewpoint. Specifically, we make use of pairs of images of the same object at training time, from unknown viewpoints, to self-supervise training by combining the viewpoint information from one image with the appearance information from the other. We demonstrate that using a perspective spatial transformer allows efficient viewpoint learning, outperforming existing unsupervised approaches on synthetic data and obtaining competitive results on the challenging PASCAL3D+ dataset.
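In outline, the self-supervision couples the two images of a pair as follows (a schematic with hypothetical module names, not the paper's architecture): appearance comes from one image, the viewpoint from the other, and reconstructing the second image supervises both.

```python
def viewpoint_self_supervision(img_a, img_b, encoder, pose_net, renderer):
    """Schematic self-supervised viewpoint loss from an image pair.

    encoder  : maps an image to an appearance/shape representation.
    pose_net : predicts a camera viewpoint from an image.
    renderer : differentiable renderer (e.g., a perspective spatial
               transformer) producing an image from (shape, pose).
    """
    shape = encoder(img_a)                  # appearance from one view
    pose_b = pose_net(img_b)                # viewpoint from the other
    recon_b = renderer(shape, pose_b)       # re-render into that viewpoint
    return ((recon_b - img_b) ** 2).mean()  # reconstruction supervises both nets
```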
We introduce Amazon Berkeley Objects (ABO), a new large-scale dataset designed to help bridge the gap between real and virtual 3D worlds. ABO contains product catalog images, metadata, and artist-created 3D models with complex geometries and physically based materials that correspond to real household objects. We derive challenging benchmarks that exploit the unique properties of ABO and measure the current limits of state-of-the-art methods on three open problems in real-world 3D object understanding: single-view 3D reconstruction, material estimation, and cross-domain multi-view object retrieval.
Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2). The network learns a mapping from images of objects to their underlying 3D shapes from a large collection of synthetic data [1]. Our network takes in one or more images of an object instance from arbitrary viewpoints and outputs a reconstruction of the object in the form of a 3D occupancy grid. Unlike most previous works, our network does not require any image annotations or object class labels for training or testing. Our extensive experimental analysis shows that our reconstruction framework i) outperforms the state-of-the-art methods for single-view reconstruction, and ii) enables the 3D reconstruction of objects in situations where traditional SFM/SLAM methods fail (because of lack of texture and/or wide baseline).
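The recurrence over views can be pictured as a gated update of a persistent voxel grid of features (an illustrative 3D convolutional GRU; the paper's 3D-LSTM/GRU variants differ in detail):

```python
import torch
import torch.nn as nn

class ConvGRU3D(nn.Module):
    """Gated recurrent update of a hidden voxel grid, one view at a time."""

    def __init__(self, channels):
        super().__init__()
        make = lambda: nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)
        self.update_gate, self.reset_gate, self.candidate = make(), make(), make()

    def forward(self, x, h):
        # x: (B, C, D, H, W) features encoded from the current view;
        # h: (B, C, D, H, W) hidden grid accumulated over previous views.
        u = torch.sigmoid(self.update_gate(torch.cat([x, h], dim=1)))  # how much to rewrite
        r = torch.sigmoid(self.reset_gate(torch.cat([x, h], dim=1)))   # what to forget
        c = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))   # proposed content
        return (1 - u) * h + u * c
```

After the last view, a decoder (e.g., 3D convolutions followed by a sigmoid) maps the hidden grid to the occupancy grid, so additional views simply refine the same reconstruction.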
We present IM2NERF, a learning framework that predicts a continuous neural object representation given a single input image in the wild, supervised only by segmentation outputs from off-the-shelf recognition methods. The standard approach to constructing neural radiance fields exploits multi-view consistency and requires many calibrated views of a scene, a requirement that cannot be satisfied when learning from large-scale image data in the wild. We take a step toward addressing this shortcoming by introducing a model that encodes the input image into a code containing the object shape, an object appearance code, and an estimated camera pose from which the object image was captured. Our model conditions a NeRF on the predicted object representation and uses volume rendering to generate images from novel views. We train the model end-to-end on a large collection of input images. Because the model is only provided with single-view images, the problem is highly underconstrained. Therefore, in addition to using a reconstruction loss on the synthesized input view, we use an auxiliary adversarial loss on novel views. Furthermore, we exploit object symmetry and cycle camera-pose consistency. We conduct extensive quantitative and qualitative experiments on the ShapeNet dataset and qualitative experiments on the Open Images dataset. We show that in all cases, IM2NERF achieves state-of-the-art performance for novel-view synthesis from single-view images in the wild.
…where the highest resolution is required, using facial performance capture as a case in point.
[Figure 1 panels: input views of a held-out scene and rendered novel views, comparing NeRF with pixelNeRF.] Figure 1: NeRF from one or few images. We present pixelNeRF, a learning framework that predicts a Neural Radiance Field (NeRF) representation from a single (top) or few posed images (bottom). PixelNeRF can be trained on a set of multi-view images, allowing it to generate plausible novel view synthesis from very few input images without test-time optimization (bottom left). In contrast, NeRF has no generalization capabilities and performs poorly when only three input views are available (bottom right).
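The conditioning that gives pixelNeRF its generalization can be sketched as an MLP that takes, per query point, an image feature sampled at the point's projection into the input view (a deliberately simplified sketch; the actual architecture uses ResNet feature encoders, view directions, and multi-view aggregation):

```python
import torch
import torch.nn as nn

class ConditionedRadianceField(nn.Module):
    """Radiance-field MLP conditioned on pixel-aligned image features.

    Unlike a per-scene NeRF, the same weights serve many scenes because
    scene content enters through the per-point feature, not the weights.
    """

    def __init__(self, feat_dim, pe_dim=63, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pe_dim + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                   # (r, g, b, sigma)
        )

    def forward(self, x_encoded, feat):
        # x_encoded: (M, pe_dim) positionally encoded 3D query points;
        # feat:      (M, feat_dim) features sampled at their 2D projections.
        out = self.net(torch.cat([x_encoded, feat], dim=-1))
        rgb = torch.sigmoid(out[:, :3])
        sigma = torch.relu(out[:, 3])
        return rgb, sigma
```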
Representing scenes at the granularity of objects is a prerequisite for scene understanding and decision making. We propose PrisMoNet, a novel approach based on prior shape knowledge for learning multi-object 3D scene decomposition and representation from single images. Our approach learns to decompose images of synthetic scenes with multiple objects on a planar surface into their constituent scene objects and to infer their 3D properties from a single view. A recurrent encoder regresses latent representations of 3D shape, pose, and texture from the input RGB image. Through differentiable rendering, we train our model to decompose scenes from RGB-D images in a self-supervised way. The 3D shapes are represented continuously in function space as signed distance functions, which we pre-train from example shapes in a supervised way. These shape priors provide weak supervision signals to better condition the challenging overall learning task. We evaluate the accuracy of our model in inferring 3D scene layout, demonstrate its generative capabilities, assess its generalization to real images, and point out the benefits of the learned representations.
In this paper, we address the problem of multi-view 3D shape reconstruction. While recent differentiable rendering approaches associated with implicit shape representations have provided breakthrough performance, they remain computationally heavy and often lack precision in the estimated geometry. To overcome these limitations, we investigate a new computational approach built on a novel volumetric shape representation, as in recent differentiable rendering approaches, but parameterized with depth maps to better materialize the shape surface. The shape energy associated with this representation evaluates 3D geometry given color images; it requires no appearance prediction, yet still benefits from volumetric integration during optimization. In practice, we propose an implicit shape representation, the SRDF, based on signed distances that we parameterize by depths along camera rays. The associated shape energy considers the agreement between depth-prediction consistency and photometric consistency at 3D locations within the volumetric representation. Various photo-consistency priors can be considered, from a simple baseline to more elaborate criteria such as learned functions. The approach retains the pixel accuracy of depth maps and remains computationally tractable. Our experiments on standard datasets show that it provides state-of-the-art results with respect to both recent approaches based on implicit shape representations and traditional multi-view stereo methods.
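A toy version of such an energy, for a single ray, might look as follows (all names and the exact functional form are hypothetical; the paper's energy is more elaborate): samples where every depth map places the surface should also be photo-consistent, and the energy penalizes disagreement between the two cues.

```python
import torch

def srdf_energy(srdf_per_view, photo_consistency, beta=10.0):
    """Toy shape energy over S samples along one ray.

    srdf_per_view     : (K, S) signed ray distances at the samples, induced
                        by the candidate depth maps of K views.
    photo_consistency : (S,) photometric agreement of the views at each
                        sample (higher means more consistent appearance).
    """
    on_surface = torch.exp(-beta * srdf_per_view ** 2)  # peaked where srdf ~ 0
    agreement = on_surface.prod(dim=0)                  # all views say "surface"
    return (agreement * (1.0 - photo_consistency)).sum()
```

Because the signed distances are parameterized by per-pixel depths, optimizing such an energy updates the depth maps directly, which is where the pixel accuracy comes from.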
We propose a transformer-based neural network architecture for multi-object 3D reconstruction from RGB videos. It relies on two alternative ways of representing knowledge: a global 3D grid of features and an array of view-specific 2D grids. We progressively exchange information between the two via a dedicated bidirectional attention mechanism. We exploit knowledge about the image formation process to significantly sparsify the attention weight matrices, making our architecture feasible in terms of memory and computation. We attach a DETR-style head on top of the 3D feature grid to detect the objects in the scene and predict their 3D pose and 3D shape. Compared to previous methods, our architecture is single-stage, end-to-end trainable, and can reason holistically about the scene from multiple video frames without a brittle tracking step. We evaluate our method on the challenging Scan2CAD dataset, where we outperform (1) recent state-of-the-art methods for 3D object pose estimation from RGB videos; and (2) a strong alternative that combines multi-view stereo with RGB-D CAD alignment. We plan to release our source code.
We propose a differentiable sphere tracing algorithm to bridge the gap between inverse graphics methods and the recently proposed deep-learning-based implicit signed distance function. Due to the nature of the implicit function, the rendering process requires a tremendous number of function queries, which is particularly problematic when the function is represented as a neural network. We optimize both the forward and backward passes of our rendering layer to make it run efficiently with affordable memory consumption on a commodity graphics card. Our rendering method is fully differentiable, such that losses can be directly computed on the rendered 2D observations and the gradients can be propagated backwards to optimize the 3D geometry. We show that our rendering method can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization. With geometry-based reasoning, our 3D shape prediction methods show excellent generalization capability and robustness to various types of noise. * Work done while Shaohui Liu was an academic guest at ETH Zurich.
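The core loop being made differentiable and memory-efficient is standard sphere tracing (a basic illustrative version, not the paper's optimized renderer):

```python
import torch

def sphere_trace(sdf, origins, dirs, n_steps=50, eps=1e-4):
    """March rays against a (neural) SDF by the queried distance.

    sdf           : callable mapping (R, 3) points to (R,) signed distances.
    origins, dirs : (R, 3) ray origins and unit directions.
    Stepping by the SDF value is safe because it lower-bounds the distance
    to the nearest surface along any direction.
    """
    t = torch.zeros(origins.shape[0])
    converged = torch.zeros(origins.shape[0], dtype=torch.bool)
    for _ in range(n_steps):
        points = origins + t.unsqueeze(-1) * dirs
        d = sdf(points)
        converged = converged | (d.abs() < eps)
        t = torch.where(converged, t, t + d)   # frozen once on the surface
    return t, converged
```

Naively backpropagating through every iteration is what makes the memory cost prohibitive for a neural SDF; a common remedy, consistent with the abstract's goal, is to differentiate only at the final surface point rather than unrolling the whole loop.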
We introduce a method for learning a generative 3D model based on neural radiance fields, trained using only a single view of each object. While generating realistic images is no longer a difficult task, producing the corresponding 3D structure such that it can be rendered from different views is non-trivial. We show that, unlike existing methods, one does not need multi-view data to achieve this goal. Specifically, we show that by reconstructing many images aligned to an approximate canonical pose with a single network conditioned on a shared latent space, one can learn a space of radiance fields that models the shape and appearance of a class of objects. We demonstrate this by training models to reconstruct object categories using datasets that contain only one view of each subject, without depth or geometry information. Our experiments show that we achieve state-of-the-art results in view synthesis and competitive results in monocular depth prediction.
Traditional approaches for learning 3D object categories have been predominantly trained and evaluated on synthetic datasets due to the unavailability of real 3D-annotated category-centric data. Our main goal is to facilitate advances in this field by collecting real-world data of a magnitude similar to existing synthetic counterparts. The principal contribution of this work is therefore a large-scale dataset, called Common Objects in 3D (CO3D), with real multi-view images of object categories annotated with camera poses and ground-truth 3D point clouds. In total, the dataset contains 1.5 million frames from nearly 19,000 videos capturing objects from 50 MS-COCO categories and is thus significantly larger than alternatives in terms of both the number of categories and objects. We exploit this new dataset to conduct one of the first large-scale "in-the-wild" evaluations of several novel-view synthesis and category-centric 3D reconstruction methods. Finally, we contribute a novel neural rendering method that leverages powerful Transformers to reconstruct an object given a small number of its views. The CO3D dataset is available at https://github.com/facebookresearch/co3d.
Neural implicit representations have shown their effectiveness in novel view synthesis and high-quality 3D reconstruction from multi-view images. However, most approaches focus on holistic scene representations while neglecting the individual objects inside them, limiting potential downstream applications. In order to learn object-compositional representations, a few works incorporate 2D semantic maps as cues during training to grasp the differences between objects. But they neglect the strong connection between object geometry and instance semantic information, which leads to inaccurate modeling of individual instances. This paper proposes a novel framework, ObjectSDF, to build object-compositional neural implicit representations with high fidelity in 3D reconstruction and object representation. Observing the ambiguity of conventional volume rendering pipelines, we model the scene by combining the signed distance functions (SDF) of individual objects to exert explicit surface constraints. The key to distinguishing different instances is to revisit the strong association between an individual object's SDF and its semantic label. In particular, we convert the semantic information into a function of object SDFs and develop a unified and compact representation for scenes and objects. Experimental results demonstrate the superiority of the ObjectSDF framework in representing both the holistic object-compositional scene and individual instances. Code can be found at https://qianyiwu.github.io/objectsdf/
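A schematic reading of the composition (the exact mapping in the paper may differ; names are assumptions): the scene SDF is the union (minimum) of the object SDFs, and per-object semantic logits are made a monotone function of the same SDFs, so supervising semantics also constrains geometry.

```python
import torch

def compose_scene(object_sdfs, x, gamma=20.0):
    """Compose per-object SDFs into a scene SDF plus semantic logits.

    object_sdfs : list of K callables, each (M, 3) -> (M,) signed distances.
    x           : (M, 3) query points.
    """
    d = torch.stack([f(x) for f in object_sdfs], dim=-1)  # (M, K)
    scene_sdf = d.min(dim=-1).values                      # union of objects
    # A point close to (or inside) object k gets a large logit for class k,
    # tying 2D semantic supervision directly to each object's geometry.
    semantic_logits = torch.sigmoid(-gamma * d)           # (M, K)
    return scene_sdf, semantic_logits
```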
Intrinsic imaging, or intrinsic image decomposition, has traditionally been described as the problem of decomposing an image into two layers: a reflectance, the albedo-invariant color of the material; and a shading, produced by the interaction between light and geometry. In recent years, deep learning techniques have been broadly applied to increase the accuracy of these separations. In this survey, we provide an overview of the results reported on the well-known intrinsic image datasets and the relevant metrics used in the literature, discussing their suitability for the intrinsic image decomposition task. Although the Lambertian assumption is still the foundation of many methods, we show that there is increasing awareness of the potential of more sophisticated, physically principled components of the image formation process, i.e., optically accurate material models and geometry, and more complete inverse light transport estimations. We classify these methods in terms of the type of decomposition, considering the priors and models used, as well as the learning architecture and methodology driving the decomposition process. We also provide insights about future directions of research, given the recent advances in neural, inverse, and differentiable rendering techniques.
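The Lambertian model underlying the classical formulation reduces image formation to a per-pixel product of the two layers (the standard textbook form, with the shading term for a single distant light as an illustrative special case):

```latex
I(x) = R(x)\, S(x), \qquad S(x) = \max\!\big(0,\; n(x) \cdot l \big)
```

Here I is the observed image, R the reflectance (albedo), S the shading, n(x) the surface normal, and l the light direction. Recovering R and S from I alone is ill-posed, which is why the priors, models, and learned architectures the survey categorizes are needed.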