In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference time and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR), which allows for a GPU-accelerated fast solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the task of surface reconstruction from unoriented point clouds and on learning-based reconstruction.
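As a concrete illustration of the differentiable PSR idea, here is a minimal NumPy sketch of the spectral Poisson step: oriented normals are splatted onto a grid and an indicator-like field is obtained by solving the Poisson equation in the Fourier domain. The grid resolution, nearest-cell splatting, Gaussian smoothing, and mean shift are simplifying assumptions of this sketch, not the paper's GPU implementation.

```python
import numpy as np

def psr_indicator(points, normals, res=64, sigma=2.0):
    """Solve  laplacian(chi) = div(V)  on a periodic grid via FFT.

    points:  (N, 3) coordinates in [0, 1)^3; normals: (N, 3) unit normals.
    Returns a (res, res, res) indicator-like field whose zero level set
    roughly approximates the surface. Illustrative sketch only.
    """
    # Splat oriented normals into a vector field V on the grid.
    V = np.zeros((3, res, res, res))
    idx = np.clip((points * res).astype(int), 0, res - 1)
    for c in range(3):
        np.add.at(V[c], (idx[:, 0], idx[:, 1], idx[:, 2]), normals[:, c])

    # Frequencies in cycles per unit box length.
    k = np.fft.fftfreq(res, d=1.0 / res)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0  # avoid division by zero; DC term zeroed below

    V_hat = np.fft.fftn(V, axes=(1, 2, 3))
    div_hat = 2j * np.pi * (kx * V_hat[0] + ky * V_hat[1] + kz * V_hat[2])
    smooth = np.exp(-2.0 * (np.pi * sigma / res) ** 2 * k2)   # low-pass the splat
    chi_hat = -div_hat * smooth / (4.0 * np.pi ** 2 * k2)
    chi_hat[0, 0, 0] = 0.0

    chi = np.fft.ifftn(chi_hat).real
    return chi - chi.mean()  # shift so the surface sits near the zero level
```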
Reconstructing non-watertight 3D meshes from unoriented point clouds is an under-explored area in computer vision and computer graphics. In this project, we attempt to address this problem by extending the learning-based watertight mesh reconstruction pipeline presented in the paper "Shape As Points". The core of our method is to formulate the task as a semantic segmentation problem that identifies the regions of a 3D volume in which the mesh surface lies and then extracts the surface from the detected regions. Our method achieves convincing results compared with the baseline techniques.
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
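A minimal sketch of the convolutional-encoder-plus-implicit-decoder idea: a 3D feature volume (as produced by some convolutional encoder) is queried at continuous locations by trilinear interpolation, and a small MLP predicts occupancy from the sampled feature and the coordinate. Channel counts, layer sizes, and the single-volume variant are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccDecoder(nn.Module):
    """Toy convolutional-occupancy-style decoder: trilinearly sample a 3D
    feature volume at query points and map (feature, xyz) to an occupancy
    logit with a small MLP."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat_volume, query):
        # feat_volume: (B, C, D, H, W) from a 3D convolutional encoder
        # query:       (B, M, 3) points in [-1, 1]^3
        grid = query.view(query.shape[0], 1, 1, -1, 3)        # (B, 1, 1, M, 3)
        feats = F.grid_sample(feat_volume, grid, align_corners=True)
        feats = feats.view(feat_volume.shape[0], feat_volume.shape[1], -1)
        feats = feats.transpose(1, 2)                         # (B, M, C)
        return self.mlp(torch.cat([feats, query], dim=-1)).squeeze(-1)

# occ_logits = ConvOccDecoder()(torch.randn(1, 32, 16, 16, 16),
#                               torch.rand(1, 1024, 3) * 2 - 1)
```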
In visual computing, 3D geometry is represented in many different forms, including meshes, point clouds, voxel grids, level sets, and depth images. Each representation is suited for different tasks, which makes converting one representation into another (a forward map) an important and common problem. We propose Omnidirectional Distance Fields (ODFs), a new 3D shape representation that encodes geometry by storing the depth to the object's surface from any 3D position in any viewing direction. Since rays are the fundamental unit of an ODF, it can easily be converted to and from common 3D representations such as meshes and point clouds. Unlike level-set methods, which are restricted to representing closed surfaces, ODFs are unsigned and can therefore model open surfaces (e.g., garments). We demonstrate that ODFs can be effectively learned with a neural network (NeuralODF) despite the inherent discontinuities at occlusion boundaries. We also introduce efficient forward mapping algorithms for converting ODFs to and from common 3D representations. Specifically, we introduce an efficient Jumping Cubes algorithm for generating meshes from ODFs. Experiments demonstrate that the neural model can learn to capture high-quality shapes by overfitting to a single object and can also learn to generalize to common shape categories.
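To make the representation concrete, the sketch below evaluates a ground-truth ODF analytically for a sphere: from any 3D position and viewing direction it returns the depth to the first surface hit, or infinity if the ray misses. This analytic toy example only illustrates what an ODF stores; the paper learns such fields with a network and meshes them with its Jumping Cubes algorithm.

```python
import numpy as np

def sphere_odf(origins, directions, radius=1.0):
    """Ground-truth omnidirectional distance field of a sphere at the origin.

    origins:    (N, 3) query positions.
    directions: (N, 3) viewing directions (normalized internally).
    Returns (N,) depths to the first intersection, np.inf where the ray
    misses the sphere.
    """
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    b = np.sum(origins * d, axis=1)                 # ray-sphere quadratic terms
    c = np.sum(origins * origins, axis=1) - radius ** 2
    disc = b ** 2 - c
    depth = np.full(len(origins), np.inf)
    hit = disc >= 0
    sq = np.sqrt(disc[hit])
    t0, t1 = -b[hit] - sq, -b[hit] + sq             # near and far intersections
    depth[hit] = np.where(t0 > 0, t0, np.where(t1 > 0, t1, np.inf))
    return depth
```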
Although methods based on multi-layer perceptrons trained with self-supervision have achieved encouraging results on shape and color recovery, they usually suffer from the heavy computational cost of learning deep implicit surface representations. Since rendering each pixel requires one forward network inference, synthesizing a whole image is extremely expensive. To tackle these challenges, we propose an efficient coarse-to-fine approach to recover the textured mesh from multiple views in this paper. Specifically, a differentiable Poisson solver is employed to represent the object's shape, which is able to produce topology-agnostic and watertight surfaces. To account for depth information, we optimize the shape geometry by minimizing the difference between the rendered mesh depth and the depth predicted by multi-view stereo. In contrast to implicit neural representations of shape and color, we introduce a physically based inverse rendering scheme to jointly estimate the environment lighting and the object's reflectance, which is able to render high-resolution images in real time. The texture of the reconstructed mesh is interpolated from a learnable dense texture grid. We have conducted extensive experiments on several multi-view stereo datasets, whose promising results demonstrate the efficacy of our proposed approach. The code is available at https://github.com/l1346792580123/diff.
There has recently been a growing interest in implicit shape representations. Contrary to explicit representations, they have no resolution limitations and they easily handle a wide variety of surface topologies. To learn these implicit representations, current approaches rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require dense point clouds (to approximate the distance to the shape). In contrast, we introduce {\method}, a self-supervised method for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and consider that, statistically, close to the surface, the needle end points lie on opposite sides of the surface. No shape knowledge is required and the point cloud can be highly sparse, e.g., a lidar point cloud acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on standard shape reconstruction datasets and show promising qualitative results on hard autonomous driving datasets such as KITTI.
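One possible way to turn the needle intuition into a self-supervised training signal for an occupancy network is sketched below; the needle length, direction sampling, and exact loss form are illustrative assumptions rather than the paper's definitive formulation.

```python
import torch

def needle_loss(occ_net, surface_points, needle_len=0.05):
    """Sample a 'needle' through each surface point and encourage the
    occupancy network to place its two end points on opposite sides of
    the surface (one inside, one outside), without knowing which is which.
    occ_net maps (N, 3) points to occupancy probabilities in [0, 1]."""
    dirs = torch.randn_like(surface_points)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)
    a = surface_points + 0.5 * needle_len * dirs       # one end point
    b = surface_points - 0.5 * needle_len * dirs       # the opposite end
    fa, fb = occ_net(a), occ_net(b)
    # symmetric constraint: occupancies should sum to ~1 (one in, one out)
    return ((fa + fb - 1.0) ** 2).mean()
```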
Figure 1: (a) Training parts from ShapeNet. (b) t-SNE plot of part embeddings. (c) Reconstructing entire scenes with Local Implicit Grids. We learn an embedding of parts from objects in ShapeNet [3] using a part autoencoder with an implicit decoder. We show that this representation of parts is generalizable across object categories, and easily scalable to large scenes. By localizing implicit functions in a grid, we are able to reconstruct entire scenes from points via optimization of the latent grid.
3D point clouds acquired by scanning real-world objects or scenes have found a wide range of applications, including immersive telepresence, autonomous driving, surveillance, etc. They are often perturbed by noise or suffer from low density, which obstructs downstream tasks such as surface reconstruction and understanding. In this paper, we propose a novel paradigm of point set resampling for restoration, which learns continuous gradient fields of point clouds that converge points towards the underlying surface. In particular, we represent a point cloud via its gradient field, i.e., the gradient of the log-probability density function, and enforce the gradient field to be continuous, so as to guarantee the continuity of the model for tractable optimization. Based on the continuous gradient field estimated via the proposed neural network, resampling a point cloud amounts to performing gradient-based Markov chain Monte Carlo (MCMC) on the input noisy or sparse point cloud. Further, we propose to introduce regularization into the gradient-based MCMC during point cloud restoration, which essentially refines the intermediate resampled point cloud iteratively and accommodates various priors in the resampling process. Extensive experimental results demonstrate that the proposed point set resampling achieves state-of-the-art performance on representative restoration tasks, including point cloud denoising and upsampling.
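A minimal sketch of the kind of gradient-based update such a resampling scheme performs: points repeatedly move along a learned gradient field of the log-density, optionally with injected noise in the MCMC spirit. The step size, noise scale, and iteration count are illustrative assumptions.

```python
import torch

def resample_points(points, grad_field, steps=30, step_size=0.02, noise=0.0):
    """Iteratively move noisy/sparse points towards the underlying surface.

    points:     (N, 3) input point cloud.
    grad_field: callable mapping (N, 3) points to the estimated gradient of
                the log-density at those points, e.g. a trained network.
    """
    x = points.clone()
    for _ in range(steps):
        g = grad_field(x)                              # ascend the log-density
        x = x + step_size * g
        if noise > 0:                                  # optional MCMC noise term
            x = x + noise * torch.randn_like(x)
    return x
```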
This work introduces alternating latent topologies (ALTO) for high-fidelity reconstruction of implicit 3D surfaces from noisy point clouds. Previous work identifies that the spatial arrangement of latent encodings is important to recover detail. One school of thought is to encode a latent vector for each point (point latents). Another school of thought is to project point latents into a grid (grid latents) which could be a voxel grid or triplane grid. Each school of thought has tradeoffs. Grid latents are coarse and lose high-frequency detail. In contrast, point latents preserve detail. However, point latents are more difficult to decode into a surface, and quality and runtime suffer. In this paper, we propose ALTO to sequentially alternate between geometric representations, before converging to an easy-to-decode latent. We find that this preserves spatial expressiveness and makes decoding lightweight. We validate ALTO on implicit 3D recovery and observe not only a performance improvement over the state-of-the-art, but a runtime improvement of 3-10$\times$. Project website at https://visual.ee.ucla.edu/alto.htm/.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
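A toy latent-code-conditioned SDF decoder in the spirit of the figure is sketched below; the layer sizes are assumptions, and DeepSDF itself uses a deeper network with a skip connection. The commented usage lines show the latent-space interpolation the figure visualizes.

```python
import torch
import torch.nn as nn

class LatentSDFDecoder(nn.Module):
    """Toy latent-code-conditioned SDF decoder: maps (latent, xyz) to a
    signed distance value in [-1, 1]."""

    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),
        )

    def forward(self, latent, xyz):
        # latent: (B, latent_dim), xyz: (B, M, 3) query points
        z = latent.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return self.net(torch.cat([z, xyz], dim=-1)).squeeze(-1)

# Shape interpolation as in the figure: decode a blend of two latent codes.
# decoder, z_a, z_b = LatentSDFDecoder(), torch.randn(1, 256), torch.randn(1, 256)
# sdf_mid = decoder(0.5 * z_a + 0.5 * z_b, torch.rand(1, 4096, 3) * 2 - 1)
```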
Implicit neural networks have been successfully used for surface reconstruction from point clouds. However, many of them face scalability issues as they encode the isosurface function of a whole object or scene into a single latent vector. To overcome this limitation, a few approaches infer latent vectors on a coarse regular 3D grid or on 3D patches, and interpolate them to answer occupancy queries. In doing so, they lose the direct connection with the input points sampled on the surface of objects, and they attach information uniformly in space rather than where it matters most, i.e., near the surface. Besides, relying on fixed patch sizes may require discretization tuning. To address these issues, we propose to use point cloud convolutions and compute latent vectors at each input point. We then perform a learning-based interpolation on nearest neighbors using inferred weights. Experiments on both object and scene datasets show that our approach significantly outperforms other methods on most classical metrics, producing finer details and better reconstructing thinner volumes. The code is available at https://github.com/valeoai/poco.
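A rough sketch of the decoding idea: gather the latents of a query's k nearest input points, blend them with learned weights, and predict occupancy. The weighting network, neighbourhood size, and layer sizes here are assumptions; the paper describes its own attention-like interpolation scheme.

```python
import torch
import torch.nn as nn

class KNNLatentDecoder(nn.Module):
    """Blend per-point latents of the k nearest input points with learned
    weights, then predict occupancy for each query. Illustrative sketch."""

    def __init__(self, latent_dim=32, k=16, hidden=64):
        super().__init__()
        self.k = k
        self.weight_net = nn.Sequential(                  # scores one neighbour
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.occ_net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, points, latents, query):
        # points: (N, 3), latents: (N, C), query: (M, 3)
        dist = torch.cdist(query, points)                 # (M, N)
        knn = dist.topk(self.k, largest=False).indices    # (M, k)
        neigh_lat = latents[knn]                          # (M, k, C)
        offsets = points[knn] - query.unsqueeze(1)        # relative positions
        scores = self.weight_net(torch.cat([neigh_lat, offsets], dim=-1))
        w = torch.softmax(scores, dim=1)                  # (M, k, 1)
        blended = (w * neigh_lat).sum(dim=1)              # (M, C)
        return self.occ_net(blended).squeeze(-1)          # occupancy logits
```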
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
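At test time such an implicit occupancy is typically turned into an explicit mesh by evaluating it on a dense grid and running marching cubes at the 0.5 decision boundary; a minimal sketch using scikit-image is shown below, with the `occ_fn` interface and the bounding box taken as assumptions.

```python
import numpy as np
from skimage import measure

def extract_mesh(occ_fn, res=64, threshold=0.5, bound=0.55):
    """Evaluate an occupancy function on a dense grid and extract the mesh
    at its decision boundary with marching cubes.

    occ_fn: callable mapping an (N, 3) array of points to (N,) occupancy
            probabilities (assumed interface for this sketch).
    """
    lin = np.linspace(-bound, bound, res)
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
    occ = occ_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    verts, faces, _, _ = measure.marching_cubes(occ, level=threshold)
    verts = verts / (res - 1) * 2 * bound - bound      # grid index -> world
    return verts, faces
```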
We propose a differentiable sphere tracing algorithm to bridge the gap between inverse graphics methods and the recently proposed deep learning based implicit signed distance function. Due to the nature of the implicit function, the rendering process requires tremendous function queries, which is particularly problematic when the function is represented as a neural network. We optimize both the forward and backward passes of our rendering layer to make it run efficiently with affordable memory consumption on a commodity graphics card. Our rendering method is fully differentiable such that losses can be directly computed on the rendered 2D observations, and the gradients can be propagated backwards to optimize the 3D geometry. We show that our rendering method can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization. With the geometry based reasoning, our 3D shape prediction methods show excellent generalization capability and robustness against various noises. * Work done while Shaohui Liu was an academic guest at ETH Zurich.
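The basic sphere tracing loop that such a renderer builds on is sketched below: each ray marches forward by the current signed distance until it converges to the surface or exits the scene. This plain loop omits the forward/backward optimizations the paper introduces.

```python
import torch

def sphere_trace(sdf, origins, directions, num_steps=64, eps=1e-4, far=5.0):
    """Basic sphere tracing: march each ray by the locally queried signed
    distance until |sdf| < eps or the far bound is reached.
    sdf maps (N, 3) points to (N,) signed distances."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    hit = torch.zeros(origins.shape[0], dtype=torch.bool, device=origins.device)
    for _ in range(num_steps):
        x = origins + t.unsqueeze(-1) * directions
        d = sdf(x)
        hit = hit | (d.abs() < eps)
        t = torch.where(hit, t, t + d)          # frozen once converged
        t = t.clamp(max=far)
    return t, hit
```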
Reconstructing 3D geometry from \emph{unoriented} point clouds can benefit many downstream tasks. Recent methods mostly adopt a neural shape representation, using a neural network to represent a signed distance field and fitting the point cloud with unsigned supervision. However, we observe that using unsigned supervision may cause severe ambiguities and often leads to \emph{unexpected} failures, such as generating undesired surfaces in free space when reconstructing complex structures and struggling to reconstruct accurate surfaces. To reconstruct a better signed distance field, we propose semi-signed neural fitting (SSN-Fitting), which consists of semi-signed supervision and a loss-based region sampling strategy. Our key insight is that signed supervision is more informative, and the regions that are obviously outside the object can be determined easily. Meanwhile, a novel importance sampling is proposed to accelerate the optimization and better reconstruct details. Specifically, we voxelize and partition the object space into \emph{sign-known} and \emph{sign-uncertain} regions, in which different supervisions are applied. Also, we adaptively adjust the sampling rate of each voxel according to the tracked reconstruction loss, so that the network can focus more on the complex under-fitting regions. We conduct extensive experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings on multiple datasets, including clean, density-varying, and noisy data.
Surface reconstruction from noisy, non-uniform, and unoriented point clouds is a fascinating yet challenging problem in computer vision and graphics. As 3D scanning technologies advance, it is highly desirable to directly transform raw scanned data, often with severe noise, into manifold triangle meshes. Existing learning-based methods aim to learn an implicit function whose zero-level set approximates the underlying shape. However, most of them fail to obtain satisfactory results for noisy and sparse point clouds, which limits their use in practice. In this paper, we introduce Neural-IMLS, a new approach that directly learns a noise-resistant signed distance function (SDF) from unoriented raw point clouds. By minimizing the loss between the SDF obtained via the implicit moving least-squares (IMLS) function and the one predicted by our neural network, our method learns the underlying SDF from raw point clouds in a self-supervised manner, without explicitly learned priors; the gradients of our predictor define a tangent bundle that facilitates the computation of IMLS. We show that when the two SDFs coincide, our neural network can predict a signed implicit function whose zero-level set serves as a good approximation of the underlying surface. We conduct extensive experiments on various benchmarks, including synthetic scans and real-world scans, demonstrating the ability to reconstruct faithful shapes from a variety of inputs, especially for point clouds with noise or gaps.
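For reference, a basic implicit moving least-squares (IMLS) distance evaluated from points with (predicted) normals looks roughly as follows; the Gaussian bandwidth and the use of all points instead of a k-nearest-neighbour subset are simplifications of this sketch.

```python
import numpy as np

def imls_sdf(query, points, normals, sigma=0.05):
    """Implicit moving least-squares distance: a weighted average of the
    signed distances from the query to the tangent planes of nearby points.

    query: (M, 3), points: (N, 3), normals: (N, 3) unit normals.
    """
    diff = query[:, None, :] - points[None, :, :]               # (M, N, 3)
    plane_dist = np.sum(diff * normals[None, :, :], axis=-1)    # (q - p_i) . n_i
    w = np.exp(-np.sum(diff ** 2, axis=-1) / (sigma ** 2))      # Gaussian weights
    return np.sum(w * plane_dist, axis=-1) / (np.sum(w, axis=-1) + 1e-12)
```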
Neural implicit functions have recently shown promising results for surface reconstruction from multiple views. However, current methods still suffer from excessive complexity and poor robustness when reconstructing unbounded or complex scenes. In this paper, we present RegSDF, which shows that proper point cloud supervision and geometry regularization are sufficient to produce high-quality and robust reconstruction results. Specifically, RegSDF takes an additional oriented point cloud as input and optimizes a signed distance field and a surface light field within a differentiable rendering framework. We also introduce two critical regularizations. The first is a Hessian regularization that smoothly diffuses the signed distance values to the entire distance field given noisy and incomplete input. The second is a minimal surface regularization that compactly interpolates and extrapolates the missing geometry. Extensive experiments are conducted on the DTU, BlendedMVS, and Tanks and Temples datasets. Compared with recent neural surface reconstruction approaches, RegSDF is able to reconstruct surfaces with fine details, even for open scenes with complex topology and unstructured camera trajectories.
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
The recent neural implicit representation-based methods have greatly advanced the state of the art for solving the long-standing and challenging problem of reconstructing a discrete surface from a sparse point cloud. These methods generally learn either a binary occupancy or signed/unsigned distance field (SDF/UDF) as surface representation. However, all the existing SDF/UDF-based methods use neural networks to implicitly regress the distance in a purely data-driven manner, thus limiting the accuracy and generalizability to some extent. In contrast, we propose the first geometry-guided method for UDF and its gradient estimation that explicitly formulates the unsigned distance of a query point as the learnable affine averaging of its distances to the tangent planes of neighbouring points. Besides, we model the local geometric structure of the input point clouds by explicitly learning a quadratic polynomial for each point. This not only facilitates upsampling the input sparse point cloud but also naturally induces unoriented normal, which further augments UDF estimation. Finally, to extract triangle meshes from the predicted UDF we propose a customized edge-based marching cube module. We conduct extensive experiments and ablation studies to demonstrate the significant advantages of our method over state-of-the-art methods in terms of reconstruction accuracy, efficiency, and generalizability. The source code is publicly available at https://github.com/rsy6318/GeoUDF.
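The core estimate can be illustrated as a weighted average of a query's unsigned distances to the tangent planes of its nearest neighbours; in the sketch below, simple inverse-distance weights stand in for the learnable affine weights of the paper, and the per-point quadratic surface model is omitted.

```python
import numpy as np

def udf_from_tangent_planes(query, points, normals, k=8):
    """Approximate the unsigned distance of each query point as a weighted
    average of its unsigned distances to the tangent planes of its k
    nearest neighbours (heuristic weights, not the learned ones)."""
    diff = query[:, None, :] - points[None, :, :]             # (M, N, 3)
    dist = np.linalg.norm(diff, axis=-1)                      # (M, N)
    knn = np.argsort(dist, axis=-1)[:, :k]                    # (M, k)
    rows = np.arange(query.shape[0])[:, None]
    plane_dist = np.abs(np.sum(diff[rows, knn] * normals[knn], axis=-1))
    w = 1.0 / (dist[rows, knn] + 1e-8)
    w = w / w.sum(axis=-1, keepdims=True)                     # affine weights
    return np.sum(w * plane_dist, axis=-1)
```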
We introduce DMTet, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit 3D representations by leveraging a novel hybrid 3D representation. Compared to current implicit approaches, which are trained to regress signed distance values, DMTet directly optimizes for the reconstructed surface, which allows us to synthesize finer geometric details with fewer artifacts. Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology. The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation into an explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology, as well as generation of the hierarchy of subdivisions, using reconstruction and adversarial losses defined explicitly on the surface mesh. Our approach significantly outperforms existing work on conditional shape synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal shapes. Project page: https://nv-tlabs.github.io/dmtet/
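The differentiable core of a marching-tetrahedra layer is the placement of a surface vertex on every tetrahedron edge whose two signed distance values change sign, via linear interpolation. The sketch below computes only these crossing vertices; the per-case triangulation table and the learned grid deformation are left out for brevity.

```python
import torch

def edge_crossings(verts, tets, sdf):
    """Find surface vertices on the sign-change edges of a tetrahedral grid.

    verts: (V, 3) grid vertices, tets: (T, 4) vertex indices,
    sdf: (V,) signed distance at each grid vertex. Returns (E, 3) crossing
    points; gradients flow into both verts and sdf, which is what makes the
    layer differentiable."""
    # the 6 edges of a tetrahedron, as local vertex index pairs
    edge_ids = torch.tensor([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]],
                            device=tets.device)
    ia = tets[:, edge_ids[:, 0]].reshape(-1)       # (T*6,)
    ib = tets[:, edge_ids[:, 1]].reshape(-1)
    sa, sb = sdf[ia], sdf[ib]
    cross = (sa * sb) < 0                          # edges straddling the surface
    a, b, sa, sb = verts[ia[cross]], verts[ib[cross]], sa[cross], sb[cross]
    t = (sa / (sa - sb)).unsqueeze(-1)             # linear zero crossing
    return a + t * (b - a)
```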