Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent-code-conditioned feed-forward decoder networks. The images above are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
Implicit fields have proven very effective for representing and learning 3D shapes accurately. Signed distance fields and occupancy fields are the preferred representations, both with well-studied properties, despite their restriction to closed surfaces. Several other variations and training principles have been proposed with the goal of representing all classes of shapes. In this paper, we develop a novel yet fundamental representation based on the unit vector field defined on 3D space: at each point in $\mathbb{R}^3$, the vector points to the closest point on the surface. We theoretically demonstrate that this vector field can be easily transformed into a surface density by applying its divergence. Unlike other standard representations, it directly encodes an important physical property of the surface: the surface normal. We further show the advantages of our vector field representation, specifically in learning general (open, closed, or multi-layered) surfaces as well as piecewise planar surfaces. We compare our method on several datasets including ShapeNet, where the proposed neural implicit field shows superior accuracy in representing any type of shape, outperforming other standard methods. The code will be released at https://github.com/edomel/ImplicitVF
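To make the divergence step concrete, here is a minimal numpy sketch (illustrative only, not the released code; the brute-force nearest-neighbor search stands in for a proper spatial index): it builds the unit vector field pointing toward the surface on a regular grid, then converts it to a surface density via finite-difference divergence.

```python
import numpy as np

def unit_vector_field(grid_pts, surface_pts):
    """For each query point, the unit vector toward its nearest surface point."""
    diff = surface_pts[None, :, :] - grid_pts[:, None, :]   # (Q, S, 3)
    nn = (diff ** 2).sum(-1).argmin(1)                      # closest surface sample
    v = diff[np.arange(len(grid_pts)), nn]
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-9)

def surface_density(vf_grid, h):
    """Negative divergence of the field, via central differences.

    vf_grid: (Nx, Ny, Nz, 3) unit vectors on a regular grid with spacing h.
    The field converges on the surface, so -div spikes exactly there.
    """
    div = np.zeros(vf_grid.shape[:3])
    for axis in range(3):
        div += np.gradient(vf_grid[..., axis], h, axis=axis)
    return -div
```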
Figure 1: We learn an embedding of parts from objects in ShapeNet [3] using a part autoencoder with an implicit decoder. We show that this representation of parts is generalizable across object categories and easily scalable to large scenes. By localizing implicit functions in a grid, we are able to reconstruct entire scenes from points via optimization of the latent grid. (a) Training parts from ShapeNet. (b) t-SNE plot of part embeddings. (c) Reconstructing entire scenes with Local Implicit Grids.
We propose a differentiable sphere tracing algorithm to bridge the gap between inverse graphics methods and the recently proposed deep-learning-based implicit signed distance function. Due to the nature of the implicit function, the rendering process requires a tremendous number of function queries, which is particularly problematic when the function is represented as a neural network. We optimize both the forward and backward passes of our rendering layer so that it runs efficiently with affordable memory consumption on a commodity graphics card. Our rendering method is fully differentiable, such that losses can be computed directly on the rendered 2D observations and the gradients can be propagated backwards to optimize the 3D geometry. We show that our rendering method can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization. With geometry-based reasoning, our 3D shape prediction methods show excellent generalization capability and robustness to various types of noise. * Work done while Shaohui Liu was an academic guest at ETH Zurich.
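The forward pass being optimized is plain sphere tracing: query the network and march each ray by the returned distance. A minimal PyTorch sketch, assuming `sdf_net` maps (N, 3) points to (N, 1) signed distances; the paper's actual layer adds marching and memory optimizations this omits, and re-enables gradients only at the final surface points.

```python
import torch

def sphere_trace(sdf_net, origins, dirs, n_steps=50, eps=1e-4, far=5.0):
    """origins, dirs: (N, 3) rays with unit directions. Returns hit points."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    hit = torch.zeros_like(t, dtype=torch.bool)
    for _ in range(n_steps):
        pts = origins + t[:, None] * dirs
        with torch.no_grad():                  # cheap forward queries only
            d = sdf_net(pts).squeeze(-1)
        active = (~hit) & (t < far)
        t = torch.where(active, t + d, t)      # march by the queried distance
        hit = hit | (d.abs() < eps)
    return origins + t[:, None] * dirs, hit
```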
There has recently been growing interest in implicit shape representations. In contrast to explicit representations, they have no resolution limitations and easily handle a wide variety of surface topologies. To learn these implicit representations, current approaches rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require dense point clouds (to approximate the distance to the shape). In contrast, we introduce {\method}, a self-supervised approach for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles onto the point cloud and reason that, statistically, close to the surface, the needle endpoints lie on opposite sides of the surface. No shape knowledge is required, and the point clouds can be highly sparse, e.g., LiDAR point clouds acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on standard shape reconstruction datasets, and show promising qualitative results on hard autonomous driving datasets such as KITTI.
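One way to encode the opposite-sides idea in code is sketched below; the sampling scheme, the occupancy parameterization, and the exact loss are my simplifications rather than the paper's formulation. `occ_net` is assumed to map (N, 3) points to occupancy probabilities in (0, 1).

```python
import torch
import torch.nn.functional as F

def needle_loss(occ_net, cloud, sigma=0.05, n=1024):
    """Drop short needles near point-cloud samples; endpoints should disagree."""
    idx = torch.randint(len(cloud), (n,))
    centers = cloud[idx] + sigma * torch.randn(n, 3)             # near the data
    half = 0.5 * sigma * F.normalize(torch.randn(n, 3), dim=-1)  # needle axis
    o_a = occ_net(centers + half).squeeze(-1)
    o_b = occ_net(centers - half).squeeze(-1)
    # Statistically, a needle close to the surface straddles it, so the two
    # endpoint occupancies should sum to one (one inside, one outside).
    return ((o_a + o_b - 1.0) ** 2).mean()
```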
Recent advances in machine learning have sparked interest in solving visual computing problems with a class of coordinate-based neural networks that parameterize the physical properties of scenes or objects across space and time. These methods, which we call neural fields, have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation. However, due to rapid progress in a short time, many papers exist, but a comprehensive review and formulation of the problem has not yet emerged. In this report, we address this limitation by providing context, mathematical grounding, and an extensive review of the literature on neural fields. This report covers research along two dimensions. In Part I, we focus on techniques for neural fields, identifying common components of neural field methods, including different representations, architectures, forward mappings, and generalization methods. In Part II, we focus on applications of neural fields to different problems in visual computing and beyond (e.g., robotics, audio). Our review shows the breadth of topics already covered in visual computing, both historically and in their current incarnations, and demonstrates the improved quality, flexibility, and capability brought by neural field methods. Finally, we present a companion website that serves as a living version of this review, which can be continually updated by the community.
Neural networks that map 3D coordinates to signed distance function (SDF) or occupancy values have enabled high-fidelity implicit representations of object shape. This paper develops a new shape model that allows synthesizing novel distance views by optimizing a continuous signed directional distance function (SDDF). Similar to deep SDF models, our SDDF formulation can represent whole categories of shapes and complete or interpolate across shapes from partial input data. Unlike an SDF, which measures the distance to the nearest surface in any direction, an SDDF measures the distance in a given direction. This allows training an SDDF model without 3D shape supervision, using only distance measurements readily available from depth cameras or LiDAR sensors. Our model also removes post-processing steps like surface extraction or rendering by directly predicting distances at arbitrary locations and viewing directions. Unlike deep view-synthesis techniques such as neural radiance fields, which train high-capacity black-box models, our model enforces by construction the property that SDDF values decrease linearly along the viewing direction. This structural constraint not only leads to dimensionality reduction but also provides analytical confidence about the accuracy of SDDF predictions, regardless of the distance to the object surface.
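The linear-decrease property can be enforced by construction: project the query point onto the plane orthogonal to the viewing direction, predict a distance there, and subtract the along-ray component. The sketch below is one such construction consistent with the stated property, not necessarily the paper's exact architecture; it satisfies f(p + t·d, d) = f(p, d) - t identically.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDDF(nn.Module):
    """Signed directional distance with the linear ray constraint built in."""
    def __init__(self, hidden=256):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, p, d):
        d = F.normalize(d, dim=-1)
        along = (p * d).sum(-1, keepdim=True)   # component of p along the ray
        p_perp = p - along * d                  # reduced (planar) domain
        return self.g(torch.cat([p_perp, d], -1)) - along
```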
In visual computing, 3D geometry is represented in many different forms, including meshes, point clouds, voxel grids, level sets, and depth images. Each representation is suited to different tasks, making the conversion of one representation into another (a forward map) an important and common problem. We propose Omnidirectional Distance Fields (ODFs), a new 3D shape representation that encodes geometry by storing the depth to the object's surface from any 3D position in any viewing direction. Since rays are the fundamental unit of an ODF, it can easily be converted to and from common 3D representations like meshes and point clouds. Unlike level-set methods, which are limited to representing closed surfaces, ODFs are unsigned and can therefore model open surfaces (e.g., garments). We demonstrate that ODFs can be effectively learned with a neural network (NeuralODF) despite the inherent discontinuities at occlusion boundaries. We also introduce efficient forward-mapping algorithms for converting ODFs to and from common 3D representations. Specifically, we introduce an efficient Jumping Cubes algorithm for generating meshes from ODFs. Experiments show that NeuralODF can learn to capture high-quality shapes by overfitting to a single object, and can also learn to generalize across common shape categories.
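As a flavor of the forward maps, converting an ODF to a point cloud is a one-liner per ray: walk each ray by its predicted depth. A sketch, assuming an `odf_net` with a (position, direction) → depth interface:

```python
import torch

def odf_to_points(odf_net, origins, dirs, max_depth=10.0):
    """origins, dirs: (N, 3) rays. Returns surface points for rays that hit."""
    depth = odf_net(origins, dirs).squeeze(-1)
    valid = depth < max_depth                   # treat large depths as misses
    return origins[valid] + depth[valid, None] * dirs[valid]
```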
Recent advances in deep generative models have led to immense progress in 3D shape synthesis. While existing models are able to synthesize shapes represented as voxels, point clouds, or implicit functions, these methods only indirectly enforce the plausibility of the final 3D shape surface. Here we present a 3D shape synthesis framework (SurfGen) that applies adversarial training directly to the object surface. Our approach uses a differentiable spherical projection layer to capture and represent the explicit zero isosurface of an implicit 3D generator as a function defined on the unit sphere. By processing this spherical representation of 3D object surfaces with a spherical CNN in an adversarial setting, our generator can better learn the statistics of natural shape surfaces. We evaluate our model on large-scale shape datasets and demonstrate that the end-to-end trained model is capable of generating high-fidelity 3D shapes with diverse topology.
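A differentiable spherical projection can be sketched as a per-direction root find on the generator's SDF. The bisection below assumes the SDF is negative at the center, positive at the outer radius, and that the surface is star-shaped about the origin; the paper's layer may differ in these details.

```python
import torch

def spherical_projection(sdf, dirs, r_lo=0.0, r_hi=1.0, iters=20):
    """dirs: (D, 3) unit directions. Returns the radius r(u) of the zero
    isosurface along each direction, i.e., a function on the unit sphere."""
    lo = torch.full((dirs.shape[0],), r_lo, device=dirs.device)
    hi = torch.full_like(lo, r_hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        inside = sdf(mid[:, None] * dirs).squeeze(-1) < 0  # still inside
        lo = torch.where(inside, mid, lo)
        hi = torch.where(inside, hi, mid)
    return 0.5 * (lo + hi)
```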
In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference times and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer based on a differentiable formulation of Poisson Surface Reconstruction (PSR), which allows a GPU-accelerated fast solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight, and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the tasks of surface reconstruction from unoriented point clouds and learning-based reconstruction.
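The core of a differentiable PSR layer can be written compactly in the spectral domain: splat the oriented points into a normal grid v and solve the Poisson equation ∇²χ = ∇·v with FFTs. A minimal numpy sketch of that idea follows; SAP's released layer adds Gaussian smoothing, trilinear splatting, and batching that this omits.

```python
import numpy as np

def spectral_psr(points, normals, res=64):
    """points in [0,1)^3 with unit normals -> indicator grid chi (res^3)."""
    v = np.zeros((res, res, res, 3))
    idx = np.clip((points * res).astype(int), 0, res - 1)
    np.add.at(v, (idx[:, 0], idx[:, 1], idx[:, 2]), normals)  # nearest-cell splat
    f = np.fft.fftfreq(res, d=1.0 / res)
    kx, ky, kz = np.meshgrid(f, f, f, indexing="ij")
    div_hat = 2j * np.pi * (kx * np.fft.fftn(v[..., 0])
                            + ky * np.fft.fftn(v[..., 1])
                            + kz * np.fft.fftn(v[..., 2]))
    denom = -(2 * np.pi) ** 2 * (kx ** 2 + ky ** 2 + kz ** 2)
    denom[0, 0, 0] = 1.0                # avoid dividing by zero at the DC term
    chi_hat = div_hat / denom
    chi_hat[0, 0, 0] = 0.0              # indicator is defined up to a constant
    chi = np.fft.ifftn(chi_hat).real
    # Shift so the input points lie on the zero level set.
    return chi - chi[idx[:, 0], idx[:, 1], idx[:, 2]].mean()
```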
Figure 1. Given as input either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
Generative models, an important family of statistical modeling, aim to learn the observed data distribution by generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have switched their attention from the 2D space to the 3D space, considering that 3D data better aligns with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which naturally has an efficient representation (i.e., the pixel grid), representing 3D data poses far greater challenges. Concretely, we would expect an ideal 3D representation to be capable enough to model shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet these requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion can help the community track the evolution of this field and further spark some innovative ideas to advance this challenging task.
Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.
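The matching-based measures mentioned above are commonly instantiated as Chamfer-distance variants of minimum matching distance (fidelity) and coverage (diversity). A simplified numpy sketch, assuming point clouds small enough for brute-force pairwise Chamfer:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two (N, 3) point clouds."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(1).mean() + d2.min(0).mean()

def mmd_and_coverage(generated, reference):
    """generated, reference: lists of (N, 3) point clouds."""
    d = np.array([[chamfer(g, r) for r in reference] for g in generated])
    mmd = d.min(axis=0).mean()                         # best match per reference
    cov = len(set(d.argmin(axis=1))) / len(reference)  # references covered
    return mmd, cov
```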
Implicit neural networks have been successfully used for surface reconstruction from point clouds. However, many of them face scalability issues because they encode the isosurface function of a whole object or scene into a single latent vector. To overcome this limitation, a few approaches infer latent vectors on a coarse regular 3D grid or on 3D patches, and interpolate them to answer occupancy queries. In doing so, they lose the direct connection with the input points sampled on the object surface, and they attach information uniformly in space rather than where it matters most, i.e., near the surface. Besides, relying on fixed patch sizes may require discretization tuning. To address these issues, we propose to use point cloud convolutions and compute latent vectors at each input point. We then perform a learning-based interpolation on nearest neighbors using inferred weights. Experiments on both object and scene datasets show that our approach significantly outperforms other methods on most classical metrics, producing finer details and better reconstructing thinner volumes. The code is available at https://github.com/valeoai/poco.
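The learning-based interpolation can be pictured as attention over a query's k nearest input points, mixing their latent vectors with inferred weights. A hedged PyTorch sketch follows (module and layer names are mine; the released code at the URL above is the reference implementation):

```python
import torch
import torch.nn as nn

class LatentInterp(nn.Module):
    """Occupancy at a query from learned interpolation of per-point latents."""
    def __init__(self, dim=64, k=8):
        super().__init__()
        self.k = k
        self.score = nn.Sequential(nn.Linear(dim + 3, 64), nn.ReLU(), nn.Linear(64, 1))
        self.occ = nn.Linear(dim, 1)

    def forward(self, query, pts, latents):
        # query: (Q, 3); pts: (N, 3); latents: (N, dim), one per input point.
        d2 = ((query[:, None] - pts[None]) ** 2).sum(-1)    # (Q, N)
        knn = d2.topk(self.k, largest=False).indices        # (Q, k) nearest points
        z = latents[knn]                                    # (Q, k, dim)
        rel = pts[knn] - query[:, None]                     # relative offsets
        w = self.score(torch.cat([z, rel], -1)).softmax(1)  # inferred weights
        return self.occ((w * z).sum(1))                     # (Q, 1) occupancy logit
```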
Intelligent mesh generation (IMG) refers to techniques that generate meshes via machine learning; it is a relatively new and promising research field. Within its short life span, IMG has greatly expanded the generalizability and practicality of mesh generation techniques and brought many breakthroughs and new possibilities to mesh generation. However, there is a lack of surveys focusing on IMG methods that cover recent works. In this paper, we present a systematic and comprehensive survey describing the contemporary IMG landscape. Focusing on 110 preliminary IMG methods, we conduct an in-depth analysis and evaluation from multiple perspectives, including each algorithm's core technique and application scope, agent learning goals, data types, targeted challenges, advantages, and limitations. With the aim of literature collection and classification based on content extraction, we propose three different taxonomies from the three views of key technique, output mesh unit element, and applicable input data types. Finally, we highlight some promising future research directions and challenges in IMG. To maximize the convenience of readers, a project page for IMG is provided at \url{https://github.com/xzb030/IMG_Survey}.
We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders with our implicit decoder for representation learning (via IM-AE) and shape generation (via IM-GAN), we demonstrate superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality. Code and supplementary material are available at https://github.com/czq142857/implicit-decoder.
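The decoder described above reduces to a small MLP that classifies a query point against a shape code. A minimal PyTorch sketch of that interface (the released IM-NET is deeper and differs in detail):

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Binary inside/outside classifier conditioned on a shape feature vector."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # probability of being inside
        )

    def forward(self, feat, xyz):
        # feat: (B, feat_dim) shape code; xyz: (B, P, 3) query points.
        f = feat[:, None, :].expand(-1, xyz.shape[1], -1)
        return self.net(torch.cat([f, xyz], -1))  # (B, P, 1)
```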
Recent work modeling 3D open surfaces trains deep neural networks to approximate Unsigned Distance Fields (UDFs) and represent shapes implicitly. To convert this representation into an explicit mesh, these methods either use computationally expensive approaches that mesh a dense point cloud sampled from the surface, or distort the surface by inflating it into a Signed Distance Field (SDF). In contrast, we propose to directly mesh deep UDFs as open surfaces by extending marching cubes with local detection of surface crossings. Our method is an order of magnitude faster than meshing dense point clouds and more accurate than inflating open surfaces. Furthermore, we make our surface extraction differentiable and show that it can help fit sparse supervision signals.
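The local crossing detection can be approximated by comparing UDF gradients at neighboring grid corners: when the gradients point in opposing directions, the open surface passes between the two corners, which supplies the pseudo-signs a marching-cubes pass needs. A minimal sketch of that test (my simplification of the criterion):

```python
import torch
import torch.nn.functional as F

def crossing_mask(grad_a, grad_b, thresh=0.0):
    """grad_a, grad_b: (N, 3) UDF gradients at paired neighboring corners.
    Returns True where the surface likely crosses the edge between them."""
    cos = F.cosine_similarity(grad_a, grad_b, dim=-1)
    return cos < thresh   # opposing gradients -> a surface sits in between
```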
Implicit shape representations, such as Level Sets, provide a very elegant formulation for performing computations involving curves and surfaces. However, including implicit representations in canonical Neural Network formulations is far from straightforward. This has consequently restricted existing approaches to shape inference to significantly less effective representations, perhaps most commonly voxel occupancy maps or sparse point clouds. To overcome this limitation, we propose a novel formulation that permits the use of implicit representations of curves and surfaces, of arbitrary topology, as individual layers in Neural Network architectures with end-to-end trainability. Specifically, we propose to represent the output as an oriented level set of a continuous and discretised embedding function. We investigate the benefits of our approach on the task of 3D shape prediction from a single image and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems.
Our method completes a partial 3D scan using a 3D Encoder-Predictor network that leverages semantic features from a 3D classification network. The predictions are correlated with a shape database, which we use in a multi-resolution 3D shape synthesis step. We obtain completed high-resolution meshes that are inferred from partial, low-resolution input scans.