The objective of this paper is to learn dense 3D shape correspondence for topology-varying generic objects in an unsupervised manner. Conventional implicit functions estimate the occupancy of a 3D point given a shape latent code. Instead, our novel implicit function produces a probabilistic embedding to represent each 3D point in a part embedding space. Assuming the corresponding points are similar in the embedding space, we implement dense correspondence through an inverse function mapping from the part embedding vector to a corresponded 3D point. Both functions are jointly learned with several effective and uncertainty-aware loss functions to realize our assumption, together with the encoder generating the shape latent code. During inference, if a user selects an arbitrary point on the source shape, our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape, as well as the corresponding semantic point if there is one. Such a mechanism inherently benefits man-made objects with different part constitutions. The effectiveness of our approach is demonstrated through unsupervised 3D semantic correspondence and shape segmentation.
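For readers who want a concrete picture of the mechanism, the sketch below pairs a forward implicit function that maps a (shape code, 3D point) pair to a part-embedding vector with an inverse function that maps the embedding back to a point; decoding a source point's embedding with the target shape's code yields a correspondence. This is a minimal PyTorch illustration, not the authors' implementation; layer sizes, module names, and the training note below are assumptions.

```python
import torch
import torch.nn as nn

class PartEmbeddingField(nn.Module):
    """Maps a 3D point plus a shape latent code to a part-embedding vector.
    A minimal stand-in for the forward implicit function; architecture and
    sizes are illustrative assumptions."""
    def __init__(self, z_dim=256, emb_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, emb_dim),
        )

    def forward(self, z, pts):                        # z: (B, z_dim), pts: (B, N, 3)
        z = z.unsqueeze(1).expand(-1, pts.shape[1], -1)
        return self.net(torch.cat([z, pts], dim=-1))  # (B, N, emb_dim)

class InverseField(nn.Module):
    """Maps a part-embedding vector (plus the shape code) back to a 3D point."""
    def __init__(self, z_dim=256, emb_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, z, emb):                        # emb: (B, N, emb_dim)
        z = z.unsqueeze(1).expand(-1, emb.shape[1], -1)
        return self.net(torch.cat([z, emb], dim=-1))  # (B, N, 3)

def correspond(f, g, z_src, z_tgt, pts_src):
    """Transfer points from a source shape to a target shape: embed the source
    points with the source code, then decode with the target code."""
    emb = f(z_src, pts_src)
    return g(z_tgt, emb)
```

During training one would add a self-reconstruction term such as ||g(z, f(z, p)) - p|| together with the paper's uncertainty-aware losses (not shown here) so that similar embeddings indeed mark corresponding points.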
The neural radiance field (NeRF) has shown promising results in preserving the fine details of objects and scenes. However, unlike mesh-based representations, it remains an open problem to build dense correspondences across different NeRFs of the same category, which is essential in many downstream tasks. The main difficulties of this problem lie in the implicit nature of NeRF and the lack of ground-truth correspondence annotations. In this paper, we show it is possible to bypass these challenges by leveraging the rich semantics and structural priors encapsulated in a pre-trained NeRF-based GAN. Specifically, we exploit such priors from three aspects, namely 1) a dual deformation field that takes latent codes as global structural indicators, 2) a learning objective that regards generator features as geometric-aware local descriptors, and 3) a source of infinite object-specific NeRF samples. Our experiments demonstrate that such priors lead to 3D dense correspondence that is accurate, smooth, and robust. We also show that established dense correspondence across NeRFs can effectively enable many NeRF-based downstream applications such as texture transfer.
Recent progress in 4D implicit representations focuses on globally controlling shape and motion with low-dimensional latent vectors, which is prone to missing surface details and accumulating tracking errors. While many deep local representations have shown promising results for 3D shape modeling, their 4D counterpart does not yet exist. In this paper, we fill this gap by proposing a novel Local 4D implicit Representation for Dynamic clothed humans, named LoRD, which has the merits of both 4D human modeling and local representation, and enables high-fidelity reconstruction with detailed surface deformations, such as clothing wrinkles. In particular, our key insight is to encourage the network to learn latent codes of local part-level representations, capable of explaining the local geometry and temporal deformations. To make inference at test time, we first estimate the inner-body skeletal motion to track local parts at each time step, and then optimize the latent codes of each part via auto-decoding, based on different types of observed data. Extensive experiments demonstrate that the proposed method has a strong capability to represent 4D humans, and outperforms state-of-the-art methods on practical applications, including 4D reconstruction from sparse points and non-rigid depth fusion, both qualitatively and quantitatively.
Recent research has shown that controllable image generation based on pre-trained GANs can benefit a wide range of computer vision tasks. However, less attention has been devoted to 3D vision tasks. In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervision from GAN-generated multi-view images and perform single-view reconstruction of generic objects. First, a novel generator, trained offline, is proposed to synthesize plausible pseudo images with full control over the viewpoint. Then, we propose to leverage a neural implicit function, together with a differentiable renderer, to learn 3D geometry from pseudo images with object masks and rough pose initialization. To further detect unreliable supervision, we introduce a novel uncertainty module that predicts an uncertainty map, which remedies the negative impact of uncertain regions in the pseudo images, leading to better reconstruction performance. The effectiveness of our approach is demonstrated by superior single-view 3D reconstruction results on generic objects.
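The uncertainty map described above can be folded into the reconstruction loss in the usual heteroscedastic way, down-weighting unreliable pseudo-image regions. The snippet below is a generic, hedged illustration of that idea (Kendall-and-Gal-style weighting), not the paper's exact loss; `pred`, `pseudo`, and `log_sigma` are hypothetical tensors of matching shape.

```python
import torch

def uncertainty_weighted_l1(pred, pseudo, log_sigma):
    """Heteroscedastic uncertainty weighting: pixels with high predicted
    uncertainty contribute less to the reconstruction loss, while the
    log-sigma regularizer prevents the trivial 'everything uncertain'
    solution. A generic formulation, not necessarily the paper's loss."""
    return ((pred - pseudo).abs() * torch.exp(-log_sigma) + log_sigma).mean()
```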
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
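For context, a latent code-conditioned SDF decoder of the kind pictured above can be written in a few lines of PyTorch. The following is a trimmed-down sketch assuming a shallow MLP and the commonly used clamped L1 objective; the actual DeepSDF decoder is deeper, with a mid-network skip connection and weight normalization.

```python
import torch
import torch.nn as nn

class SDFDecoder(nn.Module):
    """Latent code-conditioned SDF decoder in the spirit of DeepSDF: the input
    is a shape code concatenated with a query point, the output is a signed
    distance. Simplified for illustration."""
    def __init__(self, z_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),
        )

    def forward(self, z, xyz):                 # z: (N, z_dim), xyz: (N, 3)
        return self.net(torch.cat([z, xyz], dim=-1)).squeeze(-1)

def clamped_l1(pred, gt, delta=0.1):
    """Clamped L1 loss for SDF regression: values far from the surface are
    truncated so the network focuses on the near-surface region."""
    return (torch.clamp(pred, -delta, delta) -
            torch.clamp(gt, -delta, delta)).abs().mean()
```

In the auto-decoder setting, the per-shape codes z are free variables optimized jointly with the decoder, and a new shape's code is recovered at test time by optimizing z with the decoder frozen.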
We propose a method for the unsupervised reconstruction of a temporally coherent sequence of surfaces from a sequence of temporally evolving point clouds. It yields dense and semantically meaningful correspondences between frames. We represent the reconstructed surfaces as atlases computed by a neural network, which enables us to establish correspondences between frames. The key to making these correspondences semantically meaningful is to guarantee that the metric tensors computed at corresponding points are as similar as possible. We have devised an optimization strategy that makes our method robust to noise and global motion, without requiring a priori correspondences or a pre-alignment step. As a result, our approach outperforms the state-of-the-art on several challenging datasets. The code is available at https://github.com/bednarikjan/temporally_coherent_surface_reconstruction.
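The metric-similarity constraint mentioned above can be made concrete with autograd: the first fundamental form of the atlas mapping is J^T J, where J is the per-point Jacobian of the 2D-to-3D mapping. The sketch below is an illustrative re-implementation of that consistency idea, not the released code; it assumes the atlas networks act point-wise on their (u, v) inputs.

```python
import torch

def metric_tensors(atlas_fn, uv):
    """First fundamental form g = J^T J of a point-wise neural atlas mapping
    phi: R^2 -> R^3 at parameters uv of shape (N, 2). Assumes atlas_fn maps
    (N, 2) to (N, 3) and acts on each row independently (a plain MLP does)."""
    J = torch.autograd.functional.jacobian(
        lambda x: atlas_fn(x).sum(dim=0), uv, create_graph=True)  # (3, N, 2)
    J = J.permute(1, 0, 2)                        # (N, 3, 2) per-point Jacobians
    return torch.einsum('nij,nik->njk', J, J)     # (N, 2, 2)

def metric_consistency_loss(atlas_t0, atlas_t1, uv):
    """Penalize differences between the metric tensors of two frames' atlases
    at the same parameter locations, so corresponding points deform
    near-isometrically. A simplified sketch of the consistency idea."""
    return (metric_tensors(atlas_t0, uv) -
            metric_tensors(atlas_t1, uv)).pow(2).mean()
```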
Figure 1. This paper introduces Local Deep Implicit Functions, a 3D shape representation that decomposes an input shape (mesh on left in every triplet) into a structured set of shape elements (colored ellipses on right) whose contributions to an implicit surface reconstruction (middle) are represented by latent vectors decoded by a deep network. Project video and website at ldif.cs.princeton.edu.
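As a rough illustration of the decomposition in the caption, the sketch below evaluates an implicit function as a sum of per-element contributions: each shape element contributes a scaled Gaussian window modulated by a small decoder conditioned on the element's latent vector. It is a simplified, isotropic stand-in (the actual representation uses oriented, anisotropic elements and a more elaborate decoder); every name and size here is an assumption.

```python
import torch
import torch.nn as nn

class LocalImplicit(nn.Module):
    """Evaluate an implicit value as a sum of local shape-element contributions:
    a scaled Gaussian window around each element center, modulated by a tiny
    decoder reading the element latent and element-local coordinates.
    Intentionally simplified; for illustration only."""
    def __init__(self, latent_dim=32, hidden=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, centers, scales, radii, latents, pts):
        # centers: (E, 3), scales/radii: (E,), latents: (E, D), pts: (N, 3)
        local = pts[None, :, :] - centers[:, None, :]             # (E, N, 3)
        window = scales[:, None] * torch.exp(
            -(local ** 2).sum(-1) / (2 * radii[:, None] ** 2))    # (E, N)
        lat = latents[:, None, :].expand(-1, pts.shape[0], -1)    # (E, N, D)
        value = self.decoder(torch.cat([lat, local], -1)).squeeze(-1)
        return (window * value).sum(0)                            # (N,)
```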
Inferring the 3D locations and shapes of multiple objects from a single 2D image is a long-standing goal of computer vision. Most existing works predict only one of these 3D properties or focus on a single object. A fundamental challenge lies in how to learn an effective representation of the image that is suitable for both 3D detection and reconstruction. In this work, we propose to learn a regular grid of 3D voxel features from the input image, aligned with the 3D scene space via a 3D feature lifting operator. Based on the 3D voxel features, our novel Center-3D detection head formulates 3D detection as keypoint detection in 3D space. Moreover, we devise an efficient coarse-to-fine reconstruction module, including a coarse-level volumetric stage and a novel local PCA-SDF shape representation, which enables fine-detail reconstruction with an order of magnitude faster inference than existing methods. With complementary supervision from both 3D detection and reconstruction, the 3D voxel features become geometry- and context-preserving, and the effectiveness of our method is demonstrated through 3D detection and reconstruction in both single-object and multi-object scenes.
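The 3D feature-lifting step described above is, at its core, a projection-and-sampling operation: each voxel center is projected into the image with the camera intrinsics and bilinearly samples the 2D feature map. The sketch below shows a generic version of such an operator under simple assumptions (a single camera, voxel centers already in camera coordinates, no handling of out-of-frustum voxels); the paper's exact operator may differ.

```python
import torch
import torch.nn.functional as F

def lift_image_features(feat2d, voxel_xyz, K):
    """Lift a 2D feature map into a 3D voxel grid by projecting each voxel
    center with the camera intrinsics and bilinearly sampling image features.
    feat2d: (1, C, H, W); voxel_xyz: (D, Hv, Wv, 3) in camera coordinates
    with z > 0; K: (3, 3). Returns (C, D, Hv, Wv)."""
    D, Hv, Wv, _ = voxel_xyz.shape
    pts = voxel_xyz.reshape(-1, 3)
    uvw = (K @ pts.T).T                                   # homogeneous pixel coords
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)
    _, C, H, W = feat2d.shape
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(1, 1, -1, 2)
    sampled = F.grid_sample(feat2d, grid, align_corners=True)  # (1, C, 1, N)
    return sampled.view(C, D, Hv, Wv)
```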
This paper introduces a novel framework named DT-Net for 3D mesh reconstruction and generation via disentangled topology. Going beyond previous works, we learn a topology-aware neural template specific to each input, and then deform the template to reconstruct a detailed mesh while preserving the learned topology. A key insight is to decompose the complex mesh reconstruction task into two sub-tasks: topology formulation and shape deformation. Thanks to this decoupling, DT-Net implicitly learns a disentangled representation of topology and shape in the latent space. It therefore enables novel disentangled controls that support various shape generation applications, e.g., remixing the topologies of different 3D objects, which previous reconstruction works cannot achieve. Extensive experimental results demonstrate that our method is able to produce high-quality meshes, particularly ones with diverse topologies, compared with state-of-the-art methods.
In the area of 3D shape analysis, the geometric properties of shapes have long been studied. Instead of directly extracting representative features with expert-designed descriptors or end-to-end deep neural networks, this paper is dedicated to discovering distinctive information from the shape formation process. Specifically, a spherical point cloud serving as the template is progressively deformed to fit the target shape in a coarse-to-fine manner. During the shape formation process, several checkpoints are inserted to facilitate recording and investigating the intermediate stages. For each stage, the offset field is evaluated as a stage-aware description. The summation of the offsets over the entire shape formation process fully defines the target shape in terms of geometry. From this perspective, one can cheaply derive point-wise shape correspondence from the template, which benefits various graphics applications. In this paper, a Progressive Deformation-based Auto-Encoder (PDAE) is proposed to learn the stage-aware descriptions through the coarse-to-fine shape fitting task. Experimental results show that the proposed PDAE is able to reconstruct 3D shapes with high fidelity while preserving consistent topology throughout the multi-stage deformation process. Additional applications based on the stage-aware descriptions are presented, demonstrating its generality.
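The bookkeeping implied by the abstract, a template deformed stage by stage, with the per-stage offset fields serving as stage-aware descriptions and their sum reproducing the target, can be sketched as follows. Stage count, conditioning, and layer sizes are illustrative assumptions rather than the PDAE architecture.

```python
import torch
import torch.nn as nn

class ProgressiveDeformer(nn.Module):
    """Coarse-to-fine deformation of a spherical template point cloud: each
    stage predicts an offset field conditioned on a shape code, and the target
    shape equals the template plus the sum of all stage offsets."""
    def __init__(self, n_stages=3, z_dim=256, hidden=256):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Linear(z_dim + 3, hidden), nn.ReLU(),
                          nn.Linear(hidden, 3))
            for _ in range(n_stages)
        ])

    def forward(self, template, z):            # template: (N, 3), z: (z_dim,)
        pts, offsets = template, []
        for stage in self.stages:
            cond = torch.cat([z.expand(pts.shape[0], -1), pts], dim=-1)
            off = stage(cond)                  # stage-aware offset field (N, 3)
            offsets.append(off)
            pts = pts + off                    # checkpoint after each stage
        # by construction, template + sum of per-stage offsets is the final shape
        assert torch.allclose(template + sum(offsets), pts, atol=1e-5)
        return pts, offsets
```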
Establishing correspondence between two non-rigidly deforming shapes is one of the most fundamental problems in visual computing. Existing methods often show weak resilience when presented with challenges innate to real-world data, such as noise, outliers, and self-occlusion. On the other hand, auto-decoders have demonstrated strong expressive power in learning geometrically meaningful latent embeddings, yet their use in shape analysis has been limited. In this paper, we introduce an approach based on an auto-decoder framework that learns a continuous shape-wise deformation field over a fixed template. By supervising the deformation field for on-surface points and regularizing off-surface points through a novel Signed Distance Regularization (SDR), we learn an alignment between the template and shape volumes. Trained on clean, watertight meshes, without any data augmentation, we demonstrate compelling performance on compromised data and real-world scans.
Prior work for articulated 3D shape reconstruction often relies on specialized sensors (e.g., synchronized multi-camera systems) or pre-built 3D deformable models (e.g., SMAL or SMPL). Such methods are unable to scale to diverse sets of objects in the wild. We present BANMo, a method that requires neither specialized sensors nor a pre-defined template shape. BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many monocular casual videos in a differentiable rendering framework. While the use of many videos provides more coverage of camera views and object articulations, it introduces significant challenges in establishing correspondence across scenes with different backgrounds, illumination conditions, etc. Our key insight is to merge three schools of thought: (1) classic deformable shape models that make use of articulated bones and blend skinning, (2) volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization, and (3) canonical embeddings that generate correspondences between pixels. We introduce neural blend skinning models that allow for differentiable and invertible articulated deformations. When combined with canonical embeddings, such models allow us to establish dense correspondences across videos that can be self-supervised with cycle consistency. On real and synthetic datasets, BANMo shows higher-fidelity 3D reconstructions than prior work for humans and animals, with the ability to render realistic images from novel viewpoints and poses. Project webpage: banmo-www.github.io.
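The first ingredient named above, articulated bones with blend skinning, reduces in its classic linear form to a weighted sum of per-bone rigid transforms. The snippet below sketches only that standard formulation; BANMo's neural blend skinning additionally makes the weights learned and the deformation invertible, which is not shown here.

```python
import torch

def linear_blend_skinning(points, weights, rotations, translations):
    """Differentiable linear blend skinning: each canonical point is deformed
    by a weighted combination of per-bone rigid transforms.
    points: (N, 3), weights: (N, B), rotations: (B, 3, 3), translations: (B, 3).
    Returns posed points of shape (N, 3)."""
    per_bone = torch.einsum('bij,nj->nbi', rotations, points) + translations[None]
    return (weights[..., None] * per_bone).sum(dim=1)
```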
To make 3D human avatars widely available, we must be able to generate a variety of 3D virtual humans with varied identities and shapes in arbitrary poses. This task is challenging due to the diversity of clothed body shapes, their complex articulations, and the resulting rich yet stochastic geometric detail. Hence, current methods for representing 3D humans do not provide a full generative model of people in clothing. In this paper, we propose a novel method that learns to generate detailed 3D shapes of people in a variety of garments with corresponding skinning weights. Specifically, we devise a multi-subject forward skinning module that is learned from un-rigged scans of only a few subjects. To capture the stochastic nature of high-frequency details in garments, we leverage an adversarial loss formulation that encourages the model to capture the underlying statistics. We provide empirical evidence that this leads to realistic generation of local details such as wrinkles. We show that our model is able to generate natural human avatars wearing diverse and detailed clothing. Furthermore, we show that our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
Figure 1. Shapes from the ShapeNet [8] database, fit to a structured implicit template, and arranged by template parameters using t-SNE [52]. Similar shape classes, such as airplanes, cars, and chairs, naturally cluster by template parameters.
Generation of 3D data by deep neural networks has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output: point cloud coordinates. Along with this problem arises a unique and interesting issue: the ground-truth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in the ground truth, we design an architecture, loss function, and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments, our system not only outperforms state-of-the-art methods on single-image-based 3D reconstruction benchmarks, but also shows strong performance for 3D shape completion and a promising ability to make multiple plausible predictions.
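Set-level losses are what make such unordered point-cloud prediction trainable; the Chamfer distance used in this line of work (alongside the Earth Mover's Distance) can be written in a few lines. The following brute-force version is for clarity only; production pipelines use nearest-neighbor structures or batched CUDA kernels.

```python
import torch

def chamfer_distance(pred, gt):
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and
    (M, 3): each point is matched to its nearest neighbor in the other set.
    Brute-force O(N*M) implementation for clarity."""
    diff = pred[:, None, :] - gt[None, :, :]          # (N, M, 3)
    dist2 = (diff ** 2).sum(-1)                       # squared pairwise distances
    return dist2.min(dim=1).values.mean() + dist2.min(dim=0).values.mean()
```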
Our method studies the complex task of object-centric 3D understanding from a single RGB-D observation. As this is an ill-posed problem, existing methods suffer from low performance in both 3D shape and 6D pose and size estimation in complex multi-object scenarios with occlusions. We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, and 6D object pose and size estimation. Key to ShAPO is a single-shot pipeline that regresses shape, appearance, and pose latent codes, along with the mask of each object instance, which are then further refined in a sparse-to-dense fashion. A novel disentangled shape and appearance prior database is first learned to embed objects in their respective shape and appearance spaces. We also propose a novel octree-based differentiable optimization step, allowing us to further improve object shape, pose, and appearance in an analysis-by-synthesis fashion. Our novel joint implicit textured object representation allows us to accurately identify and reconstruct novel unseen objects without access to their 3D meshes. Through extensive experiments, we show that our method, trained on simulated indoor scenes, accurately regresses the shape, appearance, and pose of novel objects in the real world with minimal fine-tuning. Our method significantly outperforms all baselines on the NOCS dataset, with an absolute improvement of 8% in mAP for 6D pose estimation. Project page: https://zubair-irshad.github.io/projects/shapo.html
There has recently been a growing interest in implicit shape representations. Contrary to explicit representations, they have no resolution limitations and easily handle a wide variety of surface topologies. To learn these implicit representations, current approaches rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require a dense point cloud (to approximate the distance to the shape well enough). In contrast, we introduce NeeDrop, a self-supervised method for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and consider that, statistically, close to the surface, the needle endpoints lie on opposite sides of the surface. No shape knowledge is required, and the point cloud can be highly sparse, e.g., a lidar point cloud acquired by a vehicle. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on shape reconstruction datasets, and show promising qualitative results on hard autonomous driving datasets such as KITTI.
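One way to read the needle idea in code: sample a short segment (a "needle") around each observed point and ask an implicit occupancy network to place the two endpoints on opposite sides of the surface. The sketch below is a heavily simplified interpretation under that assumption, with hypothetical sampling and loss choices, not the paper's exact formulation.

```python
import torch

def needle_endpoints(points, length=0.05):
    """Sample one 'needle' per input point: a short random segment centered on
    an observed surface point, so its two endpoints tend to fall on opposite
    sides of the surface. Simplified sampling, not the paper's procedure."""
    direction = torch.randn_like(points)
    direction = direction / direction.norm(dim=-1, keepdim=True)
    half = 0.5 * length * direction
    return points - half, points + half

def opposite_side_loss(occ_net, p0, p1):
    """Encourage an implicit occupancy network to put the two endpoints of each
    needle on opposite sides of the surface (occupancies summing to one)."""
    o0, o1 = torch.sigmoid(occ_net(p0)), torch.sigmoid(occ_net(p1))
    return ((o0 + o1 - 1.0) ** 2).mean()
```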
Point cloud completion is a generation and estimation problem posed on partial point clouds, and it plays a vital role in applications of 3D computer vision. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced to meet practical requirements. Therefore, this work conducts a comprehensive survey of various methods, including point-based, convolution-based, graph-based, and generative model-based approaches, and summarizes the comparisons among these methods to provoke further research insights. In addition, this review sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we also discuss possible research trends in this rapidly expanding field.
Point clouds are characterized by irregularity and unstructuredness, which pose challenges in efficient data exploitation and discriminative feature extraction. In this paper, we present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology as a completely regular 2D point geometry image (PGI) structure, in which coordinates of spatial points are captured in colors of image pixels. Intuitively, Flattening-Net implicitly approximates a locally smooth 3D-to-2D surface flattening process while effectively preserving neighborhood consistency. As a generic representation modality, PGI inherently encodes the intrinsic property of the underlying manifold structure and facilitates surface-style point feature aggregation. To demonstrate its potential, we construct a unified learning framework directly operating on PGIs to achieve diverse types of high-level and low-level downstream applications driven by specific task networks, including classification, segmentation, reconstruction, and upsampling. Extensive experiments demonstrate that our methods perform favorably against the current state-of-the-art competitors. We will make the code and data publicly available at https://github.com/keeganhk/Flattening-Net.
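The PGI layout itself is simple to make concrete: pixel "colors" are xyz coordinates, so moving between the image structure and a point set is a reshape once the learned flattening has produced the ordering. The helpers below only illustrate that data layout; the learned flattening itself is the paper's contribution and is not reproduced here.

```python
import torch

def pgi_to_points(pgi):
    """A point geometry image (PGI) stores xyz coordinates in the 'color'
    channels of a regular H x W grid; recovering the point cloud is a reshape.
    pgi: (H, W, 3) -> (H*W, 3)."""
    return pgi.reshape(-1, 3)

def points_to_pgi(points, side):
    """Inverse layout operation for an already-ordered point set. The real
    Flattening-Net learns the 3D-to-2D surface flattening that produces this
    ordering; here we only illustrate the regular-grid data structure."""
    assert points.shape[0] == side * side, "expects a square grid of points"
    return points.reshape(side, side, 3)
```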