Normalizing flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions. However, their invertibility constraint imposes limitations on data distributions that reside on lower-dimensional manifolds embedded in a higher-dimensional space. In practice, this shortcoming is often bypassed by adding noise to the data, which impacts the quality of the generated samples. In contrast to prior work, we approach this problem by generating samples from the original data distribution given full knowledge of the perturbed distribution and the noise model. To this end, we establish that NFs trained on perturbed data implicitly represent the manifold in regions of maximum likelihood. We then propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution. Finally, we focus on 3D point clouds, for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself, to apply Poisson surface reconstruction to refine generated point sets.
Translated by Google Translate
Recently, normalizing flows (NFs) have demonstrated state-of-the-art performance on modeling 3D point clouds while allowing sampling at arbitrary resolution at inference time. However, these flow-based models still require long training times and large models to represent complicated geometries. This work enhances their representational power by applying mixtures of NFs to point clouds. We show that in this more general framework, each component learns to specialize in a particular subregion of an object in a completely unsupervised fashion. By instantiating each mixture component with a comparatively small NF, we generate point clouds with improved detail while using fewer parameters than single-flow-based models, and considerably reduce inference runtime. We further demonstrate that by adding data augmentation, individual mixture components can learn to specialize in a semantically meaningful manner. We evaluate mixtures of NFs on generation, autoencoding, and single-view reconstruction on the ShapeNet dataset.
Point clouds captured by depth sensors are often contaminated by noise, hindering further analysis and applications. In this paper, we highlight the importance of point distribution uniformity for downstream tasks. We demonstrate that the point clouds produced by existing gradient-based denoisers lack uniformity despite their promising quantitative results. To this end, we propose GPCD++, a gradient-based denoiser with an ultra-lightweight network named UniNet to address uniformity. Compared with previous state-of-the-art methods, our approach not only achieves competitive or even better denoising results but also significantly improves uniformity, which greatly benefits applications such as surface reconstruction.
3D point clouds acquired by scanning real-world objects or scenes have found a wide range of applications, including immersive telepresence, autonomous driving, and surveillance. They are often perturbed by noise or suffer from low density, which hinders downstream tasks such as surface reconstruction and understanding. In this paper, we propose a novel paradigm for point set resampling and restoration that learns a continuous gradient field of the point cloud which converges points toward the underlying surface. In particular, we represent a point cloud via its gradient field, i.e. the gradient of the log-probability density function, and enforce the gradient field to be continuous, thus guaranteeing the continuity of the model for solvable optimization. Based on the continuous gradient field estimated via a proposed neural network, resampling a point cloud amounts to performing gradient-based Markov chain Monte Carlo (MCMC) on the input noisy or sparse point cloud. Further, we propose point cloud restoration, which essentially iteratively refines the intermediate resampled point cloud and introduces regularizations into the gradient-based MCMC to accommodate various priors during the resampling process. Extensive experimental results demonstrate that the proposed point set resampling achieves state-of-the-art performance in representative restoration tasks, including point cloud denoising and upsampling.
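The gradient-based MCMC resampling described above can be sketched with Langevin dynamics on a toy 1D density. Here the analytic score of a Gaussian N(0, σ²) stands in for the learned gradient-field network, and the step size and chain length are arbitrary choices for the sketch:

```python
import math
import random

# Minimal Langevin-dynamics sketch: samples drift along the gradient
# field toward the high-density region y = 0, while injected noise
# keeps the chain stochastic. SIGMA is an assumed toy density width.
SIGMA = 0.2

def grad_log_p(y):
    """Gradient of log N(y; 0, SIGMA^2) -- the 'gradient field'."""
    return -y / SIGMA ** 2

def langevin(y, step=1e-3, n_steps=2000, rng=None):
    rng = rng or random.Random(0)
    for _ in range(n_steps):
        y += step * grad_log_p(y) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
    return y

# Chains started far from the surface end up distributed around y = 0.
samples = [langevin(2.0, rng=random.Random(seed)) for seed in range(100)]
print(sum(samples) / len(samples))
```

In the papers the score is produced by a network conditioned on the noisy point cloud; the Langevin update itself is the same drift-plus-noise step shown here.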
Point clouds acquired from scanning devices are often perturbed by noise, which affects downstream tasks such as surface reconstruction and analysis. The distribution of a noisy point cloud can be viewed as the distribution of a set of noise-free samples $p(x)$ convolved with some noise model $n$, leading to $(p * n)(x)$, whose mode is the underlying clean surface. To denoise a noisy point cloud, we propose to increase the log-likelihood of each point under $p * n$ via gradient ascent, iteratively updating each point's position. Since $p * n$ is unknown at test time, and we only need the score (i.e., the gradient of the log-probability function) to perform gradient ascent, we propose a neural network architecture to estimate the score of $p * n$ given only the noisy point cloud as input. We derive an objective function for training the network and develop a denoising algorithm that leverages the estimated score. Experiments demonstrate that the proposed model outperforms state-of-the-art methods under a variety of noise models, and shows the potential to be applied to other tasks such as point cloud upsampling. The code is available at \url{https://github.com/luost26/score-denoise}.
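As a toy illustration of this gradient-ascent view (an analytic stand-in, not the paper's learned score network): if clean points lie on the line y = 0 and the noise is Gaussian with known standard deviation, the score of the perturbed density is available in closed form, and repeated ascent steps collapse the noise:

```python
import random

# Clean points lie on y = 0; noise is Gaussian with std SIGMA, so the
# perturbed density along y is N(0, SIGMA^2) and its score is
# d/dy log(p*n) = -y / SIGMA**2. STEP and N_ITERS are arbitrary choices.
SIGMA = 0.1
STEP = 0.2
N_ITERS = 50

def score_y(y):
    """Analytic score along y (stands in for the neural score estimator)."""
    return -y / SIGMA ** 2

def denoise(points):
    pts = [list(p) for p in points]
    for _ in range(N_ITERS):
        for p in pts:
            p[1] += STEP * SIGMA ** 2 * score_y(p[1])  # scaled ascent step
    return pts

random.seed(0)
noisy = [(random.uniform(-1, 1), random.gauss(0.0, SIGMA)) for _ in range(100)]
clean = denoise(noisy)
print(max(abs(p[1]) for p in clean))  # residual noise after ascent
```

Each update multiplies the y-offset by (1 - STEP), so the points converge geometrically onto the clean line while their x-coordinates are untouched.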
In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference times and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR), which allows for a GPU-accelerated fast solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight, and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the tasks of surface reconstruction from unoriented point clouds and learning-based reconstruction.
Figure 1. Given input as either a 2D image or a 3D point cloud (a), we automatically generate a corresponding 3D mesh (b) and its atlas parameterization (c). We can use the recovered mesh and atlas to apply texture to the output shape (d) as well as 3D print the results (e).
There has recently been a growing interest in implicit shape representations. Contrary to explicit representations, they have no resolution limitations and they easily deal with a wide variety of surface topologies. To learn these implicit representations, current approaches rely on a certain level of shape supervision (e.g., inside/outside information or distance-to-shape knowledge), or at least require a dense point cloud (to approximate the distance-to-shape well enough). In contrast, we introduce {\method}, a self-supervised method for learning shape representations from possibly extremely sparse point clouds. As in Buffon's needle problem, we "drop" (sample) needles on the point cloud and consider that, statistically, close to the surface, the needle endpoints lie on opposite sides of the surface. No shape knowledge is required and the point cloud can be highly sparse, e.g., lidar point clouds acquired by vehicles. Previous self-supervised shape representation approaches fail to produce good results on this kind of data. We obtain quantitative results on par with existing supervised approaches on standard shape reconstruction datasets and show promising qualitative results on hard autonomous driving datasets such as KITTI.
allows us to train our model in the variational inference framework. Empirically, we demonstrate that PointFlow achieves state-of-the-art performance in point cloud generation. We additionally show that our model can faithfully reconstruct point clouds and learn useful representations in an unsupervised manner. The code is available at https://github.com/stevenygd/PointFlow.
Figure 1: DeepSDF represents signed distance functions (SDFs) of shapes via latent code-conditioned feed-forward decoder networks. Above images are raycast renderings of DeepSDF interpolating between two shapes in the learned shape latent space. Best viewed digitally.
Point cloud denoising aims to recover clean point clouds from raw observations corrupted by noise and outliers while preserving fine-grained details. We propose a novel deep-learning-based denoising model that incorporates normalizing flows and noise disentanglement techniques to achieve high denoising accuracy. Unlike existing works that extract point cloud features for point-wise correction, we formulate the denoising process from the perspective of distribution learning and feature disentanglement. Since a noisy point cloud can be viewed as a joint distribution of clean points and noise, the denoised result can be derived by disentangling the noise counterpart from the latent point representation, while the mapping between Euclidean and latent spaces is modeled by normalizing flows. We evaluate our method on synthetic 3D models and real-world datasets with various noise settings. Qualitative and quantitative results show that our method outperforms previous state-of-the-art deep-learning-based approaches.
Point cloud upsampling aims to generate dense point clouds from given sparse ones, which is a challenging task due to the irregular and unordered nature of point sets. To address this problem, we propose a novel deep-learning-based model called PU-Flow, which incorporates normalizing flows and weight prediction techniques to produce dense points uniformly distributed on the underlying surface. Specifically, we exploit the invertible characteristics of normalizing flows to transform points between Euclidean and latent spaces, and formulate the upsampling process as interpolation of neighbouring points in the latent space, with interpolation weights adaptively learned from the local geometric context. Extensive experiments show that our method is competitive and, in most test cases, outperforms state-of-the-art methods in terms of reconstruction quality, proximity-to-surface accuracy, and computational efficiency. The source code will be publicly available at https://github.com/unknownue/pu-flow.
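A minimal sketch of the latent-interpolation idea, with a fixed elementwise affine map standing in for the learned normalizing flow and uniform midpoint weights in place of the learned ones:

```python
# Toy stand-in for an invertible flow: a fixed affine map (the paper
# learns this). Upsampling interpolates neighbouring points in latent
# space and maps the result back to Euclidean space.
def to_latent(p):
    return [(c - 1.0) / 2.0 for c in p]   # invertible by construction

def to_euclidean(z):
    return [2.0 * c + 1.0 for c in z]

def upsample(points):
    """Insert one latent-space midpoint between consecutive points."""
    dense = []
    for a, b in zip(points, points[1:]):
        za, zb = to_latent(a), to_latent(b)
        mid = [(u + v) / 2.0 for u, v in zip(za, zb)]  # uniform weights
        dense.extend([a, to_euclidean(mid)])
    dense.append(points[-1])
    return dense

sparse = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
print(upsample(sparse))
```

With a learned nonlinear flow, the latent interpolation bends along the data manifold rather than along straight Euclidean segments, which is what lets the inserted points land on the underlying surface.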
Diffusion models have shown great promise for image generation, beating GANs in terms of generation diversity, with comparable image quality. However, their application to 3D shapes has been limited to point or voxel representations that can in practice not accurately represent a 3D surface. We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder. This allows us to generate diverse and high quality 3D surfaces. We additionally show that we can condition our model on images or text to enable image-to-3D generation and text-to-3D generation using CLIP embeddings. Furthermore, adding noise to the latent codes of existing shapes allows us to explore shape variations.
The recent neural implicit representation-based methods have greatly advanced the state of the art for solving the long-standing and challenging problem of reconstructing a discrete surface from a sparse point cloud. These methods generally learn either a binary occupancy or signed/unsigned distance field (SDF/UDF) as surface representation. However, all the existing SDF/UDF-based methods use neural networks to implicitly regress the distance in a purely data-driven manner, thus limiting the accuracy and generalizability to some extent. In contrast, we propose the first geometry-guided method for UDF and its gradient estimation that explicitly formulates the unsigned distance of a query point as the learnable affine averaging of its distances to the tangent planes of neighbouring points. Besides, we model the local geometric structure of the input point clouds by explicitly learning a quadratic polynomial for each point. This not only facilitates upsampling the input sparse point cloud but also naturally induces unoriented normal, which further augments UDF estimation. Finally, to extract triangle meshes from the predicted UDF we propose a customized edge-based marching cube module. We conduct extensive experiments and ablation studies to demonstrate the significant advantages of our method over state-of-the-art methods in terms of reconstruction accuracy, efficiency, and generalizability. The source code is publicly available at https://github.com/rsy6318/GeoUDF.
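The tangent-plane formulation above can be sketched directly; here simple inverse-distance weights stand in for the affine weights the paper learns:

```python
import math

# Sketch of the geometry-guided UDF idea: the unsigned distance of a
# query point is an affine average (weights sum to 1) of its distances
# to the tangent planes of neighbouring surface points.
def udf(query, neighbours):
    """neighbours: list of (point, unit_normal) pairs, both 3-tuples."""
    dists, weights = [], []
    for p, n in neighbours:
        d = [query[i] - p[i] for i in range(3)]
        dists.append(abs(sum(d[i] * n[i] for i in range(3))))  # plane distance
        w = 1.0 / (1e-8 + math.sqrt(sum(c * c for c in d)))    # inverse distance
        weights.append(w)
    total = sum(weights)
    return sum(w / total * t for w, t in zip(weights, dists))

# Query above the plane z = 0 sampled at a few oriented points:
plane_pts = [((x, y, 0.0), (0.0, 0.0, 1.0)) for x in (-1, 0, 1) for y in (-1, 0, 1)]
print(udf((0.2, 0.1, 0.5), plane_pts))  # 0.5 for an exact plane
```

For a perfect plane every tangent-plane distance equals the true unsigned distance, so any affine weighting is exact; the learned weights matter on curved geometry, where the quadratic polynomial fitted per point refines the local approximation.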
We present a probabilistic model for point cloud generation, which is fundamental for various 3D vision tasks such as shape completion, upsampling, synthesis and data augmentation. Inspired by the diffusion process in nonequilibrium thermodynamics, we view points in point clouds as particles in a thermodynamic system in contact with a heat bath, which diffuse from the original distribution to a noise distribution. Point cloud generation thus amounts to learning the reverse diffusion process that transforms the noise distribution to the distribution of a desired shape. Specifically, we propose to model the reverse diffusion process for point clouds as a Markov chain conditioned on certain shape latent. We derive the variational bound in closed form for training and provide implementations of the model. Experimental results demonstrate that our model achieves competitive performance in point cloud generation and auto-encoding. The code is available at https://github.com/luost26/diffusion-point-cloud.
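A generic DDPM-style forward process (a sketch with an assumed linear schedule, not necessarily the paper's exact one) illustrates the diffusion from data to noise; the closed-form marginal lets one jump to any step directly:

```python
import math

# Forward diffusion: x_t = sqrt(1-beta_t) x_{t-1} + sqrt(beta_t) eps.
# The closed-form marginal q(x_t | x_0) = N(sqrt(abar_t) x0, 1 - abar_t)
# follows from composing Gaussian steps; abar_t is the running product
# of (1 - beta). T and the schedule endpoints are assumed values.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]  # linear schedule

def alpha_bar(t):
    prod = 1.0
    for s in range(t + 1):
        prod *= 1.0 - betas[s]
    return prod

def q_sample(x0, t, eps):
    """Sample x_t given x_0 and standard-normal noise eps."""
    ab = alpha_bar(t)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

# Early steps stay close to the data; late steps approach pure noise.
print(round(alpha_bar(0), 4), round(alpha_bar(T - 1), 4))
```

Generation then runs the learned reverse Markov chain from Gaussian noise back toward the shape distribution, one denoising step per t, conditioned on the shape latent.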
Generative models, as an important family of statistical modeling, target learning the observed data distribution via generating new instances. Along with the rise of neural networks, deep generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), have made tremendous progress in 2D image synthesis. Recently, researchers have shifted their attention from the 2D space to the 3D space, considering that 3D data aligns better with our physical world and hence enjoys great potential in practice. However, unlike a 2D image, which owns an efficient representation (i.e., the pixel grid) by nature, representing 3D data faces far more challenges. Concretely, we would expect an ideal 3D representation to be capable enough to model shapes and appearances in detail, and to be highly efficient so as to model high-resolution data with fast speed and low memory cost. However, existing 3D representations, such as point clouds, meshes, and recent neural fields, usually fail to meet the above requirements simultaneously. In this survey, we make a thorough review of the development of 3D generation, including 3D shape generation and 3D-aware image synthesis, from the perspectives of both algorithms and, more importantly, representations. We hope that our discussion could help the community track the evolution of this field and further spark some innovative ideas to advance this challenging task.
Figure 1: (a) Training parts from ShapeNet. (b) t-SNE plot of part embeddings. (c) Reconstructing entire scenes with Local Implicit Grids. We learn an embedding of parts from objects in ShapeNet [3] using a part autoencoder with an implicit decoder. We show that this representation of parts is generalizable across object categories, and easily scalable to large scenes. By localizing implicit functions in a grid, we are able to reconstruct entire scenes from points via optimization of the latent grid.
Explicit neural surface representations allow for exact and efficient extraction of the encoded surface at arbitrary precision, as well as analytic derivation of differential geometric properties such as surface normals and curvatures. Such desirable properties, which are absent in their implicit counterparts, make them ideal for various applications in computer vision, graphics and robotics. However, state-of-the-art works are limited in terms of the topologies they can effectively describe, the distortion they introduce when reconstructing complex surfaces, and model efficiency. In this work, we present Minimal Neural Atlas, a novel atlas-based explicit neural surface representation. At its core is a fully learnable parametric domain, given by an implicit probabilistic occupancy field defined on an open square of the parametric space. In contrast, prior works generally predefine the parametric domain. The added flexibility enables the charts to admit arbitrary topology and boundary. Hence, our representation can learn a minimal atlas of 3 charts with distortion-minimal parameterizations for surfaces of arbitrary topology, including closed and open surfaces with arbitrary connected components. Our experiments support this hypothesis and show that our reconstructions are more accurate in terms of the overall geometry, due to this separation of concerns between topology and geometry.
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
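The decision-boundary idea can be illustrated with an analytic stand-in for the learned classifier: occupancy is a continuous function of a 3D query point, the surface is the 0.5 level set, and because the function is defined everywhere it can be queried at any resolution (here the "model" is a sigmoid of the signed distance to a unit sphere; the sharpness value is an arbitrary choice):

```python
import math

def occupancy(x, y, z, radius=1.0, sharpness=10.0):
    """Toy occupancy: sigmoid of the signed distance to a sphere."""
    sd = math.sqrt(x * x + y * y + z * z) - radius
    return 1.0 / (1.0 + math.exp(sharpness * sd))

def surface_along_ray(lo=0.0, hi=2.0, iters=60):
    """Bisect along the +x ray for the 0.5 decision boundary."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if occupancy(mid, 0.0, 0.0) > 0.5:
            lo = mid      # still inside
        else:
            hi = mid      # outside
    return 0.5 * (lo + hi)

print(surface_along_ray())  # converges to 1.0, the sphere radius
```

In the actual method the occupancy function is a neural network conditioned on the input observation, and meshes are extracted from the continuous field by a multiresolution isosurface extraction scheme rather than per-ray bisection.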
Surface reconstruction from noisy, non-uniform, and unoriented point clouds is a fascinating yet challenging problem in computer vision and graphics. With the innovation of 3D scanning technologies, it is strongly desired to directly transform raw scanned data, often with severe noise, into manifold triangle meshes. Existing learning-based methods aim to learn implicit functions whose zero-level set represents the underlying shape. However, most of them fail to obtain desirable results for noisy and sparse point clouds, limiting their use in practice. In this paper, we introduce Neural-IMLS, a novel approach that directly learns a noise-resistant signed distance function (SDF) from unoriented raw point clouds. By minimizing the loss between the SDF obtained via the implicit moving least-squares (IMLS) function and the one predicted by our neural network, our method learns the underlying SDF from the raw point cloud in a self-supervised fashion without explicitly learning priors, where the gradients of our predictor define a tangent bundle that facilitates the computation of IMLS. We show that when the two SDFs coincide, our neural network can predict a signed implicit function whose zero-level set serves as a good approximation of the underlying surface. We conduct extensive experiments on various benchmarks, including synthetic scans and real-world scans, to demonstrate the ability to reconstruct faithful shapes from various inputs, especially for point clouds with noise or gaps.