Diffusion models have emerged as a powerful tool for point cloud generation. A key component driving their impressive performance in generating high-quality samples from noise is iterative denoising over thousands of steps. While beneficial, this lengthy learning and sampling trajectory has limited their use in many real-world 3D applications. To address this limitation, we propose Point Straight Flow (PSF), a model that exhibits impressive performance using a single step. Our idea is based on reformulating the standard diffusion model so as to optimize the curvy learning trajectory into a straight path. We further develop a distillation strategy that shortens the straight path into one step without loss of performance, enabling real-world 3D applications with latency constraints. In evaluations on multiple 3D tasks, our PSF performs comparably to the standard diffusion model while outperforming other efficient 3D point cloud generation methods. PSF also performs favorably on real-world applications such as point cloud completion and training-free text-guided generation in a low-latency setup.
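For concreteness, here is a minimal sketch of the one-step distillation idea this abstract describes: once the learned flow is (nearly) straight, a single Euler step from the noise can be regressed onto the teacher's multi-step ODE solution. All names (`student`, `teacher_ode_solve`) are illustrative assumptions, not the paper's API.

```python
import torch

def distill_step(student, teacher_ode_solve, noise, optimizer):
    """One distillation update: the one-step student is regressed onto the
    multi-step teacher sample produced from the same starting noise."""
    with torch.no_grad():
        target = teacher_ode_solve(noise)       # multi-step ODE integration
    t0 = torch.zeros(noise.shape[0], device=noise.device)
    pred = noise + student(noise, t0)           # single Euler step over [0, 1]
    loss = ((pred - target) ** 2).mean()        # L2 regression onto the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```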
Diffusion models have shown great promise for image generation, beating GANs in terms of generation diversity with comparable image quality. However, their application to 3D shapes has been limited to point or voxel representations that in practice cannot accurately represent a 3D surface. We propose a diffusion model for neural implicit representations of 3D shapes that operates in the latent space of an auto-decoder. This allows us to generate diverse and high-quality 3D surfaces. We additionally show that we can condition our model on images or text to enable image-to-3D generation and text-to-3D generation using CLIP embeddings. Furthermore, adding noise to the latent codes of existing shapes allows us to explore shape variations.
We present LiDARGen, a novel, effective, and controllable generative model that produces realistic LiDAR point cloud sensory readings. Our method leverages a powerful score-matching energy-based model and formulates the point cloud generation process as a stochastic denoising process in the equirectangular view. This model allows us to sample diverse and high-quality point cloud samples with guaranteed physical feasibility and controllability. We validate the effectiveness of our method on the challenging KITTI-360 and nuScenes datasets. Quantitative and qualitative results show that, compared with other generative models, our method produces more realistic samples. Furthermore, LiDARGen can sample point clouds conditioned on inputs without retraining. We demonstrate that our proposed generative model can be directly used to densify LiDAR point clouds. Our code is available at: https://www.zyrianov.org/lidargen/
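As a point of reference, the equirectangular view mentioned above amounts to projecting each LiDAR return onto a 2D range image indexed by azimuth and elevation. The sketch below is a generic version of this projection, not code from the paper; the field-of-view defaults are illustrative values roughly matching a 64-beam sensor.

```python
import numpy as np

def to_equirectangular(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project LiDAR points of shape (N, 3) onto an (h, w) range image.
    Rows index elevation within the vertical field of view (degrees);
    columns index azimuth around the sensor."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                 # range per return
    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))         # elevation angle
    u = ((1.0 - (yaw / np.pi + 1.0) / 2.0) * w).astype(int) % w
    fov = np.radians(fov_up - fov_down)
    v = ((np.radians(fov_up) - pitch) / fov * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                                      # keep one range per cell
    return img
```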
We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models for transporting between two empirically observed distributions π_0 and π_1, hence providing a unified solution to generative modeling and domain transfer, among various other tasks involving distribution transport. The idea of rectified flow is to learn an ODE that follows the straight paths connecting points drawn from π_0 and π_1 as much as possible. This is achieved by solving a straightforward nonlinear least squares optimization problem, which can easily be scaled to large models without introducing extra parameters beyond standard supervised learning. Straight paths are special and preferred because they are the shortest paths between two points and can be simulated exactly without time discretization, hence yielding computationally efficient models. We show that the procedure of learning a rectified flow from data (called rectification) turns an arbitrary coupling of π_0 and π_1 into a new deterministic coupling with provably non-increasing convex transport costs. In addition, recursively applying rectification allows us to obtain a sequence of flows with increasingly straight paths, which can be simulated accurately with coarse time discretization in the inference phase. In empirical studies, we show that rectified flow performs superbly on image generation, image-to-image translation, and domain adaptation. In particular, on image generation and translation, our method yields nearly straight flows that give high-quality results even with a single Euler discretization step.
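For concreteness, the nonlinear least squares problem described above is commonly written as follows, where v is the learned drift and X_t linearly interpolates a coupled pair (X_0, X_1); this is the standard statement of the rectified flow objective:

```latex
\min_{v} \int_0^1 \mathbb{E}_{(X_0, X_1) \sim (\pi_0, \pi_1)}
  \left[ \left\| (X_1 - X_0) - v(X_t, t) \right\|^2 \right] dt,
\qquad X_t = t X_1 + (1 - t) X_0 .
```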
While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a number of seconds or minutes. In this paper, we explore an alternative method for 3D object generation which produces 3D models in only 1-2 minutes on a single GPU. Our method first generates a single synthetic view using a text-to-image diffusion model, and then produces a 3D point cloud using a second diffusion model which conditions on the generated image. While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases. We release our pre-trained point cloud diffusion models, as well as evaluation code and models, at https://github.com/openai/point-e.
Diffusion models are a class of deep generative models that have shown impressive results on a variety of tasks with a solid theoretical foundation. Although diffusion models achieve more impressive sample synthesis quality and diversity than other state-of-the-art models, they still suffer from costly sampling procedures and suboptimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this paper, we present the first comprehensive review of existing variants of diffusion models. Specifically, we provide the first taxonomy of diffusion models, categorizing them into three types: sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce the other five generative models in detail (namely variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models) and clarify the connections between diffusion models and these generative models. We then conduct a thorough investigation of the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multi-modal modeling, molecular graph generation, time-series modeling, and adversarial purification. Furthermore, we propose new perspectives on the development of this class of generative models.
The invertibility of normalizing flows enables the computation of the likelihood during training and allows us to train our model in the variational inference framework. Empirically, we demonstrate that PointFlow achieves state-of-the-art performance in point cloud generation. We additionally show that our model can faithfully reconstruct point clouds and learn useful representations in an unsupervised manner. The code is available at https://github.com/stevenygd/PointFlow.
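The likelihood computation enabled by this invertibility follows the instantaneous change-of-variables formula for continuous normalizing flows, a standard identity rather than anything specific to this paper, where f denotes the learned dynamics:

```latex
\frac{d x(t)}{d t} = f(x(t), t), \qquad
\frac{\partial \log p(x(t))}{\partial t}
  = -\operatorname{tr}\!\left( \frac{\partial f}{\partial x(t)} \right) .
```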
AI-based molecule generation offers a promising approach for many biomedical sciences and engineering problems, such as antibody design, hydrolase engineering, or vaccine development. Because molecules are governed by physical laws, a key challenge is to incorporate prior information into the training procedure in order to generate high-quality and realistic molecules. We propose a simple and novel approach to steer the training of diffusion-based generative models with physical and statistical prior information. This is achieved by constructing physically informed diffusion bridges, stochastic processes that are guaranteed to yield a given observation at a fixed terminal point. We develop a Lyapunov-function-based method to construct and determine bridges, and propose a number of informative prior bridges for both high-quality molecule generation and uniformity-promoting 3D point cloud generation. Through comprehensive experiments, we show that our method provides a powerful approach to 3D generation tasks, yielding molecular structures with better quality and stability scores, and more uniformly distributed point clouds of higher quality.
We present a probabilistic model for point cloud generation, which is fundamental for various 3D vision tasks such as shape completion, upsampling, synthesis and data augmentation. Inspired by the diffusion process in nonequilibrium thermodynamics, we view points in point clouds as particles in a thermodynamic system in contact with a heat bath, which diffuse from the original distribution to a noise distribution. Point cloud generation thus amounts to learning the reverse diffusion process that transforms the noise distribution to the distribution of a desired shape. Specifically, we propose to model the reverse diffusion process for point clouds as a Markov chain conditioned on a shape latent. We derive the variational bound in closed form for training and provide implementations of the model. Experimental results demonstrate that our model achieves competitive performance in point cloud generation and auto-encoding. The code is available at https://github.com/luost26/diffusion-point-cloud.
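To make the training objective concrete, below is a minimal sketch of the standard denoising-diffusion loss applied per point and conditioned on a shape latent, in the spirit of the model described above. The network signature `noise_net(x_t, t, z)` and the variable names are assumptions for illustration, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def ddpm_loss(noise_net, x0, z, alphas_cumprod):
    """Denoising objective for a point cloud batch.
    x0: clean point clouds (B, N, 3); z: shape latents the net conditions on;
    alphas_cumprod: precomputed cumulative alpha-bar schedule, shape (T,)."""
    B = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x0.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward diffusion
    return F.mse_loss(noise_net(x_t, t, z), eps)         # predict the noise
```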
Completing unordered partial point clouds is a challenging task. Existing methods that rely on decoding a latent feature to recover the complete shape often lead to completed point clouds that are over-smoothed, noisy, and lacking in detail. Instead of decoding the whole shape in one shot, we propose to first decode and refine a low-resolution (low-res) point cloud, and then upsample it patch by patch rather than interpolating the entire sparse point cloud at once, which tends to lose details. To address the possibility that the initially decoded low-res point cloud lacks details, we propose an iterative refinement to recover geometric details and a symmetrization process to preserve the trustworthy information from the input partial point cloud. After obtaining a sparse and complete point cloud, we propose a patch-wise upsampling strategy. Patch-based upsampling allows fine details to be recovered better than decoding the whole shape at once; however, existing upsampling methods are not applicable to the completion task due to a data discrepancy (i.e., the input sparse data here does not come from the ground truth). Therefore, we propose a patch extraction method that generates training patch pairs between the sparse and ground-truth point clouds, and an outlier removal step to suppress noisy points in the sparse point cloud. Our entire method achieves high-fidelity point cloud completion. Comprehensive evaluations are provided to demonstrate the effectiveness of the proposed method and its individual components.
Recently, normalizing flows (NFs) have demonstrated state-of-the-art performance on modeling 3D point clouds while allowing sampling at arbitrary resolution at inference time. However, these flow-based models still require long training times and large models to represent complicated geometries. This work enhances their representational power by applying mixtures of NFs to point clouds. We show that in this more general framework, each component learns to specialize on a particular subregion of an object in a completely unsupervised fashion. By instantiating each mixture component with a comparatively small NF, we generate point clouds with improved details while using fewer parameters than single-flow-based models, and considerably reduce the inference runtime. We further demonstrate that by adding data augmentation, individual mixture components can learn to specialize in a semantically meaningful manner. We evaluate mixtures of NFs on generation, autoencoding, and single-view reconstruction on the ShapeNet dataset.
Point cloud completion is a generation and estimation problem based on partial point clouds, which plays a vital role in 3D computer vision applications. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced to meet practical demands. Therefore, this work conducts a comprehensive survey of various methods, including point-based, convolution-based, graph-based, and generative-model-based approaches. This survey summarizes the comparisons among these methods to provoke further research insights. Besides, this review sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we also discuss possible research trends in this rapidly expanding field.
3D point clouds are an important 3D representation for capturing real-world objects. However, real-world scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications. Most existing point cloud completion methods use the Chamfer Distance (CD) as a training loss. The CD loss estimates correspondences between two point clouds by searching for nearest neighbors, which does not capture the overall point density distribution of the generated shape and may therefore lead to non-uniform point cloud generation. To address this problem, we propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion. PDR consists of a Conditional Generation Network (CGNet) and a Refinement Network (RFNet). CGNet uses a conditional generative model called the Denoising Diffusion Probabilistic Model (DDPM) to generate a coarse completion conditioned on the partial observation. DDPM establishes a one-to-one pointwise mapping between the generated point cloud and the uniform ground truth, and then optimizes a mean squared error loss to achieve uniform generation. RFNet refines the coarse output of CGNet and further improves the quality of the completed point cloud. Furthermore, we develop a novel dual-path architecture for both networks. The architecture can (1) effectively and efficiently extract multi-level features from the partially observed point cloud to guide completion, and (2) accurately manipulate the spatial locations of 3D points to obtain smooth surfaces and sharp details. Extensive experimental results on various benchmark datasets show that our PDR paradigm outperforms previous state-of-the-art methods for point cloud completion. Notably, with the help of RFNet, we can accelerate the iterative generation process of DDPM without much performance drop.
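Since the argument above turns on how Chamfer Distance matches points, here is a minimal sketch of the symmetric squared Chamfer Distance for reference; it is a generic implementation, not the paper's code.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric squared Chamfer Distance between point sets p (B, N, 3)
    and q (B, M, 3). Each point is matched to its nearest neighbor in the
    other set; note this matching places no constraint on the overall
    point density, which is the issue the abstract raises."""
    d = torch.cdist(p, q) ** 2          # (B, N, M) pairwise squared distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)
```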
We introduce DiffRF, a novel approach for 3D radiance field synthesis based on denoising diffusion probabilistic models. While existing diffusion-based methods operate on images, latent codes, or point cloud data, we are the first to directly generate volumetric radiance fields. To this end, we propose a 3D denoising model which directly operates on an explicit voxel grid representation. However, as radiance fields generated from a set of posed images can be ambiguous and contain artifacts, obtaining ground truth radiance field samples is non-trivial. We address this challenge by pairing the denoising formulation with a rendering loss, enabling our model to learn a deviated prior that favours good image quality instead of trying to replicate fitting errors like floating artifacts. In contrast to 2D-diffusion models, our model learns multi-view consistent priors, enabling free-view synthesis and accurate shape generation. Compared to 3D GANs, our diffusion-based approach naturally enables conditional generation such as masked completion or single-view 3D synthesis at inference time.
Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.
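The best-performing recipe reported above, a GMM fitted in the fixed latent space of a pretrained AE, is straightforward to sketch; `encode` and `decode` are assumed stand-ins for the trained autoencoder, and the component count is illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_and_sample(encode, decode, train_clouds, n_components=32, n_samples=10):
    """Fit a full-covariance GMM on AE latent codes, then sample new latents
    and decode them into point clouds."""
    latents = np.stack([encode(pc) for pc in train_clouds])  # (num_shapes, d)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(latents)
    z, _ = gmm.sample(n_samples)                             # new latent codes
    return [decode(zi) for zi in z]                          # generated clouds
```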
Deep learning has shown great potential for generative tasks. Generative models are a class of models that can randomly generate observations according to certain implicit parameters. Recently, diffusion models have become a rising class of generative models owing to their generation capability, and great achievements have already been made. Beyond computer vision, speech generation, bioinformatics, and natural language processing, more applications remain to be explored in this field. However, diffusion models have an inherent drawback of a slow generation process, which has motivated many works on improvement. This survey summarizes the field of diffusion models. We first state the main problem with the two landmark works, DDPM and DSM. Then we present a variety of advanced techniques to speed up diffusion models: training schedules, training-free sampling, mixed modeling, and score-and-diffusion unification. Regarding existing models, we also provide a benchmark of FID scores and NLL at specific NFEs. Moreover, applications of diffusion models are introduced, including computer vision, sequence modeling, audio, and AI for science. Finally, we summarize the field, along with its limitations and further directions.
This paper presents a new approach to 3D shape generation, enabling direct generative modeling on a continuous implicit representation in the wavelet domain. Specifically, we propose a compact wavelet representation with a pair of coarse and detail coefficient volumes to implicitly represent 3D shapes via truncated signed distance functions and multi-scale biorthogonal wavelets, and formulate a pair of neural networks: a diffusion-based generator to produce diverse shapes in the form of coarse coefficient volumes, and a detail predictor to further produce compatible detail coefficient volumes that enrich the generated shapes with fine structures and details. Both quantitative and qualitative experimental results demonstrate the superiority of our approach in generating diverse and high-quality shapes with complex topology and structures, clean surfaces, and fine details, exceeding the 3D generation capabilities of state-of-the-art models.
As several industries are moving towards modeling massive 3D virtual worlds, the need for content creation tools that can scale in terms of the quantity, quality, and diversity of 3D content is becoming evident. In our work, we aim to train performant 3D generative models that synthesize textured meshes which can be directly consumed by 3D rendering engines, thus immediately usable in downstream applications. Prior work on 3D generative modeling either lacks geometric details, is limited in the mesh topology it can produce, typically does not support textures, or utilizes neural renderers in the synthesis process, which makes their use in common 3D software non-trivial. In this work, we introduce GET3D, a generative model that directly generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. We bridge recent successes in differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks to train our model from 2D image collections. GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, and motorbikes to human characters and buildings, achieving significant improvements over previous methods.
In this paper, we learn a diffusion model to generate 3D data on a scene-scale. Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., categorical distribution, to assign multiple objects into semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deploying. To the best of our knowledge, our work is the first to apply discrete and latent diffusion for 3D categorical data on a scene-scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion
Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset. Code, data and trained models are available at https://wentaoyuan.github.io/pcn.