The ability to learn effective semantic representations directly from raw point clouds has become a central topic in 3D understanding. Despite rapid progress, state-of-the-art encoders remain restricted to canonicalized point clouds and perform weaker than necessary when encountering geometrically transformed or distorted inputs. To overcome this challenge, we propose PointTree, a general-purpose point cloud encoder that is robust to transformations, based on relaxed K-D trees. Key to our approach is the design of the division rule in K-D trees using principal component analysis (PCA). We use the structure of the relaxed K-D tree as our computational graph, and model the features as border descriptors that are merged with pointwise-maximum operations. In addition to this novel architecture design, we further improve robustness by introducing pre-alignment, a simple but effective PCA-based normalization scheme. Our PointTree encoder combined with pre-alignment consistently outperforms state-of-the-art methods by large margins, for applications ranging from object classification to semantic segmentation on various transformed versions of widely benchmarked datasets. Code and pre-trained models are available at https://github.com/immortalco/pointtree.
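The pre-alignment step above is a PCA-based normalization. A minimal sketch of what such a normalization could look like (the function name and details are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def pca_prealign(points: np.ndarray) -> np.ndarray:
    """Rotate a point cloud into its PCA frame (illustrative sketch).

    points: (N, 3) array. Returns the centered cloud expressed in the
    basis of its covariance eigenvectors, so the principal axes align
    with the coordinate axes regardless of the input's orientation.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    # eigh returns eigenvalues in ascending order; sort descending so
    # the largest-variance direction maps to the first axis
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return centered @ eigvecs[:, order]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3)) * np.array([3.0, 2.0, 1.0])
```

Note that a plain PCA frame is defined only up to axis flips (eigenvector signs), so a practical scheme needs an additional convention to resolve that ambiguity.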
We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and shares parameters of these transformations according to the subdivisions of the point clouds imposed onto them by kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behavior. In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation.
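The kd-tree subdivision that drives parameter sharing in Kd-networks can be sketched as a recursive median split along cycling axes (a simplified illustration; the paper's actual construction and splitting rule may differ):

```python
import numpy as np

def build_kdtree(points, depth=0, leaf_size=1):
    """Recursively split a point set by the median along cycling axes.

    Returns a nested (axis, split_value, left, right) tuple; leaves are
    raw arrays. A Kd-network shares transformation parameters per split
    node of such a tree rather than rasterizing points onto a grid.
    """
    if len(points) <= leaf_size:
        return points
    axis = depth % points.shape[1]            # cycle through x, y, z
    points = points[np.argsort(points[:, axis])]
    mid = len(points) // 2                    # median split
    return (axis,
            points[mid, axis],
            build_kdtree(points[:mid], depth + 1, leaf_size),
            build_kdtree(points[mid:], depth + 1, leaf_size))
```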
Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.
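PointNet's permutation invariance comes from applying a shared per-point function and aggregating with a symmetric operation (max pooling). A toy numpy sketch of that principle, with random weight matrices standing in for a trained network:

```python
import numpy as np

def pointnet_global_feature(points, w1, w2):
    """Minimal PointNet-style encoder: shared per-point MLP + max pool.

    points: (N, 3); w1: (3, H); w2: (H, F). Because max pooling is a
    symmetric function, the output does not depend on point order.
    """
    h = np.maximum(points @ w1, 0.0)   # shared MLP layer with ReLU
    f = np.maximum(h @ w2, 0.0)        # second shared layer
    return f.max(axis=0)               # order-invariant global descriptor

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 8))
pts = rng.normal(size=(32, 3))
```

Shuffling the input rows leaves the global descriptor unchanged, which is exactly the permutation invariance the abstract refers to.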
Point cloud recognition is an essential task in industrial robotics and autonomous driving. Recently, several point cloud processing models have achieved state-of-the-art performance. However, these methods lack rotation robustness: their performance degrades severely under random rotations, failing to extend to real-world scenarios with varying orientations. To this end, we propose a method named Self-Contour-based Transformation (SCT), which can be flexibly integrated into various existing point cloud recognition models against arbitrary rotations. SCT provides efficient rotation and translation invariance by introducing a Contour-Aware Transformation (CAT), which linearly transforms the Cartesian coordinates of points into translation- and rotation-invariant representations. We prove through theoretical analysis that CAT is a rotation- and translation-invariant transformation. Furthermore, a Frame Alignment module is proposed to enhance discriminative feature extraction by capturing contours and transforming self-contour-based frames into intra-class frames. Extensive experimental results show that SCT outperforms state-of-the-art methods under arbitrary rotations in both effectiveness and efficiency on synthetic and real-world benchmarks. Moreover, robustness and generality evaluations indicate that SCT is robust and applicable to various point cloud processing models, which highlights its advantages in industrial applications.
Convolution on 3D point clouds has been widely studied, yet it remains far from perfect in geometric deep learning. Conventional convolution characterizes feature correspondences among 3D points indistinguishably, an intrinsic limitation that leads to poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for a wide range of point cloud analysis applications. AGConv generates adaptive kernels according to the dynamically learned features of points. Compared with solutions using fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolution, effectively and precisely capturing the diverse relations between points of different semantic parts. Unlike popular attention-weight schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods for point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can be flexibly adopted by more point cloud analysis approaches to improve their performance. To validate its flexibility and effectiveness, we explore AGConv-based paradigms for completion, denoising, upsampling, registration, and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and density functions through kernel density estimation. The most important contribution of this work is a novel reformulation proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.
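A hedged sketch of the PointConv idea at a single point: kernel weights computed as a function of the local coordinates (here a single linear layer with ReLU rather than a full MLP), combined with inverse kernel-density reweighting. The paper's efficient reformulation and learned density scaling are deliberately omitted:

```python
import numpy as np

def pointconv_layer(center, neighbors, feats, w_mlp, bandwidth=0.5):
    """Toy PointConv-style convolution at one point.

    neighbors: (K, 3) absolute coordinates; feats: (K, C) their features;
    w_mlp: (3, C). The kernel weight is a function of the *local*
    coordinates, and each contribution is reweighted by an inverse
    kernel-density estimate so densely sampled regions do not dominate.
    """
    local = neighbors - center                       # (K, 3) local coords
    weights = np.maximum(local @ w_mlp, 0.0)         # (K, C) kernel values
    # Gaussian kernel density estimate over the neighborhood
    d2 = ((local[:, None, :] - local[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)  # (K,)
    scale = 1.0 / np.maximum(density, 1e-8)
    return (weights * feats * scale[:, None]).sum(axis=0)      # (C,)
```

Because everything depends only on `neighbors - center`, the operation is translation-invariant, matching the abstract's claim.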
In this paper, we revisit the classical representation of 3D point clouds as linear shape models. Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations of low-dimensional linear shape models. Each linear model is characterized by a shape prototype, a low-dimensional shape basis, and two neural networks. The networks take a point cloud as input and predict the coordinates of the shape in the linear basis and the affine transformation that best approximates the input. Both the linear models and the neural networks are learned end-to-end using a single reconstruction loss. The main advantage of our approach is that, in contrast to many recent deep methods that learn feature-based complex shape representations, our model is explicit and every operation occurs in 3D space. As a result, our linear shape models can easily be visualized and annotated, and failure cases can be visually understood. While our main goal is to introduce a compact and interpretable representation of shape collections, we show that it leads to state-of-the-art results for few-shot segmentation.
Learning new representations of 3D point clouds is an active research area in 3D vision, as the order-invariant structure of point clouds still poses challenges for the design of neural network architectures. Recent works have explored learning either global or local features, or both, but none of the earlier methods captured contextual shape information by analyzing the local orientation distributions of points. In this paper, we leverage the point orientation distributions around a point to obtain an expressive local neighborhood representation for point clouds. We achieve this by dividing a given point's spherical neighborhood into predefined cone volumes and using the statistics inside each volume as point features. In this way, a local patch is represented not only by the selected point's nearest neighbors, but also by the point density distribution defined along multiple directions around the point. We then construct an orientation distribution function (ODF) neural network that involves ODFBlocks relying on MLP (multi-layer perceptron) layers. The new ODFNet model achieves state-of-the-art accuracy for object classification on the ModelNet40 and ScanObjectNN datasets, and for segmentation on the ShapeNet and S3DIS datasets.
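The cone-partition statistics described above can be illustrated with a simple per-cone occupancy count (the cone directions, half-angle, and the choice of a raw count as the statistic are assumptions for illustration; ODFNet's actual per-volume features are richer):

```python
import numpy as np

def cone_histogram(center, neighbors, directions, half_angle=np.pi / 4):
    """Per-cone occupancy statistics around a point (illustrative).

    Assigns each neighbor to every predefined cone whose axis lies
    within `half_angle` of the neighbor's direction from the center,
    and returns the per-cone point counts as a local descriptor.
    """
    v = neighbors - center
    norm = np.linalg.norm(v, axis=1, keepdims=True)
    u = v / np.maximum(norm, 1e-9)            # unit directions to neighbors
    cos = u @ directions.T                    # (K, D) cosine to each axis
    inside = cos >= np.cos(half_angle)        # cone membership mask
    return inside.sum(axis=0).astype(float)   # (D,) counts per cone
```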
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand-new point-set learning framework, PRIN, namely Point-wise Rotation-Invariant Network, focusing on rotation-invariant feature extraction for point cloud analysis. We construct spherical signals by density-aware adaptive sampling to deal with distorted point distributions in spherical space. Spherical voxel convolution and point re-sampling are proposed to extract rotation-invariant features for each point. In addition, we extend PRIN to a sparse version called SPRIN, which operates directly on sparse point clouds. Both PRIN and SPRIN can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. Results show that, on a dataset with randomly rotated point clouds, SPRIN outperforms state-of-the-art methods without any data augmentation. We also provide thorough theoretical proofs and analysis of the point-wise rotation invariance achieved by our methods. Our code is available at https://github.com/qq456cvb/sprin.
This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is
Registration of point clouds related by rigid transformations is one of the fundamental problems in computer vision. However, a solution for the practical scenario of aligning sparse and differently sampled observations in the presence of noise is still lacking. We approach registration in this scenario by fusing the closed-form Universal Manifold Embedding (UME) method and a deep neural network. The two are combined into a unified framework, named DeepUME, trained end-to-end in an unsupervised manner. To successfully provide a global solution in the presence of large transformations, we employ an SO(3)-invariant coordinate system to learn both a joint resampling strategy for the point clouds and SO(3)-invariant features. These features are then utilized by the geometric UME method for transformation estimation. The parameters of DeepUME are optimized using a metric designed to overcome an ambiguity problem that emerges in the registration of symmetric shapes when noisy scenarios are considered. We show that our hybrid approach outperforms state-of-the-art registration methods in various scenarios and generalizes well to unseen datasets. Our code is publicly available.
The irregularity and disorder of point clouds bring many challenges to point cloud analysis. PointMLP suggests that geometric information is not the only critical factor in point cloud analysis: it achieves promising results with a simple multi-layer perceptron (MLP) structure equipped with a geometric affine module. However, such MLP-like structures aggregate features only with fixed weights, ignoring the differences in semantic information between point features. We therefore propose a novel point-vector representation of point features to improve feature aggregation via an inductive bias. The direction introduced by the vector representation can dynamically modulate the aggregation of two point features according to their semantic relation. Based on this, we design a novel Point2Vector MLP architecture. Experiments show that it achieves state-of-the-art performance on the classification task of the ScanObjectNN dataset, with a 1% improvement over the previous best method. We hope our method can help people better understand the role of semantic information in point cloud analysis and lead to the exploration of more and better feature representations.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has become even thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Mixing-based point cloud augmentation is a popular solution to the limited availability of large-scale public datasets. But the mismatch between mixed points and their corresponding semantic labels hinders further application in point-wise tasks such as part segmentation. This paper proposes a point cloud augmentation method, PointManifoldCut (PMC), which replaces points embedded by the neural network rather than points in Euclidean-space coordinates. This approach exploits the fact that points at the higher levels of a neural network have already been trained to embed their neighborhood relations, so mixing these representations does not mix the relation between a representation and its label. We place a spatial transform module after the PointManifoldCut operation to align the new instances in the embedding space. This paper also discusses the effects of using different hidden layers and of different methods for choosing the replaced points. Experiments show that our proposed method enhances the performance of point cloud classification as well as segmentation networks, and brings additional robustness to attacks and geometric transformations. The code of this paper is available at: https://github.com/fun0515/pinityManifoldcut.
Geometric deep learning, i.e., designing neural networks to handle ubiquitous geometric data such as point clouds and graphs, has achieved great success in the last decade. One critical inductive bias is that the model maintains invariance towards various transformations such as translation, rotation, and scaling. Existing graph neural network (GNN) approaches can only maintain permutation invariance and cannot guarantee invariance with respect to other transformations. Besides GNNs, other works design sophisticated transformation-invariant layers, which are computationally expensive and difficult to extend. To solve this problem, we revisit why existing neural networks fail to maintain transformation invariance when handling geometric data. Our findings show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance, rather than sophisticated neural layer designs. Motivated by these findings, we propose Transformation Invariant Neural Networks (TinvNN), a straightforward and general framework for geometric data. Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling before feeding the representations into neural networks. We prove that TinvNN can strictly guarantee transformation invariance, and that it is general and flexible enough to be combined with existing neural networks. Extensive experimental results on point cloud analysis and combinatorial optimization demonstrate the effectiveness and general applicability of our proposed method. Based on the experimental results, we advocate that TinvNN should be considered a new starting point and an essential baseline for further studies of transformation-invariant geometric deep learning.
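The distance-preserving initial representation above is obtained by modifying multi-dimensional scaling. A sketch of plain classical MDS from pairwise distances (the paper's specific modification is not reproduced here): since only inter-point distances enter, the embedding is unchanged, up to an orthogonal ambiguity, under any rotation or translation of the input.

```python
import numpy as np

def mds_invariant_init(points, dim=3):
    """Classical-MDS initial representation from pairwise distances.

    points: (N, d) array. Builds the squared-distance matrix, recovers
    the centered Gram matrix by double centering, and embeds into `dim`
    dimensions via the top eigenpairs. Rigid motions of the input leave
    the pairwise distances, and hence the embedding geometry, unchanged.
    """
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    j = np.eye(n) - np.ones((n, n)) / n      # double-centering matrix
    b = -0.5 * j @ d2 @ j                    # Gram matrix from distances
    eigvals, eigvecs = np.linalg.eigh(b)
    top = np.argsort(eigvals)[::-1][:dim]    # largest eigenvalues first
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
```

With exact Euclidean distances, classical MDS recovers the configuration up to a rigid motion, so the embedding's pairwise distances match the input's.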
We propose a self-supervised capsule architecture for 3D point clouds. We compute capsule decompositions of objects through permutation-equivariant attention, and self-supervise the process by training with pairs of randomly rotated objects. Our key idea is to aggregate the attention masks into semantic keypoints, and use these to supervise a decomposition that satisfies the capsule invariance/equivariance properties. This not only enables the training of a semantically consistent decomposition, but also allows us to learn a canonicalization operation that enables object-centric reasoning. To train our neural network we require neither classification labels nor manually aligned training datasets. Yet, by learning an object-centric representation in a self-supervised manner, our method outperforms the state-of-the-art on 3D point cloud reconstruction, canonicalization, and unsupervised classification.
3D point clouds have attracted increasing attention in architecture, engineering, and construction due to their high-quality object representation and efficient acquisition methods. Consequently, many point cloud feature detection methods have been proposed in the literature to automate some workflows, such as their classification or part segmentation. Nevertheless, the performance of point cloud automation systems significantly lags behind their image counterparts. While part of this gap stems from the irregularity, unstructuredness, and disorder of point clouds, which make point cloud feature detection much more challenging than its image counterpart, we argue that a lack of inspiration from the image domain may be its primary cause. Indeed, given the overwhelming success of convolutional neural networks (CNNs) in image feature detection, it seems reasonable to design their point cloud counterparts, yet none of the proposed approaches closely resembles them. Specifically, even though many approaches generalize the convolution operation to point clouds, they fail to emulate CNNs' multiple feature detection and pooling operations. For this reason, we propose a graph-convolution-based unit, dubbed the Shrinking unit, that can be stacked vertically and horizontally to design CNN-like 3D point cloud feature extractors. Given that the self, local, and global correlations between points in a point cloud convey crucial spatial geometric information, we also leverage them during the feature extraction process. We evaluate our proposal by designing a feature extractor model for the ModelNet-10 benchmark dataset, achieving 90.64% classification accuracy, which shows that our innovative idea is effective. Our code is available at github.com/albertotamajo/shrinking-unit.
Raw point cloud data inevitably contains outliers or noise introduced by 3D sensors or reconstruction algorithms. In this paper, we present a novel end-to-end network for robust point cloud processing, named PointASNL, which can deal with noisy point clouds effectively. The key component of our approach is the adaptive sampling (AS) module. It first re-weights the neighbors around the initial sampled points from farthest point sampling (FPS), and then adaptively adjusts the sampled points beyond the entire point cloud. Our AS module not only benefits the feature learning of point clouds, but also eases the biased effect of outliers. To further capture the neighbor and long-range dependencies of the sampled points, we propose a local-nonlocal (L-NL) module inspired by the nonlocal operation. The L-NL module makes the learning process insensitive to noise. Extensive experiments verify the robustness and superiority of our approach in point cloud processing tasks on synthetic, indoor, and outdoor data, with or without noise. Specifically, PointASNL achieves state-of-the-art robust performance for classification and segmentation tasks on all datasets, and significantly outperforms previous methods on the real-world outdoor SemanticKITTI dataset with considerable noise. Our code is released at https://github.com/yanx27/PointASNL.
Recent investigations on rotation invariance for 3D point clouds have been devoted to devising rotation-invariant feature descriptors or learning canonical spaces where objects are semantically aligned. Examinations of learning frameworks for invariance have seldom been looked into. In this work, we review rotation invariance in terms of point cloud registration and propose an effective framework for rotation invariance learning via three sequential stages, namely rotation-invariant shape encoding, aligned feature integration, and deep feature registration. We first encode shape descriptors constructed with respect to reference frames defined over different scales, e.g., local patches and global topology, to generate rotation-invariant latent shape codes. Within the integration stage, we propose Aligned Integration Transformer to produce a discriminative feature representation by integrating point-wise self- and cross-relations established within the shape codes. Meanwhile, we adopt rigid transformations between reference frames to align the shape codes for feature consistency across different scales. Finally, the deep integrated feature is registered to both rotation-invariant shape codes to maximize feature similarities, such that rotation invariance of the integrated feature is preserved and shared semantic information is implicitly extracted from shape codes. Experimental results on 3D shape classification, part segmentation, and retrieval tasks prove the feasibility of our work. Our project page is released at: https://rotation3d.github.io/.