Raw point cloud processing using capsule networks is widely adopted in classification, reconstruction, and segmentation due to its ability to preserve spatial agreement of the input data. However, most of the existing capsule-based network approaches are computationally heavy and fail at representing the entire point cloud as a single capsule. We address these limitations of existing capsule network based approaches by proposing PointCaps, a novel convolutional capsule architecture with parameter sharing. Along with PointCaps, we propose a novel Euclidean distance routing algorithm and a class-independent latent representation. The latent representation captures physically interpretable geometric parameters of the point cloud. With dynamic Euclidean routing, PointCaps well-represents the spatial (point-to-part) relations of points. PointCaps has a significantly lower number of parameters and requires significantly fewer FLOPs, while achieving better reconstruction with comparable classification and segmentation accuracy for raw point clouds compared to state-of-the-art capsule networks.
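The abstract does not give the exact routing update, so the following is only a generic sketch of distance-based agreement routing: coupling coefficients are a softmax over negative Euclidean distances between each lower capsule's prediction and the current upper-capsule estimate. The function name and tensor shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def euclidean_distance_routing(predictions, n_iters=3):
    """Toy distance-based routing.

    predictions: (n_lower, n_upper, dim) -- each lower capsule's
    prediction for each upper capsule. Returns upper capsule vectors.
    """
    n_lower, n_upper, dim = predictions.shape
    logits = np.zeros((n_lower, n_upper))
    for _ in range(n_iters):
        # Coupling coefficients: each lower capsule splits its vote.
        c = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        # Upper capsules as coupling-weighted sums of predictions.
        upper = (c[:, :, None] * predictions).sum(axis=0)  # (n_upper, dim)
        # Agreement measured by negative Euclidean distance.
        dist = np.linalg.norm(predictions - upper[None], axis=-1)
        logits = -dist
    return upper
```

The only change from classic routing-by-agreement is the agreement score: a distance instead of a scalar product, which favors upper capsules whose estimate lies close to the prediction in Euclidean space.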
Convolution on 3D point clouds has been widely studied, yet far from perfect in geometric deep learning. The traditional wisdom of convolution characterizes feature correspondences among 3D points, which is an intrinsic limitation resulting in poor distinctive feature learning. In this paper, we propose Adaptive Graph Convolution (AGConv) for wide applications of point cloud analysis. AGConv generates adaptive kernels for points according to their dynamically learned features. Compared with solutions using fixed/isotropic kernels, AGConv improves the flexibility of point cloud convolutions, effectively and precisely capturing the diverse relations between points from different semantic parts. Unlike popular attention-weight schemes, AGConv implements adaptiveness inside the convolution operation instead of simply assigning different weights to neighboring points. Extensive evaluations clearly show that our method outperforms state-of-the-art methods of point cloud classification and segmentation on various benchmark datasets. Meanwhile, AGConv can flexibly serve more point cloud analysis approaches to boost their performance. To validate its flexibility and effectiveness, we explore AGConv-based paradigms of completion, denoising, upsampling, registration and circle extraction, which are comparable or even superior to their competitors. Our code is available at https://github.com/hrzhou2/adaptconv-master.
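The core idea, a kernel generated from the point's own learned feature rather than a fixed one, can be sketched minimally. Here the kernel generator is a single linear map `W_gen` and aggregation is a max over neighbors; both the names and this simplification are assumptions, not AGConv's actual layer.

```python
import numpy as np

def adaptive_conv(features, neighbor_idx, W_gen):
    """Sketch of an adaptive graph convolution.

    For each point, a per-point kernel is *generated* from its
    feature and applied to its neighbors' features.
    features: (N, C), neighbor_idx: (N, K), W_gen: (C, C*C).
    """
    N, C = features.shape
    kernels = (features @ W_gen).reshape(N, C, C)   # one kernel per point
    neigh = features[neighbor_idx]                  # (N, K, C)
    # Apply each point's kernel to its K neighbors, then max-aggregate.
    out = np.einsum('nkc,ncd->nkd', neigh, kernels).max(axis=1)
    return out
```

Because `kernels` depends on `features`, two points with different semantics convolve their neighborhoods differently, which is exactly what a fixed/isotropic kernel cannot do.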
Capsule networks excel at understanding spatial relationships in 2D data for vision-related tasks. Even though they are not designed to capture 1D temporal relationships, with TimeCaps we demonstrate that, given the capability, capsule networks also excel at understanding temporal relationships. To this end, we generate capsules along the temporal and channel dimensions, creating two temporal feature detectors which learn contrasting relationships. TimeCaps surpasses state-of-the-art results by achieving 96.21% accuracy in identifying 13 Electrocardiogram (ECG) signal beat categories, while achieving on-par results in identifying 30 classes of short audio commands. Further, the instantiation parameters inherently learned by the capsule networks allow us to completely parameterize the 1D signals, which opens up various possibilities in signal processing.
Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.
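The permutation invariance PointNet relies on comes from a shared per-point MLP followed by a symmetric aggregation (max-pool). A minimal sketch, with weight shapes chosen for illustration:

```python
import numpy as np

def pointnet_global_feature(points, W1, W2):
    """Minimal PointNet-style encoder: a shared per-point MLP
    followed by a symmetric max-pool, making the output invariant
    to the ordering of the input points.
    points: (N, 3); W1: (3, H); W2: (H, D).
    """
    h = np.maximum(points @ W1, 0)   # shared layer 1 + ReLU (applied per point)
    h = np.maximum(h @ W2, 0)        # shared layer 2 + ReLU
    return h.max(axis=0)             # symmetric aggregation over points
```

Because `max` over the point axis is order-independent, shuffling the input rows leaves the global feature unchanged, which is the property the abstract calls "respecting the permutation invariance of points".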
Point cloud completion is a generation and estimation problem over partial point clouds, which plays a vital role in 3D computer vision applications. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced to meet practical requirements. Therefore, this work conducts a comprehensive survey of various methods, including point-based, convolution-based, graph-based, and generative-model-based approaches, and summarizes comparisons among them to provoke further research insights. Besides, this review sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we also discuss possible research trends in this rapidly expanding field.
A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
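The routing-by-agreement mechanism described above can be sketched directly: couplings are raised toward upper capsules whose squashed output has a large scalar product with the prediction. Shapes and the number of iterations are illustrative choices.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Non-linearity that keeps a vector's orientation but maps its
    length into [0, 1), so length can act as an existence probability."""
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def routing_by_agreement(u_hat, n_iters=3):
    """u_hat: (n_lower, n_upper, dim) prediction vectors.
    Iteratively raises coupling toward upper capsules whose output
    agrees (large scalar product) with the prediction."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))          # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[:, :, None] * u_hat).sum(axis=0)   # weighted sum of votes
        v = squash(s)                             # (n_upper, dim)
        b = b + (u_hat * v[None]).sum(axis=-1)    # agreement update
    return v
```

After a few iterations, lower capsules whose predictions cluster together dominate the corresponding upper capsule, while disagreeing votes are suppressed.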
With the development of 3D scanning technologies, 3D vision tasks have become a popular research area. Owing to the large amount of data acquired by sensors, unsupervised learning is essential for understanding and utilizing point clouds without an expensive annotation process. In this paper, we propose a novel framework and an effective auto-encoder architecture named "PSG-Net" for reconstruction-based learning of point clouds. Unlike existing studies that use fixed or random 2D points, our framework generates input-dependent point-wise features for the latent set. PSG-Net uses the encoded input to produce point-wise features through a seed generation module and extracts richer features in multiple stages of gradually increasing resolution by progressively applying a seed feature propagation module. We experimentally prove the effectiveness of PSG-Net: it shows state-of-the-art performance in point cloud reconstruction and unsupervised classification, and achieves performance comparable to counterpart methods in supervised completion.
This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of a point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar to or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is
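The point-to-node k nearest neighbor search that controls SO-Net's receptive field is straightforward to sketch; a brute-force NumPy version (function name and shapes are illustrative assumptions):

```python
import numpy as np

def point_to_node_knn(points, nodes, k):
    """Associate each point with its k nearest SOM nodes; increasing
    k enlarges the receptive field, as described for SO-Net.
    points: (N, 3), nodes: (M, 3) -> index array (N, k)."""
    # Pairwise squared distances between every point and every node.
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]
```

In practice a KD-tree (e.g. `scipy.spatial.cKDTree`) would replace the O(N·M) distance matrix for large clouds.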
In this work, we present Point Transformer, a deep neural network that operates directly on unordered and unstructured point sets. We design Point Transformer to extract local and global features and relate both representations by introducing the local-global attention mechanism, which aims to capture spatial point relations and shape information. For that purpose, we propose SortNet, as part of the Point Transformer, which induces input permutation invariance by selecting points based on a learned score. The output of Point Transformer is a sorted and permutation invariant feature list that can directly be incorporated into common computer vision applications. We evaluate our approach on standard classification and part segmentation benchmarks to demonstrate competitive results compared to the prior work. Code is publicly available at: https://github.com/engelnico/point-transformer
INDEX TERMS: 3D point processing, Artificial neural networks, Computer vision, Feedforward neural networks, Transformer
Raw point cloud data inevitably contains outliers or noise introduced during acquisition from 3D sensors or by reconstruction algorithms. In this paper, we present a novel end-to-end network for robust point cloud processing, named PointASNL, which can deal with noisy point clouds effectively. The key component in our approach is the adaptive sampling (AS) module. It first re-weights the neighbors around the initial sampled points from farthest point sampling (FPS), and then adaptively adjusts the sampled points beyond the entire point cloud. Our AS module can not only benefit the feature learning of point clouds, but also ease the biased effect of outliers. To further capture the neighbor and long-range dependencies of the sampled point, we propose a local-nonlocal (L-NL) module inspired by the nonlocal operation. Such an L-NL module makes the learning process insensitive to noise. Extensive experiments verify the robustness and superiority of our approach in point cloud processing tasks on synthetic, indoor, and outdoor data, with or without noise. Specifically, PointASNL achieves state-of-the-art robust performance for classification and segmentation tasks on all datasets, and significantly outperforms previous methods on the real-world outdoor SemanticKITTI dataset with considerable noise. Our code is released at https://github.com/yanx27/PointASNL.
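The farthest point sampling (FPS) that the AS module starts from is a standard greedy procedure: repeatedly pick the point farthest from everything selected so far. A minimal sketch (the fixed starting index is an illustrative choice):

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Greedy FPS: repeatedly pick the point farthest from the set
    already selected. points: (N, 3) -> indices of m samples."""
    N = points.shape[0]
    selected = [0]                       # arbitrary starting point
    dist = np.full(N, np.inf)            # distance to nearest selected point
    for _ in range(m - 1):
        d = np.linalg.norm(points - points[selected[-1]], axis=1)
        dist = np.minimum(dist, d)
        selected.append(int(dist.argmax()))
    return np.array(selected)
```

FPS gives good coverage of the shape but, as the abstract notes, an outlier is by definition far from everything, so FPS tends to select it; the AS module's re-weighting step is what counteracts that bias.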
Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and density functions through kernel density estimation. The most important contribution of this work is a novel reformulation proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.
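The kernel-density side of PointConv can be illustrated in isolation: estimate the local density at each point with a Gaussian kernel and use its inverse as a re-weighting factor, so crowded regions do not dominate the convolution. This is a simplified sketch, not PointConv's full weight-and-density formulation; names and the bandwidth are assumptions.

```python
import numpy as np

def inverse_density_weights(points, bandwidth=0.1):
    """Gaussian kernel density estimate at each point, returned as
    inverse-density weights: points in crowded regions get smaller
    weights, isolated points get larger ones. points: (N, d)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return 1.0 / density
```

In the full method this density term multiplies an MLP-learned weight function of the local coordinates, yielding a convolution that is both translation- and permutation-invariant.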
Learning representations for point clouds is an important task in 3D computer vision, especially without manually annotated supervision. Previous methods usually rely on auto-encoders to establish self-supervision by reconstructing the input itself. However, existing self-reconstruction based auto-encoders merely focus on global shapes and ignore the hierarchical context between local and global geometries, which is an important source of supervision for 3D representation learning. To resolve this issue, we present a novel self-supervised point cloud representation learning framework, named 3D Occlusion Auto-Encoder (3D-OAE). Our key idea is to randomly occlude some local patches of the input point cloud and establish supervision by recovering the occluded patches from the remaining visible ones. Specifically, we design an encoder for learning the features of visible local patches, and a decoder for leveraging these features to predict the occluded patches. In contrast with previous methods, our 3D-OAE can remove a large proportion of patches and predict them from only a small number of visible patches, which enables us to significantly accelerate training and yield non-trivial self-supervisory performance. The trained encoder can further be transferred to various downstream tasks. We demonstrate our superior performance over state-of-the-art methods in different discriminative and generative applications under widely used benchmarks.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has become even thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
We present CPT: Convolutional Point Transformer, a novel deep learning architecture for handling the unstructured nature of 3D point cloud data. CPT is an improvement over existing attention-based convolutional neural networks as well as previous 3D point cloud processing transformers. It achieves this through a novel and robust attention-based point set embedding, built with a convolutional projection layer crafted for processing dynamically local point set neighborhoods. The resulting point set embedding is robust to permutations of the input points. Our novel CPT block builds over local neighborhoods of points obtained via a dynamic graph computed at each layer of the network. It is fully differentiable and can be stacked just like convolutional layers to learn global properties of the points. We evaluate our model on standard benchmark datasets such as ModelNet40, ShapeNet part segmentation, and the S3DIS 3D indoor scene semantic segmentation dataset, showing that our model can serve as an effective backbone for various point cloud processing tasks when compared to existing state-of-the-art approaches.
Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7% of the parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet
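The folding operation at the heart of the decoder is simple to sketch: concatenate the learned codeword to every 2D grid point and push the result through a shared MLP, deforming the flat grid toward the 3D surface. Weight shapes and function names are illustrative assumptions; FoldingNet applies this operation twice.

```python
import numpy as np

def folding_decoder(codeword, grid, W1, W2):
    """One folding operation: attach the shape codeword to each 2D
    grid point, then map through a shared two-layer MLP to 3D.
    codeword: (D,), grid: (M, 2), W1: (D+2, H), W2: (H, 3)."""
    M = grid.shape[0]
    # Every grid point sees the same codeword.
    x = np.concatenate([grid, np.repeat(codeword[None], M, axis=0)], axis=1)
    h = np.tanh(x @ W1)
    return h @ W2   # (M, 3): the folded point cloud
```

Because the MLP is shared across grid points, the parameter count is independent of the output resolution, which is why the decoder needs only about 7% of the parameters of a fully-connected one.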
Point cloud upsampling aims to generate dense point clouds from given sparse ones, which is a challenging task due to the irregular and unordered nature of point sets. To address this issue, we present a novel deep learning-based model, called PU-Flow, which incorporates normalizing flows and weight prediction techniques to produce dense points uniformly distributed on the underlying surface. Specifically, we exploit the invertibility of normalizing flows to transform points between Euclidean and latent spaces, and formulate the upsampling process as an ensemble of neighboring points in the latent space, where the ensemble weights are adaptively learned from local geometric context. Extensive experiments show that our method is competitive and, in most test cases, outperforms state-of-the-art methods in terms of reconstruction quality, proximity-to-surface accuracy, and computational efficiency. The source code will be publicly available at https://github.com/unknownue/pu-flow.
Shape completion, the problem of estimating the complete geometry of objects from partial observations, lies at the core of many vision and robotics applications. In this work, we propose Point Completion Network (PCN), a novel learning-based approach for shape completion. Unlike existing shape completion methods, PCN directly operates on raw point clouds without any structural assumption (e.g. symmetry) or annotation (e.g. semantic class) about the underlying shape. It features a decoder design that enables the generation of fine-grained completions while maintaining a small number of parameters. Our experiments show that PCN produces dense, complete point clouds with realistic structures in the missing regions on inputs with various levels of incompleteness and noise, including cars from LiDAR scans in the KITTI dataset. Code, data and trained models are available at https://wentaoyuan.github.io/pcn.
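Completion networks like PCN are typically trained with a permutation-invariant reconstruction loss between the predicted and ground-truth point sets; the standard choice is the Chamfer distance, sketched here in brute-force NumPy:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between two point sets
    P: (N, 3) and Q: (M, 3): for each point, the squared distance
    to its nearest neighbor in the other set, averaged both ways."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Since it only depends on nearest-neighbor distances, the loss is invariant to point ordering and tolerates predicted and target clouds of different sizes.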
Point cloud upsampling is to densify a sparse point set acquired from 3D sensors, providing a denser representation of the underlying surface. Existing methods divide the input points into small patches and upsample each patch separately, however, ignoring the global spatial consistency between patches. In this paper, we present a novel method, PC^2-PU, which explores patch-to-patch and point-to-point correlations for more effective and robust point cloud upsampling. Specifically, our network has two appealing designs: (i) We take adjacent patches as supplementary inputs to compensate for the structure information lost within a single patch, and introduce a Patch Correlation Module to capture the differences and similarities between patches. (ii) After augmenting each patch's geometry, we further introduce a Point Correlation Module to reveal the relationships of points inside each patch so as to maintain local spatial consistency. Extensive experiments on both synthetic and real scanned datasets demonstrate that our method surpasses previous upsampling methods, particularly with noisy inputs. The code and data are available at https://github.com/chenlongwhu/pc2-pu.git.
Affordance detection from visual input is a fundamental step in autonomous robotic manipulation. Existing solutions to the problem of affordance detection rely on convolutional neural networks. However, these networks do not consider the spatial arrangement of the input data and miss parts-to-whole relationships. Therefore, they fall short when confronted with novel, previously unseen object instances or new viewpoints. One solution to overcome such limitations is to resort to capsule networks. In this paper, we introduce the first affordance detection network based on dynamic tree-structured capsules for sparse 3D point clouds. We show that our capsule-based network outperforms current state-of-the-art models on viewpoint invariance and parts-segmentation of new object instances, using a novel dataset that we employed only for evaluation and that is publicly available from github.com/gipfelen/DTCG-Net. In our experimental evaluation we show that our algorithm is superior to current affordance detection methods when faced with grasping previously unseen objects, thanks to our capsule network enforcing a parts-to-whole representation.