Many applications, such as mobile robots or autonomous vehicles, use LiDAR sensors to obtain detailed information about their three-dimensional surroundings. Many methods use image-like projections to process these LiDAR measurements efficiently and employ deep convolutional neural networks to predict a semantic class for each point of the scan. The spatial stationarity assumption enables the use of convolutions. However, LiDAR scans exhibit large differences along the vertical axis. Therefore, we propose the semi-local convolution (SLC), a convolution layer with reduced weight sharing along the vertical dimension. We first investigate such a layer independently of any other model changes. Our experiments did not show any improvement over traditional convolution layers in terms of segmentation IoU or accuracy.
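To make the reduced weight-sharing idea concrete, here is a minimal PyTorch sketch of such a layer: the range image is split into horizontal bands and each band gets its own convolution weights. The band count, channel sizes, and the `SemiLocalConv2d` name are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SemiLocalConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, num_bands=4):
        super().__init__()
        # one independent convolution per vertical band of the range image
        self.bands = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
            for _ in range(num_bands)
        )

    def forward(self, x):  # x: (N, C, H, W), H divisible by num_bands
        chunks = torch.chunk(x, len(self.bands), dim=2)
        return torch.cat([conv(c) for conv, c in zip(self.bands, chunks)], dim=2)

# usage on a dummy 64-beam range image with 5 input channels
out = SemiLocalConv2d(5, 32, num_bands=4)(torch.randn(2, 5, 64, 2048))
print(out.shape)  # torch.Size([2, 32, 64, 2048])
```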
Unlike images which are represented in regular dense grids, 3D point clouds are irregular and unordered, hence applying convolution on them can be difficult. In this paper, we extend the dynamic filter to a new convolution operation, named PointConv. PointConv can be applied on point clouds to build deep convolutional networks. We treat convolution kernels as nonlinear functions of the local coordinates of 3D points comprised of weight and density functions. With respect to a given point, the weight functions are learned with multi-layer perceptron networks and density functions through kernel density estimation. The most important contribution of this work is a novel reformulation proposed for efficiently computing the weight functions, which allowed us to dramatically scale up the network and significantly improve its performance. The learned convolution kernel can be used to compute translation-invariant and permutation-invariant convolution on any point set in the 3D space. Besides, PointConv can also be used as deconvolution operators to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional neural networks built on PointConv are able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds. Besides, our experiments converting CIFAR-10 into a point cloud showed that networks built on PointConv can match the performance of convolutional networks in 2D images of a similar structure.
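The following PyTorch sketch illustrates the naive form of this construction (before the paper's efficient reformulation): the weight for each neighbor is an MLP of its local 3D offset, and features are re-weighted by a crude inverse-density estimate. The shapes, the k-NN support, and the density stand-in are simplifying assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NaivePointConv(nn.Module):
    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        # maps a relative coordinate (dx, dy, dz) to an in_ch x out_ch weight
        self.weight_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, in_ch * out_ch)
        )
        self.in_ch, self.out_ch = in_ch, out_ch

    def forward(self, xyz, feats, k=16):
        # xyz: (N, 3) point coordinates, feats: (N, in_ch) point features
        dist = torch.cdist(xyz, xyz)                      # (N, N) pairwise distances
        knn = dist.topk(k, largest=False)                 # k nearest neighbors
        rel = xyz[knn.indices] - xyz[:, None, :]          # (N, k, 3) local offsets
        # crude density estimate from neighbor distances (a KDE stand-in)
        density = torch.exp(-knn.values).mean(dim=1, keepdim=True)  # (N, 1)
        w = self.weight_mlp(rel).view(-1, k, self.in_ch, self.out_ch)
        f = feats[knn.indices] / density[:, :, None]      # inverse-density weighting
        return torch.einsum('nki,nkio->no', f, w)         # (N, out_ch)

conv = NaivePointConv(8, 64)
print(conv(torch.randn(512, 3), torch.randn(512, 8)).shape)  # torch.Size([512, 64])
```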
Appropriate weight initialization is of key importance to successfully train neural networks. Recently, batch normalization has diminished the role of weight initialization by simply normalizing each layer based on batch statistics. Unfortunately, batch normalization has several drawbacks when applied to small batch sizes, which are often required to cope with memory limitations when learning on point clouds. While a well-founded weight initialization strategy can render batch normalization unnecessary and thus avoid these drawbacks, no such approach has been proposed for point convolutional networks. To fill this gap, we propose a framework that unifies the multitude of continuous convolutions. This enables our main contribution: variance-aware weight initialization. We show that this initialization can avoid batch normalization while achieving similar, and in some cases better, performance.
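As a rough illustration of the general idea, here is a hedged sketch in the He-initialization spirit: scale the weights so that the output variance matches the input variance given the expected neighborhood size. The fan-in formula below is the generic rule, not the paper's actual derivation.

```python
import math
import torch
import torch.nn as nn

def variance_aware_init_(weight: torch.Tensor, avg_neighbors: int, in_ch: int,
                         gain: float = math.sqrt(2.0)) -> torch.Tensor:
    # effective fan-in of a point convolution: contributions from every
    # neighbor and every input channel are summed into each output unit
    fan_in = avg_neighbors * in_ch
    std = gain / math.sqrt(fan_in)
    with torch.no_grad():
        return weight.normal_(0.0, std)

lin = nn.Linear(16, 64)  # stand-in for a continuous-convolution weight tensor
variance_aware_init_(lin.weight, avg_neighbors=16, in_ch=16)
```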
Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.
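A compact sketch of the pipeline described above: a binary occupancy grid fed to a small supervised 3D CNN. The layer sizes follow the commonly cited VoxNet configuration (32^3 input, two 3D convolutions), but treat them as an approximation rather than the exact reference model.

```python
import torch
import torch.nn as nn

voxnet = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=5, stride=2),  # 32^3 -> 14^3
    nn.ReLU(),
    nn.Conv3d(32, 32, kernel_size=3),           # 14^3 -> 12^3
    nn.ReLU(),
    nn.MaxPool3d(2),                            # 12^3 -> 6^3
    nn.Flatten(),
    nn.Linear(32 * 6 * 6 * 6, 128),
    nn.ReLU(),
    nn.Linear(128, 10),                         # e.g. 10 object classes
)

occupancy = (torch.rand(4, 1, 32, 32, 32) > 0.9).float()  # dummy occupancy grids
print(voxnet(occupancy).shape)  # torch.Size([4, 10])
```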
Panoptic segmentation of point clouds is a crucial task that enables autonomous vehicles to comprehend their vicinity using their highly accurate and reliable LiDAR sensors. Existing top-down approaches tackle this problem either by combining independent task-specific networks or by translating methods from the image domain, ignoring the intricacies of LiDAR data and thus often resulting in suboptimal performance. In this paper, we present the novel top-down Efficient LiDAR Panoptic Segmentation (EfficientLPS) architecture, which addresses multiple challenges in segmenting LiDAR point clouds, including distance-dependent sparsity, severe occlusions, large scale variations, and re-projection errors. EfficientLPS comprises a novel shared backbone that encodes with strengthened geometric transformation modeling capacity and aggregates semantically rich range-aware multi-scale features. It incorporates new scale-invariant semantic and instance segmentation heads along with a panoptic fusion module that is supervised by our proposed panoptic periphery loss function. Additionally, we formulate a regularized pseudo-labeling framework to further improve the performance of EfficientLPS by training on unlabeled data. We benchmark our proposed model on two large-scale LiDAR datasets: nuScenes, for which we also provide ground truth annotations, and SemanticKITTI. Notably, EfficientLPS sets the new state of the art on both datasets.
Accurate and fast scene understanding is one of the challenging tasks in autonomous driving, and it requires taking full advantage of LiDAR point clouds for semantic segmentation. In this paper, we present a \textbf{concise} and \textbf{efficient} image-based semantic segmentation network, named \textbf{CENet}. To improve the descriptive power of the learned features and to reduce the computational and time complexity, our CENet integrates convolutions with larger kernel sizes instead of MLPs. Quantitative and qualitative experiments conducted on the publicly available benchmarks SemanticKITTI and SemanticPOSS demonstrate that our pipeline achieves much better mIoU and inference performance compared with state-of-the-art models. The code will be available at https://github.com/huixiancheng/cenet.
Real-time semantic segmentation of LiDAR data is crucial for autonomous vehicles, which are usually equipped with embedded platforms and have limited computational resources. Approaches that operate directly on the point cloud use complex spatial aggregation operations, which are very expensive and difficult to optimize for embedded platforms. They are therefore not suitable for real-time applications on embedded systems. As an alternative, projection-based methods are more efficient and can run on embedded platforms. However, the current state-of-the-art projection-based methods do not achieve the same accuracy as point-based methods and use millions of parameters. We therefore propose a projection-based method, called Multi-scale Interaction Network (MINet), which is highly efficient and accurate. The network uses multiple paths with different scales and balances the computational resources between the scales. Additional dense interactions between the scales avoid redundant computations and make the network efficient. The proposed network outperforms point-based, image-based, and projection-based methods in terms of accuracy, number of parameters, and runtime. Moreover, the network processes more than 24 scans per second on an embedded platform, which is higher than the frame rate of LiDAR sensors. The network is therefore suitable for autonomous vehicles.
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
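A minimal sketch of the first (lifting) layer of a p4 G-convolution: the same filter bank is applied in four rotated copies, so responses are indexed by rotation as well as position, which increases weight sharing without adding parameters. A full G-CNN also needs group convolutions between its internal layers; this sketch only shows the basic idea.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.1)

    def forward(self, x):  # x: (N, C, H, W)
        pad = self.weight.shape[-1] // 2
        # one response map per 90-degree rotation of the shared filter bank
        outs = [F.conv2d(x, torch.rot90(self.weight, r, dims=(2, 3)), padding=pad)
                for r in range(4)]
        return torch.stack(outs, dim=2)  # (N, out_ch, 4, H, W)

y = P4LiftingConv(3, 16)(torch.randn(2, 3, 32, 32))
print(y.shape)  # torch.Size([2, 16, 4, 32, 32])
```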
The multilayer perceptron (MLP), the first neural network structure to appear, was a big hit. But limited by the hardware computing power and the size of datasets, it once sank for decades. During this period, we witnessed a paradigm shift from manual feature extraction to CNNs with local receptive fields, and further to Transformers with global receptive fields based on the self-attention mechanism. This year (2021), with the introduction of MLP-Mixer, the MLP has re-entered the limelight and has attracted extensive research from the computer vision community. Compared with the conventional MLP, it gets deeper but changes the input from full flattening to patch flattening. Given its high performance and its low demand for vision-specific inductive biases, the community cannot help but wonder: will the MLP, the simplest structure with a global receptive field but no attention, become a new computer vision paradigm? To answer this question, this survey aims to provide a comprehensive overview of the recent development of vision deep MLP models. Specifically, we review these vision deep MLPs in detail, from subtle sub-module designs to global network structures. We compare the receptive fields, computational complexity, and other properties of different network designs in order to understand the development path of MLPs clearly. The investigation shows that the resolution sensitivity and computational density of MLPs remain unresolved, and that pure MLPs are gradually evolving to become CNN-like. We suggest that the current data volume and computational power are not yet ready to embrace pure MLPs, and that artificial visual guidance remains important. Finally, we provide an analysis of open research directions and possible future work. We hope this effort can ignite further interest in the community and encourage better vision-tailored neural network designs at the moment.
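The contrast between full flattening and patch flattening mentioned above is easy to see in code; the sizes below (a 224x224 image with 16x16 patches) are the common MLP-Mixer setting and are only illustrative.

```python
import torch

img = torch.randn(1, 3, 224, 224)

# classic MLP input: the whole image collapsed into one huge vector
full_flat = img.flatten(1)                               # (1, 150528)

# patch flattening: each 16x16 patch becomes its own token vector
patches = img.unfold(2, 16, 16).unfold(3, 16, 16)        # (1, 3, 14, 14, 16, 16)
patch_flat = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, 14 * 14, 3 * 16 * 16)
print(full_flat.shape, patch_flat.shape)                 # (1, 150528) (1, 196, 768)
```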
Recent advances in 2D CNNs and vision transformers (ViTs) reveal that large kernels are essential for sufficient receptive fields and high performance. Inspired by this literature, we examine the feasibility and challenges of 3D large-kernel designs. We demonstrate that applying large convolutional kernels in 3D CNNs brings more difficulties in both performance and efficiency. Existing techniques that work well in 2D CNNs are ineffective in 3D networks, including the popular depth-wise convolution. To overcome these obstacles, we present the spatial-wise group convolution and its large-kernel module (SW-LK block). It avoids the optimization and efficiency issues of naive 3D large kernels. Our large-kernel 3D CNN network, LargeKernel3D, yields non-trivial improvements on various 3D tasks, including semantic segmentation and object detection. Notably, it achieves 73.9% mIoU on the ScanNetv2 semantic segmentation benchmark and 72.8% NDS on the nuScenes object detection benchmark, ranking first on the nuScenes LiDAR leaderboard. With simple multi-modal fusion, this is further boosted to 74.2% NDS. LargeKernel3D attains results comparable or superior to its CNN and Transformer counterparts. For the first time, we show that large kernels are feasible and essential for 3D networks.
RGB-D semantic segmentation has attracted increasing research interest thanks to the complementary modalities on the input side. Existing works usually adopt a two-stream architecture that processes photometric and geometric information in parallel; few methods explicitly leverage the contribution of depth cues to adjust the sampling positions on RGB images. In this paper, we propose a novel framework that incorporates depth information into an RGB convolutional neural network (CNN), termed Z-ACN (Depth-Adapted CNN). Specifically, our Z-ACN generates 2D depth-adapted offsets that are fully constrained by low-level features to guide feature extraction on RGB images. With the generated offsets, we introduce two intuitive and effective operations that replace basic CNN operators: depth-adapted convolution and depth-adapted average pooling. Extensive experiments on both indoor and outdoor semantic segmentation tasks demonstrate the effectiveness of our approach.
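A hedged sketch of the depth-adapted convolution idea: a deformable convolution whose sampling offsets are predicted from the depth map alone rather than from the RGB features. The offset branch below is an arbitrary small convolution standing in for the paper's low-level-feature construction, and all sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DepthAdaptedConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        # offsets (2 per kernel tap) computed from the depth map only
        self.offset_from_depth = nn.Conv2d(1, 2 * k * k, k, padding=k // 2)

    def forward(self, rgb_feats, depth):
        offset = self.offset_from_depth(depth)      # (N, 2*k*k, H, W)
        return deform_conv2d(rgb_feats, offset, self.weight, padding=1)

layer = DepthAdaptedConv(16, 32)
out = layer(torch.randn(2, 16, 48, 48), torch.randn(2, 1, 48, 48))
print(out.shape)  # torch.Size([2, 32, 48, 48])
```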
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images. We compare the performance of the group equivariant networks known as S2CNNs with that of standard non-equivariant CNNs trained with increasing amounts of data augmentation. The chosen architectures can be regarded as baseline references for the respective design paradigms. Our models are trained and evaluated on the MNIST and FashionMNIST datasets projected onto the sphere. For the task of image classification, which is inherently rotationally invariant, we find that by considerably increasing the amount of data augmentation and the size of the network, a standard CNN can reach at least the same performance as an equivariant network. In contrast, for the inherently equivariant task of semantic segmentation, the non-equivariant networks are consistently outperformed by equivariant networks with fewer parameters. We also analyze and compare the inference latency and training times of the different networks, enabling detailed trade-off considerations between equivariant architectures and data augmentation for practical problems. The equivariant spherical networks used in the experiments are available at https://github.com/janegerken/sem_seg_s2cnn.
Reasoning about 3D objects from 2D images is challenging due to the large variations in appearance caused by viewing the object from different orientations. Ideally, our model would be invariant or equivariant to changes in object pose. Unfortunately, this is typically not possible with 2D image input, because we do not have an a priori model of how the image changes under out-of-plane object rotations. The only $\mathrm{SO}(3)$-equivariant models that currently exist require point cloud input rather than 2D images. In this paper, we propose a novel model architecture based on icosahedral group convolutions that reasons in $\mathrm{SO}(3)$ by projecting the input image onto an icosahedron. As a result of this projection, the model is approximately equivariant to rotations in $\mathrm{SO}(3)$. We apply this model to the object pose estimation task and find that it outperforms reasonable baselines.
Standard convolutional neural networks assume a grid structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. The key idea is to exploit parameterized kernel functions that span the full continuous vector space. This generalization allows us to learn over arbitrary data structures as long as their support relationship is computable. Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes, and lidar motion estimation of driving scenes.
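A compact sketch of the operator described above: the kernel is an MLP over continuous offsets, so it can be evaluated at arbitrary support points rather than grid positions, and the output point set need not coincide with the input one. The k-NN support relation and all sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ContinuousConv(nn.Module):
    def __init__(self, in_ch, out_ch, hidden=32):
        super().__init__()
        # parameterized kernel g(y - x): R^3 -> R^{in_ch * out_ch}
        self.kernel = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, in_ch * out_ch))
        self.in_ch, self.out_ch = in_ch, out_ch

    def forward(self, out_xyz, in_xyz, in_feats, k=8):
        # support: the k nearest input points for every output point
        knn = torch.cdist(out_xyz, in_xyz).topk(k, largest=False)
        offsets = in_xyz[knn.indices] - out_xyz[:, None, :]   # (M, k, 3)
        w = self.kernel(offsets).view(-1, k, self.in_ch, self.out_ch)
        return torch.einsum('mki,mkio->mo', in_feats[knn.indices], w)

conv = ContinuousConv(4, 16)
out = conv(torch.randn(100, 3), torch.randn(500, 3), torch.randn(500, 4))
print(out.shape)  # torch.Size([100, 16])
```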
This paper presents an efficient patch-based computational module, the Entropy-based Patch Encoder (EPE) module, for resource-constrained semantic segmentation. The EPE module consists of three lightweight fully-convolutional encoders, each extracting features from image patches with a different amount of entropy: patches with high entropy are processed by the encoder with the largest number of parameters, patches with moderate entropy by an encoder with a moderate number of parameters, and patches with low entropy by the smallest encoder. The intuition behind the module is the following: since patches with high entropy contain more information, they need an encoder with more parameters, unlike low-entropy patches, which can be processed with a small encoder. Processing parts of the input with smaller encoders thus significantly reduces the computational cost of the module. Experiments show that EPE can boost the performance of existing real-time semantic segmentation models with only a slight increase in computational cost. Specifically, EPE increases the mIoU of DFANet A by 0.9% with only a 1.2% increase in the number of parameters, and the mIoU of EDANet by 1% with a 10% increase in model parameters.
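A hedged sketch of the routing idea: patches are scored by the Shannon entropy of their intensity histogram and dispatched to a small, medium, or large encoder. The thresholds, patch size, and the encoders themselves are placeholders, not the EPE module's actual design.

```python
import torch
import torch.nn as nn

def patch_entropy(patch: torch.Tensor, bins: int = 32) -> float:
    # Shannon entropy of the grayscale intensity histogram of one patch
    hist = torch.histc(patch, bins=bins, min=0.0, max=1.0)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * p.log2()).sum())

def make_encoder(width: int) -> nn.Module:
    return nn.Sequential(nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(width, 64, 3, padding=1))

small, medium, large = make_encoder(8), make_encoder(16), make_encoder(32)

def encode_patch(patch: torch.Tensor) -> torch.Tensor:
    # patch: (1, 3, h, w) in [0, 1]; route by entropy of its gray version
    h = patch_entropy(patch.mean(dim=1))
    enc = small if h < 2.0 else medium if h < 4.0 else large
    return enc(patch)

print(encode_patch(torch.rand(1, 3, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```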
Our dataset provides dense annotations for each scan of all sequences from the KITTI Odometry Benchmark [19]. Here, we show multiple scans aggregated using pose information estimated by a SLAM approach.
We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.
We present an approach to semantic scene analysis using deep convolutional networks. Our approach is based on tangent convolutions, a new construction for convolutional networks on 3D data. In contrast to volumetric approaches, our method operates directly on surface geometry. Crucially, the construction is applicable to unstructured point clouds and other noisy real-world data. We show that tangent convolutions can be evaluated efficiently on large-scale point clouds with millions of points. Using tangent convolutions, we design a deep fully-convolutional network for semantic segmentation of 3D point clouds, and apply it to challenging real-world datasets of indoor and outdoor 3D environments. Experimental results show that the presented approach outperforms other recent deep network constructions in detailed analysis of large 3D scenes.
Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the performance of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https://github.com/msracver/Deformable-ConvNets.
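A minimal sketch of the deformable convolution module using torchvision's `deform_conv2d`: a regular convolution predicts two offsets per kernel tap from the input features, and the input is then sampled at the shifted positions. Initializing the offset branch to zero, so the layer starts out as a plain convolution, is a common practice rather than a requirement.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.05)
        # offset branch: 2 (x, y) offsets for each of the k*k kernel taps,
        # learned from the target task without additional supervision
        self.offset = nn.Conv2d(in_ch, 2 * k * k, k, padding=k // 2)
        nn.init.zeros_(self.offset.weight)  # start as a plain convolution
        nn.init.zeros_(self.offset.bias)

    def forward(self, x):
        return deform_conv2d(x, self.offset(x), self.weight, padding=1)

print(DeformableConv2d(16, 32)(torch.randn(2, 16, 40, 40)).shape)
# torch.Size([2, 32, 40, 40])
```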
We present Kernel Point Convolution (KPConv), a new design of point convolution, i.e. that operates on point clouds without any intermediate representation. The convolution weights of KPConv are located in Euclidean space by kernel points, and applied to the input points close to them. Its capacity to use any number of kernel points gives KPConv more flexibility than fixed grid convolutions. Furthermore, these locations are continuous in space and can be learned by the network. Therefore, KPConv can be extended to deformable convolutions that learn to adapt kernel points to local geometry. Thanks to a regular subsampling strategy, KPConv is also efficient and robust to varying densities. Whether they use deformable KPConv for complex tasks, or rigid KPConv for simpler tasks, our networks outperform state-of-the-art classification and segmentation approaches on several datasets. We also offer ablation studies and visualizations to provide understanding of what has been learned by KPConv and to validate the descriptive power of deformable KPConv.
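A rough sketch of the rigid KPConv formulation: kernel weights live at 3D kernel point locations, and each neighbor's feature is weighted by a linear correlation that decays with its distance to every kernel point. Kernel point placement, the influence radius, and the k-NN neighborhoods below are simplified assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class NaiveKPConv(nn.Module):
    def __init__(self, in_ch, out_ch, num_kpoints=15, sigma=0.3):
        super().__init__()
        # kernel point locations in Euclidean space (learnable here)
        self.kpoints = nn.Parameter(torch.randn(num_kpoints, 3) * sigma)
        self.weights = nn.Parameter(torch.randn(num_kpoints, in_ch, out_ch) * 0.1)
        self.sigma = sigma

    def forward(self, xyz, feats, k=16):
        knn = torch.cdist(xyz, xyz).topk(k, largest=False)
        rel = xyz[knn.indices] - xyz[:, None, :]              # (N, k, 3)
        # linear correlation: influence decays with distance to kernel point
        d = torch.cdist(rel.reshape(-1, 3), self.kpoints)     # (N*k, K)
        corr = torch.clamp(1 - d / self.sigma, min=0).view(*rel.shape[:2], -1)
        f = feats[knn.indices]                                # (N, k, in_ch)
        return torch.einsum('nkm,nki,mio->no', corr, f, self.weights)

conv = NaiveKPConv(8, 32)
print(conv(torch.randn(256, 3), torch.randn(256, 8)).shape)  # torch.Size([256, 32])
```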