Robotic perception currently sits at a crossroads between modern methods, which operate in an efficient latent space, and classical methods, which are mathematically founded and provide interpretable, trustworthy results. In this paper, we introduce a Convolutional Bayesian Kernel Inference (ConvBKI) layer which explicitly performs Bayesian inference within a separable convolution layer to improve efficiency while maintaining reliability. We apply our layer to the task of 3D semantic mapping, where we learn semantic-geometric probability distributions for LiDAR sensor information in real time. We evaluate our network against state-of-the-art semantic mapping algorithms on the KITTI dataset and demonstrate improved latency with comparable semantic results.
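To illustrate the core idea, here is a minimal sketch of Bayesian kernel inference expressed as a depthwise convolution over a voxelized grid of per-class measurement counts. The kernel profile and tensor shapes are illustrative assumptions, not the paper's exact layer:

```python
import torch
import torch.nn.functional as F

def convbki_update(alpha_prior, class_counts, kernel):
    """Update per-voxel Dirichlet concentrations with one LiDAR observation.

    alpha_prior:  (C, D, H, W) prior concentration parameters.
    class_counts: (C, D, H, W) one-hot semantic hits voxelized from the scan.
    kernel:       (k, k, k) non-negative spatial smoothing weights.
    """
    C = alpha_prior.shape[0]
    # Depthwise (per-class) 3D convolution spreads each measurement to
    # neighboring voxels according to the kernel, as in kernel inference.
    weight = kernel.repeat(C, 1, 1, 1, 1)  # one (1, k, k, k) filter per class
    smoothed = F.conv3d(class_counts.unsqueeze(0), weight,
                        padding=kernel.shape[0] // 2, groups=C)[0]
    # Conjugate Dirichlet update: posterior = prior + kernel-weighted counts.
    return alpha_prior + smoothed

# Expected class probabilities then follow in closed form:
# p(class c) = alpha_c / alpha.sum(dim=0)
```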
This paper reports on a dynamic semantic mapping framework that incorporates 3D scene flow measurements into a closed-form Bayesian inference model. The presence of dynamic objects in the environment can cause artifacts and traces in current mapping algorithms, leading to an inconsistent map posterior. We leverage state-of-the-art deep learning for semantic segmentation and 3D flow estimation to provide measurements for map inference. We develop a Bayesian model that propagates the scene with the flow and infers a 3D continuous (i.e., queryable at arbitrary resolution) semantic occupancy map that outperforms its static counterpart. Extensive experiments on publicly available datasets show that the proposed framework improves over its predecessors and over the input measurements from the deep neural networks.
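A toy sketch of the two-step idea, under assumed data structures: propagate the map state along the estimated scene flow, then apply a closed-form conjugate update with the new semantic measurements. All names and shapes here are illustrative, not the paper's actual model:

```python
import numpy as np

def propagate_and_update(alpha, flow, counts, voxel_size=0.2):
    """alpha:  dict voxel_index -> per-class concentration vector
       flow:   dict voxel_index -> 3D flow vector (meters per step)
       counts: dict voxel_index -> per-class measurement counts"""
    propagated = {}
    for idx, a in alpha.items():
        shift = tuple(int(round(f / voxel_size)) for f in flow.get(idx, (0, 0, 0)))
        dst = (idx[0] + shift[0], idx[1] + shift[1], idx[2] + shift[2])
        propagated[dst] = propagated.get(dst, 0) + a  # move mass with the flow
    for idx, c in counts.items():  # conjugate update: posterior = prior + counts
        propagated[idx] = propagated.get(idx, 0) + np.asarray(c, dtype=float)
    return propagated
```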
This work addresses a gap in semantic scene completion (SSC) data by creating a novel outdoor dataset with accurate and complete dynamic scenes. Our dataset is formed from randomly sampled views at each time step, which supervise generalization to completing the scene without occlusions or traces. We create SSC baselines from state-of-the-art open-source networks and, by leveraging recent 3D deep learning architectures to exploit temporal information, construct a benchmark real-time dense local semantic mapping algorithm, MotionSC. Our network demonstrates that the proposed dataset can quantify and supervise accurate scene completion in the presence of dynamic objects, which can lead to the development of improved dynamic mapping algorithms. All software is available at https://github.com/umich-curly/3dmapping.
Maps play a key role in the rapidly developing area of autonomous driving. We survey the literature for different map representations and find that while the world is three-dimensional, it is common to rely on 2D map representations in order to meet real-time constraints. We believe that high levels of situation awareness require a 3D representation as well as the inclusion of semantic information. We demonstrate that our recently presented hierarchical 3D grid mapping framework UFOMap meets the real-time constraints. Furthermore, we show how it can be used to efficiently support more complex functions such as calculating the occluded parts of space and accumulating the output from a semantic segmentation network.
Accurate moving object segmentation is an essential task for autonomous driving. It can provide effective information for many downstream tasks, such as collision avoidance, path planning, and static map construction. How to effectively exploit spatio-temporal information is a key question for 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network that exploits both spatio-temporal information and different representations of LiDAR scans to improve LiDAR-MOS performance. Specifically, we first use a range-image-based dual-branch structure to separately process the spatial and temporal information obtainable from sequential LiDAR scans, and then combine them using motion-guided attention modules. We also use a point refinement module based on 3D sparse convolution to fuse information from the LiDAR range image and point cloud representations and to reduce artifacts on object boundaries. We verify the effectiveness of our proposed approach on the LiDAR-MOS benchmark of SemanticKITTI. Our method significantly outperforms the state of the art in terms of LiDAR-MOS IoU. Benefiting from the devised coarse-to-fine architecture, our method runs online at sensor frame rate. The implementation of our method is available as open source at https://github.com/haomo-ai/motionseg3d.
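An illustrative sketch of the dual-branch idea follows: a spatial branch sees the current range image, a temporal branch sees residual images between aligned scans, and motion-guided attention gates the spatial features. The module sizes are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class MotionGuidedAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, spatial_feat, temporal_feat):
        # Temporal (motion) features decide where spatial features matter.
        return spatial_feat * self.gate(temporal_feat) + spatial_feat

def residual_range_images(ranges):
    """ranges: (T, H, W) aligned range images; returns (T-1, H, W) residuals."""
    return (ranges[1:] - ranges[:1]).abs()
```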
We introduce PointConvFormer, a novel building block for point-cloud-based deep neural network architectures. Inspired by generalization theory, PointConvFormer combines ideas from point convolution, where filter weights are based only on relative position, and Transformers, which utilize feature-based attention. In PointConvFormer, feature differences between nearby points serve as an indicator to re-weight the convolutional weights. Hence, we preserve the invariances of the point convolution operation, while attention is used to select the relevant nearby points for convolution. To validate the effectiveness of PointConvFormer, we experiment on semantic segmentation and scene flow estimation tasks on point clouds, including ScanNet, SemanticKITTI, FlyingThings3D, and KITTI. Our results show that PointConvFormer outperforms classic convolutions, regular Transformers, and voxelized sparse convolution approaches with smaller, more efficient networks. Visualizations show that PointConvFormer behaves like convolution on flat surfaces, while the neighborhood-selection effect is stronger on object boundaries, indicating that it gets the best of both worlds.
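A compact sketch of the reweighting idea: convolution weights come from relative positions, and an attention score computed from feature differences rescales each neighbor's contribution. The MLP sizes are assumptions for illustration:

```python
import torch
import torch.nn as nn

class PointConvFormerBlock(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=32):
        super().__init__()
        self.weight_net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                        nn.Linear(hidden, in_dim))
        self.attn_net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))
        self.project = nn.Linear(in_dim, out_dim)

    def forward(self, center_feat, nbr_feats, nbr_offsets):
        """center_feat: (C,); nbr_feats: (K, C); nbr_offsets: (K, 3)."""
        w = self.weight_net(nbr_offsets)                     # position-only weights
        score = self.attn_net(nbr_feats - center_feat)       # feature differences
        score = torch.softmax(score, dim=0)                  # select relevant nbrs
        return self.project((w * nbr_feats * score).sum(0))  # weighted aggregation
```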
Perception in autonomous vehicles is often carried out through a suite of different sensing modalities. Given the massive amount of openly available labeled RGB data and the advent of high-quality deep learning algorithms for image-based recognition, high-level semantic perception tasks are predominantly solved using high-resolution cameras. As a result of that, other sensor modalities potentially useful for this task are often ignored. In this paper, we push the state of the art in LiDAR-only semantic segmentation forward in order to provide another independent source of semantic information to the vehicle. Our approach can accurately perform full semantic segmentation of LiDAR point clouds at sensor frame rate. We exploit range images as an intermediate representation in combination with a Convolutional Neural Network (CNN) exploiting the rotating LiDAR sensor model. To obtain accurate results, we propose a novel postprocessing algorithm that deals with problems arising from this intermediate representation such as discretization errors and blurry CNN outputs. We implemented and thoroughly evaluated our approach including several comparisons to the state of the art. Our experiments show that our approach outperforms state-of-the-art approaches, while still running online on a single embedded GPU. The code can be accessed at https://github.com/PRBonn/lidar-bonnetal.
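For reference, a minimal sketch of the spherical projection that turns a LiDAR scan into the range image intermediate representation; the 64x2048 resolution and vertical field of view are typical values for a rotating LiDAR, assumed here for illustration:

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """points: (N, 3) LiDAR xyz; returns (H, W) range image (-1 where empty)."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / r)
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W           # azimuth -> column
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(int)  # elevation -> row
    v = np.clip(v, 0, H - 1)
    image = np.full((H, W), -1.0)
    order = np.argsort(-r)  # write closer points last so they win collisions
    image[v[order], u[order]] = r[order]
    return image
```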
A major challenge for autonomous vehicles is navigating in unseen dynamic environments. Separating moving objects from static ones is essential for navigation, pose estimation, and understanding how other traffic participants are likely to move in the near future. In this work, we address the problem of distinguishing 3D LiDAR points belonging to currently moving objects, such as walking pedestrians or driving cars, from points obtained from non-moving objects, such as walls but also parked cars. Our approach takes a sequence of observed LiDAR scans and turns them into a voxelized sparse 4D point cloud. We apply computationally efficient sparse 4D convolutions to jointly extract spatial and temporal features and predict moving-object confidence scores for all points in the sequence. We devise a receding-horizon strategy that allows us to predict moving objects online and to refine predictions on the go based on new observations. We recursively integrate new predictions for each scan using a binary Bayes filter, yielding more robust estimates. We evaluate our approach on the SemanticKITTI moving object segmentation challenge and show more accurate predictions than existing methods. Since our method operates only on the geometric information of point clouds over time, it generalizes well to new, unseen environments, which we evaluate on the Apollo dataset.
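A small sketch of the recursive binary Bayes filter used to fuse per-scan moving-object scores over time, written in the standard log-odds form; the clamping bounds are an assumption for numerical stability, not a value from the paper:

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def bayes_filter_update(state_logodds, scan_probs, clamp=6.0):
    """state_logodds: (N,) accumulated log-odds of 'moving' per point/voxel.
       scan_probs:    (N,) current network confidence in [0, 1]."""
    state_logodds = state_logodds + logit(np.clip(scan_probs, 1e-4, 1 - 1e-4))
    return np.clip(state_logodds, -clamp, clamp)  # keep the filter responsive

# Fused probability of moving: p = 1 / (1 + exp(-state_logodds))
```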
Standard convolutional neural networks assume a grid structured input is available and exploit discrete convolutions as their fundamental building blocks. This limits their applicability to many real-world applications. In this paper we propose Parametric Continuous Convolution, a new learnable operator that operates over non-grid structured data. The key idea is to exploit parameterized kernel functions that span the full continuous vector space. This generalization allows us to learn over arbitrary data structures as long as their support relationship is computable. Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes, and lidar motion estimation of driving scenes.
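A minimal sketch of the operator: an MLP maps each continuous offset between points to a kernel value, so no grid is required. The layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ContinuousConv(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=16):
        super().__init__()
        # g(x_j - x_i): kernel function parameterized over the full continuous support.
        self.kernel = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                    nn.Linear(hidden, in_dim * out_dim))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, feats, offsets):
        """feats: (K, in_dim) neighbor features; offsets: (K, 3) x_j - x_i."""
        W = self.kernel(offsets).view(-1, self.out_dim, self.in_dim)  # (K, out, in)
        # h_i = sum_j g(x_j - x_i) f_j  -- a convolution over non-grid support.
        return torch.einsum('koi,ki->o', W, feats)
```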
In this work, we address the problem of unsupervised moving object segmentation (MOS) in 4D LiDAR data recorded from a stationary sensor, where no ground truth annotations are involved. Deep learning-based state-of-the-art methods for LiDAR MOS strongly depend on annotated ground truth data, which is expensive to obtain and scarce in existence. To close this gap in the stationary setting, we propose a novel 4D LiDAR representation based on multivariate time series that relaxes the problem of unsupervised MOS to a time series clustering problem. More specifically, we propose modeling the change in occupancy of a voxel by a multivariate occupancy time series (MOTS), which captures spatio-temporal occupancy changes on the voxel level and its surrounding neighborhood. To perform unsupervised MOS, we train a neural network in a self-supervised manner to encode MOTS into voxel-level feature representations, which can be partitioned by a clustering algorithm into moving or stationary. Experiments on stationary scenes from the Raw KITTI dataset show that our fully unsupervised approach achieves performance that is comparable to that of supervised state-of-the-art approaches.
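A toy sketch of turning a voxel's occupancy history into a multivariate time series: one channel for the voxel itself plus one per neighbor. The neighborhood choice and window length are assumptions for illustration, and boundary voxels are not handled:

```python
import numpy as np

def build_mots(occupancy, voxel, neighbors):
    """occupancy: (T, X, Y, Z) binary occupancy grids over T scans.
       voxel:     (x, y, z) index of the voxel of interest.
       neighbors: list of (dx, dy, dz) offsets defining its neighborhood.
       Returns a (T, 1 + len(neighbors)) multivariate occupancy time series."""
    x, y, z = voxel
    series = [occupancy[:, x, y, z]]
    for dx, dy, dz in neighbors:
        series.append(occupancy[:, x + dx, y + dy, z + dz])
    return np.stack(series, axis=1)

# Each row is a time step; clustering these series separates voxels whose
# occupancy fluctuates (moving) from voxels that stay occupied (stationary).
```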
LiDAR sensors are essential to the perception systems of autonomous vehicles and intelligent robots. To meet the real-time requirements of real-world applications, it is necessary to segment LiDAR scans efficiently. Most previous approaches directly project the 3D point cloud onto a 2D spherical range image so that efficient 2D convolutional operations can be used for image segmentation. Despite encouraging results, neighborhood information is not well preserved in the spherical projection, and temporal information is not considered in the single-scan segmentation task. To tackle these problems, we propose Meta-RangeSeg, a novel approach to semantic segmentation of LiDAR sequences, in which a new range residual image representation is introduced to capture spatio-temporal information. Specifically, a Meta-Kernel is employed to extract meta features, reducing the inconsistency between the 2D range image coordinates used as input and the 3D Cartesian coordinates of the output. An efficient U-Net backbone is used to obtain multi-scale features, and a Feature Aggregation Module (FAM) strengthens the role of the range channel and aggregates features at different levels. We have conducted extensive experiments on SemanticKITTI and SemanticPOSS. The promising results show that our proposed Meta-RangeSeg method is more efficient and effective than existing approaches. Our full implementation is publicly available at https://github.com/songw-zju/meta-rangeseg.
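A rough sketch of the Meta-Kernel idea: weights for a range image patch are generated from the relative 3D Cartesian coordinates of its pixels, bridging the 2D image input and the underlying 3D geometry. Dimensions and layer sizes are assumed for illustration:

```python
import torch
import torch.nn as nn

class MetaKernel(nn.Module):
    def __init__(self, feat_dim, hidden=16):
        super().__init__()
        self.weight_gen = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                        nn.Linear(hidden, feat_dim))

    def forward(self, patch_feats, patch_xyz, center_xyz):
        """patch_feats: (K, C) features of a k x k patch flattened to K pixels.
           patch_xyz:   (K, 3) Cartesian coordinates of those pixels.
           center_xyz:  (3,) coordinates of the center pixel."""
        w = self.weight_gen(patch_xyz - center_xyz)  # geometry-aware weights
        return (w * patch_feats).sum(dim=0)          # meta feature of the center
```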
Accurate and fast scene understanding is one of the challenging tasks for autonomous driving, and it requires making full use of LiDAR point clouds for semantic segmentation. In this paper, we present a concise and efficient image-based semantic segmentation network named CENet. To improve the descriptive power of the learned features and to reduce computational and time complexity, our CENet integrates convolutions with larger kernel sizes in place of MLPs. Quantitative and qualitative experiments conducted on the publicly available benchmarks SemanticKITTI and SemanticPOSS demonstrate that our pipeline achieves much better mIoU and inference performance compared with state-of-the-art models. The code will be available at https://github.com/huixiancheng/cenet.
In this paper, we address semantic segmentation of road-objects from 3D LiDAR point clouds. In particular, we wish to detect and categorize instances of interest, such as cars, pedestrians and cyclists. We formulate this problem as a pointwise classification problem, and propose an end-to-end pipeline called SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer. Instance-level labels are then obtained by conventional clustering algorithms. Our CNN model is trained on LiDAR point clouds from the KITTI [1] dataset, and our point-wise segmentation labels are derived from 3D bounding boxes from KITTI. To obtain extra training data, we built a LiDAR simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize large amounts of realistic training data. Our experiments show that SqueezeSeg achieves high accuracy with astonishingly fast and stable runtime (8.7 ± 0.5 ms per frame), highly desirable for autonomous driving applications. Furthermore, additionally training on synthesized data boosts validation accuracy on real-world data. Our source code and synthesized data will be open-sourced.
Our dataset provides dense annotations for each scan of all sequences from the KITTI Odometry Benchmark [19]. Here, we show multiple scans aggregated using pose information estimated by a SLAM approach.
We present a method to generate, predict, and use spatiotemporal occupancy grid maps (SOGMs), which embed future semantic information of real dynamic scenes. We propose an auto-labeling process that creates SOGMs from noisy real navigation data. We use a 3D-2D feedforward architecture, trained to predict the future time steps of SOGMs given 3D LiDAR frames as input. Our pipeline is entirely self-supervised, thus enabling lifelong learning for real robots. The network is composed of a 3D back-end, which extracts rich features and enables semantic segmentation of the LiDAR frames, and a 2D front-end, which predicts the future information embedded in the SOGM representation, potentially capturing the complexities and uncertainties of real-world multi-agent, multi-future interactions. We also design a navigation system that uses these predicted SOGMs within planning, after they have been transformed into spatiotemporal risk maps (SRMs). We verify the navigation system's abilities in simulation, validate it on a real robot, study SOGM predictions on real data in various circumstances, and provide a novel indoor 3D LiDAR dataset, collected during our experiments, which includes our automated annotations.
Panoptic segmentation of point clouds is a crucial task that enables autonomous vehicles to comprehend their surroundings using highly accurate and reliable LiDAR sensors. Existing top-down approaches tackle this problem either by combining independent task-specific networks or by translating methods from the image domain, ignoring the intricacies of LiDAR data and thus often resulting in sub-optimal performance. In this paper, we present the novel top-down Efficient LiDAR Panoptic Segmentation (EfficientLPS) architecture, which addresses multiple challenges in segmenting LiDAR point clouds, including distance-dependent sparsity, severe occlusions, large scale variations, and re-projection errors. EfficientLPS comprises a novel shared backbone that encodes with strengthened geometric transformation modeling capacity and aggregates semantically rich range-aware multi-scale features. It incorporates new scale-invariant semantic and instance segmentation heads along with a panoptic fusion module supervised by our proposed panoptic periphery loss function. Additionally, we formulate a regularized pseudo-labeling framework to further improve the performance of EfficientLPS by training on unlabeled data. We benchmark our proposed model on two large-scale LiDAR datasets: nuScenes, for which we also provide ground truth annotations, and SemanticKITTI. Notably, EfficientLPS sets the new state of the art on both datasets.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to an RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.
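A condensed sketch of a voxel feature encoding (VFE) layer: a pointwise MLP, a max-pool over the points in the voxel, and concatenation of the pooled aggregate back onto each point. Layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VFELayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim // 2), nn.ReLU())

    def forward(self, voxel_points):
        """voxel_points: (N, in_dim) features of the points inside one voxel."""
        pointwise = self.mlp(voxel_points)                      # (N, out_dim / 2)
        aggregated = pointwise.max(dim=0, keepdim=True).values  # voxel summary
        # Each point keeps its own features plus the voxel-level context.
        return torch.cat([pointwise, aggregated.expand_as(pointwise)], dim=1)

# Stacking VFE layers and max-pooling the last one yields the fixed-length
# volumetric feature per voxel that is fed to the region proposal network.
```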
We propose a novel deep learning-based framework to tackle the challenge of semantic segmentation of largescale point clouds of millions of points. We argue that the organization of 3D point clouds can be efficiently captured by a structure called superpoint graph (SPG), derived from a partition of the scanned scene into geometrically homogeneous elements. SPGs offer a compact yet rich representation of contextual relationships between object parts, which is then exploited by a graph convolutional network. Our framework sets a new state of the art for segmenting outdoor LiDAR scans (+11.9 and +8.8 mIoU points for both Semantic3D test sets), as well as indoor scans (+12.4 mIoU points for the S3DIS dataset).
The widespread use of roof-mounted rotating LiDAR sensors on autonomous vehicles drives the need for real-time processing of 3D point sequences. However, most LiDAR semantic segmentation datasets and algorithms split these acquisitions into 360° frames, leading to acquisition latency incompatible with realistic real-time applications and evaluations. We address this issue with two key contributions. First, we introduce HelixNet, a billion-point dataset with fine-grained labels, timestamps, and sensor rotation information for accurately assessing the real-time readiness of segmentation algorithms. Second, we propose Helix4D, a compact and efficient spatio-temporal transformer architecture specifically designed for rotating LiDAR point sequences. Helix4D operates on acquisition slices corresponding to a fraction of a full rotation of the sensor, significantly reducing the total latency. We present an extensive benchmark of the performance and real-time readiness of several state-of-the-art models on HelixNet and SemanticKITTI. Helix4D matches the accuracy of the best segmentation algorithms while reducing latency by more than 5× and model size by 50×. Code and data are available at: https://romainloiseau.fr/helixnet