Current LiDAR SLAM (Simultaneous Localization and Mapping) systems suffer from low accuracy and limited robustness in complicated circumstances. From our experiments, we find that current LiDAR SLAM systems perform poorly when the noise level in the captured point clouds is high. Therefore, in this work, we propose a general framework to tackle the problems of denoising and loop closure for LiDAR SLAM in complex environments with heavy noise and outliers caused by reflective materials. Current approaches to point cloud denoising are mainly designed for small-scale point clouds and cannot be extended to large-scale scenes. In this work, we first propose a lightweight network for large-scale point cloud denoising. Subsequently, we design an efficient loop closure network for place recognition in global optimization to improve the localization accuracy of the whole system. Finally, we demonstrate through extensive experiments and benchmark studies that our method significantly boosts the localization accuracy of a LiDAR SLAM system faced with noisy point clouds, at a marginal increase in computational cost.
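The abstract does not describe the denoising network itself. As a point of reference for what learned large-scale denoisers replace, a classical statistical outlier-removal baseline can be sketched as follows; the `k` and `std_ratio` parameters are illustrative assumptions, not values from the paper, and the brute-force distance computation is exactly what fails to scale to large clouds.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is more
    than std_ratio standard deviations above the cloud-wide average.
    Brute-force O(n^2): workable for small clouds only."""
    diff = points[:, None, :] - points[None, :, :]   # (n, n, 3) displacement
    dists = np.linalg.norm(diff, axis=-1)            # pairwise distances
    knn = np.sort(dists, axis=1)[:, 1:k + 1]         # skip self-distance 0
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

# A dense cluster plus one far-away outlier
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.05, (100, 3)), [[10.0, 10.0, 10.0]]])
clean = statistical_outlier_removal(cloud)
print(len(cloud), len(clean))
```

The learned approach in the paper presumably amortizes this neighborhood reasoning in a network so it scales to LiDAR-sized clouds.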
LiDAR- and inertial-sensor-based localization and mapping are of great significance for unmanned ground vehicle applications. In this work, we have developed an improved LiDAR-inertial localization and mapping system for unmanned ground vehicles that is appropriate for versatile search and rescue applications. Compared with existing LiDAR-based localization and mapping systems such as LOAM, we make two major contributions: the first is improving the robustness of particle-swarm-filter-based LiDAR SLAM, and the second is a loop closure method developed for global optimization to improve the localization accuracy of the whole system. We demonstrate by experiments that both the accuracy and robustness of the LiDAR SLAM system are improved. Finally, we have carried out systematic experimental tests at the Hong Kong Science Park as well as in other complicated indoor and outdoor real-world circumstances, which demonstrate the effectiveness and efficiency of our approach. Our system is shown to be highly accurate, robust, and efficient, and is of great value for the localization and mapping of unmanned ground vehicles in unknown environments.
In this work, we propose a lightweight integrated LiDAR-inertial SLAM system with high efficiency and strong loop closure capability. We find that current state-of-the-art LiDAR-inertial SLAM systems perform poorly in loop closure: they often fail with large drift and suffer from limited efficiency when faced with large-scale circumstances. In this work, firstly, to improve the speed of the whole LiDAR-inertial SLAM system, we propose a new sparse voxel-hashing data structure to enhance its efficiency. Secondly, to improve point-cloud-based localization performance, we integrate loop closure algorithms. Extensive experiments in real-world, large-scale, complicated circumstances demonstrate the great effectiveness and robustness of the proposed LiDAR-inertial SLAM system.
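The core idea of sparse voxel hashing is that a hash map keyed on quantized coordinates stores only occupied voxels, so map size and lookup cost depend on the surface area actually observed rather than the scene's bounding box. A minimal sketch, assuming a dict-backed map and an illustrative 0.5 m voxel size (not the paper's actual parameters):

```python
import numpy as np

VOXEL = 0.5  # voxel edge length in meters (illustrative assumption)

def voxel_key(p, size=VOXEL):
    """Quantize a 3D point to its integer voxel coordinate."""
    return tuple(np.floor(np.asarray(p) / size).astype(int))

class SparseVoxelMap:
    """Hash map from voxel coordinate -> points falling inside it.
    Insertion and occupancy lookup are O(1) per point, so the map stays
    cheap even when the scene extent is huge but mostly empty."""
    def __init__(self, size=VOXEL):
        self.size = size
        self.grid = {}

    def insert(self, p):
        self.grid.setdefault(voxel_key(p, self.size), []).append(p)

    def occupied(self, p):
        return voxel_key(p, self.size) in self.grid

m = SparseVoxelMap()
for p in [(0.1, 0.1, 0.1), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0)]:
    m.insert(p)
print(len(m.grid))                                       # two occupied voxels
print(m.occupied((0.4, 0.4, 0.4)), m.occupied((9.0, 9.0, 9.0)))
```

A production system would store per-voxel statistics (centroids, normals) rather than raw points, but the sparse-keying idea is the same.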
We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimation and map-building. LIO-SAM formulates lidar-inertial odometry atop a factor graph, allowing a multitude of relative and absolute measurements, including loop closures, to be incorporated from different sources as factors into the system. The estimated motion from inertial measurement unit (IMU) pre-integration de-skews point clouds and produces an initial guess for lidar odometry optimization. The obtained lidar odometry solution is used to estimate the bias of the IMU. To ensure high performance in real-time, we marginalize old lidar scans for pose optimization, rather than matching lidar scans to a global map. Scan-matching at a local scale instead of a global scale significantly improves the real-time performance of the system, as does the selective introduction of keyframes, and an efficient sliding window approach that registers a new keyframe to a fixed-size set of prior "sub-keyframes." The proposed method is extensively evaluated on datasets gathered from three platforms over various scales and environments.
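The keyframe selection and fixed-size "sub-keyframe" window described above can be sketched schematically; the distance threshold and window size below are illustrative assumptions, not LIO-SAM's configuration, and real keyframes carry full 6-DOF poses and point clouds rather than 2D tuples.

```python
from collections import deque
import math

KEYFRAME_DIST = 1.0   # meters moved before spawning a keyframe (assumed)
WINDOW_SIZE = 4       # number of prior sub-keyframes to register against

class KeyframeWindow:
    """Keep only the last WINDOW_SIZE keyframes: a new scan is matched
    against this fixed-size set instead of a global map, bounding the
    cost of scan-matching regardless of trajectory length."""
    def __init__(self):
        self.window = deque(maxlen=WINDOW_SIZE)
        self.last_pose = None

    def maybe_add(self, pose):
        # Selective keyframing: only accept poses far enough from the last one.
        if self.last_pose is None or math.dist(pose, self.last_pose) >= KEYFRAME_DIST:
            self.window.append(pose)
            self.last_pose = pose
            return True
        return False

kw = KeyframeWindow()
poses = [(0.0, 0.0), (0.3, 0.0), (1.2, 0.0), (2.5, 0.0),
         (3.6, 0.0), (4.8, 0.0), (6.0, 0.0)]
added = [kw.maybe_add(p) for p in poses]
print(added)            # the (0.3, 0.0) pose is skipped: too close
print(list(kw.window))  # only the most recent WINDOW_SIZE keyframes survive
```

The `deque(maxlen=...)` evicts the oldest sub-keyframe automatically, mirroring the sliding-window registration against a fixed-size prior set.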
Traditional LiDAR odometry (LO) systems mainly leverage geometric information from the traversed environment to register laser scans and estimate LiDAR ego-motion, which may be unreliable in dynamic or unstructured environments. This paper proposes InTEn-LOAM, a low-drift and robust LiDAR odometry and mapping method that fully exploits the implicit information in laser scans (i.e., geometric, intensity, and temporal characteristics). Scan points are projected onto cylindrical images, which facilitates the efficient and adaptive extraction of various features, i.e., ground, beam, facade, and reflector. We propose a novel intensity-based point registration algorithm and incorporate it into the LiDAR odometry, enabling the LO system to jointly estimate LiDAR ego-motion using both geometric and intensity feature points. To eliminate interference from dynamic objects, we propose a temporal-based dynamic object removal method to filter them out before map updates. Moreover, the local map is organized and downsampled using a time-related voxel grid filter to maintain similarity between the current scan and the static local map. Extensive experiments are conducted on both simulated and real-world datasets. The results show that the proposed method achieves similar or better accuracy w.r.t. the state-of-the-art in normal driving scenarios, and outperforms geometry-based LO in unstructured environments.
In this study, we propose a novel visual localization approach to accurately estimate the six-degrees-of-freedom (6-DOF) pose of a robot within a 3D LiDAR map based on visual data from an RGB camera. The 3D map is obtained using an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of collecting a precise sparse map. Features extracted from the camera images are compared with the points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows an expensive LiDAR-equipped scout robot to be used once, to map the environment, while multiple operational robots equipped only with RGB cameras perform mission tasks with localization accuracy higher than common camera-based solutions. The method was tested on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). In evaluating localization accuracy, we achieved centimeter-level accuracy, with a median translation error as low as 1.3 cm. Accurate localization using only a camera enables autonomous mobile robots to solve the most complex tasks that require high localization accuracy.
Loop closure detection is an essential component of simultaneous localization and mapping (SLAM) systems, as it reduces the drift accumulated over time. Over the years, several deep learning approaches have been proposed to tackle this task; however, their performance has been subpar compared to hand-crafted techniques, especially when dealing with reverse loops. In this paper, we introduce the novel LCDNet that effectively detects loop closures in LiDAR point clouds by simultaneously identifying previously visited places and estimating the 6-DOF relative transformation between the current scan and the map. LCDNet is composed of a shared encoder, a place recognition head that extracts global descriptors, and a relative pose head that estimates the transformation between two point clouds. We introduce a novel relative pose head based on unbalanced optimal transport theory, which we implement in a differentiable manner to allow end-to-end training. Extensive evaluations of LCDNet on multiple real-world autonomous driving datasets show that our approach outperforms state-of-the-art loop closure detection and point cloud registration techniques, especially when dealing with reverse loops. Moreover, we integrate the proposed loop closure detection approach into a LiDAR SLAM library to provide a complete mapping system, and demonstrate its generalization ability using different sensor setups in unseen cities.
In this paper, we evaluate eight popular open-source 3D LiDAR and visual SLAM (simultaneous localization and mapping) algorithms, namely LOAM, LeGO-LOAM, LIO-SAM, HDL-Graph, ORB-SLAM3, Basalt VIO, and SVO2. We have designed indoor and outdoor experiments to investigate the effect of: (i) the sensor mounting position, (ii) the terrain type and vibration, and (iii) the motion (variations in linear and angular velocity). We compare their performance in terms of relative and absolute pose error, and also provide a comparison of their required computational resources. We thoroughly analyze and discuss the results using our multi-camera and multi-LiDAR indoor and outdoor datasets, and identify the best-performing system for each environment case. We hope our findings help people choose the combination of sensors and corresponding SLAM algorithm that best suits their needs and target environment.
LiDAR mapping is important yet challenging in self-driving and mobile robotics. To tackle such a global point cloud registration problem, DeepMapping converts the complex map estimation into a self-supervised training of simple deep networks. Despite its broad convergence range on small datasets, DeepMapping still cannot produce satisfactory results on large-scale datasets with thousands of frames. This is due to the lack of loop closures and exact cross-frame point correspondences, and the slow convergence of its global localization network. We propose DeepMapping2 by adding two novel techniques to address these issues: (1) organization of training batch based on map topology from loop closing, and (2) self-supervised local-to-global point consistency loss leveraging pairwise registration. Our experiments and ablation studies on public datasets (KITTI, NCLT, and Nebula) demonstrate the effectiveness of our method. Our code will be released.
Loop closing is a fundamental part of simultaneous localization and mapping (SLAM) for autonomous mobile systems. In the field of visual SLAM, bag-of-words (BoW) has achieved great success in loop closure. The BoW features used for loop searching can also be used in the subsequent 6-DOF loop correction. However, for 3D LiDAR SLAM, state-of-the-art methods may fail to recognize loops in real time, and usually cannot correct the full 6-DOF loop pose. To address this limitation, we present a novel bag-of-words for real-time loop closing in 3D LiDAR SLAM, called BoW3D. The novelty of our method lies in that it not only efficiently recognizes revisited loops but also corrects the full 6-DOF loop pose in real time. BoW3D builds the bag of words based on the 3D feature LinK3D, which is efficient, pose-invariant, and usable for accurate point-to-point matching. We embed our proposed method into a 3D LiDAR odometry system to evaluate loop closing performance. We test our method on public datasets and compare it against other state-of-the-art algorithms. BoW3D shows better performance in terms of F1 max and extended precision scores in most scenarios, with superior real-time performance. Notably, BoW3D takes an average of 50 ms to recognize and correct loops on KITTI 00 (which includes 4K+ 64-ray laser scans) when executed on a laptop with an Intel Core i7 @ 2.2 GHz processor.
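At the heart of any bag-of-words loop search is an inverted index from quantized feature "words" to the frames they appeared in; a query frame then votes for past frames that share its words. A minimal sketch of that retrieval step, with plain integers standing in for quantized descriptors (this illustrates generic BoW retrieval, not BoW3D's specific LinK3D-based indexing):

```python
from collections import defaultdict, Counter

class InvertedIndex:
    """Word -> set of frames containing it. A query counts shared words
    per past frame and returns candidates above a vote threshold, ranked."""
    def __init__(self):
        self.index = defaultdict(set)

    def add_frame(self, frame_id, words):
        for w in words:
            self.index[w].add(frame_id)

    def query(self, words, min_votes=2):
        votes = Counter()
        for w in words:
            for f in self.index[w]:
                votes[f] += 1
        return [f for f, v in votes.most_common() if v >= min_votes]

db = InvertedIndex()
db.add_frame(0, {1, 2, 3, 4})
db.add_frame(1, {10, 11, 12})
db.add_frame(2, {3, 4, 5})
print(db.query({2, 3, 4}))   # frame 0 shares 3 words, frame 2 shares 2
```

Because only frames sharing at least one word are ever touched, lookup cost scales with word overlap rather than database size, which is what makes real-time loop search feasible.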
Simultaneous localization and mapping (SLAM) is considered an essential capability for intelligent vehicles and mobile robots. However, most current LiDAR SLAM approaches are based on the assumption of a static environment, so localization in a dynamic environment with multiple moving objects is, in practice, unreliable. This paper proposes a dynamic SLAM framework, RF-LIO, built on LIO-SAM, which adds adaptive multi-resolution range images and uses tightly-coupled LiDAR-inertial odometry to first remove moving objects and then match the LiDAR scan to the submap. It can thus obtain an accurate pose even in highly dynamic environments. The proposed RF-LIO is evaluated on both a self-collected dataset and the open UrbanLoco dataset. Experimental results in highly dynamic environments demonstrate that, compared with LOAM and LIO-SAM, the absolute trajectory accuracy of the proposed RF-LIO can be improved by 90% and 70%, respectively. RF-LIO is among the state-of-the-art SLAM systems for highly dynamic environments.
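The range images that RF-LIO relies on come from a spherical projection of the point cloud: each pixel spans an azimuth/elevation bin and stores the nearest return, so a scan and a submap rendered from the same pose can be compared pixel-by-pixel to expose moving objects. A toy-resolution sketch of that projection (the image size and field of view are illustrative assumptions, far coarser than a real sensor's):

```python
import numpy as np

def to_range_image(points, h=4, w=8, fov_up=15.0, fov_down=-15.0):
    """Project 3D points to an (elevation x azimuth) range image;
    each pixel keeps the nearest return."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                    # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                  # elevation angle
    fov = np.radians(fov_up) - np.radians(fov_down)
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    v = ((np.radians(fov_up) - pitch) / fov * h).astype(int).clip(0, h - 1)
    img = np.full((h, w), np.inf)
    for ui, vi, ri in zip(u, v, r):
        img[vi, ui] = min(img[vi, ui], ri)    # nearest return wins
    return img

pts = np.array([[5.0, 0.0, 0.0],    # ahead, 5 m
                [10.0, 0.0, 0.0],   # same bearing, occluded at 10 m
                [0.0, 5.0, 0.0]])   # to the left, 5 m
img = to_range_image(pts)
print(img[np.isfinite(img)])
```

Note how the two points along the same bearing collapse into one pixel holding the closer range; a submap point visible "behind" the live scan's return at the same pixel is the visibility cue used to flag dynamics.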
With the growing demand to employ a team of robots to perform a task collaboratively, the research community has become increasingly interested in collaborative simultaneous localization and mapping. Unfortunately, existing datasets are limited in the scale and variation of the collaborative trajectories, even though generalization between inter-trajectories among different agents is crucial to the overall viability of collaborative tasks. To help align the research community's contributions with realistic multi-agent coordinated SLAM problems, we propose S3E, a large-scale multimodal dataset captured by a fleet of unmanned ground vehicles along four designed collaborative trajectory paradigms. S3E consists of 7 outdoor and 5 indoor sequences, each exceeding 200 seconds, consisting of well temporally synchronized and spatially calibrated high-frequency IMU, high-quality stereo camera, and 360-degree LiDAR data. Crucially, our effort exceeds previous attempts regarding dataset size, scene variability, and complexity. It has 4x the average recording time of the pioneering EuRoC dataset. We also provide careful dataset analysis as well as baselines for collaborative SLAM and its single counterparts. Data and more up-to-date details are found at https://github.com/PengYu-Team/S3E.
Simultaneous localization and mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications. This work proposes a novel LiDAR-inertial-visual fusion framework, termed R$^3$LIVE++, to achieve robust and accurate state estimation while simultaneously reconstructing the radiance map on the fly. R$^3$LIVE++ consists of a LiDAR-inertial odometry (LIO) and a visual-inertial odometry (VIO), both running in real time. The LIO subsystem utilizes measurements from the LiDAR to reconstruct the geometric structure (i.e., the positions of 3D points), while the VIO subsystem simultaneously recovers the radiance information of the geometric structure from the input images. R$^3$LIVE++ is developed based on R$^3$LIVE and further improves the accuracy of localization and mapping by accounting for camera photometric calibration (e.g., non-linear response function and lens vignetting) and online estimation of the camera exposure time. We conduct extensive experiments on both public and private datasets to compare our proposed system against other state-of-the-art SLAM systems. Quantitative and qualitative results show that the proposed system offers significant improvements over others in both accuracy and robustness. Moreover, to demonstrate the extendability of our work, we develop several applications based on the reconstructed radiance maps, such as high dynamic range (HDR) imaging, virtual environment exploration, and 3D video gaming. Finally, to share our findings and contribute to the community, we make our code, hardware design, and dataset publicly available on GitHub: github.com/hku-mars/r3live
This paper proposes an efficient, probabilistic, adaptive voxel mapping method for LiDAR odometry. The map is a collection of voxels, each containing one plane (or edge) feature, which enables a probabilistic representation of the environment and accurate registration of new LiDAR scans. We further analyze the need for coarse-to-fine voxel mapping, and then use a novel voxel map organized by a hash table and octrees to build and update the map efficiently. We apply the proposed voxel map to an iterated extended Kalman filter and construct a maximum a posteriori probability problem for pose estimation. Experiments on the open KITTI dataset demonstrate the high accuracy and efficiency of our method compared with other state-of-the-art methods. Outdoor experiments in unstructured environments with a non-repetitive-scanning LiDAR further verify the adaptability of our mapping method to different environments and LiDAR scanning patterns. Our code and dataset are open-sourced on GitHub.
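The plane feature stored in each voxel is typically obtained from the eigen-decomposition of the covariance of the voxel's points: the eigenvector with the smallest eigenvalue is the plane normal, and that eigenvalue measures how "thin" (planar) the cluster is. A minimal numpy sketch of that per-voxel fit (the planarity threshold in the usage below is an illustrative assumption, not a value from the paper):

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to the points of one voxel.
    Returns (centroid, unit normal, smallest eigenvalue); a near-zero
    smallest eigenvalue indicates the points are well explained by a plane."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)          # 3x3 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    normal = eigvecs[:, 0]                       # direction of least spread
    return centroid, normal, eigvals[0]

rng = np.random.default_rng(1)
# Noisy samples of the plane z = 0
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       rng.normal(0, 1e-3, 50)])
centroid, normal, thickness = fit_plane(pts)
print(abs(normal[2]))       # normal is close to the +/- z axis
print(thickness < 1e-4)     # tiny smallest eigenvalue: accept as a plane
```

In an adaptive scheme, a voxel whose smallest eigenvalue stays too large (the points are not planar) would be subdivided via its octree and refit at a finer scale.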
As the autonomous driving industry slowly matures, visual map localization is quickly becoming the standard approach for localizing cars as accurately as possible. Thanks to the rich data returned by visual sensors such as cameras or LiDARs, researchers are able to build different types of maps with various levels of detail and use them to achieve high levels of vehicle localization accuracy and stability in urban environments. In contrast to popular SLAM approaches, visual map localization relies on pre-built maps and improves localization accuracy solely by avoiding error accumulation, or drift. We define visual map localization as a two-stage process. In the place recognition stage, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions. Subsequently, in the map metric localization stage, the vehicle is tracked as it moves across the map by continuously aligning the visual sensor output with the current map region being traversed. In this paper, we survey, discuss, and compare the latest LiDAR-based, camera-based, and cross-modal methods for visual map localization in both stages, in order to highlight the strengths of each approach.
Simultaneous localization and mapping (SLAM) is being deployed in real-world applications; however, many state-of-the-art solutions still struggle in many common scenarios. A key necessity for progress in SLAM research is the availability of high-quality datasets and fair, transparent benchmarking. To this end, we have created the Hilti-Oxford dataset to push state-of-the-art SLAM systems to their limits. The dataset covers a variety of challenges, ranging from sparse and regular construction sites to a 17th-century neoclassical building with fine details and curved surfaces. To encourage multi-modal SLAM approaches, we designed a data collection platform with a LiDAR, five cameras, and an IMU (inertial measurement unit). To benchmark algorithms for tasks where accuracy and robustness are critical, we implemented a novel ground-truth collection method that enables our dataset to measure SLAM pose errors with millimeter accuracy. To further ensure accuracy, the extrinsics of our platform were verified with a micrometer-accurate scanner, and temporal calibration was managed online using hardware time synchronization. The multi-modality and diversity of our dataset attracted a large number of academic and industrial researchers to the second edition of the Hilti SLAM Challenge, which concluded in June 2022. The results of the challenge show that while the top three teams could achieve an accuracy of 2 cm or better on some sequences, performance dropped on the more difficult sequences.
We propose a real-time method for odometry and mapping using range measurements from a 2-axis lidar moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation can cause mis-registration of the resulting point cloud. To date, coherent 3D maps can be built by off-line batch methods, often using loop closure to correct for drift over time. Our method achieves both low drift and low computational complexity without the need for high-accuracy ranging or inertial measurements. The key idea in obtaining this level of performance is the division of the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, into two algorithms. One algorithm performs odometry at a high frequency but low fidelity to estimate the velocity of the lidar. Another algorithm runs at a frequency an order of magnitude lower for fine matching and registration of the point cloud. The combination of the two algorithms allows the method to map in real time. The method has been evaluated by a large set of experiments as well as on the KITTI odometry benchmark. The results indicate that the method can achieve accuracy at the level of state-of-the-art offline batch methods.
We present the DLR Planetary Stereo, Solid-State LiDAR, Inertial (S3LI) dataset, recorded on Mount Etna, Sicily, an environment analogous to the Moon and Mars, using a hand-held sensor suite with attributes suitable for space-like mobile rovers. The environment is characterized by challenging conditions in both visual and structural appearance: severe visual aliasing poses significant limitations on the ability of visual SLAM systems to perform place recognition, while the absence of outstanding structural details, combined with the limited field of view of the employed solid-state LiDAR sensor, challenges traditional LiDAR SLAM with point clouds alone. With this data, covering more than 4 km of travel on soft volcanic slopes, we aim to: 1) provide a tool to expose the limits of state-of-the-art SLAM systems in environments not widely represented in available datasets, and 2) motivate the development of novel localization and mapping approaches that effectively exploit the complementary capabilities of the two sensors. The dataset is accessible at the following URL: https://rmc.dlr.de/s3li_dataset
Simultaneous localization and mapping (SLAM) based on laser sensors has been widely adopted by mobile robots and autonomous vehicles. These SLAM systems are required to support accurate localization with limited computational resources. In particular, point cloud registration, i.e., the process of matching and aligning multiple LiDAR scans collected at multiple locations in a global coordinate frame, is regarded as the bottleneck step of SLAM. In this paper, we propose a feature filtering algorithm, PFilter, that can filter out invalid features and thus greatly alleviate this bottleneck. Meanwhile, the overall registration accuracy is also improved thanks to the carefully curated feature points. We integrate PFilter into the well-established scan-to-map LiDAR odometry framework F-LOAM and evaluate its performance on the KITTI dataset. Experimental results show that PFilter removes about 48.4% of the points in the local feature map and reduces feature points per scan by 19.3% on average, saving 20.9% of the processing time per frame, while improving accuracy by 9.4%.
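One simple way to judge whether a feature point is "valid" is persistence: features re-observed across several consecutive scans are likely stable structure, while one-off detections are likely noise. The sketch below illustrates that persistence idea in general terms; it is not PFilter's actual criterion, and the thresholds and string feature IDs are assumptions for illustration only.

```python
from collections import defaultdict

MIN_HITS = 2      # a feature must appear in at least this many scans (assumed)
HISTORY = 3       # ...out of the last HISTORY scans (assumed)

class PersistenceFilter:
    """Track per-feature observation history and keep only features
    re-observed often enough in the recent past."""
    def __init__(self):
        self.hits = defaultdict(list)   # feature id -> scan indices seen

    def observe(self, scan_idx, feature_ids):
        for f in feature_ids:
            self.hits[f].append(scan_idx)

    def stable(self, scan_idx):
        """Features seen at least MIN_HITS times within the last HISTORY scans."""
        lo = scan_idx - HISTORY + 1
        return sorted(f for f, seen in self.hits.items()
                      if sum(1 for i in seen if i >= lo) >= MIN_HITS)

pf = PersistenceFilter()
pf.observe(0, ["wall", "tree", "pedestrian"])
pf.observe(1, ["wall", "tree"])
pf.observe(2, ["wall", "noise"])
print(pf.stable(2))   # 'pedestrian' and 'noise' are transient, so dropped
```

Pruning transient features like this shrinks both the per-scan feature set and the local map, which is exactly where the reported time savings come from.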
This paper first proposes an efficient 3D point cloud learning architecture, named PWCLO-Net, for LiDAR odometry. In this architecture, a projection-aware representation of the 3D point cloud is proposed to organize the raw 3D point cloud into an ordered data form for efficiency. A Pyramid, Warping, and Cost-volume (PWC) structure for the LiDAR odometry task is built to estimate and refine the pose in a hierarchical and efficient coarse-to-fine approach. A projection-aware attentive cost volume is built to directly associate two discrete point clouds and obtain embedded motion patterns. Then, a trainable embedding mask is proposed to weight the local motion patterns to regress the overall pose and filter outlier points. A trainable pose warp-refinement module, used iteratively with the hierarchically optimized embedding mask, makes the pose estimation more robust to outliers. The entire architecture is holistically optimized end to end to achieve adaptive learning of the cost volume and mask, and all operations involving point cloud sampling and grouping are accelerated by projection-aware 3D feature learning methods. The superior performance and effectiveness of our LiDAR odometry architecture are demonstrated on the KITTI odometry dataset. Our method outperforms all recent learning-based methods, and even the geometry-based approach, LOAM with mapping optimization, on most sequences of the KITTI odometry dataset.