Robot localization is the inverse problem of finding a robot's pose using a map and sensor measurements. In recent years, invertible neural networks (INNs) have successfully solved ambiguous inverse problems in various fields. This paper proposes a framework that solves the localization problem with an INN. We design an INN that provides an implicit map representation and performs localization in its inverse path. By sampling the latent space during evaluation, Local_INN outputs robot poses with covariance, which can be used to estimate uncertainty. We show that the localization performance of Local_INN is on par with current methods while having lower latency. We show detailed 2D and 3D map reconstructions from Local_INN using data outside the training set. We also provide a global localization algorithm using Local_INN to address the kidnapping problem.
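To make the uncertainty-estimation idea above concrete, here is a minimal sketch, assuming some trained inverse mapping from (observation embedding, latent vector) to pose is available; the inverse_net below is a hypothetical stand-in, not the paper's architecture:

```python
# Toy sketch of latent-space sampling for pose uncertainty: push latent samples
# through an (assumed trained) inverse path and summarize the resulting poses
# with a mean and covariance. inverse_net is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(0)

def inverse_net(obs_embedding, z):
    """Hypothetical inverse path: (observation embedding, latent) -> pose (x, y, yaw)."""
    # A stand-in affine map; a real INN would be a trained stack of coupling layers.
    W = np.array([[0.8, 0.1, 0.0],
                  [0.0, 0.9, 0.1],
                  [0.1, 0.0, 0.5]])
    return obs_embedding[:3] + W @ z

def pose_with_covariance(obs_embedding, n_samples=256, latent_dim=3):
    """Monte Carlo estimate of pose mean and covariance from latent samples."""
    zs = rng.standard_normal((n_samples, latent_dim))
    poses = np.stack([inverse_net(obs_embedding, z) for z in zs])
    return poses.mean(axis=0), np.cov(poses, rowvar=False)

obs = rng.standard_normal(8)          # placeholder observation embedding
mean_pose, cov = pose_with_covariance(obs)
print("pose estimate:", mean_pose)
print("covariance diagonal:", np.diag(cov))
```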
Safety-critical Autonomous Systems require a trustworthy and transparent decision-making process to be deployable in the real world. The advancement of Machine Learning introduces high performance, but largely through black-box algorithms. We focus the discussion of explainability specifically on Autonomous Vehicles (AVs). As a safety-critical system, AVs provide a unique opportunity to utilize cutting-edge Machine Learning techniques while requiring transparency in decision making. Interpretability in every action the AV takes becomes crucial in post-hoc analysis where blame assignment might be necessary. In this paper, we provide a position on how researchers could consider incorporating explainability and interpretability into the design and optimization of separate Autonomous Vehicle modules, including Perception, Planning, and Control.
Reliability is a key factor for realizing the safety guarantee of fully autonomous robot systems. In this paper, we focus on reliability in mobile robot localization. Monte Carlo localization (MCL) is widely used for mobile robot localization. However, it is still difficult to guarantee its safety because there are no methods for determining the reliability of the MCL estimate. This paper presents a novel localization framework that enables robust localization, reliability estimation, and quick re-localization simultaneously. The presented method can be implemented using an estimation manner similar to that of MCL. The method can increase localization robustness to environment changes by estimating known and unknown obstacles while performing localization; however, localization failures can of course still occur due to unanticipated errors. The method therefore includes a reliability estimation function that tells us whether localization has failed. Additionally, the method can seamlessly integrate a global localization method via importance sampling. Consequently, quick re-localization from failures can be realized while mitigating the noisy influence of global localization. Through three types of experiments, we show that reliable MCL that performs robust localization, self-failure detection, and quick failure recovery can be realized.
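For readers unfamiliar with MCL, a minimal particle-filter localization loop looks roughly as follows; this is a generic textbook sketch with a single hypothetical landmark and a simplified range likelihood, not the reliability-aware framework of the paper:

```python
# Minimal Monte Carlo localization sketch: particles are propagated with a noisy
# motion model, weighted by an observation likelihood, and resampled.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.uniform(0.0, 10.0, size=(N, 2))     # x, y in a hypothetical 10x10 m room
weights = np.full(N, 1.0 / N)

def motion_update(particles, delta, noise=0.05):
    return particles + delta + rng.normal(0.0, noise, particles.shape)

def measurement_update(weights, particles, measured_range, landmark, sigma=0.2):
    expected = np.linalg.norm(particles - landmark, axis=1)
    likelihood = np.exp(-0.5 * ((expected - measured_range) / sigma) ** 2)
    w = weights * likelihood
    return w / w.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

landmark = np.array([5.0, 5.0])                       # hypothetical known landmark
particles = motion_update(particles, delta=np.array([0.1, 0.0]))
weights = measurement_update(weights, particles, measured_range=2.0, landmark=landmark)
particles, weights = resample(particles, weights)
print("pose estimate:", particles.mean(axis=0))
```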
Robust and accurate localization is a fundamental requirement for mobile autonomous systems. Pole-like objects, such as traffic signs, poles, and street lamps, are frequently used as landmarks for localization in urban environments due to their local distinctiveness and long-term stability. In this paper, we present a novel, accurate, and fast pole extraction approach based on geometric features that runs online and has little computational demand. Our method performs all computations directly on range images generated from 3D LiDAR scans, which avoids explicitly processing 3D point clouds and enables fast pole extraction for each scan. We further use the extracted poles as pseudo-labels to train a deep neural network for image-based pole segmentation. We test both our geometric and learning-based pole extraction methods for localization on different datasets with different LiDAR scanners, routes, and seasonal changes. The experimental results show that our methods outperform other state-of-the-art approaches. Moreover, boosted with pseudo pole labels extracted from multiple datasets, our learning-based method can run across different datasets and achieve even better localization results than the geometry-based method. We release our pole datasets to the public for evaluating the performance of pole extractors, along with the implementation of our approach.
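The geometric intuition of extracting poles directly from range images can be sketched roughly as follows; this toy detector only looks for narrow foreground runs in a single range-image row and is not the paper's actual algorithm:

```python
# Rough illustration only: in one range-image row, a pole shows up as a narrow run
# of returns that are noticeably closer than the background on both sides.
import numpy as np

def detect_pole_segments(row, jump=1.0, max_width=6):
    """Return (start, end) index pairs of narrow foreground segments in one row."""
    segments, start = [], None
    for i in range(1, len(row)):
        if row[i] < row[i - 1] - jump and start is None:       # range drops: segment begins
            start = i
        elif start is not None and row[i] > row[i - 1] + jump:  # range jumps back: segment ends
            if i - start <= max_width:
                segments.append((start, i))
            start = None
    return segments

row = np.full(60, 20.0)       # synthetic background at 20 m
row[25:28] = 7.0              # a 3-pixel-wide pole-like object at 7 m
print(detect_pole_segments(row))   # -> [(25, 28)]
```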
In this paper, a complete framework for Autonomous Self Driving is implemented. LIDAR, Camera and IMU sensors are used together. The entire data communication is managed using the Robot Operating System (ROS), which provides a robust platform for the implementation of robotics projects. A Jetson Nano is used to provide powerful on-board processing capabilities. Sensor fusion is performed on the data received from the different sensors to improve the accuracy of the decision making and the inferences derived from the data. This data is then used to create a localized map of the environment. In this step, the position of the vehicle is obtained with respect to the map built using the sensor data. The SLAM techniques used for this purpose are Hector Mapping and GMapping, which are widely used mapping techniques in ROS. Apart from SLAM, which primarily uses LIDAR data, Visual Odometry is implemented using a monocular camera. The sensor-fused data is then used by Adaptive Monte Carlo Localization for car localization. Using the localized map developed, path planning techniques like the "TEB planner" and the "Dynamic Window Approach" are implemented for autonomous navigation of the vehicle. The last step in the project is the implementation of the Control block, the final decision-making block in the pipeline, which outputs speed and steering data for navigation compatible with Ackermann kinematics. The implementation of such a control block under a ROS framework using the three sensors, viz. LIDAR, Camera and IMU, is a novel approach undertaken in this project.
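As an illustration of the Ackermann-compatible speed and steering interface produced by such a control block, a minimal kinematic bicycle-model update is sketched below; the wheelbase and time step are assumed values, not taken from the project:

```python
# Minimal kinematic bicycle model consistent with Ackermann steering.
import math

def ackermann_step(x, y, yaw, speed, steering_angle, wheelbase=0.3, dt=0.05):
    """Propagate the vehicle pose one time step given speed [m/s] and steering [rad]."""
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt
    yaw += speed * math.tan(steering_angle) / wheelbase * dt
    return x, y, yaw

pose = (0.0, 0.0, 0.0)
for _ in range(20):                      # drive for 1 s with constant commands
    pose = ackermann_step(*pose, speed=1.0, steering_angle=0.1)
print(pose)
```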
Thanks to the direct integration of range and geometry, LiDAR-based localization and mapping is one of the core components of many modern robotic systems, enabling precise motion estimation and high-quality maps in real time. However, this reliance on geometry can lead to localization failures when the scene lacks sufficient environmental constraints, as happens in self-symmetric environments such as tunnels. This work addresses this problem by proposing a neural-network-based estimation approach that detects (non-)localizability during robot operation. Particular attention is given to the reliability of scan-to-scan registration, as it is a key component of many LiDAR odometry estimation pipelines. In contrast to previous, mainly heuristic detection approaches, the proposed method enables early failure detection by estimating the localizability on raw sensor measurements without evaluating the underlying registration optimization. Moreover, previous methods remain limited in their ability to generalize across environments and sensor types because of the need for heuristically tuned detection thresholds. The proposed approach avoids this problem by learning from a collection of different environments, allowing the network to operate in a wide variety of scenarios. In addition, the network is trained exclusively on simulated data, avoiding arduous data collection in challenging and degenerate (and often hard to access) environments. The presented method is tested in field experiments spanning challenging environments and two different sensor types. The observed detection performance is on par with state-of-the-art methods after environment-specific threshold tuning.
Accurate localization is a critical requirement for most robotic tasks. The majority of existing work focuses on passive localization, in which the robot's motions are assumed to be given, abstracting away their influence on sampling informative observations. While recent work has shown the benefits of learning motions to disambiguate the robot's pose, these methods are restricted to granular discrete actions and depend directly on the size of the global map. We propose Active Particle Filter Networks (APFN), an approach that relies only on local information for both the likelihood evaluation and the decision making. To do so, we couple differentiable particle filters with a reinforcement learning agent that attends to the most relevant parts of the map. The resulting approach inherits the computational benefits of particle filters, can operate directly in continuous action spaces, and remains fully differentiable, thereby enabling end-to-end optimization as well as being agnostic to the input modality. We demonstrate the benefits of our approach with extensive experiments in photorealistic indoor environments built from real-world 3D scanned apartments. Videos and code are available at http://apfn.cs.uni-freiburg.de.
This paper mainly studies the localization and mapping of a range-sensing robot within confidence-rich maps (CRMs), a dense environment representation maintaining continuous beliefs, and then extends it to information-theoretic exploration for reducing pose uncertainty. Most works on active simultaneous localization and mapping (SLAM) and exploration either assume a known robot pose or use inaccurate information metrics to approximate the pose uncertainty, resulting in imbalanced exploration performance and efficiency in unknown environments. This motivates us to extend confidence-rich mutual information (CRMI) with a measurable pose uncertainty. Specifically, we propose a Rao-Blackwellized particle filter-based localization and mapping scheme for CRMs (RBPF-CLAM), and then develop a new closed-form weighting method to improve localization accuracy without scan matching. We further compute the uncertain CRMI (UCRMI) with the weighted particles using a more accurate approximation. Simulations and experimental evaluations show the localization accuracy and exploration performance of the proposed methods in unstructured and confined scenes.
Autonomous robots operating indoors and in GPS-denied environments can use LiDAR for SLAM. However, LiDAR performs poorly in geometrically degenerate environments due to the challenges of loop closure detection and the computational load of scan matching. Existing WiFi infrastructure can be exploited for localization and mapping at low hardware and computational cost. Yet accurate pose estimation using WiFi is challenging, since different signal values can be measured at the same location due to the unpredictability of signal propagation. We therefore introduce the use of WiFi fingerprint sequences for pose estimation (i.e., loop closure). This approach exploits the spatial coherence of location fingerprints obtained as a mobile robot moves, which gives a better ability to correct odometry drift. The method also incorporates laser scans, thereby improving computational efficiency in large-scale and geometrically degenerate environments while maintaining the accuracy of LiDAR SLAM. We conducted experiments in indoor environments to illustrate the effectiveness of the method. The results were evaluated based on root mean square error (RMSE), and an accuracy of 0.88 m was attained in the test environment.
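A toy illustration of why fingerprint sequences help is given below: two windows of RSSI fingerprints are compared with an average per-access-point difference to score a loop-closure candidate. The threshold, access-point names, and distance metric are illustrative assumptions, not the paper's detector:

```python
# Toy sequence comparison: compare a window of recent WiFi RSSI fingerprints
# against a stored window and flag a loop-closure candidate when the average
# per-AP difference is small.
import numpy as np

def fingerprint_distance(fp_a, fp_b):
    """Mean absolute RSSI difference over access points seen in both fingerprints."""
    common = set(fp_a) & set(fp_b)
    if not common:
        return float("inf")
    return float(np.mean([abs(fp_a[ap] - fp_b[ap]) for ap in common]))

def sequence_distance(seq_a, seq_b):
    """Average fingerprint distance across two equally long fingerprint sequences."""
    return float(np.mean([fingerprint_distance(a, b) for a, b in zip(seq_a, seq_b)]))

stored = [{"ap1": -40, "ap2": -62}, {"ap1": -42, "ap2": -60}, {"ap1": -45, "ap2": -58}]
current = [{"ap1": -41, "ap2": -63}, {"ap1": -43, "ap2": -61}, {"ap1": -44, "ap2": -59}]
THRESHOLD_DB = 6.0                                   # illustrative threshold
if sequence_distance(stored, current) < THRESHOLD_DB:
    print("loop-closure candidate")
```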
Localization of autonomous unmanned aerial vehicles (UAVs) relies heavily on Global Navigation Satellite Systems (GNSS), which are susceptible to interference. Especially in security applications, robust localization algorithms independent of GNSS are needed to provide dependable operation of autonomous UAVs even in interfered conditions. Typical non-GNSS visual localization approaches rely on a known starting pose, work only on a small-sized map, or require known flight paths before a mission starts. We consider the problem of localization with no information on the initial pose or planned flight path. We propose a solution for global visual localization on a map at a scale of up to 100 km2, based on matching orthoprojected UAV images to satellite imagery using learned season-invariant descriptors. We show that the method is able to determine heading, latitude and longitude of the UAV at 12.6-18.7 m lateral translation error in as few as 23.2-44.4 updates from an uninformed initialization, also in situations of significant seasonal appearance difference (winter-summer) between the UAV image and the map. We evaluate the characteristics of multiple neural network architectures for generating the descriptors, and likelihood estimation methods that are able to provide fast convergence and low localization error. We also evaluate the operation of the algorithm using real UAV data and evaluate running time on a real-time embedded platform. We believe this is the first work that is able to recover the pose of a UAV at this scale and rate of convergence, while allowing significant seasonal difference between camera observations and map.
This paper presents normalizing flows for incremental smoothing and mapping (NF-iSAM), a novel algorithm for inferring the full posterior distribution in SLAM problems with nonlinear measurement models and non-Gaussian factors. NF-iSAM exploits the expressive power of neural networks and trains normalizing flows to model and sample the full posterior. By leveraging the Bayes tree, NF-iSAM enables efficient incremental updates similar to iSAM2, albeit in the more challenging non-Gaussian setting. We demonstrate the advantages of NF-iSAM over state-of-the-art point and distribution estimation algorithms using range-only SLAM problems with ambiguous data association. NF-iSAM presents superior accuracy in describing the posterior beliefs of continuous variables (e.g., position) and discrete variables (e.g., data association).
LiDAR and inertial sensor based localization and mapping are of great significance for Unmanned Ground Vehicle related applications. In this work, we have developed an improved LiDAR-inertial localization and mapping system for unmanned ground vehicles, which is appropriate for versatile search and rescue applications. Compared with existing LiDAR-based localization and mapping systems such as LOAM, we make two major contributions: the first is the improved robustness of the particle swarm filter-based LiDAR SLAM, and the second is the loop closure methods developed for global optimization to improve the localization accuracy of the whole system. We demonstrate by experiments that both the accuracy and robustness of the LiDAR SLAM system are improved. Finally, we have performed systematic experimental tests at the Hong Kong Science Park as well as in other complicated indoor and outdoor real-world testing environments, which demonstrates the effectiveness and efficiency of our approach. It is shown that our system achieves high accuracy, robustness, and efficiency. Our system is of great importance for the localization and mapping of unmanned ground vehicles in unknown environments.
In this study, we propose a novel visual localization approach to accurately estimate the six degrees of freedom (6-DoF) pose of a robot within a 3D LiDAR map based on visual data from an RGB camera. The 3D map is obtained using an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of collecting a precise sparse map. Features extracted from camera images are matched against points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows an expensive LiDAR-equipped scout robot to be used once, for mapping the environment, while multiple operational robots equipped only with RGB cameras perform mission tasks with localization accuracy higher than common camera-based solutions. The approach was tested on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). In the evaluation of localization accuracy, we managed to achieve centimeter-level accuracy, with a median translation error of 1.3 cm. The precise localization achieved with cameras alone makes it possible to use autonomous mobile robots for the most complex tasks that require high localization accuracy.
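The 2D-3D geometric optimization step can be approximated with a standard PnP solver; the sketch below uses OpenCV's solvePnPRansac as a generic stand-in with synthetic correspondences and assumed intrinsics, which may differ from the paper's actual optimization:

```python
# Generic PnP sketch: once image features are matched to 3D map points, a PnP
# solver recovers the camera pose from the 2D-3D correspondences.
import numpy as np
import cv2

object_points = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0],
                          [1.0, 1.0, 6.0], [0.5, 0.5, 5.5], [1.5, 0.5, 6.5]],
                         dtype=np.float64)            # matched 3D map points (placeholder)
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])                       # assumed pinhole intrinsics

# Synthesize image observations by projecting the map points with a known pose.
rvec_true = np.zeros((3, 1))
tvec_true = np.array([[0.1], [-0.05], [0.2]])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
print("recovered translation:", tvec.ravel() if ok else "failed")
```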
Lidar-based SLAM systems perform well in a wide range of circumstances by relying on the geometry of the environment. However, even mature and reliable approaches struggle when the environment contains structureless areas such as long hallways. To allow the use of lidar-based SLAM in such environments, we propose to add reflector markers in specific locations that would otherwise be difficult. We present an algorithm to reliably detect these markers and two approaches to fuse the detected markers with geometry-based scan matching. The performance of the proposed methods is demonstrated on real-world datasets from several industrial environments.
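A rough sketch of how retro-reflective markers can be detected in a 2D laser scan is shown below; it simply thresholds the intensity channel and clusters consecutive bright returns, and is only an illustration of the idea, not the paper's detector:

```python
# Illustrative reflector detection: retro-reflective markers return unusually high
# intensity, so threshold the intensity channel and cluster consecutive bright beams.
import numpy as np

def detect_reflectors(ranges, intensities, angle_increment, threshold=0.9):
    """Return (x, y) centers of clusters of consecutive high-intensity returns."""
    markers, cluster = [], []
    for i, (r, inten) in enumerate(zip(ranges, intensities)):
        if inten >= threshold:
            cluster.append((r, i * angle_increment))
        elif cluster:
            rs, angles = zip(*cluster)
            r_m, a_m = np.mean(rs), np.mean(angles)
            markers.append((r_m * np.cos(a_m), r_m * np.sin(a_m)))
            cluster = []
    return markers

ranges = np.full(360, 8.0)
intensities = np.full(360, 0.2)
intensities[90:93] = 0.95                     # a bright reflector around 90 degrees
print(detect_reflectors(ranges, intensities, angle_increment=np.pi / 180))
```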
This paper reports on a dynamic semantic mapping framework that incorporates 3D scene flow measurements into a closed-form Bayesian inference model. The presence of dynamic objects in the environment can cause artifacts and traces in current mapping algorithms, leading to inconsistent posterior maps. We leverage state-of-the-art deep-learning-based semantic segmentation and 3D flow estimation to provide measurements for map inference. We develop a Bayesian model that propagates the scene with the flow and infers a 3D continuous (i.e., queryable at arbitrary resolution) semantic occupancy map that outperforms its static counterpart. Extensive experiments using publicly available datasets show that the proposed framework improves over its predecessors and over the input measurements from the deep neural networks.
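As a simplified stand-in for the closed-form Bayesian update described above, the sketch below shifts a log-odds occupancy prior along a measured flow before fusing a new observation; the paper's continuous semantic model is considerably richer:

```python
# Simplified illustration: a standard log-odds occupancy update, with the prior
# shifted along a measured scene flow before fusing the new observation.
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def update_cell(prior_logodds, p_hit):
    """Fuse one observation into a cell using the standard log-odds rule."""
    return prior_logodds + logodds(p_hit)

grid = np.zeros((5, 5))                    # log-odds occupancy, 0 = unknown (p = 0.5)
grid[2, 2] = update_cell(grid[2, 2], 0.8)  # an object observed at cell (2, 2)

flow = (0, 1)                               # measured flow: object moves one cell right
grid = np.roll(grid, shift=flow, axis=(0, 1))   # propagate the prior with the flow
grid[2, 3] = update_cell(grid[2, 3], 0.8)   # re-observe the object at its new cell

prob = 1.0 / (1.0 + np.exp(-grid))
print(prob[2, 3])                           # belief that cell (2, 3) is occupied
```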
The environments of most real-world scenarios, such as shopping malls and supermarkets, change constantly. A pre-built map that does not account for these changes easily becomes outdated. Therefore, it is necessary to maintain an up-to-date model of the environment to facilitate the long-term operation of a robot. To this end, this paper presents a general lifelong simultaneous localization and mapping (SLAM) framework. Our framework uses a multi-session map representation and exploits an efficient map-updating strategy that includes map building, pose graph refinement, and sparsification. To mitigate the unbounded growth of memory usage, we propose a map pruning method based on the Chow-Liu maximum mutual information spanning tree. The proposed SLAM framework has been comprehensively validated through a month-long robot deployment in a real supermarket environment. Furthermore, we release a dataset collected from indoor and outdoor changing environments, in the hope of accelerating lifelong SLAM research in the community. Our dataset is available at https://github.com/sanduan168/lifelong-slam-dataset.
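The structure behind the Chow-Liu-based pruning can be sketched as a maximum-mutual-information spanning tree; the mutual-information values below are placeholders, not real estimates from a map:

```python
# Sketch of a Chow-Liu tree: keep only the maximum spanning tree over edges
# weighted by pairwise mutual information (Kruskal-style, with union-find).
def chow_liu_tree(nodes, mutual_info):
    """Maximum-mutual-information spanning tree over the given nodes."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]     # path compression
            n = parent[n]
        return n

    tree = []
    for (a, b), mi in sorted(mutual_info.items(), key=lambda kv: -kv[1]):
        ra, rb = find(a), find(b)
        if ra != rb:                          # edge joins two components: keep it
            parent[ra] = rb
            tree.append((a, b, mi))
    return tree

nodes = ["n0", "n1", "n2", "n3"]
mutual_info = {("n0", "n1"): 1.2, ("n1", "n2"): 0.9, ("n0", "n2"): 0.4,
               ("n2", "n3"): 1.1, ("n1", "n3"): 0.3}
print(chow_liu_tree(nodes, mutual_info))
```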
This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments -- outdoors, from urban to woodland, and indoors in warehouses and mines - without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves our state-of-the-art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160Hz.
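The core cost ingredients named above, a point-to-plane style residual between oriented surface points and a robust loss, can be sketched as follows; CFEAR's actual weighting, one-to-many matching, and solver are not reproduced here:

```python
# Sketch of a point-to-plane residual between a source point and a nearby oriented
# surface point, passed through a robust (Huber) loss to mitigate outliers.
import numpy as np

def point_to_plane_residual(src_point, tgt_point, tgt_normal):
    """Signed distance of src_point to the plane through tgt_point with tgt_normal."""
    return float(np.dot(src_point - tgt_point, tgt_normal))

def huber(r, delta=0.5):
    """Robust loss: quadratic near zero, linear for large residuals (outliers)."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

src = np.array([1.0, 2.1, 0.0])
tgt = np.array([1.0, 2.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])          # oriented surface point on the target

r = point_to_plane_residual(src, tgt, normal)
print("residual:", r, "robust cost:", huber(r))
```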
Conventional sensor-based localization relies on high-precision maps, which are generally built using specialized mapping techniques involving high labor and computational costs. In the architectural, engineering and construction industry, Building Information Models (BIM) are available and can provide informative descriptions of environments. This paper explores an effective way to localize a mobile 3D LiDAR sensor on BIM-generated maps considering both geometric and semantic properties. First, original BIM elements are converted to semantically augmented point cloud maps using categories and locations. After that, a coarse-to-fine semantic localization is performed to align laser points to the map based on iterative closest point registration. The experimental results show that the semantic localization can track the pose successfully with only one LiDAR sensor, thus demonstrating the feasibility of the proposed mapping-free localization framework. The results also show that using semantic information can help reduce localization errors on BIM-generated maps.
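The iterative closest point registration used for alignment can be sketched generically as below, without the coarse-to-fine stages or semantic weighting of the paper; the point clouds are synthetic stand-ins:

```python
# Generic point-to-point ICP: nearest neighbors, then best-fit rotation and
# translation via SVD (Kabsch), iterated until the clouds align.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, tgt_tree, tgt):
    """One ICP iteration: match nearest neighbors, then solve for R, t via SVD."""
    _, idx = tgt_tree.query(src)
    matched = tgt[idx]
    src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
    H = (src - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return src @ R.T + t, R, t

rng = np.random.default_rng(2)
tgt = rng.uniform(0, 5, size=(200, 3))             # stand-in for the map point cloud
src = (tgt + np.array([0.3, -0.2, 0.1]))[:150]     # translated, partially overlapping scan
tree = cKDTree(tgt)
for _ in range(20):
    src, R, t = icp_step(src, tree, tgt)
dists, _ = tree.query(src)
print("mean nearest-neighbor distance after ICP:", dists.mean())
```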
We present a method to generate, predict, and use spatiotemporal occupancy grid maps (SOGMs), which embed future semantic information of real dynamic scenes. We present an automated labeling process that creates SOGMs from noisy real navigation data. We use a 3D-2D feedforward architecture, trained to predict the future time steps of SOGMs given 3D LiDAR frames as input. Our pipeline is entirely self-supervised, thus enabling lifelong learning for real robots. The network is composed of a 3D back-end that extracts rich features and enables the semantic segmentation of the LiDAR frames, and a 2D front-end that predicts the future information embedded in the SOGM representation, potentially capturing the complexities and uncertainties of real-world multi-agent, multi-future interactions. We also design a navigation system that uses these predicted SOGMs within planning, after they have been transformed into spatiotemporal risk maps (SRMs). We verify our navigation system's abilities in simulation, validate it on a real robot, study SOGM predictions on real data in various circumstances, and provide a novel indoor 3D LiDAR dataset, collected during our experiments, which includes our automated annotations.
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community as survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper tries to provide a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the autonomous navigation field evolves quickly, so writing survey papers regularly is crucial to keep the research community well aware of the current status of this field. Second, deep learning methods have revolutionized many fields, including autonomous navigation. Therefore, it is necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well, which is covered in this paper. Future works and research gaps are also discussed.