The agricultural industry constantly seeks to automate the different processes involved in farming, such as sowing, harvesting, and weed control. The use of mobile autonomous robots to perform these tasks is of great interest. Arable fields present hard challenges for Simultaneous Localization and Mapping (SLAM) systems, key to mobile robotics, given the visual difficulty posed by highly repetitive scenes. In recent years, several Visual-Inertial Odometry (VIO) and SLAM systems have been developed. They have proven to be highly accurate in indoor and outdoor urban environments, but they have not been properly assessed in agricultural fields. In this work, we evaluate the most relevant state-of-the-art VIO systems in terms of accuracy and processing time on arable land in order to better understand how they behave in these environments. In particular, the evaluation is carried out on a sensor dataset recorded by our wheeled robot in a soybean field, which was publicly released as the Rosario dataset. The evaluation shows that the highly repetitive appearance of the environment, the strong vibrations produced by rough terrain, and the movement of leaves caused by wind expose the limitations of current state-of-the-art VIO and SLAM systems. We analyze the system failures and highlight the observed drawbacks, including initialization failures, tracking loss, and sensitivity to IMU saturation. Finally, we conclude that even though certain systems such as ORB-SLAM3 and S-MSCKF show good results with respect to the others, more improvements should be made to make them reliable in agricultural fields for certain applications such as soil tillage of crop rows and pesticide spraying.
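Accuracy comparisons of this kind are commonly reported as the absolute trajectory error (ATE). Below is a minimal generic sketch of that metric in Python, assuming time-synchronized Nx3 position arrays; the function name and layout are illustrative assumptions, not code from the evaluated systems:

```python
import numpy as np

def absolute_trajectory_error(gt_xyz, est_xyz):
    """RMSE of translational error after least-squares rigid alignment (Umeyama).

    gt_xyz, est_xyz: (N, 3) arrays of time-synchronized positions.
    """
    # Center both trajectories.
    mu_gt, mu_est = gt_xyz.mean(axis=0), est_xyz.mean(axis=0)
    X, Y = (est_xyz - mu_est).T, (gt_xyz - mu_gt).T
    # Optimal rotation via SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(Y @ X.T)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    aligned = (R @ est_xyz.T).T + t
    return np.sqrt(np.mean(np.sum((gt_xyz - aligned) ** 2, axis=1)))
```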
This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a feature-based tightly-integrated visual-inertial SLAM system that fully relies on Maximum-a-Posteriori (MAP) estimation, even during the IMU initialization phase. The result is a system that operates robustly in real time, in small and large, indoor and outdoor environments, and is two to ten times more accurate than previous approaches. The second main novelty is a multiple-map system that relies on a new place recognition method with improved recall. Thanks to it, ORB-SLAM3 is able to survive long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting mapped areas. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse all previous information in all the algorithm stages. This allows including in bundle adjustment co-visible keyframes that provide high-parallax observations and boost accuracy, even if they are widely separated in time or come from a previous mapping session. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature, and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.5 cm on the EuRoC drone dataset and 9 mm under quick hand-held motions in the room of the TUM-VI dataset, a setting representative of AR/VR scenarios. For the benefit of the community we make the source code public.
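The tightly-integrated visual-inertial MAP estimation mentioned above can be illustrated with the standard joint cost over keyframe states; the notation here is generic, not taken verbatim from the paper:

\[
\min_{\mathcal{X}}\;\sum_{k}\bigl\lVert r_{\mathcal{I}_{k,k+1}}(\mathcal{X})\bigr\rVert^{2}_{\Sigma_{\mathcal{I}}}\;+\;\sum_{i,j}\rho\Bigl(\bigl\lVert \mathbf{u}_{ij}-\pi\bigl(T_{CB}\,T_{WB_i}^{-1}\,\mathbf{x}_j\bigr)\bigr\rVert^{2}_{\Sigma_{ij}}\Bigr)
\]

where \(r_{\mathcal{I}}\) are preintegrated IMU residuals between consecutive keyframes, \(\pi\) the camera projection, \(T_{CB}\) the camera-IMU extrinsics, \(\mathbf{x}_j\) the map points, and \(\rho\) a robust Huber kernel.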
In this paper, we evaluate eight popular open-source 3D LiDAR and visual SLAM (Simultaneous Localization and Mapping) algorithms, namely LOAM, LeGO-LOAM, LIO-SAM, HDL-Graph, ORB-SLAM3, Basalt VIO, and SVO2. We designed indoor and outdoor experiments to study the impact of: i) the sensor mounting position, ii) the terrain type and vibrations, and iii) the motion (variations in linear and angular velocity). We compare their performance in terms of relative and absolute pose errors. We also provide a comparison of the computational resources they require. We thoroughly analyze and discuss the results using our multi-camera and multi-LiDAR indoor and outdoor datasets, and identify the best-performing system for each environmental case. We hope our findings help people choose a combination of sensors and a corresponding SLAM algorithm that suits their needs, based on the target environment.
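The relative pose error mentioned above complements the absolute metric by measuring local drift over a fixed frame offset. A minimal sketch, assuming time-synchronized lists of 4x4 homogeneous poses (a generic illustration, not the benchmark's code):

```python
import numpy as np

def relative_pose_error(gt_poses, est_poses, delta=1):
    """Translational RPE (RMSE) over a fixed frame offset `delta`.

    gt_poses, est_poses: lists of 4x4 homogeneous poses, time-synchronized.
    """
    errs = []
    for i in range(len(gt_poses) - delta):
        dg = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        de = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        e = np.linalg.inv(dg) @ de          # residual relative motion
        errs.append(np.linalg.norm(e[:3, 3]))
    return np.sqrt(np.mean(np.square(errs)))
```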
A monocular visual-inertial system (VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), forms the minimum sensor suite for metric six degrees-of-freedom (DOF) state estimation. However, the lack of direct distance measurement poses significant challenges in terms of IMU processing, estimator initialization, extrinsic calibration, and nonlinear optimization. In this work, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization and failure recovery. A tightly-coupled, nonlinear optimization-based method is used to obtain high-accuracy visual-inertial odometry by fusing pre-integrated IMU measurements and feature observations. A loop detection module, in combination with our tightly-coupled formulation, enables relocalization with minimum computation overhead. We additionally perform four degrees-of-freedom pose graph optimization to enforce global consistency. We validate the performance of our system on public datasets and real-world experiments and compare against other state-of-the-art algorithms. We also perform onboard closed-loop autonomous flight on the MAV platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable for different applications that require high-accuracy localization. We open source our implementations for both PCs and iOS mobile devices.
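The pre-integrated IMU measurements referred to above compress the gyroscope and accelerometer samples between keyframes \(i\) and \(j\) into relative motion terms that are independent of the global pose, so they need not be re-integrated when the linearization point changes. In generic notation (bias-corrected, noise omitted):

\[
\Delta R_{ij}=\prod_{k=i}^{j-1}\operatorname{Exp}\bigl((\boldsymbol{\omega}_k-\mathbf{b}_g)\,\Delta t\bigr),\qquad
\Delta\mathbf{v}_{ij}=\sum_{k=i}^{j-1}\Delta R_{ik}\,(\mathbf{a}_k-\mathbf{b}_a)\,\Delta t,
\]
\[
\Delta\mathbf{p}_{ij}=\sum_{k=i}^{j-1}\Bigl(\Delta\mathbf{v}_{ik}\,\Delta t+\tfrac{1}{2}\,\Delta R_{ik}\,(\mathbf{a}_k-\mathbf{b}_a)\,\Delta t^{2}\Bigr),
\]

where \(\boldsymbol{\omega}_k, \mathbf{a}_k\) are the raw IMU samples and \(\mathbf{b}_g, \mathbf{b}_a\) the gyroscope and accelerometer biases.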
This paper reports on the state of the art in underground SLAM by discussing the different SLAM strategies and results of six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is put on LiDAR-centric SLAM solutions (the preferred approach for virtually all teams in the competition), heterogeneous multi-robot operation (including aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems, which are likely to require further research to be solved. Finally, we provide a list of open-source SLAM implementations and datasets produced during the SubT challenge and related efforts, which constitute a useful resource for researchers and practitioners.
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state-of-the-art and the challenges that must be tackled in the future. This paper tries to provide a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The urge to present a survey paper is twofold. First, the autonomous navigation field evolves fast, so writing survey papers regularly is crucial to keep the research community well aware of the current status of this field. Second, deep learning methods have revolutionized many fields, including autonomous navigation. Therefore, it is necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well, which is covered in this paper. Future works and research gaps will also be discussed.
Event cameras are motion-activated sensors that capture pixel-level illumination changes instead of intensity images at a fixed frame rate. Compared with standard cameras, they can provide reliable visual perception during high-speed motion and in high-dynamic-range scenes. However, when the relative motion between the camera and the scene is limited, such as in a static state, event cameras output only little information or even noise, whereas standard cameras can provide rich perception information in most scenarios, especially under good lighting conditions. The two cameras are exactly complementary. In this paper, we propose a robust, high-accuracy, and real-time optimization-based event-based visual-inertial odometry (VIO) method with event-corner features, line-based event features, and point-based image features. The proposed method is designed to leverage point-based features in natural scenes and line-based features in human-made scenes, providing additional structure and constraint information through well-designed feature management. Experiments on public benchmark datasets show that our method can achieve superior performance compared with image-based or event-based VIO. Finally, we use our method to demonstrate onboard closed-loop autonomous quadrotor flight and large-scale outdoor experiments. Videos of the evaluations are presented on our project website: https://b23.tv/oe3qm6j
We present a multi-camera visual-inertial odometry system based on factor graph optimization which estimates motion by using all cameras simultaneously while retaining a fixed overall feature budget. We focus on motion tracking in challenging environments such as narrow corridors, dark spaces with aggressive motions, and abrupt lighting changes. These scenarios cause traditional monocular or stereo odometry to fail. While tracking motion with extra cameras should in theory prevent such failures, it leads to additional complexity and computational burden. To overcome these challenges, we introduce two novel methods to improve multi-camera feature tracking. First, instead of tracking features separately in each camera, we track features continuously as they move from one camera to another. This increases accuracy and achieves a more compact factor graph representation. Second, we select a fixed budget of tracked features across the cameras to reduce back-end optimization time. We find that using a smaller set of informative features can maintain the same tracking accuracy. Our proposed method was extensively tested using a hardware-synchronized device consisting of an IMU and four cameras (a front stereo pair and two lateral cameras) in scenarios including an underground mine, large open spaces, and building interiors with narrow stairs and corridors. Compared to stereo-only state-of-the-art visual-inertial odometry methods, our approach reduces the drift rate (relative pose error) by up to 80% in translation and 39% in rotation.
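As a rough illustration of how a fixed cross-camera feature budget of the kind described above might be enforced, here is a hypothetical Python sketch; the ranking score, interfaces, and two-pass scheme are assumptions, not the authors' selection criterion:

```python
def select_feature_budget(tracks, budget, num_cams):
    """Keep a fixed total budget of feature tracks, spread across cameras.

    tracks: list of (camera_id, score) pairs with camera_id in [0, num_cams);
    `score` stands in for an informativeness criterion (e.g. track length).
    """
    order = sorted(range(len(tracks)), key=lambda i: tracks[i][1], reverse=True)
    per_cam_cap = max(1, budget // num_cams)
    per_cam = {c: 0 for c in range(num_cams)}
    kept, skipped = [], []
    # First pass: best-scoring tracks per camera, up to an even share each.
    for i in order:
        cam = tracks[i][0]
        if per_cam[cam] < per_cam_cap and len(kept) < budget:
            kept.append(i)
            per_cam[cam] += 1
        else:
            skipped.append(i)
    # Second pass: fill any remaining budget with the best leftovers.
    kept.extend(skipped[: budget - len(kept)])
    return [tracks[i] for i in kept]
```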
Simultaneous Localization and Mapping (SLAM) is crucial for autonomous robots (e.g., self-driving cars, autonomous drones), 3D mapping systems, and AR/VR applications. This work presents a novel LiDAR-inertial-visual fusion framework, termed R$^3$LIVE++, that achieves robust and accurate state estimation while simultaneously reconstructing a radiance map on the fly. R$^3$LIVE++ consists of a LiDAR-inertial odometry (LIO) and a visual-inertial odometry (VIO), both running in real time. The LIO subsystem utilizes measurements from the LiDAR to reconstruct the geometric structure (i.e., the positions of 3D points), while the VIO subsystem simultaneously recovers the radiance information of the geometric structure from the input images. R$^3$LIVE++ is developed based on R$^3$LIVE and further improves the accuracy of localization and mapping by accounting for camera photometric calibration (e.g., the nonlinear response function and lens vignetting) and by estimating the camera exposure time online. We conduct extensive experiments on public and private datasets to compare our proposed system against other state-of-the-art SLAM systems. Quantitative and qualitative results show that our proposed system has significant improvements over the others in both accuracy and robustness. Moreover, to demonstrate the extendability of our work, we develop several applications based on the reconstructed radiance maps, such as high-dynamic-range (HDR) imaging, virtual environment exploration, and 3D video games. Finally, to share our findings and contribute to the community, we make our code, hardware design, and datasets publicly available on GitHub: github.com/hku-mars/r3live
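The photometric-calibration terms mentioned above (response function, vignetting, exposure time) follow the standard image-formation model; in generic notation:

\[
I(\mathbf{u}) \;=\; f\bigl(\,t\,V(\mathbf{u})\,L(\mathbf{u})\,\bigr)
\]

where \(f\) is the nonlinear camera response, \(t\) the exposure time, \(V(\mathbf{u})\) the per-pixel vignetting attenuation, and \(L(\mathbf{u})\) the scene radiance. Inverting this model is what allows a system of this kind to store radiance rather than raw image intensity in the map.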
This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.
The development processes of high-fidelity SLAM systems depend on their validation on reliable datasets. Towards this goal, we propose IBISCape, a simulated benchmark that includes data synchronization and acquisition APIs for heterogeneous sensors: stereo-RGB/DVS, depth, IMU, and GPS, along with ground-truth scene segmentation and vehicle ego-motion. Our benchmark is built on the CARLA simulator, whose back-end is the Unreal Engine, rendering highly dynamic scenery that simulates the real world. Moreover, we offer 34 multi-modal datasets suitable for autonomous-vehicle navigation, including scenarios for scene-understanding evaluation such as accidents, along with a wide range of frame qualities based on a dynamic weather simulation class integrated with our APIs. We also introduce the first calibration targets into the CARLA maps to solve the problem of unknown distortion parameters in CARLA's simulated DVS and RGB cameras. Finally, using IBISCape sequences, we evaluate the performance of four ORB-SLAM3 configurations (monocular RGB, stereo RGB, stereo visual-inertial (SVI), and RGB-D) and the Basalt visual-inertial odometry (VIO) system on various sequences collected in simulated large-scale dynamic environments. Keywords: benchmark, multi-modal, datasets, odometry, calibration, DVS, SLAM
Figure caption: (a) Stereo input: trajectory and sparse reconstruction of an urban environment with multiple loop closures. (b) RGB-D input: keyframes and dense pointcloud of a room scene with one loop closure. The pointcloud is rendered by backprojecting the sensor depth maps from estimated keyframe poses. No fusion is performed.
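A minimal generic sketch of the backprojection step described in this caption, assuming a pinhole intrinsic matrix K and camera-to-world keyframe poses (an illustration, not the system's code):

```python
import numpy as np

def backproject_depth(depth, K, T_wc):
    """Lift a depth image to world-frame 3D points.

    depth: (H, W) metric depth; K: 3x3 intrinsics; T_wc: 4x4 camera-to-world pose.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])[:, valid]
    pts_cam = np.linalg.inv(K) @ pix * z[valid]        # 3xN camera-frame points
    pts_h = np.vstack([pts_cam, np.ones(pts_cam.shape[1])])
    return (T_wc @ pts_h)[:3].T                        # Nx3 world-frame points
```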
This paper introduces the Online Localization and Colored Mesh Reconstruction (OLCMR) ROS perception architecture for ground exploration robots, aimed at performing robust Simultaneous Localization and Mapping (SLAM) in challenging unknown environments and providing an associated colored 3D mesh representation in real time. It is intended to be used by a remote human operator to easily visualize the mapped environment during or after the mission, or as a development base for further research in the field of exploration robotics. The architecture mainly consists of carefully selected open-source ROS implementations of LiDAR-based SLAM algorithms, alongside a colored surface reconstruction procedure using point clouds and RGB camera images projected into 3D space. The overall performance is evaluated on the Newer College handheld LiDAR-vision reference dataset and on two experimental trajectories collected with a representative wheeled robot in urban and countryside outdoor environments, respectively. Index terms: field robotics, mapping, SLAM, colored surface reconstruction
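The color-projection step mentioned above can be illustrated by assigning colors to 3D points from a single posed RGB image; this is a hypothetical sketch with assumed interfaces, not the OLCMR implementation:

```python
import numpy as np

def colorize_points(points_w, image, K, T_cw):
    """Assign RGB colors to world-frame points by projecting into one image.

    points_w: (N, 3) world points; image: (H, W, 3) uint8 RGB;
    K: 3x3 intrinsics; T_cw: 4x4 world-to-camera transform.
    """
    H, W = image.shape[:2]
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_cw @ pts_h.T)[:3].T                 # (N, 3) camera-frame points
    colors = np.zeros((len(points_w), 3))
    ok = pts_c[:, 2] > 1e-6                        # keep points in front of camera
    pix = K @ pts_c[ok].T
    u = np.round(pix[0] / pix[2]).astype(int)
    v = np.round(pix[1] / pix[2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(ok)[inside]
    colors[idx] = image[v[inside], u[inside]] / 255.0
    return colors
```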
Integration of multiple sensor modalities and deep learning into Simultaneous Localization And Mapping (SLAM) systems are areas of significant interest in current research. Multi-modality is a stepping stone towards achieving robustness in challenging environments and interoperability of heterogeneous multi-robot systems with varying sensor setups. With maplab 2.0, we provide a versatile open-source platform that facilitates developing, testing, and integrating new modules and features into a fully-fledged SLAM system. Through extensive experiments, we show that maplab 2.0's accuracy is comparable to the state-of-the-art on the HILTI 2021 benchmark. Additionally, we showcase the flexibility of our system with three use cases: i) large-scale (approx. 10 km) multi-robot multi-session (23 missions) mapping, ii) integration of non-visual landmarks, and iii) incorporating a semantic object-based loop closure module into the mapping framework. The code is available open-source at https://github.com/ethz-asl/maplab.
Event cameras that asynchronously output low-latency event streams provide great opportunities for state estimation under challenging situations. Despite event-based visual odometry having been extensively studied in recent years, most approaches are monocular, and there is little research on stereo event vision. In this paper, we present ESVIO, the first event-based stereo visual-inertial odometry, which leverages the complementary advantages of event streams, standard images, and inertial measurements. Our proposed pipeline achieves temporal tracking and instantaneous matching between consecutive stereo event streams, thereby obtaining robust state estimation. In addition, a motion compensation method is designed to emphasize the edges of scenes by warping each event to reference moments using the IMU and the ESVIO back-end. We validate that both ESIO (purely event-based) and ESVIO (event with image-aided) have superior performance compared with other image-based and event-based baseline methods on public and self-collected datasets. Furthermore, we use our pipeline to perform onboard quadrotor flights under low-light environments. A real-world large-scale experiment is also conducted to demonstrate long-term effectiveness. We highlight that this work is a real-time, accurate system that is aimed at robust state estimation under challenging environments.
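As a rough illustration of IMU-based event motion compensation of the kind described above, here is a rotation-only simplification in Python; the constant-angular-velocity assumption and interfaces are illustrative, not the authors' pipeline:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def warp_events_rotational(events, K, omega, t_ref):
    """Rotation-only motion compensation of events to a reference time.

    events: (N, 3) array of (x, y, t); K: 3x3 intrinsics;
    omega: (3,) angular velocity from the IMU gyro (rad/s), assumed constant
    over the window -- a simplification of full IMU propagation.
    """
    Kinv = np.linalg.inv(K)
    warped = np.empty((len(events), 2))
    for i, (x, y, t) in enumerate(events):
        R = Rotation.from_rotvec(omega * (t_ref - t)).as_matrix()
        ray = R @ Kinv @ np.array([x, y, 1.0])   # rotate the bearing ray
        pix = K @ ray
        warped[i] = pix[:2] / pix[2]             # back to pixel coordinates
    return warped
```

Aggregating the warped events into an image sharpens scene edges when the motion estimate is accurate, which is what makes the compensated frames useful for feature tracking.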
We propose a novel approach for fast and accurate stereo visual Simultaneous Localization and Mapping (SLAM) that is independent of feature detection and matching. We extend monocular Direct Sparse Odometry (DSO) to a stereo system by optimizing the scale of the 3D points to minimize the photometric error for the stereo configuration, yielding a computationally efficient and robust method compared with conventional stereo matching. We further extend it into a full SLAM system with loop closure to reduce accumulated errors. Under the assumption of forward camera motion, we imitate a LiDAR scan using the 3D points obtained from visual odometry and adapt a LiDAR descriptor for place recognition to facilitate more efficient detection of loop closures. Afterwards, we estimate the relative pose using direct alignment by minimizing the photometric error for potential loop closures. Optionally, further improvement over direct alignment is achieved by using the Iterative Closest Point (ICP) algorithm. Finally, we optimize a pose graph to improve SLAM accuracy globally. By avoiding feature detection and matching in our SLAM system, we ensure high computational efficiency and robustness. Thorough experiments on public datasets demonstrate its effectiveness compared with state-of-the-art approaches.
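The photometric error minimized by DSO-style direct methods, on which this line of work builds, has the standard form:

\[
E = \sum_{\mathbf{p}\in\mathcal{P}} w_{\mathbf{p}}\,\rho\!\left(\bigl(I_j[\mathbf{p}'] - b_j\bigr) - \frac{t_j\,e^{a_j}}{t_i\,e^{a_i}}\bigl(I_i[\mathbf{p}] - b_i\bigr)\right)
\]

where \(\mathbf{p}'\) is \(\mathbf{p}\) reprojected into frame \(j\) via the current pose and inverse depth, \(t_i, t_j\) are exposure times, \(a, b\) affine brightness parameters, \(w_{\mathbf{p}}\) a gradient-dependent weight, and \(\rho\) the Huber norm.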
In this paper, we present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset. The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions. While significant progress has been made in advancing visual SLAM on small-scale datasets with similar conditions, there is still a lack of unified benchmarks representative of real-world scenarios for autonomous driving. We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance which is crucial to successfully enable autonomous driving in any condition. The data has been collected for more than one year, resulting in more than 300 km of recordings in nine different environments ranging from a multi-level parking garage to urban (including tunnels) to countryside and highway. We provide globally consistent reference poses with up to centimeter-level accuracy obtained from the fusion of direct stereo-inertial odometry with RTK GNSS. We evaluate the performance of several state-of-the-art visual odometry and visual localization baseline approaches on the benchmark and analyze their properties. The experimental results provide new insights into current approaches and show promising potential for future research. Our benchmark and evaluation protocols will be available at https://www.4seasons-dataset.com/.
We present the DLR Planetary Stereo, Solid-State LiDAR, Inertial (S3LI) dataset, recorded on Mount Etna, Sicily, an environment analogous to the Moon and Mars, using a hand-held sensor suite with attributes suitable for space-qualified mobile rovers. The environment is characterized by challenging conditions regarding both visual and structural appearance: severe visual aliasing poses significant limitations on the ability of visual SLAM systems to perform place recognition, while the lack of outstanding structural details, combined with the limited field of view of the utilized solid-state LiDAR sensor, challenges traditional LiDAR SLAM relying on point clouds alone. With this data, covering more than 4 kilometers of travel on soft volcanic slopes, we aim to: 1) provide a tool to expose the limits of state-of-the-art SLAM systems with respect to environments for which widely available datasets do not exist, and 2) motivate the development of novel localization and mapping approaches that efficiently exploit the complementary capabilities of the two sensors. The dataset is accessible at the following URL: https://rmc.dlr.de/s3li_dataset
While dense visual SLAM methods are capable of estimating dense reconstructions of the environment, they suffer from a lack of robustness in their tracking step, especially when the optimization is poorly initialized. Sparse visual SLAM systems have attained high levels of accuracy and robustness through the inclusion of inertial measurements in a tightly-coupled fusion. Inspired by this performance, we propose the first tightly-coupled dense RGB-D-inertial SLAM system. Our system has real-time capability while running on a GPU. It jointly optimizes the camera pose, velocity, IMU biases, and gravity direction while building up a globally consistent, fully dense surfel-based 3D reconstruction of the environment. Through a series of experiments on both synthetic and real-world datasets, we show that our dense visual-inertial SLAM system is more robust to fast motions and periods of low texture and low geometric variation than a related RGB-D-only SLAM system.
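The joint optimization described above can be summarized, in generic notation (the symbols and weighting are schematic, not taken from the paper), as estimating the state

\[
\mathcal{X}=\{\,T_{WB},\ \mathbf{v}_W,\ \mathbf{b}_g,\ \mathbf{b}_a,\ \mathbf{g}_W\,\},\qquad
\min_{\mathcal{X}}\; E_{\text{dense}}(\mathcal{X}) + \lambda\,E_{\text{IMU}}(\mathcal{X}),
\]

where \(E_{\text{dense}}\) combines photometric and geometric (ICP-style) residuals of the current frame against the reconstruction, and \(E_{\text{IMU}}\) constrains consecutive states through integrated inertial measurements, which is what keeps tracking stable when the dense term is poorly conditioned.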
This paper presents the CERBERUS robotic system-of-systems, which won the DARPA Subterranean Challenge Final Event, pushing the frontier of robotic autonomy. Underground settings make autonomous operations particularly demanding due to their geometric complexity, degraded perceptual conditions, lack of GPS support, harsh navigation conditions, and denied communications. To tackle this challenge, we developed the CERBERUS system, which exploits the synergy of legged and flying robots, coupled with robust control, especially for overcoming perilous terrain, multi-modal and multi-robot perception for localization and mapping under conditions of sensor degradation, and resilient autonomy through unified exploration path planning and local motion planning that reflects robot-specific limitations. Based on its ability to explore diverse underground environments and its high-level command and control, CERBERUS demonstrated efficient exploration, reliable detection of objects of interest, and accurate mapping. In this article, we report results from both the preliminary runs and the final prize round of the DARPA Subterranean Challenge, and discuss highlights and challenges faced, along with lessons learned for the benefit of the community.