Visual Inertial Odometry (VIO) is the problem of estimating a robot's trajectory by combining information from an inertial measurement unit (IMU) and a camera, and is of great interest to the robotics community. This paper develops a novel Lie group symmetry for the VIO problem and applies the recently proposed equivariant filter. The symmetry is shown to be compatible with the invariance of the VIO reference frame, lead to exact linearisation of bias-free IMU dynamics, and provide equivariance of the visual measurement function. As a result, the equivariant filter (EqF) based on this Lie group is a consistent estimator for VIO with lower linearisation error in the propagation of state dynamics and a higher order equivariant output approximation than standard formulations. Experimental results on the popular EuRoC and UZH FPV datasets demonstrate that the proposed system outperforms other state-of-the-art VIO algorithms in terms of both speed and accuracy.
Stochastic filters for online state estimation are a core technology for autonomous systems. The performance of such filters is one of the key limiting factors of a system's capability. Both the asymptotic behaviour of such filters (e.g., for regular operation) and their transient response (e.g., for fast initialisation and reset) are critical for guaranteeing robust operation of an autonomous system. This paper introduces a new generic formulation of a gyroscope-aided attitude estimator using n direction measurements, including both body-frame and reference-frame direction-type measurements. The approach is based on an integrated state formulation that combines navigation, the extrinsic calibration of all direction sensors, and gyroscope bias states in a single equivariant geometric structure. This newly proposed symmetry allows the modular addition of different direction measurements and their extrinsic calibration while maintaining the ability to include bias states in the same symmetry. The subsequent filter-based estimator using this symmetry shows a significant improvement in transient response, as well as in asymptotic bias and extrinsic calibration estimation, compared to state-of-the-art approaches. The estimator is verified in statistically representative simulations and tested in real-world experiments.
Pose estimation is important for robotic perception, path planning, and related tasks. Robot poses can be modelled on matrix Lie groups and are usually estimated via filter-based methods. In this paper, we establish the error formulation of the invariant extended Kalman filter (IEKF) in the presence of random noise and apply it to vision-aided inertial navigation. We evaluate our algorithm via numerical simulations and experiments on the OpenVINS platform. Both the simulations and the experiments performed on the public EuRoC MAV dataset show that our algorithm outperforms several state-of-the-art filter-based methods, such as the quaternion-based EKF and the first-estimates Jacobian EKF.
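The abstract above builds an error formulation for filtering on matrix Lie groups. As a minimal, self-contained illustration of the underlying idea (not the paper's algorithm; all function names are ours), the sketch below implements the SO(3) exponential map via the Rodrigues formula and checks the defining property of a right-invariant attitude error: it is unchanged when the true and estimated rotations are both transformed by the same right group action.

```python
import math

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    x, y, z = w
    return [[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_T(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def so3_exp(w):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    th = math.sqrt(sum(c * c for c in w))
    if th < 1e-12:
        return [row[:] for row in I3]
    W = hat(w)
    WW = mat_mul(W, W)
    a = math.sin(th) / th
    b = (1.0 - math.cos(th)) / th ** 2
    return [[I3[i][j] + a * W[i][j] + b * WW[i][j] for j in range(3)]
            for i in range(3)]

def right_error(R_hat, R):
    """Right-invariant attitude error: eta = R_hat * R^T."""
    return mat_mul(R_hat, mat_T(R))

# The error is unchanged when truth and estimate are both perturbed on the
# right by the same group element G: (R_hat G)(R G)^T = R_hat R^T.
R     = so3_exp([0.3, -0.2, 0.1])
R_hat = so3_exp([0.32, -0.18, 0.12])
G     = so3_exp([1.0, 0.5, -0.7])
e1 = right_error(R_hat, R)
e2 = right_error(mat_mul(R_hat, G), mat_mul(R, G))
```

This invariance is what lets such filters use a single error definition regardless of the frame in which the trajectory is expressed.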
High-performance trajectory-tracking control of quadrotor vehicles is an important challenge in aerial robotics. Symmetry is a fundamental property of physical systems and has the potential to provide tools for the design of high-performance control algorithms. We propose a design methodology that takes any given symmetry, linearises the associated error in a set of coordinates, and applies an LQR design to obtain a high-performance regulator. We show that the quadrotor vehicle admits several different symmetries: the direct-product symmetry, the extended-pose symmetry, and the pose-and-velocity symmetry, and show that each symmetry can be used to define a global error. We compare the linearised systems in simulation and find that the extended-pose and pose-and-velocity symmetries outperform the direct-product symmetry in the presence of large disturbances. This suggests that the choice of symmetry, and group-affine symmetry in particular, leads to improved linearisation error.
Camera-IMU (inertial measurement unit) sensor fusion has been extensively studied in recent decades. Many observability analyses and fusion schemes with motion estimation and self-calibration have been proposed. However, it has remained unclear whether both camera and IMU intrinsic parameters are observable under general motion. To answer this question, we first prove that, for a global-shutter camera-IMU system, all intrinsic and extrinsic parameters are observable with an unknown landmark. Given this, the time offset and readout time of a rolling-shutter (RS) camera are also proven to be observable. Next, to validate this analysis and to solve the drift problem of a structureless filter during standstills, we develop a keyframe-based sliding-window filter (KSWF) for odometry and self-calibration, which works with a monocular RS camera or stereo RS cameras. Although the keyframe concept is widely used in vision-based sensor fusion, to our knowledge the KSWF is the first to support self-calibration. Our simulation and real-data tests validate that it is possible to fully calibrate the camera-IMU system using observations of opportunistic landmarks under diverse motion. Real-data tests confirm the earlier claim that keeping landmarks in the state vector can compensate for standstill drift, and show that the keyframe-based scheme is an alternative remedy.
A monocular visual-inertial system (VINS), consisting of a camera and a low-cost inertial measurement unit (IMU), forms the minimum sensor suite for metric six degrees-of-freedom (DOF) state estimation. However, the lack of direct distance measurement poses significant challenges in terms of IMU processing, estimator initialization, extrinsic calibration, and nonlinear optimization. In this work, we present VINS-Mono: a robust and versatile monocular visual-inertial state estimator. Our approach starts with a robust procedure for estimator initialization and failure recovery. A tightly-coupled, nonlinear optimization-based method is used to obtain high-accuracy visual-inertial odometry by fusing pre-integrated IMU measurements and feature observations. A loop detection module, in combination with our tightly-coupled formulation, enables relocalization with minimum computation overhead. We additionally perform four degrees-of-freedom pose graph optimization to enforce global consistency. We validate the performance of our system on public datasets and in real-world experiments, and compare against other state-of-the-art algorithms. We also perform onboard closed-loop autonomous flight on an MAV platform and port the algorithm to an iOS-based demonstration. We highlight that the proposed work is a reliable, complete, and versatile system that is applicable to different applications that require high-accuracy localization. We open source our implementations for both PCs and iOS mobile devices.
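VINS-Mono fuses pre-integrated IMU measurements with feature observations. As a hedged, toy illustration of the preintegration idea only (1-D, translation-only, with scalar bias and gravity; this is not the VINS-Mono implementation, and all names are ours), the sketch below accumulates accelerometer samples into relative deltas that are independent of the starting state, so they can be reused when the keyframe estimate changes:

```python
from dataclasses import dataclass

@dataclass
class Preintegrated:
    """Relative motion accumulated between two keyframes (1-D toy model)."""
    dt: float = 0.0  # total integration time
    dv: float = 0.0  # accumulated velocity change
    dp: float = 0.0  # accumulated position change

def integrate(pre, accel, bias, step):
    """Fold one bias-corrected accelerometer sample into the deltas."""
    a = accel - bias
    pre.dp += pre.dv * step + 0.5 * a * step * step
    pre.dv += a * step
    pre.dt += step
    return pre

def predict(p0, v0, g, pre):
    """Predict the next keyframe state from the deltas; p0, v0, g enter
    only here, so re-predicting after an estimator update is cheap."""
    v1 = v0 + g * pre.dt + pre.dv
    p1 = p0 + v0 * pre.dt + 0.5 * g * pre.dt ** 2 + pre.dp
    return p1, v1

# Constant acceleration of 1 m/s^2 for 1 s, sampled at 100 Hz.
pre = Preintegrated()
for _ in range(100):
    pre = integrate(pre, accel=1.0, bias=0.0, step=0.01)
```

For this noise-free constant-acceleration case the deltas match the analytic values dv = a*t = 1.0 and dp = 0.5*a*t^2 = 0.5.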
While many works have exploited existing Lie group structure for state estimation, most notably the invariant extended Kalman filter (IEKF), few papers address the construction of a group structure that allows a given system to fit into the IEKF framework, i.e., that makes the dynamics group-affine and the observations invariant. In this paper, we introduce a large class of systems encompassing most problems involving navigating vehicles encountered in practice. For these systems, we introduce a novel methodology that systematically provides a group structure for the state space, including vectors in the body frame such as biases. We use it to derive observers having properties akin to those of linear observers or filters. The proposed unifying and versatile framework encompasses all systems where the IEKF has proved successful, improves on the state-of-the-art "imperfect" IEKF for inertial navigation with sensor biases, and allows addressing novel examples, such as GNSS antenna lever-arm estimation.
Estimation algorithms, such as the sliding window filter, produce an estimate and uncertainty of desired states. This task becomes challenging when the problem involves unobservable states. In these situations, it is critical for the algorithm to ``know what it doesn't know'', meaning that it must maintain the unobservable states as unobservable during algorithm deployment. This letter presents general requirements for maintaining consistency in sliding window filters involving unobservable states. The value of these requirements when designing a navigation solution is experimentally shown within the context of visual-inertial SLAM making use of IMU preintegration.
We propose a multi-camera visual-inertial odometry system based on factor graph optimisation which estimates motion by using all cameras simultaneously while retaining a fixed overall feature budget. We focus on motion tracking in challenging environments, such as narrow corridors and dark spaces with aggressive motions and abrupt lighting changes. These scenarios cause traditional monocular or stereo odometry to fail. While tracking motion with extra cameras should help in theory, it brings additional complexity and computational burden. To overcome these challenges, we introduce two novel methods to improve multi-camera feature tracking. First, instead of tracking features separately in each camera, we track features continuously as they move from one camera to another. This increases accuracy and achieves a more compact factor graph representation. Second, we select a fixed budget of tracked features across the cameras to reduce back-end optimisation time. We have found that using a smaller set of informative features can maintain the same tracking accuracy. Our proposed method was extensively tested using a hardware-synchronised device consisting of an IMU and four cameras (a front stereo pair and two lateral cameras) in scenarios including an underground mine, large open spaces, and building interiors with narrow stairs and corridors. Compared to stereo-only state-of-the-art visual-inertial odometry methods, our approach reduces the drift rate (relative pose error) by up to 80% in translation and 39% in rotation.
Although dense visual SLAM methods are able to estimate a dense reconstruction of the environment, their tracking step lacks robustness, especially when the optimisation is poorly initialised. Sparse visual SLAM systems have attained high levels of accuracy and robustness by including inertial measurements in a tightly-coupled fusion. Inspired by this performance, we propose the first tightly-coupled dense RGB-D-inertial SLAM system. Our system is real-time capable while running on a GPU. It jointly optimises the camera pose, velocity, IMU biases, and gravity direction while building a globally consistent, fully dense surface-based 3D reconstruction of the environment. Through a series of experiments on both synthetic and real-world datasets, we show that our dense visual-inertial SLAM system is more robust to fast motions and to periods of low texture and low geometric variation than a related RGB-D-only SLAM system.
This paper presents lidar-based simultaneous localisation and mapping (SLAM) for automated vehicles. Data from a landmark sensor and a strap-down inertial measurement unit (IMU) are fused in an adaptive Kalman filter (KF), and the observability of the system is investigated. In addition to the vehicle states and landmark positions, the self-tuning filter estimates the IMU calibration parameters as well as the covariance of the measurement noise. The discrete-time covariance matrix of the process noise, the state transition matrix, and the observation sensitivity matrix are derived in closed form, making them suitable for real-time implementation. Examination of the observability of the 3D SLAM system leads to the conclusion that the system remains observable under a geometric condition on the landmark alignment.
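The abstract above highlights closed-form discrete-time process-noise covariances for real-time use. As an illustrative example only (a standard 1-D constant-velocity model driven by white acceleration noise, not the paper's vehicle model), the well-known closed form is Qd = [[q*dt^3/3, q*dt^2/2], [q*dt^2/2, q*dt]]; the sketch below checks it against direct numerical integration of the defining integral:

```python
def cv_discretization(q, dt):
    """Closed-form state-transition matrix Phi and discrete process noise Qd
    for xdot = [[0,1],[0,0]] x + [0,1]^T w, with white-noise intensity q."""
    Phi = [[1.0, dt], [0.0, 1.0]]
    Qd = [[q * dt ** 3 / 3.0, q * dt ** 2 / 2.0],
          [q * dt ** 2 / 2.0, q * dt]]
    return Phi, Qd

def qd_numeric(q, dt, n=10000):
    """Midpoint-rule evaluation of Qd = int_0^dt Phi(t) G q G^T Phi(t)^T dt.
    For this model, Phi(t) G = [t, 1]^T."""
    h = dt / n
    Qd = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(n):
        t = (k + 0.5) * h
        col = [t, 1.0]
        for i in range(2):
            for j in range(2):
                Qd[i][j] += q * col[i] * col[j] * h
    return Qd

Phi, Qd = cv_discretization(q=2.0, dt=0.1)
Qn = qd_numeric(q=2.0, dt=0.1)
```

Having Qd in closed form avoids this per-step numerical integration inside the filter loop, which is the real-time benefit the abstract refers to.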
We propose AstroSLAM, a standalone vision-based solution for autonomous online navigation around an unknown target small celestial body. AstroSLAM is predicated on the formulation of the SLAM problem as an incrementally growing factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine. By combining sensor fusion with orbital motion priors, we achieve improved performance over a baseline SLAM solution. We incorporate orbital motion constraints into the factor graph by devising a novel relative dynamics factor, which links the relative pose of the spacecraft to the problem of predicting trajectories stemming from the motion of the spacecraft in the vicinity of the small body. We demonstrate the excellent performance of AstroSLAM using both real legacy mission imagery and trajectory data courtesy of NASA's Planetary Data System, as well as real in-lab imagery data generated on a 3 degree-of-freedom spacecraft simulator test-bed.
In this paper, we propose a novel observer to address the problem of visual simultaneous localisation and mapping (SLAM), using only information from a monocular camera and an inertial measurement unit (IMU). The system state evolves on the manifold $SE(3)\times\mathbb{R}^{3n}$, on which we carefully design dynamic extensions to generate invariant foliations, such that the problem is recast as online \emph{constant parameter} identification. Then, following the recently introduced parameter estimation-based observer (PEBO) and the dynamic regressor extension and mixing (DREM) procedure, we provide a new and simple solution. A notable merit is that the proposed observer guarantees almost global asymptotic stability while requiring neither persistency of excitation nor uniform complete observability, which, in contrast, are widely adopted in most existing works to guarantee stability.
This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a feature-based tightly-integrated visual-inertial SLAM system that fully relies on Maximum-a-Posteriori (MAP) estimation, even during the IMU initialization phase. The result is a system that operates robustly in real time, in small and large, indoor and outdoor environments, and is two to ten times more accurate than previous approaches. The second main novelty is a multiple-map system that relies on a new place recognition method with improved recall. Thanks to it, ORB-SLAM3 is able to survive long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting mapped areas. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse all previous information in all the algorithm stages. This allows including in bundle adjustment co-visible keyframes that provide high-parallax observations boosting accuracy, even if they are widely separated in time or come from a previous mapping session. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature, and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.5 cm in the EuRoC drone dataset and 9 mm under quick hand-held motions in the room of the TUM-VI dataset, a setting representative of AR/VR scenarios. For the benefit of the community we make the source code public.
The agricultural industry constantly seeks the automation of different processes involved in farm production, such as sowing, harvesting, and weed control. The use of mobile autonomous robots to perform these tasks is of great interest. Arable lands present hard challenges for simultaneous localisation and mapping (SLAM) systems, key to mobile robotics, given the visual difficulty caused by highly repetitive scenes. In recent years, several visual-inertial odometry (VIO) and SLAM systems have been developed. They have proved to be highly accurate in indoor and outdoor urban environments. However, they have not been properly assessed in agricultural fields. In this work, we assess the most relevant state-of-the-art VIO systems in terms of accuracy and processing time on arable land in order to better understand how they behave in these environments. In particular, the evaluation is carried out on a sensor dataset recorded by our wheeled robot in a soybean field, which was publicly released as the Rosario dataset. The evaluation shows that the highly repetitive appearance of the environment, the strong vibration produced by rough terrain, and the motion of the leaves caused by the wind expose the limitations of the current state-of-the-art VIO and SLAM systems. We analyse the system failures and highlight the observed drawbacks, including initialisation failures, tracking loss, and sensitivity to IMU saturation. Finally, we conclude that even though certain systems, such as ORB-SLAM3 and S-MSCKF, show good results with respect to others, more improvements should be made to make them reliable in agricultural fields for certain applications, such as soil tillage of crop rows and pesticide spraying.
Robotic applications are continuously striving towards higher levels of autonomy. To achieve this goal, highly robust and accurate state estimation is indispensable. Combining visual and inertial sensor modalities has proven to yield accurate and locally consistent results in short-term applications. Unfortunately, visual-inertial state estimators suffer from the accumulation of drift over long-term trajectories. To eliminate this drift, global measurements can be fused into the state-estimation pipeline. The best-known and most widely available source of global measurements is the Global Positioning System (GPS). In this paper, we propose a novel approach that fully combines stereo visual-inertial simultaneous localisation and mapping (SLAM), including visual loop closure, with the fusion of global sensor modalities in a tightly-coupled, optimisation-based framework. Incorporating measurement uncertainties, we provide a robust criterion to solve the global reference-frame initialisation problem. Furthermore, we propose a loop-closure-like optimisation scheme to compensate for drift accumulated during outages in GPS signal reception. Experimental validation on datasets and in real-world experiments demonstrates the robustness of our approach to GPS dropouts, as well as its capability to estimate highly accurate and globally consistent trajectories, compared to existing state-of-the-art methods.
Visual Inertial Odometry (VIO) is one of the most established state estimation methods for mobile platforms. However, when visual tracking fails, VIO algorithms quickly diverge due to rapid error accumulation during inertial data integration. This error is typically modeled as a combination of additive Gaussian noise and a slowly changing bias which evolves as a random walk. In this work, we propose to train a neural network to learn the true bias evolution. We implement and compare two common sequential deep learning architectures: LSTMs and Transformers. Our approach follows from recent learning-based inertial estimators, but, instead of learning a motion model, we target IMU bias explicitly, which allows us to generalize to locomotion patterns unseen in training. We show that our proposed method improves state estimation in visually challenging situations across a wide range of motions by quadrupedal robots, walking humans, and drones. Our experiments show an average 15% reduction in drift rate, with much larger reductions when there is total vision failure. Importantly, we also demonstrate that models trained with one locomotion pattern (human walking) can be applied to another (quadruped robot trotting) without retraining.
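The work above learns the true bias evolution in place of the standard model, in which the IMU error is additive white noise plus a slowly varying bias evolving as a random walk. For reference, a minimal simulation of that standard baseline model (parameter names and the continuous-time noise-density discretisation are our assumptions, not taken from the paper):

```python
import random

def simulate_gyro(true_rate, n, dt, sigma_noise, sigma_walk, seed=0):
    """Generate n gyro readings at step dt under the standard model:
    reading = true_rate + bias + white noise, where the bias follows a
    random walk. sigma_noise and sigma_walk are continuous-time noise
    densities, hence the sqrt(dt) scaling on discretisation."""
    rng = random.Random(seed)
    bias = 0.0
    readings = []
    for _ in range(n):
        bias += rng.gauss(0.0, sigma_walk) * dt ** 0.5    # bias random walk
        noise = rng.gauss(0.0, sigma_noise) / dt ** 0.5   # white measurement noise
        readings.append(true_rate + bias + noise)
    return readings

# With both noise densities zero, the model degenerates to perfect readings.
clean = simulate_gyro(true_rate=0.5, n=10, dt=0.01,
                      sigma_noise=0.0, sigma_walk=0.0)
```

Naively integrating such readings accumulates the bias into unbounded orientation drift, which is exactly the failure mode during vision outages that the learned bias model targets.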
We present HybVIO, a novel hybrid approach for combining filtering-based visual-inertial odometry (VIO) with optimisation-based SLAM. The core of our method is a robust, standalone VIO with improved IMU bias modelling, outlier rejection, stationarity detection, and feature-track selection, which is tunable to run on embedded hardware. Long-term consistency is achieved with a loosely-coupled SLAM module. In academic benchmarks, our solution yields excellent performance in all categories, especially in the real-time use case, where we outperform the current state-of-the-art. We also demonstrate the feasibility of VIO for vehicular tracking on consumer-grade hardware using a custom dataset, and show good performance in comparison to current commercial VISLAM alternatives. An open-source implementation of the HybVIO method is available at https://github.com/spectacularai/hybvio
This article proposes a method to diminish the pose (position plus attitude) drift experienced by an SVO (Semi-Direct Visual Odometry) based visual navigation system installed onboard a UAV (Unmanned Air Vehicle) by supplementing its pose-estimation nonlinear optimizations with priors based on the outputs of a GNSS (Global Navigation Satellite System)-Denied inertial navigation system. The method is inspired by a PI (Proportional Integral) control system, in which the attitude, altitude, and rate-of-climb inertial outputs act as targets to ensure that the visual estimations do not deviate far from their inertial counterparts. The resulting IA-VNS (Inertially Assisted Visual Navigation System) achieves major reductions in the horizontal position drift inherent to the GNSS-Denied navigation of autonomous fixed-wing low-SWaP (Size, Weight, and Power) UAVs. Additionally, the IA-VNS can be considered a virtual incremental position (ground velocity) sensor capable of providing observations to the inertial filter. Stochastic high-fidelity Monte Carlo simulations of two representative scenarios involving the loss of GNSS signals are employed to evaluate the results and to analyze their sensitivity to the terrain type overflown by the aircraft, as well as to the quality of the onboard sensors on which the priors are based. The author releases the C++ implementation of both the navigation algorithms and the high-fidelity simulation as open-source software.
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community as survey papers is vital to keep track of the current state-of-the-art and the challenges that must be tackled in the future. This paper tries to provide a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the autonomous navigation field evolves fast, so writing survey papers regularly is crucial to keep the research community well aware of the current status of this field. Second, deep learning methods have revolutionized many fields, including autonomous navigation. Therefore, it is necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well, which is covered in this paper. Future works and research gaps will also be discussed.