We propose a Doppler-velocity-based moving object detection and velocity estimation algorithm that exploits the characteristics of FMCW lidar to achieve accurate, single-scan, real-time motion-state detection and velocity estimation. We show that Doppler velocities are continuous across the surface of a single object. Based on this property, a region-growing clustering algorithm separates moving objects from the static background. The static background is then used to estimate the ego-velocity of the FMCW lidar by least squares, and the estimated lidar velocity is combined with the Doppler velocities of the clustered moving objects to estimate the objects' velocities. To guarantee real-time processing, appropriate least-squares parameters are chosen. To validate the effectiveness of the algorithm, we build an FMCW lidar model in the autonomous-driving simulation platform CARLA to generate data. The results show that, on the computing power of a Ryzen 3600X CPU, our algorithm can process at least 4.5 million points and estimate the velocities of 150 moving objects per second, with motion-state detection accuracy above 99% and velocity estimation accuracy of 0.1 m/s.
translated by Google Translate
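The least-squares ego-velocity step described above can be sketched directly: for a static point, the measured Doppler (radial) velocity is the projection of the negated sensor velocity onto the point's viewing direction, so each background point contributes one linear equation. A minimal sketch (function names and sign conventions are illustrative, not the paper's code):

```python
# Sketch: FMCW-lidar ego-velocity from per-point Doppler measurements.
# Assumes the points have already been classified as static background.
import numpy as np

def estimate_ego_velocity(points, doppler):
    """points: (n,3) static-background points in the sensor frame;
    doppler: (n,) measured radial velocities (positive = moving away).
    For a static point, doppler = -unit(p) . v_ego, so v_ego follows
    from a linear least-squares solve over all background points."""
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    v_ego, *_ = np.linalg.lstsq(dirs, -doppler, rcond=None)
    return v_ego
```

With the ego-velocity known, a clustered object's own radial speed is its measured Doppler minus the ego-motion contribution along the same viewing direction.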
Lidar point cloud distortion caused by moving objects is an important problem in autonomous driving, and it has recently become even more demanding with the emergence of new lidars that have back-and-forth scanning patterns. Accurately estimating the velocity of moving objects not only enables tracking, but also allows the point cloud distortion to be corrected through a more accurate description of the objects' motion. Since lidar measures time-of-flight distance but has sparse angular resolution, its measurements are precise radially but lack angular accuracy; cameras, in contrast, provide dense angular resolution. This paper proposes Gaussian-based lidar and camera fusion to estimate full velocities and correct lidar distortion. A probabilistic Kalman-filter framework is provided to track moving objects, estimate their velocities, and simultaneously correct the point cloud distortion. The framework is evaluated on real road data, and the fusion method outperforms traditional ICP-based and point-cloud-only methods. The complete working framework is open-sourced (https://github.com/isee-technology/lidar-with-velocity) to accelerate the adoption of emerging lidars.
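The probabilistic framework above is built around Kalman filtering. As a much-reduced illustration of the filtering idea (not the paper's full camera-lidar fusion model), here is a constant-velocity Kalman filter with position-only measurements; the noise parameters `q` and `r` are arbitrary assumptions:

```python
# Minimal constant-velocity Kalman filter sketch: state [position, velocity].
import numpy as np

def kf_step(x, P, z, dt, q=1.0, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity motion
    H = np.array([[1.0, 0.0]])                       # observe position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])           # white-acceleration noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the position measurement z
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed with position fixes of a moving object, the filter converges to a velocity estimate, which is the quantity the paper uses to undo scan distortion.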
With the development of autonomous driving, improving the driving technology of individual vehicles has reached a bottleneck. Advances in vehicle-infrastructure cooperative autonomous driving can expand a vehicle's perception range, cover perception blind spots, and improve perception accuracy, promoting the development of autonomous driving technology and realizing vehicle-road integration. This project mainly uses lidar to develop a data fusion scheme that shares and combines data from vehicles and roadside devices, and realizes the detection and tracking of dynamic targets. At the same time, several test scenarios are designed and used to evaluate our vehicle-road cooperative perception system, demonstrating the advantages of vehicle-road cooperative autonomous driving over single-vehicle autonomous driving.
Next-generation high-resolution automotive radar (4D radar) can provide additional elevation measurements and denser point clouds, and thus has great potential for 3D sensing in autonomous driving. In this paper, we introduce a dataset named TJ4DRadSet that includes 4D radar point clouds for autonomous driving research. The dataset was collected in various driving scenarios, with a total of 7757 synchronized frames in 44 consecutive sequences, all well annotated with 3D bounding boxes and track IDs. We provide a 4D-radar-based 3D object detection baseline for the dataset to demonstrate the effectiveness of deep learning methods on 4D radar point clouds. The dataset can be accessed via the following link: https://github.com/tjradarlab/tj4dradset.
Automated driving systems (ADS) open up a new domain for the automotive industry and new possibilities for future transportation with higher efficiency and more comfortable experiences. However, autonomous driving under adverse weather conditions has long been the problem that keeps autonomous vehicles (AVs) from reaching Level 4 or higher autonomy. This paper assesses the influences and challenges that weather brings to ADS sensors in an analytic and statistical way, and surveys solutions for perception under adverse weather conditions. State-of-the-art techniques for perception enhancement with regard to each kind of weather are thoroughly reported. External auxiliary solutions such as V2X technology, and the weather-condition coverage of currently available datasets, simulators, and experimental facilities with weather chambers are distinctly sorted out. By pointing out the major weather problems the autonomous driving field is currently facing, and by reviewing both hardware and computer-science solutions of recent years, this survey outlines the obstacles and directions of ADS development with respect to adverse weather driving conditions.
Autonomous driving will be pervasive in the coming decades. iDriving improves the safety of autonomous driving at intersections and increases efficiency by improving intersection throughput. In iDriving, roadside infrastructure remotely drives autonomous vehicles at an intersection by offloading perception and planning from the vehicle to the roadside infrastructure. To achieve this, iDriving must be able to process a large volume of sensor data at full frame rate with a tail latency of less than 100 ms, without sacrificing accuracy. We describe the algorithms and optimizations that enable it to meet this goal: an accurate and lightweight perception component that reasons over a composite view derived from overlapping sensors, and a planner that jointly plans trajectories for multiple vehicles. In our evaluation, iDriving always ensures safe passage of vehicles, while autonomous driving alone can do so only 27% of the time. iDriving also achieves 5x lower wait times than other approaches because it enables traffic-light-free intersections.
The emergence of low-cost, small-form-factor, and light-weight solid-state LiDAR sensors has brought new opportunities for autonomous unmanned aerial vehicles (UAVs) by advancing navigation safety and computation efficiency. Yet the successful development of LiDAR-based UAVs must rely on extensive simulations. Existing simulators can hardly perform simulations of real-world environments due to their requirement for dense mesh maps, which are difficult to obtain. In this paper, we develop a point-realistic simulator of real-world scenes for LiDAR-based UAVs. The key idea is the underlying point rendering method, where we construct a depth image directly from the point cloud map and interpolate it to obtain realistic LiDAR point measurements. Our simulator is able to run on a light-weight computing platform and supports the simulation of LiDARs with different resolutions and scanning patterns, dynamic obstacles, and multi-UAV systems. Developed in the ROS framework, the simulator can easily communicate with other key modules of an autonomous robot, such as perception, state estimation, planning, and control. Finally, the simulator provides 10 high-resolution point cloud maps of various real-world environments, including forests of different densities, a historic building, an office, a parking garage, and various complex indoor environments. These realistic maps provide diverse testing scenarios for an autonomous UAV. Evaluation results show that the developed simulator outperforms Gazebo in terms of time and memory consumption, and that simulated UAV flights closely match actual flights in real-world environments. We believe such a point-realistic and light-weight simulator is crucial to bridge the gap between UAV simulation and experiments and will significantly facilitate the research of LiDAR-based autonomous UAVs in the future.
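The point-rendering idea above can be sketched as projecting the point cloud map into a spherical depth image and keeping the nearest return per pixel; a real simulator would then interpolate this image to synthesize returns for the target LiDAR's scan pattern. The resolution and field-of-view parameters below are illustrative assumptions:

```python
# Sketch: render a point-cloud map into a spherical (azimuth x elevation)
# depth image, keeping the closest point per pixel.
import numpy as np

def render_depth_image(points, h_res=360, v_res=64, v_fov=(-15.0, 15.0)):
    """points: (n,3) in the sensor frame. Returns a (v_res, h_res) depth
    image in metres; empty pixels are +inf."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                              # [-pi, pi)
    el = np.degrees(np.arcsin(z / r))                  # elevation in degrees
    col = ((az + np.pi) / (2 * np.pi) * h_res).astype(int) % h_res
    row = (el - v_fov[0]) / (v_fov[1] - v_fov[0]) * (v_res - 1)
    row = np.clip(row.round().astype(int), 0, v_res - 1)
    img = np.full((v_res, h_res), np.inf)
    np.minimum.at(img, (row, col), r)                  # nearest return wins
    return img
```

`np.minimum.at` is used instead of fancy-index assignment so that duplicate pixel hits are resolved deterministically to the nearest range.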
Recently, Vehicle-to-Everything (V2X) cooperative perception has attracted increasing attention. Infrastructure sensors play a critical role in this research field; however, how to find the optimal placement of infrastructure sensors is rarely studied. In this paper, we investigate the problem of infrastructure sensor placement and propose a pipeline that can efficiently and effectively find optimal installation positions for infrastructure sensors in a realistic simulated environment. To better simulate and evaluate LiDAR placement, we establish a Realistic LiDAR Simulation library that can simulate the unique characteristics of different popular LiDARs and produce high-fidelity LiDAR point clouds in the CARLA simulator. By simulating point cloud data for different LiDAR placements, we can evaluate the perception accuracy of these placements using multiple detection models. Then, we analyze the correlation between point cloud distribution and perception accuracy by calculating the density and uniformity of regions of interest. Experiments show that the placement of infrastructure LiDAR can heavily affect perception accuracy, and validate that density and uniformity can serve as indicators of performance.
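The density and uniformity indicators can be computed along the following lines; the exact definitions below (points per square metre for density, and the coefficient of variation of per-cell counts over an x-y grid for uniformity) are plausible choices for illustration, not necessarily the paper's formulas:

```python
# Sketch: point-cloud density and uniformity indicators for a region of
# interest (ROI). Lower uniformity score = more evenly spread points.
import numpy as np

def roi_density(points, roi_area_m2):
    """Points per square metre inside the ROI."""
    return len(points) / roi_area_m2

def roi_uniformity(points, roi_min, roi_max, cells=8):
    """Coefficient of variation of per-cell point counts over a
    cells x cells x-y grid spanning the ROI (illustrative definition)."""
    xy = (points[:, :2] - roi_min) / (roi_max - roi_min)
    idx = np.clip((xy * cells).astype(int), 0, cells - 1)
    counts = np.zeros((cells, cells))
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    return counts.std() / max(counts.mean(), 1e-9)
```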
Scene flow allows autonomous vehicles to reason about the arbitrary motion of multiple independent objects, which is key to long-term mobile autonomy. While estimating scene flow from lidar has progressed recently, it remains largely unknown how to estimate scene flow from 4D radar, an increasingly popular automotive sensor thanks to its robustness under adverse weather and lighting conditions. Compared with lidar point clouds, radar data are sparser, noisier, and of much lower resolution. Annotated datasets for radar scene flow are also absent and costly to acquire in the real world. These factors jointly make radar scene flow estimation a challenging problem. This work aims to address the above challenges and estimate scene flow from 4D radar point clouds by leveraging self-supervised learning. A robust scene flow estimation architecture and three novel losses are tailored to deal with the intractable radar data. Real-world experimental results validate that our method is able to robustly estimate radar scene flow in the wild and effectively supports the downstream task of motion segmentation.
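A common core of such self-supervised scene-flow objectives is a nearest-neighbour (Chamfer-style) loss between the source cloud warped by the predicted flow and the target cloud, which requires no flow annotations. The paper's three bespoke losses are more elaborate; this dense NumPy version only illustrates the principle:

```python
# Sketch: symmetric Chamfer loss between the warped source cloud and the
# target cloud (dense pairwise distances, for clarity rather than speed).
import numpy as np

def chamfer_loss(warped, target):
    """warped: (n,3) source points moved by the predicted flow;
    target: (m,3) next-frame points. Returns the sum of the two mean
    nearest-neighbour distances."""
    d = np.linalg.norm(warped[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Minimizing this loss over predicted flows pulls each warped point toward the next frame's surface without ever requiring ground-truth correspondences.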
A SLAM system based on the static-scene assumption introduces significant estimation errors when there are many moving objects in the field of view. Tracking and maintaining semantic objects benefits scene understanding and provides rich decision information for planning and control modules. This paper introduces MLO, a multi-object lidar odometry that tracks ego-motion and semantic objects using only a lidar sensor. To achieve accurate and robust tracking of multiple objects, we propose a least-squares estimator that fuses 3D bounding boxes and geometric point clouds for object state updates. By analyzing the motion states of the objects in the tracking list, the mapping module uses static objects and environmental features to eliminate accumulated errors; at the same time, it provides continuous object trajectories in the map frame. Our approach is evaluated qualitatively and quantitatively under different scenarios on the public KITTI dataset. Experimental results show that the ego-localization accuracy of MLO is better than that of state-of-the-art systems in highly dynamic, unstructured, and unknown semantic scenes. Meanwhile, the multi-object tracking method with semantic-geometric fusion also shows clear advantages in tracking accuracy and consistency compared with filter-based methods.
Accurate and reliable sensor calibration is critical for fusing lidar and inertial measurements in autonomous driving. This paper proposes a novel three-stage extrinsic calibration method between a 3D lidar and a pose sensor for autonomous driving. The first stage quickly calibrates the extrinsic parameters between the sensors through point cloud surface features, so that the extrinsics can be narrowed from a large initial error range to a small one in a short time. The second stage further calibrates the extrinsic parameters based on lidar-mapping space occupancy while removing motion distortion. In the final stage, the z-axis error caused by the planar motion of the autonomous vehicle is corrected, and the precise extrinsic parameters are finally obtained. Specifically, the method exploits the natural characteristics of road scenes, making it target-independent and easy to apply at scale. Experimental results on real-world datasets demonstrate the reliability and accuracy of our method. The code is open-sourced on GitHub. To the best of our knowledge, this is the first open-source code specifically designed for autonomous driving to calibrate the extrinsic parameters between a lidar and a pose sensor. The code link is https://github.com/opencalib/lidar2ins.
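The final-stage z-axis correction relies on the ground being observable in the lidar frame. One standard ingredient for this is a total-least-squares ground-plane fit, from which the sensor's height above the road follows directly. A sketch (the staged optimization itself is simplified away):

```python
# Sketch: total-least-squares plane fit via SVD, and the sensor height
# above that plane (distance from the lidar origin to the ground).
import numpy as np

def fit_plane(points):
    """Fit normal . p + d = 0 to (n,3) ground points; normal is unit-length
    and oriented upward (positive z)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    if normal[2] < 0:                    # make the normal point upward
        normal = -normal
    return normal, -normal @ centroid

def sensor_height(points):
    normal, d = fit_plane(points)
    return abs(d) / np.linalg.norm(normal)   # origin-to-plane distance
```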
Accurate rail location is a crucial part of intelligent driving-support systems for railways, used for safety monitoring. Lidar can obtain point clouds that carry 3D information of the railway environment, especially under dark and terrible weather conditions. In this paper, a real-time rail recognition method based on 3D point clouds is proposed to address the challenges of the point clouds, such as their disorder, uneven density, and large volume. A voxel down-sampling method is first presented for density balancing of railway point clouds, and a pyramid partition is designed to divide the 3D scanning area into voxels with different volumes. Then, a feature encoding module is developed to find the nearest neighbor points and to aggregate their local geometric features. Finally, a multi-scale neural network is proposed to generate the prediction results of each voxel and the rail location. Experiments are conducted on nine sequences of railway 3D point cloud data. The results show that the method performs well in detecting straight, curved, and other complex-topology rails.
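The voxel down-sampling step can be sketched as follows: bucket points by integer voxel key and keep each bucket's centroid. The pyramid partition described above could be layered on top by varying the voxel `size` with range; the fixed size here is an illustrative default:

```python
# Sketch: centroid-based voxel down-sampling for point-cloud density
# balancing.
import numpy as np

def voxel_downsample(points, size=0.2):
    """points: (n,3). Returns one centroid per occupied voxel."""
    keys = np.floor(points / size).astype(np.int64)
    # group points by voxel key and average each group
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.ravel()                    # guard against (n,1) inverse shapes
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inv, points)
    return sums / counts[:, None]
```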
In this paper, we present a solution for roadside lidar object detection using a combination of two unsupervised learning algorithms. The 3D point clouds are first converted into spherical coordinates and filled into an azimuth grid matrix using a hashing function. After that, the raw lidar data are rearranged into a spatial-temporal data structure that stores range, azimuth, and intensity information. Based on intensity-channel pattern recognition, a dynamic mode decomposition method is applied to decompose the point cloud data into a low-rank background and a sparse foreground. The triangle algorithm automatically finds a segmentation threshold from the range information to separate moving targets from the static background. After intensity- and range-based background subtraction, the foreground moving objects are detected with a density-based detector and encoded into a state-space model for tracking. The output of the proposed model includes vehicle trajectories that can enable many mobility and safety applications. The method was validated against a commercial traffic data collection platform and proved to be an efficient and reliable solution for infrastructure lidar object detection. In contrast to previous methods that directly process scattered and discrete point clouds, the proposed method can build on the less complex linear relationships in the 3D measurement data, which capture the spatial-temporal structure we often need.
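The triangle algorithm mentioned above picks a threshold on a histogram by drawing a line from the histogram peak to its far tail and selecting the bin farthest from that line. A sketch as it might be applied to the range histogram (bin handling simplified):

```python
# Sketch: triangle thresholding on a 1-D histogram.
import numpy as np

def triangle_threshold(hist):
    """hist: array of bin counts. Returns the index of the bin with the
    largest perpendicular distance to the peak-to-tail line."""
    hist = np.asarray(hist, dtype=float)
    peak = int(np.argmax(hist))
    # take the longer tail side of the peak
    tail = len(hist) - 1 if (len(hist) - 1 - peak) >= peak else 0
    b = np.array([tail - peak, hist[tail] - hist[peak]])
    b = b / np.linalg.norm(b)
    idx = np.arange(min(peak, tail), max(peak, tail) + 1)
    pts = np.stack([idx - peak, hist[idx] - hist[peak]], axis=1)
    # perpendicular distance of each bin to the peak-tail line
    dist = np.abs(pts[:, 0] * b[1] - pts[:, 1] * b[0])
    return int(idx[np.argmax(dist)])
```

On a range histogram dominated by static background returns, the selected bin marks the knee that separates the background mode from the sparse foreground.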
Traditional lidar odometry (LO) systems mainly leverage geometric information obtained from the traversed surroundings to register laser scans and estimate lidar ego-motion, which may be unreliable in dynamic or unstructured environments. This paper proposes Inten-LOAM, a low-drift and robust lidar odometry and mapping method that fully exploits the implicit information of laser scans (i.e., geometric, intensity, and temporal characteristics). Scan points are projected onto cylindrical images, which facilitate the efficient and adaptive extraction of various types of features, i.e., ground, beam, facade, and reflector. We propose a novel intensity-based point registration algorithm and incorporate it into the lidar odometry, enabling the LO system to jointly estimate the lidar ego-motion using both geometric and intensity feature points. To eliminate the interference of dynamic objects, we propose a temporal-based dynamic object removal approach that filters them out before the map update. Moreover, the local map is organized and down-sampled with a time-related voxel grid filter to maintain the similarity between the current scan and the static local map. Extensive experiments are conducted on both simulated and real-world datasets. The results show that the proposed method achieves similar or better accuracy w.r.t. the state of the art in normal driving scenarios and outperforms geometry-based LO in unstructured environments.
Automotive mmWave radar has been widely used in the automotive industry due to its robustness in adverse conditions such as fog, rain, and snow. On the other hand, its large wavelength also poses fundamental challenges for environment perception. Recent advances have made breakthroughs on its inherent drawbacks, i.e., multipath reflections and the sparsity of mmWave radar point clouds. However, the lower frequency of mmWave signals is more sensitive to vehicle mobility than visual and laser signals are. This work focuses on the problem of frequency shift, i.e., the Doppler effect that distorts radar ranging measurements, and its impact on metric localization. We propose a new radar-based metric localization framework that obtains more accurate location estimates by restoring the Doppler distortion. Specifically, we first design a new algorithm that explicitly compensates the Doppler distortion of radar scans, and then model the measurement uncertainty of the Doppler-compensated point cloud to further optimize the metric localization. Extensive experiments on the public nuScenes dataset and the CARLA simulator demonstrate that our method outperforms the state of the art, with improvements of 19.2% and 13.5% in translation and rotation errors, respectively.
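The range-Doppler coupling being compensated can be illustrated with the first-order FMCW model: the beat frequency mixes range and radial velocity, so the computed range is biased by roughly v_r * f_c * T / B. The carrier, bandwidth, and chirp-duration values below are generic automotive-radar assumptions, not the paper's parameters:

```python
# Sketch: first-order compensation of Doppler-induced range bias for an
# FMCW chirp with carrier f_c, sweep bandwidth bw, and chirp duration
# t_chirp. Parameter values are illustrative assumptions.
def doppler_corrected_range(r_measured, v_radial,
                            f_c=77e9, bw=1e9, t_chirp=50e-6):
    """r_measured in metres, v_radial in m/s (positive = receding).
    Subtracts the approximate range bias v_r * f_c * t_chirp / bw."""
    return r_measured - v_radial * f_c * t_chirp / bw
```

With the parameters above, a 20 m/s radial velocity biases the range by about 8 cm, which is exactly the kind of scan distortion that degrades metric localization.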
Multi-modal fusion is a basic task of autonomous driving system perception, which has attracted many scholars' interest in recent years. Current multi-modal fusion methods mainly focus on camera and LiDAR data, but pay little attention to the kinematic information provided by the vehicle's bottom sensors, such as acceleration, vehicle speed, and angle of rotation. This information is not affected by complex external scenes, so it is more robust and reliable. In this paper, we introduce the existing application fields of vehicle bottom information and the research progress of related methods, as well as multi-modal fusion methods based on bottom information. We also describe the relevant vehicle bottom information datasets in detail to facilitate further research. In addition, we propose new ideas for multi-modal fusion technology in autonomous driving tasks to promote the further utilization of vehicle bottom information.
Modeling perception sensors is key for simulation-based testing of automated driving functions. Beyond weather conditions themselves, sensors are also subjected to object-dependent environmental influences such as tire spray caused by vehicles moving on wet pavement. In this work, a novel modeling approach for spray in lidar data is introduced. The model conforms to the Open Simulation Interface (OSI) standard and is based on the formation of detection clusters within a spray plume. The detections are rendered with a simple custom ray-casting algorithm without the need for a fluid dynamics simulation or physics engine. The model is subsequently used to generate training data for object detection algorithms. It is shown that the model helps to improve detection in real-world spray scenarios significantly. Furthermore, a systematic real-world data set is recorded and published for analysis, model calibration, and validation of spray effects in active perception sensors. Experiments are conducted on a test track by driving over artificially watered pavement with varying vehicle speeds, vehicle types, and levels of pavement wetness. All models and data of this work are available open source.
Designing a local planner to control tractor-trailer vehicles in forward and backward maneuvering is a challenging control problem in the research community of autonomous driving systems. Considering the critical stability issues of tractor-trailer systems, a practical and novel approach is presented for designing a non-linear MPC (NMPC) local planner for tractor-trailer autonomous vehicles in both forward and backward maneuvering. The tractor velocity and steering angle are taken as control variables. The proposed NMPC local planner is designed to handle jackknife situations, avoid multiple static obstacles, and follow paths in both forward and backward maneuvering. These challenges are converted into a constrained problem that the proposed NMPC local planner can handle simultaneously. The direct multiple-shooting approach is used to convert the optimal control problem (OCP) into a non-linear programming problem (NLP) that IPOPT solvers can solve in CasADi. The controller's performance is evaluated in real time through different backward and forward maneuvering scenarios in the Gazebo simulation environment. It achieves asymptotic stability in avoiding static obstacles and accurate tracking performance while respecting path constraints. Finally, the proposed NMPC local planner is integrated with an open-source autonomous driving software stack called AutowareAi.
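A kinematic tractor-trailer model of the kind typically embedded in such NMPC formulations can be sketched as below; the wheelbase/trailer lengths and the exact hitch-angle dynamics are illustrative assumptions rather than the paper's model. Jackknife avoidance corresponds to constraining the hitch angle `gamma` inside the OCP:

```python
# Sketch: one Euler step of a kinematic tractor-trailer model.
# State: tractor pose (x, y, theta) and hitch angle gamma;
# controls: tractor speed v and steering angle delta.
import math

def step(state, v, delta, dt, l_tractor=3.0, l_trailer=5.0):
    x, y, theta, gamma = state
    dtheta = v / l_tractor * math.tan(delta)           # tractor yaw rate
    dgamma = dtheta - v / l_trailer * math.sin(gamma)  # hitch-angle rate
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + dtheta * dt,
            gamma + dgamma * dt)
```

In a direct multiple-shooting transcription, this step (or an integrator of it) links the decision variables of consecutive shooting intervals as equality constraints of the NLP.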
While cameras and lidars are widely used in most assisted and autonomous driving systems, only a few works have been proposed to associate the temporal synchronization and extrinsic calibration of cameras and lidars for online sensor data fusion. Temporal and spatial calibration techniques face the challenges of lacking correlation and of not running in real time. In this paper, we introduce pose estimation models and environment-robust line extraction to improve the correlation of data fusion and the capability of instant online correction. Considering the correspondence of point cloud matching between adjacent moments, a dynamic target is designed to seek the optimal policy, and the search-optimization process is designed to provide accurate parameters with computational accuracy and efficiency. To demonstrate the benefits of this method, we evaluate it on the KITTI benchmark against ground-truth values. In online experiments, our approach improves accuracy by 38.5% compared with the soft-synchronization method in temporal calibration. In spatial calibration, our approach automatically corrects disturbance errors within 0.4 s and achieves an accuracy of 0.3 degrees. This work can promote the research and application of sensor fusion.
We propose a real-time method for odometry and mapping using range measurements from a 2-axis lidar moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation can cause mis-registration of the resulting point cloud. To date, coherent 3D maps have been built by off-line batch methods, often using loop closure to correct for drift over time. Our method achieves both low drift and low computational complexity without the need for high-accuracy ranging or inertial measurements. The key idea in obtaining this level of performance is the division of the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, into two algorithms. One algorithm performs odometry at a high frequency but low fidelity to estimate the velocity of the lidar. The other runs at a frequency an order of magnitude lower for fine matching and registration of the point cloud. The combination of the two algorithms allows the method to map in real time. The method has been evaluated in a large set of experiments as well as on the KITTI odometry benchmark. The results indicate that the method can achieve accuracy at the level of state-of-the-art offline batch methods.
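The fine matching and registration stage ultimately reduces to rigidly aligning corresponding points, for which the closed-form SVD (Kabsch) solution is standard. This sketch shows only that alignment step, not the feature extraction, correspondence search, or motion compensation around it:

```python
# Sketch: closed-form rigid alignment (Kabsch/SVD) of corresponding points,
# i.e., the alignment step inside one ICP-style registration iteration.
import numpy as np

def best_rigid_transform(src, dst):
    """Returns (R, t) minimizing sum ||R @ src_i + t - dst_i||^2 for
    corresponding rows of src and dst, both (n,3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, mu_d - R @ mu_s
```

In a full pipeline, this solve is repeated after each re-association of nearest neighbours until the transform converges.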