We address the theoretical and practical problems of trajectory generation and tracking control for tail-sitter UAVs. On the theoretical side, we focus on the differential flatness property with full exploitation of realistic UAV aerodynamic models, which lays the foundation for generating dynamically feasible trajectories and achieving high-performance tracking control. We show that a tail-sitter with accurate aerodynamic models is differentially flat over the entire flight envelope, by imposing a coordinated-flight condition and choosing the vehicle position as the flat output. This fundamental property allows the high-fidelity aerodynamic models to be fully exploited in both trajectory planning and tracking control, enabling accurate tail-sitter flight. In particular, an optimization-based trajectory planner for tail-sitters is proposed to design high-quality, smooth trajectories subject to kinodynamic constraints, singularity-free constraints, and actuator saturation. The planned flat-output trajectory is transformed into a state trajectory in real time, accounting for wind in the environment. To track the state trajectory, a global, singularity-free, and minimally parameterized on-manifold MPC is developed, which fully leverages the accurate aerodynamic model to achieve high-accuracy trajectory tracking over the whole flight envelope. The effectiveness of the proposed framework is demonstrated through extensive indoor and outdoor real-world experiments, including agile SE(3) flight through consecutive narrow windows that require specific attitudes, at speeds up to 10 m/s; typical tail-sitter maneuvers (transition, level flight, and loiter) at speeds up to 20 m/s; and extremely aggressive aerobatic maneuvers (Wingover, Loop, Vertical Eight, and Cuban Eight) with accelerations up to 2.5 g.
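For illustration only, the following Python sketch shows a heavily simplified version of the flat-output-to-state idea behind such a flatness-based pipeline: the inertial force demanded by the planned position trajectory, minus gravity and a modeled aerodynamic force, determines the thrust magnitude and thrust direction. The mass value and the crude drag placeholder are assumptions standing in for the paper's full tail-sitter aerodynamic model.

```python
# A minimal sketch, not the authors' implementation, of mapping the flat output
# (position trajectory derivatives) plus wind to thrust magnitude and direction.
import numpy as np

MASS = 2.0                               # kg, hypothetical vehicle mass
GRAVITY = np.array([0.0, 0.0, -9.81])    # world-frame gravity

def aero_force(airspeed_vec):
    """Crude placeholder aerodynamic model: quadratic drag opposing the airspeed.
    A real tail-sitter model would express lift and drag as functions of
    angle of attack and airspeed."""
    k_drag = 0.05
    return -k_drag * np.linalg.norm(airspeed_vec) * airspeed_vec

def flat_to_thrust(acc_des, vel, wind):
    """Map one trajectory sample (desired acceleration, velocity) and an
    estimated wind to thrust magnitude and thrust direction."""
    airspeed = vel - wind
    # Force the propulsion must supply: required inertial force minus gravity
    # and the modeled aerodynamic force.
    f_prop = MASS * acc_des - MASS * GRAVITY - aero_force(airspeed)
    thrust = np.linalg.norm(f_prop)
    thrust_dir = f_prop / max(thrust, 1e-6)   # desired thrust axis, world frame
    return thrust, thrust_dir

# Example: flight at 8 m/s into a 1 m/s headwind with a 1 m/s^2 climb demand.
print(flat_to_thrust(np.array([0.0, 0.0, 1.0]),
                     np.array([8.0, 0.0, 0.0]),
                     np.array([-1.0, 0.0, 0.0])))
```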
The emergence of low-cost, small-form-factor, and lightweight solid-state LiDAR sensors has brought new opportunities for autonomous unmanned aerial vehicles (UAVs) by improving navigation safety and computational efficiency. Yet the successful development of LiDAR-based UAVs relies on extensive simulation. Existing simulators can hardly reproduce real-world environments because they require dense mesh maps that are difficult to obtain. In this paper, we develop a point-realistic simulator of real-world scenes for LiDAR-based UAVs. The key idea is the underlying point-rendering method, in which we construct a depth image directly from the point cloud map and interpolate it to obtain realistic LiDAR point measurements. The developed simulator runs on lightweight computing platforms and supports the simulation of LiDARs with different resolutions and scanning patterns, dynamic obstacles, and multi-UAV systems. Developed in the ROS framework, the simulator can easily communicate with other key modules of an autonomous robot, such as perception, state estimation, planning, and control. Finally, the simulator provides 10 high-resolution point cloud maps of various real-world environments, including forests of different densities, a historic building, an office, a parking garage, and various complex indoor environments. These realistic maps provide diverse testing scenarios for an autonomous UAV. Evaluation results show that the developed simulator outperforms Gazebo in time and memory consumption, and that simulated UAV flights closely match actual flights in real-world environments. We believe such a point-realistic and lightweight simulator is crucial to bridging the gap between UAV simulation and experiments and will significantly facilitate research on LiDAR-based autonomous UAVs.
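As a rough illustration of the point-rendering idea (construct a depth image from the point cloud map, then read LiDAR measurements back from it), the following Python sketch projects map points into a spherical range image and looks up ranges for an arbitrary scan pattern. The resolutions, field of view, and nearest-pixel lookup are simplifying assumptions; the simulator itself interpolates the depth image to produce realistic point measurements.

```python
# A minimal sketch of depth-image rendering from a point cloud map, not the
# simulator's actual code.
import numpy as np

def render_range_image(points, sensor_pos, h_res=0.2, v_res=0.2,
                       v_fov=(-30.0, 30.0), max_range=100.0):
    """Project map points (N x 3, world frame) into a spherical range image
    centered at sensor_pos; each pixel keeps the closest range (z-buffer)."""
    rel = points - sensor_pos
    rng = np.linalg.norm(rel, axis=1)
    keep = (rng > 0.1) & (rng < max_range)
    rel, rng = rel[keep], rng[keep]

    az = np.degrees(np.arctan2(rel[:, 1], rel[:, 0]))   # azimuth, [-180, 180]
    el = np.degrees(np.arcsin(rel[:, 2] / rng))         # elevation
    in_fov = (el >= v_fov[0]) & (el < v_fov[1])
    az, el, rng = az[in_fov], el[in_fov], rng[in_fov]

    w = int(360.0 / h_res)
    h = int((v_fov[1] - v_fov[0]) / v_res)
    col = ((az + 180.0) / h_res).astype(int) % w
    row = np.clip(((el - v_fov[0]) / v_res).astype(int), 0, h - 1)

    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (row, col), rng)    # keep the nearest range per pixel
    return depth

def sample_lidar_rays(depth, ray_az, ray_el, h_res=0.2, v_res=0.2,
                      v_fov=(-30.0, 30.0)):
    """Look up simulated ranges for a scan pattern given by azimuth/elevation
    angles (degrees). Nearest-pixel lookup for brevity; the real simulator
    interpolates neighboring pixels."""
    h, w = depth.shape
    col = ((ray_az + 180.0) / h_res).astype(int) % w
    row = np.clip(((ray_el - v_fov[0]) / v_res).astype(int), 0, h - 1)
    return depth[row, col]
```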
In this paper, we address the problem of online whole-body motion planning for quadrotors (SE(3) planning) in unknown and unstructured environments. We propose a novel multi-resolution search method that distinguishes narrow regions, which require full pose planning, from normal regions, which require only position planning. As a result, the quadrotor planning problem is decomposed into several SE(3) (when necessary) and R^3 sub-problems. To fly through the discovered narrow regions, a carefully designed corridor generation strategy for narrow spaces is proposed, which significantly improves the planning success rate. The overall problem decomposition and hierarchical planning framework greatly accelerate the planning process, making it possible to run online in unknown environments with fully onboard sensing and computation. Extensive simulation benchmark comparisons show that the proposed method is orders of magnitude faster than state-of-the-art methods in computation time while maintaining a high planning success rate. The proposed method is finally integrated into a LiDAR-based autonomous quadrotor, and various real-world experiments in unknown and unstructured environments are conducted to demonstrate its outstanding performance.
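The hierarchical decomposition can be pictured with the hypothetical sketch below: waypoints of a coarse path are labeled narrow or open by a clearance test, and only the narrow segments are handed to a full-pose SE(3) planner while the rest are planned in R^3. The clearance function and both planners are placeholders, not the paper's implementation.

```python
# A minimal, hypothetical sketch of splitting a planning query into SE(3) and
# R^3 sub-problems based on local clearance.
def decompose_and_plan(coarse_path, clearance, quad_radius, plan_r3, plan_se3):
    """coarse_path: list of waypoints; clearance(wp): distance to the nearest
    obstacle; plan_r3 / plan_se3: placeholder planners for open / narrow parts."""
    segments, current, narrow = [], [], None
    for wp in coarse_path:
        is_narrow = clearance(wp) < quad_radius
        if narrow is None or is_narrow == narrow:
            current.append(wp)
        else:
            segments.append((narrow, current))
            current = [wp]
        narrow = is_narrow
    segments.append((narrow, current))

    trajectory = []
    for is_narrow, seg in segments:
        # Narrow segments need attitude planning (SE(3)); open ones only position.
        planner = plan_se3 if is_narrow else plan_r3
        trajectory.extend(planner(seg))
    return trajectory
```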
Accurate ego-state and relative-state estimation is a critical prerequisite for completing swarm tasks such as collaborative autonomous exploration, target tracking, and search and rescue. This paper proposes a fully decentralized state estimation method for aerial swarm systems, in which each drone performs precise ego-state estimation, exchanges ego-state and mutual-observation information via wireless communication, and estimates relative states with respect to (w.r.t.) neighboring drones, all in real time and based solely on LiDAR-inertial measurements. A novel 3D LiDAR-based drone detection, identification, and tracking method is proposed to obtain observations of teammate drones. The mutual-observation measurements are then tightly coupled with IMU and LiDAR measurements to estimate the ego-state and relative states accurately and in real time. Extensive real-world experiments demonstrate broad adaptability to complex scenarios, including GPS-denied scenes and scenes that are degenerate for cameras (a dark night) or for LiDAR (facing a single wall). Compared with the ground truth provided by a motion capture system, the results show centimeter-level localization accuracy, outperforming other state-of-the-art LiDAR-inertial odometry methods for single-UAV systems.
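A hypothetical sketch of the teammate-detection step might look as follows: LiDAR points that are not explained by the static map are treated as dynamic, and a dynamic cluster near a teammate's predicted position (from the exchanged ego-state) becomes a mutual-observation measurement. The thresholds and the centroid observation are illustrative simplifications of the paper's detection, identification, and tracking pipeline.

```python
# A minimal, hypothetical sketch of extracting teammate observations from a
# LiDAR scan; not the paper's detector.
import numpy as np
from scipy.spatial import cKDTree

def detect_teammates(scan_pts, map_pts, predicted_positions,
                     map_dist_thresh=0.3, assoc_thresh=1.0):
    """scan_pts: N x 3 LiDAR points in the world frame; map_pts: static map
    points; predicted_positions: dict drone_id -> predicted 3D position."""
    map_tree = cKDTree(map_pts)
    d, _ = map_tree.query(scan_pts)
    dynamic = scan_pts[d > map_dist_thresh]   # points not explained by the map

    detections = {}
    for drone_id, pred in predicted_positions.items():
        near = dynamic[np.linalg.norm(dynamic - pred, axis=1) < assoc_thresh]
        if len(near) > 0:
            detections[drone_id] = near.mean(axis=0)  # crude centroid observation
    return detections
```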
Learning radiance fields has shown remarkable results for novel view synthesis. The learning procedure usually takes a large amount of time, which has motivated recent methods to speed it up by learning without neural networks or by using more efficient data structures. However, these specially designed approaches do not apply to most radiance-field-based methods. To address this issue, we introduce a general strategy to speed up the learning procedure of almost any radiance-field-based method. Our key idea is to reduce redundancy by shooting far fewer rays in the multi-view volume rendering procedure, which lies at the core of almost all radiance-field-based methods. We find that shooting rays at pixels with dramatic color changes not only significantly reduces the training burden but also barely affects the accuracy of the learned radiance field. In addition, we adaptively subdivide each view into a quadtree according to the average rendering error in each node of the tree, so that more rays are dynamically shot into more complex regions with larger rendering errors. We evaluate our method with different radiance-field-based methods on widely used benchmarks. Experimental results show that our method achieves accuracy comparable to the state of the art with much faster training.
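The quadtree-driven ray allocation can be sketched as below (an illustration under assumed thresholds, not the paper's exact scheme): rays are drawn from image regions in proportion to their running average rendering error, and regions whose error stays high are subdivided so that more rays land in complex areas.

```python
# A minimal sketch of error-weighted ray sampling over a quadtree of image regions.
import numpy as np

class Leaf:
    def __init__(self, x0, y0, x1, y1):
        self.box = (x0, y0, x1, y1)
        self.err = 1.0          # running average rendering error in this region

def sample_rays(leaves, n_rays, rng=np.random.default_rng()):
    """Draw pixel coordinates with probability proportional to leaf error."""
    probs = np.array([l.err for l in leaves]); probs /= probs.sum()
    counts = rng.multinomial(n_rays, probs)
    pixels = []
    for leaf, k in zip(leaves, counts):
        x0, y0, x1, y1 = leaf.box
        xs = rng.integers(x0, x1, size=k)
        ys = rng.integers(y0, y1, size=k)
        pixels.append(np.stack([xs, ys], axis=1))
    return np.concatenate(pixels), counts

def update_and_subdivide(leaves, counts, new_errors, err_thresh=0.1, min_size=16):
    """Update per-leaf errors and split leaves whose error stays large."""
    out = []
    for leaf, k, e in zip(leaves, counts, new_errors):
        if k > 0:
            leaf.err = 0.9 * leaf.err + 0.1 * e
        x0, y0, x1, y1 = leaf.box
        if leaf.err > err_thresh and (x1 - x0) > min_size and (y1 - y0) > min_size:
            mx, my = (x0 + x1) // 2, (y0 + y1) // 2
            out += [Leaf(x0, y0, mx, my), Leaf(mx, y0, x1, my),
                    Leaf(x0, my, mx, y1), Leaf(mx, my, x1, y1)]
        else:
            out.append(leaf)
    return out
```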
Quadrotors are agile platforms. In the hands of human experts, they can fly at extremely high speeds in cluttered environments. However, fully autonomous high-speed flight remains a significant challenge. In this work, we propose a motion planning algorithm based on the corridor-constrained minimum control effort trajectory optimization (MINCO) framework. Specifically, we use a series of overlapping spheres to represent the free space of the environment and propose two novel designs that enable the algorithm to plan high-speed quadrotor trajectories in real time. The first is a sampling-based corridor generation method that produces spheres with large overlap regions (and hence a large overall corridor) between adjacent spheres. The second is a receding horizon corridor (RHC) strategy, in which part of the previously generated corridor is reused in each replanning cycle. Together, these two designs enlarge the corridor space according to the quadrotor's current state, allowing the quadrotor to maneuver at high speed. We benchmark our algorithm against other state-of-the-art planning methods to show its superiority in simulation, and conduct comprehensive ablation studies to show the necessity of the two designs. The method is finally evaluated on an autonomous LiDAR-based quadrotor UAV in a forest environment, where it achieves flight speeds of more than 13.7 m/s without any prior map of the environment or external localization facility.
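A hypothetical sketch of a sphere-corridor generator in this spirit is shown below: at each step a free-space sphere is inflated to the nearest obstacle, and the next center is placed inside the current sphere so that consecutive spheres overlap. The step rule, radius cap, and iteration limit are assumptions, not the paper's sampling-based generator or RHC strategy.

```python
# A minimal, hypothetical sketch of growing an overlapping-sphere corridor.
import numpy as np
from scipy.spatial import cKDTree

def generate_sphere_corridor(start, goal, obstacle_pts, max_radius=5.0, overlap=0.4):
    """Grow overlapping free-space spheres from `start` toward `goal`.
    obstacle_pts: K x 3 obstacle points. Returns a list of (center, radius)."""
    obs_tree = cKDTree(obstacle_pts)
    spheres = []
    center = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    for _ in range(200):                      # safety cap on corridor length
        radius = min(obs_tree.query(center)[0], max_radius)
        spheres.append((center.copy(), radius))
        if np.linalg.norm(goal - center) < radius:
            break                             # goal lies inside the last sphere
        # Step toward the goal but stay inside the current sphere so the next
        # sphere overlaps the current one by roughly `overlap` of its radius.
        direction = (goal - center) / np.linalg.norm(goal - center)
        center = center + (1.0 - overlap) * radius * direction
    return spheres
```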
For most LiDAR-inertial odometry systems, accurate initial states, including the temporal offset and extrinsic transformation between the LiDAR and a 6-axis IMU, play a significant role and are usually treated as prerequisites. However, such information is not always available in customized LiDAR-inertial systems. In this paper, we propose LI-Init: a complete, real-time LiDAR-inertial system initialization process that calibrates the temporal offset and extrinsic parameters between the LiDAR and the IMU, as well as the gravity vector and IMU bias, by aligning the states estimated from LiDAR measurements with those measured by the IMU. We implement the proposed method as an initialization module that, when enabled, automatically detects the degree of excitation of the collected data and calibrates, on the fly, the temporal offset, extrinsics, gravity vector, and IMU bias, which are then used as high-quality initial state values for real-time LiDAR-inertial odometry systems. Experiments with different types of LiDARs and LiDAR-inertial combinations show the robustness, adaptability, and efficiency of our initialization method. The implementation of our LiDAR-inertial initialization process LI-Init and the test data are open-sourced on GitHub and integrated into the state-of-the-art LiDAR-inertial odometry system FAST-LIO2.
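The alignment idea can be illustrated with the simplified Python sketch below, which assumes the temporal offset and extrinsic rotation are already compensated: the gyroscope bias follows from comparing IMU rates against rates differentiated from the LiDAR odometry, and the gravity vector from comparing IMU specific force against LiDAR-derived acceleration. This is a conceptual illustration, not the LI-Init implementation.

```python
# A minimal sketch of bias/gravity estimation by LiDAR-IMU state alignment.
import numpy as np

def estimate_bias_and_gravity(lidar_omega, imu_gyro, lidar_acc, imu_acc, R_wb):
    """lidar_omega: N x 3 body-frame angular velocities and lidar_acc: N x 3
    world-frame accelerations, both differentiated from LiDAR odometry poses;
    imu_gyro, imu_acc: N x 3 raw IMU readings (body frame); R_wb: list of N
    world-from-body rotation matrices."""
    # Gyroscope bias: average gap between IMU rates and LiDAR-derived rates.
    gyro_bias = np.mean(imu_gyro - lidar_omega, axis=0)

    # Gravity: the accelerometer measures specific force f_b = R_bw (a_w - g_w),
    # so g_w ~= a_w - R_wb f_b, averaged over the trajectory.
    g_samples = [a_w - R @ f_b for a_w, R, f_b in zip(lidar_acc, R_wb, imu_acc)]
    gravity = np.mean(g_samples, axis=0)

    # The accelerometer bias could then be refined using the known gravity
    # magnitude (9.81 m/s^2).
    return gyro_bias, gravity
```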
We propose a method for compressing full-resolution video sequences with implicit neural representations. Each frame is represented as a neural network that maps coordinate positions to pixel values. We use a separate implicit network to modulate the coordinate inputs, enabling efficient motion compensation between frames. Together with a small residual network, this allows us to efficiently compress P-frames relative to the preceding frame. We further reduce the bitrate by storing the network weights with learned integer quantization. Our method, which we call implicit pixel flow (IPF), offers several simplifications over established neural video codecs: it does not require the receiver to have access to a pretrained neural network, does not use expensive interpolation-based warping operations, and does not require a separate training dataset. We demonstrate the feasibility of neural implicit compression on image and video data.
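A minimal PyTorch sketch of the concept, with assumed network sizes, is given below: an implicit network maps pixel coordinates to RGB for an I-frame, while a P-frame uses a small flow network to offset the coordinates fed to the previous frame's (frozen) implicit network, plus a small residual correction. This illustrates the idea rather than reproducing the IPF architecture or its integer quantization.

```python
# A minimal sketch of implicit frame representation with coordinate modulation.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64, layers=3):
    mods, d = [], in_dim
    for _ in range(layers - 1):
        mods += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)

class ImplicitFrame(nn.Module):
    """I-frame: (x, y) coordinates in [-1, 1] -> RGB."""
    def __init__(self):
        super().__init__()
        self.net = mlp(2, 3)

    def forward(self, coords):                 # coords: (N, 2)
        return torch.sigmoid(self.net(coords))

class ImplicitPFrame(nn.Module):
    """P-frame: a flow network offsets the coordinates fed to the previous
    frame's (frozen) implicit network; a small residual corrects the rest."""
    def __init__(self, prev_frame: ImplicitFrame):
        super().__init__()
        self.prev = prev_frame
        for p in self.prev.parameters():       # reuse, but do not update, the I-frame
            p.requires_grad_(False)
        self.flow = mlp(2, 2, hidden=32)       # coordinate modulation ("pixel flow")
        self.residual = mlp(2, 3, hidden=32)   # small color correction

    def forward(self, coords):
        warped = coords + 0.1 * torch.tanh(self.flow(coords))   # bounded offsets
        base = self.prev(warped)               # gradients flow through the coords
        return torch.clamp(base + 0.1 * self.residual(coords), 0.0, 1.0)
```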
We introduce a video compression algorithm based on instance-adaptive learning. For each video sequence to be transmitted, we fine-tune a pretrained compression model. The optimal parameters are sent to the receiver along with the latent code. By entropy coding the parameter updates under a suitable mixture model, we ensure that the network parameters can be encoded efficiently. This instance-adaptive compression algorithm is agnostic to the choice of base model and has the potential to improve any neural video codec. On the UVG, HEVC, and Xiph datasets, our codec improves the performance of a low-latency scale-space flow model by 21% to 26% in BD-rate savings, and that of a state-of-the-art B-frame model by 17% to 20% in BD-rate savings. We also demonstrate that instance-adaptive fine-tuning improves robustness to domain shift. Finally, our method reduces the capacity requirements of compression models: we show that it achieves state-of-the-art performance even after the network size is reduced by 72%.
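The following Python sketch, built on a placeholder codec interface, illustrates the instance-adaptive idea: the pretrained model is fine-tuned on one sequence while the loss also charges an estimated bit cost for the quantized parameter updates under a crude spike-and-slab-like prior. The prior, quantization step, and codec interface are assumptions, not the paper's model.

```python
# A minimal, hypothetical sketch of per-sequence fine-tuning with a parameter-update
# rate term; not the paper's implementation.
import copy
import torch

def delta_bits(model, base_model, delta_scale=1e-3, spike_prob=0.9):
    """Rough bit estimate for quantized parameter deltas: zero deltas are nearly
    free (spike), nonzero deltas pay a cost growing with magnitude (slab).
    Note: torch.round blocks gradients; a straight-through estimator would be
    used in practice, so here the term mostly acts as a monitor/regularizer."""
    bits = 0.0
    for p, p0 in zip(model.parameters(), base_model.parameters()):
        q = torch.round((p - p0) / delta_scale)
        nonzero = (q != 0).float()
        bits = bits + ((1 - nonzero) * (-torch.log2(torch.tensor(spike_prob)))
                       + nonzero * (-torch.log2(torch.tensor(1 - spike_prob))
                                    + 2.0 * torch.log2(1.0 + q.abs()))).sum()
    return bits

def finetune_on_sequence(codec, frames, lam=0.01, beta=1e-6, steps=500, lr=1e-4):
    """codec(frames) is an assumed interface returning (reconstruction, latent_bits)."""
    base = copy.deepcopy(codec)              # frozen copy of the pretrained weights
    opt = torch.optim.Adam(codec.parameters(), lr=lr)
    for _ in range(steps):
        recon, latent_bits = codec(frames)
        distortion = torch.mean((recon - frames) ** 2)
        loss = distortion + lam * latent_bits + beta * delta_bits(codec, base)
        opt.zero_grad(); loss.backward(); opt.step()
    return codec
```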
ImageNet pre-training has enabled state-of-the-art results on many tasks. Despite its recognized contribution to generalization, we observe in this study that ImageNet pre-training also transfers adversarial non-robustness from the pre-trained model to the fine-tuned model in downstream classification tasks. We first conduct experiments on various datasets and network backbones to uncover the adversarial non-robustness of fine-tuned models. Further analysis examines the knowledge learned by fine-tuned and standard models, revealing that the cause of this non-robustness is the non-robust features transferred from the ImageNet pre-trained model. Finally, we analyze the feature-learning preferences of the pre-trained model, explore the factors influencing robustness, and introduce a simple robust ImageNet pre-training solution. Our code is available at \url{https://github.com/jiamingzhang94/ImageNet-Pretraining-transfers-non-robustness}.
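As a sketch of the kind of robustness check implied above (not the authors' evaluation protocol), the following function attacks a fine-tuned classifier with single-step FGSM and reports how far accuracy drops from its clean value; a large gap reflects the adversarial non-robustness discussed in the abstract.

```python
# A minimal FGSM robustness check for a fine-tuned classifier; the model and
# data loader are assumed to exist.
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, loader, epsilon=4 / 255, device="cpu"):
    """Return (clean_accuracy, adversarial_accuracy) under a one-step FGSM attack."""
    model.eval().to(device)
    clean_ok = adv_ok = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        logits = model(x)
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x)[0]
        x_adv = torch.clamp(x + epsilon * grad.sign(), 0.0, 1.0)

        with torch.no_grad():
            clean_ok += (logits.argmax(1) == y).sum().item()
            adv_ok += (model(x_adv).argmax(1) == y).sum().item()
            total += y.numel()
    return clean_ok / total, adv_ok / total
```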