Traversability estimation for mobile robots in off-road environments requires more than the conventional semantic segmentation used in constrained settings such as on-road conditions. Recently, approaches that learn traversability from past driving experiences in a self-supervised manner have emerged, as they can significantly reduce human labeling costs and labeling errors. However, self-supervised data provide supervision only for the regions that were actually traversed, inducing epistemic uncertainty due to the scarcity of negative information; negative data are rarely harvested, since the system can be severely damaged while logging them. To mitigate this uncertainty, we introduce a deep metric learning-based method that incorporates unlabeled data together with a few positive and negative prototypes, trained jointly with semantic segmentation and traversability regression. To evaluate the proposed framework rigorously, we introduce a new metric that comprehensively assesses both segmentation and regression. Additionally, we construct `Dtrail', a driving dataset collected in off-road environments with a mobile robot platform that contains a wide variety of negative data. We examine our method on Dtrail as well as the publicly available SemanticKITTI dataset.
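The abstract does not spell out the metric-learning objective, so the snippet below is only a minimal sketch of prototype-based deep metric learning with unlabeled pixels, written in PyTorch. The prototype tensors, the entropy term for unlabeled embeddings, and all shapes are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def prototype_metric_loss(emb, labels, prototypes, unlabeled_weight=0.1):
    """Prototype-based metric loss (a sketch, not the paper's exact loss).

    emb:        (N, D) pixel embeddings from the segmentation backbone
    labels:     (N,) with 0 = negative, 1 = positive, -1 = unlabeled
    prototypes: (2, D) learnable negative/positive prototype vectors
    """
    # Squared distances to each prototype become logits (closer = larger logit).
    dists = torch.cdist(emb, prototypes) ** 2          # (N, 2)
    logits = -dists

    labeled = labels >= 0
    loss_sup = F.cross_entropy(logits[labeled], labels[labeled])

    # For unlabeled embeddings, minimize assignment entropy so they commit to
    # one prototype instead of staying ambiguous (one possible way to
    # "incorporate unlabeled data"; the paper may use a different objective).
    probs = F.softmax(logits[~labeled], dim=1)
    loss_unsup = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    return loss_sup + unlabeled_weight * loss_unsup


# Toy usage with random data.
emb = torch.randn(6, 16)
labels = torch.tensor([1, 0, -1, -1, 1, -1])
prototypes = torch.randn(2, 16, requires_grad=True)
print(prototype_metric_loss(emb, labels, prototypes))
```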
For autonomous vehicles to navigate safely and successfully in unstructured environments, the traversability of terrain should vary depending on the driving capability of the vehicle. Actual driving experiences can be used to learn vehicle-specific traversability in a self-supervised manner. However, existing methods for learning self-supervised traversability are not scalable to learning the traversability of various vehicles. In this work, we introduce a scalable framework for learning self-supervised traversability, which learns traversability directly from vehicle-terrain interactions without any human supervision. We train a neural network that predicts the proprioceptive experience a vehicle would undergo from a 3D point cloud. Using a novel PU (positive-unlabeled) learning method, the network simultaneously identifies non-traversable regions where the estimates can be overconfident. With driving data of various vehicles gathered from simulation and the real world, we show that our framework is capable of learning self-supervised traversability for a variety of vehicles. By integrating our framework with a model predictive controller, we further demonstrate that the estimated traversability leads to effective navigation that yields distinct maneuvers depending on the driving characteristics of the vehicle. In addition, experimental results validate the ability of our method to identify and avoid non-traversable regions.
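The "novel PU learning method" is not detailed in the abstract; as background, the sketch below implements the standard non-negative positive-unlabeled risk estimator (Kiryo et al., 2017), which such a method could build on. The sigmoid surrogate loss and the class prior pi_p are assumptions.

```python
import torch

def nn_pu_risk(scores_pos, scores_unl, pi_p=0.5):
    """Non-negative PU risk (Kiryo et al., 2017) with a sigmoid surrogate loss.

    scores_pos: raw network outputs for positive (traversed) samples
    scores_unl: raw network outputs for unlabeled samples
    pi_p:       assumed class prior of the positive class
    """
    loss = lambda z, y: torch.sigmoid(-y * z)     # surrogate loss in [0, 1]

    risk_p_pos = loss(scores_pos, +1).mean()      # positives treated as positive
    risk_p_neg = loss(scores_pos, -1).mean()      # positives treated as negative
    risk_u_neg = loss(scores_unl, -1).mean()      # unlabeled treated as negative

    # Estimated negative-class risk; clamped at zero so it cannot go negative
    # and cause overfitting (the "non-negative" correction).
    risk_neg = torch.clamp(risk_u_neg - pi_p * risk_p_neg, min=0.0)
    return pi_p * risk_p_pos + risk_neg


scores_pos = torch.randn(32)   # e.g. logits for traversed points
scores_unl = torch.randn(128)  # logits for unlabeled points
print(nn_pu_risk(scores_pos, scores_unl, pi_p=0.4))
```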
Nonholonomic vehicle motion has been studied extensively using physics-based models. A common approach when using these models is to employ a linear tire model to account for wheel/ground interactions, which may fail to fully capture the nonlinear and complex dynamics that arise under various environments. Neural network models, on the other hand, have been widely used in this domain and have demonstrated powerful function approximation capabilities. However, such black-box learning strategies completely discard the existing, well-established physical knowledge. In this paper, we seamlessly combine deep learning with a fully differentiable physics model to endow the neural network with the available prior knowledge. The proposed model shows better generalization performance than a vanilla neural network model by a large margin. We also show that the latent features of our model can accurately represent lateral tire forces without any additional training. Finally, we develop a risk-aware model predictive controller using proprioceptive information derived from the latent features. We validate our approach on two autonomous driving tasks under unknown friction, outperforming baseline control frameworks.
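How the physics model and the network are fused is not specified; one common pattern consistent with the description is a differentiable physics step plus a learned residual, sketched below. The kinematic bicycle step, state layout, and layer sizes are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HybridDynamics(nn.Module):
    """Differentiable physics step plus a learned residual (illustrative only)."""

    def __init__(self, wheelbase=2.7, dt=0.05):
        super().__init__()
        self.L, self.dt = wheelbase, dt
        # The residual network corrects what the simple physics model misses
        # (e.g. nonlinear tire effects); sizes are arbitrary.
        self.residual = nn.Sequential(
            nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 4)
        )

    def physics_step(self, state, action):
        # state: (B, 4) = [x, y, yaw, v]; action: (B, 2) = [accel, steer]
        x, y, yaw, v = state.unbind(dim=1)
        accel, steer = action.unbind(dim=1)
        dx = torch.stack([
            v * torch.cos(yaw),
            v * torch.sin(yaw),
            v * torch.tan(steer) / self.L,
            accel,
        ], dim=1)
        return state + self.dt * dx

    def forward(self, state, action):
        nominal = self.physics_step(state, action)
        correction = self.residual(torch.cat([state, action], dim=1))
        return nominal + correction


model = HybridDynamics()
state, action = torch.randn(8, 4), torch.randn(8, 2)
print(model(state, action).shape)   # torch.Size([8, 4])
```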
Neural networks have been increasingly employed in model predictive controllers (MPC) to control nonlinear dynamic systems. However, MPC still faces the problem that its achievable update rate is insufficient to cope with model uncertainty and external disturbances. In this paper, we present a novel control scheme that designs an optimal tracking controller using the neural network dynamics of the MPC, so that it can be applied as a plug-and-play extension to any existing model-based feedforward controller. We also describe how our method handles neural networks that contain history information and therefore do not follow the general form of dynamics. The method is evaluated by its performance on classic control benchmarks with external disturbances. We further extend the control framework to an aggressive autonomous driving task with unknown friction. In all experiments, our method outperforms the compared methods. Our controller also exhibits low control effort, indicating that the feedback controller does not interfere with the optimal commands of the MPC.
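The abstract leaves the controller design open; one way to obtain an optimal tracking controller from the MPC's neural network dynamics is to linearize the learned model around the MPC reference and solve a discrete-time LQR problem, as sketched below. The toy network, state layout, and cost weights are assumptions.

```python
import numpy as np
import torch
from scipy.linalg import solve_discrete_are

def lqr_gain_from_nn(dynamics, x_ref, u_ref, Q, R):
    """Linearize learned dynamics x' = f(x, u) at an MPC reference point and
    compute a discrete-time LQR feedback gain."""
    # Jacobians A = df/dx, B = df/du at the reference.
    A, B = torch.autograd.functional.jacobian(dynamics, (x_ref, u_ref))
    A, B = A.detach().numpy(), B.detach().numpy()

    # Discrete algebraic Riccati equation -> optimal feedback gain K.
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)


# Toy learned dynamics standing in for the MPC's neural network model.
net = torch.nn.Sequential(torch.nn.Linear(4 + 2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 4))
dynamics = lambda x, u: x + 0.05 * net(torch.cat([x, u]))

x_ref, u_ref = torch.zeros(4), torch.zeros(2)
K = lqr_gain_from_nn(dynamics, x_ref, u_ref, Q=np.eye(4), R=np.eye(2))

# Plug-in feedback on top of the MPC's feedforward command:
# u = u_mpc + K @ (x_ref - x_measured)
print(K.shape)   # (2, 4)
```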
Here, we present a new method for generating smooth control sequences in model predictive path integral control (MPPI) tasks without any additional smoothing algorithm. Our method effectively alleviates chattering in the sampling procedure while the information-theoretic formulation of MPPI remains the same. We demonstrate the proposed method on a challenging autonomous driving task, with quantitative evaluations against different algorithms. A neural network vehicle model for estimating the system dynamics under varying road friction conditions is also presented. Our video can be found at: \url{https://youtu.be/o3nmi0ujfqg}.
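For context, a single vanilla MPPI update is sketched below; it shows where the sampling-induced chattering comes from (the per-timestep weighted average of injected noise) but does not reproduce the paper's smoothing-free remedy. The temperature, noise scale, and toy dynamics are arbitrary.

```python
import numpy as np

def mppi_step(dynamics, cost, x0, u_nominal, n_samples=256,
              noise_sigma=0.5, temperature=1.0, rng=None):
    """One vanilla MPPI update of a control sequence u_nominal of shape (T, m)."""
    rng = np.random.default_rng() if rng is None else rng
    T, m = u_nominal.shape

    noise = rng.normal(0.0, noise_sigma, size=(n_samples, T, m))
    costs = np.zeros(n_samples)

    # Roll out each perturbed control sequence and accumulate its cost.
    for k in range(n_samples):
        x = x0.copy()
        for t in range(T):
            u = u_nominal[t] + noise[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)

    # Information-theoretic weights: softmax of negative cost.
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()

    # The weighted average of the injected noise updates the nominal sequence;
    # this per-timestep averaging is the source of the chattering that the
    # paper aims to remove.
    return u_nominal + np.einsum("k,ktm->tm", weights, noise)


# Toy double-integrator example.
dynamics = lambda x, u: x + 0.1 * np.array([x[1], u[0]])
cost = lambda x, u: x[0] ** 2 + 0.01 * u[0] ** 2
u0 = np.zeros((20, 1))
print(mppi_step(dynamics, cost, np.array([1.0, 0.0]), u0).shape)   # (20, 1)
```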
Photometric consistency loss is one of the representative objective functions commonly used for self-supervised monocular depth estimation. However, this loss often leads to unstable depth predictions in textureless or occluded regions due to incorrect guidance. Recent self-supervised learning approaches tackle this issue by utilizing feature representations explicitly learned from auto-encoders, expecting better discriminability than the input image. Despite the use of auto-encoded features, we observe that these methods do not embed features that are as discriminative as the auto-encoded ones. In this paper, we propose a residual guidance loss that enables the depth estimation network to embed discriminative features by transferring the discriminability of auto-encoded features. We conduct experiments on the KITTI benchmark and verify the superiority and orthogonality of our method with respect to other state-of-the-art methods.
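As background, the photometric consistency loss referred to here is typically a weighted combination of SSIM and L1 between the target image and the view synthesized from the predicted depth. A common formulation is sketched below; it is not the proposed residual guidance loss, and the 3x3 SSIM window and weight alpha are conventional choices rather than the paper's.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM distance over 3x3 neighborhoods for (B, C, H, W) images."""
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    sigma_x = F.avg_pool2d(x ** 2, 3, 1, padding=1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y ** 2, 3, 1, padding=1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def photometric_loss(target, reconstructed, alpha=0.85):
    """Per-pixel photometric consistency: alpha * SSIM + (1 - alpha) * L1."""
    l1 = (target - reconstructed).abs().mean(1, keepdim=True)
    dssim = ssim(target, reconstructed).mean(1, keepdim=True)
    return alpha * dssim + (1 - alpha) * l1


target = torch.rand(2, 3, 64, 64)
warped = torch.rand(2, 3, 64, 64)   # image warped with predicted depth and pose
print(photometric_loss(target, warped).shape)   # torch.Size([2, 1, 64, 64])
```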
Mobile robots are currently developing rapidly and finding numerous applications in industry. However, several problems remain related to their practical use, such as the need for expensive hardware and high power consumption. In this study, we propose a navigation system that is operable on a low-end computer with an RGB-D camera, along with a mobile robot platform for operating an integrated autonomous driving system. The proposed system does not require LiDARs or a GPU. Our raw depth-image ground segmentation approach extracts a traversability map for the safe driving of low-body mobile robots. It is designed to guarantee real-time performance on a low-cost, commercial off-the-shelf single-board computer with integrated SLAM, global path planning, and motion planning. Using the traversability map, we apply both rule-based and learning-based navigation policies. Running sensor data processing and other autonomous driving functions simultaneously, our navigation policy executes rapidly, issuing control commands at a refresh rate of 18 Hz, whereas other systems have slower refresh rates. Our method outperforms current state-of-the-art navigation approaches under limited computational resources, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in an indoor environment. Our entire work, including hardware and software, is released under an open-source license (https://github.com/shinkansan/2019-ugrp-doom). A detailed video is available at https://youtu.be/mf3iufuhppm.
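The ground segmentation procedure itself is not described in the abstract; as a rough illustration, the snippet below back-projects a depth image with pinhole intrinsics and thresholds point height to mark floor pixels. The intrinsics, camera height, and tolerance are made-up values, and the real pipeline is likely more involved.

```python
import numpy as np

def ground_mask_from_depth(depth, fy, cy, cam_height=0.2, tol=0.05):
    """Label pixels whose back-projected height is near the floor plane.

    depth:      (H, W) metric depth image from a level, forward-facing camera
    cam_height: assumed camera height above the floor, in meters
    tol:        height tolerance for accepting a point as ground, in meters
    """
    h, _ = depth.shape
    v = np.arange(h, dtype=float)[:, None]        # pixel row index

    # Pinhole back-projection of the vertical coordinate (y points down,
    # z points forward in the camera frame).
    y = (v - cy) * depth / fy

    valid = depth > 0
    return valid & (np.abs(y - cam_height) < tol)


# Synthetic depth of a flat floor 0.2 m below the camera (rows below the horizon).
rows = np.arange(480, dtype=float)[:, None] * np.ones((1, 640))
depth = np.where(rows > 239.5, 0.2 * 525.0 / np.maximum(rows - 239.5, 1e-6), 0.0)
mask = ground_mask_from_depth(depth, fy=525.0, cy=239.5)
print(mask.sum(), "pixels labeled as ground")
```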
Modern Deep Learning (DL) models have grown to sizes requiring massive clusters of specialized, high-end nodes to train. Designing such clusters to maximize both performance and utilization, so as to amortize their steep cost, is a challenging task that requires a careful balance of compute, memory, and network resources. Moreover, each model exposes a plethora of tuning knobs that drastically affect performance, with optimal values often depending on the underlying cluster's characteristics, which necessitates a complex cluster-workload co-design process. To facilitate the design space exploration of such massive DL training clusters, we introduce COMET, a holistic cluster design methodology and workflow to jointly study the impact of parallelization strategies and key cluster resource provisioning on the performance of distributed DL training. We develop a step-by-step process to establish a reusable and flexible methodology, and demonstrate its application with a case study of training a Transformer-1T model on a cluster of variable compute, memory, and network resources. Our case study demonstrates COMET's utility in identifying promising architectural optimization directions and guiding system designers in configuring key model and cluster parameters.
3D-aware image synthesis focuses on preserving spatial consistency while generating high-resolution images with fine details. Recently, the Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate generative NeRFs and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features into the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated on three image datasets: AFHQ, CelebA, and Cars. As a result, our model shows strong 3D consistency with fine details and smooth interpolation under conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF achieves a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of the generated 3D-aware images for each class of the datasets, as it is possible to synthesize class-conditional images with $\text{C}^{3}$G-NeRF.
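The paper's architecture is not given here; as a loose illustration of "projecting conditional features into the generator", the sketch below concatenates a class embedding with positionally encoded 3D points before a NeRF-style MLP. All dimensions and the embedding scheme are assumptions.

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Standard NeRF positional encoding: (B, 3) points -> (B, 3 * 2 * n_freqs)."""
    freqs = 2.0 ** torch.arange(n_freqs) * torch.pi
    angles = x[..., None] * freqs                 # (B, 3, n_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(start_dim=1)

class ConditionalNeRFBlock(nn.Module):
    """Toy conditional radiance field: class features are concatenated with the
    encoded point before the MLP (one simple way to inject conditional features;
    not the paper's exact architecture)."""

    def __init__(self, n_classes=3, n_freqs=6, emb_dim=16, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, emb_dim)
        in_dim = 3 * 2 * n_freqs + emb_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # RGB + density
        )

    def forward(self, points, class_idx):
        cond = self.embed(class_idx)              # (B, emb_dim)
        feat = torch.cat([positional_encoding(points), cond], dim=-1)
        return self.mlp(feat)


block = ConditionalNeRFBlock()
points = torch.rand(8, 3)
out = block(points, torch.tensor([0, 1, 2, 0, 1, 2, 0, 1]))
print(out.shape)   # torch.Size([8, 4])
```

Continuous manipulation could then be approximated by interpolating between class embeddings, though the paper may realize it differently.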
In both terrestrial and marine ecology, physical tagging is a frequently used method to study population dynamics and behavior. However, such tagging techniques are increasingly being replaced by individual re-identification using image analysis. This paper introduces a contrastive learning-based model for identifying individuals. The model uses the first parts of the Inception v3 network, supported by a projection head, and we use contrastive learning to find similar or dissimilar image pairs from a collection of uniform photographs. We apply this technique to the corkwing wrasse, Symphodus melops, an ecologically and commercially important fish species. Photos are taken during repeated catches of the same individuals from a wild population, where the intervals between individual sightings range from a few days to several years. Our model achieves a one-shot accuracy of 0.35, a 5-shot accuracy of 0.56, and a 100-shot accuracy of 0.88 on our dataset.
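A minimal PyTorch sketch of the described setup, an Inception v3 trunk with a projection head trained contrastively, is given below. The projection sizes, the SimCLR-style NT-Xent loss, and the untrained torchvision weights are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class ReIDEncoder(nn.Module):
    """Inception v3 trunk with its classifier replaced by a projection head."""

    def __init__(self, proj_dim=128):
        super().__init__()
        self.backbone = models.inception_v3(weights=None, aux_logits=False)
        self.backbone.fc = nn.Identity()                 # expose 2048-d pooled features
        self.head = nn.Sequential(
            nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, proj_dim)
        )

    def forward(self, x):                                # x: (B, 3, 299, 299)
        return F.normalize(self.head(self.backbone(x)), dim=1)


def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss for two batches of L2-normalized embeddings."""
    z = torch.cat([z1, z2], dim=0)                       # (2B, D)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(len(z), dtype=torch.bool), float("-inf"))
    b = len(z1)
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])  # positive indices
    return F.cross_entropy(sim, targets)


encoder = ReIDEncoder()
views = torch.rand(4, 3, 299, 299)                       # photos of four individuals
z1, z2 = encoder(views), encoder(views.flip(-1))         # trivial "augmentation"
print(nt_xent(z1, z2))
```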