People with blindness and low vision (PBLV) face significant challenges in locating final destinations or specific target objects in unfamiliar environments. Moreover, beyond initially detecting and localizing a target object, approaching the final target from one's current position is often frustrating and challenging, especially when the user strays from the initially planned route to avoid obstacles. In this paper, we develop a novel wearable navigation solution that provides real-time guidance for users to effectively approach target objects of interest in unfamiliar environments. Our system contains two key visual computing functions: initial target-object localization in 3D and continuous estimation of the user's trajectory, both based on the 2D video captured by a low-cost monocular camera mounted on the front of the user's chest. These functions enable the system to suggest an initial navigation path, continuously update the path as the user moves, and offer timely recommendations for correcting the user's path. Our experiments demonstrate that the system is able to operate with an error of less than 0.5 meters both outdoors and indoors. The system is entirely vision-based, does not need any additional sensors for navigation, and the computation can run on a Jetson processor in the wearable system to facilitate real-time navigation assistance.
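As a hedged illustration of the kind of real-time path-correction guidance such a system might issue (not the paper's actual method), the sketch below computes a signed turn angle from an estimated planar position and heading toward the next waypoint on the planned path. The function name, coordinates, and guidance policy are illustrative assumptions.

```python
# A minimal sketch, assuming a 2D ground-plane state (x, y) and a heading in
# radians; all values are hypothetical, not taken from the paper.
import numpy as np

def turn_advice(position, heading_rad, waypoint):
    """Signed angle (radians, +left / -right) from the current heading to the waypoint."""
    to_wp = np.asarray(waypoint, dtype=float) - np.asarray(position, dtype=float)
    target_heading = np.arctan2(to_wp[1], to_wp[0])
    # Wrap the difference into [-pi, pi) so the advice is the shorter turn.
    return (target_heading - heading_rad + np.pi) % (2 * np.pi) - np.pi

angle = turn_advice(position=(0.0, 0.0), heading_rad=0.0, waypoint=(2.0, 2.0))
print(np.degrees(angle))  # 45.0 -> e.g., announce "turn left 45 degrees"
```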
Based on WHO statistics, many individuals suffer from visual problems, and their number is increasing yearly. One of their most critical needs is the ability to navigate safely, which is why researchers are trying to create and improve various navigation systems. This paper presents a navigation concept based on visual SLAM and YOLO using monocular cameras. Using the ORB-SLAM algorithm, our concept creates a map from a predefined route that a blind person uses most often. Since visually impaired people are curious about their environment and, of course, to guide them properly, obstacle detection has been added to the system. As mentioned earlier, safe navigation is vital for visually impaired people, so our concept includes a path-following part. This part consists of three steps: obstacle distance estimation, path deviation detection, and next-step prediction, all performed with monocular cameras; a sketch of the deviation step follows below.
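As a hedged sketch of the path-deviation step, the snippet below compares the current camera position from SLAM against the pre-mapped route and flags a deviation once the offset to the nearest waypoint exceeds a threshold. The waypoints, query position, and the 0.5 m threshold are illustrative assumptions, not values from the paper.

```python
# Illustrative deviation check against a taught route (assumed data, see above).
import numpy as np

def path_deviation(position, route, threshold=0.5):
    """position: (x, z) ground-plane position from SLAM; route: Nx2 waypoints.
    Returns (deviated?, offset in map units, index of nearest waypoint)."""
    offsets = np.linalg.norm(route - position, axis=1)
    i = int(np.argmin(offsets))
    return offsets[i] > threshold, float(offsets[i]), i

route = np.array([[0.0, 0.0], [0.0, 2.0], [0.0, 4.0], [1.0, 6.0]])
print(path_deviation(np.array([0.8, 2.1]), route))  # (True, 0.806..., 1)
```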
Vision-based localization approaches now underpin newly emerging navigation pipelines for a multitude of use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is expensive, time-consuming, and/or often infeasible to deploy. In this paper, we propose a vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision. Given a query image taken by the end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are used in downstream tasks that employ a weighted-average method to estimate the end user's location and a perspective-n-point (PnP) algorithm to estimate the end user's direction. Additionally, the system implements Dijkstra's algorithm to compute the shortest path based on a navigable map that includes the trip origin and destination. The topometric map used for localization and navigation is built using a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. The sequential images used for map construction can be collected in a pre-mapping step or scavenged through public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera using our custom mobile application. For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that our system can achieve localization with an average error of less than 1 meter without knowledge of the camera's intrinsic parameters, such as focal length.
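The shortest-path step is standard Dijkstra; a minimal sketch over a navigable map represented as a weighted adjacency dict is shown below. The node names and edge costs are hypothetical, not taken from the paper's map.

```python
# A minimal Dijkstra sketch, assuming the navigable map is a dict
# {node: [(neighbor, edge_cost), ...]}; all names/costs are illustrative.
import heapq

def dijkstra(graph, origin, destination):
    dist = {origin: 0.0}
    prev = {}
    queue = [(0.0, origin)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == destination:
            break
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    # Reconstruct the path by walking predecessors back from the destination.
    path, node = [], destination
    while node != origin:
        path.append(node)
        node = prev[node]
    path.append(origin)
    return list(reversed(path)), dist[destination]

# Example: corridor graph with edge costs in meters (hypothetical values).
corridors = {
    "lobby": [("hall_A", 12.0), ("hall_B", 20.0)],
    "hall_A": [("room_305", 8.0)],
    "hall_B": [("room_305", 4.0)],
}
print(dijkstra(corridors, "lobby", "room_305"))  # (['lobby', 'hall_A', 'room_305'], 20.0)
```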
In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.
Visually impaired people usually find it hard to travel independently in many public places, such as airports and shopping malls, due to the problems of obstacle avoidance and guidance to the desired location. Therefore, in highly dynamic indoor environments, improving the localization and navigation accuracy of indoor navigation robots so that they can guide the visually impaired well becomes a problem. One way is to use visual SLAM. However, typical visual SLAM either assumes a static environment, which may lead to less accurate results in dynamic environments, or assumes that all targets are dynamic and removes all of their feature points, sacrificing computational speed to a large extent given the available computational power. This paper seeks to explore marginal localization and navigation systems for indoor navigation robotics. The proposed system is designed to improve localization and navigation accuracy in highly dynamic environments by identifying and tracking potentially moving objects and using vector field histograms for local path planning and obstacle avoidance; a sketch of this idea follows below. The system has been tested on a public indoor RGB-D dataset, and the results show that the new system improves accuracy and robustness while reducing computation time in highly dynamic indoor scenes.
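A hedged sketch of the vector-field-histogram idea: bin obstacle points by bearing with distance weighting, then steer toward the free sector closest to the goal direction. The bin count, occupancy threshold, and obstacle layout below are illustrative assumptions, not the paper's parameters.

```python
# A minimal VFH-style steering sketch under stated assumptions.
import numpy as np

def vfh_steering(obstacles, goal_bearing, bins=36, threshold=0.5):
    """obstacles: Nx2 points relative to the robot; nearer points weigh more."""
    angles = np.arctan2(obstacles[:, 1], obstacles[:, 0])
    weights = 1.0 / np.linalg.norm(obstacles, axis=1)
    hist, edges = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=weights)
    centers = (edges[:-1] + edges[1:]) / 2
    free = centers[hist < threshold]                     # sectors considered passable
    return free[np.argmin(np.abs(free - goal_bearing))]  # free sector closest to goal

obstacles = np.array([[1.0, 0.1], [1.2, -0.1], [0.9, 0.0]])   # cluster dead ahead
print(np.degrees(vfh_steering(obstacles, goal_bearing=0.0)))  # ~ -15: skirt the cluster
```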
We propose a visual teach-and-repeat (VTR) algorithm for ground robots with a fixed monocular camera, using semantic landmarks extracted from environmental objects. The proposed algorithm is robust to changes in the starting pose of the camera/robot, where the pose is defined as the planar position plus the orientation about the vertical axis. VTR consists of a teach phase, in which the robot moves along a prescribed path, and a repeat phase, in which the robot tries to repeat the same path starting from the same or a different pose. Most available VTR algorithms are pose-dependent and cannot perform well in the repeat phase when starting from an initial pose far from that of the teach phase. To achieve more robust pose independence, the key is to generate, during the teach phase, a 3D semantic map of the environment containing the camera trajectory and the positions of surrounding objects. For a specific implementation, we use ORB-SLAM to collect camera poses and the 3D point cloud of the environment, and YOLOv3 to detect objects in the environment. We then combine the two outputs to build the semantic map. In the repeat phase, we relocalize the robot based on the detected objects and the stored semantic map. The robot is then able to move toward the taught path and repeat it in both the forward and backward directions. We have tested the proposed algorithm in different scenarios and compared it with the two most relevant studies. We also compared our algorithm with two image-based relocalization methods: one based purely on ORB-SLAM, and the other combining SuperGlue and RANSAC. The results show that our algorithm is much more robust with respect to pose variations and environmental changes. Our code and data are available at the following GitHub page: https://github.com/mmahdavian/semantic_visual_teach_repeat.
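A minimal sketch of the map-building step under stated assumptions: the SLAM system supplies a keyframe pose, the detector supplies an object label, and a camera-frame object position (e.g., from detection bearing plus triangulated depth) is lifted into the world frame and appended to the semantic map. The pose, label, and position values below are placeholders, not the paper's data structures.

```python
# Illustrative semantic-map entry construction from a pose and a detection.
import numpy as np

semantic_map = []  # list of (label, world_xyz)

def add_object(T_world_cam, label, p_cam):
    """T_world_cam: 4x4 camera pose from SLAM; p_cam: object position in the
    camera frame. Lifts the object into the world frame and stores it."""
    p_world = (T_world_cam @ np.append(p_cam, 1.0))[:3]
    semantic_map.append((label, p_world))

T = np.eye(4); T[:3, 3] = [1.0, 0.0, 3.0]         # camera 3 m down the corridor
add_object(T, "door", np.array([0.5, 0.0, 2.0]))  # door 2 m ahead, 0.5 m right
print(semantic_map)  # [('door', array([1.5, 0. , 5. ]))]
```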
In this work, we explore the use of objects in simultaneous localization and mapping in unseen worlds and propose an object-aided system (OA-SLAM). More precisely, we show that, compared to low-level points, the main benefit of objects lies in their higher-level semantic and discriminative power. Points, conversely, have better spatial localization accuracy than the generic coarse models (cuboids or ellipsoids) used to represent objects. We show that combining points and objects is of great interest for addressing the problem of camera pose recovery. Our main contributions are: (1) we improve the relocalization ability of a SLAM system using high-level object landmarks; (2) we build an automatic system capable of identifying, tracking, and reconstructing objects with 3D ellipsoids; (3) we show that object-based localization can be used to reinitialize or resume camera tracking. Our fully automatic system allows on-the-fly object mapping and enhanced pose-tracking recovery, which we believe can greatly benefit the AR community. Our experiments show that the camera can be relocalized from viewpoints where classical methods fail. We demonstrate that this localization allows a SLAM system to continue working despite tracking losses, which can happen frequently with an inexperienced user. Our code and test data are released at gitlab.inria.fr/tangram/oa-slam.
According to the World Health Organization, visual impairment is estimated to affect approximately 2.2 billion people worldwide. Currently, the visually impaired must rely on navigation aids to replace their sense of sight, such as the white cane or GPS (Global Positioning System) navigation, neither of which works indoors. The white cane cannot be used to determine the user's location within a room, while GPS often loses connectivity indoors and does not provide orientation information, making both methods unsuitable for indoor use. Therefore, this study seeks to develop a 3D imaging solution capable of enabling contactless navigation through complex indoor environments. Compared with the previous approach, the device can pinpoint the user's location and orientation while requiring only 53.1% of the memory and processing 125% faster. The device can also detect obstacles with 60.2% higher accuracy than the previous state-of-the-art model, while requiring only 41% of the memory and processing 260% faster. When tested with human participants, the device allowed for a 94.5% reduction in collisions with environmental obstacles and a 48.3% increase in walking speed, showing that my device enables the visually impaired to navigate more safely and quickly. In conclusion, this study demonstrates a 3D-imaging-based navigation system for the visually impaired. The approach can be used by a variety of mobile low-power devices, such as cell phones, ensuring this research is accessible to all.
Combining simultaneous localization and mapping (SLAM) estimation and dynamic scene modeling can efficiently enable robot autonomy in dynamic environments. Robot path planning and obstacle avoidance tasks rely on accurate estimates of the motion of dynamic objects in the scene. This paper introduces VDO-SLAM, a robust visual dynamic object-aware SLAM system that exploits semantic information to enable accurate motion estimation and tracking of dynamic rigid objects in the scene without any prior knowledge of the objects' shapes or geometric models. The proposed approach identifies and tracks the dynamic objects and static structure in the environment and integrates this information into a unified SLAM framework. This results in highly accurate estimates of the robot trajectory and the full SE(3) motion of the objects, as well as a spatiotemporal map of the environment. The system is able to extract linear velocity estimates from the objects' SE(3) motion, providing an important functionality for navigation in complex dynamic environments. We demonstrate the performance of the proposed system on a number of real indoor and outdoor datasets, and the results show consistent and substantial improvements over state-of-the-art algorithms. An open-source version of the source code is available.
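As a hedged sketch of how a linear-velocity estimate can be read off an object's frame-to-frame SE(3) motion, the snippet below applies the motion to the object centroid and divides the displacement by the frame interval. This follows the standard rigid-body relation rather than the paper's exact formulation; the motion matrix, centroid, and frame rate are hypothetical.

```python
# Minimal linear-velocity extraction from a 4x4 SE(3) motion (assumed inputs).
import numpy as np

def object_linear_velocity(H, centroid, dt):
    """H: 4x4 SE(3) motion of the object between consecutive frames (world frame).
    centroid: 3-vector, object centroid in the earlier frame. dt: frame interval (s)."""
    c = np.append(centroid, 1.0)           # homogeneous coordinates
    displacement = (H @ c)[:3] - centroid  # where the rigid motion moved the centroid
    return displacement / dt

# Example: pure translation of 0.5 m along x between frames at 10 Hz -> 5 m/s.
H = np.eye(4)
H[0, 3] = 0.5
print(object_linear_velocity(H, np.array([2.0, 0.0, 8.0]), dt=0.1))  # [5. 0. 0.]
```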
Simultaneous Localization and Mapping (SLAM) is the process of building a mutual relationship between the localization and mapping of the subject in its surrounding environment. With the help of different sensors, various types of SLAM systems have been developed to deal with the problem of building this relationship between localization and mapping. A limitation of the SLAM process is the lack of consideration of dynamic objects in the mapping of the environment. We propose Dynamic Object Tracking SLAM (DyOb-SLAM), a visual SLAM system that can localize and map the surrounding dynamic objects in the environment as well as track the dynamic objects in each frame. With the help of a neural network and a dense optical flow algorithm, dynamic and static objects in an environment can be differentiated. DyOb-SLAM creates two separate maps for the static and dynamic contents. For the static features, a sparse map is obtained; for the dynamic contents, a global trajectory map is created as output. As a result, a frame-to-frame, real-time dynamic object tracking system is obtained. With the pose calculation of the dynamic objects and the camera, DyOb-SLAM can estimate the speed of the dynamic objects over time. The performance of DyOb-SLAM is assessed by comparing it with a similar visual SLAM system, VDO-SLAM, measuring the camera and object pose errors as well as the object speed error.
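As a hedged sketch of how dense optical flow can help separate dynamic from static content (a rough stand-in for the system's actual pipeline, which also uses a neural network), the function below calls a detected object dynamic when the median flow magnitude inside its mask differs sufficiently from the background's. The threshold and the mask source are illustrative assumptions.

```python
# Illustrative dynamic/static check with Farneback dense optical flow.
import numpy as np
import cv2

def is_dynamic(prev_gray, gray, mask, threshold=2.0):
    """mask: boolean array from the detector, True on the object's pixels.
    Compares median flow on the object against the background as a crude
    proxy for ego-motion compensation."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    object_flow = np.median(magnitude[mask])
    background_flow = np.median(magnitude[~mask])
    return abs(object_flow - background_flow) > threshold
```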
This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.
Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry / SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti
In this study, we propose a novel visual localization approach to accurately estimate the six-degree-of-freedom (6-DOF) pose of a robot within a 3D LiDAR map based on visual data from an RGB camera. The 3D map is obtained using an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of collecting a precise sparse map. Features extracted from the camera images are compared with the points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows an expensive LiDAR-equipped scout robot to be used once for mapping the environment, while multiple operational robots equipped only with RGB cameras perform mission tasks, with localization accuracy higher than common camera-based solutions. The approach was tested on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). In evaluating localization accuracy, we managed to achieve centimeter-level accuracy, with a median translation error of up to 1.3 cm. Exact localization using only a camera enables autonomous mobile robots to solve the most complex tasks that require high localization accuracy.
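A minimal sketch of the geometric-optimization step under stated assumptions: 2D image features matched to 3D LiDAR-map points are fed to a RANSAC-wrapped PnP solver to recover the 6-DOF camera pose. The intrinsics and points below are synthetic so the example runs; in the real pipeline the correspondences come from feature matching against the map.

```python
# Illustrative 6-DOF pose recovery from 2D-3D matches via PnP (assumed data).
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # pinhole intrinsics (hypothetical)

# Hypothetical 3D map points (meters, LiDAR-map frame).
object_points = np.array([[2.0, 0.5, 6.0], [1.0, -0.3, 5.0],
                          [-1.5, 0.2, 7.0], [0.5, 1.1, 4.0],
                          [-0.8, -1.0, 6.5], [2.5, 0.0, 5.5]])

# Synthesize pixel observations from a known pose so the sketch is runnable;
# in practice these come from matching image features to map points.
rvec_true = np.array([0.05, -0.1, 0.02])
tvec_true = np.array([0.3, -0.2, 1.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)               # rotation: map frame -> camera frame
camera_position = (-R.T @ tvec).ravel()  # camera center in the map frame
print(ok, camera_position)
```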
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keeping track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting this survey is twofold. First, the autonomous navigation field evolves fast, so writing survey papers regularly is crucial to keep the research community well aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation. Therefore, it is necessary to give an appropriate treatment of the role of deep learning in autonomous navigation as well, which this paper covers. Future works and research gaps are also discussed.
[Figure caption] (a) Stereo input: trajectory and sparse reconstruction of an urban environment with multiple loop closures. (b) RGB-D input: keyframes and dense point cloud of a room scene with one loop closure. The point cloud is rendered by backprojecting the sensor depth maps from estimated keyframe poses; no fusion is performed.
Visual perception plays an important role in autonomous driving. One of its primary tasks is object detection and identification. Since the vision sensor is rich in color and texture information, it can quickly and accurately identify various kinds of road information. The commonly used techniques are based on extracting and computing various features of the image. Recently developed deep-learning-based methods offer better reliability and processing speed and have a greater advantage in recognizing complex elements. For depth estimation, vision sensors are also used for ranging due to their small size and low cost. A monocular camera uses image data from a single viewpoint as input to estimate object depth. In contrast, stereo vision is based on parallax and matching feature points of different views, and the application of deep learning further improves its accuracy. In addition, simultaneous localization and mapping (SLAM) can establish a model of the road environment, thus helping the vehicle perceive its surroundings and complete its tasks. In this paper, we introduce and compare various methods of object detection and identification, then explain the development of depth estimation and compare various methods based on monocular, stereo, and RGB-D sensors, next review and compare various methods of SLAM, and finally summarize the current problems and present future development trends of vision technologies.
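As a brief illustration of the parallax relation mentioned above: for a rectified stereo pair, depth follows from the focal length, the baseline, and the disparity of a matched feature point. The numbers below are illustrative.

```python
# Stereo depth from disparity, depth = f * B / d (illustrative values).
focal_px, baseline_m, disparity_px = 700.0, 0.12, 21.0
depth_m = focal_px * baseline_m / disparity_px
print(depth_m)  # 4.0 meters
```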
Electric vehicles are becoming increasingly common, and inductive charging pads are considered a convenient and efficient means of charging them. However, drivers are typically poor at aligning a vehicle to the accuracy necessary for effective inductive charging, so automatic alignment of the two charging plates is desirable. In parallel with the electrification of the vehicle fleet, automated parking systems that exploit surround-view camera systems are becoming increasingly popular. In this work, we propose a system based on the surround-view camera architecture to detect, localize, and automatically align the vehicle with the inductive charging pad. The visual design of charging pads is not standardized and is not necessarily known in advance. Therefore, a system relying on offline training will fail in certain situations. We therefore propose an online learning approach that exploits the driver's actions when manually aligning the vehicle with the charging pad and combines them with weak supervision from semantic segmentation and depth to learn a classifier that automatically annotates the charging pad in videos for further training. In this way, when faced with a previously unseen charging pad, the driver only needs to align the vehicle manually once. Because the charging pad lies flat on the ground, it is not easy to detect from a distance. We therefore propose using a visual SLAM pipeline to learn landmarks relative to the charging pad, enabling alignment from a greater range. We demonstrate the working system on an automated vehicle, as shown in the video at https://youtu.be/_CLCMKW4UYO. To encourage further research, we will share the charging-pad dataset used in this work.
Ego-pose estimation and dynamic object tracking are two critical problems for autonomous driving systems. The solutions to these problems are generally based on their respective assumptions, i.e., the static-world assumption for simultaneous localization and mapping (SLAM) and the accurate ego-pose assumption for object tracking. However, these assumptions are challenging to hold in dynamic road scenarios, where SLAM and object tracking become closely correlated. Therefore, we propose DL-SLOT, a dynamic LiDAR SLAM and object tracking method, to simultaneously address these two coupled problems. This method integrates the state estimations of both the autonomous vehicle and the stationary and dynamic objects in the environment into a unified optimization framework. First, we use object detection to identify all points belonging to potentially dynamic objects. Subsequently, LiDAR odometry is performed using the filtered point cloud. Simultaneously, we propose a sliding-window-based object association method that accurately associates objects according to the historical trajectories of tracked objects. The ego-states and those of the stationary and dynamic objects are integrated into the sliding-window-based collaborative graph optimization. The stationary objects are subsequently restored from the potentially dynamic object set. Finally, a global pose graph is implemented to eliminate the accumulated error. Experiments on KITTI datasets demonstrate that our method achieves better accuracy than SLAM and object tracking baseline methods. This confirms that solving SLAM and object tracking simultaneously is mutually advantageous, dramatically improving the robustness and accuracy of SLAM and object tracking in dynamic road scenarios.
Monocular simultaneous localization and mapping (SLAM) is emerging in advanced driver assistance systems and autonomous driving because a single camera is cheap and easy to install. Conventional monocular SLAM has two major challenges that lead to inaccurate localization and mapping. First, it is challenging to estimate scale in localization and mapping. Second, conventional monocular SLAM uses inappropriate mapping factors, such as dynamic objects and low-parallax areas, in mapping. This paper proposes an improved real-time monocular SLAM that resolves the above challenges by efficiently using deep-learning-based semantic segmentation. To achieve real-time execution of the proposed method, we apply semantic segmentation only to downsampled keyframes, in parallel with the mapping process. In addition, the proposed method corrects the scale of the camera poses and three-dimensional (3D) points using an estimated ground plane from 3D points on road markings and the real camera height. The method also removes inappropriate corner features labeled as belonging to moving objects and low-parallax areas. Experiments on eight video sequences show that the proposed monocular SLAM system achieves significantly improved and comparable trajectory-tracking accuracy compared with existing state-of-the-art monocular and stereo SLAM systems. The proposed system can achieve real-time tracking on a standard CPU with the support of a standard GPU, whereas existing segmentation-aided monocular SLAM systems cannot.
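A hedged sketch of the scale-correction idea: fit a plane to 3D points triangulated on road markings, and take the ratio of the true camera mounting height to the (scale-ambiguous) estimated height as the metric scale factor. The least-squares plane fit and all numbers below are stand-ins for the paper's actual estimator.

```python
# Illustrative metric-scale recovery from a ground plane (assumed inputs).
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns unit normal n and offset d
    with n @ x + d = 0 for points x on the plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]  # direction of least variance = plane normal
    return n, -n @ centroid

def metric_scale(ground_points, true_camera_height):
    """ground_points: road-marking points in the (scale-ambiguous) camera frame."""
    n, d = fit_plane(ground_points)
    estimated_height = abs(d) / np.linalg.norm(n)  # camera-origin distance to plane
    return true_camera_height / estimated_height

# Example: ground points ~1 SLAM-unit below the camera, camera mounted at 1.6 m
# -> scale factor ~1.6 (hypothetical numbers).
pts = np.array([[x, -1.0, z] for x in (-1.0, 0.0, 1.0) for z in (4.0, 6.0, 8.0)])
print(metric_scale(pts, true_camera_height=1.6))  # ~1.6
```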
An efficient object-level representation for monocular semantic simultaneous localization and mapping (SLAM) still lacks a widely accepted solution. In this paper, we propose the use of an efficient representation based on structure points to model the geometry of objects used as landmarks in a monocular semantic SLAM system based on a pose-graph formulation. In particular, an inverse-depth parameterization is proposed for the landmark nodes in the pose graph to store object position, orientation, and size/scale. The proposed formulation is general and can be applied to different geometric shapes. In this paper, we focus on indoor environments, where artifacts commonly have planar rectangular shapes, such as windows, doors, cabinets, etc. Experiments in simulation show good performance, especially in object geometry reconstruction.
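A minimal sketch in the spirit of the abstract's inverse-depth parameterization: a landmark anchored to a keyframe stores the object's pixel bearing and inverse depth, and its 3D center is recovered on demand; orientation and size would be additional node states. The exact state layout in the paper may differ, and all values below are illustrative.

```python
# Illustrative inverse-depth landmark back-projection (assumed parameterization).
import numpy as np

def landmark_center(K_inv, pixel, inverse_depth, T_world_anchor):
    """pixel: (u, v) of the object's center in the anchor keyframe;
    inverse_depth: rho = 1/depth; T_world_anchor: 4x4 anchor keyframe pose."""
    ray = K_inv @ np.array([pixel[0], pixel[1], 1.0])  # bearing in camera frame
    p_cam = ray / inverse_depth                        # back-project at depth 1/rho
    return (T_world_anchor @ np.append(p_cam, 1.0))[:3]

K_inv = np.linalg.inv(np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]]))
center = landmark_center(K_inv, (320.0, 240.0), inverse_depth=0.25,
                         T_world_anchor=np.eye(4))
print(center)  # [0. 0. 4.] -- an object 4 m straight ahead of the anchor keyframe
```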