Deep-learning-based image retrieval techniques show satisfactory performance for loop closure detection. However, achieving a high level of performance with models previously trained in different geographical regions remains challenging. This paper addresses the problem of deploying a simultaneous localization and mapping (SLAM) system in a new environment. Common baseline approaches use additional information, such as GPS or sequential keyframe tracking, or retrain on the whole environment to increase the recall rate. We propose a novel approach that improves image retrieval on top of a previously trained model. We present an intelligent method, MAQBOOL, to amplify the power of pre-trained models for better image recall and its application in real-time multi-agent SLAM systems. We achieve image retrieval results at a low descriptor dimension (512-D) that are comparable to those of state-of-the-art methods at a high descriptor dimension (4096-D). We use spatial information to improve the recall rate of image retrieval with pre-trained models.
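As an illustration of the kind of spatial (geometric) verification the last sentence refers to, the following Python sketch re-ranks retrieval candidates by the number of RANSAC-consistent local feature matches. It is a generic scheme under assumed interfaces (OpenCV BFMatcher and findHomography), not the MAQBOOL implementation; the function name and thresholds are assumptions.

```python
import cv2
import numpy as np

def rerank_by_spatial_consistency(query_kp, query_desc, candidates, min_matches=4):
    """Re-score retrieved candidates by the number of geometrically consistent
    local feature matches (RANSAC homography). Generic sketch, not MAQBOOL itself.
    Descriptors are assumed to be float32 (e.g., SIFT or deep local features)."""
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    scored = []
    for cand_id, cand_kp, cand_desc in candidates:
        matches = matcher.match(query_desc, cand_desc)
        if len(matches) < min_matches:
            scored.append((cand_id, 0))
            continue
        src = np.float32([query_kp[m.queryIdx].pt for m in matches])
        dst = np.float32([cand_kp[m.trainIdx].pt for m in matches])
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        inliers = int(mask.sum()) if mask is not None else 0
        scored.append((cand_id, inliers))
    return sorted(scored, key=lambda t: -t[1])  # most consistent candidate first
```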
In recent years, the robotics community has extensively examined methods for the place recognition task within the scope of simultaneous localization and mapping (SLAM) applications. This article proposes an appearance-based loop closure detection pipeline named FILD++ (Fast and Incremental Loop closure Detection). First, the system is fed consecutive images and, by passing them twice through a single convolutional neural network, extracts both global and local deep features. A flexible hierarchical navigable small world (HNSW) graph incrementally builds a visual database representing the robot's traversed path, based on the computed global features. Finally, a query image grabbed at each time step is used to retrieve similar locations along the traversed route. An image-to-image pairing follows, which exploits the local features to evaluate the spatial information. Thus, in the proposed article, we propose a single network for both global and local feature extraction, in contrast to our previous work (FILD), while an exhaustive-search verification process is adopted on the generated deep local features, avoiding the use of hash codes. Exhaustive experiments on eleven public datasets exhibit the system's high performance (achieving the highest recall score on eight of them) and low execution time (22.05 ms on average on New College, which at 52480 images is the largest of them) compared with other state-of-the-art approaches.
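To illustrate the incremental HNSW database idea described above, here is a minimal Python sketch using the hnswlib package; the descriptor dimension, index parameters and helper names are assumptions rather than the FILD++ configuration.

```python
import hnswlib
import numpy as np

# Minimal sketch of an incrementally built HNSW visual database for loop closure
# candidate retrieval (parameters and the feature extractor are assumptions).
dim = 512
index = hnswlib.Index(space='l2', dim=dim)
index.init_index(max_elements=100000, ef_construction=200, M=16)

def add_frame(frame_id, global_feature):
    """Insert the current frame's global descriptor as the robot traverses its path."""
    index.add_items(global_feature.reshape(1, -1), np.array([frame_id]))

def query_loop_candidates(global_feature, k=5):
    """Retrieve the k most similar previously visited places for the query image."""
    labels, distances = index.knn_query(global_feature.reshape(1, -1), k=k)
    return labels[0], distances[0]
```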
Visual Place Recognition is an essential component of systems for camera localization and loop closure detection, and it has attracted widespread interest in multiple domains such as computer vision, robotics and AR/VR. In this work, we propose a faster, lighter and stronger approach that can generate models with fewer parameters and spend less time in the inference stage. We design RepVGG-lite as the backbone network in our architecture; it is more discriminative than other general networks in the place recognition task while offering greater speed advantages. We extract patch-level descriptors at only a single scale from the global descriptors in the feature extraction stage. We then design a trainable feature matcher, based on the attention mechanism, that exploits both the spatial relationships of the features and their visual appearance. Comprehensive experiments on challenging benchmark datasets demonstrate that the proposed method outperforms other recent state-of-the-art learned approaches while achieving even higher inference speed. Our system has 14 times fewer parameters than Patch-NetVLAD, 6.8 times lower theoretical FLOPs, and runs 21 and 33 times faster in feature extraction and feature matching, respectively. Moreover, our approach is 0.5\% better than Patch-NetVLAD in Recall@1. We used subsets of the Mapillary Street Level Sequences dataset to conduct experiments for all other challenging conditions.
Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets.
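The Generalized-Mean (GeM) pooling layer described above reduces to the formula f_k = ((1/|X_k|) * sum_{x in X_k} x^{p})^{1/p}, which the following PyTorch sketch implements with a single shared learnable exponent p; the initial value p = 3 and the single-parameter variant are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized-Mean pooling over a CNN feature map: p = 1 recovers average
    pooling and p -> infinity approaches max pooling; p is learnable here."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):                            # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)        # element-wise power
        x = F.avg_pool2d(x, (x.size(-2), x.size(-1)))
        return x.pow(1.0 / self.p).flatten(1)        # (B, C) global descriptor
```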
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.
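A minimal PyTorch sketch of the core NetVLAD aggregation follows: local descriptors are soft-assigned to learned cluster centres and their residuals are accumulated, intra-normalized and L2-normalized. Centroid initialization, whitening and the exact parameterization used in the paper are omitted, and the layer sizes shown are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NetVLAD(nn.Module):
    """Minimal NetVLAD-style layer: soft assignment of local descriptors to K
    learned clusters, followed by residual aggregation and normalization."""
    def __init__(self, num_clusters=64, dim=512):
        super().__init__()
        self.conv = nn.Conv2d(dim, num_clusters, kernel_size=1)   # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

    def forward(self, x):                                 # x: (B, C, H, W) conv features
        soft_assign = F.softmax(self.conv(x).flatten(2), dim=1)           # (B, K, N)
        x_flat = x.flatten(2)                                             # (B, C, N)
        # residual of every local descriptor to every cluster centre
        residual = x_flat.unsqueeze(1) - self.centroids.unsqueeze(0).unsqueeze(-1)  # (B, K, C, N)
        vlad = (residual * soft_assign.unsqueeze(2)).sum(dim=-1)          # (B, K, C)
        vlad = F.normalize(vlad, p=2, dim=2)                              # intra-normalization
        return F.normalize(vlad.flatten(1), p=2, dim=1)                   # (B, K*C) descriptor
```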
In recent years, a vast amount of visual content has been generated and shared in many fields, such as social media platforms, medical imaging and robotics. This abundance of content creation and sharing has introduced new challenges, particularly for content-based image retrieval (CBIR), i.e., searching databases for content similar to a query, a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has greatly facilitated the process of instance search. In this survey, we review recent instance retrieval work developed on the basis of deep learning algorithms and techniques, organizing the survey by deep network architecture types, deep features, feature embedding methods and network fine-tuning strategies. Our survey considers a wide variety of recent methods, where we identify milestone works, reveal connections among the various methods, and present the commonly used benchmarks, evaluation results, common challenges and promising future directions.
The indirect method of visual SLAM is popular owing to its robustness to environmental changes. ORB-SLAM2 \cite{orbslam2} is a benchmark method in this domain; however, it computes descriptors that are never reused unless a frame is selected as a keyframe. To address this, we present FastORB-SLAM, which is lightweight and efficient because it tracks keypoints between adjacent frames without computing descriptors. To this end, a two-stage coarse-to-fine, descriptor-independent keypoint matching method based on sparse optical flow is proposed. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences through pyramid-based sparse optical flow tracking. In the second stage, we exploit constraints of motion smoothness and epipolar geometry to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the \textit{TUM} and \textit{ICL-NUIM} RGB-D datasets and compare its accuracy and efficiency with those of nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is about twice as fast as ORB-SLAM2.
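The two-stage, descriptor-free matching idea can be sketched with OpenCV as follows: a motion-model prediction seeds pyramidal Lucas-Kanade tracking, and a fundamental-matrix RANSAC step then performs the geometric refinement. This is an illustrative sketch under assumed interfaces and parameters, not the authors' implementation.

```python
import cv2
import numpy as np

def track_keypoints(prev_img, cur_img, prev_pts, predicted_pts):
    """Stage 1: motion-model prediction + pyramidal LK optical flow tracking.
    Stage 2: epipolar-geometry (fundamental matrix, RANSAC) outlier rejection."""
    p0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    guess = predicted_pts.reshape(-1, 1, 2).astype(np.float32)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, cur_img, p0, guess.copy(),
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    good = status.ravel() == 1
    p0, p1 = p0[good].reshape(-1, 2), cur_pts[good].reshape(-1, 2)
    # refinement stage: keep only correspondences consistent with epipolar geometry
    _, inlier_mask = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
    keep = inlier_mask.ravel() == 1 if inlier_mask is not None else np.zeros(len(p0), bool)
    return p0[keep], p1[keep]
```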
This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.
The concept of geo-localization refers to the process of determining the position of some "entity" on Earth, typically using Global Positioning System (GPS) coordinates. The entity of interest may be an image, an image sequence, a video, a satellite image, or even an object visible in an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and as deep learning has risen to boost the performance of machine learning models, the fields of visual and object geo-localization have emerged owing to their significant impact on a wide range of applications such as augmented reality, robotics, self-driving vehicles, road maintenance and 3D reconstruction. This paper provides a comprehensive survey of geo-localization involving images, covering both geo-localizing the place where an image was captured (image geo-localization) and geo-localizing objects within an image (object geo-localization). We provide an in-depth study, including summaries of popular algorithms, descriptions of the proposed datasets, and analyses of performance results that illustrate the current state of each field.
Loop closing is a fundamental component of simultaneous localization and mapping (SLAM) for autonomous mobile systems. In the field of visual SLAM, bag-of-words (BoW) has achieved great success in loop closing, and the BoW features used for loop search can also be reused in the subsequent 6-DoF loop correction. However, for 3D LiDAR SLAM, state-of-the-art methods may fail to recognize loops in real time and usually cannot correct the full 6-DoF loop pose. To address this limitation, we present a novel bag-of-words for real-time loop closing in 3D LiDAR SLAM, called BoW3D. The novelty of our method lies in the fact that it not only efficiently recognizes revisited loops but also corrects the full 6-DoF loop pose in real time. BoW3D builds the bag of words based on the 3D feature LinK3D, which is efficient, pose-invariant and usable for accurate point-to-point matching. We embed our proposed method into a 3D LiDAR odometry system to evaluate the loop closing performance. We test our method on public datasets and compare it with other state-of-the-art algorithms. BoW3D shows better performance in terms of F1 max and extended precision scores in most scenarios, with excellent real-time performance. Notably, BoW3D takes an average of 50 ms to recognize and correct the loops in KITTI 00 (which contains 4K+ 64-ray LiDAR scans) when executed on a notebook with an Intel Core i7 @2.2 GHz processor.
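The bag-of-words loop search can be pictured with a simple inverted index that maps each word to the frames containing it and ranks revisit candidates by vote count. The sketch below is a generic illustration; the word definition, scoring and thresholds used by BoW3D itself differ and are not reproduced here.

```python
from collections import defaultdict

class BagOfWordsIndex:
    """Generic inverted-index sketch for BoW-style loop candidate search."""
    def __init__(self):
        self.inverted_index = defaultdict(set)   # word -> set of frame ids

    def add_frame(self, frame_id, words):
        for w in words:
            self.inverted_index[w].add(frame_id)

    def query(self, words, current_id, min_votes=10, exclude_recent=50):
        """Vote for earlier frames sharing words with the query; skip recent frames."""
        votes = defaultdict(int)
        for w in words:
            for fid in self.inverted_index.get(w, ()):
                if fid < current_id - exclude_recent:
                    votes[fid] += 1
        candidates = [(fid, v) for fid, v in votes.items() if v >= min_votes]
        return sorted(candidates, key=lambda t: -t[1])   # best-voted loop candidates first
```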
Loop closure detection is a key technology for long-term robot navigation in complex environments. In this paper, we present a global descriptor, named Normal Distribution Descriptor (NDD), for 3D point cloud loop closure detection. The descriptor encodes both the probability density score and the entropy of a point cloud. We also propose a fast rotation alignment process and use the correlation coefficient as the similarity between descriptors. Experimental results show that our approach outperforms state-of-the-art point cloud descriptors in both accuracy and efficiency. The source code is available and can be integrated into existing LiDAR odometry and mapping (LOAM) systems.
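As a rough illustration of rotation-tolerant matching with a correlation-coefficient similarity, the sketch below assumes a 2-D (ring x sector) descriptor grid in which a yaw rotation becomes a circular column shift; it uses a brute-force shift search for clarity, whereas NDD proposes a fast alignment procedure that is not reproduced here.

```python
import numpy as np

def descriptor_similarity(desc_a, desc_b):
    """Best Pearson correlation coefficient over circular column shifts of desc_b.
    Assumes both descriptors are 2-D arrays with the same (ring, sector) shape."""
    best = -1.0
    for shift in range(desc_b.shape[1]):
        shifted = np.roll(desc_b, shift, axis=1)     # yaw rotation as a column shift
        corr = np.corrcoef(desc_a.ravel(), shifted.ravel())[0, 1]
        best = max(best, corr)
    return best
```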
In this study, we present a novel visual localization approach that accurately estimates the six degrees of freedom (6-DoF) pose of a robot within a 3D LiDAR map, based on visual data from an RGB camera. The 3D map is obtained with an advanced LiDAR-based simultaneous localization and mapping (SLAM) algorithm capable of collecting a precise sparse map. Features extracted from the camera images are matched against the points of the 3D map, and a geometric optimization problem is then solved to achieve precise visual localization. Our approach allows a scout robot equipped with an expensive LiDAR to be used only once, for mapping the environment, while multiple operational robots equipped only with RGB cameras perform mission tasks with localization accuracy higher than that of common camera-based solutions. The method was tested on a custom dataset collected at the Skolkovo Institute of Science and Technology (Skoltech). In the evaluation of localization accuracy, we achieved centimeter-level accuracy, with a median translation error of 1.3 cm. The precise localization achieved using only cameras makes it possible to employ autonomous mobile robots for the most demanding tasks that require high localization accuracy.
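The final geometric step, recovering the 6-DoF camera pose from 2D-3D matches against the LiDAR map, can be sketched with a standard PnP + RANSAC solver; the function name, thresholds and interface below are assumptions, not the authors' pipeline.

```python
import cv2
import numpy as np

def localize_in_lidar_map(pts3d_map, pts2d_image, K, dist_coeffs=None):
    """Given 2D-3D correspondences between image features and LiDAR map points,
    estimate the camera pose with PnP + RANSAC and return (R, t, inliers)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d_map, dtype=np.float64),
        np.asarray(pts2d_image, dtype=np.float64),
        K, dist_coeffs,
        reprojectionError=3.0, iterationsCount=200)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix of the map-to-camera transform
    return R, tvec, inliers
```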
Visual simultaneous localization and mapping (VSLAM) has achieved great progress in the computer vision and robotics communities and has been successfully applied in many fields, such as autonomous robot navigation and AR/VR. However, VSLAM cannot achieve good localization in dynamic and complex environments. Numerous publications have reported that, by combining semantic information with VSLAM, semantic VSLAM systems have gained the ability to solve the above problems in recent years. Nevertheless, there is no comprehensive survey of semantic VSLAM. To fill this gap, this paper first reviews the development of semantic VSLAM, with an explicit focus on its strengths and differences. Second, we explore three main issues of semantic VSLAM: the extraction and association of semantic information, the application of semantic information, and the advantages of semantic VSLAM. We then collect and analyze the current state-of-the-art SLAM datasets that have been widely used in semantic VSLAM systems. Finally, we discuss future directions that will provide a blueprint for the future development of semantic VSLAM.
Visual place recognition (VPR) is a challenging task, with an imbalance between the huge computational cost and the high recognition performance. Thanks to the practical feature extraction ability of lightweight convolutional neural networks (CNNs) and the trainability of the vector of locally aggregated descriptors (VLAD) layer, we propose a lightweight, weakly supervised, end-to-end neural network consisting of a front-end perception model called GhostCNN and a learnable VLAD layer as the back end. GhostCNN is based on Ghost modules, which are lightweight CNN-based building blocks. They generate redundant feature maps using linear operations instead of the traditional convolution process, achieving a good trade-off between computational resources and recognition accuracy. To further enhance the proposed lightweight model, we add dilated convolutions to the Ghost module to obtain features containing more spatial semantic information, improving accuracy. Finally, extensive experiments conducted on a commonly used public benchmark and our private dataset validate that the proposed neural network reduces the FLOPs and parameters of VGG16-NetVLAD by 99.04% and 80.16%, respectively. Moreover, both models achieve similar accuracy.
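A Ghost module, as referenced above, generates part of its output with a thin primary convolution and the rest with cheap depthwise "linear" operations. The PyTorch sketch below shows this structure; the channel split ratio and kernel sizes are assumptions, and the dilated-convolution variant mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module sketch: a primary convolution produces a fraction of the
    output channels, cheap depthwise convolutions generate the remaining
    ("ghost") feature maps, and the two are concatenated. Assumes out_ch is even."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, cheap_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio
        cheap_ch = out_ch - primary_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(   # depthwise conv as the cheap linear operation
            nn.Conv2d(primary_ch, cheap_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)   # primary + ghost feature maps
```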
Visual place recognition (VPR) is commonly characterized as the ability to recognize the same place despite significant changes in appearance and viewpoint. VPR is a key component of spatial artificial intelligence, enabling robotic platforms and intelligent augmentation platforms, such as augmented reality devices, to perceive and understand the physical world. In this paper, we observe that there are three "drivers" that impose requirements on spatially intelligent agents, and thus on VPR systems: 1) the particular agent, including its sensors and computational resources, 2) the operating environment of this agent, and 3) the specific task performed by the artificial agent. In this paper, we characterize and survey key works in the VPR area in light of these drivers, including their place representation and place matching choices. We also provide a new definition of VPR based on visual overlap, akin to spatial view cells in the brain, which allows us to find similarities and differences with other research areas in robotics and computer vision. We identify numerous open challenges and suggest areas that require deeper attention in future work.
In this paper, two semi-supervised appearance-based loop closure detection techniques, HGCN-FABMAP and HGCN-BoW, are introduced. In addition, an extension to the current state of the art in localization is proposed. The proposed HGCN-FABMAP method is implemented in an offline manner and incorporates a Bayesian probabilistic scheme for the loop closure detection decision. Specifically, we let a hyperbolic graph convolutional neural network (HGCN) operate on SURF features and perform the vector quantization part of the SLAM procedure. Previously, this part was carried out in an unsupervised manner using algorithms such as HKMeans and KMeans++. The main advantage of using an HGCN is that it scales linearly in the number of graph edges. Experimental results show that the HGCN-FABMAP algorithm requires considerably more cluster centroids than HGCN-ORB; otherwise, it cannot detect loop closures. We therefore consider HGCN-ORB more efficient in terms of memory consumption, and likewise we conclude the superiority of HGCN-BoW and HGCN-FABMAP over the other algorithms.
Advanced visual localization techniques encompass image retrieval challenges and 6 Degree-of-Freedom (DoF) camera pose estimation, such as hierarchical localization. Thus, they must extract global and local features from input images. Previous methods have achieved this through resource-intensive or accuracy-reducing means, such as combinatorial pipelines or multi-task distillation. In this study, we present a novel method called SuperGF, which effectively unifies local and global features for visual localization, leading to a better trade-off between localization accuracy and computational efficiency. Specifically, SuperGF is a transformer-based aggregation model that operates directly on image-matching-specific local features and generates global features for retrieval. We conduct experimental evaluations of our method in terms of both accuracy and efficiency, demonstrating its advantages over other methods. We also provide implementations of SuperGF using various types of local features, including dense and sparse learning-based or hand-crafted descriptors.
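The general idea of transformer-based aggregation of local descriptors into a single global retrieval vector can be sketched as follows; the token scheme, dimensions and layer counts are assumptions, and this is not the SuperGF architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenAggregator(nn.Module):
    """Sketch of aggregating a set of local descriptors into one global descriptor:
    a learnable summary token attends over the local features via a transformer
    encoder, and its output is L2-normalized for retrieval."""
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, local_desc):                 # local_desc: (B, N, dim)
        B = local_desc.size(0)
        tokens = torch.cat([self.cls.expand(B, -1, -1), local_desc], dim=1)
        out = self.encoder(tokens)
        return F.normalize(out[:, 0], p=2, dim=1)  # aggregated global descriptor
```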
Deep-learning-based visual place recognition techniques have established themselves as the state of the art in recent years, but they do not generalize well to environments that are visually different from the training set. Thus, to achieve top performance, it is sometimes necessary to adapt the network to the target environment. To this end, we propose a self-supervised domain calibration procedure based on robust pose graph optimization from simultaneous localization and mapping (SLAM) as the supervision signal, requiring neither GPS nor manual labeling. Furthermore, we leverage this procedure to improve the uncertainty estimation of place recognition matches, which is important in safety-critical applications. We show that our approach can improve the performance of a state-of-the-art technique on a target environment that differs from its training set, and that we can obtain uncertainty estimates. We believe this approach will help practitioners deploy robust place recognition solutions in real-world applications. Our code is publicly available at: https://github.com/mistlab/vpr-calibration-and-uncrightity
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
Place recognition is a fundamental module that can assist simultaneous localization and mapping (SLAM) in loop closure detection and re-localization for long-term navigation. The place recognition community has made astonishing progress over the last 20 years, which has attracted widespread research interest and applications in multiple fields such as computer vision and robotics. However, few methods have shown promising place recognition performance in complex real-world scenarios, where long-term and large-scale appearance changes usually result in failures. In addition, among the state-of-the-art methods there is a lack of an integrated framework that can cope with all of the challenges, including appearance changes, viewpoint differences, robustness to unknown areas, and efficiency in real-world applications. In this work, we survey the state-of-the-art methods targeting long-term localization and discuss future directions and opportunities. First, we study place recognition in long-term autonomy and the main challenges faced in real-world environments. Then, we review recent works on different sensor modalities and the current strategies for tackling the various place recognition challenges. Finally, we review the existing datasets for long-term localization and introduce our dataset and evaluation API for different approaches. This paper can serve as a tutorial for researchers new to the place recognition community and for those who care about long-term robotics autonomy. We also give our opinion on a frequently debated question in robotics: does a robot need accurate localization to achieve long-term autonomy? A summary of this work, together with our dataset and evaluation API, is publicly released to the robotics community at: https://github.com/metaslam/gprs.