In this paper, we propose an end-to-end framework that jointly learns keypoint detection, descriptor representation, and cross-frame matching for the task of image-based 3D localization. Prior art has tackled each of these components individually, purportedly to alleviate the difficulty of effectively training a holistic network. We design a self-supervised image-warping correspondence loss for both feature detection and matching, a weakly-supervised epipolar-constraint loss for relative camera pose learning, and a directional matching scheme that detects keypoint features in a source image and performs coarse-to-fine correspondence search in the target image. We leverage this framework to enforce cycle consistency in our matching module. In addition, we propose a new loss to robustly handle both definite inlier/outlier matches and less-certain matches. The integration of these learning mechanisms enables end-to-end training of a single network performing all three localization components. Benchmarking our approach on public datasets exemplifies how such an end-to-end framework yields more accurate localization, outperforming both traditional methods and state-of-the-art weakly supervised methods.
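To make the weakly-supervised epipolar constraint concrete: given the relative pose between two frames, putative matches can be penalized by their symmetric epipolar (Sampson) distance. Below is a minimal NumPy sketch of such a loss, assuming calibrated (normalized) coordinates and an essential matrix derived from the supervising pose; the function name and interface are illustrative, not the paper's code.

```python
import numpy as np

def sampson_epipolar_loss(E, x1, x2):
    """Mean Sampson distance of putative matches under essential matrix E.

    E:  (3, 3) essential matrix relating the two views (an assumption
        here: derived from the weak pose supervision).
    x1: (N, 2) matched points in view 1, normalized camera coordinates.
    x2: (N, 2) corresponding points in view 2.
    """
    ones = np.ones((x1.shape[0], 1))
    p1 = np.hstack([x1, ones])           # homogeneous points, (N, 3)
    p2 = np.hstack([x2, ones])
    Ep1 = p1 @ E.T                       # epipolar lines in view 2, (N, 3)
    Etp2 = p2 @ E                        # epipolar lines in view 1, (N, 3)
    num = np.sum(p2 * Ep1, axis=1) ** 2  # (x2^T E x1)^2 per match
    den = Ep1[:, 0]**2 + Ep1[:, 1]**2 + Etp2[:, 0]**2 + Etp2[:, 1]**2
    return np.mean(num / (den + 1e-12))
```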
Weakly supervised learning can help local feature methods overcome the obstacle of acquiring large-scale datasets with densely labeled correspondences. However, since weak supervision cannot distinguish the losses caused by the detection step from those caused by the description step, directly conducting weakly supervised learning within a joint describe-then-detect pipeline yields limited performance. In this paper, we propose a decoupled describe-then-detect pipeline tailored for weakly supervised local feature learning. Within our pipeline, the detection step is decoupled from the description step and postponed until discriminative and robust descriptors have been learned. In addition, we introduce a line-to-window search strategy to explicitly use camera pose information for better descriptor learning. Extensive experiments show that our method, namely PoSFeat (camera pose supervised feature), outperforms previous fully and weakly supervised methods and achieves state-of-the-art performance on a variety of downstream tasks.
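The line-to-window search can be pictured as follows: the supervising camera pose induces an epipolar line in the second image for each keypoint in the first, so descriptor learning only needs to consider candidates in a window around that line. The sketch below is a hedged illustration of that geometric pruning, assuming calibrated coordinates and an essential matrix from the ground-truth pose; PoSFeat's actual strategy operates on learned feature maps and may differ in detail.

```python
import numpy as np

def epipolar_window_candidates(E, kp1, kps2, radius=8.0):
    """For one keypoint in image 1, return indices of image-2 keypoints
    within `radius` (same units as the coordinates) of its epipolar line.

    E:    (3, 3) essential matrix from the supervising relative pose.
    kp1:  (2,) keypoint in image 1, normalized coordinates.
    kps2: (N, 2) candidate keypoints in image 2.
    """
    p1 = np.array([kp1[0], kp1[1], 1.0])
    line = E @ p1                                    # epipolar line in image 2
    a, b, _ = line
    pts = np.hstack([kps2, np.ones((len(kps2), 1))])
    dist = np.abs(pts @ line) / (np.sqrt(a * a + b * b) + 1e-12)
    return np.nonzero(dist < radius)[0]
```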
In this paper, we propose to go beyond the established vision-based localization methods that rely on visual descriptor matching between a query image and a 3D point cloud. While matching keypoints via visual descriptors makes localization highly accurate, it entails significant storage demands, raises privacy concerns, and requires long-term updates to the descriptors. To gracefully cope with the practical challenges of large-scale localization, we propose GoMatch, an alternative to visual-descriptor-based matching that relies solely on geometric information to match image keypoints to maps, represented as sets of bearing vectors. Our novel bearing-vector representation of 3D points significantly alleviates the cross-modality challenge in geometry-based matching that prevented prior work from tackling localization in realistic environments. With additional careful architecture design, GoMatch improves over prior geometry-based matching work with a reduction of (10.67m, 95.7deg) and (1.43m, 34.7deg) in average median pose errors on Cambridge Landmarks and 7-Scenes, while requiring as little as 1.5/1.7% of the storage capacity of the best visual-descriptor-based matching methods. This confirms its potential and feasibility for real-world localization, and opens the door to future efforts on city-scale visual localization methods that do not require storing visual descriptors.
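For intuition, a bearing vector is simply the unit ray through a keypoint after undoing the camera intrinsics. Below is a small sketch of this standard construction (not necessarily GoMatch's exact preprocessing):

```python
import numpy as np

def keypoints_to_bearings(kps, K):
    """Convert pixel keypoints (N, 2) to unit bearing vectors (N, 3)
    using camera intrinsics K. Standard back-projection; the paper's
    own pipeline may normalize or order these differently.
    """
    pts = np.hstack([kps, np.ones((len(kps), 1))])   # homogeneous pixels
    rays = pts @ np.linalg.inv(K).T                  # back-project to rays
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)
```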
Sparse local feature extraction is of central importance in typical vision tasks such as simultaneous localization and mapping, image matching, and 3D reconstruction. At present, it still has deficiencies needing further improvement, mainly the discrimination power of extracted local descriptors, the localization accuracy of detected keypoints, and the efficiency of local feature learning. This paper focuses on improving the currently popular sparse local feature learning with camera pose supervision. To this end, it proposes a Shared Coupling-bridge scheme with four light-weight yet effective improvements for weakly-supervised local feature (SCFeat) learning. It mainly contains: i) the \emph{Feature-Fusion-ResUNet Backbone} (F2R-Backbone) for local descriptor learning, ii) a shared coupling-bridge normalization to improve the decoupled training of the description network and the detection network, iii) an improved detection network with peakiness measurement to detect keypoints, and iv) the fundamental-matrix error as a reward factor to further optimize feature detection training. Extensive experiments prove that our SCFeat improvements are effective, often obtaining state-of-the-art performance on classic image matching and visual localization. In terms of 3D reconstruction, it still achieves competitive results. For sharing and communication, our source code is available at https://github.com/sunjiayuanro/SCFeat.git.
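As a rough illustration of peakiness-based detection, a keypoint score can reward responses that stand out both within their spatial neighborhood and across channels, in the spirit of D2-Net-style detectors. The PyTorch sketch below assumes one plausible form; SCFeat's exact measurement may differ.

```python
import torch
import torch.nn.functional as F

def peakiness_score(fmap, ksize=3):
    """Keypoint score from a dense feature map (B, C, H, W): responses
    are rewarded for exceeding both their local spatial average and the
    per-location channel average. A hedged sketch, not SCFeat's code.
    """
    pad = ksize // 2
    local_avg = F.avg_pool2d(fmap, ksize, stride=1, padding=pad)
    alpha = F.softplus(fmap - local_avg)                       # spatial peakiness
    beta = F.softplus(fmap - fmap.mean(dim=1, keepdim=True))   # channel peakiness
    score = (alpha * beta).max(dim=1)[0]                       # (B, H, W)
    return score / (score.amax(dim=(1, 2), keepdim=True) + 1e-12)
```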
Erroneous feature matches have a severe impact on subsequent camera pose estimation and often require additional, time-costly measures like RANSAC for outlier rejection. Our method tackles this challenge by addressing feature matching and pose optimization jointly. To this end, we propose a graph attention network to predict image correspondences along with confidence weights. The resulting matches serve as weighted constraints in a differentiable pose estimation. Training feature matching with gradients from pose optimization naturally learns to down-weight outliers and boosts pose estimation on image pairs, improving over SuperGlue by 6.7% on ScanNet. At the same time, it reduces the pose estimation time by over 50% and renders RANSAC iterations unnecessary. Moreover, we integrate information from multiple views by spanning the graph across multiple frames to predict the matches all at once. Multi-view matching combined with end-to-end training improves the pose estimation metrics on Matterport3D by 18.8% compared to SuperGlue.
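One way such confidence weights can enter a pose solver is by scaling the rows of the epipolar constraint system, so unreliable matches contribute little to the estimate. The NumPy sketch below shows a weighted eight-point solve; it is a non-differentiable stand-in for the paper's differentiable pose optimization, and the interface is illustrative.

```python
import numpy as np

def weighted_eight_point(x1, x2, w):
    """Estimate an essential matrix from weighted correspondences.
    x1, x2: (N, 2) matched normalized coordinates; w: (N,) confidences.
    Down-weighting rows lets uncertain matches barely influence the fit.
    """
    p1 = np.hstack([x1, np.ones((len(x1), 1))])
    p2 = np.hstack([x2, np.ones((len(x2), 1))])
    # Each row encodes p2^T E p1 = 0 as a linear constraint on vec(E).
    A = np.einsum('ni,nj->nij', p2, p1).reshape(len(x1), 9)
    A = A * w[:, None]                        # scale constraints by confidence
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold (two equal singular values).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```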
Local image feature matching, which aims to identify and correspond similar regions between image pairs, is an important concept in computer vision. Most existing image matching methods follow a one-to-one assignment principle and employ mutual nearest-neighbor search to guarantee unique correspondences between local features across images. However, images from different conditions may exhibit large scale variations or viewpoint diversity, so one-to-one assignment may lead to ambiguous or missing representations in dense matching. In this paper, we introduce AdaMatcher, a novel detector-free local feature matching method, which first correlates dense features through a lightweight feature interaction module and estimates the co-visible area of the paired images, then performs patch-level many-to-one assignment to predict match proposals, and finally refines them with a one-to-one refinement module. Extensive experiments show that AdaMatcher outperforms solid baselines and achieves state-of-the-art results on many downstream tasks. Moreover, the many-to-one assignment and one-to-one refinement modules can be used as a refinement network for other matching methods, such as SuperGlue, to further boost their performance. Code will be released upon publication.
Visual localization is the task of estimating camera pose in a known scene, which is an essential problem in robotics and computer vision. However, long-term visual localization is still a challenge due to the environmental appearance changes caused by lighting and seasons. While techniques exist to address appearance changes using neural networks, these methods typically require ground-truth pose information to generate accurate image correspondences or act as a supervisory signal during training. In this paper, we present a novel self-supervised feature learning framework for metric visual localization. We use a sequence-based image matching algorithm across different sequences of images (i.e., experiences) to generate image correspondences without ground-truth labels. We can then sample image pairs to train a deep neural network that learns sparse features with associated descriptors and scores without ground-truth pose supervision. The learned features can be used together with a classical pose estimator for visual stereo localization. We validate the learned features by integrating them with an existing Visual Teach & Repeat pipeline to perform closed-loop localization experiments under different lighting conditions for a total of 22.4 km.
Modeling sparse and dense image matching within a unified functional correspondence model has recently attracted research interest. However, existing efforts mainly focus on improving matching accuracy while neglecting efficiency, which is crucial for real-world applications. In this paper, we propose an efficient architecture that finds correspondences in a coarse-to-fine manner, significantly improving the efficiency of functional correspondence models. To achieve this, multiple transformer blocks are stage-wise connected to progressively refine the predicted coordinates over a shared multi-scale feature extraction network. Given a pair of images and arbitrary query coordinates, all correspondences are predicted within a single feed-forward pass. We further propose an adaptive query-clustering strategy and an uncertainty-based outlier detection module that cooperate with the proposed framework for faster and better predictions. Experiments on various sparse and dense matching tasks demonstrate the superiority of our method over existing state-of-the-art works in both efficiency and effectiveness.
Despite the advances in local feature extraction achieved by handcrafted and learning-based descriptors, these features are still limited by their lack of invariance to non-rigid transformations. In this paper, we present a new approach for computing features from still images that are robust to non-rigid deformations, to circumvent the problem of matching deformable surfaces and objects. Our deformation-aware local descriptor, named DEAL, leverages polar sampling and spatial-transformer warping to provide invariance to rotation, scale, and image deformations. We train the model architecture end-to-end by applying isometric non-rigid deformations to objects in a simulated environment as guidance, yielding highly discriminative local features. The experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors on different datasets of real and realistic synthetic deformable objects in still images. The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neUrips2021.
Video provides us with the spatio-temporal consistency needed for visual learning. Recent approaches have utilized this signal to learn correspondence estimation from close-by frame pairs. However, by only relying on close-by frame pairs, those approaches miss out on the richer long-range consistency between distant overlapping frames. To address this, we propose a self-supervised approach for correspondence estimation that learns from multiview consistency in short RGB-D video sequences. Our approach combines pairwise correspondence estimation and registration with a novel SE(3) transformation synchronization algorithm. Our key insight is that self-supervised multiview registration allows us to obtain correspondences over longer time frames, increasing both the diversity and difficulty of sampled pairs. We evaluate our approach on indoor scenes for correspondence estimation and RGB-D pointcloud registration and find that we perform on par with supervised approaches.
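The pairwise registration step feeding such a synchronization is typically a closed-form weighted rigid alignment. As a reference point, here is the standard weighted Kabsch/Procrustes solve; the paper's SE(3) synchronization then reconciles many of these pairwise estimates, and this sketch is not the paper's code.

```python
import numpy as np

def weighted_kabsch(src, dst, w):
    """Closed-form weighted rigid alignment with dst ≈ R @ src + t.
    src, dst: (N, 3) corresponding points; w: (N,) correspondence weights.
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)                   # weighted centroids
    mu_d = (w[:, None] * dst).sum(0)
    H = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```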
This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics, our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in challenging real-world indoor and outdoor environments. The proposed method performs matching in real-time on a modern GPU and can be readily integrated into modern SfM or SLAM systems. The code and trained weights are publicly available at github.com/magicleap/SuperGluePretrainedNetwork.
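The optimal transport problem can be approximated with log-domain Sinkhorn iterations over a score matrix augmented by a "dustbin" for unmatchable points. The sketch below follows that recipe in simplified form; for the released implementation, see the linked repository.

```python
import math
import torch

def sinkhorn_matching(scores, dustbin=1.0, iters=20):
    """Soft partial assignment from a pairwise score matrix via Sinkhorn
    iterations with a dustbin score for unmatched points, in the spirit
    of SuperGlue's optimal-transport layer (a simplified sketch).
    scores: (M, N) match scores; dustbin: score for the unmatched bin.
    """
    M, N = scores.shape
    # Augment with a dustbin row/column so points may stay unmatched.
    S = torch.full((M + 1, N + 1), float(dustbin),
                   dtype=scores.dtype, device=scores.device)
    S[:M, :N] = scores
    log_mu = torch.zeros(M + 1, dtype=S.dtype, device=S.device)
    log_nu = torch.zeros(N + 1, dtype=S.dtype, device=S.device)
    log_mu[-1] = math.log(N)      # dustbin can absorb all of the other side
    log_nu[-1] = math.log(M)
    u = torch.zeros_like(log_mu)
    v = torch.zeros_like(log_nu)
    for _ in range(iters):        # alternating projections onto the marginals
        u = log_mu - torch.logsumexp(S + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(S + u[:, None], dim=0)
    return (S + u[:, None] + v[None, :]).exp()   # (M+1, N+1) transport plan
```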
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments, such as deep seas or extraterrestrial terrains. Data acquired under these challenging conditions are corrupted by textureless surfaces, image degradation, and repetitive, highly ambiguous structures. When naively deployed, state-of-the-art methods can fail in those scenarios, as confirmed by our empirical analysis. In this paper, we attempt to make camera relocalization work in these extreme situations. To this end, we propose: (i) a hierarchical localization system in which we leverage temporal information, and (ii) a novel environment-aware image enhancement method to boost robustness and accuracy. Our extensive experimental results demonstrate favorable performance of our method in two extreme settings: localizing an autonomous underwater vehicle and localizing a planetary rover in a Mars-like desert. Moreover, our method achieves performance comparable to state-of-the-art methods on an indoor benchmark (the 7-Scenes dataset) using only 20% of the training data.
Establishing a sparse set of keypoint correspondences between images is a fundamental task in many computer vision pipelines. Typically, this translates into a computationally expensive nearest-neighbor search, in which every keypoint descriptor of one image must be compared against all descriptors of the other image. To reduce the computational cost of the matching stage, we propose a deep feature extraction network capable of detecting complementary sets of keypoints at each image. Since only descriptors within the same set need to be compared across different images, the matching-stage computational complexity decreases with the number of sets. We train our network to predict keypoints and jointly compute the corresponding descriptors. In particular, in order to learn complementary sets of keypoints, we introduce a novel unsupervised loss that penalizes intersections among the different sets. In addition, we propose a novel descriptor-based weighting scheme designed to penalize the detection of keypoints with non-discriminative descriptors. Through extensive experiments, we show that our feature extraction network, trained only on synthetically warped images and in a fully unsupervised manner, achieves competitive results on 3D reconstruction and re-localization tasks at a reduced matching complexity.
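One plausible form of such an intersection penalty is to make the K per-set score maps mutually exclusive by penalizing their pairwise products. A hedged sketch (the paper's exact loss may differ):

```python
import torch

def set_intersection_penalty(heatmaps):
    """Penalize spatial overlap between K keypoint score maps so each
    set learns to fire on complementary locations.
    heatmaps: (K, H, W) tensor with values in [0, 1].
    """
    K = heatmaps.shape[0]
    loss = heatmaps.new_zeros(())
    for i in range(K):
        for j in range(i + 1, K):
            # High responses at the same pixel in two sets are penalized.
            loss = loss + (heatmaps[i] * heatmaps[j]).mean()
    return loss / (K * (K - 1) / 2)
```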
Accurate camera pose estimation is a fundamental requirement for many applications, such as autonomous driving, mobile robotics, and augmented reality. In this work, we address the problem of estimating the global 6-DoF camera pose from a single RGB image in a given environment. Previous works consider every part of the image as valuable for localization. However, many image regions, such as the sky, occlusions, and repetitive non-distinguishable patterns, cannot be utilized for localization. In addition to adding unnecessary computational effort, extracting and matching features from such regions produces many wrong matches, which decreases the localization accuracy and efficiency. Our work addresses this particular issue and shows, by exploiting an interesting concept of 3D models, that we can leverage discriminative environment parts and avoid useless image regions for single-image localization. Interestingly, by avoiding the selection of keypoints from unreliable image regions such as trees, bushes, cars, pedestrians, and occlusions, our work acts naturally as an outlier filter. This makes our system highly efficient, since a minimal set of correspondences is needed, and highly accurate, since the number of outliers is low. Our work outperforms state-of-the-art methods on the outdoor Cambridge Landmarks dataset. Relying only on a single image at inference, it exceeds in accuracy methods that exploit pose priors and/or reference 3D models, while being much faster. By selecting as few as 100 correspondences, it surpasses similar methods that localize from thousands of correspondences, while being more efficient. In particular, it achieves, compared to those methods, a 33% improvement in localization on the Old Hospital scene. Furthermore, it even outperforms direct pose regressors that learn from image sequences.
Recent 3D registration methods can effectively handle large-scale or partially overlapping point pairs. However, despite its practicality, matching point pairs that are unbalanced in spatial scale and density has been overlooked. We propose a novel 3D registration method, called UPPNet, for unbalanced point pairs. We present a hierarchical framework that efficiently finds inlier correspondences by gradually reducing the search space. Our method predicts the subregions of the target point cloud that are likely to overlap with the query points. A subsequent superpoint matching module and a fine-grained refinement module then estimate accurate correspondences between the two point clouds. Furthermore, we apply geometric constraints to refine the correspondences that satisfy spatial compatibility. Correspondence prediction is trained end-to-end, and our method can predict the proper rigid transformation with a single forward pass given a point cloud pair. To validate the efficacy of the proposed method, we create the KITTI-UPP dataset by augmenting the KITTI LiDAR dataset. Experiments on this dataset show that the proposed method significantly outperforms state-of-the-art pairwise point cloud registration methods, with a 78% improvement in registration recall when the target point cloud is about 10$\times$ spatially larger and denser than the query point cloud.
We study the problem of learning feature poses, i.e., scale and orientation, for image regions of interest. Despite its apparent simplicity, the problem is non-trivial; it is hard to obtain large-scale image regions with explicit pose annotations from which a model could learn directly. To tackle this issue, we propose a self-supervised learning framework with a histogram alignment technique. It generates pairs of image patches by random rescaling/rotation and then trains an estimator to predict their scale/orientation values so that their relative difference is consistent with the rescaling/rotation used. The estimator learns to predict a non-parametric histogram distribution of scales/orientations without any supervision. Experiments show that it significantly outperforms previous methods in scale/orientation estimation, and also improves image matching and 6-DoF camera pose estimation when our patch poses are incorporated into the matching process.
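The histogram alignment idea can be summarized as: rotating (or rescaling) a patch should circularly shift its predicted orientation (or scale) histogram by the corresponding number of bins. Below is a sketch of such a consistency loss over circular histograms; the bin shift and divergence choice are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def histogram_alignment_loss(hist_a, hist_b, shift_bins):
    """If patch B is patch A rotated by an angle of `shift_bins`
    orientation bins, A's predicted histogram, circularly shifted by
    that amount, should match B's prediction.
    hist_a, hist_b: (B, bins) raw histogram logits.
    """
    shifted = torch.roll(hist_a, shifts=shift_bins, dims=-1)
    # KL divergence between the aligned histogram distributions.
    return F.kl_div(hist_b.log_softmax(-1), shifted.softmax(-1),
                    reduction='batchmean')
```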
In this paper, we address the problem of estimating scale factors between images. We formulate the scale estimation problem as a prediction of a probability distribution over scale factors. We design a new architecture, ScaleNet, that exploits dilated convolutions as well as self- and cross-correlation layers to predict the scale between images. We demonstrate that rectifying images with the estimated scales leads to significant performance improvements for various tasks and methods. Specifically, we show how ScaleNet can be combined with sparse local features and dense correspondence networks to improve camera pose estimation, 3D reconstruction, or dense geometric matching across different benchmarks and datasets. We provide an extensive evaluation on several tasks and analyze the computational overhead of ScaleNet. The code, evaluation protocols, and trained models are publicly available at https://github.com/axelbarroso/scalenet.
As a fundamental task for intelligent robots, visual SLAM has made great progress over the past decades. However, robust SLAM under highly weak-textured environments remains very challenging. In this paper, we propose a novel visual SLAM system named RWT-SLAM to tackle this problem. We modify the LoFTR network, which is able to produce dense point matching under low-textured scenes, to generate feature descriptors. To integrate the new features into the popular ORB-SLAM framework, we develop feature masks to filter out unreliable features and employ a KNN strategy to strengthen matching robustness. We also retrain a visual vocabulary on the new descriptors for efficient loop closing. The resulting RWT-SLAM is tested on various public datasets such as TUM and OpenLORIS, as well as on our own data. The results show very promising performance under highly weak-textured environments.
We tackle the essential task of finding dense visual correspondences between a pair of images. This is a challenging problem due to various factors such as poor texture, repetitive patterns, illumination variation, and motion blur. In contrast to methods that use dense correspondence ground truth as direct supervision for local feature matching training, we train 3DG-STFM: a multi-modal matching model (teacher) that enforces depth consistency under 3D dense correspondence supervision and transfers the knowledge to a 2D unimodal matching model (student). Both the teacher and student models consist of two transformer-based matching modules that obtain dense correspondences in a coarse-to-fine manner. The teacher model guides the student model to learn RGB-induced depth information for the matching purpose on both coarse and fine branches. We also evaluate 3DG-STFM on a model compression task. To the best of our knowledge, 3DG-STFM is the first student-teacher learning method for the local feature matching task. The experiments show that our method outperforms state-of-the-art methods on indoor and outdoor camera pose estimation as well as homography estimation problems. Code is available at: https://github.com/ryan-prime/3dg-stfm.
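Generic softened-softmax distillation conveys the flavor of this teacher-to-student transfer: the student's match distribution is pulled toward the depth-informed teacher's. The sketch below is that standard recipe, not 3DG-STFM's exact per-branch losses.

```python
import torch
import torch.nn.functional as F

def matching_distillation_loss(student_logits, teacher_logits, T=4.0):
    """Softened-softmax distillation over per-query match distributions.
    student_logits, teacher_logits: (B, L, S) raw match scores between
    L query positions and S candidate positions; T is the temperature.
    """
    s = F.log_softmax(student_logits / T, dim=-1)   # student distributions
    t = F.softmax(teacher_logits / T, dim=-1)       # (frozen) teacher targets
    return F.kl_div(s, t, reduction='batchmean') * (T * T)
```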
We present LoFTR (Local Feature TRansformer), a novel method for local image feature matching. Instead of performing image feature detection, description, and matching sequentially, we propose to first establish pixel-wise dense matches at a coarse level and later refine the good matches at a fine level. In contrast to dense methods that use a cost volume to search correspondences, we use self- and cross-attention layers in Transformer to obtain feature descriptors that are conditioned on both images. The global receptive field provided by Transformer enables our method to produce dense matches in low-texture areas, where feature detectors usually struggle to produce repeatable interest points. The experiments on indoor and outdoor datasets show that LoFTR outperforms state-of-the-art methods by a large margin. LoFTR also ranks first on two public benchmarks of visual localization among the published methods. Code is available at our project page: https://zju3dv.github.io/loftr/.
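At the coarse level, dense matches of this kind can be scored with a dual-softmax over the feature similarity matrix and kept when they are mutual nearest neighbors above a confidence threshold. A simplified sketch of that stage (see the project page for the actual implementation):

```python
import torch

def coarse_dense_matches(feat_a, feat_b, temperature=0.1, thresh=0.2):
    """Coarse dense matching with dual-softmax and mutual nearest
    neighbors. feat_a: (L, C), feat_b: (S, C) flattened coarse features.
    Returns (K, 2) index pairs of confident coarse matches.
    """
    sim = feat_a @ feat_b.t() / temperature           # (L, S) similarities
    conf = sim.softmax(dim=0) * sim.softmax(dim=1)    # dual-softmax scores
    # Keep mutual nearest neighbors above the confidence threshold.
    mask = (conf == conf.max(dim=1, keepdim=True).values) \
         & (conf == conf.max(dim=0, keepdim=True).values) \
         & (conf > thresh)
    return mask.nonzero(as_tuple=False)
```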