In this paper, we address the problem of estimating the scale factor between two images. We formulate scale estimation as the prediction of a probability distribution over scale factors. We design a new architecture, ScaleNet, that exploits dilated convolutions as well as self- and cross-correlation layers to predict the scale between images. We demonstrate that rectifying images with the estimated scale leads to significant performance improvements for various tasks and methods. Specifically, we show how ScaleNet can be combined with sparse local features and dense correspondence networks to improve camera pose estimation, 3D reconstruction, and dense geometric matching across different benchmarks and datasets. We provide an extensive evaluation on several tasks and analyze the computational overhead of ScaleNet. Code, evaluation protocols, and trained models are publicly available at https://github.com/axelbarroso/scalenet.
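To make the formulation above concrete, the sketch below shows one way a predicted probability distribution over discrete scale factors can be collapsed into a single rectification factor. The bin values and the log-space expectation are illustrative assumptions, not ScaleNet's released configuration.

```python
import torch

# Hypothetical log-spaced scale bins; the actual set of scales used by ScaleNet
# is not reproduced here.
SCALE_BINS = torch.tensor([0.25, 0.35, 0.5, 0.71, 1.0, 1.41, 2.0, 2.83, 4.0])

def expected_scale(logits: torch.Tensor) -> float:
    """Collapse a predicted distribution over scale bins into one scale factor.

    The expectation is taken in log-space so that, e.g., equal mass on 0.5x and
    2.0x averages to 1.0x rather than 1.25x.
    """
    probs = torch.softmax(logits, dim=-1)
    return float(torch.exp((probs * SCALE_BINS.log()).sum()))

# Example: a distribution peaked around the 2.0x bin yields a scale close to 2.
print(expected_scale(torch.tensor([0., 0., 0., 0., 1., 4., 1., 0., 0.])))
```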
This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics, our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in challenging real-world indoor and outdoor environments. The proposed method performs matching in real-time on a modern GPU and can be readily integrated into modern SfM or SLAM systems. The code and trained weights are publicly available at github.com/magicleap/SuperGluePretrainedNetwork.
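The differentiable optimal transport step mentioned above is typically realised as Sinkhorn iterations over a score matrix augmented with a "dustbin" for unmatchable points. The following is a minimal log-domain sketch assuming the dustbin row/column is already appended to the scores; it illustrates the mechanism rather than reproducing SuperGlue's implementation.

```python
import torch

def log_sinkhorn(scores: torch.Tensor, n_iters: int = 20) -> torch.Tensor:
    """Log-domain Sinkhorn normalisation of an augmented (M+1, N+1) score matrix.

    `scores` is assumed to already contain a learned "dustbin" row and column for
    unmatchable points. Iteration count and marginal construction are illustrative.
    """
    m, n = scores.shape[0] - 1, scores.shape[1] - 1
    # Each real keypoint carries unit mass; the dustbins absorb whatever stays unmatched.
    log_mu = torch.cat([torch.zeros(m), torch.log(torch.tensor([float(n)]))])
    log_nu = torch.cat([torch.zeros(n), torch.log(torch.tensor([float(m)]))])
    u, v = torch.zeros_like(log_mu), torch.zeros_like(log_nu)
    for _ in range(n_iters):
        u = log_mu - torch.logsumexp(scores + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(scores + u[:, None], dim=0)
    return scores + u[:, None] + v[None, :]   # log of the soft assignment matrix
```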
In this paper, we propose to go beyond the established approach to vision-based localization that relies on visual descriptor matching between a query image and a 3D point cloud. While matching keypoints via visual descriptors makes localization highly accurate, it entails significant storage requirements, raises privacy concerns, and requires the descriptors to be updated over time. To gracefully cope with these practical challenges of large-scale localization, we propose GoMatch, an alternative to visual-based matching that relies solely on geometric information to match image keypoints to a map represented as sets of bearing vectors. Our novel bearing-vector representation of 3D points significantly alleviates the cross-modal challenge in geometry-based matching that prevented prior work from addressing localization in realistic environments. With additional careful architectural design, GoMatch improves over prior geometry-based matching work with a reduction of (10.67m, 95.7deg) and (1.43m, 34.7deg) in average median pose errors on Cambridge Landmarks and 7-Scenes, while requiring as little as 1.5/1.7% of the storage capacity of the best visual-based matching methods. This confirms its potential and feasibility for real-world localization and opens the door to future efforts toward city-scale visual localization methods that do not require storing visual descriptors.
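The bearing-vector representation referred to above can be pictured as lifting both image keypoints and 3D map points into unit direction vectors expressed in a camera frame, so that the matcher only ever sees geometry. A rough sketch follows, with the learned matching network itself omitted.

```python
import numpy as np

def keypoints_to_bearings(kpts: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift 2D keypoints (N, 2) to unit bearing vectors (N, 3) using intrinsics K."""
    pts_h = np.concatenate([kpts, np.ones((len(kpts), 1))], axis=1)   # homogeneous pixels
    rays = (np.linalg.inv(K) @ pts_h.T).T                             # back-project
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)

def points_to_bearings(pts_3d: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Express 3D map points (N, 3) as unit bearing vectors in a (prior) camera frame."""
    pts_cam = (R @ pts_3d.T).T + t                                    # world -> camera
    return pts_cam / np.linalg.norm(pts_cam, axis=1, keepdims=True)
```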
Despite the advances in local feature extraction achieved by handcrafted and learning-based descriptors, they are still limited by their lack of invariance to non-rigid transformations. In this paper, we present a new approach for computing features from still images that are robust to non-rigid deformations, in order to circumvent the problem of matching deformable surfaces and objects. Our deformation-aware local descriptor, named DEAL, leverages polar sampling and spatial-transformer warping to provide invariance to rotation, scale, and image deformation. We train the model architecture end-to-end by applying isometric non-rigid deformations to objects in a simulated environment as guidance to obtain highly discriminative local features. The experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors on different datasets of real and realistic synthetic deformable objects in still images. The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neurips2021.
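The polar sampling and spatial-transformer warping can be pictured as resampling a feature map on a log-polar grid around each keypoint, so that an in-plane rotation becomes a cyclic column shift and a scale change a row shift. The sketch below is a generic log-polar warp built on `grid_sample`; grid resolution and radius handling are assumptions, not DEAL's actual module.

```python
import math
import torch
import torch.nn.functional as F

def log_polar_patch(feat: torch.Tensor, center_xy, max_radius: float,
                    n_r: int = 16, n_theta: int = 32) -> torch.Tensor:
    """Sample a log-polar patch around `center_xy` (pixels) from a (1, C, H, W) feature map.

    Rows index log-radius and columns index angle, so rotation/scale changes of the image
    show up as shifts of the sampled patch.
    """
    _, _, H, W = feat.shape
    cx, cy = center_xy
    radii = max_radius ** (torch.arange(1, n_r + 1, dtype=torch.float32) / n_r)  # log-spaced
    angles = torch.arange(n_theta, dtype=torch.float32) * (2 * math.pi / n_theta)
    r = radii[:, None].expand(n_r, n_theta)
    a = angles[None, :].expand(n_r, n_theta)
    xs = cx + r * torch.cos(a)
    ys = cy + r * torch.sin(a)
    # grid_sample expects sampling coordinates normalised to [-1, 1]
    grid = torch.stack([2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1], dim=-1)[None]
    return F.grid_sample(feat, grid, align_corners=True)   # (1, C, n_r, n_theta)
```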
Visual localization tackles the challenge of estimating the camera pose from a query image by analyzing correspondences between the query and a map. This task is computation- and data-intensive, which makes a thorough evaluation of methods on various datasets challenging. Nevertheless, in order to further advance the field, we claim that robust visual localization algorithms should be evaluated on multiple datasets covering a broad domain variety. To facilitate this, we introduce Kapture, a new, flexible, unified data format and toolbox for visual localization and structure-from-motion (SfM). It enables easy use of different datasets as well as efficient and reusable data processing. To demonstrate this, we present a versatile pipeline for visual localization that facilitates the use of different local and global features, 3D data (e.g., depth maps), non-vision sensor data (e.g., IMU, GPS, WiFi), and various processing algorithms. Using multiple pipeline configurations, we show the great versatility of Kapture in our experiments. Furthermore, we evaluate our methods on eight public datasets, where they rank among the best and first on many of them. To foster future research, we release the code, models, and all datasets used in this paper under a permissive BSD license at github.com/naver/kapture and github.com/naver/kapture-localization.
Modeling sparse and dense image matching within a unified functional correspondence model has recently attracted research interest. However, existing efforts mainly focus on improving matching accuracy while neglecting efficiency, which is crucial for real-world applications. In this paper, we propose an efficient architecture that finds correspondences in a coarse-to-fine manner, significantly improving the efficiency of functional correspondence models. To achieve this, multiple transformer blocks are connected stage by stage to progressively refine the predicted coordinates over a shared multi-scale feature extraction network. Given a pair of images and arbitrary query coordinates, all correspondences are predicted within a single feed-forward pass. We further propose an adaptive query-clustering strategy and an uncertainty-based outlier detection module that cooperate with the proposed framework for faster and better predictions. Experiments on various sparse and dense matching tasks demonstrate the superiority of our method over existing state-of-the-art works in both efficiency and effectiveness.
Local image feature matching, which aims to identify and correspond similar regions between image pairs, is an important concept in computer vision. Most existing image matching methods follow the one-to-one assignment principle and employ mutual nearest neighbors to guarantee unique correspondences between local features across images. However, images captured under different conditions may exhibit large scale variations or viewpoint diversity, so that one-to-one assignment can cause ambiguous or missing representations in dense matching. In this paper, we introduce AdaMatcher, a novel detector-free local feature matching method which first correlates dense features through a lightweight feature interaction module and estimates the co-visible area of the paired images, then performs a patch-level many-to-one assignment to predict match proposals, and finally refines them with a one-to-one refinement module. Extensive experiments show that AdaMatcher outperforms solid baselines and achieves state-of-the-art results on many downstream tasks. Moreover, the many-to-one assignment and one-to-one refinement modules can be used as a refinement network for other matching methods, such as SuperGlue, to further boost their performance. The code will be made available upon publication.
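As a loose stand-in for the patch-level many-to-one assignment described above, the sketch below keeps the best partner of every patch in either direction (their union) before a later one-to-one refinement, in contrast to the mutual-nearest-neighbour rule; thresholds and score normalisation are purely illustrative.

```python
import torch

def many_to_one_proposals(sim: torch.Tensor, thr: float = 0.2):
    """Many-to-one match proposals from an (M, N) patch similarity matrix.

    Every A patch keeps its best B patch and vice versa if the score clears a
    threshold, so several A patches may propose the same B patch.
    """
    a_to_b = sim.argmax(dim=1)                       # best B patch for every A patch
    b_to_a = sim.argmax(dim=0)                       # best A patch for every B patch
    prop_a = [(i, int(j)) for i, j in enumerate(a_to_b) if sim[i, j] > thr]
    prop_b = [(int(i), j) for j, i in enumerate(b_to_a) if sim[i, j] > thr]
    return sorted(set(prop_a) | set(prop_b))         # union, unlike mutual-NN intersection

def mutual_nearest_neighbours(sim: torch.Tensor):
    """The usual one-to-one rule, kept for comparison: keep a pair only if it is the
    argmax along both its row and its column."""
    a_to_b = sim.argmax(dim=1)
    b_to_a = sim.argmax(dim=0)
    return [(i, int(j)) for i, j in enumerate(a_to_b) if int(b_to_a[j]) == i]
```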
Most image matching methods perform poorly when encountering large scale changes between images. To address this problem, we first propose a scale-difference-aware image matching method (SDAIM) that reduces the scale difference of an image pair before local feature extraction by resizing both images according to an estimated scale ratio. Second, in order to estimate the scale ratio accurately, we propose a covisibility-attention-reinforced matching module (CVARM) and, based on it, design a novel neural network termed Scale-Net. The proposed CVARM places more emphasis on the co-visible regions within the image pair and suppresses the distraction from regions visible in only one image. Quantitative and qualitative experiments confirm that the proposed Scale-Net achieves higher scale-ratio estimation accuracy and much better generalization ability than all existing scale-ratio estimation methods. Further experiments on image matching and relative pose estimation tasks demonstrate that our SDAIM and Scale-Net can greatly improve the performance of representative local features and state-of-the-art local feature matching methods.
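The SDAIM idea of resizing the pair before extraction can be sketched as below, where `estimate_scale_ratio` stands in for a Scale-Net-like estimator and `extract_and_match` for any local feature matcher; both names are placeholders, and always downscaling the more zoomed-in image is just one possible convention.

```python
import cv2
import numpy as np

def sdaim_match(img_a, img_b, estimate_scale_ratio, extract_and_match):
    """Sketch of a scale-difference-aware matching pipeline (placeholder callables)."""
    r = estimate_scale_ratio(img_a, img_b)           # how much larger content appears in B
    # Always downscale the more zoomed-in image to avoid upsampling artifacts.
    s_a, s_b = (1.0, 1.0 / r) if r >= 1.0 else (r, 1.0)
    a_small = cv2.resize(img_a, None, fx=s_a, fy=s_a)
    b_small = cv2.resize(img_b, None, fx=s_b, fy=s_b)
    kpts_a, kpts_b = extract_and_match(a_small, b_small)   # (N, 2) pixel coordinates each
    # Map matched keypoints back to the original image resolutions.
    return np.asarray(kpts_a) / s_a, np.asarray(kpts_b) / s_b
```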
This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multihomography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.
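Homographic Adaptation can be read as a self-labelling aggregation: run the base detector on many random homographic warps of an image and average the un-warped responses. A rough sketch with a placeholder detector follows; the homography sampling and averaging details are simplified assumptions, not the paper's exact recipe.

```python
import cv2
import numpy as np

def homographic_adaptation(img, detect_heatmap, n_homographies: int = 32):
    """Aggregate a base detector's heatmap over random homographies.

    `detect_heatmap(image) -> (H, W) float score map` is a placeholder for the base detector.
    """
    h, w = img.shape[:2]
    acc = detect_heatmap(img).astype(np.float32)
    count = np.ones((h, w), np.float32)
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    for _ in range(n_homographies):
        jitter = np.random.uniform(-0.15, 0.15, size=(4, 2)).astype(np.float32) * [w, h]
        H, _ = cv2.findHomography(corners, corners + jitter)
        warped = cv2.warpPerspective(img, H, (w, h))
        heat = detect_heatmap(warped).astype(np.float32)
        # Warp the response and a validity mask back into the original image frame.
        acc += cv2.warpPerspective(heat, np.linalg.inv(H), (w, h))
        count += cv2.warpPerspective(np.ones_like(heat), np.linalg.inv(H), (w, h))
    return acc / np.maximum(count, 1e-6)
```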
Establishing a sparse set of keypoint correspondences between images is a fundamental task in many computer vision pipelines. Typically, this translates into a computationally expensive nearest-neighbor search in which every keypoint descriptor of one image must be compared with all the descriptors of the other. To lower the computational cost of the matching stage, we propose a deep feature extraction network capable of detecting complementary sets of keypoints in each image. Since only descriptors within the same set need to be compared across images, the computational complexity of the matching stage decreases with the number of sets. We train our network to predict keypoints and jointly compute the corresponding descriptors. In particular, to learn complementary sets of keypoints, we introduce a novel unsupervised loss that penalizes intersections between the different sets. In addition, we propose a novel descriptor-based weighting scheme designed to penalize the detection of keypoints with non-discriminative descriptors. Through extensive experiments we show that our feature extraction network, trained only on synthetically warped images and in a fully unsupervised manner, achieves competitive results on 3D reconstruction and re-localization tasks at a reduced matching complexity.
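The complexity reduction comes from matching only within corresponding keypoint sets. A minimal sketch of that restricted nearest-neighbour search follows, with set labels assumed to come from the network described above.

```python
import numpy as np

def match_within_sets(desc_a, set_a, desc_b, set_b, n_sets: int):
    """Nearest-neighbour matching restricted to keypoints with the same set index.

    desc_*: (N, D) descriptors; set_*: (N,) integer set labels. Only same-set descriptors
    are compared, so the exhaustive-search cost drops roughly by the number of sets.
    Ratio tests and thresholds are omitted for brevity.
    """
    matches = []
    for s in range(n_sets):
        ia, ib = np.flatnonzero(set_a == s), np.flatnonzero(set_b == s)
        if len(ia) == 0 or len(ib) == 0:
            continue
        dists = np.linalg.norm(desc_a[ia, None, :] - desc_b[None, ib, :], axis=-1)
        nn = dists.argmin(axis=1)
        matches += [(int(i), int(ib[j])) for i, j in zip(ia, nn)]
    return matches
```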
We study the problem of learning to assign a characteristic pose, i.e., scale and orientation, to an image region of interest. Despite its apparent simplicity, the problem is non-trivial: it is hard to obtain a large-scale set of image regions with explicit pose annotations from which a model could learn directly. To address this, we propose a self-supervised learning framework based on a histogram alignment technique. It generates pairs of image patches by random rescaling/rotation and then trains an estimator to predict their scale/orientation values so that their relative difference is consistent with the rescaling/rotation that was applied. The estimator learns to predict a non-parametric histogram distribution over scale/orientation without any supervision. Experiments show that it significantly outperforms previous methods in scale/orientation estimation and also improves image matching and 6-DoF camera pose estimation by incorporating our patch poses into the matching process.
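The histogram alignment idea can be sketched as a loss that asks the estimator's histogram for a rotated patch to equal a circularly shifted version of the histogram for the original patch. The version below uses a KL divergence and a hypothetical 36-bin orientation histogram; the paper's exact loss and sign conventions differ.

```python
import torch
import torch.nn.functional as F

def histogram_alignment_loss(hist_a: torch.Tensor, hist_b: torch.Tensor,
                             applied_rotation_deg: float, n_bins: int = 36) -> torch.Tensor:
    """Self-supervised alignment loss for orientation histograms (sketch only).

    hist_a / hist_b are the estimator's softmax outputs (B, n_bins) for two patches
    that differ by a known random in-plane rotation. Shifting hist_a by the applied
    rotation should reproduce hist_b, so the mismatch is penalised.
    """
    shift = int(round(applied_rotation_deg / 360.0 * n_bins)) % n_bins
    target = torch.roll(hist_a, shifts=shift, dims=-1)        # circularly shifted histogram
    return F.kl_div(hist_b.clamp_min(1e-8).log(), target.detach(), reduction="batchmean")
```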
Video provides us with the spatio-temporal consistency needed for visual learning. Recent approaches have utilized this signal to learn correspondence estimation from close-by frame pairs. However, by only relying on close-by frame pairs, those approaches miss out on the richer long-range consistency between distant overlapping frames. To address this, we propose a self-supervised approach for correspondence estimation that learns from multiview consistency in short RGB-D video sequences. Our approach combines pairwise correspondence estimation and registration with a novel SE(3) transformation synchronization algorithm. Our key insight is that self-supervised multiview registration allows us to obtain correspondences over longer time frames; increasing both the diversity and difficulty of sampled pairs. We evaluate our approach on indoor scenes for correspondence estimation and RGB-D pointcloud registration and find that we perform on-par with supervised approaches.
Erroneous feature matches have severe impact on subsequent camera pose estimation and often require additional, time-costly measures, like RANSAC, for outlier rejection. Our method tackles this challenge by addressing feature matching and pose optimization jointly. To this end, we propose a graph attention network to predict image correspondences along with confidence weights. The resulting matches serve as weighted constraints in a differentiable pose estimation. Training feature matching with gradients from pose optimization naturally learns to down-weight outliers and boosts pose estimation on image pairs compared to SuperGlue by 6.7% on ScanNet. At the same time, it reduces the pose estimation time by over 50% and renders RANSAC iterations unnecessary. Moreover, we integrate information from multiple views by spanning the graph across multiple frames to predict the matches all at once. Multi-view matching combined with end-to-end training improves the pose estimation metrics on Matterport3D by 18.8% compared to SuperGlue.
Local feature matching is a computationally intensive task at the sub-pixel level. While detector-based methods coupled with feature descriptors struggle in low-texture scenes, CNN-based methods with a sequential extract-then-match pipeline fail to exploit the matching capacity of the encoder and tend to overburden the decoder with matching. In contrast, we propose a novel hierarchical extract-and-match transformer, termed MatchFormer. Inside each stage of the hierarchical encoder, we interleave self-attention for feature extraction and cross-attention for feature matching, yielding a human-intuitive extract-and-match scheme. Such a match-aware encoder relieves the overloaded decoder and makes the model highly efficient. Furthermore, combining self- and cross-attention on multi-scale features in a hierarchical architecture improves matching robustness, particularly in low-texture indoor scenes or with less outdoor training data. Thanks to this strategy, MatchFormer is a multi-win solution in efficiency, robustness, and precision. Compared to the previous best method for indoor pose estimation, our lite MatchFormer has only 45% of the GFLOPs, yet achieves a +1.3% precision gain and a 41% running speed boost. The large MatchFormer reaches state of the art on four different benchmarks, including indoor pose estimation (ScanNet), outdoor pose estimation (MegaDepth), homography estimation and image matching (HPatches), and visual localization (InLoc).
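The interleaved extract-and-match scheme can be sketched as an encoder block that applies self-attention within each image and cross-attention across the pair. The block below uses vanilla multi-head attention as a stand-in; MatchFormer's actual blocks rely on more efficient attention and positional designs.

```python
import torch
import torch.nn as nn

class ExtractAndMatchBlock(nn.Module):
    """One encoder block interleaving self- and cross-attention (illustrative sketch)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        # Self-attention refines each image's own features ("extract") ...
        na, nb = self.norm1(feat_a), self.norm1(feat_b)
        feat_a = feat_a + self.self_attn(na, na, na)[0]
        feat_b = feat_b + self.self_attn(nb, nb, nb)[0]
        # ... cross-attention lets each image attend to the other ("match").
        qa, qb = self.norm2(feat_a), self.norm2(feat_b)
        feat_a = feat_a + self.cross_attn(qa, qb, qb)[0]
        feat_b = feat_b + self.cross_attn(qb, qa, qa)[0]
        # Position-wise MLP.
        feat_a = feat_a + self.mlp(self.norm3(feat_a))
        feat_b = feat_b + self.mlp(self.norm3(feat_b))
        return feat_a, feat_b
```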
We propose a three-stage 6-DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence estimation network and a multi-view pose refinement method to estimate the full 6-DoF pose. Unlike other deep learning methods, which are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities (RGB or depth) to be used. Moreover, we propose a novel pose refinement method based on differentiable rendering. The main idea is to compare predicted and rendered correspondences across multiple views in order to obtain a pose that is consistent with the predicted correspondences in all views. Our proposed method is rigorously evaluated on different data modalities and types of training data in controlled setups. The main conclusion is that RGB excels at correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available; naturally, their combination achieves the best overall performance. We conduct extensive evaluations and ablation studies to analyze and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while remaining fast and scalable, independent of the data modality and the type of training data used.
In this paper, we propose an end-to-end framework that jointly learns keypoint detection, descriptor representation and cross-frame matching for the task of image-based 3D localization. Prior art has tackled each of these components individually, purportedly aiming to alleviate difficulties in effectively training a holistic network. We design a self-supervised image warping correspondence loss for both feature detection and matching, a weakly-supervised epipolar constraints loss on relative camera pose learning, and a directional matching scheme that detects keypoint features in a source image and performs coarse-to-fine correspondence search on the target image. We leverage this framework to enforce cycle consistency in our matching module. In addition, we propose a new loss to robustly handle both definite inlier/outlier matches and less-certain matches. The integration of these learning mechanisms enables end-to-end training of a single network performing all three localization components. Benchmarking our approach on public datasets exemplifies how such an end-to-end framework is able to yield more accurate localization that outperforms both traditional methods as well as state-of-the-art weakly supervised methods.
Existing methods detect keypoints in a non-differentiable way, and therefore cannot directly optimize keypoint positions through back-propagation. To address this issue, we present a differentiable keypoint detection module that outputs accurate sub-pixel keypoints. A reprojection loss is then proposed to directly optimize these sub-pixel keypoints, and a dispersity peak loss is presented for accurate keypoint regularization. We also extract descriptors in a sub-pixel manner and train them with a stable neural reprojection error loss. In addition, a lightweight network is designed for keypoint detection and descriptor extraction, which can run at 95 frames per second on a commercial GPU. On homography estimation, camera pose estimation, and visual (re-)localization tasks, the proposed method achieves performance on par with state-of-the-art approaches while greatly reducing inference time.
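A common way to make keypoint positions differentiable, in the spirit of the module described above, is a soft-argmax over a small window of the score map, which yields sub-pixel coordinates that a reprojection loss can optimise. The sketch below illustrates this; window size, temperature, and the exact formulation are assumptions rather than the released code of the method.

```python
import torch
import torch.nn.functional as F

def soft_subpixel_keypoints(score_map: torch.Tensor, coarse_kpts: torch.Tensor,
                            radius: int = 2, temperature: float = 0.1) -> torch.Tensor:
    """Differentiable sub-pixel refinement of integer keypoints (illustrative sketch).

    score_map: (H, W) detection scores; coarse_kpts: (N, 2) integer (x, y) locations.
    A soft-argmax over a small window around each keypoint gives sub-pixel offsets
    through which gradients can flow.
    """
    window = 2 * radius + 1
    offsets = torch.arange(-radius, radius + 1, dtype=torch.float32)
    dy, dx = torch.meshgrid(offsets, offsets, indexing="ij")
    padded = F.pad(score_map[None, None], (radius,) * 4, mode="replicate")[0, 0]
    refined = []
    for x, y in coarse_kpts.long().tolist():
        patch = padded[y:y + window, x:x + window]                  # window centred on (x, y)
        w = torch.softmax(patch.reshape(-1) / temperature, dim=0).view(window, window)
        refined.append(torch.stack([x + (w * dx).sum(), y + (w * dy).sum()]))
    return torch.stack(refined)
```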
We introduce a lightweight network to improve descriptors of keypoints within the same image. The network takes the original descriptors and the geometric properties of keypoints as the input, and uses an MLP-based self-boosting stage and a Transformer-based cross-boosting stage to enhance the descriptors. The enhanced descriptors can be either real-valued or binary ones. We use the proposed network to boost both hand-crafted (ORB, SIFT) and the state-of-the-art learning-based descriptors (SuperPoint, ALIKE) and evaluate them on image matching, visual localization, and structure-from-motion tasks. The results show that our method significantly improves the performance of each task, particularly in challenging cases such as large illumination changes or repetitive patterns. Our method requires only 3.2ms on desktop GPU and 27ms on embedded GPU to process 2000 features, which is fast enough to be applied to a practical system.
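A minimal sketch of such a two-stage booster is given below: an MLP-based self-boosting stage that fuses each descriptor with an encoding of its keypoint geometry, followed by a Transformer-based cross-boosting stage over all keypoints of the same image. Layer sizes and the geometric encoding are placeholders, not the released model.

```python
import torch
import torch.nn as nn

class DescriptorBooster(nn.Module):
    """Sketch of a descriptor-boosting network in the spirit described above."""

    def __init__(self, desc_dim: int = 256, geo_dim: int = 4):
        super().__init__()
        # Self-boosting: fuse each descriptor with an encoding of its keypoint geometry
        # (e.g. position, scale, orientation, detection score).
        self.geo_encoder = nn.Sequential(nn.Linear(geo_dim, desc_dim), nn.ReLU(),
                                         nn.Linear(desc_dim, desc_dim))
        self.self_boost = nn.Sequential(nn.Linear(2 * desc_dim, desc_dim), nn.ReLU(),
                                        nn.Linear(desc_dim, desc_dim))
        # Cross-boosting: let descriptors of the same image attend to each other.
        layer = nn.TransformerEncoderLayer(d_model=desc_dim, nhead=8, batch_first=True)
        self.cross_boost = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, desc: torch.Tensor, geo: torch.Tensor) -> torch.Tensor:
        # desc: (B, N, desc_dim) raw descriptors, geo: (B, N, geo_dim) keypoint geometry.
        boosted = self.self_boost(torch.cat([desc, self.geo_encoder(geo)], dim=-1))
        boosted = self.cross_boost(boosted)
        return nn.functional.normalize(boosted, dim=-1)   # real-valued output; binarise if needed
```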
We present a simple baseline for directly estimating the relative pose (rotation and translation, including scale) between two images. Deep methods have recently shown strong progress but often require complex or multi-stage architectures. We show that a handful of modifications can be applied to a Vision Transformer (ViT) to bring its computations close to the eight-point algorithm. This inductive bias enables a simple method to be competitive in multiple settings, often with substantial improvements over the state of the art and strong performance gains in limited-data regimes.
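For reference, the classical (un-learned) eight-point algorithm that the ViT is nudged towards looks as follows; this is textbook code for estimating an essential matrix from normalised correspondences, not the learned method itself.

```python
import numpy as np

def eight_point_essential(x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Classic eight-point estimate of the essential matrix.

    x1, x2: (N >= 8, 2) corresponding points in *normalised* camera coordinates
    (pixels pre-multiplied by K^-1). Returns E such that x2_h^T E x1_h ~ 0.
    """
    n = len(x1)
    a = np.zeros((n, 9))
    for i, ((u1, v1), (u2, v2)) in enumerate(zip(x1, x2)):
        a[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    _, _, vt = np.linalg.svd(a)
    e = vt[-1].reshape(3, 3)                       # null-space solution
    # Project onto the essential-matrix manifold (two equal singular values, one zero).
    u, s, vt = np.linalg.svd(e)
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt
```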
Sparse local feature extraction is widely regarded as important in typical vision tasks such as simultaneous localization and mapping, image matching and 3D reconstruction. At present, it still has deficiencies needing further improvement, mainly the discrimination power of extracted local descriptors, the localization accuracy of detected keypoints, and the efficiency of local feature learning. This paper focuses on improving the currently popular sparse local feature learning with camera pose supervision. To this end, it proposes a Shared Coupling-bridge scheme with four light-weight yet effective improvements for weakly-supervised local feature (SCFeat) learning. It mainly contains: i) the \emph{Feature-Fusion-ResUNet Backbone} (F2R-Backbone) for local descriptor learning, ii) a shared coupling-bridge normalization to improve the decoupled training of the description and detection networks, iii) an improved detection network with a peakiness measurement to detect keypoints, and iv) the fundamental matrix error as a reward factor to further optimize feature detection training. Extensive experiments prove that our SCFeat improvements are effective: it often obtains state-of-the-art performance on classic image matching and visual localization, and achieves competitive results on 3D reconstruction. For sharing and communication, our source code is available at https://github.com/sunjiayuanro/SCFeat.git.
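The "fundamental matrix error" used as a reward factor is, in spirit, an epipolar residual of putative matches. A standard Sampson-distance computation is sketched below as an example of such a signal; the exact reward shaping in SCFeat is not reproduced here.

```python
import numpy as np

def sampson_errors(F: np.ndarray, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """First-order epipolar (Sampson) error of matches under a fundamental matrix F.

    x1, x2: (N, 2) pixel correspondences. Returns one error value per match.
    """
    ones = np.ones((len(x1), 1))
    p1 = np.hstack([x1, ones])            # homogeneous coordinates
    p2 = np.hstack([x2, ones])
    Fx1 = (F @ p1.T).T                    # epipolar lines in image 2
    Ftx2 = (F.T @ p2.T).T                 # epipolar lines in image 1
    num = np.sum(p2 * Fx1, axis=1) ** 2   # (x2^T F x1)^2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / np.maximum(den, 1e-12)
```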