Learnable keypoint detectors and descriptors are beginning to outperform classical hand-crafted feature extraction methods. Recent studies on self-supervised learning of visual representations have driven the increasing performance of learnable models based on deep networks. By leveraging traditional data augmentations and homographic transformations, these networks learn to detect corners under adverse conditions such as extreme illumination changes. However, their generalization capability is limited to corner-like features detected through classical methods or synthetically generated data. In this paper, we propose the Correspondence Network (CorrNet), which learns to detect repeatable keypoints and to extract discriminative descriptions via unsupervised contrastive learning under spatial constraints. Our experiments show that CorrNet is able to detect not only low-level features such as corners, but also high-level features representing similar objects present in a pair of input images, through our proposed joint guided backpropagation on their latent space. Our approach obtains competitive results under viewpoint changes and achieves state-of-the-art performance under illumination changes.
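As a concrete illustration of contrastive descriptor learning under a spatial constraint, here is a minimal sketch (assumed for exposition, not CorrNet's actual code; `spatial_info_nce` and its signature are illustrative): descriptors at pixels known to correspond across two views form positive pairs, and every other sampled location acts as a negative in an InfoNCE loss.

```python
import torch
import torch.nn.functional as F

def spatial_info_nce(feat_a, feat_b, coords_a, coords_b, temperature=0.1):
    """feat_*: (C, H, W) dense descriptor maps of two views of the same scene.
    coords_*: (N, 2) integer (y, x) locations known to correspond."""
    desc_a = feat_a[:, coords_a[:, 0], coords_a[:, 1]].t()  # (N, C)
    desc_b = feat_b[:, coords_b[:, 0], coords_b[:, 1]].t()  # (N, C)
    desc_a = F.normalize(desc_a, dim=1)
    desc_b = F.normalize(desc_b, dim=1)
    logits = desc_a @ desc_b.t() / temperature  # (N, N) pairwise similarities
    targets = torch.arange(len(logits))         # i-th row should match i-th column
    return F.cross_entropy(logits, targets)

# Toy usage with random maps and identity correspondences.
fa, fb = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
pts = torch.randint(0, 32, (128, 2))
loss = spatial_info_nce(fa, fb, pts, pts)
```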
Despite the advances in local feature extraction achieved by hand-crafted and learning-based descriptors, they remain limited by their lack of invariance to non-rigid transformations. In this paper, we present a new approach to compute features from still images that are robust to non-rigid deformations, to circumvent the problem of matching deformable surfaces and objects. Our deformation-aware local descriptor, named DEAL, leverages polar sampling and spatial transformer warping to provide invariance to rotation, scale, and image deformations. We train the model architecture end-to-end by applying isometric non-rigid deformations to objects in a simulated environment as guidance to provide highly discriminative local features. The experiments show that our method outperforms state-of-the-art hand-crafted, learning-based image, and RGB-D descriptors on different datasets of real and realistic synthetic deformable objects in still images. The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neurips2021.
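A hedged sketch of the polar-sampling step described above: resampling an image patch onto a polar grid with `grid_sample` turns a rotation of the input into a cyclic shift along the angular axis, which is what makes rotation invariance tractable. Sizes and names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def polar_sample(image, center, radius, n_rho=32, n_theta=32):
    """image: (1, C, H, W); center: (x, y) in pixels. Returns a polar patch."""
    _, _, H, W = image.shape
    rho = torch.linspace(0.05, 1.0, n_rho) * radius            # sampled radii
    theta = torch.linspace(0, 2 * torch.pi, n_theta + 1)[:-1]  # sampled angles
    r, t = torch.meshgrid(rho, theta, indexing="ij")
    xs = center[0] + r * torch.cos(t)                          # pixel coordinates
    ys = center[1] + r * torch.sin(t)
    # Normalize to [-1, 1], the coordinate convention grid_sample expects.
    grid = torch.stack([2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1], dim=-1)
    return F.grid_sample(image, grid[None], align_corners=True)  # (1, C, n_rho, n_theta)

patch = polar_sample(torch.randn(1, 3, 128, 128), center=(64.0, 64.0), radius=40.0)
```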
Local feature detection is a key ingredient of many image processing and computer vision applications, such as visual odometry and localization. Most existing algorithms focus on feature detection from a sharp image. They would thus have degraded performance once the image is blurred, which can easily happen under low-lighting conditions. To address this issue, we propose a simple yet both efficient and effective keypoint detection method that is able to accurately localize the salient keypoints in a blurred image. Our method takes advantage of a novel multi-layer perceptron (MLP) based architecture that significantly improves the detection repeatability for a blurred image. The network is also light-weight and able to run in real-time, which enables its deployment for time-constrained applications. Extensive experimental results demonstrate that our detector is able to improve the detection repeatability on blurred images, while maintaining performance comparable to existing state-of-the-art detectors on sharp images.
Weakly supervised learning can help local feature methods overcome the obstacle of acquiring large-scale datasets with densely labeled correspondences. However, since weak supervision cannot distinguish the losses caused by the detection step from those caused by the description step, directly conducting weakly supervised learning within a joint describe-then-detect pipeline limits its performance. In this paper, we propose a decoupled describe-then-detect pipeline tailored for weakly supervised local feature learning. Within our pipeline, the detection step is decoupled from the description step and postponed until discriminative and robust descriptors have been learned. In addition, we introduce a line-to-window search strategy to explicitly exploit camera pose information for better descriptor learning. Extensive experiments show that our method, namely PoSFeat (camera pose supervised feature), outperforms previous fully and weakly supervised methods and achieves state-of-the-art performance on a variety of downstream tasks.
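To illustrate how camera-pose information can restrict descriptor search, here is a hypothetical sketch of the geometric part of a line-to-window strategy: the epipolar line induced by the fundamental matrix defines a band in the target image within which candidate matches are sought. The function name and band width are assumptions, not PoSFeat's implementation.

```python
import numpy as np

def epipolar_band_mask(F_mat, pt, h, w, band=4.0):
    """F_mat: 3x3 fundamental matrix; pt: (x, y) keypoint in the source image.
    Returns a boolean (h, w) mask of target pixels near the epipolar line."""
    line = F_mat @ np.array([pt[0], pt[1], 1.0])  # l = F x, so a*u + b*v + c = 0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    dist = np.abs(line[0] * xs + line[1] * ys + line[2])
    dist /= np.hypot(line[0], line[1]) + 1e-8     # point-to-line distance
    return dist < band

mask = epipolar_band_mask(np.eye(3), (10.0, 20.0), h=480, w=640)
```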
We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each one of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need for retraining.
This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics, our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in challenging real-world indoor and outdoor environments. The proposed method performs matching in real-time on a modern GPU and can be readily integrated into modern SfM or SLAM systems. The code and trained weights are publicly available at github.com/magicleap/SuperGluePretrainedNetwork.
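The differentiable optimal transport step can be made concrete with a compact log-domain Sinkhorn sketch. This is a generic version with uniform marginals and no dustbin row/column (which SuperGlue adds to absorb unmatched points), so it illustrates the mechanism rather than reproducing the paper's exact solver.

```python
import torch

def log_sinkhorn(scores, n_iters=20):
    """scores: (M, N) pairwise match scores. Alternately normalizes rows and
    columns in log space, converging to a soft assignment matrix."""
    log_p = scores
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # rows sum to 1
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # columns sum to 1
    return log_p

# Toy usage: descriptors from two images, similarities as scaled dot products.
da, db = torch.randn(5, 256), torch.randn(7, 256)
assignment = log_sinkhorn(da @ db.t() / 256 ** 0.5).exp()
```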
Sparse local feature extraction is widely regarded as important in typical vision tasks such as simultaneous localization and mapping, image matching, and 3D reconstruction. At present, it still has deficiencies requiring further improvement, mainly the discrimination power of extracted local descriptors, the localization accuracy of detected keypoints, and the efficiency of local feature learning. This paper focuses on advancing the currently popular sparse local feature learning with camera pose supervision. To this end, it proposes a Shared Coupling-bridge scheme with four light-weight yet effective improvements for weakly-supervised local feature (SCFeat) learning. It mainly contains: i) the Feature-Fusion-ResUNet Backbone (F2R-Backbone) for local descriptor learning, ii) a shared coupling-bridge normalization to improve the decoupled training of the description network and the detection network, iii) an improved detection network with peakiness measurement to detect keypoints, and iv) the fundamental-matrix error as a reward factor to further optimize feature detection training. Extensive experiments prove that our SCFeat improvements are effective. It often obtains state-of-the-art performance on classic image matching and visual localization, and it still achieves competitive results on 3D reconstruction. For sharing and communication, our source codes are available at https://github.com/sunjiayuanro/SCFeat.git.
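As a rough illustration of a peakiness measurement, the sketch below scores a location highly when its response stands out against both its spatial neighborhood and its channel-wise mean, in the spirit of D2-Net/ASLFeat-style scoring; the exact formulation in SCFeat may differ.

```python
import torch
import torch.nn.functional as F

def peakiness_score(feat, ksize=3):
    """feat: (B, C, H, W) dense response maps from the description backbone."""
    avg_spatial = F.avg_pool2d(feat, ksize, stride=1, padding=ksize // 2)
    alpha = F.softplus(feat - avg_spatial)                     # spatial peakiness
    beta = F.softplus(feat - feat.mean(dim=1, keepdim=True))   # channel peakiness
    return (alpha * beta).max(dim=1).values                    # (B, H, W) keypoint score

score = peakiness_score(torch.randn(1, 128, 60, 80))
```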
In this paper, we propose an end-to-end framework that jointly learns keypoint detection, descriptor representation, and cross-frame matching for the task of image-based 3D localization. Prior art has tackled each of these components individually, purportedly to alleviate the difficulty of effectively training a holistic network. We design a self-supervised image-warping correspondence loss for both feature detection and matching, a weakly-supervised epipolar-constraint loss on relative camera pose learning, and a directional matching scheme that detects keypoint features in a source image and performs coarse-to-fine correspondence search on the target image. We leverage this framework to enforce cycle consistency in our matching module. In addition, we propose a new loss to robustly handle both definite inlier/outlier matches and less-certain matches. The integration of these learning mechanisms enables end-to-end training of a single network performing all three localization components. Benchmarking our approach on public datasets exemplifies how such an end-to-end framework is able to yield more accurate localization that outperforms both traditional methods and state-of-the-art weakly supervised methods.
This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.
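The Homographic Adaptation procedure can be summarized in a few lines: run the detector under random homographies, warp the resulting heatmaps back into the original frame, and average. The sketch below assumes the kornia library for warping and a generic `detector` callable; it is a simplified rendering, not SuperPoint's released code.

```python
import torch
import kornia.geometry as KG

def homographic_adaptation(image, detector, n_homographies=16, jitter=0.15):
    """image: (1, 1, H, W). Returns an aggregated (1, 1, H, W) heatmap."""
    _, _, H, W = image.shape
    corners = torch.tensor([[[0., 0.], [W - 1, 0.], [W - 1, H - 1], [0., H - 1]]])
    acc = detector(image)  # identity-homography pass
    for _ in range(n_homographies):
        offs = (torch.rand(1, 4, 2) - 0.5) * 2 * jitter * torch.tensor([W, H])
        Hmat = KG.get_perspective_transform(corners, corners + offs)
        warped = KG.warp_perspective(image, Hmat, dsize=(H, W))
        heat = detector(warped)
        # Warp the heatmap back with the inverse homography before averaging.
        acc = acc + KG.warp_perspective(heat, torch.inverse(Hmat), dsize=(H, W))
    return acc / (n_homographies + 1)

heatmap = homographic_adaptation(torch.rand(1, 1, 120, 160),
                                 detector=lambda x: x)  # trivial stand-in detector
```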
Missions to small celestial bodies rely heavily on optical feature tracking for characterization and relative navigation. While deep learning has led to great advances in feature detection and description, training and validating data-driven models for space applications is challenging due to the limited availability of large-scale, annotated datasets. This paper introduces AstroVision, a large-scale dataset comprising 115,970 densely annotated, real images of 16 different small bodies captured during past and ongoing missions. We leverage AstroVision to develop a set of standardized benchmarks and conduct an exhaustive evaluation of both hand-crafted and data-driven feature detection and description methods. Next, we employ AstroVision for end-to-end training of state-of-the-art deep feature detection and description networks and demonstrate improved performance on multiple benchmarks. The full benchmarking pipeline and dataset will be made publicly available to facilitate the development of computer vision algorithms for space applications.
In this paper, we introduce a contrastive learning framework for keypoint detection (CoKe). Keypoint detection differs from other visual tasks where contrastive learning has been applied because the input is a set of images in which multiple keypoints are annotated. This requires the contrastive learning to be extended such that the keypoints are represented and detected independently, which enables the contrastive loss to make the keypoint features different from each other and from the background. Our approach has two benefits: It enables us to exploit contrastive learning for keypoint detection, and by detecting each keypoint independently the detection becomes more robust to occlusion compared to holistic methods, such as stacked hourglass networks, which attempt to detect all keypoints jointly. Our CoKe framework introduces several technical innovations. In particular, we introduce: (i) A clutter bank to represent non-keypoint features; (ii) a keypoint bank that stores prototypical representations of keypoints to approximate the contrastive loss between keypoints; and (iii) a cumulative moving average update to learn the keypoint prototypes while training the feature extractor. Our experiments on a range of diverse datasets (PASCAL3D+, MPII, ObjectNet3D) show that our approach works as well as, or better than, alternative methods for keypoint detection, even for human keypoints, for which the literature is vast. Moreover, we observe that CoKe is exceptionally robust to partial occlusion and previously unseen object poses.
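A hedged sketch of the bank mechanism: one prototype vector per keypoint, updated by a moving average while the feature extractor trains, with a contrastive loss that pushes each feature toward its own prototype and away from the others. For brevity this uses an exponential moving average; CoKe's cumulative moving average differs in detail, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

class KeypointBank:
    def __init__(self, n_keypoints, dim, momentum=0.99):
        self.protos = F.normalize(torch.randn(n_keypoints, dim), dim=1)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, kp_ids, features):
        """kp_ids: (N,) keypoint indices; features: (N, dim) current extractor output."""
        f = F.normalize(features, dim=1)
        self.protos[kp_ids] = self.momentum * self.protos[kp_ids] + (1 - self.momentum) * f
        self.protos[kp_ids] = F.normalize(self.protos[kp_ids], dim=1)

    def contrastive_loss(self, kp_ids, features, temperature=0.07):
        # Each feature should be most similar to its own keypoint prototype.
        logits = F.normalize(features, dim=1) @ self.protos.t() / temperature
        return F.cross_entropy(logits, kp_ids)

bank = KeypointBank(n_keypoints=12, dim=128)
feats, ids = torch.randn(32, 128), torch.randint(0, 12, (32,))
loss = bank.contrastive_loss(ids, feats)
bank.update(ids, feats)
```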
We propose a framework for the robust training of Dense Object Nets (DON), with a focus on multi-object robot manipulation scenarios. DON is a popular approach for obtaining dense, view-invariant object descriptors, which can be used for a multitude of downstream tasks in robot manipulation, such as pose estimation and state representation for control. However, the original approach was trained on singulated objects, with limited results on instance-specific, multi-object applications. Additionally, training requires a complex data collection pipeline, including 3D reconstruction and mask annotation for each object. In this paper, we further improve the efficacy of DON with a simplified data collection and training regime that consistently yields higher precision and enables robust tracking of keypoints with less data. In particular, we focus on training with multi-object data instead of singulated objects, combined with a well-chosen augmentation scheme. We also propose an alternative loss formulation to the original pixelwise formulation that offers better results and is less sensitive to hyperparameters. Finally, we demonstrate the robustness and accuracy of our proposed framework on a real-world robotic grasping task.
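For context, here is a generic rendering of the original pixelwise contrastive formulation that the alternative loss above is measured against: matched pixel descriptors are pulled together while non-matches are pushed apart beyond a margin. This is a textbook version under assumed names, not the paper's code.

```python
import torch

def pixelwise_contrastive(desc_a, desc_b, matches, non_matches, margin=0.5):
    """desc_*: (C, H, W) descriptor maps; matches/non_matches: (N, 4) rows of
    (ya, xa, yb, xb) pixel pairs across the two images."""
    def gather(desc, ys, xs):
        return desc[:, ys, xs].t()  # (N, C)
    da = gather(desc_a, matches[:, 0], matches[:, 1])
    db = gather(desc_b, matches[:, 2], matches[:, 3])
    match_loss = (da - db).pow(2).sum(dim=1).mean()           # pull matches together
    na = gather(desc_a, non_matches[:, 0], non_matches[:, 1])
    nb = gather(desc_b, non_matches[:, 2], non_matches[:, 3])
    dist = (na - nb).norm(dim=1)
    non_match_loss = torch.clamp(margin - dist, min=0).pow(2).mean()  # push apart
    return match_loss + non_match_loss

m, nm = torch.randint(0, 32, (64, 4)), torch.randint(0, 32, (64, 4))
loss = pixelwise_contrastive(torch.randn(3, 32, 32), torch.randn(3, 32, 32), m, nm)
```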
Encouraged by the success of contrastive learning on image classification tasks, we propose a new self-supervised method for the structured regression task of 3D hand pose estimation. Contrastive learning makes use of unlabeled data through a loss formulation that encourages the learned feature representations to be invariant under any image transformation. For 3D hand pose estimation, invariance to appearance transformations such as color jitter is also desirable. However, the task requires equivariance under affine transformations such as rotation and translation. To address this, we propose an equivariant contrastive objective and demonstrate its effectiveness in the context of 3D hand pose estimation. We experimentally investigate the impact of invariant and equivariant contrastive objectives and show that learned equivariant features lead to better representations for the task of 3D hand pose estimation. Furthermore, we show that a standard ResNet of sufficient depth, trained on additional unlabeled data, attains improvements of up to 14.5% on FreiHAND and thus achieves state-of-the-art performance without any task-specific, specialized architectures. Code and models are available at https://ait.ethz.ch/projects/2021/peclr/.
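The difference between invariant and equivariant objectives can be shown in a few lines: instead of asking embeddings of two transformed views to match directly, each view's known geometric transform is undone on its embedding first, so the representation must transform with the input. The 2D embedding and rotation-only transform below are simplifications for illustration, not PeCLR's implementation.

```python
import torch
import torch.nn.functional as F

def equivariant_contrastive_loss(z1, z2, angles1, angles2, temperature=0.1):
    """z*: (B, 2) toy 2D embeddings; angles*: (B,) rotation applied to each view."""
    def unrotate(z, a):
        c, s = torch.cos(-a), torch.sin(-a)
        rot = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)
        return torch.einsum('bij,bj->bi', rot, z)  # apply the inverse rotation
    u1 = F.normalize(unrotate(z1, angles1), dim=1)
    u2 = F.normalize(unrotate(z2, angles2), dim=1)
    logits = u1 @ u2.t() / temperature             # NT-Xent style, positives on diagonal
    return F.cross_entropy(logits, torch.arange(len(u1)))

loss = equivariant_contrastive_loss(torch.randn(8, 2), torch.randn(8, 2),
                                    torch.rand(8) * 6.28, torch.rand(8) * 6.28)
```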
We study the problem of learning the feature pose, i.e., the scale and orientation, of an image region of interest. Despite its apparent simplicity, the problem is non-trivial: it is hard to obtain large-scale sets of image regions with explicit pose annotations that a model could directly learn from. To tackle this issue, we propose a self-supervised learning framework with a histogram-alignment technique. It generates pairs of image patches by random rescaling/rotation and then trains an estimator to predict their scale/orientation values so that their relative difference is consistent with the rescaling/rotation used. The estimator learns to predict a non-parametric histogram distribution of scale/orientation without any supervision. Experiments show that it significantly outperforms previous methods in scale/orientation estimation, and that it also improves image matching and 6-DoF camera pose estimation when our patch poses are incorporated into the matching process.
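A minimal sketch of the histogram-alignment objective, under assumed names: the network predicts a distribution over orientation bins for each patch, and the training target for one patch is the other patch's (detached) histogram cyclically shifted by the known relative rotation.

```python
import torch
import torch.nn.functional as F

def orientation_alignment_loss(hist_a, hist_b, rel_rot_bins):
    """hist_*: (B, K) predicted orientation logits for the two patches;
    rel_rot_bins: (B,) integer number of bins separating the two random rotations."""
    p_a = F.log_softmax(hist_a, dim=1)
    with torch.no_grad():
        target = F.softmax(hist_b, dim=1)
        # A rotation of the patch corresponds to a cyclic shift of its histogram.
        target = torch.stack([torch.roll(t, int(s)) for t, s in zip(target, rel_rot_bins)])
    return F.kl_div(p_a, target, reduction='batchmean')

loss = orientation_alignment_loss(torch.randn(4, 36), torch.randn(4, 36),
                                  torch.randint(0, 36, (4,)))
```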
Existing methods detect keypoints in a non-differentiable way, and therefore they cannot directly optimize keypoint locations through back-propagation. To address this issue, we present a differentiable keypoint detection module that outputs accurate sub-pixel keypoints. A reprojection loss is then proposed to directly optimize these sub-pixel keypoints, and a dispersity peak loss is presented for accurate keypoint regularization. We also extract descriptors in a sub-pixel manner, and they are trained with a stable neural reprojection error loss. Moreover, a lightweight network is designed for keypoint detection and descriptor extraction, which can run at 95 frames per second on a commercial GPU. On homography estimation, camera pose estimation, and visual (re-)localization tasks, the proposed method achieves performance on par with state-of-the-art approaches while considerably reducing the inference time.
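Differentiable sub-pixel keypoint extraction is commonly realized with a soft-argmax, sketched below: the expected coordinate under a softmax over a local score window is differentiable, so a reprojection loss can propagate back into the score map. This is the generic operator; the paper's module may differ in detail.

```python
import torch

def soft_argmax_2d(score_patch, temperature=0.1):
    """score_patch: (N, k, k) local windows around coarse keypoint maxima.
    Returns (N, 2) sub-pixel (x, y) offsets in [-(k-1)/2, (k-1)/2]."""
    n, k, _ = score_patch.shape
    probs = torch.softmax(score_patch.view(n, -1) / temperature, dim=1).view(n, k, k)
    coords = torch.arange(k, dtype=torch.float32) - (k - 1) / 2
    x = (probs.sum(dim=1) * coords).sum(dim=1)  # expectation over columns
    y = (probs.sum(dim=2) * coords).sum(dim=1)  # expectation over rows
    return torch.stack([x, y], dim=1)

offsets = soft_argmax_2d(torch.randn(10, 5, 5))
```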
Establishing a sparse set of keypoint correspondences between images is a fundamental task in many computer vision pipelines. Typically, this translates into a computationally expensive nearest-neighbor search, in which every keypoint descriptor of one image must be compared against all descriptors of the other image. To reduce the computational cost of the matching phase, we propose a deep feature extraction network capable of detecting complementary sets of keypoints at each image. Since only descriptors within the same set need to be compared across the different images, the matching-phase computational complexity decreases with the number of sets. We train our network to predict the keypoints and jointly compute the corresponding descriptors. In particular, in order to learn complementary sets of keypoints, we introduce a novel unsupervised loss that penalizes intersections between the different sets. Additionally, we propose a novel descriptor-based weighting scheme meant to penalize the detection of keypoints with non-discriminative descriptors. Through extensive experiments, we show that our feature extraction network, trained only on synthetically warped images and in a fully unsupervised manner, achieves competitive results on 3D reconstruction and re-localization tasks at a reduced matching complexity.
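One plausible form of the intersection penalty, assumed here for illustration: each set emits a keypoint probability map, and the product of every pair of maps is penalized so that the sets learn to fire at complementary locations.

```python
import torch

def set_intersection_penalty(set_maps):
    """set_maps: (S, H, W) per-set keypoint probability maps in [0, 1]."""
    s = set_maps.shape[0]
    penalty = 0.0
    for i in range(s):
        for j in range(i + 1, s):
            # High probability at the same pixel in two sets is penalized.
            penalty = penalty + (set_maps[i] * set_maps[j]).mean()
    return penalty / (s * (s - 1) / 2)

loss = set_intersection_penalty(torch.rand(4, 60, 80))
```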
Keypoint detection and description is a commonly used building block in computer vision systems, particularly for robotics and autonomous driving. However, most techniques to date have focused on standard cameras, with little consideration given to fisheye cameras, which are commonly used in urban driving and automated parking. In this paper, we propose a novel training and evaluation pipeline for fisheye images. We use SuperPoint as our baseline, a self-supervised keypoint detector and descriptor that has achieved state-of-the-art results on homography estimation. We introduce a fisheye adaptation pipeline to enable training on unrectified fisheye images. We evaluate performance on the HPatches benchmark and, by introducing a fisheye-based evaluation method for detection repeatability and descriptor matching, on the Oxford RobotCar dataset.
We introduce TempCLR, a new time-coherent contrastive learning approach for the structured regression task of 3D hand reconstruction. Unlike previous time-contrastive methods for hand pose estimation, our framework considers temporal consistency in its augmentation scheme and accounts for the differences between hand poses along the temporal direction. Our data-driven method leverages unlabeled videos and a standard CNN, without relying on synthetic data, pseudo-labels, or specialized architectures. Our approach improves the performance of fully supervised hand reconstruction methods by 15.9% and 7.6% on the HO-3D and FreiHAND datasets respectively, thus establishing new state-of-the-art performance. Finally, we demonstrate that our approach produces smoother hand reconstructions over time and is more robust to heavy occlusions than previous state-of-the-art works, which we show both quantitatively and qualitatively. Our code and models will be available at https://eth-ait.github.io/tempclr.
In this paper, we consider the challenging task of simultaneously locating and recovering multiple hands from a single 2D image. Previous studies either focus on single-hand reconstruction or solve this problem in a multi-stage way. Moreover, the conventional two-stage pipeline first detects hand regions and then estimates the 3D hand pose from each cropped patch. To reduce the computational redundancy in pre-processing and feature extraction, we propose a concise but efficient single-stage pipeline. Specifically, we design a multi-head auto-encoder structure for multi-hand reconstruction, where each head network shares the same feature map and outputs the hand center, pose, and texture, respectively. Besides, we adopt a weakly-supervised scheme to alleviate the burden of expensive 3D real-world data annotations. To this end, we propose a series of losses optimized by a stage-wise training scheme, where a multi-hand dataset with 2D annotations is generated from publicly available single-hand datasets. To further improve the accuracy of the weakly supervised model, we adopt several feature consistency constraints in both single- and multi-hand settings. Specifically, the keypoints of each hand estimated from local features should be consistent with the re-projected points predicted from global features. Extensive experiments on public benchmarks including FreiHAND, HO3D, InterHand2.6M, and RHD demonstrate that our method outperforms state-of-the-art model-based methods in both the weakly-supervised and fully-supervised manners. Code and models are available at https://github.com/zijinxuxu/smhr.
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation including the face, body, hand and foot is essential over conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in realtime. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve the accuracy. Our method is able to localize whole-body keypoints accurately and tracks humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
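As a rough illustration of pose-level non-maximum suppression, the sketch below greedily keeps the highest-confidence pose and suppresses others whose keypoints are too similar under an OKS-like distance. The parametric criterion in P-NMS is more elaborate, so treat this as an assumed simplification with illustrative parameters.

```python
import torch

def pose_nms(poses, scores, sim_threshold=0.7, sigma=10.0):
    """poses: (N, K, 2) keypoint coordinates; scores: (N,) pose confidences.
    Returns indices of the poses kept after suppression."""
    order = torch.argsort(scores, descending=True)
    keep = []
    while len(order) > 0:
        best = order[0]
        keep.append(best.item())
        if len(order) == 1:
            break
        d = (poses[order[1:]] - poses[best]).norm(dim=-1)        # (M, K) joint distances
        sim = torch.exp(-d ** 2 / (2 * sigma ** 2)).mean(dim=1)  # OKS-like similarity
        order = order[1:][sim < sim_threshold]                   # drop near-duplicates
    return keep

kept = pose_nms(torch.rand(6, 17, 2) * 100, torch.rand(6))
```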