Camera pose estimation is a key step in standard 3D reconstruction pipelines that operate on a dense set of images of a single object or scene. However, methods for pose estimation often fail when only a few images are available because they rely on the ability to robustly identify and match visual features between image pairs. While these methods can work robustly with dense camera views, capturing a large set of images can be time-consuming or impractical. We propose SparsePose for recovering accurate camera poses given a sparse set of wide-baseline images (fewer than 10). The method learns to regress initial camera poses and then iteratively refine them after training on a large-scale dataset of objects (Co3D: Common Objects in 3D). SparsePose significantly outperforms conventional and learning-based baselines in recovering accurate camera rotations and translations. We also demonstrate our pipeline for high-fidelity 3D reconstruction using only 5-9 images of an object.
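A minimal sketch of the regress-then-refine control flow described above, in Python. Both networks are placeholder stubs (regress_initial_poses and predict_update are invented names, not the paper's API), so only the loop structure is meaningful:

    import numpy as np

    rng = np.random.default_rng(0)

    def regress_initial_poses(images):
        # Stand-in for the learned initializer; one (R, t) per image.
        return [(np.eye(3), np.zeros(3)) for _ in images]

    def predict_update(images, poses):
        # Stand-in for the learned refiner; a trained network would predict
        # pose corrections from features of the images under current poses.
        return [(np.eye(3), 1e-3 * rng.standard_normal(3)) for _ in poses]

    def sparse_pose(images, num_iters=10):
        """Regress initial poses, then iteratively apply predicted updates."""
        poses = regress_initial_poses(images)
        for _ in range(num_iters):
            updates = predict_update(images, poses)
            poses = [(dR @ R, t + dt) for (R, t), (dR, dt) in zip(poses, updates)]
        return poses

    print(sparse_pose(images=[None] * 5)[0])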
We describe a data-driven method for inferring camera viewpoints given multiple images of an arbitrary object. This task is a core component of classic geometric pipelines such as SfM and SLAM, and is also a crucial preprocessing requirement for contemporary neural approaches (e.g. NeRF) to object reconstruction and view synthesis. In contrast to existing correspondence-driven methods, which perform poorly given sparse views, we propose a top-down, prediction-based approach for estimating camera viewpoints. Our key technical insight is the use of an energy-based formulation to represent distributions over relative camera rotations, which allows us to explicitly represent the multiple camera modes induced by object symmetries or views. Leveraging these relative predictions, we jointly estimate a consistent set of camera rotations from multiple images. We show that our approach outperforms state-of-the-art SfM and SLAM methods given sparse images, on both seen and unseen categories. Further, our probabilistic approach significantly outperforms directly regressing relative poses, suggesting that modeling multimodality is important for coherent joint reconstruction. We demonstrate that our system can be a stepping stone toward in-the-wild reconstruction from multi-view datasets. A project page with code and videos can be found at https://jasonyzhang.com/relpose.
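As a sketch of the energy-based formulation, the snippet below scores sampled rotation hypotheses with an energy function and normalizes the scores into a distribution over relative rotations; stub_energy is a stand-in for the learned network, and the sampling density is an arbitrary choice:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def stub_energy(feats, R):
        # Placeholder for the learned energy E(I1, I2, R): low energy means
        # a plausible relative rotation. Scoring distance to the identity
        # just lets the sketch run without trained weights.
        return float(np.linalg.norm(R - np.eye(3)))

    def rotation_distribution(feats, num_hypotheses=512):
        """Turn per-hypothesis energies into a normalized distribution."""
        hyps = Rotation.random(num_hypotheses, random_state=0)
        energies = np.array([stub_energy(feats, hyps[i].as_matrix())
                             for i in range(num_hypotheses)])
        log_z = np.logaddexp.reduce(-energies)   # log partition function
        probs = np.exp(-energies - log_z)        # softmax(-E)
        return hyps, probs

    hyps, probs = rotation_distribution(feats=None)
    mode = hyps[int(np.argmax(probs))]
    print("most likely relative rotation (quaternion):", mode.as_quat())

Representing a full distribution rather than a single regressed rotation is what lets the model keep several modes alive for symmetric objects.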
We present an approach for reconstructing a scene's planar surfaces from images with limited overlap. This reconstruction task is challenging since it requires jointly reasoning about single-image 3D reconstruction, correspondence between images, and the relative camera poses between images. Past work has proposed optimization-based approaches. We introduce a simpler method, the PlaneFormer, which uses a transformer applied to 3D-aware plane tokens to perform 3D reasoning. Our experiments show that our approach is substantially more effective than prior work, and that several 3D-specific design decisions are crucial for its success.
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
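A simplified sketch of one such refinement step under a disentangled parameterization: the predicted rotation is applied about the object center, while translation is expressed as image-plane offsets (vx, vy) plus a log-ratio of depths vz. Variable names and the focal length are illustrative, not DeepIM's exact interface:

    import numpy as np

    def apply_pose_delta(R, t, R_delta, v, focal=320.0):
        """Compose a predicted relative transform onto the current pose."""
        vx, vy, vz = v
        z_new = t[2] / np.exp(vz)                   # update depth by log-ratio
        x_new = (vx / focal + t[0] / t[2]) * z_new  # shift reprojected center
        y_new = (vy / focal + t[1] / t[2]) * z_new
        R_new = R_delta @ R                         # rotate about object center
        return R_new, np.array([x_new, y_new, z_new])

    # Toy iteration: identity deltas leave the pose fixed.
    R, t = np.eye(3), np.array([0.1, -0.05, 1.0])
    for _ in range(4):
        R, t = apply_pose_delta(R, t, np.eye(3), np.zeros(3))
    print(R, t)

Decoupling rotation from translation keeps each network output in a well-scaled range, which is the stated motivation for the disentangled representation.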
We introduce an approach for recovering the 6D pose of multiple known objects in a scene captured by a set of input images with unknown camera viewpoints. First, we present a single-view single-object 6D pose estimation method, which we use to generate 6D object pose hypotheses. Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene. Our approach explicitly handles object symmetries, does not require depth measurements, is robust to missing or incorrect object hypotheses, and automatically recovers the number of objects in the scene. Third, we develop a method for global scene refinement given multiple object hypotheses and their correspondences across views. This is achieved by solving an object-level bundle adjustment problem that refines the poses of cameras and objects to minimize the reprojection error in all views. We demonstrate that the proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets. Code and pre-trained models are available on the project webpage.
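The object-level bundle adjustment minimizes reprojection error jointly over camera and object poses. A sketch of the residual under a standard pinhole model follows; the data layout and names are assumptions for illustration, not CosyPose's API:

    import numpy as np

    def reprojection_residuals(K, cam_poses, obj_poses, obj_points, observations):
        """Project each object's 3D points into every view that observes it
        and compare against the detected 2D locations."""
        res = []
        for cam_id, obj_id, pts_2d in observations:
            R_c, t_c = cam_poses[cam_id]       # world -> camera
            R_o, t_o = obj_poses[obj_id]       # object -> world
            pts_w = obj_points[obj_id] @ R_o.T + t_o
            pts_c = pts_w @ R_c.T + t_c
            proj = pts_c @ K.T
            proj = proj[:, :2] / proj[:, 2:3]  # perspective divide
            res.append((proj - pts_2d).ravel())
        return np.concatenate(res)

Residuals of this form can be handed to a standard nonlinear least-squares solver (e.g. scipy.optimize.least_squares) over the stacked camera and object pose parameters.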
While object reconstruction has made great strides in recent years, current methods typically require densely captured images and/or known camera poses, and generalize poorly to novel object categories. To step toward object reconstruction in the wild, this work explores reconstructing general real-world objects from a few images without known camera poses or object categories. The crux of our work is solving two fundamental 3D vision problems -- shape reconstruction and pose estimation -- in a unified approach. Our approach captures the synergies of these two problems: reliable camera pose estimation gives rise to accurate shape reconstruction, and the accurate reconstruction, in turn, induces robust correspondence between different views and facilitates pose estimation. Our method FORGE predicts 3D features from each view and leverages them in conjunction with the input images to establish cross-view correspondence for estimating relative camera poses. The 3D features are then transformed by the estimated poses into a shared space and are fused into a neural radiance field. The reconstruction results are rendered by volume rendering techniques, enabling us to train the model without 3D shape ground-truth. Our experiments show that FORGE reliably reconstructs objects from five views. Our pose estimation method outperforms existing ones by a large margin. The reconstruction results under predicted poses are comparable to the ones using ground-truth poses. The performance on novel testing categories matches the results on categories seen during training. Project page: https://ut-austin-rpl.github.io/FORGE/
We introduce ViewNeRF, a Neural Radiance Field-based viewpoint estimation method that learns to predict category-level viewpoints directly from images during training. While NeRF is usually trained with ground-truth camera poses, multiple extensions have been proposed to reduce the need for this expensive supervision. Nonetheless, most of these methods still struggle in complex settings with large camera movements, and are restricted to single scenes, i.e. they cannot be trained on a collection of scenes depicting the same object category. To address these issues, our method uses an analysis by synthesis approach, combining a conditional NeRF with a viewpoint predictor and a scene encoder in order to produce self-supervised reconstructions for whole object categories. Rather than focusing on high fidelity reconstruction, we target efficient and accurate viewpoint prediction in complex scenarios, e.g. 360° rotation on real data. Our model shows competitive results on synthetic and real datasets, both for single scenes and multi-instance collections.
We introduce a dataset of 998 3D models of everyday tabletop objects along with 847,000 of their real-world RGB and depth images. Accurate annotation of camera poses and object poses for each image is performed in a semi-automated fashion to facilitate the use of the dataset for a multitude of 3D applications such as shape reconstruction, object pose estimation, shape retrieval, and more. We focus on 3D reconstruction, which has lacked an appropriate real-world benchmark, and demonstrate that our dataset can fill that gap. The entire annotated dataset, along with the source code of the annotation tools and evaluation baselines, is available at http://www.ocrtoc.org/3d-reconstruction.html.
We study the complex task of object-centric 3D understanding from a single RGB-D observation. Since this is an ill-posed problem, existing methods suffer in complex multi-object scenarios with occlusions, both for 3D shape reconstruction and for 6D pose and size estimation. We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, and 6D object pose and size estimation. Key to ShAPO is a single-shot pipeline that regresses shape, appearance, and pose latent codes along with the mask of each object instance, which are then further refined in a sparse-to-dense fashion. A novel disentangled database of shape and appearance priors is first learned to embed objects in their respective shape and appearance spaces. We also propose a novel octree-based differentiable optimization step that allows us to further improve object shape, pose, and appearance in an analysis-by-synthesis fashion. Our novel joint implicit textured object representation allows us to accurately identify and reconstruct novel unseen objects without access to their 3D meshes. Through extensive experiments, we show that our method, trained on simulated indoor scenes, accurately regresses the shape, appearance, and pose of novel objects in the real world with minimal fine-tuning. Our method significantly outperforms all baselines on the NOCS dataset, with an 8% absolute improvement in mAP for 6D pose estimation. Project page: https://zubair-irshad.github.io/projects/shapo.html
This paper studies the challenging problem of two-view 3D reconstruction in a rigorous sparse-view configuration, which suffers from insufficient correspondences in the input image pairs for camera pose estimation. We present a novel Neural One-PlanE RANSAC framework (termed NOPE-SAC for short) that excels at learning one-plane pose hypotheses from 3D plane correspondences. Building on top of a siamese plane detection network, our NOPE-SAC first generates putative plane correspondences with a coarse initial pose. It then feeds the learned 3D plane parameters of the correspondences into shared MLPs to estimate one-plane camera pose hypotheses, which are subsequently reweighted in a RANSAC manner to obtain the final camera pose. Because the neural one-plane pose minimizes the number of plane correspondences needed for adaptive pose hypothesis generation, it enables stable pose voting and reliable pose refinement from only a few plane correspondences for sparse-view inputs. In our experiments, we demonstrate that NOPE-SAC significantly improves camera pose estimation for two-view inputs with severe viewpoint changes, setting several new state-of-the-art performances on two challenging benchmarks, i.e., MatterPort3D and ScanNet, for sparse-view 3D reconstruction. The source code is released at https://github.com/IceTTTb/NopeSAC for reproducible research.
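A sketch of the pose-voting step: each pose hypothesis is scored by how many matched 3D planes it brings into agreement, using the standard rigid transform of plane parameters. The learned hypothesis generation and reweighting networks are not reproduced here:

    import numpy as np

    def plane_consistency(R, t, planes1, planes2,
                          ang_tol=np.deg2rad(10), off_tol=0.1):
        """Count inlier plane matches for a pose hypothesis (R, t).
        Planes are (unit normal n, offset d) with n . x = d; a plane in
        view 1 maps to view 2 as n' = R n, d' = d + n' . t."""
        inliers = 0
        for (n1, d1), (n2, d2) in zip(planes1, planes2):
            n_pred = R @ n1
            d_pred = d1 + n_pred @ t
            ang = np.arccos(np.clip(n_pred @ n2, -1.0, 1.0))
            if ang < ang_tol and abs(d_pred - d2) < off_tol:
                inliers += 1
        return inliers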
We present a simple baseline for directly estimating the relative pose (rotation and translation, including scale) between two images. Deep methods have recently shown strong progress but often require complex or multi-stage architectures. We show that a handful of modifications can be applied to a Vision Transformer (ViT) to bring its computations close to the eight-point algorithm. This inductive bias enables a simple method that is competitive in multiple settings, often substantially improving over baselines, with particularly strong performance gains in limited-data regimes.
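For reference, the classical algorithm the ViT is nudged toward: a minimal eight-point estimate of the fundamental matrix from pixel correspondences (Hartley's coordinate normalization is omitted for brevity, but should be applied in practice):

    import numpy as np

    def eight_point(x1, x2):
        """Estimate F from >= 8 correspondences x1 <-> x2, each (N, 2),
        so that x2_h^T F x1_h = 0 in homogeneous coordinates."""
        n = x1.shape[0]
        h1 = np.hstack([x1, np.ones((n, 1))])
        h2 = np.hstack([x2, np.ones((n, 1))])
        # Each correspondence gives one linear constraint on the 9 entries of F.
        A = np.stack([np.kron(p2, p1) for p1, p2 in zip(h1, h2)])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        # Enforce rank 2, as required of a fundamental matrix.
        U, S, Vt = np.linalg.svd(F)
        S[2] = 0.0
        return U @ np.diag(S) @ Vt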
Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging. Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes. However, these methods still face difficulties during dramatic camera movement. We tackle this challenging problem by incorporating undistorted monocular depth priors. These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames. This constraint is achieved using our proposed novel loss functions. Experiments on real-world indoor and outdoor scenes show that our method can handle challenging camera trajectories and outperforms existing methods in terms of novel view rendering quality and pose estimation accuracy.
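The scale/shift correction of a monocular depth map has a standard closed-form least-squares solution; the sketch below illustrates the idea as an alignment to reference depths, not the paper's exact training-time procedure:

    import numpy as np

    def align_scale_shift(d_mono, d_ref, mask=None):
        """Find s, t minimizing || s * d_mono + t - d_ref ||^2 over valid
        pixels and return the corrected depth map."""
        if mask is None:
            mask = np.isfinite(d_ref)
        x, y = d_mono[mask], d_ref[mask]
        A = np.stack([x, np.ones_like(x)], axis=1)
        (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
        return s * d_mono + t, s, t

    # Toy check: recover a known scale and shift.
    rng = np.random.default_rng(0)
    d = rng.uniform(1.0, 5.0, size=(4, 4))
    corrected, s, t = align_scale_shift(d, 2.0 * d + 0.5)
    print(s, t)  # ~2.0, ~0.5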
Recovering the spatial layout of the cameras and the geometry of the scene from extreme-view images is a long-standing challenge in computer vision. Prevailing 3D reconstruction algorithms typically adopt the image-matching paradigm and presume that a portion of the scene is co-visible across images, yielding poor performance when there is little overlap between the inputs. In contrast, humans can associate the visible parts in one image with the corresponding invisible components in another image through prior knowledge of shape. Inspired by this fact, we present a novel concept called virtual correspondences (VCs). A VC is a pair of pixels from two images whose camera rays intersect in 3D. Like classic correspondences, VCs conform to epipolar geometry; unlike classic correspondences, VCs do not need to be co-visible across views. Therefore, VCs can be established and exploited even when the images do not overlap. We introduce a method to find virtual correspondences based on humans in the scene. We show how VCs can be seamlessly integrated with classic bundle adjustment to recover camera poses across extreme views. Experiments show that our method significantly outperforms state-of-the-art camera pose estimation methods in challenging scenarios, and is comparable in the traditional densely captured setup. Our approach also unleashes the potential of multiple downstream tasks, such as scene reconstruction from multi-view stereo and novel view synthesis in extreme-view scenarios.
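The defining geometric test for a virtual correspondence is that the two pixels' camera rays (nearly) intersect in 3D. A minimal check via line-line distance, ignoring ray positivity for brevity:

    import numpy as np

    def rays_intersect(o1, d1, o2, d2, tol=1e-3):
        """Minimum distance between two 3D lines (origin o, unit direction
        d); a virtual correspondence requires this to be ~0."""
        n = np.cross(d1, d2)
        nn = np.linalg.norm(n)
        if nn < 1e-9:  # parallel rays
            dist = np.linalg.norm(np.cross(o2 - o1, d1))
        else:
            dist = abs(np.dot(o2 - o1, n)) / nn
        return dist < tol, dist

    hit, dist = rays_intersect(
        np.zeros(3), np.array([0.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 0.0]), np.array([0.0, -1.0, 1.0]) / np.sqrt(2))
    print(hit, dist)  # True, 0.0 -- the rays meet at (0, 0, 1)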
Neural rendering has received tremendous attention since the advent of Neural Radiance Fields (NeRF), and has considerably pushed the state of the art in novel view synthesis. The recent focus has been on models that overfit to a single scene, and the few attempts to learn models that can synthesize novel views of unseen scenes mostly consist of combining deep convolutional features with a NeRF-like model. We propose a different paradigm that requires neither deep features nor NeRF-like volume rendering. Our method is able to predict the color of a target ray directly from a collection of patches sampled from the scene. We first leverage epipolar geometry to extract patches along the epipolar lines of each reference view. Each patch is linearly projected into a 1D feature vector, and a sequence of transformers processes the collection. For positional encoding, we parameterize rays as in light field representations, with the crucial difference that the coordinates are canonicalized with respect to the target ray, which makes our method independent of the reference frame and improves generalization. We show that our approach outperforms the state of the art in novel view synthesis of unseen scenes, even when trained with considerably less data than prior work.
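A sketch of the canonicalization idea: express each reference ray in a coordinate frame attached to the target ray, so the representation no longer depends on the world frame. The frame construction below is one common choice and an assumption on our part; the paper's exact light-field parameterization may differ:

    import numpy as np

    def canonicalize_ray(target_o, target_d, ref_o, ref_d):
        """Map a reference ray (origin, direction) into an orthonormal
        frame whose z-axis is the target ray direction."""
        z = target_d / np.linalg.norm(target_d)
        up = np.array([0.0, 1.0, 0.0]) if abs(z[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
        x = np.cross(up, z)
        x /= np.linalg.norm(x)
        y = np.cross(z, x)
        W = np.stack([x, y, z])  # rows map world vectors into the ray frame
        return W @ (ref_o - target_o), W @ ref_d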
We introduce VideoPose, a Transformer-based 6D object pose estimation framework comprising an end-to-end attention-based architecture that attends to previous frames in order to estimate accurate 6D object poses in videos. Our approach leverages the temporal information in a video sequence for pose refinement while remaining computationally efficient and robust. Compared to existing methods, our architecture is able to capture and reason over long-range dependencies efficiently, iteratively refining poses over video sequences. Experimental evaluation on the YCB-Video dataset shows that our approach is on par with state-of-the-art Transformer methods and performs significantly better than CNN-based approaches. Further, at a speed of 33 fps, it is also more efficient and therefore applicable to a variety of applications that require real-time object pose estimation. Training code and pretrained models are available at https://github.com/ApoorvaBeedu/VideoPose
We present ObjectMatch, a semantic and object-centric camera pose estimation for RGB-D SLAM pipelines. Modern camera pose estimators rely on direct correspondences of overlapping regions between frames; however, they cannot align camera frames with little or no overlap. In this work, we propose to leverage indirect correspondences obtained via semantic object identification. For instance, when an object is seen from the front in one frame and from the back in another frame, we can provide additional pose constraints through canonical object correspondences. We first propose a neural network to predict such correspondences on a per-pixel level, which we then combine in our energy formulation with state-of-the-art keypoint matching solved with a joint Gauss-Newton optimization. In a pairwise setting, our method improves registration recall of state-of-the-art feature matching from 77% to 87% overall and from 21% to 52% in pairs with 10% or less inter-frame overlap. In registering RGB-D sequences, our method outperforms cutting-edge SLAM baselines in challenging, low frame-rate scenarios, achieving more than 35% reduction in trajectory error in multiple scenes.
Figure 1: NeRF from one or few images. We present pixelNeRF, a learning framework that predicts a Neural Radiance Field (NeRF) representation from a single (top) or few posed images (bottom). PixelNeRF can be trained on a set of multi-view images, allowing it to generate plausible novel view synthesis from very few input images without test-time optimization (bottom left). In contrast, NeRF has no generalization capabilities and performs poorly when only three input views are available (bottom right).
A classical problem in computer vision is to infer a 3D scene representation from a few images that can be used to render novel views at interactive rates. Previous work focuses on reconstructing pre-defined 3D representations, e.g. textured meshes, or implicit representations, e.g. radiance fields, and often requires input images with precise camera poses and long processing times for each novel scene. In this work, we propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new region, infers a "set-latent scene representation", and synthesizes novel views, all in a single feed-forward pass. To compute the scene representation, we propose a generalization of the Vision Transformer to sets of images, enabling global information integration and thus 3D reasoning. An efficient decoder transformer parameterizes the light field by attending into the scene representation to render novel views. Learning is supervised end-to-end by minimizing a novel-view reconstruction error. We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets, including a new dataset created for this paper. Further, we demonstrate that SRT supports interactive visualization and semantic segmentation of real-world outdoor environments using Street View imagery.
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem in which only a few context views sparsely cover a scene or an object. The goal is to predict novel viewpoints of the scene, which requires learning priors. The current state of the art is based on Neural Radiance Fields (NeRF), and while these methods achieve impressive results, they suffer from long training times, since they need to evaluate millions of 3D point samples through a neural network for each image. We propose a 2D-only method that maps multiple context views to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer. The codebook is used to embed individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that allows us to use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our method is competitive with NeRF-based approaches while not reasoning explicitly in 3D, and it is faster to train.
We introduce a free-viewpoint rendering method -- HumanNeRF -- that works on a given monocular video of a human performing complex body motions, e.g. a video from YouTube. Our method enables pausing the video at any frame and rendering the subject from arbitrary new camera viewpoints, or even a full 360-degree camera path, for that particular frame and body pose. This task is particularly challenging, as it requires synthesizing photorealistic details of the body as seen from camera angles that may not exist in the input video, as well as synthesizing fine details such as cloth folds and facial appearance. Our method optimizes a volumetric representation of the person in a canonical T-pose, in concert with a motion field that maps the estimated canonical representation to every frame of the video via backward warps. The motion field is decomposed into skeletal rigid and non-rigid motions, produced by deep networks. We show significant performance improvements over prior work, as well as compelling examples of free-viewpoint renderings from monocular video of moving humans in challenging, uncontrolled capture scenarios.