We introduce a camera relocalization pipeline that combines absolute pose regression (APR) with direct feature matching. By incorporating exposure-adaptive novel view synthesis, our method successfully handles the photometric distortions in outdoor environments that existing photometric-based methods fail to cope with. With domain-invariant feature matching, our solution improves pose regression accuracy through semi-supervised learning on unlabelled data. Specifically, the pipeline consists of two components: a novel view synthesizer and DFNet. The former synthesizes novel views that compensate for exposure changes, while the latter regresses camera poses and extracts robust features that bridge the domain gap between real and synthetic images. In addition, we introduce an online synthetic data generation scheme. We show that these approaches effectively enhance camera pose estimation in both indoor and outdoor scenes. As a result, our method outperforms existing single-image APR methods by as much as 56%, achieving accuracy comparable to 3D structure-based methods.
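The semi-supervised idea above can be sketched in a few lines: regress a pose from a real image, render a view at that pose, and minimize the distance between domain-invariant feature maps of the two images. The snippet below is a minimal illustration only; `dfnet` and `renderer` are assumed interfaces (a pose-plus-feature network and an exposure-adaptive novel view synthesizer), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def direct_feature_matching_loss(feat_real, feat_synth):
    """Cosine distance between per-pixel feature vectors of the real image
    and of the view rendered at the regressed pose (both B x C x H x W)."""
    real = F.normalize(feat_real, dim=1)
    synth = F.normalize(feat_synth, dim=1)
    return (1.0 - (real * synth).sum(dim=1)).mean()

def semi_supervised_step(dfnet, renderer, img_unlabelled):
    # 1) regress a pose and extract features from the real image
    pose, feat_real = dfnet(img_unlabelled)
    # 2) synthesize the view at that pose (exposure-adaptive NVS in the paper)
    img_synth = renderer(pose)
    _, feat_synth = dfnet(img_synth)
    # 3) the feature residual is the self-supervised training signal
    return direct_feature_matching_loss(feat_real, feat_synth)
```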
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem in which only a few context views sparsely cover a scene or an object. The goal is to predict novel views of the scene, which requires learned priors. The current state of the art is based on Neural Radiance Fields (NeRF), and while these methods achieve impressive results, they suffer from long training times because they must evaluate millions of 3D point samples through a neural network for every image. We propose a 2D-only method that maps multiple context views to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer model. The codebook embeds individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that lets us use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our method is competitive with NeRF-based approaches without reasoning explicitly in 3D, and it is faster to train.
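A rough sketch of such a two-stage, 2D-only pipeline is given below: a codebook quantizes each context view into discrete latent tokens, and a transformer mixes those tokens with a query-pose embedding to predict the tokens of the novel view. All module sizes and names here are illustrative assumptions, not the paper's architecture or released code (in particular, the branching attention mechanism is omitted).

```python
import torch
import torch.nn as nn

class TwoStageViewSynth(nn.Module):
    """Minimal sketch of a codebook + transformer pipeline: stage 1 compresses
    images into discrete latent tokens, stage 2 predicts the tokens of the
    novel view from the context tokens and a query camera pose."""
    def __init__(self, codebook_size=1024, dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, dim, 8, 8), nn.Flatten(2))  # toy image encoder
        self.codebook = nn.Embedding(codebook_size, dim)
        self.pose_embed = nn.Linear(12, dim)              # flattened 3x4 camera pose
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=6)
        self.to_logits = nn.Linear(dim, codebook_size)

    def forward(self, context_images, query_pose):
        # quantize each context view to its nearest codebook entries
        feats = torch.cat([self.encoder(im).transpose(1, 2) for im in context_images], dim=1)
        book = self.codebook.weight[None].expand(feats.size(0), -1, -1)
        tokens = self.codebook(torch.cdist(feats, book).argmin(-1))   # (B, N, dim)
        # append the query-pose embedding and let the transformer mix information globally
        seq = torch.cat([tokens, self.pose_embed(query_pose).unsqueeze(1)], dim=1)
        out = self.transformer(seq)
        return self.to_logits(out)   # logits over codebook entries of the novel view
```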
While object reconstruction has made great strides in recent years, current methods typically require densely captured images and/or known camera poses, and generalize poorly to novel object categories. To step toward object reconstruction in the wild, this work explores reconstructing general real-world objects from a few images without known camera poses or object categories. The crux of our work is solving two fundamental 3D vision problems -- shape reconstruction and pose estimation -- in a unified approach. Our approach captures the synergies of these two problems: reliable camera pose estimation gives rise to accurate shape reconstruction, and the accurate reconstruction, in turn, induces robust correspondence between different views and facilitates pose estimation. Our method FORGE predicts 3D features from each view and leverages them in conjunction with the input images to establish cross-view correspondence for estimating relative camera poses. The 3D features are then transformed by the estimated poses into a shared space and are fused into a neural radiance field. The reconstruction results are rendered by volume rendering techniques, enabling us to train the model without 3D shape ground-truth. Our experiments show that FORGE reliably reconstructs objects from five views. Our pose estimation method outperforms existing ones by a large margin. The reconstruction results under predicted poses are comparable to the ones using ground-truth poses. The performance on novel testing categories matches the results on categories seen during training. Project page: https://ut-austin-rpl.github.io/FORGE/
Absolute pose regressor (APR) networks are trained to estimate the camera pose of a given captured image. They compute a latent image representation from which the camera position and orientation are regressed. Compared with structure-based localization schemes, which deliver state-of-the-art accuracy, APRs offer a different trade-off between localization accuracy, runtime, and memory. In this work, we introduce Camera Pose Auto-Encoders (PAEs), multilayer perceptrons trained with a teacher-student approach to encode camera poses, using APRs as their teachers. We show that the resulting latent pose representations can closely reproduce APR performance and demonstrate their effectiveness on related tasks. In particular, we propose a lightweight test-time optimization in which the closest train poses are encoded and used to refine the camera position estimate. This procedure achieves a new state-of-the-art position accuracy for APRs on both the Cambridge Landmarks and 7Scenes benchmarks. We also show that train images can be reconstructed from the learned pose encodings, paving the way for integrating the visual information of the train set at a low memory cost. Our code and pre-trained models are available at https://github.com/yolish/camera-pose-auto-coders.
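The pose auto-encoder itself is just a small MLP trained by distillation; the sketch below illustrates the idea together with a toy version of the test-time position refinement. The network widths, the 7-D pose parameterization (position plus quaternion), and the simple nearest-neighbor blending are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseAutoEncoder(nn.Module):
    """Sketch of a camera-pose auto-encoder: an MLP mapping a 7-D pose
    (x, y, z + unit quaternion) to a latent that imitates the APR teacher's
    latent image representation (teacher-student distillation)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(7, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, latent_dim))

    def forward(self, pose):
        return self.net(pose)

def distillation_loss(pae, apr_latent, pose):
    # the student latent (from the pose alone) should match the teacher latent (from the image)
    return F.mse_loss(pae(pose), apr_latent)

def refine_position(query_pos, train_poses, pae, query_latent, k=3):
    """Toy test-time refinement: encode the train poses, pick the k nearest in
    latent space, and blend their positions with the APR position estimate."""
    with torch.no_grad():
        d = torch.cdist(query_latent[None], pae(train_poses))[0]
        nearest = train_poses[d.topk(k, largest=False).indices][:, :3]
    return 0.5 * query_pos + 0.5 * nearest.mean(dim=0)
```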
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
One of the key criticisms of deep learning is that large amounts of expensive and hard-to-acquire training data are required to train models with high performance and good generalization. Focusing on the task of monocular camera pose estimation via scene coordinate regression (SCR), we describe a new method for domain adaptation of camera pose estimation networks (DANCE) that enables training such models without access to any labels on the target task. DANCE requires only unlabeled images (with no known poses, ordering, or scene coordinate labels) and a 3D representation of the space (e.g., a scanned point cloud), both of which can be captured with minimal effort using off-the-shelf commodity hardware. DANCE renders labeled synthetic images from the 3D model and bridges the inevitable domain gap between synthetic and real images by applying unsupervised image-level domain adaptation techniques (unpaired image-to-image translation). When tested on real images, the SCR model trained with DANCE achieves performance comparable to its fully supervised counterpart (both using PnP-RANSAC for the final pose estimation) at a fraction of the cost. Our code and dataset are available at https://github.com/jacklangerman/dance
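The final pose estimation step mentioned above (PnP-RANSAC over dense scene-coordinate predictions) can be illustrated with OpenCV; the sketch below assumes the SCR network has already produced an H x W x 3 scene-coordinate map and that the camera intrinsics K are known.

```python
import numpy as np
import cv2

def pose_from_scene_coordinates(scene_coords, K, subsample=4):
    """Given an H x W x 3 map of predicted scene coordinates (the SCR output)
    and the camera intrinsics K, recover the camera pose with PnP-RANSAC."""
    h, w, _ = scene_coords.shape
    ys, xs = np.mgrid[0:h:subsample, 0:w:subsample]
    pts2d = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float64)
    pts3d = scene_coords[ys, xs].reshape(-1, 3).astype(np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, None, reprojectionError=8.0, iterationsCount=200)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # world-to-camera rotation
    return R, tvec                       # camera extrinsics
```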
Neural Radiance Fields (NeRF) have achieved great success in representing complex 3D scenes with high-resolution detail and efficient memory. However, current NeRF-based pose estimators have no initial pose prediction and are prone to local optima during optimization. In this paper, we present LATITUDE: global localization with a Truncated Dynamic Low-pass Filter, which introduces a two-stage localization mechanism into city-scale NeRF. In the recognition stage, we train a regressor on images generated by trained NeRFs, which provides an initial value for global localization. In the pose optimization stage, we minimize the residual between the observed image and the rendered image by directly optimizing the pose on the tangent plane. To avoid converging to local optima, we introduce a Truncated Dynamic Low-pass Filter (TDLF) for coarse-to-fine pose registration. We evaluate our method on both synthetic and real-world data and show its potential applications for high-precision navigation in large-scale city scenes. Code and data will be made publicly available at https://github.com/jike5/latitude.
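The pose-optimization stage can be pictured as gradient descent on a photometric residual with the pose update parameterized in the tangent space of the initial estimate. The sketch below is a simplified stand-in: `render_fn` is an assumed differentiable renderer (e.g., a trained NeRF), and the paper's Truncated Dynamic Low-pass Filter for coarse-to-fine registration is not included.

```python
import torch

def skew(v):
    """3-vector -> 3x3 skew-symmetric matrix (keeps gradients w.r.t. v)."""
    zero = torch.zeros((), dtype=v.dtype, device=v.device)
    return torch.stack([
        torch.stack([zero, -v[2],  v[1]]),
        torch.stack([ v[2], zero, -v[0]]),
        torch.stack([-v[1],  v[0], zero])])

def refine_pose(render_fn, image, R0, t0, steps=100, lr=1e-2):
    """Photometric pose refinement sketch: optimize a 6-D tangent-space update
    (rotation + translation) around the initial pose so that the rendered image
    matches the observation."""
    delta = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        R = torch.matrix_exp(skew(delta[:3])) @ R0       # rotation update on the tangent plane
        t = t0 + delta[3:]
        loss = (render_fn(R, t) - image).abs().mean()    # photometric residual
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return torch.matrix_exp(skew(delta[:3])) @ R0, t0 + delta[3:]
```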
We present IM2NERF, a learning framework that predicts a continuous neural object representation from a single input image in the wild, supervised only by segmentation outputs from off-the-shelf recognition methods. The standard approach to constructing neural radiance fields exploits multi-view consistency and requires many calibrated views of a scene, a requirement that cannot be met when learning from large-scale image data in the wild. We take a step towards addressing this shortcoming by introducing a model that encodes the input image into a code containing the object shape, an object appearance code, and an estimated camera pose from which the object image was captured. Our model conditions a NeRF on the predicted object representation and uses volume rendering to generate images from novel views. We train the model end-to-end on a large collection of input images. Since the model is provided with single-view images only, the problem is highly under-constrained; therefore, in addition to a reconstruction loss on the synthesized input view, we use an auxiliary adversarial loss on the novel views. Furthermore, we exploit object symmetry and cycle camera pose consistency. We conduct extensive quantitative and qualitative experiments on the ShapeNet dataset and qualitative experiments on the Open Images dataset. We show that, in all cases, IM2NERF achieves state-of-the-art performance for novel view synthesis from single-view images in the wild.
Figure 1: NeRF from one or few images. (Panels: input views of a held-out scene and rendered novel views, comparing NeRF and pixelNeRF.) We present pixelNeRF, a learning framework that predicts a Neural Radiance Field (NeRF) representation from a single (top) or few posed images (bottom). PixelNeRF can be trained on a set of multi-view images, allowing it to generate plausible novel view synthesis from very few input images without test-time optimization (bottom left). In contrast, NeRF has no generalization capabilities and performs poorly when only three input views are available (bottom right).
This paper proposes a generalizable, end-to-end deep learning-based method for relative pose regression between two images. Given two images of the same scene captured from different viewpoints, our algorithm predicts the relative rotation and translation between the two respective cameras. Despite recent progress in the field, current deep-based methods exhibit only limited generalization to scenes not seen in training. Our approach introduces a network architecture that extracts a grid of coarse features for each input image using the pre-trained LoFTR network. It subsequently relates corresponding features in the two images, and finally uses a convolutional network to recover the relative rotation and translation between the respective cameras. Our experiments indicate that the proposed architecture can generalize to novel scenes, obtaining higher accuracy than existing deep-learning-based methods in various settings and datasets, in particular with limited training data.
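A minimal sketch of this kind of regressor is shown below: two coarse feature grids (assumed to come from a frozen LoFTR backbone) are related by dense dot-product correlation, and a small convolutional head regresses a unit quaternion and a translation. The layer sizes and the fixed 32x32 grid are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPoseRegressor(nn.Module):
    """Sketch: coarse per-image feature grids (e.g. from a frozen LoFTR backbone,
    assumed to be provided) are related by dot-product correlation, and a small
    CNN regresses the relative rotation (quaternion) and translation."""
    def __init__(self, grid=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(grid * grid, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 7))                          # 4 quaternion + 3 translation

    def forward(self, feat_a, feat_b):
        # feat_*: (B, C, H, W) coarse grids with H = W = 32 assumed here
        B, C, H, W = feat_a.shape
        fa, fb = feat_a.flatten(2), feat_b.flatten(2)    # (B, C, HW)
        corr = torch.einsum('bci,bcj->bij', fa, fb) / C ** 0.5   # all-pairs correlation
        corr = corr.view(B, H * W, H, W)                 # matches to A-cells as channels
        out = self.head(corr)
        q = F.normalize(out[:, :4], dim=1)               # unit quaternion
        t = out[:, 4:]
        return q, t
```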
We introduce ViewNeRF, a Neural Radiance Field-based viewpoint estimation method that learns to predict category-level viewpoints directly from images during training. While NeRF is usually trained with ground-truth camera poses, multiple extensions have been proposed to reduce the need for this expensive supervision. Nonetheless, most of these methods still struggle in complex settings with large camera movements, and are restricted to single scenes, i.e. they cannot be trained on a collection of scenes depicting the same object category. To address these issues, our method uses an analysis by synthesis approach, combining a conditional NeRF with a viewpoint predictor and a scene encoder in order to produce self-supervised reconstructions for whole object categories. Rather than focusing on high fidelity reconstruction, we target efficient and accurate viewpoint prediction in complex scenarios, e.g. 360° rotation on real data. Our model shows competitive results on synthetic and real datasets, both for single scenes and multi-instance collections.
Visual relocalization has been widely discussed in 3D vision: given a pre-built 3D visual map, estimate the 6-DoF (degrees of freedom) pose of a query image. Relocalization in large-scale indoor environments enables attractive applications such as augmented reality and robot navigation. However, appearance changes quickly in such environments as the camera moves, which is challenging for relocalization systems. To address this issue, we propose RenderNet, a virtual view synthesis-based method that enriches the database and refines poses for this particular scenario. Instead of rendering real images, which requires a high-quality 3D model, we choose to directly render the necessary global and local features of virtual viewpoints and apply them in the subsequent image retrieval and feature matching steps, respectively. The proposed method can considerably improve performance in large-scale indoor environments, e.g., achieving improvements of 7.1% and 12.2% on the InLoc dataset.
We present PanoHDR-NeRF, a novel pipeline for casually capturing a plausible full-HDR radiance field of a large indoor scene without elaborate setups or complex capture protocols. First, the user captures a low dynamic range (LDR) omnidirectional video of the scene by freely waving an off-the-shelf camera around the scene. Then, an LDR2HDR network uplifts the captured LDR frames to HDR, which are subsequently used to train a tailored NeRF++ model. The resulting PanoHDR-NeRF pipeline can estimate full HDR panoramas from any location in the scene. Through experiments on a new test dataset of various real scenes with ground-truth HDR radiance captured at locations not seen during training, we show that PanoHDR-NeRF predicts plausible radiance at any scene point. We also show that the HDR images produced by PanoHDR-NeRF can synthesize correct lighting effects, enabling the augmentation of indoor scenes with correctly lit synthetic objects.
A classical problem in computer vision is to infer a 3D scene representation from a few images that can be used to render novel views at interactive rates. Previous work focuses on reconstructing pre-defined 3D representations, e.g., textured meshes, or implicit representations, e.g., radiance fields, and often requires input images with precise camera poses and long processing times for each novel scene. In this work, we propose the Scene Representation Transformer (SRT), a method that processes posed or unposed RGB images of a new region, infers a "set-latent scene representation", and synthesizes novel views, all in a single feed-forward pass. To compute the scene representation, we propose a generalization of the Vision Transformer to sets of images, enabling global information integration and thus 3D reasoning. An efficient decoder transformer renders novel views by attending into the scene representation to parameterize the light field. Learning is supervised end-to-end by minimizing the novel-view reconstruction error. We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets, including a new dataset created for this paper. Furthermore, we demonstrate interactive visualization and semantic segmentation of real-world outdoor environments using Street View imagery.
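The set-latent idea can be sketched compactly: patch tokens from all input views are pooled by an encoder transformer into a single latent set, and a decoder cross-attends into that set with per-pixel ray queries to predict colors. The toy model below only illustrates this data flow; all module sizes are assumptions and it is far smaller than the actual SRT.

```python
import torch
import torch.nn as nn

class TinySRT(nn.Module):
    """Sketch of the set-latent idea (placeholder modules, not the released model):
    an encoder transformer pools patch tokens from all input views into one latent set;
    a decoder queries that set with per-ray embeddings to predict colors."""
    def __init__(self, dim=256, patch=16):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, patch, patch)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.ray_embed = nn.Linear(6, dim)                    # ray origin + direction
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.to_rgb = nn.Linear(dim, 3)

    def forward(self, images, rays):
        # images: (B, V, 3, H, W) context views; rays: (B, N, 6) query rays
        B, V = images.shape[:2]
        tokens = self.patchify(images.flatten(0, 1)).flatten(2).transpose(1, 2)
        tokens = tokens.reshape(B, -1, tokens.shape[-1])      # one latent set over all views
        latent = self.encoder(tokens)
        q = self.ray_embed(rays)
        attended, _ = self.cross_attn(q, latent, latent)      # decoder attends into the set
        return torch.sigmoid(self.to_rgb(attended))           # per-ray color (light field)
```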
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views. The core of our method is a network architecture that includes a multilayer perceptron and a ray transformer that estimates radiance and volume density at continuous 5D locations (3D spatial locations and 2D viewing directions), drawing appearance information on the fly from multiple source views. By drawing on source views at render time, our method hearkens back to classic work on image-based rendering (IBR), and allows us to render high-resolution imagery. Unlike neural scene representation work that optimizes per-scene functions for rendering, we learn a generic view interpolation function that generalizes to novel scenes. We render images using classic volume rendering, which is fully differentiable and allows us to train using only multiview posed images as supervision. Experiments show that our method outperforms recent novel view synthesis methods that also seek to generalize to novel scenes. Further, if fine-tuned on each scene, our method is competitive with state-of-the-art single-scene neural rendering methods.
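The "classic volume rendering" referred to above is the standard alpha-compositing of per-sample radiance and density along each ray; a minimal differentiable version for a single ray is sketched below (standard NeRF-style compositing, not this paper's released code).

```python
import torch

def composite_ray(rgb, sigma, z_vals):
    """Classic volume rendering: alpha-composite per-sample colors `rgb` (N, 3)
    and densities `sigma` (N,) at depths `z_vals` (N,) along one ray."""
    deltas = torch.cat([z_vals[1:] - z_vals[:-1],
                        torch.full_like(z_vals[:1], 1e10)])     # last interval ~ infinity
    alpha = 1.0 - torch.exp(-sigma * deltas)                    # opacity of each segment
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-10])[:-1], dim=0)  # transmittance
    weights = alpha * trans
    color = (weights[:, None] * rgb).sum(dim=0)                 # expected color along the ray
    depth = (weights * z_vals).sum(dim=0)                       # expected termination depth
    return color, depth, weights
```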
We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering work that learns representations of geometry and appearance from 2D images alone. While existing works deliver compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, especially under user-controlled novel poses, remains difficult. To address this problem, we utilize a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views with high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and the dynamic appearance. Experiments show that our method achieves better quality than the state of the art for playback as well as novel pose synthesis, and can even generalize to poses that differ significantly from the training poses. In addition, our method also supports body shape control of the synthesized results.
Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations. The next key step in immersive virtual experiences is view synthesis of dynamic scenes. However, several challenges exist due to the lack of high-quality training datasets, and the additional time dimension for videos of dynamic scenes. To address this issue, we introduce a multi-view video dataset, captured with a custom 10-camera rig in 120FPS. The dataset contains 96 high-quality scenes showing various visual effects and human interactions in outdoor scenes. We develop a new algorithm, Deep 3D Mask Volume, which enables temporally-stable view extrapolation from binocular videos of dynamic scenes, captured by static cameras. Our algorithm addresses the temporal inconsistency of disocclusions by identifying the error-prone areas with a 3D mask volume, and replaces them with static background observed throughout the video. Our method enables manipulation in 3D space as opposed to simple 2D masks. We demonstrate better temporal stability than frame-by-frame static view synthesis methods, or those that use 2D masks. The resulting view synthesis videos show minimal flickering artifacts and allow for larger translational movements.
Neural Radiance Fields (NeRF) have shown very impressive performance in novel view synthesis by implicitly modeling 3D representations from multi-view 2D images. However, most existing studies train NeRF models with either reasonable camera pose initializations or hand-crafted camera pose distributions, which are often unavailable or hard to acquire in various real-world scenarios. We design VMRF, an innovative view-matching NeRF that enables effective NeRF training without requiring prior knowledge of camera poses or camera pose distributions. VMRF introduces a view-matching scheme that exploits unbalanced optimal transport to produce a feature transport plan mapping a rendered image with a randomly initialized camera pose to the corresponding real image. With the feature transport plan as guidance, a novel pose calibration technique is designed that rectifies the initially randomized camera poses by predicting relative pose transformations between pairs of rendered and real images. Extensive experiments on a number of synthetic datasets show that the proposed VMRF outperforms the state of the art both qualitatively and quantitatively by large margins.
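The view-matching step can be approximated with entropic optimal transport between the two feature sets; the sketch below uses a plain balanced Sinkhorn iteration as a simplified stand-in for the paper's unbalanced formulation, and assumes L2-normalized features extracted from the rendered and real images.

```python
import torch

def sinkhorn_plan(feat_rendered, feat_real, eps=0.05, iters=50):
    """Sketch of the feature-transport idea: compute a soft transport plan between
    the feature sets of a rendered image and a real image via entropic (Sinkhorn)
    optimal transport. feat_*: (N, D) and (M, D) L2-normalized feature vectors."""
    cost = 1.0 - feat_rendered @ feat_real.T                      # cosine cost matrix (N, M)
    kernel = torch.exp(-cost / eps)
    a = torch.full((cost.shape[0],), 1.0 / cost.shape[0], device=cost.device)
    b = torch.full((cost.shape[1],), 1.0 / cost.shape[1], device=cost.device)
    u = torch.ones_like(a)
    for _ in range(iters):                                        # Sinkhorn iterations
        v = b / (kernel.T @ u)
        u = a / (kernel @ v)
    plan = u[:, None] * kernel * v[None, :]                       # transport plan (N, M)
    return plan   # used as guidance for predicting the relative pose correction
```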
Visual (re)localization addresses the problem of estimating the 6-DoF (degrees of freedom) camera pose of a query image captured in a known scene, a key building block of many computer vision and robotics applications. Recent advances in structure-based localization build 2D-3D correspondences for camera pose optimization by memorizing the mapping from image pixels to scene coordinates with neural networks. However, such memorization requires training on large numbers of posed images for each scene, which is heavy and inefficient. In contrast, a few images are usually sufficient to cover the main regions of a scene for a human operator to perform visual localization. In this paper, we propose a scene region classification approach to achieve fast and effective scene memorization from few-shot images. Our insight is to leverage a) a pre-learned feature extractor, b) a scene region classifier, and c) a meta-learning strategy to accelerate training while mitigating overfitting. We evaluate our method on both indoor and outdoor benchmarks. The experiments validate the effectiveness of our method in the few-shot setting, and the training time is significantly reduced to only a few minutes. Code available: https://github.com/siyandong/src
We propose a portable multi-view camera system with a dedicated model for novel view and time synthesis of dynamic scenes. Our goal is to render high-quality images of a dynamic scene from any viewpoint at any time using our portable multi-view camera. To achieve this novel view and time synthesis, we develop a physical multi-view camera rig equipped with five cameras to train a neural radiance field (NeRF) in both the time and spatial domains for dynamic scenes. Our model maps a 6D coordinate (3D spatial position, 1D temporal coordinate, and 2D viewing direction) to view-dependent and time-varying emitted radiance and volume density. Volume rendering is applied to render photo-realistic images at specified camera poses and times. To improve the robustness of the physical camera rig, we propose a camera parameter optimization module and a temporal frame interpolation module to promote information propagation across time. We conduct experiments on both real-world and synthetic datasets to evaluate our system, and the results show that our approach outperforms alternative solutions qualitatively and quantitatively. Our code and dataset are available at https://yuenfuilau.github.io.
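A minimal version of the 6D field described above (position, time, and viewing direction mapped to time-varying radiance and density) might look like the following; widths, frequency counts, and the plain MLP structure are illustrative assumptions rather than the system's actual model.

```python
import torch
import torch.nn as nn

def posenc(x, n_freqs=6):
    """Standard sinusoidal positional encoding applied per input dimension."""
    freqs = 2.0 ** torch.arange(n_freqs, device=x.device) * torch.pi
    enc = [x] + [f(x[..., None] * freqs).flatten(-2) for f in (torch.sin, torch.cos)]
    return torch.cat(enc, dim=-1)

class DynamicNeRF(nn.Module):
    """Minimal sketch of a 6D radiance field: (x, y, z, t) and a 2D viewing
    direction map to time-varying density and view-dependent color."""
    def __init__(self, width=256, n_freqs=6):
        super().__init__()
        self.n_freqs = n_freqs
        in_xyzt = 4 * (1 + 2 * n_freqs)
        in_dir = 2 * (1 + 2 * n_freqs)
        self.trunk = nn.Sequential(nn.Linear(in_xyzt, width), nn.ReLU(),
                                   nn.Linear(width, width), nn.ReLU())
        self.sigma = nn.Linear(width, 1)
        self.color = nn.Sequential(nn.Linear(width + in_dir, width // 2), nn.ReLU(),
                                   nn.Linear(width // 2, 3), nn.Sigmoid())

    def forward(self, xyzt, view_dir):
        h = self.trunk(posenc(xyzt, self.n_freqs))
        sigma = torch.relu(self.sigma(h))                         # volume density
        rgb = self.color(torch.cat([h, posenc(view_dir, self.n_freqs)], dim=-1))
        return rgb, sigma
```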