Spacecraft pose estimation is a key task for space missions in which two spacecraft must navigate around each other. Current state-of-the-art algorithms for pose estimation employ data-driven techniques. However, there is an absence of real training data for spacecraft imaged in space conditions due to the costs and difficulties associated with the space environment. This has motivated the introduction of 3D data simulators, solving the issue of data availability but introducing a large gap between the training (source) and test (target) domains. We explore a method that incorporates 3D structure into the spacecraft pose estimation pipeline to provide robustness to intensity domain shift, and we present an algorithm for unsupervised domain adaptation with robust pseudo-labelling. Our solution ranked second in both categories of the 2021 Pose Estimation Challenge organised by the European Space Agency and Stanford University, achieving the lowest average error over the two categories.
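As a rough illustration of the pseudo-labelling loop such methods build on, the sketch below runs a generate-filter-retrain cycle on a toy classification problem; the model, data, and confidence threshold are all placeholders, and the paper's robustness criteria for pose pseudo-labels are not reproduced here.

```python
import torch
import torch.nn.functional as F

# Toy pseudo-labelling round on a stand-in classification problem with
# random data; the paper filters *pose* pseudo-labels with its own criteria.
torch.manual_seed(0)
model = torch.nn.Linear(16, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

source_x = torch.randn(64, 16)
source_y = torch.randint(0, 4, (64,))
target_x = torch.randn(64, 16)          # unlabelled target-domain inputs

for _ in range(5):
    # 1) generate pseudo-labels, keeping only confident target predictions
    with torch.no_grad():
        probs = F.softmax(model(target_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        keep = conf > 0.8               # illustrative robustness filter
    # 2) retrain on source labels plus the retained pseudo-labels
    loss = F.cross_entropy(model(source_x), source_y)
    if keep.any():
        loss = loss + F.cross_entropy(model(target_x[keep]), pseudo_y[keep])
    opt.zero_grad()
    loss.backward()
    opt.step()
```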
Recently, unsupervised domain adaptation in satellite pose estimation has gained increasing attention, aiming to alleviate the annotation cost of training deep models. To this end, we propose a self-training framework based on domain-agnostic geometric constraints. Specifically, we train a neural network to predict the 2D keypoints of a satellite and then use PnP to estimate the pose. The poses of target samples are regarded as latent variables, formulating the task as a minimization problem. Furthermore, we leverage fine-grained segmentation to tackle the information loss caused by abstracting the satellite as sparse keypoints. Finally, we iteratively solve the minimization problem in two steps: pseudo-label generation and network training. Experimental results show that our method adapts well to the target domain. Moreover, our method won first place on the sunlamp task of the second international Satellite Pose Estimation Competition.
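The keypoints-then-PnP stage is standard enough to sketch with OpenCV. In the example below the satellite keypoints, intrinsics, and noise level are invented, and the network's 2D predictions are simulated by projecting under a known pose so the solver's output can be sanity-checked.

```python
import cv2
import numpy as np

# Hypothetical 3D keypoints on the satellite model (metres, body frame).
object_pts = np.array([[0.3, 0.3, 0.0], [-0.3, 0.3, 0.0], [-0.3, -0.3, 0.0],
                       [0.3, -0.3, 0.0], [0.0, 0.0, 0.5], [0.1, 0.0, 0.2]])
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

# Simulate the network output: project the keypoints under a known pose and
# add pixel noise, standing in for the CNN's 2D keypoint predictions.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.2, -0.1, 5.0])
image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)
image_pts = image_pts.reshape(-1, 2) + np.random.normal(0.0, 0.5, (6, 2))

# PnP recovers the 6-DoF pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                              flags=cv2.SOLVEPNP_EPNP)
print(ok, rvec.ravel(), tvec.ravel())  # close to the ground-truth pose
```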
Autonomous vision-based spaceborne navigation is an enabling technology for future on-orbit servicing and space logistics missions. While computer vision in general has benefited from machine learning (ML), training and validating spaceborne ML models is extremely challenging due to the impracticality of acquiring a large-scale labelled dataset of images of the intended target in the space environment. To date, existing datasets such as the Spacecraft Pose Estimation Dataset (SPEED) have mostly relied on synthetic images for both training and validation, which are easy to mass-produce but fail to resemble the visual features and illumination variability inherent to target spaceborne imagery. To bridge the gap between current practice and the intended applications in future space missions, this paper introduces SPEED+: the next-generation spacecraft pose estimation dataset with specific emphasis on domain gap. In addition to 60,000 synthetic images for training, SPEED+ includes 9,531 hardware-in-the-loop images of a spacecraft mockup captured at the Testbed for Rendezvous and Optical Navigation (TRON) facility. TRON is a dedicated robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels under high-fidelity spaceborne illumination conditions. SPEED+ is used in the second international Satellite Pose Estimation Challenge, co-hosted by SLAB and the Advanced Concepts Team of the European Space Agency, to evaluate and compare the robustness of spaceborne ML models trained on synthetic images.
While pose estimation is an important computer vision task, it requires expensive annotation and suffers from domain shift. In this paper, we investigate the problem of domain-adaptive 2D pose estimation, which transfers knowledge learned on a synthetic source domain to a target domain without supervision. While several domain-adaptive pose estimation models have been proposed recently, they are not generic but focus on either human pose or animal pose estimation, so their effectiveness is somewhat limited to specific scenarios. In this work, we propose a unified framework that generalises well across various domain-adaptive pose estimation problems. We propose to align representations using both input-level and output-level cues (pixels and pose labels, respectively), which facilitates knowledge transfer from the source domain to the unlabelled target domain. Our experiments show that our method achieves state-of-the-art performance under various domain shifts. It outperforms existing pose estimation baselines by up to 4.5 percentage points (pp) on human pose estimation, up to 7.4 pp on hand pose estimation, up to 4.8 pp on animal pose estimation for dogs, and up to 3.3 pp for sheep. These results suggest that our method is able to mitigate domain shift on diverse tasks and even on unseen domains and objects (e.g., trained on horses and tested on dogs). Our code will be publicly available at: https://github.com/visionlearninggroup/uda_poseestimation.
Learning to estimate object pose often requires ground-truth (GT) labels, such as CAD models and absolute-scale object poses, which are expensive and laborious to obtain in the real world. To tackle this problem, we propose an unsupervised domain adaptation (UDA) method for category-level object pose estimation, called UDA-COPE. Inspired by recent multi-modal UDA techniques, the proposed method exploits a teacher-student self-supervised learning scheme to train a pose estimation network without using target-domain labels. We also introduce a bidirectional filtering method between the predicted normalized object coordinate space (NOCS) map and the observed point cloud, which not only makes our teacher network more robust to the target domain but also provides more reliable pseudo-labels for training the student network. Extensive experiments demonstrate the effectiveness of our proposed method both quantitatively and qualitatively. Notably, without leveraging target-domain GT labels, our method achieves performance comparable to, and sometimes better than, existing methods that depend on GT labels.
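The bidirectional filtering idea, checking that predicted and observed 3D points mutually agree before trusting a correspondence, can be approximated with two nearest-neighbour passes; the sketch below (SciPy k-d trees, illustrative threshold) is a simplification of the paper's actual criterion.

```python
import numpy as np
from scipy.spatial import cKDTree

def bidirectional_filter(pred_pts, obs_pts, thresh=0.05):
    """Keep only mutually consistent 3D points: a predicted point survives if
    an observed point lies within `thresh` of it, and vice versa."""
    d_pred, _ = cKDTree(obs_pts).query(pred_pts)   # predicted -> observed
    d_obs, _ = cKDTree(pred_pts).query(obs_pts)    # observed -> predicted
    return pred_pts[d_pred < thresh], obs_pts[d_obs < thresh]

# Example: a point far from the other set is discarded on both sides.
rng = np.random.default_rng(0)
cloud = rng.random((200, 3))
pred = np.vstack([cloud + rng.normal(0, 0.005, cloud.shape), [[5.0, 5.0, 5.0]]])
kept_pred, kept_obs = bidirectional_filter(pred, cloud)
print(len(kept_pred), len(kept_obs))   # the (5, 5, 5) outlier is removed
```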
We propose a three-stage 6-DoF object detection method called DPODv2 (Dense Pose Object Detector) that relies on dense correspondences. We combine a 2D object detector with a dense correspondence estimation network and a multi-view pose refinement method to estimate the full 6-DoF pose. Unlike other deep learning methods that are typically restricted to monocular RGB images, we propose a unified deep learning network that allows different imaging modalities (RGB or depth) to be used. Moreover, we propose a novel pose refinement method based on differentiable rendering. The main idea is to compare predicted and rendered correspondences in multiple views to obtain a pose that is consistent with the predicted correspondences in all views. Our method is rigorously evaluated on different data modalities and types of training data in controlled setups. The main conclusions are that RGB excels at correspondence estimation, while depth contributes to pose accuracy if good 3D-3D correspondences are available; naturally, their combination achieves the best overall performance. We conduct extensive evaluations and ablation studies to analyse and validate the results on several challenging datasets. DPODv2 achieves excellent results on all of them while remaining fast and scalable, independent of the data modality and the type of training data used.
Most real-time human pose estimation approaches are based on detecting joint positions. Using the detected joint positions, the yaw and pitch of the limbs can be computed. However, the roll along the limb cannot be computed, since this axis of rotation remains unobserved, even though it is critical for applications such as sports analysis and computer animation. In this paper, we introduce orientation keypoints, a novel approach for estimating the full position and rotation of skeletal joints using only single-frame RGB images. Inspired by how motion-capture systems use a set of point markers to estimate full skeletal rotations, our method uses virtual markers to generate sufficient information to accurately infer rotations with simple post-processing. The rotation predictions improve upon the best reported mean error for joint angles by 48% and achieve 93% accuracy across 15 bone rotations. The method also improves the current state-of-the-art results for joint positions by 14% as measured by MPJPE on the principal dataset, and generalises to in-the-wild datasets.
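Recovering a full joint rotation from a handful of marker positions is, at its core, the classic Kabsch/Procrustes problem. The sketch below is a generic illustration of that step, not the paper's post-processing; the marker layout is invented.

```python
import numpy as np

def rotation_from_markers(canonical, predicted):
    """Best-fit rotation (Kabsch/Procrustes) aligning canonical virtual
    markers to their predicted 3D positions; both arrays are Nx3."""
    A = canonical - canonical.mean(axis=0)
    B = predicted - predicted.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Four non-coplanar markers around a "bone" make roll observable: a pure
# roll about the bone axis moves the off-axis markers and is recovered.
markers = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                    [0.1, 0.0, 0.5], [0.0, 0.1, 0.5]])
roll = np.pi / 4
Rz = np.array([[np.cos(roll), -np.sin(roll), 0.0],
               [np.sin(roll),  np.cos(roll), 0.0],
               [0.0, 0.0, 1.0]])
R_est = rotation_from_markers(markers, markers @ Rz.T)
print(np.allclose(R_est, Rz))   # True
```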
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications, such as the detection of unsafe situations or the study of mutual interactions for statistical and social purposes. In this paper, we propose a non-invasive, light-invariant framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera. The method can be applied to any robot without requiring hardware access to its internal states. We introduce a novel representation of the predicted pose, namely Semi-Perspective Decoupled Heatmaps (SPDH), to accurately compute 3D joint locations in world coordinates, adapting efficient deep networks designed for 2D human pose estimation. The proposed approach, which takes as input a depth representation based on XYZ coordinates, can be trained on synthetic depth data and applied to real-world settings without the need for domain adaptation techniques. To this end, we introduce the SimBa dataset, based on both synthetic and real depth images, and use it for the experimental evaluation. Results show that the proposed approach, combining the specific depth map representation with SPDH, outperforms the current state of the art.
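The XYZ-coordinate input representation amounts to back-projecting the depth map through the camera intrinsics. A minimal sketch, with intrinsics assumed and any dataset-specific preprocessing omitted:

```python
import numpy as np

def depth_to_xyz(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into an H x W x 3 XYZ image,
    the kind of input representation described above."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

depth = np.full((480, 640), 2.0)               # toy input: flat wall 2 m away
xyz = depth_to_xyz(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(xyz.shape)                               # (480, 640, 3)
```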
This paper presents a neural-network-powered unscented Kalman filter (UKF) for tracking the pose (i.e., position and orientation) of a known, non-cooperative, tumbling target spacecraft in a close-proximity rendezvous scenario. The UKF estimates the relative orbital and attitude states of the target with respect to the servicer based on pose information extracted from incoming monocular images of the target spacecraft by a convolutional neural network (CNN). To enable reliable tracking, the process noise covariance matrix of the UKF is tuned online using adaptive state noise compensation. Specifically, a closed-form process noise model for the relative attitude dynamics is newly derived and implemented. To comprehensively analyse the performance and robustness of the proposed CNN-powered UKF, this paper also introduces the Satellite Hardware-In-the-loop Rendezvous Trajectories (SHIRT) dataset, which comprises labelled images of two representative rendezvous trajectories in low Earth orbit. For each trajectory, two sets of images are created, one from a graphics renderer and one from a robotic testbed, to allow testing the filter's robustness across the domain gap. The proposed UKF is evaluated on both domains of the two SHIRT trajectories and is shown to have sub-decimetre-level position and degree-level orientation errors at steady state.
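A heavily simplified sketch of this filter architecture is given below, assuming the filterpy library: the CNN measurement is faked, the dynamics are reduced to constant-velocity translation, and a crude innovation test stands in for adaptive state noise compensation; none of it reproduces the paper's actual models.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 1.0  # time between images (placeholder)

def fx(x, dt):
    # Constant-velocity translation as a stand-in for the paper's relative
    # orbital and attitude dynamics, which are far richer.
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    return F @ x

def hx(x):
    # The CNN front-end supplies a relative position measurement per image
    # (the real filter also ingests orientation).
    return x[:3]

points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([9.0, 0.0, 0.0, 0.0, 0.0, 0.0])
ukf.Q = 1e-4 * np.eye(6)
ukf.R = 0.05 ** 2 * np.eye(3)

rng = np.random.default_rng(0)
for _ in range(20):
    z = np.array([10.0, 0.0, 0.0]) + rng.normal(0, 0.05, 3)  # fake CNN output
    ukf.predict()
    ukf.update(z)
    # Crude stand-in for adaptive state noise compensation: inflate Q when
    # the normalised innovation is statistically inconsistent.
    nis = float(ukf.y @ np.linalg.solve(ukf.S, ukf.y))
    if nis > 7.81:  # chi-square 95% threshold, 3 dof
        ukf.Q *= 1.5
print(ukf.x[:3])  # converges near the true relative position [10, 0, 0]
```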
3D gaze estimation is most often tackled as learning a direct mapping between input images and the gaze vector or its spherical coordinates. Recently, it has been shown that pose estimation of the face, body and hands benefits from revising the learning target from few pose parameters to dense 3D coordinates. In this work, we leverage this observation and propose to tackle 3D gaze estimation as regression of 3D eye meshes. We overcome the absence of compatible ground truth by fitting a rigid 3D eyeball template on existing gaze datasets and propose to improve generalization by making use of widely available in-the-wild face images. To this end, we propose an automatic pipeline to retrieve robust gaze pseudo-labels from arbitrary face images and design a multi-view supervision framework to balance their effect during training. In our experiments, our method achieves a 30% improvement over the state of the art in cross-dataset gaze estimation when no ground-truth data are available for training, and 7% when they are. We make our project publicly available at https://github.com/Vagver/dense3Deyes.
The robustness of gaze and head pose estimation models depends heavily on the amount of labelled data. Recently, generative modelling has shown excellent results in generating photo-realistic images, which can alleviate the need for labelled data. However, adopting such generative models in new domains while maintaining their ability to provide fine-grained control over different image attributes, e.g., gaze and head pose directions, is a challenging problem. This paper proposes CUDA-GHR, an unsupervised domain adaptation framework that enables fine-grained control over gaze and head pose directions while preserving the appearance-related factors of the person. Our framework simultaneously learns to adapt to new domains and to disentangle visual attributes such as appearance, gaze direction, and head orientation by leveraging a label-rich source domain and an unlabelled target domain. Extensive experiments on benchmark datasets show that the proposed method outperforms state-of-the-art techniques in both quantitative and qualitative evaluations. Furthermore, we show that image-label pairs generated in the target domain effectively transfer knowledge and improve performance on downstream tasks.
This paper addresses the challenge of 6DoF pose estimation from a single RGB image under severe occlusion or truncation. Many recent works have shown that a two-stage approach, which first detects keypoints and then solves a Perspective-n-Point (PnP) problem for pose estimation, achieves remarkable performance. However, most of these methods only localize a set of sparse keypoints by regressing their image coordinates or heatmaps, which are sensitive to occlusion and truncation. Instead, we introduce a Pixel-wise Voting Network (PVNet) to regress pixel-wise unit vectors pointing to the keypoints and use these vectors to vote for keypoint locations using RANSAC. This creates a flexible representation for localizing occluded or truncated keypoints. Another important feature of this representation is that it provides uncertainties of keypoint locations that can be further leveraged by the PnP solver. Experiments show that the proposed approach outperforms the state of the art on the LINEMOD, Occlusion LINEMOD and YCB-Video datasets by a large margin, while being efficient for real-time pose estimation. We further create a Truncation LINEMOD dataset to validate the robustness of our approach against truncation. The code will be available at https://zju-3dv.github.io/pvnet/.
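The voting scheme can be illustrated with a toy 2D RANSAC: intersect rays cast from random pixel pairs to hypothesise the keypoint, then score each hypothesis by how many pixels' predicted directions agree with it. This is a simplified stand-in for PVNet's voting and uncertainty estimation:

```python
import numpy as np

def ransac_vote_keypoint(pixels, dirs, n_hyp=128, inlier_cos=0.99):
    """Toy PVNet-style voting: each pixel p casts a ray p + t*d toward the
    keypoint; intersect random ray pairs to form hypotheses and keep the one
    most rays agree with. pixels: Nx2, dirs: Nx2 unit vectors."""
    rng = np.random.default_rng(0)
    best, best_score = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(len(pixels), 2, replace=False)
        # Solve p_i + t_i d_i = p_j + t_j d_j for the intersection point.
        A = np.stack([dirs[i], -dirs[j]], axis=1)
        if abs(np.linalg.det(A)) < 1e-6:
            continue                      # nearly parallel rays, skip
        t = np.linalg.solve(A, pixels[j] - pixels[i])
        h = pixels[i] + t[0] * dirs[i]
        # Inliers: pixels whose predicted direction points at the hypothesis.
        to_h = h - pixels
        to_h /= np.linalg.norm(to_h, axis=1, keepdims=True) + 1e-9
        score = np.sum(np.sum(to_h * dirs, axis=1) > inlier_cos)
        if score > best_score:
            best, best_score = h, score
    return best

# Usage: pixels whose directions all point at (50, 50) vote it back.
pts = np.random.rand(200, 2) * 100
d = np.array([50.0, 50.0]) - pts
d /= np.linalg.norm(d, axis=1, keepdims=True)
print(ransac_vote_keypoint(pts, d))   # approximately [50, 50]
```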
Deep models trained on synthetic data require domain adaptation to bridge the gap between the simulated and target environments. State-of-the-art domain adaptation methods typically demand a sufficient amount of (unlabelled) data from the target domain, but this requirement is hard to satisfy when the target domain is an extreme environment such as space. In this paper, our target problem is close-proximity satellite pose estimation, where acquiring images of the satellite from actual rendezvous missions is costly. We demonstrate that event sensing offers a promising solution for generalising from simulation to the target domain under stark illumination differences. Our main contribution is an event-based satellite pose estimation technique trained purely on synthetic event data, with basic data augmentation to improve robustness against practical (noisy) event sensors. Underpinning our method is a novel dataset with carefully calibrated ground truth, comprising real event data obtained by emulating satellite rendezvous scenarios in the laboratory under drastic lighting conditions. Results on the dataset show that our event-based satellite pose estimation method, trained only on synthetic data without adaptation, generalises effectively to the target domain.
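As an illustration of what basic augmentation on an event stream might look like, the sketch below randomly drops events and injects uniform noise events; the array layout, rates, and sensor size are assumptions, not the paper's recipe.

```python
import numpy as np

def augment_events(events, drop_p=0.1, noise_frac=0.05, sensor=(260, 346)):
    """Basic augmentation on an (N, 4) array of (t, x, y, polarity) events:
    randomly drop real events and inject uniformly distributed noise events."""
    rng = np.random.default_rng()
    kept = events[rng.random(len(events)) > drop_p]
    n_noise = int(noise_frac * len(events))
    noise = np.column_stack([
        rng.uniform(events[:, 0].min(), events[:, 0].max(), n_noise),  # t
        rng.integers(0, sensor[1], n_noise),                           # x
        rng.integers(0, sensor[0], n_noise),                           # y
        rng.choice([-1.0, 1.0], n_noise),                              # polarity
    ])
    out = np.vstack([kept, noise])
    return out[np.argsort(out[:, 0])]      # keep the stream time-ordered

ev = np.column_stack([np.sort(np.random.rand(1000)),      # timestamps
                      np.random.randint(0, 346, 1000),    # x
                      np.random.randint(0, 260, 1000),    # y
                      np.random.choice([-1, 1], 1000)])   # polarity
print(augment_events(ev).shape)
```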
Due to the lack of dense pixel-level semantic annotations for images recorded under adverse visual conditions, there has been growing interest in unsupervised domain adaptation (UDA) for semantic segmentation of such images. UDA adapts models trained on normal conditions to the target adverse-condition domain. Meanwhile, several driving-scene datasets provide corresponding images of the same scenes across multiple conditions, which can serve as weak supervision for domain adaptation. We propose Refign, a generic extension to self-training-based UDA methods that exploits these cross-domain correspondences. Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism. We design custom modules to streamline both steps and set the new state of the art in domain-adaptive semantic segmentation on several adverse-condition benchmarks, including ACDC and Dark Zurich. The approach introduces no extra training parameters and only minimal computational overhead during training, and can be used as a drop-in extension to improve any given self-training-based UDA method. Code is available at https://github.com/brdav/refign.
The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to "instance-level" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce Normalized Object Coordinate Space (NOCS)-a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.
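Turning dense NOCS-depth correspondences into a metric 6D pose and size is a similarity-alignment problem; the classic Umeyama solution sketched below is one standard way to solve that step, though not necessarily the paper's exact solver.

```python
import numpy as np

def umeyama(src, dst):
    """Similarity transform (s, R, t) with dst = s * R @ src + t, estimated
    from Nx3 correspondences (e.g. NOCS coordinates vs. back-projected depth)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                     # resolve reflection ambiguity
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / ((A ** 2).sum() / len(src))
    t = mu_d - s * R @ mu_s
    return s, R, t

# Usage: recover a known similarity transform from noiseless correspondences.
rng = np.random.default_rng(1)
src = rng.random((100, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # 90 deg
dst = 2.0 * src @ Rz.T + np.array([0.1, 0.2, 0.3])
s, R, t = umeyama(src, dst)
print(round(s, 3), np.allclose(R, Rz), np.round(t, 3))
```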
We present ROCA, a novel end-to-end approach that retrieves and aligns 3D CAD models from a shape database to a single input image. This enables 3D perception of the observed scene from a 2D RGB observation, characterised by a lightweight, compact, clean CAD representation. At the core of our approach is a differentiable alignment optimisation based on dense 2D-3D object correspondences and Procrustes alignment. ROCA can thus provide robust CAD alignments while simultaneously informing CAD retrieval by leveraging the 2D-3D correspondences to learn geometrically similar CAD models. Experiments on real-world images from ScanNet show that ROCA significantly improves on the state of the art, from 9.5% to 17.6% in retrieval-aware CAD alignment accuracy.
Estimating the 6D pose of known objects is important for robots to interact with the real world. The problem is challenging due to the variety of objects as well as the complexity of a scene caused by clutter and occlusions between objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network for 6D object pose estimation. PoseCNN estimates the 3D translation of an object by localizing its center in the image and predicting its distance from the camera. The 3D rotation of the object is estimated by regressing to a quaternion representation. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. In addition, we contribute a large scale video dataset for 6D object pose estimation named the YCB-Video dataset. Our dataset provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. We conduct extensive experiments on our YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is highly robust to occlusions, can handle symmetric objects, and provide accurate pose estimation using only color images as input. When using depth data to further refine the poses, our approach achieves state-of-the-art results on the challenging OccludedLINEMOD dataset. Our code and dataset are available at https://rse-lab.cs.washington.edu/projects/posecnn/.
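The symmetric-object loss idea, matching each transformed model point to its nearest counterpart rather than a fixed pairing, can be sketched in PyTorch as below; the quaternion convention and the details of PoseCNN's actual ShapeMatch loss may differ.

```python
import torch

def quat_to_mat(q):
    """Unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])

def shapematch_loss(q_pred, q_gt, points):
    """ShapeMatch-style loss: for a symmetric object, compare each predicted
    model point to its *nearest* ground-truth point instead of a fixed pairing."""
    P = points @ quat_to_mat(q_pred).T          # Nx3, predicted rotation
    G = points @ quat_to_mat(q_gt).T            # Nx3, ground-truth rotation
    d = torch.cdist(P, G)                       # pairwise point distances
    return d.min(dim=1).values.pow(2).mean() / 2

pts = torch.randn(50, 3)
q = torch.tensor([0.9239, 0.0, 0.0, 0.3827])    # 45 deg about z
print(shapematch_loss(q, q, pts))               # ~0 for identical poses
```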
Missions to small celestial bodies rely heavily on optical feature tracking for characterisation of and relative navigation around the target body. While deep learning has led to great advances in feature detection and description, training and validating data-driven models for space applications is challenging due to the limited availability of large-scale, annotated datasets. This paper introduces AstroVision, a large-scale dataset comprising 115,970 densely annotated, real images of 16 different small bodies captured during past and ongoing missions. We leverage AstroVision to develop a set of standardised benchmarks and conduct an exhaustive evaluation of both handcrafted and data-driven feature detection and description methods. Next, we employ AstroVision for end-to-end training of a state-of-the-art, deep feature detection and description network and demonstrate improved performance on multiple benchmarks. The full benchmarking pipeline and the dataset will be made publicly available to facilitate the advancement of computer vision algorithms for space applications.
One of the key criticisms of deep learning is that large amounts of expensive, hard-to-acquire training data are required to train models with high performance and good generalisation capabilities. Focusing on the task of monocular camera pose estimation via scene coordinate regression (SCR), we describe a novel method for domain adaptation of networks for camera pose estimation (DANCE) that enables training models without access to any labels on the target task. DANCE requires unlabelled images (with no known poses, ordering, or scene coordinate labels) and a 3D representation of the space (e.g., a scanned point cloud), both of which can be captured with minimal effort using off-the-shelf commodity hardware. DANCE renders labelled synthetic images from the 3D model and bridges the inevitable domain gap between synthetic and real images by applying unsupervised image-level domain adaptation techniques (unpaired image-to-image translation). When tested on real images, the SCR model trained with DANCE achieved performance comparable to its fully supervised counterpart (both using PnP-RANSAC for final pose estimation) at a fraction of the cost. Our code and dataset are available at https://github.com/jacklangerman/dance.
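The final pose step named here, PnP-RANSAC over per-pixel scene coordinates, can be sketched with OpenCV; below the network's predictions are simulated from a known camera pose (with some points corrupted) so the RANSAC stage has outliers to reject.

```python
import cv2
import numpy as np

# Simulated stand-in for SCR output: per-pixel 3D scene points plus the 2D
# pixel locations they were predicted at; real coordinates come from the
# trained network, not from a known pose as here.
rng = np.random.default_rng(0)
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
rvec_gt = np.array([0.05, -0.1, 0.02])
tvec_gt = np.array([0.3, -0.2, 0.0])

scene_pts = rng.uniform([-2, -2, 3], [2, 2, 8], (500, 3))
pix, _ = cv2.projectPoints(scene_pts, rvec_gt, tvec_gt, K, None)
pix = pix.reshape(-1, 2)
scene_pts[:50] += rng.normal(0, 1.0, (50, 3))   # simulate bad predictions

# PnP-RANSAC recovers the camera pose while rejecting the corrupted matches.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    scene_pts, pix, K, None, iterationsCount=300, reprojectionError=3.0)
print(ok, rvec.ravel(), tvec.ravel(), len(inliers))
```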