As a basic component of SE(3)-equivariant deep feature learning, steerable convolution has recently demonstrated its advantages for 3D semantic analysis. These advantages, however, come at the cost of expensive computations on dense, volumetric data, which prevents their practical use for efficiently processing 3D data that are inherently sparse. In this paper, we propose a novel design of Sparse Steerable Convolution (SS-Conv) to address this shortcoming; SS-Conv greatly accelerates steerable convolution on sparse tensors, while strictly preserving the property of SE(3)-equivariance. Based on SS-Conv, we propose a general pipeline for precise estimation of object poses, whose key design is a Feature-Steering module that takes full advantage of SE(3)-equivariance and enables efficient pose refinement. To verify our designs, we conduct thorough experiments on three tasks of 3D object semantic analysis, including instance-level 6D pose estimation, category-level 6D pose and size estimation, and category-level 6D pose tracking. Our SS-Conv-based pipeline outperforms existing methods on almost all metrics evaluated across the three tasks. Ablation studies also demonstrate the superiority of SS-Conv over alternative convolutions in terms of both accuracy and efficiency. Our code is publicly released at https://github.com/gorilla-lab-scut/ss-conv.
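To make the steerability property concrete, here is a minimal sketch (not the SS-Conv implementation; all names are illustrative) of how a type-1, i.e. vector-valued, SE(3)-equivariant feature field transforms under a rotation: the support of the field rotates, and each per-point vector feature is multiplied by the same rotation matrix.

```python
# Hypothetical illustration of steering a type-1 (vector) feature field;
# not the paper's Feature-Steering module.
import numpy as np

def steer_vector_features(points, feats, R):
    """points: (N, 3) coordinates; feats: (N, C, 3) vector features;
    R: (3, 3) rotation matrix. Returns the steered field."""
    points_rot = points @ R.T                       # rotate the field's support
    feats_rot = np.einsum('ij,ncj->nci', R, feats)  # rotate every vector channel
    return points_rot, feats_rot
```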
It is difficult to precisely annotate object instances and their semantics in 3D space, and as such, synthetic data are extensively used for tasks such as category-level 6D object pose and size estimation. However, the easy annotations obtained in the synthetic domain bring the downside of a synthetic-to-real (Sim2Real) domain gap. In this work, we aim to address this issue in the task setting of Sim2Real, unsupervised domain adaptation for category-level 6D object pose and size estimation. We propose a method built upon a novel Deep Prior Deformation Network, shortened as DPDN. DPDN learns to deform features of categorical shape priors to match those of object observations, and is thus able to establish deep correspondence in the feature space for direct regression of object poses and sizes. To reduce the Sim2Real domain gap, we formulate a novel self-supervised objective upon DPDN via consistency learning. More specifically, we apply two rigid transformations to each object observation and feed the two copies into DPDN to generate a dual set of predictions; on top of this parallel learning, an inter-consistency term is employed to keep cross consistency between the dual predictions, improving the sensitivity of DPDN to pose changes, while individual intra-consistency terms enforce self-adaptation within each branch of the learning. We train DPDN on both training sets of the synthetic CAMERA25 and real-world REAL275 datasets; our results outperform existing methods on the REAL275 test set under both the unsupervised and supervised settings. Ablation studies also verify the efficacy of our designs. Our code will be released publicly at https://github.com/jiehonglin/self-dpdn.
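As a hedged illustration of the dual-prediction consistency idea (a sketch, not the released Self-DPDN code), the snippet below transforms one observation with two random rigid motions, predicts a pose for each copy, and penalizes disagreement once both predictions are mapped back to the original frame; `pose_net` is a hypothetical stand-in that returns a rotation `R` (3, 3) and translation `t` (3,).

```python
import torch

def random_rotation():
    # A random rotation via QR decomposition of a Gaussian matrix.
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    return q * torch.sign(torch.det(q))  # flip sign if needed so det = +1

def inter_consistency_loss(pose_net, points):
    R1, t1 = random_rotation(), torch.randn(3)
    R2, t2 = random_rotation(), torch.randn(3)
    Ra, ta = pose_net(points @ R1.T + t1)  # prediction for transformed copy 1
    Rb, tb = pose_net(points @ R2.T + t2)  # prediction for transformed copy 2
    # Undo each transform so both predictions refer to the original frame.
    Ra0, ta0 = R1.T @ Ra, R1.T @ (ta - t1)
    Rb0, tb0 = R2.T @ Rb, R2.T @ (tb - t2)
    return (Ra0 - Rb0).norm() + (ta0 - tb0).norm()
```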
6D pose estimation of rigid objects from RGB-D images is crucial for object grasping and manipulation in robotics. Although the RGB channels and the depth (D) channel are often complementary, providing appearance and geometry information respectively, it remains non-trivial how to fully benefit from the two cross-modal data. From the simple yet new observation that, when an object rotates, its semantic label is pose-invariant while its keypoint offset directions are pose-variant, we present SO(3)-Pose, a new representation learning network that explores SO(3)-equivariant and SO(3)-invariant features from the depth channel for pose estimation. The SO(3)-invariant features facilitate learning more distinctive representations for segmenting objects that have similar appearance in the RGB channels. The SO(3)-equivariant features communicate with the RGB features to deduce the (missing) geometry and detect keypoints of objects with reflective surfaces from the depth channel. Unlike most existing pose estimation methods, our SO(3)-Pose not only implements information communication between the RGB and depth channels, but also naturally absorbs SO(3)-equivariant geometric knowledge from depth images, leading to better appearance and geometry representation learning. Comprehensive experiments show that our method achieves state-of-the-art performance on three benchmarks.
This paper proposes a new point cloud convolution structure that learns SE(3)-equivariant features. Compared with existing SE(3)-equivariant networks, our design is lightweight, simple, and flexible enough to be incorporated into general point cloud learning networks. We strike a balance between model complexity and capacity by choosing an unconventional domain for the feature maps. We further reduce the computational load by properly discretizing $\mathbb{R}^3$ to fully exploit the rotational symmetry. Moreover, we adopt a permutation layer to recover the full SE(3) group from its quotient space. Experiments show that our method achieves comparable or superior performance across various tasks while consuming much less memory and running faster than existing work. The proposed method can facilitate equivariant feature learning in a variety of practical applications based on point clouds and inspire future developments of equivariant feature learning for real-world applications.
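For intuition, here is a toy sketch of lifting a voxel signal to a finite rotation group, using only the four 90-degree rotations about the z-axis; the paper's discretization of $\mathbb{R}^3$ is more careful, and its permutation layer for recovering full SE(3) is not reproduced here.

```python
# Toy group-lifting layer over the cyclic group of four z-rotations.
import numpy as np
from scipy.ndimage import correlate

def lift_to_group(volume, kernel):
    """volume, kernel: (D, H, W) arrays. Returns features indexed by group
    element: out[g] = correlate(rotate_g(volume), kernel)."""
    out = []
    for g in range(4):
        rotated = np.rot90(volume, k=g, axes=(1, 2))  # rotate about z
        out.append(correlate(rotated, kernel, mode='constant'))
    # Rotating the input by one step cyclically shifts the group axis
    # (and rotates each slice), which is the equivariance being exploited.
    return np.stack(out)  # (4, D, H, W)
```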
While category-level 9DoF object pose estimation has emerged recently, previous correspondence-based or direct regression methods are both limited in accuracy due to large intra-category variances in object shape, color, and so on. In response, we propose a category-level object pose and size refiner, CATRE, which is able to iteratively refine pose estimates from point clouds to produce accurate results. Given an initial pose estimate, CATRE predicts the relative transformation between the initial pose and the ground truth by aligning the partially observed point cloud with an abstract shape prior. Specifically, we propose a novel disentangled architecture that accounts for the inherent distinctions between rotation and translation/size estimation. Extensive experiments show that our approach remarkably outperforms state-of-the-art methods on the REAL275, CAMERA25, and LM benchmarks while running at up to ~85.32 Hz, and achieves competitive results on category-level tracking. We further demonstrate that CATRE can perform pose refinement on unseen categories. Code and trained models are available.
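The iterative scheme can be summarized by the following hedged sketch, in which `predict_delta` is a hypothetical stand-in for the refiner: it compares the observed cloud against the shape prior transformed by the current estimate and returns a corrective rotation, translation, and scale.

```python
import numpy as np

def refine_pose(predict_delta, observed, prior, R, t, s, steps=5):
    """observed, prior: (N, 3) point clouds; R: (3, 3); t: (3,); s: scalar."""
    for _ in range(steps):
        prior_posed = s * (prior @ R.T) + t     # shape prior under current pose
        dR, dt, ds = predict_delta(observed, prior_posed)
        R, t, s = dR @ R, t + dt, s * ds        # compose the correction
    return R, t, s
```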
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard "dense" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SSCNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.
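A minimal dictionary-based sketch of the submanifold idea follows: features live only at active sites, and outputs are computed only at those same sites, gathering whichever neighbors happen to be active (real SSCN implementations use hash tables and optimized GPU kernels rather than Python dictionaries).

```python
import itertools
import numpy as np

def submanifold_conv3x3x3(active, weights, bias=0.0):
    """active: dict mapping coordinate (x, y, z) -> feature vector (C_in,);
    weights: dict mapping offset (dx, dy, dz) -> (C_out, C_in) matrix."""
    out = {}
    offsets = list(itertools.product((-1, 0, 1), repeat=3))
    for (x, y, z) in active:                     # outputs only at active sites
        acc = bias
        for dx, dy, dz in offsets:
            nb = active.get((x + dx, y + dy, z + dz))
            if nb is not None:                   # inactive neighbors contribute nothing
                acc = acc + weights[(dx, dy, dz)] @ nb
        out[(x, y, z)] = acc
    return out
```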
Recent progress in geometric computer vision has shown significant advances in reconstruction and novel view rendering from multiple views by capturing the scene as a neural radiance field. Such approaches have changed the paradigm of reconstruction but need a plethora of views and do not make use of object shape priors. On the other hand, deep learning has shown how to use priors in order to infer shape from single images. Such approaches, though, require that the object is reconstructed in a canonical pose or assume that object pose is known during training. In this paper, we address the problem of how to compute equivariant priors for reconstruction from a few images, given the relative poses of the cameras. Our proposed reconstruction is $SE(3)$-gauge equivariant, meaning that it is equivariant to the choice of world frame. To achieve this, we make two novel contributions to light field processing: we define light field convolution and we show how it can be approximated by intra-view $SE(2)$ convolutions because the original light field convolution is computationally and memory-wise intractable; we design a map from the light field to $\mathbb{R}^3$ that is equivariant to the transformation of the world frame and to the rotation of the views. We demonstrate equivariance by obtaining robust results in roto-translated datasets without performing transformation augmentation.
A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input. Most approaches for equivariance under the Euclidean group $\mathrm{SE}(3)$ of rotations and translations fall within one of the two major categories. The first category consists of methods that use $\mathrm{SE}(3)$-convolution which generalizes classical $\mathbb{R}^3$-convolution on signals over $\mathrm{SE}(3)$. Alternatively, it is possible to use \textit{steerable convolution} which achieves $\mathrm{SE}(3)$-equivariance by imposing constraints on $\mathbb{R}^3$-convolution of tensor fields. It is known by specialists in the field that the two approaches are equivalent, with steerable convolution being the Fourier transform of $\mathrm{SE}(3)$ convolution. Unfortunately, these results are not widely known and moreover the exact relations between deep learning architectures built upon these two approaches have not been precisely described in the literature on equivariant deep learning. In this work we provide an in-depth analysis of both methods and their equivalence and relate the two constructions to multiview convolutional networks. Furthermore, we provide theoretical justifications of separability of $\mathrm{SE}(3)$ group convolution, which explain the applicability and success of some recent approaches. Finally, we express different methods using a single coherent formalism and provide explicit formulas that relate the kernels learned by different methods. In this way, our work helps to unify different previously-proposed techniques for achieving roto-translational equivariance, and helps to shed light on both the utility and precise differences between various alternatives. We also derive new TFN non-linearities from our equivalence principle and test them on practical benchmark datasets.
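For reference, the two constructions can be written as follows (standard definitions, not a reproduction of this paper's notation):

```latex
% SE(3) group convolution of signals over SE(3):
(f \ast \psi)(g) \;=\; \int_{\mathrm{SE}(3)} f(h)\,\psi(h^{-1}g)\,\mathrm{d}h,
\qquad g \in \mathrm{SE}(3).
% Steerable convolution is instead an ordinary \mathbb{R}^3 convolution of
% tensor fields whose kernel K : \mathbb{R}^3 \to \mathbb{R}^{d_\mathrm{out} \times d_\mathrm{in}}
% satisfies the equivariance constraint
K(Rx) \;=\; \rho_\mathrm{out}(R)\, K(x)\, \rho_\mathrm{in}(R)^{-1},
\qquad \forall R \in \mathrm{SO}(3).
```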
Convolutional networks are successful due to their equivariance/invariance under translations. However, rotatable data such as images, volumes, shapes, or point clouds require processing with equivariance/invariance under rotations in cases where the rotational orientation of the coordinate system does not affect the meaning of the data (e.g., object classification). On the other hand, estimating or processing rotations is necessary in cases where rotations themselves are important (e.g., motion estimation). Recently, progress has been made on methods and theory in all of these regards. Here we provide an overview of existing methods for 2D and 3D rotation (and translation) equivariance and invariance, and identify commonalities and links between them.
We present a novel approach to category-level 6D object pose and size estimation. To tackle intra-class shape variation, we learn a canonical shape space (CASS), a unified representation for a large variety of instances of a given object category. In particular, CASS is modeled as the latent space of a deep generative model of canonical 3D shapes with normalized pose. We train a variational auto-encoder (VAE) to generate 3D point clouds in the canonical space from an RGBD image. The VAE is trained in a cross-category fashion, exploiting publicly available large 3D shape repositories. Since the 3D point clouds are generated in normalized pose (with actual size), the encoder of the VAE learns a view-factorized RGBD embedding: it maps an RGBD image from an arbitrary view to a pose-independent 3D shape representation. Object pose is then estimated by contrasting this against pose-dependent features of the input RGBD extracted with a separate deep neural network. We integrate the learning of CASS and of pose and size estimation into an end-to-end trainable network, achieving state-of-the-art performance.
Reasoning about 3D objects from 2D images is challenging due to the large variations in appearance caused by viewing the object from different orientations. Ideally, our model would be invariant or equivariant to changes in object pose. Unfortunately, this is typically not possible with 2D image input, because we do not have an a priori model of how the image changes under out-of-plane object rotations. The only $\mathrm{SO}(3)$-equivariant models that currently exist require point cloud input rather than 2D images. In this paper, we propose a novel model architecture based on icosahedral group convolution that reasons in $\mathrm{SO}(3)$ by projecting the input image onto an icosahedron. As a result of this projection, the model is approximately equivariant to rotations in $\mathrm{SO}(3)$. We apply this model to the object pose estimation task and find that it outperforms reasonable baselines.
Estimating 6D poses of objects from images is an important problem in various applications such as robot manipulation and virtual reality. While direct regression of images to object poses has limited accuracy, matching rendered images of an object against the input image can produce accurate results. In this work, we propose a novel deep neural network for 6D pose matching named DeepIM. Given an initial pose estimation, our network is able to iteratively refine the pose by matching the rendered image against the observed image. The network is trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process. Experiments on two commonly used benchmarks for 6D pose estimation demonstrate that DeepIM achieves large improvements over state-of-the-art methods. We furthermore show that DeepIM is able to match previously unseen objects.
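A hedged sketch of this render-and-compare loop (illustrative, not the authors' code): render the object at the current estimate, let a matching network predict a relative transform from the (rendered, observed) pair, and compose it into the estimate. `render` and `match_net` are hypothetical stand-ins.

```python
import numpy as np

def deepim_style_refine(match_net, render, observed_img, R, t, iters=4):
    for _ in range(iters):
        rendered = render(R, t)                     # object image at pose (R, t)
        dR, dt = match_net(rendered, observed_img)  # predicted relative pose
        R = dR @ R                                  # disentangled update of
        t = t + dt                                  # rotation and translation
    return R, t
```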
Applications on 3D point clouds increasingly demand efficiency and robustness, driven by the ubiquitous use of edge devices in scenarios such as autonomous driving and robotics, which often require real-time and reliable responses. This paper tackles the challenge by designing a general framework for building 3D learning architectures with SO(3) equivariance and network binarization. However, a naive combination of equivariant networks and binarization leads to either sub-optimal computational efficiency or geometric ambiguity. We propose to keep both scalar and vector features in the network to avoid both pitfalls. Precisely, the presence of scalar features makes the major part of the network binarizable, while vector features serve to retain rich structural information and ensure SO(3) equivariance. The proposed approach can be applied to general backbones such as PointNet and DGCNN. Meanwhile, experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a great trade-off between efficiency, rotation robustness, and accuracy. The code is available at https://github.com/zhuoinoulu/svnet.
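To see why vector features preserve SO(3) equivariance, consider the minimal layer below (illustrative, not the SVNet code): mixing vector channels only along the channel axis commutes with any rotation applied to the coordinate axis, while scalar channels can pass through ordinary, binarizable layers.

```python
import torch

class VectorLinear(torch.nn.Module):
    """Linear layer on vector features (N, C, 3) that mixes channels only."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(c_out, c_in) / c_in ** 0.5)

    def forward(self, v):                  # v: (N, C_in, 3)
        return torch.einsum('oc,ncd->nod', self.weight, v)

# Equivariance check: for any rotation R, layer(v @ R.T) == layer(v) @ R.T,
# because the last (coordinate) axis is never touched by the weights.
```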
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand new point-set learning framework, PRIN, namely Point-wise Rotation Invariant Network, focusing on rotation-invariant feature extraction for point cloud analysis. We construct spherical signals by density-aware adaptive sampling to deal with distorted point distributions in spherical space. Spherical voxel convolution and point re-sampling are proposed to extract rotation-invariant features for each point. In addition, we extend PRIN to a sparse version called SPRIN, which directly operates on sparse point clouds. Both PRIN and SPRIN can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. Results show that, on datasets with randomly rotated point clouds, SPRIN demonstrates better performance than state-of-the-art methods without any data augmentation. We also provide thorough theoretical proofs and analysis of the point-wise rotation invariance achieved by our methods. Our code is available at https://github.com/qq456cvb/sprin.
Estimating the 6D pose of objects is an essential computer vision task. However, most conventional approaches rely on camera data from a single perspective and therefore suffer from occlusions. We overcome this problem with a novel multi-view 6D pose estimation method called MV6D, which accurately predicts the 6D poses of all objects in a cluttered scene from RGB-D images taken from multiple perspectives. We base our approach on the PVN3D network, which uses a single RGB-D image to predict keypoints of the target objects. We extend this approach by using a combined point cloud from multiple views and fusing the image features from each view with a DenseFusion layer. In contrast to current multi-view pose detection networks such as CosyPose, our MV6D learns the fusion of multiple perspectives in an end-to-end manner and requires no multiple prediction stages or subsequent fine-tuning of the predictions. Furthermore, we present three novel photorealistic datasets of cluttered scenes with heavy occlusions, all containing RGB-D images from multiple perspectives along with ground truth such as semantic segmentation and 6D poses. MV6D significantly outperforms the state-of-the-art in multi-view 6D pose estimation, even when the camera poses are known only inaccurately. Moreover, we show that our approach is robust to dynamic camera setups and that its accuracy increases incrementally with an increasing number of perspectives.
This paper proposes a correspondence-free method for rotational registration of point clouds. We learn an embedding of each point cloud in a feature space that preserves the SO(3)-equivariance property, enabled by recent developments in equivariant neural networks. The proposed shape registration method achieves three major advantages by combining equivariant feature learning with an implicit shape model. First, the necessity of data association is removed thanks to the permutation invariance of PointNet-like network architectures. Second, due to the SO(3)-equivariance, the registration in feature space can be solved in closed form using Horn's method. Third, the registration is robust to noise in the point clouds because of the joint training of registration and implicit shape reconstruction. Experimental results show superior performance compared with existing correspondence-free deep registration methods.
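The closed-form step is the classic solution of the orthogonal Procrustes problem (Horn's quaternion method and the SVD-based Kabsch algorithm yield the same rotation); a self-contained sketch applied to paired (N, 3) feature arrays:

```python
import numpy as np

def closed_form_rotation(F, G):
    """F, G: (N, 3) paired equivariant features with G ~= F @ R.T.
    Returns the rotation R minimizing ||G - F @ R.T||_F."""
    H = F.T @ G                          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # projection onto SO(3)
```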
Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360-rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient and fixed computational complexity representation, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges.
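The circular harmonic filters take the standard form below (a generic definition rather than the paper's exact notation): a learned radial profile multiplied by a fixed complex angular phase of rotation order $m$.

```latex
% Circular harmonic filter in polar coordinates (r, \phi), with learned
% radial profile R(r) and phase offset \beta:
W_m(r, \phi) = R(r)\, e^{i(m\phi + \beta)}.
% Rotating the input patch by \theta multiplies the response by e^{i m \theta},
% so the response magnitude is rotation-invariant and its phase equivariant.
```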
In this paper, we propose a novel 3D graph convolution based pipeline for category-level 6D pose and size estimation from monocular RGB-D images. The proposed method leverages an efficient 3D data augmentation and a novel vector-based decoupled rotation representation. Specifically, we first design an orientation-aware autoencoder with 3D graph convolution for latent feature learning. The learned latent feature is insensitive to point shift and size thanks to the shift and scale-invariance properties of the 3D graph convolution. Then, to efficiently decode the rotation information from the latent feature, we design a novel flexible vector-based decomposable rotation representation that employs two decoders to complementarily access the rotation information. The proposed rotation representation has two major advantages: 1) decoupled characteristic that makes the rotation estimation easier; 2) flexible length and rotated angle of the vectors allow us to find a more suitable vector representation for specific pose estimation task. Finally, we propose a 3D deformation mechanism to increase the generalization ability of the pipeline. Extensive experiments show that the proposed pipeline achieves state-of-the-art performance on category-level tasks. Further, the experiments demonstrate that the proposed rotation representation is more suitable for the pose estimation tasks than other rotation representations.
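As a hedged illustration of decoding a rotation from two predicted vectors (the paper's two decoders and its vector parameterization with learned lengths and rotated angles differ in detail), a Gram-Schmidt construction suffices to map two non-parallel vectors to a valid rotation matrix:

```python
import torch

def rotation_from_two_vectors(a, b):
    """a, b: (3,) predicted, non-parallel vectors -> rotation matrix (3, 3)."""
    r1 = a / a.norm()                      # first column: direction of a
    r2 = b - (b @ r1) * r1                 # remove the component along r1
    r2 = r2 / r2.norm()
    r3 = torch.linalg.cross(r1, r2)        # third column completes the frame
    return torch.stack([r1, r2, r3], dim=1)
```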
Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and other transformations belonging to an origin-preserving group $G$, such as reflections and rotations. They rely on standard convolutions with $G$-steerable kernels obtained by analytically solving the group-specific equivariance constraint imposed onto the kernel space. As the solution is tailored to a particular group $G$, the implementation of a kernel basis does not generalize to other symmetry transformations, which complicates the development of group equivariant models. We propose using implicit neural representation via multi-layer perceptrons (MLPs) to parameterize $G$-steerable kernels. The resulting framework offers a simple and flexible way to implement Steerable CNNs and generalizes to any group $G$ for which a $G$-equivariant MLP can be built. We apply our method to point cloud (ModelNet-40) and molecular data (QM9) and demonstrate a significant improvement in performance compared to standard Steerable CNNs.
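A minimal sketch of the implicit-kernel idea (a generic illustration; the paper additionally constrains the MLP so that the resulting kernel is exactly $G$-steerable, which this sketch omits): a small MLP maps relative offsets to kernel matrices, which are contracted with neighbor features as a continuous convolution.

```python
import torch

class ImplicitKernelConv(torch.nn.Module):
    def __init__(self, c_in, c_out, hidden=32):
        super().__init__()
        self.mlp = torch.nn.Sequential(      # offset (3,) -> flattened kernel
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, c_out * c_in))
        self.c_in, self.c_out = c_in, c_out

    def forward(self, offsets, feats):
        """offsets: (N, K, 3) neighbor offsets; feats: (N, K, C_in)."""
        K = self.mlp(offsets).view(*offsets.shape[:2], self.c_out, self.c_in)
        # Continuous convolution: out[n] = sum_k K[n, k] @ feats[n, k]
        return torch.einsum('nkoc,nkc->no', K, feats)
```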