Recent work has constructed neural networks that are equivariant to continuous symmetry groups such as 2D and 3D rotations. This is accomplished using explicit Lie group representations to derive the equivariant kernels and nonlinearities. We present three contributions motivated by frontier applications of equivariance beyond rotations and translations. First, we relax the requirement for explicit Lie group representations with a novel algorithm that finds representations of arbitrary Lie groups given only the structure constants of the associated Lie algebra. Second, we provide a self-contained method and software for building Lie group-equivariant neural networks using these representations. Third, we contribute a novel benchmark dataset for classifying objects from relativistic point clouds, and apply our methods to construct the first object-tracking model equivariant to the Poincar\'e group.
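As a toy illustration of recovering a representation from structure constants alone, the sketch below (Python/NumPy; not the paper's algorithm, which targets arbitrary representations) builds the adjoint representation of so(3) directly from its structure constants and exponentiates it to group elements:

```python
import numpy as np
from scipy.linalg import expm

# Structure constants of so(3): [L_i, L_j] = sum_k c[i, j, k] L_k,
# with c[i, j, k] = epsilon_{ijk} (the Levi-Civita symbol).
c = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    c[i, j, k], c[j, i, k] = 1.0, -1.0

# The adjoint representation can always be read off the structure
# constants: (L_i)_{kj} = c[i, j, k].
L = np.stack([c[i].T for i in range(3)])

# Sanity check: the commutation relations [L_i, L_j] = sum_k c[i,j,k] L_k.
for i in range(3):
    for j in range(3):
        comm = L[i] @ L[j] - L[j] @ L[i]
        assert np.allclose(comm, np.einsum('k,kab->ab', c[i, j], L))

# Exponentiating an algebra element yields a group element (a 3D rotation).
R = expm(0.5 * L[0] + 0.3 * L[2])
assert np.allclose(R @ R.T, np.eye(3))  # orthogonal, as expected for SO(3)
```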
We present e3nn, a general framework for creating E(3)-equivariant trainable functions, also known as Euclidean neural networks. e3nn naturally operates on geometry and geometric tensors that describe systems in 3D and transform predictably under a change of coordinate system. At the core of e3nn are equivariant operations, such as the TensorProduct class or the spherical harmonics functions, which can be composed to create more complex modules such as convolutions and attention mechanisms. These core operations of e3nn can be used to efficiently articulate Tensor Field Networks, 3D Steerable CNNs, Clebsch-Gordan Networks, SE(3) Transformers, and other E(3)-equivariant networks.
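A minimal usage sketch, assuming a recent release of the e3nn library is installed (Irreps, FullyConnectedTensorProduct, and spherical_harmonics are part of e3nn's public o3 API; the hyperparameters below are arbitrary):

```python
import torch
from e3nn import o3

# Irreducible representations: one even scalar (l=0) and one odd vector (l=1).
irreps_in = o3.Irreps("1x0e + 1x1o")
irreps_sh = o3.Irreps.spherical_harmonics(lmax=2)
irreps_out = o3.Irreps("2x0e + 1x1o")

# An equivariant bilinear operation: couple input features with spherical
# harmonics of direction vectors via Clebsch-Gordan tensor products.
tp = o3.FullyConnectedTensorProduct(irreps_in, irreps_sh, irreps_out)

x = irreps_in.randn(10, -1)     # 10 feature vectors
vec = torch.randn(10, 3)        # 10 direction vectors
sh = o3.spherical_harmonics(irreps_sh, vec, normalize=True)
out = tp(x, sh)

# Equivariance check: rotating the inputs rotates the output predictably.
rot = o3.rand_matrix()
out_rot = tp(x @ irreps_in.D_from_matrix(rot).T,
             o3.spherical_harmonics(irreps_sh, vec @ rot.T, normalize=True))
assert torch.allclose(out @ irreps_out.D_from_matrix(rot).T, out_rot, atol=1e-4)
```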
Equivariance has emerged as a desirable property of representations of objects subject to identity-preserving transformations that constitute a group, such as translations and rotations. However, the expressivity of representations constrained by group equivariance is still not fully understood. We address this gap by providing a generalization of Cover's function counting theorem that quantifies the number of linearly separable and group-invariant binary dichotomies that can be assigned to equivariant representations of objects. We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action. We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling. While other operations do not change the fraction of separable dichotomies, local pooling decreases the fraction, despite being a highly nonlinear operation. Finally, we test our theory on intermediate representations of randomly initialized and fully trained convolutional neural networks and find perfect agreement.
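Since the result generalizes Cover's function counting theorem, a small sketch may help; the classical formula below is standard, while the equivariant reading in the final lines is a loose paraphrase of the abstract with made-up numbers, not the paper's exact statement:

```python
from math import comb

def cover_fraction(P: int, N: int) -> float:
    """Fraction of the 2**P dichotomies of P points in general position in
    R^N that are linearly separable (Cover, 1965):
        C(P, N) = 2 * sum_{k=0}^{N-1} binom(P-1, k)."""
    return 2 * sum(comb(P - 1, k) for k in range(N)) / 2 ** P

print(cover_fraction(P=20, N=10))  # 0.5 exactly, since P = 2N

# Loose paraphrase of the paper's finding: for group-invariant readouts of
# equivariant representations, the role of N is played by the dimension of
# the subspace fixed by the group action (made-up value below), so the
# separable fraction drops when that dimension drops.
print(cover_fraction(P=20, N=4))
```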
Equivariance of linear neural network layers is well studied. In this work, we relax the equivariance condition, requiring it to hold only in a projective sense. In particular, we study the relation between projectively and ordinarily equivariant linear layers and show that for important examples the two problems are in fact equivalent. The rotation group in 3D acts projectively on the projective plane. We experimentally study the practical importance of rotation equivariance when designing networks for filtering 2D-2D correspondences. Fully equivariant models perform poorly, and while simply adding invariant features yields improvements over a strong baseline, this does not appear to be due to improved equivariance.
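A toy numerical illustration (not from the paper) of what "acting projectively" means: a 3D rotation moves points of the projective plane given in homogeneous coordinates, which are defined only up to a nonzero scale:

```python
import numpy as np

def normalize(p):
    # Canonical representative of the projective class of p: unit norm,
    # sign fixed by the largest-magnitude coordinate.
    p = p / np.linalg.norm(p)
    return p * np.sign(p[np.argmax(np.abs(p))])

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

p = np.array([2.0, -1.0, 1.0])   # homogeneous coordinates of a plane point
lam = -3.0                       # arbitrary nonzero rescaling
# The rotated point is well defined projectively: rescaling the input
# homogeneous coordinates does not change the resulting projective point.
assert np.allclose(normalize(R @ p), normalize(R @ (lam * p)))
```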
Learning representations through deep generative modeling is a powerful approach for dynamical modeling, as it discovers the most simplified and compressed underlying description of the data, which can then be used for other tasks such as prediction. Most learning tasks have intrinsic symmetries, i.e., the input transformations either leave the output unchanged or cause the output to undergo a similar transformation. The learning process is, however, usually uninformed of these symmetries. Therefore, the learned representations of individually transformed inputs may not be meaningfully related. In this paper, we propose an SO(3)-equivariant deep dynamical model (EqDDM) for motion prediction that learns a structured representation of the input space in the sense that the embedding varies with symmetry transformations. EqDDM is equipped with equivariant networks to parameterize the state-space emission and transition models. We demonstrate the superior predictive performance of the proposed model on various motion data.
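A tiny sketch (hand-rolled toy, not the paper's parameterization) of the constraint an equivariant transition model must satisfy: for a state consisting of vector-valued channels, mixing channels commutes with a global rotation applied to every vector:

```python
import torch

c = 4                                       # number of vector-valued channels
W = torch.randn(c, c)                       # transition weights mixing channels
z = torch.randn(c, 3)                       # state: c vectors in R^3
R = torch.linalg.qr(torch.randn(3, 3)).Q    # a random orthogonal matrix

# Equivariance of the transition: transforming the state before or after
# the transition gives the same result.
assert torch.allclose((W @ z) @ R.T, W @ (z @ R.T), atol=1e-5)
```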
Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and other transformations belonging to an origin-preserving group $G$, such as reflections and rotations. They rely on standard convolutions with $G$-steerable kernels obtained by analytically solving the group-specific equivariance constraint imposed onto the kernel space. As the solution is tailored to a particular group $G$, the implementation of a kernel basis does not generalize to other symmetry transformations, which complicates the development of group equivariant models. We propose using implicit neural representation via multi-layer perceptrons (MLPs) to parameterize $G$-steerable kernels. The resulting framework offers a simple and flexible way to implement Steerable CNNs and generalizes to any group $G$ for which a $G$-equivariant MLP can be built. We apply our method to point cloud (ModelNet-40) and molecular data (QM9) and demonstrate a significant improvement in performance compared to standard Steerable CNNs.
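A minimal sketch of the idea in the simplest special case: with trivial (scalar) input and output representations, the steerability constraint $K(gx) = K(x)$ reduces to invariance, which an MLP can satisfy by consuming only invariant features of $x$. The paper's general construction uses $G$-equivariant MLPs instead; the class below is a hypothetical stand-in:

```python
import torch
import torch.nn as nn

class InvariantKernelMLP(nn.Module):
    """MLP-parameterized kernel for trivial representations: feeding the
    MLP only the rotation-invariant norm of the offset makes the kernel
    rotation-invariant by construction."""
    def __init__(self, c_in: int, c_out: int, hidden: int = 32):
        super().__init__()
        self.c_in, self.c_out = c_in, c_out
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.SiLU(),
            nn.Linear(hidden, c_in * c_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = x.norm(dim=-1, keepdim=True)   # invariant feature of the offset
        return self.mlp(r).view(*x.shape[:-1], self.c_out, self.c_in)

kernel = InvariantKernelMLP(c_in=8, c_out=16)
offsets = torch.randn(100, 3)              # relative point positions
K = kernel(offsets)                        # (100, 16, 8) kernel values
```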
Generative modeling aims to uncover the underlying factors that give rise to observed data, which can often be modeled as the natural symmetries that manifest themselves through invariance and equivariance to certain transformation laws. However, current approaches to representing these symmetries are couched in the formalism of continuous normalizing flows, which require the construction of equivariant vector fields, inhibiting their simple application to conventional high-dimensional generative modeling domains such as natural images. In this paper we focus on building equivariant normalizing flows using discrete layers. We first theoretically prove the existence of an equivariant map for compact groups acting on compact spaces. We further introduce three new equivariant flows, $G$-Residual Flows, $G$-Coupling Flows, and $G$-Inverse Autoregressive Flows, which lift the classical residual, coupling, and inverse autoregressive flows with equivariant maps to a prescribed group $G$. Moreover, in the sense that we prove any $G$-equivariant diffeomorphism can be exactly mapped by a $G$-Residual Flow, our $G$-Residual Flows are also universal. Finally, we complement our theoretical insights with experiments, for the first time, on image datasets such as CIFAR-10, and show that $G$-equivariant finite normalizing flows lead to increased data efficiency, faster convergence, and improved likelihood estimates.
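As a sketch of what a $G$-coupling layer looks like, here is a toy instance for the two-element group acting by a global sign flip (an illustrative construction, not the paper's exact parameterization):

```python
import torch
import torch.nn as nn

class Z2EquivariantCoupling(nn.Module):
    """Coupling layer y1 = x1, y2 = x2 * exp(s(x1)) + t(x1) that is
    equivariant to x -> -x: s must be invariant (even) and t equivariant
    (odd), which we enforce by symmetrizing a single base MLP."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        f_pos, f_neg = self.net(x1), self.net(-x1)
        s = (f_pos + f_neg).chunk(2, dim=-1)[0] / 2   # even part -> invariant
        t = (f_pos - f_neg).chunk(2, dim=-1)[1] / 2   # odd part -> equivariant
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)                       # tractable Jacobian
        return torch.cat([x1, y2], dim=-1), log_det

layer = Z2EquivariantCoupling(dim=6)
x = torch.randn(4, 6)
y, _ = layer(x)
y_flip, _ = layer(-x)
assert torch.allclose(y_flip, -y, atol=1e-6)   # equivariance check
```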
Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) that excel in a wide range of technical applications by explicitly encoding symmetries, such as rotations and permutations, in their architectures. Although the success of G-CNNs is driven by their \emph{explicit} symmetry bias, a recent line of work has proposed that the \emph{implicit} bias of training algorithms on particular architectures is key to understanding generalization in overparameterized neural nets. In this context, we show that $L$-layer full-width linear G-CNNs trained via gradient descent for binary classification converge to solutions with low-rank Fourier matrix coefficients, regularized by the $2/L$-Schatten matrix norm. Our work strictly generalizes previous analysis of the implicit bias of linear CNNs to linear G-CNNs over all finite groups, including the challenging setting of non-commutative groups (such as permutations), as well as band-limited G-CNNs over infinite groups. We validate our theorems via experiments on a variety of groups and empirically explore more realistic nonlinear networks, which locally capture similar regularization patterns. Finally, we provide an intuitive explanation of our Fourier-space implicit regularization via the uncertainty principle.
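For reference, the Schatten functional appearing in this result is standard (the definition below is textbook material, not quoted from the paper): for a matrix $A$ with singular values $\sigma_i(A)$,

$$\|A\|_{\mathcal{S}_p} = \Big(\sum_i \sigma_i(A)^{p}\Big)^{1/p},$$

which for $p = 2/L < 1$ is a quasi-norm whose minimization promotes low-rank solutions, consistent with the low-rank Fourier matrix coefficients described above.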
Equivariance of neural networks to transformations helps to improve their performance and reduce generalization error in computer vision tasks, as these apply to datasets presenting symmetries (e.g. scalings, rotations, translations). The method of moving frames is classical for deriving operators invariant to the action of a Lie group on a manifold. Recently, a rotation- and translation-equivariant neural network for image data was proposed based on the moving frames approach. In this paper we significantly improve that approach by reducing the computation of moving frames to only one, at the input stage, instead of repeated computations at each layer. The equivariance of the resulting architecture is proved theoretically, and we build a rotation- and translation-equivariant neural network to process volumes, i.e. signals on 3D space. Our trained model outperforms the benchmarks in medical volume classification on most of the tested datasets from MedMNIST3D.
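A toy sketch of the moving-frame idea (a hypothetical PCA-based frame, not the paper's construction): compute a data-dependent frame once from the input and express the signal in it, so that everything downstream is automatically invariant:

```python
import numpy as np

def moving_frame(points):
    """PCA axes of a 3D point cloud, with signs fixed by third moments so
    the frame is well defined; rotating the input rotates the frame the
    same way (assumes nondegenerate covariance and nonzero third moments)."""
    centered = points - points.mean(axis=0)
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ axes.T
    axes = axes * np.sign((proj ** 3).sum(axis=0))[:, None]
    return axes

def canonicalize(points):
    # Coordinates of the cloud in its own moving frame, computed once at
    # the input stage; a downstream network then sees an invariant signal.
    frame = moving_frame(points)
    return (points - points.mean(axis=0)) @ frame.T

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3)) * np.array([3.0, 2.0, 1.0]) + rng.normal(size=3)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.sign(np.linalg.det(q))            # a random rotation
assert np.allclose(canonicalize(pts), canonicalize(pts @ R.T))
```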
We introduce ChebLieNet, a group-equivariant method on (anisotropic) manifolds. Surfing on the success of graph- and group-based neural networks, we take advantage of recent developments in the field of geometric deep learning to derive a new approach for exploiting any anisotropies in data. Via discrete approximations of Lie groups, we develop a graph neural network made of anisotropic convolutional layers (Chebyshev convolutions), spatial pooling and unpooling layers, and global pooling layers. Group equivariance is achieved via equivariant and invariant operators on graphs with anisotropic left-invariant Riemannian distance-based affinities. Thanks to its simple form, the Riemannian metric can model any anisotropy, both in the spatial and orientation domains. This control over the anisotropy of the Riemannian metric allows balancing the equivariance (anisotropic metric) of the graph convolution layers against their invariance (isotropic metric). Hence, we open the door to a better understanding of anisotropic properties. Furthermore, we empirically demonstrate the existence of (data-dependent) sweet spots of the anisotropy parameters on CIFAR10. This crucial result is evidence of the benefit we could obtain by exploiting anisotropic properties in data. We also evaluate the scalability of this approach on STL10 (image data) and ClimateNet (spherical data), showing its remarkable adaptability to diverse tasks.
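The spectral filter at the heart of such Chebyshev convolutions admits a compact sketch (standard Chebyshev graph filtering; the tiny graph below is made up for illustration):

```python
import numpy as np

def chebyshev_conv(L_hat, x, theta):
    """y = sum_k theta[k] * T_k(L_hat) x, with T_k the Chebyshev polynomials
    computed by the recurrence T_k = 2 L_hat T_{k-1} - T_{k-2}, and L_hat a
    graph Laplacian rescaled to have spectrum in [-1, 1]."""
    t_prev, t_curr = x, L_hat @ x
    out = theta[0] * t_prev + (theta[1] * t_curr if len(theta) > 1 else 0.0)
    for k in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2 * L_hat @ t_curr - t_prev
        out = out + theta[k] * t_curr
    return out

# 4-node path graph: the normalized Laplacian has spectrum in [0, 2].
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0.0]])
deg = A.sum(axis=1)
L = np.eye(4) - A / np.sqrt(np.outer(deg, deg))
L_hat = L - np.eye(4)                       # rescale spectrum into [-1, 1]
y = chebyshev_conv(L_hat, np.random.randn(4, 2), theta=np.array([0.5, 0.3, 0.2]))
```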
A common approach to defining convolutions on meshes is to interpret them as graphs and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices, and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs, which generalize GCNs by applying anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
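A toy sketch of parallel-transport message passing (illustrative only; the paper additionally applies anisotropic gauge-equivariant kernels to the transported features, and the angles below are made up):

```python
import numpy as np

def rot2(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def transport_message_passing(features, edges, transport_angles):
    """Average incoming tangent-vector features, rotating each message by
    the transport angle of its edge so that features expressed in
    differently-oriented neighbor frames are compared consistently.

    features:         (n, 2) one tangent vector per vertex
    edges:            (i, j) pairs, message flows j -> i
    transport_angles: dict (i, j) -> rotation from frame j into frame i
    """
    out = np.zeros_like(features)
    count = np.zeros(len(features))
    for i, j in edges:
        out[i] += rot2(transport_angles[(i, j)]) @ features[j]
        count[i] += 1
    return out / np.maximum(count, 1)[:, None]

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = [(0, 1), (0, 2), (1, 0)]
angles = {(0, 1): 0.3, (0, 2): -0.5, (1, 0): -0.3}
print(transport_message_passing(feats, edges, angles))
```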
A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input. Most approaches for equivariance under the Euclidean group $\mathrm{SE}(3)$ of rotations and translations fall into one of two major categories. The first consists of methods that use $\mathrm{SE}(3)$-convolution, which generalizes classical $\mathbb{R}^3$-convolution to signals over $\mathrm{SE}(3)$. Alternatively, it is possible to use \textit{steerable convolution}, which achieves $\mathrm{SE}(3)$-equivariance by imposing constraints on the $\mathbb{R}^3$-convolution of tensor fields. It is known to specialists in the field that the two approaches are equivalent, with steerable convolution being the Fourier transform of $\mathrm{SE}(3)$-convolution. Unfortunately, these results are not widely known, and moreover the exact relations between deep learning architectures built upon these two approaches have not been precisely described in the literature on equivariant deep learning. In this work we provide an in-depth analysis of both methods and their equivalence, and we relate the two constructions to multiview convolutional networks. Furthermore, we provide theoretical justification for the separability of $\mathrm{SE}(3)$ group convolution, which explains the applicability and success of some recent approaches. Finally, we express the different methods using a single coherent formalism and provide explicit formulas that relate the kernels learned by the different methods. In this way, our work helps to unify previously proposed techniques for achieving roto-translational equivariance, and sheds light on both the utility of and the precise differences between the various alternatives. We also derive new TFN nonlinearities from our equivalence principle and test them on practical benchmark datasets.
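For concreteness, the two constructions can be written as follows (standard conventions, stated here for orientation rather than quoted from the paper): $\mathrm{SE}(3)$ group convolution of signals $f$ and $\psi$ on the group is

$$(f \star \psi)(g) \;=\; \int_{\mathrm{SE}(3)} f(h)\, \psi(h^{-1}g)\, \mathrm{d}h,$$

while steerable convolution is an ordinary $\mathbb{R}^3$-convolution of tensor fields whose matrix-valued kernel $\kappa$ satisfies the constraint

$$\kappa(Rx) \;=\; \rho_{\mathrm{out}}(R)\, \kappa(x)\, \rho_{\mathrm{in}}(R)^{-1} \qquad \text{for all } R \in \mathrm{SO}(3),$$

where $\rho_{\mathrm{in}}$ and $\rho_{\mathrm{out}}$ are the representations acting on the input and output feature types.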
Existing equivariant neural networks require prior knowledge of the symmetry group and discretization for continuous groups. We propose to work with Lie algebras (infinitesimal generators) instead of Lie groups. Our model, the Lie algebra convolutional network (L-conv), can automatically discover symmetries and does not require discretization of the group. We show that L-conv can serve as a building block to construct any group-equivariant feedforward architecture. Both CNNs and graph convolutional networks can be expressed as L-conv with appropriate groups. We find direct connections between L-conv and physics: (1) group-invariant loss functions generalize field theory, (2) Euler-Lagrange equations measure robustness, and (3) symmetry leads to conservation laws and Noether currents. These connections open up new avenues for designing more general equivariant networks and applying them to important problems in the physical sciences.
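A toy sketch of the Lie-algebra viewpoint (hand-rolled; the finite-difference generator and the weights below are illustrative, not the paper's implementation): an L-conv-style layer mixes a signal with its Lie derivatives along the infinitesimal generators, here the single generator of 2D rotations:

```python
import numpy as np

def lie_derivative_rotation(f):
    """Finite-difference approximation of the rotation generator acting on
    a 2D image f: (L f)(x, y) = -y df/dx + x df/dy."""
    h, w = f.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs -= (w - 1) / 2          # coordinates centered on the image
    ys -= (h - 1) / 2
    dfdy, dfdx = np.gradient(f)
    return -ys * dfdx + xs * dfdy

# One-generator layer: f -> w0 * f + w1 * (L f), with hypothetical weights.
f = np.random.randn(8, 8)
w0, w1 = 0.9, 0.1
out = w0 * f + w1 * lie_derivative_rotation(f)
```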
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state-of-the-art results on CIFAR-10 and rotated MNIST.
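The first (lifting) layer of such a G-CNN is easy to sketch for the group p4 of translations and 90-degree rotations (a minimal hand-rolled version, assuming PyTorch; real implementations also share weights in the deeper group-convolution layers):

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight):
    """Correlate the input with all four 90-degree rotations of each filter,
    yielding a feature map with an extra group axis of size 4. Rotating the
    input rotates the maps and cyclically shifts that axis.

    x: (batch, c_in, H, W), weight: (c_out, c_in, k, k)
    returns: (batch, c_out, 4, H', W')
    """
    outs = [F.conv2d(x, torch.rot90(weight, r, dims=(-2, -1))) for r in range(4)]
    return torch.stack(outs, dim=2)

x = torch.randn(1, 3, 9, 9)
w = torch.randn(8, 3, 3, 3)
y = p4_lifting_conv(x, w)                                  # (1, 8, 4, 7, 7)
# Equivariance check: rotate input <-> rotate maps and cycle the group axis.
y_rot = p4_lifting_conv(torch.rot90(x, 1, dims=(-2, -1)), w)
assert torch.allclose(torch.rot90(y, 1, dims=(-2, -1)).roll(1, dims=2),
                      y_rot, atol=1e-5)
```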
We introduce a unified framework for equivariant networks on homogeneous spaces derived from a Fourier perspective. We consider tensor-valued feature fields before and after a convolutional layer. We present a unified derivation of kernels via the Fourier domain by taking advantage of the sparsity of the Fourier coefficients of the lifted feature fields. The sparsity emerges when the stabilizer subgroup of the homogeneous space is a compact Lie group. We further introduce an activation method that applies an elementwise nonlinearity to the regular representation after lifting, and projects back to the field via an equivariant convolution. We show that other methods treating features as Fourier coefficients over the stabilizer subgroup are special cases of our activation. Experiments on $SO(3)$ and $SE(3)$ show state-of-the-art performance in spherical vector field regression, point cloud classification, and molecular completion.
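The regular-representation activation can be sketched in the simplest commutative setting, the circle group (a toy stand-in for the paper's construction on general homogeneous spaces; the bandwidth and grid size below are arbitrary):

```python
import numpy as np

def fourier_relu(coeffs, n_grid=64):
    """Lift Fourier coefficients to samples on the group via inverse FFT,
    apply a pointwise nonlinearity there, and project back to the leading
    Fourier coefficients. Pointwise maps commute with group translations of
    the grid, which is what makes this construction (approximately, up to
    bandlimiting) equivariant."""
    signal = np.fft.ifft(coeffs, n=n_grid) * n_grid        # lift to the group
    activated = np.maximum(signal.real, 0.0)               # pointwise ReLU
    return np.fft.fft(activated)[: len(coeffs)] / n_grid   # project back

coeffs = np.random.randn(8) + 1j * np.random.randn(8)
out = fourier_relu(coeffs)
```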
Equivariant neural networks, whose hidden features transform according to representations of a group G acting on the data, exhibit training efficiency and improved generalization performance. In this work, we extend group-invariant and -equivariant representation learning to the field of unsupervised deep learning. We propose a general learning strategy based on an encoder-decoder framework in which the latent representation is separated into an invariant term and an equivariant group-action component. The key idea is that the network learns to encode and decode data to and from a group-invariant representation by additionally learning to predict the appropriate group action needed to align the input and output poses and thereby solve the reconstruction task. We derive the necessary conditions on the equivariant encoder and present a construction that is valid for any group G, both discrete and continuous. We describe our construction explicitly for rotations, translations, and permutations. We test the validity and robustness of our approach in a variety of experiments on diverse data types, employing different network architectures.
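A hand-crafted toy of the invariant/group-action split for 2D rotations (the paper learns both parts with networks; the features below are made up for illustration):

```python
import torch

def encode(points):
    """Split a 2D point cloud into a rotation-invariant code (sorted radii)
    and a group element (a pose angle read off an equivariant summary)."""
    z_inv = points.norm(dim=-1).sort().values
    mean = points.mean(dim=0)
    theta = torch.atan2(mean[1], mean[0])
    return z_inv, theta

pts = torch.randn(16, 2)
phi = torch.tensor(0.8)
c, s = torch.cos(phi), torch.sin(phi)
R = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

z1, t1 = encode(pts)
z2, t2 = encode(pts @ R.T)
assert torch.allclose(z1, z2, atol=1e-5)        # the code is invariant
assert torch.allclose(torch.remainder(t2 - t1, 2 * torch.pi), phi, atol=1e-5)
```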
While robust to translational perturbations, convolutional neural networks (CNNs) are known to suffer extreme performance degradation when presented at test time with more general inputs. Recently, this limitation has shifted attention from CNNs to capsule networks (CapsNets). However, CapsNets admit relatively few theoretical guarantees of invariance. We introduce a rigorous mathematical framework that permits invariance to any Lie group of warps, exclusively using convolutions (over Lie groups), without the need for capsules. Previous work on group convolutions has been hampered by strong assumptions about the group, which precludes the application of such techniques to common warps in computer vision such as affine and homographic transformations. Our framework enables group convolutions over \emph{any} finite-dimensional Lie group. We empirically validate our approach on a benchmark affine-invariant classification task, where we match the accuracy of conventional CNNs while outperforming state-of-the-art CapsNets, achieving an improvement of roughly $30\%$. As a further illustration of the generality of our framework, we train a well-known model and achieve superior robustness on a well-known dataset on which CapsNet results degrade.
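A toy discretized illustration of a lifted group convolution (sampling a one-parameter rotation subgroup; the paper's framework is more general and covers arbitrary finite-dimensional Lie groups, including affine and homographic warps):

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def lifted_group_correlation(image, filt, angles):
    """Correlate the image with transformed copies of the filter, one
    output channel per sampled group element; this lifts the signal from
    the plane to (a discretization of) the group."""
    return np.stack([
        correlate2d(image, rotate(filt, np.degrees(a), reshape=False),
                    mode='valid')
        for a in angles
    ])

image = np.random.randn(16, 16)
filt = np.random.randn(5, 5)
feats = lifted_group_correlation(image, filt, np.linspace(0, np.pi, 8))
```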
We present a novel machine learning architecture, Bispectral Neural Networks (BNNs), for learning representations of data that are invariant to the actions of a group on the space over which a signal is defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete, that is, it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to discover arbitrary commutative group structure in data, and that trained models learn the irreducible representations of the group, enabling the recovery of the group's Cayley table. Remarkably, trained networks learn the bispectrum of these groups, and thus possess the robustness, completeness, and generality of that analytical object.
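The classical (commutative, translation-group) bispectrum underlying this ansatz is easy to state and verify (standard signal-processing definition, not the paper's learned version):

```python
import numpy as np

def bispectrum(x):
    """Translation bispectrum of a 1D signal:
        B(w1, w2) = F(w1) F(w2) conj(F(w1 + w2)).
    It is invariant to cyclic shifts yet, unlike the power spectrum,
    retains the phase structure of the signal (completeness)."""
    F = np.fft.fft(x)
    n = len(x)
    idx = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(idx[:, None] + idx[None, :]) % n])

x = np.random.randn(16)
assert np.allclose(bispectrum(x), bispectrum(np.roll(x, 5)))  # shift invariant
```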
Convolutional neural networks have been extremely successful in the image recognition domain because they ensure equivariance to translations. There have been many recent attempts to generalize this framework to other domains, including graphs and data lying on manifolds. In this paper we give a rigorous, theoretical treatment of convolution and equivariance in neural networks with respect to not just translations, but the action of any compact group. Our main result is to prove that (given some natural constraints) convolutional structure is not just a sufficient, but also a necessary condition for equivariance to the action of a compact group. Our exposition makes use of concepts from representation theory and noncommutative harmonic analysis and derives new generalized convolution formulae.
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, and which includes many popular methods from equivariant and geometric deep learning. We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.