A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher-level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: a lower-level capsule prefers to send its output to higher-level capsules whose activity vectors have a large scalar product with the prediction coming from the lower-level capsule.
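To make the routing-by-agreement mechanism concrete, here is a minimal numpy sketch of the iterative loop described above; the tensor shapes, the three-iteration default, and the exact form of the squash nonlinearity are illustrative assumptions rather than the paper's reference code.

    import numpy as np

    def squash(s, axis=-1, eps=1e-8):
        # Shrink short vectors toward zero and long ones toward unit length,
        # so that vector length can serve as an existence probability.
        norm2 = np.sum(s * s, axis=axis, keepdims=True)
        return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

    def route(u_hat, n_iters=3):
        # u_hat: predictions of lower-level capsules for higher-level ones,
        # shape (n_lower, n_higher, dim_higher).
        b = np.zeros(u_hat.shape[:2])                # routing logits
        for _ in range(n_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # couplings
            s = (c[..., None] * u_hat).sum(axis=0)   # weighted sum per parent
            v = squash(s)                            # higher-capsule outputs
            b += (u_hat * v[None]).sum(axis=-1)      # agreement: scalar product
        return v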
Capsule networks, which incorporate the paradigms of connectionism and symbolism, have brought fresh insights into artificial intelligence. The capsule, the building block of capsule networks, is a group of neurons represented by a vector that encodes different features of an entity. Information is extracted hierarchically through capsule layers via routing algorithms. Here, we introduce a quantum capsule network (dubbed QCapsNet) together with a quantum dynamic routing algorithm. Our model enjoys an exponential speedup in the dynamic routing process and exhibits enhanced representation power. To benchmark the performance of QCapsNet, we carry out extensive numerical simulations on the classification of handwritten digits and symmetry-protected topological phases, and show that QCapsNet achieves state-of-the-art accuracy and noticeably outperforms conventional quantum classifiers. We further unpack the output capsule state and find that a particular subspace may correspond to a human-understandable feature of the input data, indicating the potential explainability of such networks. Our work reveals an intriguing prospect for quantum capsule networks in quantum machine learning, which may provide valuable guidance towards explainable quantum artificial intelligence.
We present a framework for learning disentangled capsules in a CapsNet via an information bottleneck constraint, which distills information into a compact form and encourages the learning of interpretable, factorized capsules. In our β-CapsNet framework, the hyperparameter β is used to trade off disentanglement against the other task objectives, and variational inference is used to convert the information bottleneck term into a KL divergence that is approximated as a constraint on the capsules. For supervised learning, a class-independent mask vector is used to understand the types of variation being synthesized irrespective of the image class, and we carry out extensive quantitative and qualitative experiments, tuning the parameter β to work out the relationship between disentanglement, reconstruction, and classification performance. Furthermore, an unsupervised β-CapsNet and a corresponding dynamic routing algorithm are proposed for learning disentangled capsules in an unsupervised manner; extensive empirical evaluations show that our β-CapsNet achieves state-of-the-art disentanglement performance compared with CapsNet and various baselines on several complex datasets, in both supervised and unsupervised settings.
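As a rough illustration of how the β-weighted bottleneck term can enter the training objective, here is a hedged PyTorch sketch; the Gaussian posterior, the closed-form KL divergence, and all names are assumptions in the spirit of β-VAE-style training, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def beta_capsnet_loss(logits, labels, mu, logvar, beta=4.0):
        task = F.cross_entropy(logits, labels)  # supervised task term
        # KL(q(z|x) || N(0, I)) in closed form, from variational inference
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        return task + beta * kl                 # beta trades the two terms off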
Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.
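A minimal PyTorch sketch of such a module: a small localisation network predicts a 2x3 affine matrix conditioned on the feature map, and the map is then resampled through a differentiable grid sampler. The localisation-network sizes are illustrative assumptions; initialising the transform to the identity is a common practical choice, not a requirement stated above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialTransformer(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.loc = nn.Sequential(
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(channels * 16, 32), nn.ReLU(),
                nn.Linear(32, 6),  # parameters of a 2x3 affine matrix
            )
            # start at the identity transform so training begins neutrally
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)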
The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters, and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the hand-engineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.
Raw point cloud processing with capsule networks is widely adopted for classification, reconstruction, and segmentation because of its ability to preserve the spatial agreement of the input data. However, most existing capsule-based network approaches are computationally heavy and fail at representing the entire point cloud as a single capsule. We address these limitations of existing capsule-network-based approaches by proposing PointCaps, a novel convolutional capsule architecture with parameter sharing. Alongside PointCaps, we propose a novel Euclidean distance routing algorithm and a class-independent latent representation. The latent representation captures physically interpretable geometric parameters of the point cloud, and with dynamic Euclidean routing, PointCaps represents the spatial (point-to-part) relationships of points well. PointCaps has a significantly lower number of parameters and requires significantly fewer FLOPs, while achieving better reconstruction with comparable classification and segmentation accuracy on raw point clouds relative to state-of-the-art capsule networks.
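The abstract does not spell out the Euclidean distance routing update, so the numpy sketch below only illustrates the general idea of scoring agreement by negative Euclidean distance rather than by a dot product; every detail here is an assumption for illustration, not PointCaps itself.

    import numpy as np

    def euclidean_route(u_hat, n_iters=3):
        # u_hat: (n_lower, n_higher, dim) predictions from lower capsules
        b = np.zeros(u_hat.shape[:2])                # routing logits
        for _ in range(n_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
            v = (c[..., None] * u_hat).sum(axis=0)   # higher-capsule states
            dist = np.linalg.norm(u_hat - v[None], axis=-1)
            b -= dist                                # closer => route more
        return v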
The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth further investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes could be separated in time, the negative passes could be done offline, which would make the learning much simpler in the positive pass and allow video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
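A minimal PyTorch sketch of one layer's objective, assuming the sum-of-squared-activities goodness mentioned above and a logistic loss around a threshold theta; the threshold value and the exact loss shaping are assumptions consistent with, but not quoted from, the abstract.

    import torch.nn.functional as F

    def ff_layer_loss(layer, x_pos, x_neg, theta=2.0):
        g_pos = layer(x_pos).pow(2).sum(dim=1)  # goodness on positive data
        g_neg = layer(x_neg).pow(2).sum(dim=1)  # goodness on negative data
        # push positive goodness above theta and negative goodness below it
        return F.softplus(theta - g_pos).mean() + F.softplus(g_neg - theta).mean()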
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
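For orientation, here is a condensed PyTorch sketch of the described architecture: five convolutional layers, some followed by max-pooling, then three fully connected layers ending in 1000-way logits, with ReLU nonlinearities and dropout. The channel sizes and the 227x227 RGB input it assumes follow the commonly cited AlexNet configuration rather than numbers given in this abstract.

    import torch.nn as nn

    # assumes 227x227 RGB input; the final feature map works out to 6x6
    alexnet_like = nn.Sequential(
        nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
        nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
        nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
        nn.Flatten(),
        nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),  # logits for the final 1000-way softmax
    )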
Spiking neural networks (SNNs) have attracted much attention because of their powerful spatio-temporal information representation ability. Capsule networks (CapsNets) do well at assembling and coupling features at different levels. Here, we propose Spiking CapsNet by introducing capsules into the modelling of spiking neural networks. In addition, we propose a more biologically plausible spike-timing-dependent-plasticity routing mechanism. By fully considering the spatio-temporal relationship between low-level and high-level spiking capsules, the coupling ability between them is further improved. We run validation experiments on the MNIST and FashionMNIST datasets; compared with other strong SNN models, our algorithm still achieves high performance. Our Spiking CapsNet combines the strengths of SNNs and CapsNets and shows strong robustness to noise and affine transformations. Adding different levels of salt-and-pepper and Gaussian noise to the test datasets, the experimental results show that our Spiking CapsNet performs more robustly as the noise grows, where artificial neural networks can no longer classify correctly. Likewise, our Spiking CapsNet shows strong generalization to affine transformations on the affNIST dataset.
From the moment neural networks came to dominate image processing, the computational complexity required to solve the target tasks has skyrocketed: against this unsustainable trend, many strategies have been developed that ambitiously target the preservation of performance. Promoting sparse topologies, for example, allows deep neural network models to be deployed on embedded, resource-constrained devices. Recently, capsule networks were introduced to enhance the explainability of a model, where each capsule is an explicit representation of an object or of one of its parts. These models show encouraging results on toy datasets, but their low scalability prevents deployment on more complex tasks. In this work, we explore sparsity to improve the computational efficiency of capsule networks by reducing the number of capsules. We show how pruning a capsule network achieves high generalization with lower memory requirements, computational effort, and inference and training time.
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
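The mechanics are simple enough to sketch in a few lines of numpy: drop units at random during training, and at test time run the single unthinned network with scaled activations, which is equivalent to the smaller-weights trick described above. The keep probability of 0.5 is an illustrative default.

    import numpy as np

    def dropout_forward(x, p_keep=0.5, train=True, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        if train:
            mask = rng.random(x.shape) < p_keep  # keep each unit w.p. p_keep
            return x * mask                      # one sampled "thinned" network
        return x * p_keep                        # test: scale instead of drop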
The GLOM architecture proposed by Hinton [2021] is a recurrent neural network for parsing an image into a hierarchy of wholes and parts. When a part is ambiguous, GLOM assumes that the ambiguity can be resolved by allowing the part to make multi-modal predictions for the pose and identity of the whole to which it belongs and then using attention to similar predictions coming from other possibly ambiguous parts to settle on a common mode that is predicted by several different parts. In this study, we describe a highly simplified version of GLOM that allows us to assess the effectiveness of this way of dealing with ambiguity. Our results show that, with supervised training, GLOM is able to successfully form islands of very similar embedding vectors for all of the locations occupied by the same object and it is also robust to strong noise injections in the input and to out-of-distribution input transformations.
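As a toy illustration of the attention step described above, the numpy sketch below pulls each location's embedding toward the embeddings of locations that already agree with it, which is what lets islands of near-identical vectors form; the temperature and update rate are invented for illustration, and this is not the paper's actual model.

    import numpy as np

    def attention_update(E, tau=0.1, lr=0.5):
        # E: (n_locations, dim) embeddings at one level of the hierarchy
        sim = E @ E.T                                    # pairwise agreement
        w = np.exp((sim - sim.max(axis=1, keepdims=True)) / tau)
        w /= w.sum(axis=1, keepdims=True)                # attention weights
        return (1 - lr) * E + lr * (w @ E)               # move toward consensus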
Capsule networks have shown tremendous advances over the past decade, performing strongly in various tasks thanks to their equivariance properties. With vector inputs and outputs that carry information about both the magnitude and the orientation of an object, or of one of its parts, there is great potential for using capsule networks in unsupervised learning settings for visual representation tasks such as multi-class image classification. In this paper, we propose the Contrastive Capsule (CoCa) model, a Siamese-style capsule network trained with a contrastive loss using our novel architecture, training, and testing algorithms. We evaluate the model on unsupervised image classification on the CIFAR-10 dataset and obtain a top-1 test accuracy of 70.50% and a top-5 test accuracy of 98.10%. Thanks to our efficient architecture, our model has 31 times fewer parameters and 71 times fewer FLOPs than the current SOTA in both supervised and unsupervised learning.
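For reference, a standard pairwise contrastive loss of the kind a Siamese network minimises is sketched below in PyTorch; whether CoCa uses this margin form or a different contrastive objective is not stated in the abstract, so treat the formula as a generic assumption.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(z1, z2, same, margin=1.0):
        # z1, z2: branch embeddings; same: 1.0 for positive pairs, else 0.0
        d = F.pairwise_distance(z1, z2)
        pos = same * d.pow(2)                         # pull positive pairs together
        neg = (1 - same) * F.relu(margin - d).pow(2)  # push negatives past margin
        return (pos + neg).mean()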
Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called Graph Transformer Networks (GTN), allows such multi-module systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Two systems for on-line handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of Graph Transformer Networks. A Graph Transformer Network for reading bank checks is also described. It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal checks. It is deployed commercially and reads several million checks per day.
Many visual phenomena suggest that humans use top-down generative or reconstructive processes to create visual percepts (e.g., imagery, object completion, pareidolia), but little is known about the role reconstruction plays in robust object recognition. We built an iterative encoder-decoder network that generates an object reconstruction and uses it as top-down attentional feedback, routing the most relevant spatial and feature information to the feedforward object recognition process. We tested this model on the challenging out-of-distribution digit recognition dataset MNIST-C, in which 15 different types of transformation and corruption are applied to handwritten digit images. Our model showed strong generalization performance under these image perturbations, on average outperforming all other models, including feedforward CNNs and adversarially trained networks. Our model is particularly robust to blur, noise, and occlusion corruptions, where shape perception plays an important role. Ablation studies further reveal two complementary roles of spatial and feature-based attention in robust object recognition: the former is largely consistent with the spatial masking benefits reported in the attention literature (the reconstruction serves as a mask), while the latter mainly contributes to the model's inference speed (i.e., the number of time steps needed to reach a given confidence threshold) by reducing the space of possible object hypotheses. We also observed that the model sometimes hallucinates a non-existent pattern out of noise, leading to highly interpretable, human-like errors. Our study shows that modelling reconstruction-based feedback endows AI systems with a powerful attention mechanism, and it can help us understand the role generative perception plays in human visual processing.
Capsule networks (CapsNets) aim to parse images into a hierarchical component structure consisting of objects, parts, and their relations. Despite their potential, they are computationally expensive, which constitutes a major drawback that limits the efficient use of these networks on more complex datasets. Current CapsNet models only compare their performance with capsule baselines and do not perform at the level of deep CNN-based models on complex tasks. This paper proposes an efficient way of learning capsules that detect atomic parts of an input image through a group of SubCapsules, onto which the input vectors are projected. Subsequently, we present the Wasserstein Embedding Module, which first measures the dissimilarity between the input and the components modeled by the SubCapsules, and then finds their degree of alignment based on learned optimal transport. This strategy exploits a new insight: the agreement between the input and the SubCapsules is defined by the similarity between their respective component distributions. Our proposed model (i) is lightweight, allowing capsules to be applied to more complex vision tasks, and (ii) performs better than or on par with CNN-based models on these challenging tasks. Our experimental results show that Wasserstein Embedding Capsules (WECapsules) are more robust to affine transformations, scale effectively to larger datasets, and outperform CNN and CapsNet models on several vision tasks.
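The alignment via learned optimal transport can be illustrated with a generic entropic-OT (Sinkhorn) iteration over a dissimilarity matrix, sketched in numpy below; this is a textbook routine with uniform marginals assumed, not the paper's actual Wasserstein Embedding Module.

    import numpy as np

    def sinkhorn_plan(cost, eps=0.1, n_iters=100):
        # cost: (n, m) dissimilarities between input parts and SubCapsules
        n, m = cost.shape
        a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
        K = np.exp(-cost / eps)                          # Gibbs kernel
        v = np.ones(m)
        for _ in range(n_iters):
            u = a / (K @ v)
            v = b / (K.T @ u)
        return u[:, None] * K * v[None, :]               # transport plan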
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Capsule networks (CapsNets) are an emerging trend in image processing. In contrast to convolutional neural networks, CapsNets are not vulnerable to object deformation, because the relative spatial information of objects is preserved throughout the network. However, their complexity stems mainly from the capsule structure and the dynamic routing mechanism, which make it nearly infeasible to deploy a CapsNet, in its original form, on devices powered by small microcontrollers (MCUs). In an era in which intelligence is rapidly shifting from the cloud to the edge, this high complexity poses a serious challenge to the adoption of CapsNets at the edge. To address this issue, we present an API for executing quantized CapsNets on Arm Cortex-M and RISC-V MCUs. Our software kernels extend Arm CMSIS-NN and RISC-V PULP-NN to support capsule operations with 8-bit integers as operands. Alongside them, we propose a framework for performing post-training quantization of a CapsNet. The results show a reduction in memory footprint of almost 75%, with an accuracy loss ranging from 0.07% to 0.18%. In terms of throughput, our Arm Cortex-M API executes a medium-sized primary-capsule layer and a capsule layer in just 119.94 and 90.60 milliseconds (ms), respectively (STM32H755ZIT6U, Cortex-M7 @ 480 MHz). On the GAP-8 SoC (RISC-V RV32IMCXpulp @ 170 MHz), the latencies drop to 7.02 and 38.03 ms, respectively.
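As a rough sketch of the post-training step such a framework performs, the numpy snippet below applies symmetric per-tensor 8-bit quantization to a weight array; the actual calibration scheme of the toolchain is not described in the abstract, so the max-based scale is an assumption.

    import numpy as np

    def quantize_int8(w):
        scale = max(np.abs(w).max() / 127.0, 1e-12)  # map largest |w| to 127
        q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
        return q, scale                              # w ~ q * scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale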
It is possible to combine multiple latent-variable models of the same data by multiplying their probability distributions together and then renormalizing. This way of combining individual "expert" models makes it hard to generate samples from the combined model but easy to infer the values of the latent variables of each expert, because the combination rule ensures that the latent variables of different experts are conditionally independent when given the data. A product of experts (PoE) is therefore an interesting candidate for a perceptual system in which rapid inference is vital and generation is unnecessary. Training a PoE by maximizing the likelihood of the data is difficult because it is hard even to approximate the derivatives of the renormalization term in the combination rule. Fortunately, a PoE can be trained using a different objective function called "contrastive divergence" whose derivatives with regard to the parameters can be approximated accurately and efficiently. Examples are presented of contrastive divergence learning using several types of expert on several types of data.
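The combination rule described above can be written compactly; for M experts with parameters \theta_m and discrete data, the normalization term Z whose derivatives are hard to approximate is the sum over all possible data vectors:

    p(\mathbf{x}) = \frac{\prod_{m=1}^{M} p_m(\mathbf{x} \mid \theta_m)}{Z},
    \qquad
    Z = \sum_{\mathbf{x}'} \prod_{m=1}^{M} p_m(\mathbf{x}' \mid \theta_m)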
This is an introductory machine learning course specifically developed with STEM students in mind. Our goal is to give the interested reader the basics needed to employ machine learning in their own projects and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural network structures such as dense feed-forward and conventional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions about the interpretability of latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce the basic notions of value functions and policy learning.