Neural networks are at the pinnacle of artificial intelligence, as in recent years we have witnessed many novel architectures, learning and optimization techniques for deep learning. Exploiting the fact that neural networks inherently constitute multipartite graphs between layers of neurons, we aim to analyze their structure directly in order to extract meaningful information that can improve the learning process. To the best of our knowledge, graph mining techniques have not yet been employed to enhance learning in neural networks. In this paper, we propose an adapted version of the k-core structure for the complete weighted multipartite graph extracted from a deep learning architecture. Since a multipartite graph is a combination of bipartite graphs, which are in turn the incidence graphs of hypergraphs, we design the k-hypercore decomposition, the hypergraph analogue of k-core degeneracy. We apply k-hypercore to several neural network architectures, more specifically to convolutional neural networks and multilayer perceptrons, on image recognition tasks after a very short training. We then use the information provided by the hypercore numbers of the neurons to re-initialize the weights of the neural network, thus biasing the gradient optimization scheme. Extensive experiments show that k-hypercore outperforms state-of-the-art initialization methods.
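A minimal illustrative sketch of the graph-structural viewpoint described above, not the paper's k-hypercore algorithm: it builds an ordinary bipartite graph between two consecutive fully connected layers from a weight matrix, keeps only the strongest connections, and computes standard k-core numbers with networkx as a rough per-neuron structural score. The threshold and layer sizes are arbitrary choices.

```python
# Sketch only: bipartite graph from a weight matrix + ordinary k-core numbers
# as a proxy for the per-neuron structural information the paper exploits.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))          # weights from 64 input to 32 output neurons

G = nx.Graph()
G.add_nodes_from([("in", i) for i in range(W.shape[0])])
G.add_nodes_from([("out", j) for j in range(W.shape[1])])
threshold = np.quantile(np.abs(W), 0.75)   # keep the strongest 25% of connections
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        if abs(W[i, j]) >= threshold:
            G.add_edge(("in", i), ("out", j))

core = nx.core_number(G)               # k-core number of every neuron
# Neurons with a high core number sit in densely connected parts of the layer graph;
# the paper uses an analogous hypergraph quantity to bias weight re-initialization.
print(sorted(core.values(), reverse=True)[:5])
```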
The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. An assumption very commonly made in the field states that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of works and build an exact Edge of Chaos analysis. We also propose a unified view on pre-activations propagation, encompassing the framework of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?
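A small numerical experiment in the spirit of the question studied above, not the paper's construction: propagate Gaussian inputs through a deliberately narrow random tanh network and apply a normality test to one unit's pre-activations at each depth. The width, depth, tanh nonlinearity and Gaussian initialization are assumptions made purely for illustration.

```python
# Illustration: how Gaussian do pre-activations look in a narrow random network?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
width, depth, n_samples = 8, 20, 5000    # deliberately narrow network
h = rng.normal(size=(n_samples, width))

for layer in range(depth):
    W = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
    pre = h @ W                          # pre-activations of this layer
    stat, p = stats.normaltest(pre[:, 0])  # D'Agostino-Pearson test on one unit
    print(f"layer {layer:2d}: normality-test p-value = {p:.3f}")
    h = np.tanh(pre)
```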
The deep learning literature is continuously updated with new architectures and training techniques. However, weight initialization has been overlooked by recent research, despite some findings concerning random weights. On the other hand, recent works have turned to network science to understand the structure and dynamics of artificial neural networks (ANNs) after training. Therefore, in this work we analyze the centrality of neurons in randomly initialized networks. We show that a higher neuron-strength variance may decrease performance, while a lower neuron-strength variance usually improves it. A new method is then proposed to rewire neuron connections according to a preferential attachment (PA) rule based on their strength, which significantly reduces the strength variance of layers initialized by common methods. In this sense, the rewiring only reorganizes the connections while preserving the magnitude and distribution of the weights. We show through an extensive statistical analysis on image classification that performance improves in most cases, both during training and testing, when using both simple and complex architectures and learning schedules. Our results show that, beyond their magnitude, the organization of the weights is also relevant for better initialization.
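A hedged sketch of the quantity the abstract centres on: the strength of a neuron, taken here as the sum of absolute incident weights, and its variance across a layer at initialization. The preferential-attachment rewiring itself is not reproduced; the layer sizes and He-style scale are illustrative assumptions.

```python
# Neuron "strength" (sum of absolute incident weights) and its variance at init.
import numpy as np

def neuron_strengths(W_in, W_out):
    # W_in: weights entering this layer's neurons, shape (fan_in, n)
    # W_out: weights leaving this layer's neurons, shape (n, fan_out)
    return np.abs(W_in).sum(axis=0) + np.abs(W_out).sum(axis=1)

rng = np.random.default_rng(0)
n = 256
W_in = rng.normal(scale=np.sqrt(2.0 / 784), size=(784, n))   # He-style init
W_out = rng.normal(scale=np.sqrt(2.0 / n), size=(n, 10))

s = neuron_strengths(W_in, W_out)
print("strength variance:", s.var())
# The paper's rewiring reorganizes which connections exist (keeping the weight
# values) so that this variance is reduced before training starts.
```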
Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding - and building on other recent work for finding simple network structures - we propose a new architecture that consists solely of convolutional layers and yields competitive or state-of-the-art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.
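A minimal PyTorch sketch of the substitution at the heart of the paper: a max-pooling step replaced by a 3x3 convolution with stride 2, which downsamples while learning its own filters. The channel counts are illustrative and not taken from the paper's exact architecture.

```python
# Classic conv + max-pool block versus the all-convolutional replacement.
import torch
import torch.nn as nn

pooled = nn.Sequential(
    nn.Conv2d(96, 96, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),                   # classic pipeline
)

all_conv = nn.Sequential(
    nn.Conv2d(96, 96, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1),   # pooling replaced by strided conv
    nn.ReLU(),
)

x = torch.randn(1, 96, 32, 32)
print(pooled(x).shape, all_conv(x).shape)   # both downsample 32x32 -> 16x16
```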
The success of neural networks (NNs) in a wide range of applications has led to increased interest in understanding the underlying learning dynamics of these models. In this paper, we go beyond merely describing the learning dynamics by adopting a graph perspective and investigating the relationship between the graph structure of NNs and their performance. Specifically, we propose (1) representing the learning process of a neural network as a time-evolving graph (i.e., a series of static graph snapshots across epochs), (2) capturing the structural changes of the NN during training in simple temporal summaries, and (3) leveraging these structural summaries to predict the accuracy of the underlying NN in classification or regression tasks. For the dynamic graph representation of NNs, we explore structural representations of fully connected and convolutional layers, the key components of powerful NN models. Our analysis shows that simple summaries of graph statistics, such as weighted degree and eigenvector centrality, can be used to accurately predict the performance of NNs. For example, a weighted-degree-based summary constructed from the first 5 training epochs of a LeNet architecture achieves a classification accuracy of over 93%. Our findings hold for different NN architectures, including LeNet, VGG, AlexNet, and ResNet.
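A hedged sketch of the kind of graph summary described above: the weighted degree of each neuron in one fully connected layer, condensed into a few statistics. Collecting such a vector per training epoch and feeding it to a simple predictor is the accuracy-prediction step the abstract refers to; the particular statistics chosen here are an assumption.

```python
# Weighted-degree summary of one fully connected layer, as a per-epoch feature.
import numpy as np

def weighted_degree_summary(W):
    # W: weight matrix of one layer, shape (fan_in, fan_out)
    deg = np.abs(W).sum(axis=0)            # weighted degree of each output neuron
    return np.array([deg.mean(), deg.std(), deg.min(), deg.max()])

rng = np.random.default_rng(0)
print(weighted_degree_summary(rng.normal(size=(784, 256))))
# In the setting described above, one such summary per epoch (for a few early
# epochs) is fed to a simple classifier/regressor that predicts final accuracy.
```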
Neural networks require careful weight initialization to prevent signals from exploding or vanishing. Existing initialization schemes solve this problem in specific cases by assuming that the network has a certain activation function or topology. It is difficult to derive such weight initialization strategies, and modern architectures therefore often use these same initialization schemes even though their assumptions do not hold. This paper introduces AutoInit, a weight initialization algorithm that automatically adapts to different neural network architectures. By analytically tracking the mean and variance of signals as they propagate through the network, AutoInit appropriately scales the weights at each layer to avoid exploding or vanishing signals. Experiments demonstrate that AutoInit improves performance of convolutional, residual, and transformer networks across a range of activation function, dropout, weight decay, learning rate, and normalizer settings, and does so more reliably than data-dependent initialization methods. This flexibility allows AutoInit to initialize models for everything from small tabular tasks to large datasets such as ImageNet. Such generality turns out particularly useful in neural architecture search and in activation function discovery. In these settings, AutoInit initializes each candidate appropriately, making performance evaluations more accurate. AutoInit thus serves as an automatic configuration tool that makes design of new neural network architectures more robust. The AutoInit package provides a wrapper around TensorFlow models and is available at https://github.com/cognizant-ai-labs/autoinit.
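Not the actual AutoInit algorithm, only a minimal illustration of the idea it automates: estimate the output variance a layer would produce under its current initialization (here empirically, assuming unit-variance inputs) and rescale the weights so that the signal neither explodes nor vanishes.

```python
# Illustration of variance-tracking initialization: rescale weights so that the
# layer's outputs have roughly unit variance under unit-variance inputs.
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 512, 256
W = rng.normal(size=(fan_in, fan_out))           # naive N(0, 1) initialization

x = rng.normal(size=(10_000, fan_in))            # assumed unit-variance inputs
out_var = (x @ W).var()
W *= 1.0 / np.sqrt(out_var)                      # rescale so Var(output) ~ 1
print("output variance after rescaling:", (x @ W).var())
```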
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downward neurons of the network. Each of the downward neurons will use their copy of this signal as one of many input dendrites, integrate them all, and fire an output if the result is above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upward neuron, meaning that in practice the same activation is shared between all the downward neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation in the output of the upward neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings, before the linear combination. We implement this new model into a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit into fully connected and convolutional layers and estimate their FLOPs and weights change. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets of up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
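A hedged PyTorch sketch of the idea, not the authors' Keras implementation: instead of summing the terms w_ij * x_i and then applying a single activation, a ReLU is applied to each per-connection term before the summation, so every dendrite performs its own nonlinear filtering. The class name and initialization scale are arbitrary choices.

```python
# A dense layer whose "dendrites" each apply their own ReLU before the sum.
import torch
import torch.nn as nn

class DendriticLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):                        # x: (batch, in_features)
        per_edge = x.unsqueeze(1) * self.weight  # (batch, out_features, in_features)
        return torch.relu(per_edge).sum(dim=-1) + self.bias

layer = DendriticLinear(20, 5)
print(layer(torch.randn(3, 20)).shape)           # torch.Size([3, 5])
```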
This is an introductory machine learning course specifically developed for STEM students. Our goal is to provide interested readers with the basics needed to employ machine learning in their own projects and to familiarize themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural network structures, such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of the latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final section is devoted to reinforcement learning, where we introduce the basic concepts of value functions and policy learning.
Deep learning uses neural networks that are parameterized by their weights. The networks are usually trained by tuning the weights to directly minimize a given loss function. In this paper, we propose to re-parameterize the weights into targets for the firing strengths of the individual nodes in the network. Given a set of targets, one can compute the weights that make the firing strengths best meet those targets. It is argued that training with targets addresses the problem of exploding gradients, through a process we call cascade untangling, and makes the loss-function surface smoother, hence leading to easier and faster training, as well as potentially better generalization, of the neural network. It also allows for easier learning of deeper and recurrent network structures. The necessary conversion of targets into weights comes at an extra computational expense, which is manageable in many cases. Learning in target space can be combined with existing neural network optimizers for additional gains. Experimental results show the speed of using target space, along with examples of improved generalization, for fully connected and convolutional networks, as well as the ability to recall and process long time series and to perform natural language processing with recurrent networks.
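A hedged sketch of the reparameterization described above: treat the layer's firing strengths on a reference batch as the learnable targets T, and recover the weights by least squares so that H W best matches T. The names H and T and the shapes are illustrative assumptions, not the paper's notation.

```python
# Recovering a layer's weights from firing-strength targets via least squares.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(128, 50))          # previous layer's activations on a batch
T = rng.normal(size=(128, 30))          # targets for this layer's firing strengths

W, *_ = np.linalg.lstsq(H, T, rcond=None)   # weights realizing the targets as well as possible
print("relative residual:", np.linalg.norm(H @ W - T) / np.linalg.norm(T))
```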
We propose deeply-supervised nets (DSN), a method that simultaneously minimizes classification error and improves the directness and transparency of the hidden layer learning process. We focus our attention on three aspects of traditional convolutional-neural-network-type (CNN-type) architectures: (1) transparency in the effect intermediate layers have on overall classification; (2) discriminativeness and robustness of learned features, especially in early layers; (3) training effectiveness in the face of "vanishing" gradients. To combat these issues, we introduce "companion" objective functions at each hidden layer, in addition to the overall objective function at the output layer (an integrated strategy distinct from layer-wise pretraining). We also analyze our algorithm using techniques extended from stochastic gradient methods. The advantages provided by our method are evident in our experimental results, showing state-of-the-art performance on MNIST, CIFAR-10, CIFAR-100, and SVHN.
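A minimal sketch of the companion-objective idea: auxiliary classifiers attached to hidden layers contribute their own cross-entropy losses, which are added to the loss at the output layer. The network, the attachment points, and the auxiliary weight of 0.3 are illustrative choices, not the paper's exact setup.

```python
# Deep supervision sketch: hidden layers get companion classifiers and losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedMLP(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, n_classes)
        self.aux1 = nn.Linear(256, n_classes)   # companion classifier for layer 1
        self.aux2 = nn.Linear(128, n_classes)   # companion classifier for layer 2

    def forward(self, x):
        h1 = F.relu(self.fc1(x))
        h2 = F.relu(self.fc2(h1))
        return self.fc3(h2), self.aux1(h1), self.aux2(h2)

def dsn_loss(outputs, y, aux_weight=0.3):
    main, a1, a2 = outputs
    return (F.cross_entropy(main, y)
            + aux_weight * (F.cross_entropy(a1, y) + F.cross_entropy(a2, y)))
```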
This paper presents a new and insightful activation method, termed FPLUS, which exploits a mathematical power function carrying a polar sign in its form. It is inspired by common inverse operations while endowed with an intuitive bionic meaning. The formulation is derived theoretically under conditions of certain prior knowledge and anticipated properties, and its feasibility is then verified through a series of experiments on typical benchmark datasets. The results indicate that our method possesses competitive strength among many activation functions, as well as compatible stability across many CNN architectures. Furthermore, we extend the presented function to a more generalized type, called PFPLUS, with two parameters that can be fixed or learnable, so as to increase its expressive capacity; the same tests validate this improvement.
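The abstract does not spell out the FPLUS formula, so the function below is only an illustrative signed power-function activation in the spirit of "a power function carrying the sign of its input"; it should not be read as the paper's exact definition.

```python
# Illustrative stand-in only: a signed power function, NOT the exact FPLUS formula.
import numpy as np

def signed_power(x, p=0.5):
    return np.sign(x) * (np.abs(x) ** p)

x = np.linspace(-3, 3, 7)
print(signed_power(x))
```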
In order to classify linearly non-separable data, neurons are typically organized into multi-layer neural networks with at least one hidden layer. Inspired by recent neuroscience findings, we propose a new neuron model along with a novel activation function that enable learning nonlinear decision boundaries with a single neuron. We show that a standard neuron followed by the novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy. Furthermore, we conduct experiments on five benchmark datasets from computer vision, signal processing and natural language processing, namely MOROCO, UTKFace, CREMA-D, Fashion-MNIST and Tiny ImageNet, showing that ADA and leaky ADA provide superior results to Rectified Linear Units (ReLU), leaky ReLU, RBF and Swish for various neural network architectures, e.g. multi-layer perceptrons (MLPs) with one or two hidden layers and convolutional neural networks (CNNs) such as LeNet, VGG, ResNet and character-level CNN. We obtain further performance improvements when we replace the standard model of the neuron with our pyramidal neuron with apical dendrite activations (PyNADA). Our code is available at: https://github.com/raduionescu/pynada.
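An illustrative check of the central claim, using a generic bump-shaped non-monotonic activation as a stand-in (this is not the paper's ADA formula): a single unit computing z = x1 + x2 followed by such an activation separates XOR exactly.

```python
# One neuron + a non-monotonic (bump-shaped) activation solves XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.array([1.0, 1.0]), 0.0
z = X @ w + b                         # pre-activation of the single neuron: [0, 1, 1, 2]
bump = np.exp(-(z - 1.0) ** 2)        # stand-in non-monotonic activation, peaked at z = 1
pred = (bump > 0.5).astype(int)
print(pred, (pred == y).all())        # [0 1 1 0] True
```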
How well does a classic deep net architecture like AlexNet or VGG19 classify on a standard dataset such as CIFAR-10 when its "width" - namely, the number of channels in convolutional layers and the number of nodes in fully-connected internal layers - is allowed to increase to infinity? Such questions have come to the forefront in the quest to theoretically understand deep learning and its mysteries about optimization and generalization. They also connect deep learning to notions such as Gaussian processes and kernels. A recent paper [Jacot et al., 2018] introduced the Neural Tangent Kernel (NTK) which captures the behavior of fully-connected deep nets in the infinite width limit trained by gradient descent; this object was implicit in some other recent papers. An attraction of such ideas is that a pure kernel-based method is used to capture the power of a fully-trained deep net of infinite width. The current paper gives the first efficient exact algorithm for computing the extension of NTK to convolutional neural nets, which we call Convolutional NTK (CNTK), as well as an efficient GPU implementation of this algorithm. This results in a significant new benchmark for performance of a pure kernel-based method on CIFAR-10, being 10% higher than the methods reported in [Novak et al., 2019], and only 6% lower than the performance of the corresponding finite deep net architecture (once batch normalization etc. are turned off). Theoretically, we also give the first non-asymptotic proof showing that a fully-trained sufficiently wide net is indeed equivalent to the kernel regression predictor using NTK.
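For reference, the standard definition of the neural tangent kernel and the kernel regression predictor built from it, as introduced by Jacot et al. (2018); this is background notation, not the CNTK algorithm of the paper.

```latex
% Background: the neural tangent kernel and the induced kernel regression predictor.
\[
  \Theta(x, x') \;=\; \Big\langle \frac{\partial f(x;\theta)}{\partial \theta},\;
                                  \frac{\partial f(x';\theta)}{\partial \theta} \Big\rangle ,
  \qquad
  \hat f(x) \;=\; \Theta(x, X)\,\Theta(X, X)^{-1}\, y ,
\]
% where $X$ and $y$ are the training inputs and labels, and $\Theta(X, X)$ is the
% kernel matrix evaluated on the training set.
```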
We propose a simultaneous learning and pruning algorithm capable of identifying and eliminating irrelevant structures in a neural network during the early stages of training. Thus, the computational cost of subsequent training iterations, besides that of inference, is considerably reduced. Our method, based on variational inference principles using Gaussian scale mixture priors on neural network weights, learns the variational posterior distribution of Bernoulli random variables multiplying the units/filters, similarly to adaptive dropout. Our algorithm ensures that the Bernoulli parameters practically converge to either 0 or 1, establishing a deterministic final network. We analytically derive a novel hyper-prior distribution over the prior parameters that is crucial for their optimal selection and leads to consistent pruning levels and prediction accuracy regardless of weight initialization or the size of the starting network. We prove the convergence properties of our algorithm, establishing theoretical and practical pruning conditions. We evaluate the proposed algorithm on the MNIST and CIFAR-10 data sets and the commonly used fully connected and convolutional LeNet and VGG16 architectures. The simulations show that our method achieves pruning levels on par with state-of-the-art methods for structured pruning, while maintaining better test accuracy and, more importantly, doing so in a manner robust with respect to network initialization and initial size.
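A hedged sketch of the gating mechanism described above, not the paper's variational algorithm: each filter is multiplied by a gate whose Bernoulli parameter is learnable, and filters whose parameter collapses towards 0 can be removed. The straight-through trick used here to backpropagate through the sampled gate is a common stand-in, not necessarily the paper's estimator.

```python
# Learnable Bernoulli gates on convolutional filters (illustrative only).
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gate_logit = nn.Parameter(torch.full((out_ch,), 3.0))  # gates start near "on"

    def forward(self, x):
        pi = torch.sigmoid(self.gate_logit)                 # Bernoulli parameter per filter
        gate = torch.bernoulli(pi) if self.training else (pi > 0.5).float()
        gate = gate + pi - pi.detach()                      # straight-through: grads flow to pi
        return self.conv(x) * gate.view(1, -1, 1, 1)

    def prunable_filters(self, eps=1e-3):
        return (torch.sigmoid(self.gate_logit) < eps).nonzero().flatten()
```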
Neural operators have recently become popular tools for designing solution maps between function spaces in the form of neural networks. Differently from classical scientific machine learning approaches, which learn the parameters of a PDE for a single instance of the input parameters at a fixed resolution, neural operators approximate the solution map of a family of PDEs. Despite their success, the uses of neural operators have so far been restricted to relatively shallow neural networks and confined to learning hidden governing laws. In this work, we propose a novel nonlocal neural operator, which we refer to as the nonlocal kernel network (NKN), that is resolution independent, characterized by deep neural networks, and capable of handling a variety of tasks such as learning governing equations and classifying images. Our NKN stems from the interpretation of the neural network as a discrete nonlocal diffusion-reaction equation that, in the limit of infinitely many layers, is equivalent to a parabolic nonlocal equation, whose stability is analyzed via nonlocal vector calculus. The resemblance to neural operators in integral form allows NKNs to capture long-range dependencies in the feature space, while the continuous treatment of node-to-node interactions makes NKNs resolution independent. The resemblance to neural ODEs, reinterpreted in a nonlocal sense, together with the stable network dynamics between layers, allows the generalization of NKN's optimal parameters from shallow to deep networks. This fact enables the use of shallow-to-deep initialization techniques. Our tests show that NKNs outperform baseline methods in both learning governing equations and image classification tasks and generalize well to different resolutions and depths.
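A schematic reading of the layer update described above, with assumed notation rather than the paper's: one discrete step of a nonlocal diffusion-reaction equation acting on the feature field.

```latex
% Schematic nonlocal diffusion-reaction layer update (notation assumed, not the paper's).
\[
  z^{\ell+1}(x) \;=\; z^{\ell}(x) \;+\; \tau \left[
      \int_{\Omega} k_\theta(x, y)\,\big(z^{\ell}(y) - z^{\ell}(x)\big)\, \mathrm{d}y
      \;+\; R_\theta\!\big(z^{\ell}(x)\big) \right],
\]
% where $k_\theta$ is a learned nonlocal kernel, $R_\theta$ a learned reaction term,
% and $\tau$ a step size; in the limit of infinitely many layers this is a
% discretization of a parabolic nonlocal equation.
```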
Training a very deep neural network is a challenging task, as the deeper a neural network is, the more non-linear it is. We compare the performances of various preconditioned Langevin algorithms with their non-Langevin counterparts for the training of neural networks of increasing depth. For shallow neural networks, Langevin algorithms do not lead to any improvement; however, the deeper the network, the greater the gains provided by Langevin algorithms. Adding noise to the gradient descent allows escaping from local traps, which are more frequent for very deep neural networks. Following this heuristic, we introduce a new Langevin algorithm called Layer Langevin, which consists in adding Langevin noise only to the weights associated with the deepest layers. We then prove the benefits of Langevin and Layer Langevin algorithms for the training of popular deep residual architectures for image classification.
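A minimal sketch of the Layer Langevin idea: an ordinary SGD step, with Gaussian noise added only to the parameters of the selected deepest layers. The layer names, learning rate and noise scale are assumed hyperparameters for illustration.

```python
# SGD step with Langevin noise injected only into the chosen (deepest) layers.
import torch

def layer_langevin_step(model, loss, lr=0.1, sigma=1e-3, noisy_layers=("layer3", "layer4")):
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            p -= lr * p.grad                               # plain gradient step
            if any(name.startswith(l) for l in noisy_layers):
                p += sigma * torch.randn_like(p)           # Langevin noise, deep layers only
```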
We introduce a method to train Quantized Neural Networks (QNNs) - neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time, the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
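A hedged sketch of one ingredient of training with extremely low-precision weights: the sign function binarizes the weights in the forward pass while a straight-through estimator lets the gradient reach the underlying real-valued weights. This is a common simplification, not the paper's full quantization scheme (which also quantizes activations and gradients).

```python
# Weight binarization with a straight-through estimator.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)                    # forward: weights in {-1, +1}

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output                      # backward: pass the gradient straight through

w = torch.randn(4, 4, requires_grad=True)
w_bin = BinarizeSTE.apply(w)
w_bin.sum().backward()
print(w.grad)                                   # gradients reach the real-valued weights
```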
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
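A minimal sketch of the mechanism described above: units are dropped at random during training, and at test time the full network is used with activations scaled by the keep probability, which matches the "single unthinned network with smaller weights" approximation.

```python
# Dropout at training time, weight/activation scaling at test time.
import numpy as np

def dropout_forward(h, p_drop=0.5, training=True):
    if training:
        mask = np.random.rand(*h.shape) >= p_drop
        return h * mask                      # sample one "thinned" network
    return h * (1.0 - p_drop)                # test time: scale instead of dropping

h = np.ones((2, 4))
print(dropout_forward(h, training=True))
print(dropout_forward(h, training=False))
```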
An activation function has a significant impact on the efficiency and robustness of neural networks. As an alternative, we developed a novel non-monotonic activation function, the Negative Stimulated Hybrid Activation Function (Nish). It acts as a Rectified Linear Unit (ReLU) for the positive region and as a sinus-sigmoidal function for the negative region. In other words, it incorporates a sigmoid and a sine function, gaining new dynamics over the classical ReLU. We analyzed the consistency of Nish for different combinations of essential networks and the most common activation functions on several of the most popular benchmarks. The experimental results show that the accuracy rates achieved by Nish are slightly better than those of Mish in classification.
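The abstract gives only a verbal description (ReLU-like for positive inputs, a sine-sigmoid combination for negative inputs); the function below is an illustrative stand-in consistent with that description, not necessarily the exact Nish formula.

```python
# Illustrative stand-in only: identity for x >= 0, a sine-of-sigmoid branch for x < 0.
import numpy as np

def nish_like(x):
    sig = 1.0 / (1.0 + np.exp(-x))
    return np.where(x >= 0, x, x * np.sin(sig))   # hypothetical negative branch

print(nish_like(np.array([-2.0, -0.5, 0.0, 1.0, 3.0])))
```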
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
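A hedged structural sketch matching the description above: five convolutional layers, some followed by max-pooling, then three fully connected layers ending in a 1000-way output, with dropout in the fully connected part. Channel sizes follow the common single-GPU variant and are not meant to reproduce the original two-GPU layout exactly.

```python
# AlexNet-like architecture sketch (expects 3x224x224 inputs).
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                      # 1000-way output (softmax applied in the loss)
)
```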