This study introduces a new normalization layer termed Batch Layer Normalization (BLN) to reduce the problem of internal covariate shift in deep neural network layers. As a combined version of batch and layer normalization, BLN adaptively puts appropriate weight on mini-batch and feature normalization, based on the inverse size of the mini-batch, to normalize the input to a layer during the learning process. It also performs the exact computation with a minor change at inference time, using either mini-batch statistics or population statistics. The decision process of whether to use mini-batch or population statistics gives BLN the ability to play a comprehensive role in the hyper-parameter optimization of models. A key advantage of BLN is that it is supported by a theoretical analysis of being independent of the input data, and its statistical configuration depends heavily on the task performed, the amount of training data, and the batch size. Test results indicate the application potential of BLN and its faster convergence than batch normalization and layer normalization in both convolutional and recurrent neural networks. The code of the experiments is publicly available online (https://github.com/a2amir/batch-layer-normalization).
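A minimal NumPy sketch of the blending idea described above: normalize with a mix of batch (per-feature) and layer (per-example) statistics, weighting the layer statistics by the inverse mini-batch size. The weighting rule and function name are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def batch_layer_norm(x, eps=1e-5):
    """Hypothetical sketch: blend batch and layer statistics for a
    (batch, features) input, weighting the layer statistics by the
    inverse mini-batch size (assumed; the paper's exact rule may differ)."""
    w = 1.0 / x.shape[0]
    batch_mean, batch_var = x.mean(axis=0), x.var(axis=0)                                # per feature
    layer_mean, layer_var = x.mean(axis=1, keepdims=True), x.var(axis=1, keepdims=True)  # per example
    mean = (1 - w) * batch_mean + w * layer_mean
    var = (1 - w) * batch_var + w * layer_var
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(32, 64)
print(batch_layer_norm(x).shape)  # (32, 64)
```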
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
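A short NumPy sketch of the training-time transform the abstract describes: normalize each feature over the mini-batch, then apply a learned scale and shift (at inference, running averages of the batch statistics are used instead).

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for a (batch, features) input:
    per-feature mini-batch mean/variance, followed by the learned
    scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(128, 16) * 5 + 2
y = batch_norm_train(x, gamma=np.ones(16), beta=np.zeros(16))
print(abs(y.mean(axis=0)).max(), y.std(axis=0).mean())  # ~0 and ~1
```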
Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs). Despite its pervasiveness, the exact reasons for BatchNorm's effectiveness are still poorly understood. The popular belief is that this effectiveness stems from controlling the change of the layers' input distributions during training to reduce the so-called "internal covariate shift". In this work, we demonstrate that such distributional stability of layer inputs has little to do with the success of BatchNorm. Instead, we uncover a more fundamental impact of BatchNorm on the training process: it makes the optimization landscape significantly smoother. This smoothness induces a more predictive and stable behavior of the gradients, allowing for faster training.
In training neural networks, batch normalization has many benefits, not all of them entirely understood. But it also has some drawbacks. Foremost is arguably memory consumption, as computing the batch statistics requires all instances within the batch to be processed simultaneously, whereas without batch normalization it would be possible to process them one by one while accumulating the weight gradients. Another drawback is that the distribution parameters (mean and standard deviation) are unlike all other model parameters in that they are not trained using gradient descent but require special treatment, complicating implementation. In this paper, I show a simple and straightforward way to address these issues. The idea, in short, is to add terms to the loss that, for each activation, cause the minimization of the negative log likelihood of a Gaussian distribution that is used to normalize the activation. Among other benefits, this will hopefully contribute to the democratization of AI research by lowering the hardware requirements for training larger models.
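A hedged sketch of the idea described above: each activation is normalized by a trainable Gaussian (mean and log standard deviation), and that Gaussian's negative log likelihood is added to the loss so the parameters are learned by ordinary gradient descent rather than computed from batch statistics. Function and parameter names, and the unweighted averaging, are assumptions for illustration.

```python
import numpy as np

def gaussian_nll_norm(a, mu, log_sigma):
    """Normalize activations `a` with trainable Gaussian parameters and
    return (normalized activations, mean NLL term to add to the loss).
    Illustrative only; the paper's exact weighting may differ."""
    sigma = np.exp(log_sigma)
    a_hat = (a - mu) / sigma
    nll = log_sigma + 0.5 * a_hat ** 2 + 0.5 * np.log(2 * np.pi)
    return a_hat, nll.mean()

a = np.random.randn(8, 4) * 3.0 + 1.0          # toy activations
a_hat, extra_loss = gaussian_nll_norm(a, mu=np.zeros(4), log_sigma=np.zeros(4))
print(a_hat.shape, float(extra_loss))
```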
Deep learning uses neural networks parameterized by their weights. The networks are usually trained by tuning the weights to directly minimize a given loss function. In this paper, we propose to re-parameterize the weights into targets for the firing strengths of the individual nodes in the network. Given a set of targets, it is possible to calculate the weights that make the firing strengths best meet those targets. It is argued that using targets for training addresses the problem of exploding gradients, through a process we call cascade untangling, and makes the loss-function surface smoother, therefore leading to easier and faster training, as well as potentially better generalization, of the neural network. It also allows for easier learning of deeper and recurrent network structures. The necessary conversion of targets into weights comes at an extra computational expense, which is in many cases manageable. Learning in target space can be combined with existing neural-network optimizers for additional gains. Experimental results show the speed of using target space, along with examples of improved generalization, for fully-connected and convolutional networks, and the ability to recall and process long time sequences and perform natural language processing with recurrent networks.
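The core conversion the abstract relies on, computing the weights that best satisfy a set of node targets, can be pictured as a least-squares solve. A hedged toy sketch follows; the shapes and objective are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Toy illustration: given a layer's inputs X and target firing strengths T
# for its nodes, find the weights W that best realize those targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 32))   # layer inputs   (batch, in_features)
T = rng.standard_normal((256, 16))   # node targets   (batch, out_features)

W, *_ = np.linalg.lstsq(X, T, rcond=None)             # least-squares weights
print(np.linalg.norm(X @ W - T) / np.linalg.norm(T))  # relative fit residual
```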
We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.
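The reparameterization itself is compact enough to show directly: each weight vector is written as w = g · v / ||v||, so its length g and its direction v / ||v|| are learned separately. A small NumPy sketch:

```python
import numpy as np

def weight_norm(v, g):
    """w = g * v / ||v|| for a (out_features, in_features) matrix v and a
    per-output length parameter g of shape (out_features,)."""
    norm = np.linalg.norm(v, axis=1, keepdims=True)
    return g[:, None] * v / norm

v = np.random.randn(4, 8)
w = weight_norm(v, g=np.full(4, 2.0))
print(np.linalg.norm(w, axis=1))  # every row has length g = 2
```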
Differential privacy (DP) provides a formal privacy guarantee preventing adversaries with access to a machine learning model from extracting information about individual training points. The most popular DP training method, Differentially Private Stochastic Gradient Descent (DP-SGD), realizes this protection by injecting noise during training. However, previous work has found that DP-SGD often leads to a significant degradation in performance on standard image classification benchmarks. Furthermore, some authors have postulated that DP-SGD inherently performs poorly on large models, since the norm of the noise required to preserve privacy is proportional to the model dimension. In contrast, we demonstrate that DP-SGD on over-parameterized models can perform significantly better than previously thought. Combining careful hyper-parameter tuning with simple techniques to ensure signal propagation and improve the convergence rate, we obtain a new SOTA on CIFAR-10 without extra data of 81.4% under (8, 10^{-5})-DP using a 40-layer Wide-ResNet, improving over the previous SOTA of 71.7%. When fine-tuning a pre-trained NFNet-F3, we achieve 83.8% top-1 accuracy on ImageNet under (0.5, 8·10^{-7})-DP. In addition, we achieve 86.7% top-1 accuracy under (8, 8·10^{-7})-DP, only 4.3% below the current non-private SOTA. We believe our results are a significant step towards closing the accuracy gap between private and non-private image classification.
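The noise-injection mechanism DP-SGD relies on is simple to sketch: clip each example's gradient, sum, and add Gaussian noise before averaging. The values below are illustrative placeholders, not the paper's hyper-parameters.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD gradient estimate: per-example clipping to `clip_norm`,
    then Gaussian noise with std noise_multiplier * clip_norm."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.random.randn(10) for _ in range(32)]  # toy per-example gradients
print(dp_sgd_step(grads).shape)
```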
Batch Normalization (BN) is an important preprocessing step in many deep learning applications. Since it is a data-dependent process, for some homogeneous datasets it is a redundant or even a performance-degrading process. In this paper, we propose an early-stage feasibility assessment method for estimating the benefits of applying BN on given data batches. The proposed method uses a novel threshold-based approach to classify the training data batches into two sets according to their need for normalization, where that need is decided based on the feature heterogeneity of the considered batch. The proposed approach is a pre-training step and therefore adds no training overhead. The evaluation results show that the proposed approach achieves better performance than traditional BN, mostly at small batch sizes, on the MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets. Additionally, network stability is increased by reducing the occurrence of internal variable transformation.
Predicting fund performance is beneficial to both investors and fund managers, and yet is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance and ensures meaningful comparability across funds. We calculated annualized Sharpe ratios based on the monthly return time series of more than 600 open-end mutual funds investing in listed large-cap equities in the United States. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, both trained with modern Bayesian optimization, provide higher accuracy in forecasting funds' Sharpe ratios than traditional statistical methods. An ensemble method that combines the forecasts of the LSTM and GRU achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance forecasting.
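For context, the target variable is straightforward to compute. A small sketch of the annualized Sharpe ratio from a monthly return series, assuming a zero risk-free rate for simplicity (the paper's exact convention may differ):

```python
import numpy as np

def annualized_sharpe(monthly_returns, risk_free=0.0):
    """Annualized Sharpe ratio: mean excess monthly return over its
    standard deviation, scaled by sqrt(12)."""
    excess = np.asarray(monthly_returns) - risk_free
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

returns = np.random.normal(0.007, 0.04, size=36)  # three years of toy monthly returns
print(round(annualized_sharpe(returns), 3))
```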
We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bit-wise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.
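The usual mechanism behind such networks is easy to sketch: binarize weights with sign() in the forward pass and propagate gradients through the non-differentiable quantizer with a straight-through estimator. This is a generic sketch of that recipe, not necessarily the paper's exact variant.

```python
import numpy as np

def binarize(w):
    """1-bit quantization: sign(w) in {-1, +1}."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_grad(grad_out, w, clip=1.0):
    """Straight-through estimator: pass the upstream gradient through the
    sign() quantizer, zeroed where |w| exceeds `clip`."""
    return grad_out * (np.abs(w) <= clip)

w = np.random.randn(3, 3)
print(binarize(w))
print(ste_grad(np.ones_like(w), w))
```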
Differentially private stochastic gradient descent (DPSGD) is a variation of stochastic gradient descent based on the differential privacy (DP) paradigm, which can mitigate privacy threats arising from the presence of sensitive information in training data. However, one major drawback of training deep neural networks with DPSGD is a reduction in model accuracy. In this paper, we study the effect of normalization layers on the performance of DPSGD. We demonstrate that normalization layers significantly impact the utility of deep neural networks with noisy parameters and should be considered an essential ingredient of training with DPSGD. In particular, we propose a novel method for integrating batch normalization with DPSGD without incurring additional privacy loss. With our approach, we are able to train deeper networks and achieve a better utility-privacy trade-off.
Methods such as Layer Normalization (LN) and Batch Normalization (BN) have proven effective in improving the training of Recurrent Neural Networks (RNNs). However, existing methods normalize using only the instantaneous information at one particular time step, and the result of the normalization is a pre-activation state with a time-independent distribution. This implementation fails to account for certain temporal differences inherent in the inputs and the architecture of RNNs. Since these networks share weights across time steps, it may also be desirable to account for the connections between time steps in the normalization scheme. In this paper, we propose a normalization method called Assorted-Time Normalization (ATN), which preserves information from multiple consecutive time steps and normalizes using them. This setup allows us to introduce longer time dependencies into traditional normalization methods without introducing any new trainable parameters. We present theoretical derivations for the gradient propagation and prove the weight scaling invariance property. Our experiments applying ATN to LN show consistent improvement on various tasks, such as the Adding, Copying, and Denoise problems and language modeling problems.
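A heavily hedged sketch of the general idea: normalize the step-t pre-activation with layer statistics pooled over the last k time steps. The aggregation rule and window size here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def assorted_time_norm(preacts, t, k=3, eps=1e-5):
    """Normalize the time-t pre-activation of shape (batch, hidden) using
    per-example statistics pooled over the last k steps (illustrative)."""
    window = preacts[max(0, t - k + 1): t + 1]        # (k, batch, hidden)
    mu = window.mean(axis=(0, 2), keepdims=True)       # pooled over time and hidden units
    var = window.var(axis=(0, 2), keepdims=True)
    return (preacts[t] - mu[0]) / np.sqrt(var[0] + eps)

preacts = np.random.randn(10, 8, 32)                   # (time, batch, hidden)
print(assorted_time_norm(preacts, t=5).shape)          # (8, 32)
```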
Since sparse neural networks usually contain many zero weights, these unnecessary network connections can potentially be eliminated without degrading network performance. Well-designed sparse neural networks therefore have the potential to significantly reduce FLOPs and computational resources. In this work, we propose a new automatic pruning method: Sparse Connectivity Learning (SCL). Specifically, a weight is re-parameterized as the element-wise multiplication of a trainable weight variable and a binary mask, so that network connectivity is fully described by the binary mask, which is modulated by a unit step function. We theoretically prove the fundamental principle of using a straight-through estimator (STE) for network pruning: the proxy gradient of the STE should be positive, ensuring that the mask variables converge at their minima. After finding that the Leaky ReLU, Softplus, and Identity STEs satisfy this principle, we propose to adopt the Identity STE in SCL for discrete mask relaxation. We also find that the mask gradients of different features are very unbalanced, and hence propose to normalize the mask gradient of each feature to optimize mask variable training. To train sparse masks automatically, we include the total number of network connections as a regularization term in our objective function. As SCL does not require pruning criteria or hyper-parameters defined by designers for network layers, the network is explored in a larger hypothesis space to achieve optimized sparse connectivity for the best performance. SCL overcomes the limitations of existing automatic pruning methods. Experimental results show that SCL can automatically learn and select important network connections for various baseline network structures, and deep learning models trained by SCL outperform SOTA human-designed and automatic pruning methods in sparsity, accuracy, and FLOPs reduction.
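A hedged toy sketch of the reparameterization and the Identity STE described above (per-feature gradient normalization and the connection-count regularizer are omitted):

```python
import numpy as np

def scl_effective_weight(w, s):
    """Effective weight = trainable weight * unit step of the mask variable;
    connections with s <= 0 are pruned."""
    return w * (s > 0).astype(w.dtype)

def identity_ste_mask_grad(grad_effective, w):
    """Identity STE: treat the unit step as identity in the backward pass,
    so the mask-variable gradient is the effective-weight gradient times w."""
    return grad_effective * w

w = np.random.randn(4, 4)
s = np.random.randn(4, 4)       # mask variables; their sign decides connectivity
print(scl_effective_weight(w, s))
print(identity_ste_mask_grad(np.ones_like(w), w))
```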
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
Training deep neural networks is a very demanding task; it is especially challenging to adapt architectures so that trained models perform better. We can find that sometimes shallow networks generalize better than deep networks, and that adding more layers results in higher training and test errors. The deep residual learning framework addresses this degradation problem by adding skip connections to several neural network layers. At first it seems counter-intuitive that such skip connections are needed to train deep networks successfully, as the expressivity of a network grows exponentially with depth. In this paper, we first analyze the flow of information through neural networks. We introduce and evaluate the batch entropy, which quantifies the flow of information through each layer of a neural network. We show empirically and theoretically that a positive batch entropy is required for gradient-descent-based training approaches to successfully optimize a given loss function. Based on these insights, we introduce batch-entropy regularization, which enables gradient-descent-based training algorithms to optimize the flow of information through each hidden layer individually. With batch-entropy regularization, gradient descent optimizers can transform untrainable networks into trainable ones. We show empirically that we can therefore train a "vanilla" fully connected network and convolutional neural network with 500 layers, with no skip connections, batch normalization, dropout, or any other architectural tweak, simply by adding the batch-entropy regularization term to the loss function. The effect of batch-entropy regularization is evaluated not only on vanilla neural networks but also on residual networks, autoencoders, and transformer models across a range of computer vision as well as natural language processing tasks.
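The abstract does not define the batch entropy precisely; as a loosely hedged illustration only, one simple proxy is the average differential entropy of a Gaussian fit to each neuron's activations over the batch, which a regularizer could then keep from collapsing:

```python
import numpy as np

def gaussian_batch_entropy(acts, eps=1e-8):
    """Illustrative proxy (not the paper's definition): mean differential
    entropy 0.5 * log(2*pi*e*var) of per-neuron activations over the batch."""
    var = acts.var(axis=0) + eps
    return float(np.mean(0.5 * np.log(2 * np.pi * np.e * var)))

acts = np.random.randn(256, 64) * 0.01   # nearly collapsed activations -> low entropy
print(round(gaussian_batch_entropy(acts), 3))
```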
Neural networks require careful weight initialization to prevent signals from exploding or vanishing. Existing initialization schemes solve this problem in specific cases by assuming that the network has a certain activation function or topology. It is difficult to derive such weight initialization strategies, and modern architectures therefore often use these same initialization schemes even though their assumptions do not hold. This paper introduces AutoInit, a weight initialization algorithm that automatically adapts to different neural network architectures. By analytically tracking the mean and variance of signals as they propagate through the network, AutoInit appropriately scales the weights at each layer to avoid exploding or vanishing signals. Experiments demonstrate that AutoInit improves performance of convolutional, residual, and transformer networks across a range of activation function, dropout, weight decay, learning rate, and normalizer settings, and does so more reliably than data-dependent initialization methods. This flexibility allows AutoInit to initialize models for everything from small tabular tasks to large datasets such as ImageNet. Such generality turns out particularly useful in neural architecture search and in activation function discovery. In these settings, AutoInit initializes each candidate appropriately, making performance evaluations more accurate. AutoInit thus serves as an automatic configuration tool that makes design of new neural network architectures more robust. The AutoInit package provides a wrapper around TensorFlow models and is available at https://github.com/cognizant-ai-labs/autoinit.
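A simplified sketch of the core mechanism: track the incoming signal's mean and variance analytically and scale each layer's weights so the pre-activation variance stays near 1. The real algorithm also propagates statistics through activation functions, dropout, and other layer types; the names and single-layer scope here are assumptions.

```python
import numpy as np

def autoinit_dense(fan_in, fan_out, in_mean, in_var, rng):
    """Scale zero-mean Gaussian weights so Var(W x) is ~1 for an input with
    the given mean and variance; return W plus the (approximate) outgoing
    mean and variance to propagate to the next layer."""
    second_moment = in_var + in_mean ** 2          # E[x^2]
    std = 1.0 / np.sqrt(fan_in * second_moment)
    W = rng.normal(0.0, std, size=(fan_out, fan_in))
    return W, 0.0, 1.0

rng = np.random.default_rng(0)
x = rng.normal(2.0, 3.0, size=(4096, 64))          # signal with mean 2, variance 9
W, _, _ = autoinit_dense(64, 64, in_mean=2.0, in_var=9.0, rng=rng)
print(round(float((x @ W.T).var()), 2))            # close to 1
```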
Batch normalization (BN) consists of a normalization component followed by an affine transformation and has become essential for training deep neural networks. Standard initialization of each BN in a network sets the affine transformation's scale and shift to 1 and 0, respectively. However, after training we observe that these parameters do not change much from their initialization. Furthermore, we notice that the normalization process can still yield overly large values, which is undesirable for training. We revisit the BN formulation and present a new initialization method and update approach for BN to address the aforementioned issues. Experiments are designed to emphasize and demonstrate the positive influence of proper BN scale initialization on performance, and are evaluated with rigorous statistical significance tests. The method can be used with existing implementations at no additional computational cost. The source code is available at https://github.com/osu-cvl/revisiting-bninit.
Various normalization layers have been proposed to help the training of neural networks. Group Normalization (GN) is one of the effective and attractive approaches, achieving excellent performance on visual recognition tasks. Despite its great success, GN still has several issues that may negatively impact neural network training. In this paper, we introduce an analysis framework and discuss the working principles of GN as it affects the training process of neural networks. From the experimental results, we conclude that the real causes of GN's inferior performance relative to Batch Normalization (BN) are: 1) unstable training performance, and 2) higher sensitivity to distortion, whether it comes from external noise or from perturbations introduced by regularization. In addition, we find that GN only helps neural network training within a certain period, whereas BN helps the network throughout training. To address these issues, we propose a new normalization layer built on top of GN by incorporating the advantages of BN. Experimental results on image classification tasks show that the proposed normalization layer outperforms the official GN, improving recognition accuracy regardless of the batch size and stabilizing network training.
Several image processing tasks, such as image classification and object detection, have been significantly improved using Convolutional Neural Networks (CNNs). Like ResNet and EfficientNet, many architectures have achieved outstanding results in at least one dataset at the time of their creation. A critical factor in training concerns the network's regularization, which prevents the structure from overfitting. This work analyzes several regularization methods developed in the last few years, showing significant improvements for different CNN models. The works are classified into three main areas: the first, called "data augmentation", covers techniques that focus on performing changes to the input data. The second, named "internal changes", describes procedures that modify the feature maps generated by the neural network or the kernels. The last one, called "label", concerns transforming the labels of a given input. This work presents two main differences compared to other available surveys on regularization: (i) the papers gathered in the manuscript are no more than five years old, and (ii) reproducibility, i.e., all works referred to here have their code available in public repositories or have been directly implemented in frameworks such as TensorFlow or Torch.
Recently, sparse training methods have begun to be established as a de facto approach for training and inference efficiency in artificial neural networks. Yet, this efficiency is only theoretical. In practice, everyone uses a binary mask to simulate sparsity, since typical deep learning software and hardware are optimized for dense matrix operations. In this paper, we take an orthogonal approach and show that we can train truly sparse neural networks to harvest their full potential. To achieve this goal, we introduce three novel contributions specifically designed for sparse neural networks: (1) a parallel training algorithm and its corresponding sparse implementation, (2) an activation function with non-trainable parameters to favor gradient flow, and (3) a hidden-neuron importance metric to eliminate redundancies. All in all, we are able to break the record and train the largest neural network ever trained in terms of representational power, reaching the size of a bat's brain. The results show that our approach achieves state-of-the-art performance while opening the path toward an environmentally friendly artificial intelligence era.
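The contrast the abstract draws, a dense weight matrix times a binary mask versus a truly sparse representation, can be illustrated with a sparse matrix format in which memory and compute scale with the number of connections rather than the layer width. A hedged toy sketch using SciPy (not the paper's implementation):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dense_W = rng.standard_normal((1000, 1000))
mask = rng.random((1000, 1000)) < 0.01        # ~1% of connections kept
W_sparse = sparse.csr_matrix(dense_W * mask)  # store only nonzero connections

x = rng.standard_normal(1000)
y = W_sparse @ x                              # sparse matrix-vector product
print(W_sparse.nnz, y.shape)                  # ~10,000 connections instead of 1,000,000
```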