From a physical perspective, deep neural networks are graphs whose "links" and "vertices" iteratively process data and solve tasks sub-optimally. We use Complex Network Theory (CNT) to represent deep neural networks (DNNs) as directed weighted graphs: within this framework, we introduce metrics to study DNNs as dynamical systems, at a granularity that ranges from individual weights, through neurons, up to entire layers. CNT discriminates networks that differ in their number of parameters and neurons, in the type of hidden layers and activations, and in the objective task. We further show that our metrics discriminate low- from high-performing networks. CNT is a comprehensive method for reasoning about DNNs and a complementary approach to explaining model behavior that is physically grounded in network theory and goes beyond the well-studied input-output relation.
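One of the simplest graph metrics such a framework builds on is node strength. A minimal sketch, assuming a plain feed-forward layout where `weight_matrices[l]` connects layer l to layer l+1 (the paper's full metric set is richer than this):

```python
import numpy as np

def neuron_strengths(weight_matrices):
    """Node strength of every neuron when a feed-forward DNN is viewed as a
    directed weighted graph: the sum of incoming plus outgoing link weights.
    weight_matrices[l] has shape (n_l, n_{l+1}) and connects layer l to l+1."""
    n = len(weight_matrices)
    strengths = []
    for layer in range(n + 1):
        s_in = weight_matrices[layer - 1].sum(axis=0) if layer > 0 else 0.0
        s_out = weight_matrices[layer].sum(axis=1) if layer < n else 0.0
        strengths.append(np.atleast_1d(s_in + s_out))
    return strengths
```

Metrics like these can then be tracked over training to treat the network as a dynamical system.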
The deep learning literature is continuously updated with new architectures and training techniques. However, recent research has overlooked weight initialization, despite some intriguing findings regarding random weights. Meanwhile, recent works have been drawing on network science to understand the structure and dynamics of artificial neural networks (ANNs) after training. Therefore, in this work, we analyze the centrality of neurons in randomly initialized networks. We show that a higher neuronal strength variance may decrease performance, while a lower neuronal strength variance usually improves it. A new method is then proposed to rewire neuronal connections according to a preferential attachment (PA) rule based on their strength, which significantly reduces the strength variance of layers initialized by common methods. In this sense, the rewiring only reorganizes connections, while preserving the magnitude and distribution of the weights. We show through an extensive statistical analysis in image classification that performance improves in most cases, both during training and testing, when using both simple and complex architectures and learning schedules. Our results show that, besides the magnitude, the organization of the weights is also relevant for better initialization.
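The core idea, reorganizing an initialized layer's connections to reduce per-neuron strength variance while keeping the exact multiset of weight values, can be sketched with a simplified greedy reassignment (a stand-in for illustration, not the paper's preferential-attachment rule):

```python
import numpy as np

def rewire_balance_strength(W):
    """Reassign the existing weights of a dense layer so that every output
    neuron ends up with a similar strength (sum of absolute incoming weights),
    while keeping the multiset of weight values -- and hence their magnitude
    and distribution -- unchanged.  A simplified greedy stand-in for
    strength-based rewiring, not the paper's exact PA rule."""
    n_in, n_out = W.shape
    weights = W.ravel()[np.argsort(-np.abs(W).ravel())]  # largest |w| first
    new_W = np.empty_like(W)
    strength = np.zeros(n_out)
    fill = np.zeros(n_out, dtype=int)   # next free incoming slot per neuron
    for w in weights:
        open_cols = np.flatnonzero(fill < n_in)
        j = open_cols[np.argmin(strength[open_cols])]  # weakest neuron gets it
        new_W[fill[j], j] = w
        strength[j] += abs(w)
        fill[j] += 1
    return new_W
```

Because only positions change, any property that depends on the weight distribution alone (mean, variance, histogram) is untouched.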
Recently, sparse training methods have started to be established as a de facto approach for training and inference efficiency in artificial neural networks. Yet, this efficiency is only theoretical. In practice, everyone uses a binary mask to simulate sparsity, since typical deep learning software and hardware are optimized for dense matrix operations. In this paper, we take an orthogonal approach, and we show that we can train truly sparse neural networks to harvest their full potential. To achieve this goal, we introduce three novel contributions, specifically designed for sparse neural networks: (1) a parallel training algorithm and its corresponding sparse implementation, (2) an activation function with non-trainable parameters to favour gradient flow, and (3) a hidden-neuron importance metric to eliminate redundancies. All in all, we are able to break the record and train the largest neural network ever trained in terms of representational power -- reaching the size of a bat's brain. The results show that our approach achieves state-of-the-art performance while opening the path towards an environmentally friendly artificial intelligence era.
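The distinction between masked and truly sparse computation can be made concrete with a compressed-sparse-row (CSR) layer: only the stored nonzeros are ever touched, whereas a binary mask still performs the full dense product. A minimal sketch of the storage idea (not the paper's parallel training algorithm):

```python
import numpy as np

def to_csr(W):
    """Compress a weight matrix to CSR arrays, storing only nonzero entries."""
    data, indices, indptr = [], [], [0]
    for row in W:
        nz = np.flatnonzero(row != 0.0)
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))
    return np.asarray(data), np.asarray(indices, dtype=int), np.asarray(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = W @ x touching only the stored nonzeros: the truly sparse
    computation, as opposed to a dense product with a binary mask."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y
```

At 99% sparsity, the truly sparse product performs roughly 1% of the multiply-accumulates of its masked counterpart.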
Physics-informed neural networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integro-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which a NN must fit observed data while reducing the PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to cover publications on a broader range of collocation-based physics-informed neural networks, which extend from the vanilla PINN to many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient-optimization techniques, neural-network structures, and loss-function structures. Despite the wide range of applications for which PINNs have been used, and although they have demonstrated their feasibility in some contexts over classical numerical techniques such as the finite element method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.
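The multi-task structure, a residual term at collocation points plus a data/boundary term, can be seen in a deliberately tiny example. Real PINNs use a neural network and automatic differentiation; this toy replaces both with a one-parameter trial solution for the ODE u'(t) = -u(t), u(0) = 1, so the composite loss stays visible:

```python
import numpy as np

def pinn_style_loss(a, t):
    """Composite PINN-style loss for u'(t) = -u(t), u(0) = 1, using the
    one-parameter trial solution u(t) = exp(a * t).  Toy illustration of
    the residual-plus-boundary objective, not a real PINN."""
    u = np.exp(a * t)
    du = a * np.exp(a * t)                    # derivative of the trial solution
    residual = np.mean((du + u) ** 2)         # enforce u' + u = 0 at collocation points
    boundary = (np.exp(a * 0.0) - 1.0) ** 2   # enforce u(0) = 1
    return residual + boundary

t = np.linspace(0.0, 1.0, 50)                 # collocation points
candidates = np.linspace(-2.0, 0.0, 401)
best = candidates[np.argmin([pinn_style_loss(a, t) for a in candidates])]
```

The loss vanishes at a = -1, recovering the exact solution u(t) = exp(-t); a real PINN minimizes the analogous loss over network weights with a gradient optimizer.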
Deep neural networks (DNNs) have been extensively used and play a significant role in computer vision and autonomous navigation. However, these DNNs are computationally complex, and their deployment on resource-constrained platforms is difficult without additional optimization and customization. In this manuscript, we provide an overview of DNN architectures and propose methods to reduce their computational complexity, in order to accelerate training and inference and to fit the networks onto edge-computing platforms with low computational resources.
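One common complexity-reduction technique in this space (an illustrative choice, not necessarily one of the manuscript's methods) is post-training quantization, which shrinks weights from 32-bit floats to 8-bit integers:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store weights as int8 plus a
    single float scale, cutting memory 4x and enabling integer arithmetic."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the int8 representation."""
    return q.astype(np.float32) * scale
```

The round-trip error per weight is bounded by half the quantization step, which is why accuracy often survives with little or no fine-tuning.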
This is an introductory machine learning course specifically developed for STEM students. Our goal is to provide the interested reader with the basics needed to employ machine learning in their own projects, and to familiarize them with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures, such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions on the interpretability of latent-space representations are discussed, using examples of dreaming and adversarial attacks. The final part is dedicated to reinforcement learning, where we introduce basic notions of value functions and policy learning.
As a powerful modelling method, piecewise linear neural networks (PWLNNs) have proven successful in various fields, and most recently in deep learning. To apply PWLNN methods, both the representation and the learning have long been studied. In 1977, the canonical representation pioneered the works on shallow PWLNNs learned by incremental designs, but applications to large-scale data were prohibitive. In 2010, the rectified linear unit (ReLU) advocated the prevalence of PWLNNs in deep learning. Ever since, PWLNNs have been successfully applied to a wide range of tasks and have achieved advantageous performance. In this Primer, we systematically introduce the methodology of PWLNNs by grouping works into shallow and deep networks. First, different PWLNN representation models are constructed with elaborated examples. With PWLNNs in place, the evolution of learning algorithms for data is presented, and fundamental theoretical analysis follows for an in-depth understanding. Representative applications are then introduced, together with discussions and outlooks.
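The defining property, that a ReLU network is exactly affine inside each region of constant activation pattern, can be verified directly. A small sketch (architecture and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu_net(x):
    """A one-hidden-layer ReLU network: a piecewise linear function of x."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Inside the region where no hidden unit changes sign, the map is exactly affine:
x0 = np.array([0.3, -0.2])
pattern = (W1 @ x0 + b1) > 0          # activation pattern identifies the region
A = W2 @ (W1 * pattern[:, None])      # local linear map of that region
c = W2 @ (b1 * pattern) + b2          # local offset
```

Each distinct activation pattern yields a different (A, c) pair, which is exactly the piecewise linear structure the Primer organizes its representations around.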
Owen and Hoyt recently showed that the effective dimension offers key structural information about the input-output mapping underlying an artificial neural network. Along this line of research, this work proposes an estimation procedure that allows the calculation of the mean dimension from a given dataset, without resampling from external distributions. The design yields total indices when the features are independent, and a variant thereof when the features are correlated. We show that this variant possesses the zero-independence property. Using synthetic datasets, we analyse how the mean dimension evolves layer by layer and how the activation function impacts the magnitude of interactions. We then use the mean dimension to study some of the most widely used convolutional architectures for image recognition (LeNet, ResNet, DenseNet). To account for pixel correlations, we propose calculating the mean dimension after adding an inverse PCA layer, which allows one to work on uncorrelated PCA-transformed features without retraining the neural network. We use the generalized total indices to produce heatmaps for post-hoc explanations, and we employ the mean dimension on the PCA-transformed features for cross-comparisons of artificial neural network structures. The results provide several insights into the differences in interaction magnitude across architectures, as well as indications of how the mean dimension evolves during training.
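For independent inputs, the mean dimension is the sum of the unnormalized total Sobol' indices divided by the output variance; an additive function therefore has mean dimension exactly 1. A minimal sketch with Jansen's pick-and-freeze estimator (standard-normal inputs assumed; the paper's procedure avoids this external resampling):

```python
import numpy as np

def mean_dimension(f, d, n=200_000, rng=None):
    """Estimate the mean dimension of f on d independent N(0,1) inputs:
    sum of unnormalized total Sobol' indices over the variance, each total
    index estimated by Jansen's pick-and-freeze formula."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.normal(size=(n, d))
    Z = rng.normal(size=(n, d))
    fx = f(X)
    total = 0.0
    for i in range(d):
        Xi = X.copy()
        Xi[:, i] = Z[:, i]                       # resample only coordinate i
        total += 0.5 * np.mean((fx - f(Xi)) ** 2)  # unnormalized total index
    return total / fx.var()

# An additive function has no interactions, so its mean dimension is 1:
md = mean_dimension(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

A pure two-way interaction such as x1 * x2 pushes the estimate up to 2, which is how the metric quantifies interaction strength in a trained network.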
Deep belief networks (DBNs) are stochastic neural networks that can extract rich internal representations of the environment from sensory data. DBNs had a catalytic role in triggering the deep learning revolution, demonstrating for the first time the feasibility of unsupervised learning in networks with many layers of hidden neurons. Thanks to their biological and cognitive plausibility, these hierarchical architectures have also been successfully exploited to build computational models of human perception and cognition in a variety of domains. However, learning in DBNs is usually carried out in a greedy, layer-wise fashion, which does not allow simulating the holistic development of cortical circuits. Here we present iDBN, an iterative learning algorithm for DBNs that allows the connection weights to be jointly updated across all layers of the hierarchy. We test our algorithm on two different sets of visual stimuli, and we show that network development can also be tracked in terms of graph-theoretical properties. DBNs trained using our iterative approach achieve a final performance comparable to that of their greedy counterparts, while allowing an accurate analysis of the gradual development of internal representations in the generative model. Our work paves the way for the use of iDBN to model neurocognitive development.
Our theoretical understanding of deep learning has not kept pace with its empirical success. While network architecture is known to be critical, we do not yet understand its effect on learned representations and network behavior, or how this architecture should reflect task structure. In this work, we begin to address this gap by introducing the gated deep linear network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Crucially, because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. Our analysis demonstrates that the learning dynamics in structured networks can be conceptualized as a neural race with an implicit bias, which then governs the model's ability to systematically generalize, multi-task, and transfer. We validate our key insights on naturalistic datasets and with relaxed assumptions. Taken together, our work gives rise to general hypotheses relating neural architecture to learning, and provides a mathematical approach towards understanding the design of more complex architectures and the role of modularity and compositionality in solving real-world problems. Code and results are available at https://www.saxelab.org/gated-dln.
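The claim that gating makes a composition of purely linear maps nonlinear can be illustrated with a minimal caricature (two pathways and a hand-picked, sign-based gate; the framework itself is far more general):

```python
import numpy as np

def gated_deep_linear(x):
    """A two-pathway deep linear network with an input-dependent gate.  Each
    pathway applies only linear maps, yet routing by the gate makes the
    composite computation nonlinear: this hand-picked example computes |x|.
    A minimal caricature of the framework, not its general form."""
    W1 = np.array([[1.0], [1.0]])                   # two parallel linear pathways
    W2 = np.array([[1.0, -1.0]])
    gate = np.array([float(x > 0), float(x <= 0)])  # which pathway is switched on
    h = gate * (W1 @ np.array([x])).ravel()
    return float(W2 @ h)
```

Superposition fails (f(1) + f(-1) differs from f(0)), so the map is genuinely nonlinear even though every active pathway is linear, which is what lets the learning dynamics remain analytically tractable.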
Deep learning takes advantage of large datasets and computationally efficient training algorithms to outperform other approaches at various machine learning tasks. However, imperfections in the training phase of deep neural networks make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. In an application to computer vision, we show that our algorithms can reliably produce samples correctly classified by human subjects but misclassified in specific targets by a DNN with a 97% adversarial success rate while only modifying on average 4.02% of the input features per sample. We then evaluate the vulnerability of different sample classes to adversarial perturbations by defining a hardness measure. Finally, we describe preliminary work outlining defenses against adversarial samples by defining a predictive measure of distance between a benign input and a target classification.
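The idea of crafting perturbations from the input-output mapping can be sketched on a toy model. This is a simplified saliency-style attack on a linear softmax classifier, not the paper's exact algorithm (model, feature count, and step size are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def saliency_attack(x, W, target, k=2, eps=1.0):
    """Perturb the k input features whose saliency (gradient of the target
    class log-probability) is largest in magnitude, pushing a linear softmax
    classifier toward `target`.  A toy caricature of saliency-map attacks."""
    grad = W[target] - softmax(W @ x) @ W   # d log p_target / d x
    idx = np.argsort(-np.abs(grad))[:k]     # most salient features
    x_adv = x.copy()
    x_adv[idx] += eps * np.sign(grad[idx])
    return x_adv
```

Only k features change, mirroring the paper's observation that adversarial samples can succeed while modifying a small fraction of the input.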
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downward neurons of the network. Each of the downward neurons will use their copy of this signal as one of many dendritic inputs, integrate them all and fire an output, if above some threshold. In the artificial neural network, this translates to the fact that the nonlinear filtering of the signal is performed in the upward neuron, meaning that in practice the same activation is shared between all the downward neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model for the biological neuron, where dendrites play an active role: the activation in the output of the upward neuron becomes optional, and instead the signals going through each dendrite undergo independent nonlinear filterings, before the linear combination. We implement this new model into a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit into fully connected and convolutional layers and estimate their FLOPs and weights change. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
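The difference between the two units can be sketched in a few lines of numpy (per-dendrite biases `B` are an assumption of this sketch; the paper's Keras layers carry the authoritative parameterization):

```python
import numpy as np

def standard_unit(W, b, x):
    """Standard neuron: one shared ReLU applied after the weighted sum."""
    return np.maximum(W @ x + b, 0.0)

def dendritic_unit(W, B, x):
    """Active-dendrite variant: every incoming signal w_ij * x_j is filtered
    independently (per-dendrite bias B[i, j]) before the linear combination.
    A sketch of the proposed unit with ReLU as the dendritic nonlinearity."""
    return np.sum(np.maximum(W * x[None, :] + B, 0.0), axis=1)
```

Because each dendrite clips its own signal before summation, the dendritic unit can fire even when the net weighted sum is negative, which is exactly where the two units diverge geometrically.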
Deep neural networks have unlocked a vast range of new applications by solving tasks many of which were previously deemed reserved for higher human intelligence. One of the developments enabling this success was a boost in computing power provided by special-purpose hardware such as graphics or tensor processing units. However, these do not leverage fundamental features of neural networks like parallelism and analog state variables. Instead, they emulate neural networks relying on binary computing, which results in unsustainable energy consumption and comparatively low speed. Fully parallel and analog hardware promises to overcome these challenges, yet the impact of analog neuron noise and its propagation, i.e. accumulation, threatens to render such approaches inept. Here, we determine, for the first time, how noise propagates in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers. We study additive and multiplicative as well as correlated and uncorrelated noise, and develop analytical methods that predict the noise level in any layer of symmetric deep neural networks, or of deep neural networks trained with backpropagation. We find that noise accumulation is generally bounded, and that adding additional network layers does not worsen the signal-to-noise ratio beyond a limit. Most importantly, noise accumulation can be suppressed entirely when the neuron activation functions have a slope smaller than unity. We therefore develop a framework for noise in fully connected deep neural networks implemented in analog systems, and identify criteria that allow engineers to design noise-resilient novel neural network hardware.
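The bounded-accumulation claim can be illustrated with a scalar caricature of the analysis: if every layer scales its input by an activation slope s and injects fresh additive noise of variance σ², the noise variance obeys v_k = s²·v_(k-1) + σ², which converges to σ²/(1 − s²) whenever s < 1 (the scalar recursion is this sketch's simplification of the paper's full layer-wise treatment):

```python
import numpy as np

def noise_variance_per_layer(n_layers, slope, sigma2):
    """Additive-noise variance after each layer when every neuron applies an
    activation of slope `slope` and injects fresh noise of variance sigma2:
    v_k = slope**2 * v_{k-1} + sigma2.  For slope < 1 the accumulation is
    bounded by sigma2 / (1 - slope**2); for slope = 1 it grows linearly."""
    v, out = 0.0, []
    for _ in range(n_layers):
        v = slope ** 2 * v + sigma2
        out.append(v)
    return np.array(out)
```

With slope 0.5 the variance saturates at 4σ²/3 no matter how deep the network; with slope 1 it grows without bound, which is the design criterion for noise-resilient analog hardware in a nutshell.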
Artificial neural networks, having drawn much inspiration from their biological counterparts, have become some of our best machine perception systems. This work summarizes some of that history and incorporates modern theoretical neuroscience into experiments with artificial neural networks from the field of deep learning. Specifically, iterative magnitude pruning is used to train sparsely connected networks with 33x fewer weights without loss in performance. These networks are used to test, and ultimately reject, the hypothesis that weight sparsity alone improves image noise robustness. Recent work has mitigated catastrophic forgetting using weight sparsity, activation sparsity, and active dendrite modeling. This paper replicates those findings and extends the method to train convolutional neural networks on a more challenging continual learning task. The code is publicly available.
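The pruning schedule behind iterative magnitude pruning can be sketched on a bare weight tensor (a sketch of the schedule only; a real IMP run retrains or rewinds the surviving weights between rounds):

```python
import numpy as np

def iterative_magnitude_prune(w, target_sparsity=0.9, n_rounds=5):
    """Remove the smallest-magnitude weights in several rounds until the
    target sparsity is reached, returning the pruned weights and the mask.
    The interleaved retraining of actual IMP is omitted here."""
    mask = np.ones(w.shape, dtype=bool)
    for r in range(1, n_rounds + 1):
        k = int(round(target_sparsity * w.size * r / n_rounds))  # prune gradually
        if k == 0:
            continue
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        mask = np.abs(w) > thresh
    return w * mask, mask
```

At 97% sparsity (roughly the 33x reduction cited above), only the largest-magnitude 3% of weights survive.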
Understanding the capabilities and limitations of different network architectures is of fundamental importance to machine learning. Bayesian inference on Gaussian processes has proven to be a viable approach for studying recurrent and deep networks in the limit of infinite layer width, $n \to \infty$. Here we present a unified and systematic derivation of the mean-field theory for both architectures that starts from first principles by employing established methods from the statistical physics of disordered systems. The theory elucidates that, while the mean-field equations differ with regard to their temporal structure, they yield identical Gaussian kernels when readouts are taken at a single time point or layer, respectively. Bayesian inference applied to classification then predicts identical performance and capabilities for the two architectures. Numerically, we find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks, and that the convergence speed depends on the parameters of the weight prior as well as on the number of time steps or layers, respectively. Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$. The formalism thus paves the way to investigating the fundamental differences between recurrent and deep architectures at finite width $n$.
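For ReLU activations the layer-to-layer kernel map in this infinite-width (Gaussian-process) limit has a well-known arc-cosine closed form. A sketch of one recursion step, with illustrative hyperparameter values (the paper's derivation covers the general setting, not just this special case):

```python
import numpy as np

def relu_gp_layer(K, sigma_w2=2.0, sigma_b2=0.0):
    """One step of the Gaussian-process kernel recursion for an infinitely
    wide ReLU network: K' = sigma_b2 + sigma_w2 * E[relu(u) relu(v)], with
    (u, v) jointly Gaussian with covariance K = (k11, k12, k22).  Uses the
    arc-cosine closed form for the ReLU expectation."""
    k11, k12, k22 = K
    c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
    theta = np.arccos(c)
    ev = np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)
    half = lambda k: sigma_b2 + sigma_w2 * k / 2.0   # uses E[relu(u)^2] = k / 2
    return half(k11), sigma_b2 + sigma_w2 * ev, half(k22)
```

Iterating this map layer by layer (or time step by time step, for the recurrent reading) produces the Gaussian kernels that both architectures share at the readout.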
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
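Among the benchmarking metrics such evaluations rest on, multiply-accumulate (MAC) and parameter counts are the most basic. A small sketch for a 2-D convolution (standard formulas; layer shapes below are illustrative):

```python
def conv2d_cost(h, w, c_in, c_out, k, stride=1, pad=0):
    """Multiply-accumulate (MAC) count and parameter count of a 2-D
    convolutional layer -- basic metrics for comparing DNN workloads
    across hardware designs."""
    h_out = (h + 2 * pad - k) // stride + 1
    w_out = (w + 2 * pad - k) // stride + 1
    macs = h_out * w_out * c_out * c_in * k * k   # one MAC per weight per output pixel
    params = c_out * (c_in * k * k + 1)           # +1 for the bias
    return macs, params
```

Note that MACs alone do not predict energy or latency: data movement often dominates, which is why the survey stresses joint hardware/algorithm evaluation.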
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain, and, when implemented on suitable neuromorphic hardware accelerators, could constitute an energy-efficient alternative to conventional deep neural networks. However, instantiating SNNs that solve complex computational tasks in silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, just as for conventional artificial neural networks (ANNs). Yet, unlike for ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire (LIF) neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus, fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
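The gist of a fluctuation-driven initialization is to scale the weights so that a neuron's summed synaptic input has a prescribed variance regardless of fan-in. A deliberately simplified sketch (zero-mean ±σ weights and Bernoulli spike trains are this sketch's assumptions; the paper derives the full data-dependent solution for LIF dynamics):

```python
import numpy as np

def fluctuation_driven_weights(n_in, p_fire, target_var):
    """Choose the synaptic weight scale so that the summed input to a neuron
    has variance `target_var` regardless of fan-in, keeping the membrane in
    the fluctuation-driven regime.  Simplification: zero-mean +/- sigma
    weights and i.i.d. Bernoulli(p_fire) presynaptic spikes."""
    sigma_w = np.sqrt(target_var / (n_in * p_fire * (1.0 - p_fire)))
    signs = np.where(np.arange(n_in) % 2 == 0, 1.0, -1.0)  # zero-mean weights
    return sigma_w * signs

rng = np.random.default_rng(0)
input_stds = []
for n in (100, 1000):
    w = fluctuation_driven_weights(n, p_fire=0.1, target_var=1.0)
    spikes = (rng.random((40_000, n)) < 0.1).astype(float)  # Bernoulli spike trains
    input_stds.append(float((spikes @ w).std()))
```

The input fluctuation scale stays near the target for both fan-ins, which is what keeps spiking, and hence surrogate gradients, alive at the start of training.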
These notes were compiled as lecture notes for a course developed and taught at the University of Southern California. They should be accessible to a typical engineering graduate student with a strong background in Applied Mathematics. The main objective of these notes is to introduce a student who is familiar with concepts in linear algebra and partial differential equations to select topics in deep learning. These lecture notes exploit the strong connections between deep learning algorithms and the more conventional techniques of computational physics to achieve two goals. First, they use concepts from computational physics to develop an understanding of deep learning algorithms. Not surprisingly, many concepts in deep learning can be connected to similar concepts in computational physics, and one can utilize this connection to better understand these algorithms. Second, several novel deep learning algorithms can be used to solve challenging problems in computational physics. Thus, they offer someone who is interested in modeling physical phenomena a complementary set of tools.