We introduce deep tensor networks, which are exponentially wide neural networks based on a tensor network representation of the weight matrices. We evaluate the proposed method on the image classification (MNIST, FashionMNIST) and sequence prediction (cellular automata) tasks. In the image classification case, deep tensor networks improve upon our matrix product state baselines and achieve a 0.49% error rate on MNIST and an 8.3% error rate on FashionMNIST. In the sequence prediction case, we demonstrate an exponential improvement in the number of parameters over one-layer tensor network methods. In both cases, we discuss non-uniform and uniform tensor network models and show that the latter generalize well to different input sizes.
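As a minimal sketch of the general idea (far simpler than the paper's construction): the most basic tensor-network weight matrix is a Kronecker product W = A ⊗ B, which can be applied to a vector without ever materializing the exponentially large matrix. The sizes here are illustrative only.

```python
import numpy as np

# A 4x4 weight matrix stored as two 2x2 factors: 8 numbers instead of 16.
# For n factors the savings become exponential in n.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))
v = rng.standard_normal(4)

# Direct application of the full matrix.
full = np.kron(A, B) @ v

# Factored application: reshape v into a 2x2 grid, hit each axis with
# its own small factor, then flatten back (row-major convention).
factored = (A @ v.reshape(2, 2) @ B.T).reshape(-1)

assert np.allclose(full, factored)
print(full.shape)  # (4,)
```

The same reshape trick underlies efficient contraction of richer tensor-network layers, where the factors are connected by internal bond indices rather than being a plain Kronecker product.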
translated by 谷歌翻译
Although overfitting and, more generally, double descent are ubiquitous in machine learning, increasing the number of parameters of the most widely used tensor network, the matrix product state (MPS), has generally led to a monotonic improvement of test performance in previous studies. To better understand the generalization properties of architectures parameterized by MPS, we construct artificial data which can be modeled exactly by an MPS and train models with different numbers of parameters. We observe model overfitting for one-dimensional data, but also find that for more complex data overfitting is less significant, while with MNIST image data we do not find any signatures of overfitting. We speculate that the generalization properties of MPS depend on the properties of the data: with one-dimensional data (for which the MPS ansatz is the most suited), MPS is prone to overfitting, while with more complex data, which cannot be fitted by an MPS exactly, overfitting may be much less significant.
The state of a quantum many-body system is defined in a high-dimensional Hilbert space, where rich and complex interactions among subsystems can be modeled. In machine learning, complex multiple multilinear correlations may also exist within input features. In this paper, we present a quantum-inspired multilinear model, named Residual Tensor Train (ResTT), to capture the multiple multilinear correlations of features, from low to high orders, within a single model. ResTT is able to build a robust decision boundary in a high-dimensional space for solving fitting and classification tasks. In particular, we prove that the fully connected layer and the Volterra series can be regarded as special cases of ResTT. Furthermore, we derive a rule for weight initialization that stabilizes the training of ResTT based on a mean-field analysis. We prove that such a rule is much more relaxed than that of the tensor train (TT), which means ResTT can easily address the vanishing and exploding gradient problems that exist in current TT models. Numerical experiments demonstrate that ResTT outperforms state-of-the-art tensor network models and benchmark deep learning models on the MNIST and Fashion-MNIST datasets. Moreover, ResTT outperforms other statistical methods on two practical examples with limited data, which feature complex feature interactions.
Neural operators have recently become popular tools for designing solution maps between function spaces in the form of neural networks. Differently from classical scientific machine learning approaches that learn parameters of a known partial differential equation (PDE) for a single instance of the input parameters at a fixed resolution, neural operators approximate the solution map of a family of PDEs. Despite their success, the uses of neural operators have so far been restricted to relatively shallow neural networks and confined to learning hidden governing laws. In this work, we propose a novel nonlocal neural operator, which we refer to as the nonlocal kernel network (NKN), that is resolution independent, characterized by deep neural networks, and capable of handling a variety of tasks such as learning governing equations and classifying images. Our NKN stems from the interpretation of the neural network as a discrete nonlocal diffusion-reaction equation that, in the limit of infinitely many layers, is equivalent to a parabolic nonlocal equation, whose stability is analyzed via nonlocal vector calculus. The resemblance to neural operators in integral form allows NKNs to capture long-range dependencies in the feature space, while the continuous treatment of node-to-node interactions makes NKNs resolution independent. The resemblance to neural ODEs, reinterpreted in a nonlocal sense, together with the stable network dynamics between layers, allows the optimal parameters of an NKN to generalize from shallow to deep networks. This fact enables the use of shallow-to-deep initialization techniques. Our tests show that NKNs outperform baseline methods in both learning governing equations and image classification tasks, and generalize well to different resolutions and depths.
The choice of activation functions and their motivation is a long-standing issue within the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of presence of features within the stimulus. We derive logit-space operators equivalent to probabilistic Boolean logic-gates AND, OR, and XNOR for independent probabilities. Such theories are important to formalize more complex dendritic operations in real neurons, and these operations can be used as activation functions within a neural network, introducing probabilistic Boolean-logic as the core operation of the neural network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to be directly used within neural networks. Consequently, we construct efficient approximations named $\text{AND}_\text{AIL}$ (the AND operator Approximate for Independent Logits), $\text{OR}_\text{AIL}$, and $\text{XNOR}_\text{AIL}$, which utilize only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, $\text{AND}_\text{AIL}$ and $\text{OR}_\text{AIL}$ are generalizations of ReLU to two-dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in conjunction to demonstrate their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
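The abstract defines the exact logit-space gates but does not spell out the AIL approximations, so the sketch below implements only the exact operators it describes: for independent features with logits x and y, P(A and B) = σ(x)σ(y) and P(A or B) = 1 − (1 − σ(x))(1 − σ(y)), each mapped back to a logit.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def logit(p: float) -> float:
    return math.log(p / (1.0 - p))

def and_exact(x: float, y: float) -> float:
    """Exact logit-space AND: P(A and B) = sigma(x) * sigma(y)."""
    return logit(sigmoid(x) * sigmoid(y))

def or_exact(x: float, y: float) -> float:
    """Exact logit-space OR: P(A or B) = 1 - (1 - pA)(1 - pB)."""
    return logit(1.0 - (1.0 - sigmoid(x)) * (1.0 - sigmoid(y)))

# Two features, each present with probability 0.9: both present with
# probability 0.81, at least one present with probability 0.99.
x = logit(0.9)
print(round(sigmoid(and_exact(x, x)), 2))  # 0.81
print(round(sigmoid(or_exact(x, x)), 2))   # 0.99
```

The exponentials and logarithms above are exactly the cost the paper's AIL approximations are designed to avoid; those replacements use only comparisons and additions, but their precise forms are not given in this abstract.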
In recent times, Variational Quantum Circuits (VQC) have been widely adopted to different tasks in machine learning such as Combinatorial Optimization and Supervised Learning. With the growing interest, it is pertinent to study the boundaries of the classical simulation of VQCs to effectively benchmark the algorithms. Classically simulating VQCs can also provide the quantum algorithms with a better initialization reducing the amount of quantum resources needed to train the algorithm. This manuscript proposes an algorithm that compresses the quantum state within a circuit using a tensor ring representation which allows for the implementation of VQC based algorithms on a classical simulator at a fraction of the usual storage and computational complexity. Using the tensor ring approximation of the input quantum state, we propose a method that applies the parametrized unitary operations while retaining the low-rank structure of the tensor ring corresponding to the transformed quantum state, providing an exponential improvement of storage and computational time in the number of qubits and layers. This approximation is used to implement the tensor ring VQC for the task of supervised learning on Iris and MNIST datasets to demonstrate the comparable performance as that of the implementations from classical simulator using Matrix Product States.
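A hedged illustration of the tensor-ring format itself (not the paper's compression or unitary-update algorithm): each qubit gets a small core G_k of shape (r, 2, r), and an amplitude is the trace of the product of the matrices selected by the bit string.

```python
import numpy as np

def tr_to_full(cores):
    """Contract tensor-ring cores G_k of shape (r, 2, r) into the full
    2**n state vector: psi[i1..in] = trace(G1[:,i1,:] @ ... @ Gn[:,in,:])."""
    n = len(cores)
    r = cores[0].shape[0]
    psi = np.zeros(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        M = np.eye(r)
        for k, b in enumerate(bits):
            M = M @ cores[k][:, b, :]
        psi[idx] = np.trace(M)
    return psi

rng = np.random.default_rng(1)
n, r = 6, 3
cores = [rng.standard_normal((r, 2, r)) for _ in range(n)]
psi = tr_to_full(cores)

# At these toy sizes the ring is not yet smaller (n * 2 * r**2 = 108 numbers
# versus 2**n = 64 amplitudes), but core storage grows linearly in the number
# of qubits while the full state grows exponentially.
print(psi.shape)  # (64,)
```

With bond dimension r = 1 the ring reduces to a product state, which is a convenient sanity check for the contraction.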
Understanding the functional principles of information processing in deep neural networks continues to be a challenge, in particular for networks with trained and thus non-random weights. To address this issue, we study the mapping between probability distributions implemented by a deep feed-forward network. We characterize this mapping as an iterated transformation of distributions, where the non-linearity in each layer transfers information between different orders of correlation functions. This allows us to identify essential statistics in the data, as well as different information representations that can be used by neural networks. Applied to an XOR task and to MNIST, we show that correlations up to second order predominantly capture the information processing in the internal layers, while the input layer also extracts higher-order correlations from the data. This analysis provides a quantitative and explainable perspective on classification.
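The XOR task mentioned in the abstract is the canonical case where second-order statistics miss the signal entirely; the toy computation below (my own illustration, not the paper's analysis) shows the label decorrelated from each input at second order while the third-order correlation is maximal.

```python
import numpy as np

# Inputs uniform on {-1, +1}, label y = x1 * x2 (XOR in the +/-1 encoding).
x1 = np.array([-1, -1, 1, 1])
x2 = np.array([-1, 1, -1, 1])
y = x1 * x2

# Second-order correlations with the label vanish: an analysis restricted
# to second order would call y independent of the inputs.
print(np.mean(x1 * y))       # 0.0
print(np.mean(x2 * y))       # 0.0

# The third-order correlation carries the full signal.
print(np.mean(x1 * x2 * y))  # 1.0
```

This is exactly the situation where a network's input layer must extract higher-order correlations before second-order statistics in later layers can capture the information.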
Differentiable programming is a new programming paradigm which enables large-scale optimization through automatic calculation of gradients, also known as automatic differentiation. This concept emerged from deep learning and has since been generalized to tensor network optimization. Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multi-scale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR). By introducing several gradient-based optimization methods for isometric tensor networks and comparing them with the Evenbly-Vidal method, we show that automatic differentiation has better performance in both stability and accuracy. We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model. We compute the ground-state energy for the 1D quantum model and the internal energy for the classical model, as well as the scaling dimensions of scaling operators, and find that they all agree well with theory.
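A minimal sketch of the gradient-with-isometric-constraint idea, under my own simplifications: take plain gradient steps on an isometry W (satisfying W^T W = I) and map back onto the constraint manifold with a QR retraction after each step. Real isometric tensor-network optimizers are more sophisticated, but the constraint-preserving structure is the same.

```python
import numpy as np

def retract(W):
    """Map a matrix back onto the isometry manifold (W^T W = I) via QR,
    fixing column signs so the retraction is deterministic."""
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(2)
W = retract(rng.standard_normal((6, 3)))  # random 6x3 isometry
A = rng.standard_normal((6, 6))
A = A + A.T                               # a symmetric "Hamiltonian"

# Minimize tr(W^T A W) by gradient descent plus retraction.
for _ in range(500):
    grad = 2 * A @ W                      # Euclidean gradient of the cost
    W = retract(W - 0.05 * grad)

# The isometric constraint survives every step.
assert np.allclose(W.T @ W, np.eye(3), atol=1e-10)
```

Automatic differentiation would replace the hand-written `grad` line; the retraction (or a manifold-aware update) is what keeps the tensors isometric throughout training.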
Signature-based techniques give mathematical insight into the interactions between complex streams of evolving data. These insights can be quite naturally translated into numerical approaches to understanding streamed data, and, perhaps because of their mathematical precision, have proved useful in analyzing streamed data in situations where the data is irregular rather than stationary, and where the dimension of the data and the sample sizes are both moderate. Understanding streamed multi-modal data is exponential in nature: a word in $n$ letters from an alphabet of size $d$ can be any one of $d^n$ messages. Signatures remove the exponential amount of noise created by sampling irregularity, but an exponential amount of information still remains. This survey aims to stay in the domain where that exponential scaling can be managed directly. Scalability issues are an important challenge in many problems, but they would need another survey article and further ideas. This survey describes a range of contexts where the data sets are small enough to rule out massive machine learning, and where a small set of context-free and principled features can be used effectively. The mathematical nature of the tools can make their use intimidating to non-mathematicians. The examples presented in this article aim to bridge this communication gap and provide tractable working examples drawn from the machine learning context. Notebooks are available online for some of these examples. This survey builds on an earlier paper of Ilya Chevyrev and Andrey Kormilitzin, which had broadly similar aims at an earlier point in the development of this machinery. This article illustrates how the theoretical insights offered by signatures are simply realized in the analysis of application data in a way that is largely agnostic to the data type.
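The first two signature levels can be computed directly for a piecewise-linear path, which makes a compact worked example in the spirit of this survey (my own illustration, using Chen's identity to build the signature segment by segment):

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 signature of a piecewise-linear path
    (one row of `path` per time step), assembled with Chen's identity:
    a straight segment with increment dx has S1 = dx, S2 = dx (x) dx / 2."""
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for a, b in zip(path[:-1], path[1:]):
        dx = b - a
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2

# Three sides of a unit square in the plane.
path = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
S1, S2 = signature_level2(path)

# Shuffle identity: the symmetric part of level 2 is fixed by level 1, so the
# genuinely new level-2 information is the antisymmetric part (twice the
# signed "Levy area" swept by the path).
assert np.allclose(S2 + S2.T, np.outer(S1, S1))
print(S2 - S2.T)  # [[0, 2], [-2, 0]] for this path
```

The shuffle identity holding exactly, floating point aside, is a useful correctness check for any signature implementation.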
FIG. 1. Schematic diagram of a Variational Quantum Algorithm (VQA). The inputs to a VQA are: a cost function C(θ), with θ a set of parameters that encodes the solution to the problem, an ansatz whose parameters are trained to minimize the cost, and (possibly) a set of training data {ρ_k} used during the optimization. Here, the cost can often be expressed in the form in Eq. (3), for some set of functions {f_k}. Also, the ansatz is shown as a parameterized quantum circuit (on the left), which is analogous to a neural network (also shown schematically on the right). At each iteration of the loop one uses a quantum computer to efficiently estimate the cost (or its gradients). This information is fed into a classical computer that leverages the power of optimizers to navigate the cost landscape C(θ) and solve the optimization problem in Eq. (1). Once a termination condition is met, the VQA outputs an estimate of the solution to the problem. The form of the output depends on the precise task at hand. The red box indicates some of the most common types of outputs.
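The loop in the figure can be mimicked classically for the simplest possible ansatz: a single qubit rotated by RY(θ) from |0⟩, whose cost C(θ) = ⟨Z⟩ = cos(θ) has a gradient given exactly by the parameter-shift rule. This is a toy stand-in, with the "quantum computer" replaced by a closed-form expectation value.

```python
import math

def cost(theta: float) -> float:
    """<Z> after RY(theta)|0>; on hardware this would be an estimated
    expectation value from repeated measurements."""
    return math.cos(theta)

def parameter_shift_grad(theta: float) -> float:
    """Exact gradient from two cost evaluations (parameter-shift rule):
    dC/dtheta = [C(theta + pi/2) - C(theta - pi/2)] / 2."""
    return 0.5 * (cost(theta + math.pi / 2) - cost(theta - math.pi / 2))

# The hybrid loop: quantum estimation of the cost/gradient, classical update.
theta = 0.3
for _ in range(200):
    theta -= 0.1 * parameter_shift_grad(theta)

print(round(cost(theta), 4))  # -1.0, the minimum of the cost landscape
```

The parameter-shift evaluations are the piece that actually runs on the quantum device in a real VQA; the descent loop is the classical optimizer navigating C(θ).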
Machine learning (ML) has recently facilitated many advances in solving problems related to many-body physical systems. Given the intrinsic quantum nature of these problems, it is natural to speculate that quantum-enhanced machine learning will enable us to unveil even greater details than we currently have. With this motivation, this paper examines a quantum machine learning approach based on shallow variational ansatz inspired by tensor networks for supervised learning tasks. In particular, we first look at the standard image classification tasks using the Fashion-MNIST dataset and study the effect of repeating tensor network layers on ansatz's expressibility and performance. Finally, we use this strategy to tackle the problem of quantum phase recognition for the transverse-field Ising and Heisenberg spin models in one and two dimensions, where we were able to reach $\geq 98\%$ test-set accuracies with both multi-scale entanglement renormalization ansatz (MERA) and tree tensor network (TTN) inspired parametrized quantum circuits.
Future surveys such as the Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory will observe an order of magnitude more astrophysical transient events than any previous survey before. With this deluge of photometric data, it will be impossible for all such events to be classified by humans alone. Recent efforts have sought to leverage machine learning methods to tackle the challenge of astronomical transient classification, with ever improving success. Transformers are a recently developed deep learning architecture, first proposed for natural language processing, that have shown a great deal of recent success. In this work we develop a new transformer architecture, which uses multi-head self attention at its core, for general multi-variate time-series data. Furthermore, the proposed time-series transformer architecture supports the inclusion of an arbitrary number of additional features, while also offering interpretability. We apply the time-series transformer to the task of photometric classification, minimising the reliance of expert domain knowledge for feature selection, while achieving results comparable to state-of-the-art photometric classification methods. We achieve a logarithmic-loss of 0.507 on imbalanced data in a representative setting using data from the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC). Moreover, we achieve a micro-averaged receiver operating characteristic area under curve of 0.98 and micro-averaged precision-recall area under curve of 0.87.
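The multi-head self-attention core named in the abstract can be sketched in a few lines of NumPy; this is the generic mechanism only, not the paper's full time-series transformer (which adds positional handling, extra features, and interpretability machinery). Sizes below are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Multi-head scaled dot-product self-attention over a time series
    X of shape (seq_len, d_model); each weight is (d_model, d_model)."""
    L, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(n_heads):
        q, k, v = (M[:, h * dh:(h + 1) * dh] for M in (Q, K, V))
        att = softmax(q @ k.T / np.sqrt(dh))  # (L, L) attention weights
        outs.append(att @ v)                  # each output mixes all time steps
    return np.concatenate(outs, axis=1)       # back to (L, d_model)

rng = np.random.default_rng(3)
L, d = 5, 8                                   # 5 time steps, 8 channels
X = rng.standard_normal((L, d))
W = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]
out = multi_head_self_attention(X, *W, n_heads=2)
print(out.shape)  # (5, 8)
```

Because every output row attends over the whole sequence, irregularly sampled photometric time series can be handled without imposing a fixed lag structure, which is part of the architecture's appeal for this task.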
It is well known that tensor network regression models operate on an exponentially large feature space, but questions remain as to how effectively they are able to utilize this space. Using the polynomial featurization of Novikov et al., we propose the interaction decomposition as a tool to assess the relative importance of different regressors as a function of their polynomial degree. We apply this decomposition to tensor ring and tree tensor network models trained on the MNIST and Fashion-MNIST datasets, and find that up to 75% of interaction degrees contribute meaningfully to these models. We also introduce a new type of tensor network model that is explicitly trained on only a small subset of interaction degrees, and find that these models are able to match or even outperform the full models using only a fraction of the exponential feature space. This suggests that standard tensor network models utilize their polynomial regressors in an inefficient manner, with the lower degree terms being vastly underutilized.
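The degree structure being decomposed can be seen in a rank-one toy model (my own simplification): with features (a_i + b_i x_i), the regression output f(x) = prod_i (a_i + b_i x_i) expands into components grouped by how many inputs interact, which can be extracted with a bookkeeping variable λ.

```python
import numpy as np

def degree_components(a, b, x):
    """Split f(x) = prod_i (a_i + b_i x_i) into degree components:
    component k sums, over all subsets S with |S| = k, the terms
    prod_{i in S} b_i x_i * prod_{i not in S} a_i. Implemented by
    multiplying polynomials in a bookkeeping variable lambda."""
    poly = np.array([1.0])                    # coefficients, lowest degree first
    for ai, bi, xi in zip(a, b, x):
        poly = np.convolve(poly, [ai, bi * xi])
    return poly

rng = np.random.default_rng(4)
n = 4
a, b, x = (rng.standard_normal(n) for _ in range(3))
comps = degree_components(a, b, x)

# Summing every degree component recovers the full regression output...
assert np.isclose(comps.sum(), np.prod(a + b * x))
# ...and the individual entries show how much each interaction degree adds.
print(len(comps))  # n + 1 components, degrees 0 through 4
```

A genuine tensor network replaces the rank-one product with a contracted sum of many such products, but the decomposition by interaction degree works the same way.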
Despite the frequent use of recurrent neural networks (RNNs) in natural language processing (NLP), the theoretical understanding of RNNs is still limited, owing to the intrinsically complex computations inside them. We perform a systematic analysis of the behavior of RNNs in a ubiquitous NLP task, the sentiment analysis of movie reviews, via a mapping between a class of RNNs called recurrent arithmetic circuits (RACs) and matrix product states (MPS). Using the von Neumann entanglement entropy (EE) as a proxy for information propagation, we show that single-layer RACs possess a maximum information propagation capacity, reflected by the saturation of the EE. Enlarging the bond dimension of an MPS beyond the EE saturation threshold does not increase prediction accuracy, so a minimal model that best estimates the data statistics can be constructed. Although the saturated EE is smaller than the maximum EE allowed by the area law of an MPS, our model achieves ~99% training accuracy on realistic sentiment analysis data sets. Thus, low EE alone is not a warrant against the adoption of single-layer RACs for NLP. Contrary to the common belief that long-range information propagation is the main source of RNNs' expressiveness, we show that single-layer RACs also harness high expressiveness from meaningful word vector embeddings. Our work sheds light on the phenomenology of learning in RACs, and more generally on the explainability of RNNs for NLP, using tools from many-body quantum physics.
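The entanglement entropy used as the diagnostic above is computed from the singular values across a bipartition of the state; a minimal sketch (standard procedure, not the paper's specific pipeline):

```python
import numpy as np

def entanglement_entropy(psi, n_left, n_right):
    """Von Neumann entanglement entropy of a pure state across the cut
    between the first n_left and the remaining n_right qubits."""
    M = psi.reshape(2 ** n_left, 2 ** n_right)
    s = np.linalg.svd(M, compute_uv=False)
    p = s ** 2                       # Schmidt coefficients squared
    p = p[p > 1e-12]                 # drop numerical zeros before the log
    return float(-(p * np.log(p)).sum() + 0.0)  # + 0.0 avoids printing -0.0

# Product state |00>: no entanglement across the cut.
product = np.kron([1.0, 0.0], [1.0, 0.0])
print(entanglement_entropy(product, 1, 1))         # 0.0

# Bell state (|00> + |11>)/sqrt(2): maximal single-qubit entanglement, log 2.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(round(entanglement_entropy(bell, 1, 1), 4))  # 0.6931
```

Saturation of this quantity as the bond dimension grows is what signals, in the paper's analysis, that a larger MPS buys no additional predictive power.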
A density matrix describes the statistical state of a quantum system. It is a powerful formalism that represents both the quantum and the classical uncertainty of quantum systems and expresses different statistical operations, such as measurement, system combination and expectations, as linear algebra operations. This paper explores how density matrices can be used as a building block for machine learning models, exploiting their ability to combine linear algebra and probability in a straightforward way. One of the main results of the paper is to show that density matrices coupled with random Fourier features can approximate arbitrary probability distributions over $\mathbb{R}^n$. Based on this finding, the paper builds different models for density estimation, classification and regression. These models are differentiable, so it is possible to integrate them with other differentiable components, such as deep learning architectures, and to learn their parameters using gradient-based optimization. In addition, the paper presents optimization-less training strategies based on estimation and model averaging. The models are evaluated on benchmark tasks, and the results are reported and discussed.
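A hedged sketch of the density-matrix-plus-random-Fourier-features idea for density estimation, under my own simplified choices of bandwidth, feature count and data: average the feature projectors into a matrix ρ, then score a query point with the quadratic form φ(x)ᵀρφ(x). The paper's actual models add normalization and trainable components not shown here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random Fourier features approximating the Gaussian kernel
# k(x, y) = exp(-gamma * |x - y|^2) via phi(x) . phi(y).
d, D, gamma = 1, 400, 4.0
W = rng.normal(0.0, np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ np.atleast_1d(x) + b)

# "Training": average the rank-one feature projectors into rho.
samples = rng.normal(0.0, 1.0, size=(2000, 1))   # data from a standard normal
Phi = np.array([phi(x) for x in samples])        # (2000, D)
rho = Phi.T @ Phi / len(samples)

def score(x):
    """Unnormalized density estimate: a quadratic form in rho."""
    f = phi(x)
    return float(f @ rho @ f)

# More estimated mass at the mode of N(0, 1) than in its tail.
assert score(0.0) > score(3.0)
```

The quadratic form is, up to the feature approximation, an average kernel evaluation against the training set, which is why it behaves like a kernel density estimate while remaining a pure linear algebra object.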
We introduce version 3 of NetKet, the machine learning toolbox for many-body quantum physics. NetKet is built around neural quantum states and provides efficient algorithms for their evaluation and optimization. This new version is built on top of JAX, a differentiable programming and accelerated linear algebra framework for the Python programming language. The most significant new feature is the possibility of defining arbitrary neural network ansätze in pure Python code using the concise notation of machine learning frameworks, which allows for just-in-time compilation as well as the implicit generation of gradients through automatic differentiation. NetKet 3 also comes with support for GPU and TPU accelerators, advanced support for discrete symmetry groups, chunking to scale to large numbers of degrees of freedom, drivers for quantum dynamics applications, and improved modularity, allowing users to adopt only parts of the toolbox as a foundation for their own code.
Neural networks are powerful function estimators, which has led to their status as the paradigm of choice for modeling structured data. However, unlike other structured representations that emphasize the modularity of the problem, such as factor graphs, neural networks are usually monolithic mappings from inputs to outputs, with a fixed computation order. This limitation prevents them from capturing different directions of computation and interaction between model variables. In this paper, we combine the representational strengths of factor graphs and of neural networks, proposing undirected neural networks (UNNs): a flexible framework for specifying computations that can be performed in any order. For particular choices, our proposed models subsume and extend many existing architectures: feed-forward, recurrent and self-attention networks, auto-encoders, and networks with implicit layers. We demonstrate the effectiveness of undirected neural architectures on a range of tasks: tree-constrained dependency parsing, convolutional image classification, and sequence completion. By varying the computation order, we show how a single UNN can be used both as a classifier and as a prototype generator, and how it can fill in missing parts of an input sequence, making them a promising field for further research.
The recently introduced class of ordinary differential equation networks (ODE-Nets) establishes a fruitful connection between deep learning and dynamical systems. In this work, we reconsider formulations of the weights as continuous-in-depth functions using linear combinations of basis functions, which enables us to leverage parameter transformations such as function projections. In turn, this view allows us to formulate a novel stateful ODE-Block that handles stateful layers. The benefits of this new ODE-Block are twofold: first, it enables the incorporation of meaningful continuous-in-depth batch normalization layers to achieve state-of-the-art performance; second, it enables compressing the weights through a change of basis, without retraining, while maintaining near state-of-the-art performance and reducing both inference time and memory footprint. Performance is demonstrated by applying our stateful ODE-Block to (a) image classification tasks using convolutional units and (b) sentence-tagging tasks using transformer encoder units.
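A hedged sketch of the change-of-basis compression idea (not the paper's exact scheme): treat a per-layer weight as samples of a smooth function of depth, project it onto a Legendre basis, and keep only the leading coefficients. The trajectory below is a made-up smooth function standing in for trained weights.

```python
import numpy as np
from numpy.polynomial import legendre

# 32 "layers", each contributing one scalar weight sample along depth t.
depth = 32
t = np.linspace(-1, 1, depth)
w = np.tanh(2 * t) + 0.1 * t ** 2           # a smooth weight trajectory

# Project onto 6 Legendre basis functions: roughly 5x fewer parameters,
# with no retraining involved.
coeffs = legendre.legfit(t, w, deg=5)
w_compressed = legendre.legval(t, coeffs)

rel_err = np.linalg.norm(w - w_compressed) / np.linalg.norm(w)
print(coeffs.size, round(rel_err, 3))       # 6 coefficients, small error
```

The smoother the learned weight trajectory is as a function of depth, the fewer basis coefficients are needed, which is what makes post-hoc compression without retraining plausible.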
These notes were compiled as lecture notes for a course developed and taught at the University of Southern California. They should be accessible to a typical engineering graduate student with a strong background in Applied Mathematics. The main objective of these notes is to introduce a student who is familiar with concepts in linear algebra and partial differential equations to select topics in deep learning. These lecture notes exploit the strong connections between deep learning algorithms and the more conventional techniques of computational physics to achieve two goals. First, they use concepts from computational physics to develop an understanding of deep learning algorithms. Not surprisingly, many concepts in deep learning can be connected to similar concepts in computational physics, and one can utilize this connection to better understand these algorithms. Second, several novel deep learning algorithms can be used to solve challenging problems in computational physics. Thus, they offer someone who is interested in modeling a physical phenomenon a complementary set of tools.
In many contexts, simpler models are preferable to more complex models, and the control of this model complexity is the goal of many methods in machine learning, such as regularization, hyperparameter tuning and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suited to deep neural networks. Here we develop the notion of geometric complexity, which is a measure of the variability of the model function, computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics, such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization and the choice of parameter initialization, all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
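A minimal sketch of a Dirichlet-energy-style complexity measure, under my own simplifications (scalar model, finite-difference gradients): average the squared norm of the input-gradient of the model over a data set. For a linear model the answer is known in closed form, which makes a clean check.

```python
import numpy as np

def geometric_complexity(f, X, eps=1e-5):
    """Mean squared norm of the input-gradient of scalar model f over
    data X, with gradients taken by central finite differences."""
    total = 0.0
    for x in X:
        g = np.array([
            (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
            for e in np.eye(len(x))
        ])
        total += float(g @ g)
    return total / len(X)

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 3))

# For a linear model f(x) = a . x the gradient is a everywhere, so the
# complexity is exactly ||a||^2, independent of the data.
a = np.array([1.0, -2.0, 0.5])
gc = geometric_complexity(lambda x: a @ x, X)
print(round(gc, 4))  # 5.25 = 1 + 4 + 0.25
```

For a nonlinear model the same quantity varies with the data distribution, which is exactly what lets the training heuristics listed above push it up or down.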