Long Short-Term Memory (LSTM) networks and Transformers are two popular neural architectures for natural language processing tasks. Theoretical results show that both are Turing-complete and can represent any context-free language (CFL). In practice, it is often observed that Transformer models have greater representational power than LSTMs, but the reasons are barely understood. We study the practical differences between LSTMs and Transformers and propose an explanation based on their latent-space decomposition patterns. To achieve this goal, we introduce an oracle training paradigm, which forces a decomposition of the latent representations of the LSTM and the Transformer and supervises them with the transitions of the pushdown automaton (PDA) of the corresponding CFL. With the forced decomposition, we show that the performance upper bounds of LSTMs and Transformers in learning CFLs are close: both can simulate a stack and perform stack operations along with state transitions. However, without the forced decomposition, LSTM models fail to capture the stack and stack operations, while the effect on Transformer models is marginal. Finally, we connect the experiments on the prototypical PDAs to a real-world parsing task to re-verify the conclusions.
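As an illustration of the kind of supervision an oracle training setup can provide, here is a minimal sketch (not the paper's implementation) that derives per-token PDA targets, one state and one stack action per input symbol, from a hand-written pushdown automaton for the CFL {a^n b^n}; the state names and action labels are hypothetical.

```python
# Minimal sketch (not the paper's implementation): per-token oracle targets
# derived from a hand-written PDA for the CFL {a^n b^n}. State names and
# action labels ("q_push", "PUSH(A)", ...) are hypothetical.

def anbn_oracle(s: str):
    """Return one (state, stack_action) supervision target per input symbol."""
    state, stack, targets = "q_push", [], []
    for ch in s:
        if ch == "a" and state == "q_push":
            stack.append("A")                       # push one symbol per 'a'
            targets.append((state, "PUSH(A)"))
        elif ch == "b" and stack:
            state = "q_pop"                         # switch state on the first 'b'
            stack.pop()                             # pop one symbol per 'b'
            targets.append((state, "POP"))
        else:
            raise ValueError(f"not in a^n b^n: {s!r}")
    if stack:
        raise ValueError(f"not in a^n b^n: {s!r}")
    return targets

print(anbn_oracle("aabb"))
# [('q_push', 'PUSH(A)'), ('q_push', 'PUSH(A)'), ('q_pop', 'POP'), ('q_pop', 'POP')]
```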
Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (2200 models, 16 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice. We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs. This includes negative results where even large amounts of data and training time never lead to any non-trivial generalization, despite models having sufficient capacity to fit the training data perfectly. Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
We present a differentiable stack data structure that simultaneously and tractably encodes an exponential number of stack configurations, based on Lang's algorithm for simulating nondeterministic pushdown automata. We call the combination of this data structure with a recurrent neural network (RNN) controller a Nondeterministic Stack RNN. We compare our model against existing stack RNNs on various formal languages, demonstrating that our model converges more reliably to algorithmic behavior on deterministic tasks, and achieves lower cross-entropy on inherently nondeterministic tasks.
Learning hierarchical structures in sequential data -- from simple algorithmic patterns to natural language -- in a reliable, generalizable way remains a challenging problem for neural language models. Past work has shown that recurrent neural networks (RNNs) struggle to generalize on held-out algorithmic or syntactic patterns without supervision or some inductive bias. To remedy this, many papers have explored augmenting RNNs with various differentiable stacks, by analogy with finite automata and pushdown automata (PDAs). In this paper, we improve the performance of our recently proposed Nondeterministic Stack RNN (NS-RNN), which uses a differentiable data structure that simulates a nondeterministic PDA, with two important changes. First, the model now assigns unnormalized positive weights instead of probabilities to stack actions, and we provide an analysis of why this improves training. Second, the model can directly observe the state of the underlying PDA. Our model achieves lower cross-entropy than all previous stack RNNs on five context-free language modeling tasks (within 0.05 nats of the information-theoretic lower bound), including a task on which the NS-RNN previously failed to outperform a deterministic stack RNN baseline. Finally, we propose a restricted version of the NS-RNN that incrementally processes infinitely long sequences, and we present language modeling results on the Penn Treebank.
Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
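The core recurrence described above, the same Transformer block applied repeatedly in depth with per-position halting, can be sketched as follows. This is a simplified stand-in, not the paper's adaptive computation time (ACT) implementation: `shared_block` is a hypothetical placeholder for a full self-attention layer, and the halting rule omits ACT's weighted averaging of intermediate states.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_block = rng.normal(scale=0.3, size=(d, d))     # shared across all depth steps
w_halt = rng.normal(scale=0.3, size=d)

def shared_block(h):
    # Hypothetical stand-in for one Transformer layer (self-attention + FFN)
    # whose weights are reused at every depth step.
    return np.tanh(h @ W_block)

def universal_transformer_sketch(x, max_steps=8, threshold=0.99):
    """x: (seq_len, d). Apply the same block repeatedly in depth; a position
    stops being refined once its accumulated halting probability crosses
    `threshold` (a simplification of the dynamic halting mechanism)."""
    state = x.copy()
    halted = np.zeros(len(x))                        # accumulated halting probability
    for _ in range(max_steps):
        active = halted < threshold                  # positions still being refined
        if not active.any():
            break
        p = 1.0 / (1.0 + np.exp(-state @ w_halt))    # per-position halting probability
        new_state = shared_block(state)
        state[active] = new_state[active]            # update only the active positions
        halted += p * active
    return state

print(universal_transformer_sketch(rng.normal(size=(5, d))).shape)   # (5, 16)
```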
In this paper, we attempt to build a connection between the two schools of thought by introducing syntactic inductive biases into deep learning models. We propose two families of inductive biases, one for constituency structure and the other for dependency structure. The constituency inductive bias encourages deep learning models to use different units (or neurons) to separately process long-term and short-term information. This separation provides deep learning models with a way to build latent hierarchical representations from sequential input, where a higher-level representation is composed of, and can be decomposed into, a series of lower-level representations. For example, without knowing the ground-truth structure, our proposed model learns to process logical expressions by composing the representations of variables and operators according to their syntactic structure. The dependency inductive bias, on the other hand, encourages the model to find latent relations between entities in the input sequence. For natural language, the latent relations are usually modeled as a directed dependency graph in which a word has exactly one parent node and zero or several child nodes. After applying this constraint to a Transformer-like model, we find that the model is able to induce directed graphs close to human expert annotations, and it also outperforms the standard Transformer model on different tasks. We believe these experimental results point to an interesting alternative for the future development of deep learning models.
In order to achieve deep natural language understanding, syntactic constituent parsing is a vital step, highly demanded by many artificial intelligence systems to process both text and speech. One of the most recent proposals is the use of standard sequence-to-sequence models to perform constituent parsing as a machine translation task, instead of applying task-specific parsers. While they show a competitive performance, these text-to-parse transducers are still lagging behind classic techniques in terms of accuracy, coverage and speed. To close the gap, we here extend the framework of sequence-to-sequence models for constituent parsing, not only by providing a more powerful neural architecture for improving their performance, but also by enlarging their coverage to handle the most complex syntactic phenomena: discontinuous structures. To that end, we design several novel linearizations that can fully produce discontinuities and, for the first time, we test a sequence-to-sequence model on the main discontinuous benchmarks, obtaining competitive results on par with task-specific discontinuous constituent parsers and achieving state-of-the-art scores on the (discontinuous) English Penn Treebank.
Recombining known primitive concepts into larger, novel combinations is a quintessentially human cognitive ability. Whether large neural models in NLP acquire this ability while learning from data is an open question. In this paper, we look at this problem from the perspective of formal languages. We use deterministic finite-state transducers to construct an unbounded number of datasets with controllable properties governing compositionality. By randomly sampling over many transducers, we explore which of their properties (number of states, alphabet size, number of transitions, etc.) contribute to the learnability of a compositional relation by a neural network. In general, we find that models either learn the relation completely or not at all. The key factor is transition coverage, which sets a soft learnability limit of 400 examples per transition.
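A minimal sketch of the kind of data generator the abstract describes, assuming one output symbol per transition; the exact sampling procedure and the form of the compositional relation are not specified in the abstract, so all names and parameters below are illustrative.

```python
import random

def random_dfst(n_states=4, in_alphabet="abc", out_alphabet="xyz", seed=0):
    """Sample a random deterministic finite-state transducer as a table:
    delta[(state, input_symbol)] = (next_state, output_symbol)."""
    rng = random.Random(seed)
    return {(q, a): (rng.randrange(n_states), rng.choice(out_alphabet))
            for q in range(n_states) for a in in_alphabet}

def transduce(delta, s, start=0):
    """Run the transducer on input string s and return the output string."""
    q, out = start, []
    for a in s:
        q, o = delta[(q, a)]
        out.append(o)
    return "".join(out)

delta = random_dfst()
for inp in ["abc", "abcab"]:
    print(inp, "->", transduce(delta, inp))
```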
Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose BIGBIRD, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BIGBIRD is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BIGBIRD drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
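The sparse attention pattern can be pictured as a boolean mask combining global tokens, a sliding window, and a few random connections. The sketch below is a simplified token-level version; the actual BIGBIRD implementation works on blocks of tokens for hardware efficiency, and the function and parameter names here are illustrative.

```python
import numpy as np

def bigbird_style_mask(seq_len, n_global=1, window=3, n_random=2, seed=0):
    """Boolean (seq_len x seq_len) matrix: True where attention is allowed.
    Combines global tokens (attend to / are attended by everything), a local
    sliding window, and a few random keys per query."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    mask[:n_global, :] = True                # global tokens attend everywhere
    mask[:, :n_global] = True                # everything attends to global tokens
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True                # sliding window around position i
        mask[i, rng.choice(seq_len, size=n_random, replace=False)] = True  # random keys
    return mask

m = bigbird_style_mask(16)
print(m.sum(), "allowed pairs out of", m.size)   # grows linearly with seq_len
```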
We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic compositions, which here are implemented through a special attention mask and deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck which represents each sentence as a single vector harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism -- one that is independent of composed syntactic representations -- plays an important role in current successful models of long text.
Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, they are often treated as black-box models and as such it is difficult to understand what exactly they learn as well as how they arrive at a particular prediction. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We evaluate state-regularized RNNs on (1) regular languages for the purpose of automata extraction; (2) non-regular languages such as balanced parentheses and palindromes where external memory is required; and (3) real-world sequence learning tasks for sentiment analysis, visual object recognition and text categorisation. We show that state-regularization (a) simplifies the extraction of finite state automata that display an RNN's state transition dynamics; (b) forces RNNs to operate more like automata with external memory and less like finite state machines, which potentially leads to a more structural memory; (c) leads to better interpretability and explainability of RNNs by leveraging the probabilistic finite state transition mechanism over time steps.
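A minimal sketch of the stochastic state-transition mechanism as described: the raw cell output is scored against a finite set of learnable state centroids and replaced by their probability-weighted mixture. Details such as temperature annealing or hard sampling are omitted, and the function and variable names are illustrative.

```python
import numpy as np

def state_regularize(u, centroids, temperature=1.0):
    """u: raw RNN cell output (d,); centroids: k learnable states (k, d).
    Score each centroid, softmax the scores into a transition distribution,
    and return the probability-weighted mixture as the next hidden state."""
    scores = centroids @ u / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return probs @ centroids, probs          # next hidden state, soft state assignment

rng = np.random.default_rng(0)
centroids = rng.normal(size=(5, 8))          # k = 5 learnable states of dimension 8
u = rng.normal(size=8)                       # cell output at some time step
h_next, probs = state_regularize(u, centroids)
print(probs.round(2), h_next.shape)
```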
Transformer-based models have shown their effectiveness across multiple domains and tasks. Self-attention allows information from all sequence elements to be combined into context-aware representations. However, global and local information must be stored largely in the same element-wise representations, and the length of the input sequence is limited by the quadratic computational complexity of self-attention. In this work, we propose and study a memory-augmented, segment-level recurrent Transformer (Recurrent Memory Transformer). Memory, together with recurrence, allows local and global information to be stored and processed and information to be passed between segments of a long sequence. We implement the memory mechanism without changing the Transformer model, by adding special memory tokens to the input or output sequence. The Transformer is then trained to control both memory operations and sequence-representation processing. Experimental results show that our model performs on par with Transformer-XL on language modeling at smaller memory sizes and outperforms it on tasks that require processing longer sequences. We show that adding memory tokens to Tr-XL improves its performance. This makes the Recurrent Memory Transformer a promising architecture for applications that require learning long-term dependencies and general-purpose memory processing, such as algorithmic tasks and reasoning.
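A minimal sketch of the segment-level recurrence described above, under simplifying assumptions: a single group of memory tokens is prepended to each segment (the actual model distinguishes read and write memory positions), and `segment_block` is a hypothetical stand-in for the unmodified Transformer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_mem, seg_len = 16, 4, 8
W = rng.normal(scale=0.3, size=(d, d))

def segment_block(x):
    # Hypothetical stand-in for the unmodified Transformer applied to one
    # segment together with its memory tokens.
    return np.tanh(x @ W)

def recurrent_memory_pass(long_sequence):
    """long_sequence: (T, d). Process it segment by segment; special memory
    tokens are concatenated to each segment's input, and their output values
    are carried over as the memory for the next segment."""
    memory = np.zeros((n_mem, d))
    outputs = []
    for start in range(0, len(long_sequence), seg_len):
        segment = long_sequence[start:start + seg_len]
        full = np.concatenate([memory, segment], axis=0)   # [memory tokens; segment]
        out = segment_block(full)
        memory = out[:n_mem]               # updated memory, passed to the next segment
        outputs.append(out[n_mem:])        # token representations for this segment
    return np.concatenate(outputs, axis=0), memory

out, mem = recurrent_memory_pass(rng.normal(size=(40, d)))
print(out.shape, mem.shape)   # (40, 16) (4, 16)
```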
We introduce a neural stack architecture, including a differentiable, parametrized stack operator that approximates stack push and pop operations for parameter choices that explicitly represent a stack. We prove the stability of this stack architecture: after arbitrarily many stack operations, the state of the neural stack still closely resembles the state of the discrete stack. Using the neural stack together with a recurrent neural network, we introduce a neural network pushdown automaton (NNPDA) and prove that an NNPDA with finite/bounded neurons can simulate any PDA. Furthermore, we extend the construction and propose a new architecture, the neural state Turing machine (NNTM). We prove that a differentiable NNTM with bounded neurons can simulate a Turing machine (TM) in real time. Like the neural stack, these architectures are also stable. Finally, we extend the construction to show that a differentiable NNTM is equivalent to a universal Turing machine (UTM) and can simulate any TM using only seven finite/bounded-precision neurons. This work provides new theoretical bounds on the computational power of memory-augmented, bounded-precision RNNs.
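For intuition, here is a common differentiable-stack formulation from the literature, not necessarily the exact operator of this paper: the next stack is a convex combination of the discrete PUSH, POP, and NO-OP outcomes, weighted by soft action scores.

```python
import numpy as np

def soft_stack_update(stack, v, actions):
    """stack: (depth, d) with row 0 as the top; v: (d,) value to push;
    actions: (push, pop, no_op) nonnegative weights summing to 1.
    Returns a convex combination of the three discrete outcomes -- the usual
    way a differentiable stack approximates PUSH / POP / NO-OP."""
    push_w, pop_w, noop_w = actions
    pushed = np.vstack([v, stack[:-1]])                         # shift down, v on top
    popped = np.vstack([stack[1:], np.zeros_like(stack[:1])])   # shift up, pad bottom
    return push_w * pushed + pop_w * popped + noop_w * stack

stack = np.zeros((6, 4))
stack = soft_stack_update(stack, np.ones(4), actions=(0.9, 0.05, 0.05))
stack = soft_stack_update(stack, 2 * np.ones(4), actions=(0.9, 0.05, 0.05))
print(stack[:3].round(2))   # soft approximation of a discrete stack holding 2 on top of 1
```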
Inspired by humans' remarkable ability to master arithmetic and generalize to unseen problems, we present a new dataset, HINT, to study machines' ability to learn generalizable concepts at three levels: perception, syntax, and semantics. Learning agents must learn how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form valid expressions (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts with respect to these three levels. We further design a few-shot learning split to test whether models can quickly learn new concepts and generalize them to more complex scenarios. To understand the limitations of existing models, we conduct extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3 (with chain-of-thought prompting). The results indicate that current models still struggle with extrapolation to long-range syntactic dependencies and semantics. Models show a significant gap to human-level generalization when tested with new concepts in the few-shot setting. Moreover, we find that solving HINT by simply scaling up the dataset and model size is infeasible; this strategy barely helps the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3 experiments, chain-of-thought prompting shows impressive results and significantly boosts test accuracy. We believe the proposed dataset, together with the experimental findings, is of great interest to the community working on systematic generalization.
I train models for the neural machine translation task using the Hunglish2 corpus. The main contribution of this work is evaluating different data augmentation methods during the training of NMT models. I propose five different augmentation methods that are structure-aware, meaning that instead of randomly selecting words for blanking or replacement, the dependency tree of the sentence is used as the basis for the augmentation. I first give a detailed literature review on neural networks, sequence modeling, neural machine translation, dependency parsing, and data augmentation. After a detailed exploratory data analysis and preprocessing of the Hunglish2 corpus, I run experiments with the proposed data augmentation techniques. The best Hungarian-to-English model reaches a BLEU score of 33.9, while the best English-to-Hungarian model reaches a BLEU score of 28.6.
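The five augmentation methods are not detailed in the abstract; as an illustration of what "structure-aware" can mean, here is one plausible operation, blanking an entire dependency subtree instead of a random word. The toy parse and the helper name are hypothetical.

```python
def blank_subtree(tokens, heads, root, blank="<blank>"):
    """tokens: words of a sentence; heads[i]: index of token i's head (-1 = root).
    Replace the whole dependency subtree rooted at `root` with a single blank
    token -- a structure-aware alternative to blanking randomly chosen words."""
    in_subtree, changed = {root}, True
    while changed:                              # collect all descendants of `root`
        changed = False
        for i, h in enumerate(heads):
            if h in in_subtree and i not in in_subtree:
                in_subtree.add(i)
                changed = True
    out, emitted = [], False
    for i, tok in enumerate(tokens):
        if i in in_subtree:
            if not emitted:                     # one blank stands in for the subtree
                out.append(blank)
                emitted = True
        else:
            out.append(tok)
    return out

tokens = ["the", "big", "dog", "chased", "the", "cat"]
heads = [2, 2, 3, -1, 5, 3]                     # toy dependency parse
print(blank_subtree(tokens, heads, root=2))     # ['<blank>', 'chased', 'the', 'cat']
```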
In recent years, Transformer-based pre-trained models have made great progress and become one of the most important backbones in natural language processing. Recent work has shown that the attention mechanism inside the Transformer may not be necessary, and convolutional neural networks as well as multi-layer-perceptron-based models have been investigated as Transformer alternatives. In this paper, we consider a graph recurrent network for language model pre-training, which builds a graph structure for each sequence through local token-level communication, together with a sentence-level representation decoupled from the other tokens. The original model performs well on domain-specific text classification under supervised training; however, its potential for learning transferable knowledge through self-supervised pre-training has not been fully exploited. We fill this gap by optimizing the architecture and verifying its effectiveness on more general language understanding tasks, in both English and Chinese. As for model efficiency, our model has linear complexity instead of the quadratic complexity of Transformer-based models, and it is more efficient during inference. Moreover, we find that our model can generate more diverse outputs with less contextualized feature redundancy than existing attention-based models.
Inducing latent tree structure from sequential data is an emerging trend in today's NLP research landscape, popularized largely by recent methods such as Gumbel LSTM and Ordered Neurons (ON-LSTM). This paper proposes FastTrees, a new general-purpose neural module for fast sequence encoding. Unlike most previous work, which considers recurrence necessary for tree induction, our work explores the notion of parallel tree induction, i.e., imbuing models with hierarchical inductive biases in a parallel, non-autoregressive fashion. To this end, the proposed FastTrees achieves competitive or superior performance to ON-LSTM on four well-established sequence modeling tasks: language modeling, logical inference, sentiment analysis, and natural language inference. Moreover, we show that the FastTrees module can be applied to enhance Transformer models, achieving performance gains on three sequence transduction tasks (machine translation, subject-verb agreement, and mathematical language understanding), paving the way for modular tree-induction modules. Overall, we outperform existing state-of-the-art models by +4% on logical inference tasks and +8% on mathematical language understanding.
Named Entity Recognition and Intent Classification are among the most important subfields of Natural Language Processing. Recent research has led to the development of faster, more sophisticated and efficient models to tackle the problems posed by those two tasks. In this work we explore the effectiveness of two separate families of Deep Learning networks for those tasks: Bidirectional Long Short-Term Memory networks and Transformer-based networks. The models were trained and tested on the ATIS benchmark dataset for both English and Greek languages. The purpose of this paper is to present a comparative study of the two groups of networks for both languages and showcase the results of our experiments. The models, being the current state-of-the-art, yielded impressive results and achieved high performance.
We show that both LSTMs and unitary-evolution recurrent neural networks (URNs) can achieve encouraging accuracy on two types of syntactic patterns: context-free long-distance agreement, and mildly context-sensitive cross-serial dependencies. This work extends recent experiments on deeply nested context-free long-distance dependencies, with similar results. URNs differ from LSTMs in that they avoid non-linear activation functions and apply matrix multiplication to word embeddings encoded as unitary matrices. This allows them to retain all the information in the input string over arbitrary distances. It also causes them to satisfy strict compositionality. URNs constitute a significant advance in the search for explainable models in deep learning applied to NLP.
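A small sketch of the composition principle: each word is embedded as a unitary matrix and a sentence is encoded by multiplying these matrices into a running state, so no information is attenuated with distance. Real orthogonal matrices are used here as a stand-in for unitary ones; the actual URN parametrization and training procedure are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

def random_orthogonal(d):
    # Real orthogonal stand-in for a unitary word embedding (QR of a random matrix).
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

vocab = {w: random_orthogonal(d)
         for w in ["the", "cats", "that", "chase", "dogs", "sleep"]}

def encode(sentence):
    """Compose a sentence by multiplying the unitary (here orthogonal) word
    matrices into a running state; such products never shrink or stretch
    vectors, so information is not attenuated with distance."""
    state = np.eye(d)
    for w in sentence.split():
        state = vocab[w] @ state
    return state

s = encode("the cats that chase dogs sleep")
v = rng.normal(size=d)
print(round(float(np.linalg.norm(s @ v) / np.linalg.norm(v)), 6))   # 1.0: norm preserved
```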
We introduce the Block-Recurrent Transformer, which applies a Transformer layer in a recurrent fashion along a sequence and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple: it is merely a Transformer layer that uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional Transformer layer, but offers dramatically improved perplexity on language modeling tasks over very long sequences. Our model outperforms a long-range Transformer-XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
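A heavily simplified sketch of the recurrent cell described above: the block's tokens cross-attend to the state vectors, the state cross-attends to the tokens, and the state is then updated through an LSTM-style gate. The real cell also uses self-attention within the block, multi-head projections, and learned position biases; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_state, block_len = 16, 8, 8
W_gate = rng.normal(scale=0.3, size=(d, d))

def attend(queries, keys, values):
    # Plain single-head scaled dot-product attention, no projections.
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ values

def block_recurrent_cell(tokens, state):
    """tokens: (block_len, d) current block; state: (n_state, d) recurrent state.
    Tokens read from the state via cross-attention, the state reads from the
    tokens, and the state is then updated through an LSTM-style gate."""
    token_out = tokens + attend(tokens, state, state)
    state_update = attend(state, tokens, tokens)
    gate = 1.0 / (1.0 + np.exp(-(state @ W_gate)))        # forget-style gate
    new_state = gate * state + (1.0 - gate) * state_update
    return token_out, new_state

state = np.zeros((n_state, d))
for _ in range(4):                                        # four consecutive blocks
    out, state = block_recurrent_cell(rng.normal(size=(block_len, d)), state)
print(out.shape, state.shape)   # (8, 16) (8, 16)
```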