We present a differentiable stack data structure that simultaneously and tractably encodes an exponential number of stack configurations, based on Lang's algorithm for simulating nondeterministic pushdown automata. We call the combination of this data structure with a recurrent neural network (RNN) controller a Nondeterministic Stack RNN. We compare our model against existing stack RNNs on various formal languages, demonstrating that our model converges more reliably to algorithmic behavior on deterministic tasks, and achieves lower cross-entropy on inherently nondeterministic tasks.
Learning hierarchical structures in sequential data -- from simple algorithmic patterns to natural language -- in a reliable, generalizable way remains a challenging problem for neural language models. Past work has shown that recurrent neural networks (RNNs) struggle to generalize on held-out algorithmic or syntactic patterns without supervision or some inductive bias. To remedy this, many papers have explored augmenting RNNs with various differentiable stacks, by analogy with finite automata and pushdown automata (PDAs). In this paper, we improve the performance of our recently proposed Nondeterministic Stack RNN (NS-RNN), which uses a differentiable data structure that simulates a nondeterministic PDA, with two important changes. First, the model now assigns unnormalized positive weights instead of probabilities to stack actions, and we provide an analysis of why this improves training. Second, the model can directly observe the state of the underlying PDA. Our model achieves lower cross-entropy than all previous stack RNNs on five context-free language modeling tasks (within 0.05 nats of the information-theoretic lower bound), including a task on which the NS-RNN previously failed to outperform a deterministic stack RNN baseline. Finally, we propose a restricted version of the NS-RNN that incrementally processes infinitely long sequences, and we present language modeling results on the Penn Treebank.
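As a rough, hedged illustration of the first change (unnormalized positive weights rather than probabilities over stack actions), not the authors' implementation; the `ActionWeights` module and its layer name are hypothetical.

```python
import torch
import torch.nn as nn

class ActionWeights(nn.Module):
    """Hedged sketch: emit strictly positive, unnormalized weights for stack
    actions (e.g. push/pop/no-op) instead of a normalized distribution."""
    def __init__(self, hidden_size, num_actions):
        super().__init__()
        self.action_scores = nn.Linear(hidden_size, num_actions)  # hypothetical layer

    def forward(self, h, normalized=False):
        scores = self.action_scores(h)
        if normalized:
            return torch.softmax(scores, dim=-1)  # probabilities (NS-RNN-style behaviour)
        return torch.exp(scores)                  # positive weights (the relaxed behaviour)
```

The point of the relaxation is that the weights no longer have to compete within a fixed probability budget at each step, which is the kind of effect the training analysis in the paper addresses.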
Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (2200 models, 16 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice. We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs. This includes negative results, where even large amounts of data and training time never lead to any non-trivial generalization, despite models having sufficient capacity to fit the training data perfectly. Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
We introduce a neural stack architecture, including a differentiable parameterized stack operator that approximates stack push and pop operations, for a choice of parameters that explicitly represents a stack. We prove the stability of this stack architecture: after arbitrarily many stack operations, the state of the neural stack still closely resembles the state of a discrete stack. Using the neural stack with a recurrent neural network, we introduce a neural network pushdown automaton (NNPDA) and prove that an NNPDA with finitely many bounded neurons can simulate any PDA. Furthermore, we extend the construction and propose a new architecture, the neural state Turing machine (NNTM). We prove that a differentiable NNTM with bounded neurons can simulate a Turing machine (TM) in real time. Like the neural stack, these architectures are also stable. Finally, we extend the construction to show that a differentiable NNTM is equivalent to a universal Turing machine (UTM) and can simulate any TM using only seven finite/bounded-precision neurons. This work provides new theoretical bounds on the computational power of bounded-precision RNNs augmented with memory.
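For context, here is a minimal differentiable-stack step in the superposition style of earlier neural stack work (in the spirit of Grefenstette et al.); this generic formulation is an assumption for illustration and is not the stability-analyzed construction introduced above.

```python
import torch

def stack_step(V, s, v, d, u):
    """One step of a generic differentiable stack (superposition style).
    V: (t, m) stored vectors; s: (t,) strengths; v: (m,) vector to push;
    d, u: scalar tensors in [0, 1] giving push and pop strength."""
    # Pop: remove strength u from the top of the stack downwards.
    new_s = []
    for i in range(s.shape[0]):
        above = s[i + 1:].sum()
        new_s.append(torch.relu(s[i] - torch.relu(u - above)))
    s = torch.stack(new_s) if new_s else s
    # Push: append the new vector with strength d.
    V = torch.cat([V, v.unsqueeze(0)], dim=0)
    s = torch.cat([s, d.unsqueeze(0)], dim=0)
    # Read: a weighted combination of the topmost vectors, total weight at most 1.
    r = torch.zeros_like(v)
    remaining = torch.tensor(1.0)
    for i in reversed(range(V.shape[0])):
        r = r + torch.minimum(s[i], remaining) * V[i]
        remaining = torch.relu(remaining - s[i])
    return V, s, r
```

The controller would call this once per time step with learned scalars `d` and `u`; the stability result above concerns how far such continuous states can drift from the discrete stack they approximate.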
Long short-term memory (LSTM) networks and Transformers are two popular neural architectures for natural language processing tasks. Theoretical results show that both are Turing-complete and can represent any context-free language (CFL). In practice, it is often observed that Transformer models have greater representational power than LSTMs, but the reasons are barely understood. We study the practical differences between LSTMs and Transformers and propose an explanation based on their latent-space decomposition patterns. To achieve this goal, we introduce an oracle training paradigm, which forces a decomposition of the latent representations of LSTMs and Transformers and supervises them with the transitions of the pushdown automaton (PDA) of the corresponding CFL. With the forced decomposition, we show that the performance upper bounds of LSTMs and Transformers in learning CFLs are close: both of them can simulate a stack and perform stack operations along with state transitions. However, without the forced decomposition, LSTM models fail to capture the stack and stack operations, while the effect on Transformer models is marginal. Finally, we connect the experiments on the prototypical PDAs to a real-world parsing task to re-verify the conclusions.
Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, they are often treated as black-box models, and as such it is difficult to understand what exactly they learn as well as how they arrive at a particular prediction. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We evaluate state-regularized RNNs on (1) regular languages for the purpose of automata extraction; (2) non-regular languages such as balanced parentheses and palindromes, where external memory is required; and (3) real-world sequence learning tasks for sentiment analysis, visual object recognition and text categorisation. We show that state-regularization (a) simplifies the extraction of finite state automata that display an RNN's state transition dynamics; (b) forces RNNs to operate more like automata with external memory and less like finite state machines, which potentially leads to a more structural memory; (c) leads to better interpretability and explainability of RNNs by leveraging the probabilistic finite state transition mechanism over time steps.
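A minimal sketch of how a soft transition over a finite set of learnable states could be wrapped around an ordinary RNN cell; the class name, the temperature parameter, and the expected-state (rather than sampled) update are assumptions for illustration, not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class StateRegularizedCell(nn.Module):
    """Hedged sketch: snap the output of any RNN cell toward a finite set
    of k learnable states via a softmax mixture over state centroids."""
    def __init__(self, cell, hidden_size, k, temperature=1.0):
        super().__init__()
        self.cell = cell
        self.centroids = nn.Parameter(torch.randn(k, hidden_size))
        self.temperature = temperature

    def forward(self, x, h):
        u = self.cell(x, h)                            # candidate next state
        scores = u @ self.centroids.t() / self.temperature
        alpha = torch.softmax(scores, dim=-1)          # distribution over the k states
        return alpha @ self.centroids                  # expected (soft) next state
```

Usage would look like `StateRegularizedCell(nn.RNNCell(16, 32), hidden_size=32, k=10)`; the distribution `alpha` is also what makes automaton extraction straightforward, since each step commits (softly) to one of k states.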
In this paper, we attempt to build a connection between the two schools of thought by introducing syntactic inductive biases into deep learning models. We propose two families of inductive biases, one for constituency structure and another for dependency structure. The constituency inductive bias encourages deep learning models to use different units (or neurons) to separately process long-term and short-term information. This separation provides a way for deep learning models to build latent hierarchical representations from sequential inputs: a higher-level representation is composed of, and can be decomposed into, a series of lower-level representations. For example, without knowing the ground-truth structure, our proposed model learns to process logical expressions by composing representations of variables and operators according to their syntactic structure. The dependency inductive bias, on the other hand, encourages models to find latent relations between entities in the input sequence. For natural language, the latent relations are usually modeled as a directed dependency graph in which a word has exactly one parent node and zero or several child nodes. After applying this constraint to a Transformer-like model, we find that the model is able to induce directed graphs that are close to human expert annotations, and it also outperforms the standard Transformer model on different tasks. We believe these experimental results point to an interesting alternative for the future development of deep learning models.
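For a concrete picture of the constituency bias, here is a hedged sketch of one mechanism used in related work: a monotone cumulative-softmax "master gate" that lets low-indexed units update frequently (short-term information) while high-indexed units are mostly copied forward (long-term information). The function names and the simplified update rule are illustrative assumptions, not the exact models proposed above.

```python
import torch

def cumax(logits, dim=-1):
    """Cumulative softmax: a monotonically increasing gate with values in [0, 1]."""
    return torch.cumsum(torch.softmax(logits, dim=dim), dim=dim)

def split_update(h_prev, h_new, forget_logits, input_logits):
    """Hedged sketch of a constituency-style update: high-indexed units are
    mostly preserved (long-term), low-indexed units are mostly refreshed
    (short-term), as decided by two monotone master gates."""
    f = cumax(forget_logits)       # close to 0 for low units, close to 1 for high units
    i = 1 - cumax(input_logits)    # close to 1 for low units, close to 0 for high units
    return f * h_prev + i * h_new
```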
We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic compositions, which here are implemented through a special attention mask and deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck which represents each sentence as a single vector harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism -- one that is independent of composed syntactic representations -- plays an important role in current successful models of long text.
Recombining known primitive concepts into larger novel combinations is a quintessentially human cognitive capability. Whether large neural models in NLP acquire this ability while learning from data is an open question. In this paper, we look at this problem from the perspective of formal languages. We use deterministic finite-state transducers to make an unbounded number of datasets with controllable properties governing compositionality. By randomly sampling over many transducers, we explore which of their properties (number of states, alphabet size, number of transitions, etc.) contribute to the learnability of the compositional relation by a neural network. In general, we find that the models either learn the relation completely or not at all. The key factor is transition coverage, which sets a soft learnability limit at 400 examples per transition.
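A hedged sketch of how the transition-coverage statistic could be computed for a deterministic FST represented as a transition dictionary; the data layout and function name are assumptions for illustration, not the paper's tooling.

```python
from collections import Counter

def transition_coverage(delta, start_state, inputs):
    """Hedged sketch: count how many training inputs exercise each transition
    of a deterministic FST. `delta` maps (state, symbol) -> (next_state, output)."""
    counts = Counter()
    for inp in inputs:
        state = start_state
        for sym in inp:
            next_state, _ = delta[(state, sym)]
            counts[(state, sym)] += 1
            state = next_state
    return counts  # the abstract reports a soft learnability limit at 400 examples per transition
```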
We show that both LSTMs and unitary-evolution recurrent neural networks (URNs) can achieve encouraging accuracy on two types of syntactic patterns: context-free long-distance agreement and mildly context-sensitive cross-serial dependencies. This work extends recent experiments on deeply nested context-free long-distance dependencies, with similar results. URNs differ from LSTMs in that they avoid nonlinear activation functions and apply matrix multiplication to word embeddings encoded as unitary matrices. This allows them to retain all the information in the input string over arbitrary distances. It also causes them to satisfy strict compositionality. URNs constitute a significant advance in the search for explainable models in deep learning applied to NLP.
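A hedged sketch of a unitary-evolution recurrence: each word contributes an orthogonal matrix (the real-valued special case of unitary) obtained as the matrix exponential of a skew-symmetric parameter, and the state is simply a running product, so norms, and hence information, are preserved at any distance. The parameterization shown is an assumption for illustration, not the exact URN construction.

```python
import torch
import torch.nn as nn

class UnitaryRecurrence(nn.Module):
    """Hedged sketch of a URN-style recurrence with no nonlinearity."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.params = nn.Parameter(0.01 * torch.randn(vocab_size, dim, dim))

    def word_matrix(self, w):
        A = self.params[w]
        skew = A - A.t()                 # skew-symmetric, so its exponential is orthogonal
        return torch.matrix_exp(skew)

    def forward(self, word_ids):
        state = torch.eye(self.params.shape[-1])
        for w in word_ids:               # state is an orthogonal product of word matrices
            state = self.word_matrix(w) @ state
        return state
```

Because every update is an isometry, the representation cannot "forget" by saturation, which is the property the abstract credits for the long-distance results.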
A computational system implemented exclusively through the spiking of neurons was recently shown to be capable of syntax, that is, of carrying out dependency parsing of simple English sentences. We address two of the most important questions left open by that work: constituency (the identification of key parts of a sentence, such as the verb phrase) and the processing of dependent clauses, especially center-embedded ones. We show that these two aspects of language can also be implemented by neurons and synapses, in a way that is compatible with what is known, or widely believed, about the structure and function of the language organ. Surprisingly, the way we implement center embedding points to a new characterization of context-free languages.
We train neural networks to optimize a minimum description length score, that is, to balance between the complexity of the network and its accuracy on a task. We show that networks trained with this objective function master tasks involving memorization challenges, such as counting, including cases that go beyond context-free languages. These learners master grammars such as $a^nb^n$, $a^nb^nc^n$, $a^nb^{2n}$ and $a^nb^mc^{n+m}$, and they perform addition. They do so with 100% accuracy, and at times also with 100% confidence. The networks are also small and their inner workings are transparent. We can therefore provide formal proofs that their perfect accuracy holds not just on a given test set but for any input sequence.
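To make the objective concrete, here is a hedged sketch of a minimum-description-length score: the cost of encoding the network itself plus the cost of encoding the data under the network's predictions. The `mdl_score` function and its interface are assumptions for illustration.

```python
import math

def mdl_score(model_bits, predictions):
    """Hedged sketch of an MDL objective.
    model_bits: bits needed to encode the network (its description length).
    predictions: probabilities the network assigned to the correct next
    symbol at each position (an assumed interface)."""
    data_bits = -sum(math.log2(p) for p in predictions)
    return model_bits + data_bits
```

Minimizing this total is what pushes solutions toward the small, transparent networks the abstract describes: a larger network only pays off if it buys a larger saving in data bits.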
In order to achieve deep natural language understanding, syntactic constituent parsing is a vital step, highly demanded by many artificial intelligence systems to process both text and speech. One of the most recent proposals is the use of standard sequence-to-sequence models to perform constituent parsing as a machine translation task, instead of applying task-specific parsers. While they show a competitive performance, these text-to-parse transducers are still lagging behind classic techniques in terms of accuracy, coverage and speed. To close the gap, we here extend the framework of sequence-to-sequence models for constituent parsing, not only by providing a more powerful neural architecture for improving their performance, but also by enlarging their coverage to handle the most complex syntactic phenomena: discontinuous structures. To that end, we design several novel linearizations that can fully produce discontinuities and, for the first time, we test a sequence-to-sequence model on the main discontinuous benchmarks, obtaining competitive results on par with task-specific discontinuous constituent parsers and achieving state-of-the-art scores on the (discontinuous) English Penn Treebank.
Current domain-independent classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge-acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, its knowledge is encoded in a subsymbolic representation that is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only a set of unlabeled image pairs showing a subset of the allowed transitions in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when given a pair of images representing the initial and goal states (planning inputs), Latplan finds a plan to the goal state in the symbolic latent space and returns a visualized plan execution. We evaluate Latplan using image-based versions of six planning domains: 8-puzzle, 15-puzzle, Blocksworld, Sokoban, and two variations of LightsOut.
This paper explores semi-supervised training for sequence tasks such as Optical Character Recognition or Automatic Speech Recognition. We propose a novel loss function, SoftCTC, which is an extension of CTC that allows multiple transcription variants to be considered at the same time. This makes it possible to omit the confidence-based filtering step that is otherwise a crucial component of pseudo-labeling approaches to semi-supervised learning. We demonstrate the effectiveness of our method on a challenging handwriting recognition task and conclude that SoftCTC matches the performance of a finely tuned filtering-based pipeline. We also evaluate SoftCTC in terms of computational efficiency, concluding that it is significantly more efficient than a naïve CTC-based approach for training on multiple transcription variants, and we make our GPU implementation public.
Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
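To make the "recurrence in depth" idea concrete, here is a minimal PyTorch sketch that applies one weight-shared Transformer encoder layer for a fixed number of refinement steps; the dynamic per-position halting (ACT) mentioned above is deliberately omitted, and the step embedding is an assumed stand-in for the paper's coordinate embeddings.

```python
import torch
import torch.nn as nn

class UniversalTransformerEncoder(nn.Module):
    """Hedged sketch of the UT idea: one Transformer layer whose weights are
    shared across T steps, i.e. recurrence over depth rather than over time."""
    def __init__(self, d_model, nhead, steps):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.step_embedding = nn.Embedding(steps, d_model)
        self.steps = steps

    def forward(self, x):                      # x: (batch, seq, d_model)
        for t in range(self.steps):
            # add a per-step signal so the shared layer knows which step it is on
            x = self.layer(x + self.step_embedding.weight[t])
        return x
```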
We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network [23] but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch [2] to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering [22] and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
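A hedged sketch of a single attention "hop" over external memory in the spirit of the model described above; the embedding names and the bag-of-words memory encoding are simplifying assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MemoryHop(nn.Module):
    """One attention hop over memory: the query attends over embedded memory
    entries and the weighted output is added back to the query."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed_in = nn.Embedding(vocab_size, dim)   # memory keys
        self.embed_out = nn.Embedding(vocab_size, dim)  # memory values

    def forward(self, memory_ids, query):
        # memory_ids: (n_mem, mem_len) word ids per memory entry; query: (dim,)
        keys = self.embed_in(memory_ids).sum(dim=1)     # bag-of-words keys, (n_mem, dim)
        values = self.embed_out(memory_ids).sum(dim=1)  # bag-of-words values, (n_mem, dim)
        p = torch.softmax(keys @ query, dim=0)          # attention over memory entries
        return query + p @ values                       # query for the next hop
```

Stacking several such hops, with the output of one hop used as the query of the next, is the "multiple computational hops" the abstract credits for the improved results.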
Structured distributions, i.e., distributions over combinatorial spaces, are commonly used to learn latent probabilistic representations from observed data. However, scaling these models is bottlenecked by their high computational and memory complexity with respect to the size of the latent representation. Common models such as hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs) require time and space quadratic and cubic, respectively, in the number of hidden states. This work demonstrates a simple approach to reducing the computational and memory complexity of a large class of structured models. We show that by viewing the central inference step as a matrix-vector product and using a low-rank constraint, we can trade off model expressivity and speed via the rank. Experiments with neurally parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models with large state spaces while providing practical speedups.
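To make the complexity claim concrete, here is a hedged NumPy sketch of the low-rank trick applied to an HMM forward pass; the factor matrices `U` and `V` and the interface are assumptions for illustration, and a real model would also have to keep the factored transition matrix non-negative and row-stochastic.

```python
import numpy as np

def lowrank_forward(U, V, emit, obs, init):
    """Hedged sketch: HMM forward pass with the n-by-n transition matrix
    factored as U @ V.T (rank r), so each step costs O(n*r) instead of O(n^2).
    U, V: (n, r); emit: (n, n_symbols); obs: list of symbol ids; init: (n,)."""
    alpha = init * emit[:, obs[0]]
    for o in obs[1:]:
        # alpha @ (U @ V.T) computed via the factors, never forming the full matrix
        alpha = ((alpha @ U) @ V.T) * emit[:, o]
    return alpha.sum()   # marginal likelihood of the observation sequence
```

The same reassociation applies wherever the inference step is a matrix-vector product, which is why the paper can cover HMMs, PCFG-style models, and more with one device.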
Common to all different kinds of recurrent neural networks (RNNs) is the intention to model relations between data points through time. When there is no immediate relationship between subsequent data points (e.g., when the data points are generated at random), we show that RNNs are still able to remember a few data points back into the sequence by memorizing them "by heart" using standard backpropagation. However, we also show that for classical RNNs, LSTM and GRU networks, the distance between recurrent calls over which data points can be reproduced this way is highly limited (compared to even a loose connection between data points) and subject to various constraints imposed by the type and size of the RNN in question. This implies the existence of a hard limit (far below the information-theoretic one) on the distance between related data points within which RNNs are still able to recognize said relation.
Complex event recognition (CER) systems have become popular in the past two decades due to their ability to "instantly" detect patterns on real-time streams of events. However, there is a lack of methods for forecasting when a pattern may occur before such an occurrence is actually detected by a CER engine. We present a formal framework that attempts to address the problem of complex event forecasting (CEF). Our framework combines two formalisms: a) symbolic automata, which are used to encode complex event patterns; and b) prediction suffix trees, which can provide a succinct probabilistic description of an automaton's behavior. We compare our proposed approach against state-of-the-art methods and demonstrate its advantages in terms of accuracy and efficiency. In particular, prediction suffix trees, being variable-order Markov models, can capture long-term dependencies in a stream by remembering only those past sequences that carry enough information. Our experimental results demonstrate the accuracy benefits of being able to capture such long-term dependencies. This is achieved by increasing the order of our model, as opposed to full-order Markov models, which require an exhaustive enumeration of all possible past sequences of a given order. We also discuss extensively how CEF solutions can best be evaluated on the quality of their forecasts.
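As a rough illustration of the second formalism, here is a hedged sketch of a prediction suffix tree lookup; the class layout and method names are illustrative assumptions, not the paper's implementation.

```python
class PredictionSuffixTree:
    """Hedged sketch of a prediction suffix tree: a variable-order Markov
    model that stores next-symbol distributions only for those past suffixes
    that are informative enough to warrant their own node."""
    def __init__(self):
        self.children = {}    # symbol (read backwards from the present) -> subtree
        self.next_dist = {}   # next-symbol distribution at this node

    def predict(self, history):
        """Return the next-symbol distribution of the deepest stored suffix
        that matches the end of `history`."""
        node, dist = self, self.next_dist
        for symbol in reversed(history):
            if symbol not in node.children:
                break
            node = node.children[symbol]
            if node.next_dist:
                dist = node.next_dist
        return dist
```

Because only informative suffixes get their own nodes, deep (high-order) contexts can be consulted without enumerating every possible past sequence of that order, which is the contrast with full-order Markov models drawn above.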