The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application to modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with an (exponential) moving average, which incorporates the inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega with linear time and space complexity that incurs only minimal quality loss, by splitting the whole sequence into multiple chunks of fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.
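A minimal NumPy sketch of the two ingredients named above, a multi-dimensional damped EMA followed by a single gated attention head. The parameter names (`alpha`, `delta`, the sigmoid output gate) and the simplified gating are illustrative assumptions, not the exact Mega formulation.

```python
import numpy as np

def damped_ema(x, alpha, delta):
    """Damped exponential moving average along the time axis.
    x: (n, d) token embeddings; alpha, delta: (d,) coefficients in (0, 1).
    h_t = alpha * x_t + (1 - alpha * delta) * h_{t-1}
    """
    h = np.zeros(x.shape[-1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = alpha * x[t] + (1.0 - alpha * delta) * h
        out[t] = h
    return out

def gated_ema_attention(x, alpha, delta, Wq, Wk, Wv, Wg):
    """Single attention head whose queries/keys come from the EMA output,
    mixed with the residual stream through a sigmoid gate (simplified)."""
    xe = damped_ema(x, alpha, delta)
    q, k, v = xe @ Wq, xe @ Wk, x @ Wv
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    gate = 1.0 / (1.0 + np.exp(-(xe @ Wg)))
    return gate * (w @ v) + (1.0 - gate) * x
```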
The quadratic computational and memory complexity of the Transformer attention mechanism limits its scalability for modeling long sequences. In this paper we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Specifically, with the first attention function Luna packs the input sequence into a sequence of fixed length; the packed sequence is then unpacked with the second attention function. Compared with a more traditional attention mechanism, Luna introduces an additional fixed-length sequence as input and an additional corresponding output, which allows Luna to perform the attention operation linearly while still storing adequate contextual information. We perform extensive evaluations on three sequence modeling benchmarks: long-context sequence modeling, neural machine translation, and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate the effectiveness and efficiency of Luna compared with a variety of strong baselines.
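A sketch of the pack/unpack nesting under stated assumptions: both steps are rendered as ordinary softmax attention against a fixed-length auxiliary sequence `p` of length l, so each step costs O(n·l); the helper names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Plain scaled dot-product attention."""
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def luna_layer(x, p):
    """Nested attention: pack the length-n input x into the length-l
    sequence p, then unpack back to length n. Total cost is O(n * l)."""
    packed = attend(p, x, x)              # (l, d): pack step
    unpacked = attend(x, packed, packed)  # (n, d): unpack step
    return unpacked, packed               # packed feeds the next layer's p
```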
The attention module, a crucial component of the Transformer, cannot scale efficiently to long sequences due to its quadratic complexity. Many works focus on approximating the dot-then-exponentiate softmax function in the original attention, leading to sub-quadratic or even linear-complexity Transformer architectures. However, we show that these methods cannot be applied to attention modules that go beyond the dot-then-exponentiate style, e.g., Transformers with relative positional encoding (RPE). Since relative positional encoding is used by default in many state-of-the-art models, designing efficient Transformers that can incorporate RPE is appealing. In this paper, we propose a novel way to accelerate attention calculation for Transformers with RPE on top of kernelized attention. Based on the observation that the relative positional encoding forms a Toeplitz matrix, we mathematically show that kernelized attention with RPE can be calculated efficiently using the Fast Fourier Transform (FFT). With FFT, our method achieves $\mathcal{O}(n \log n)$ time complexity. Interestingly, we further demonstrate that properly using relative positional encoding can mitigate the training instability problem of vanilla kernelized attention. On a wide range of tasks, we empirically show that our models can be trained from scratch without any optimization issues. The learned models perform better than many efficient Transformer variants and are faster than the standard Transformer in the long-sequence regime.
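The Toeplitz observation is what makes the FFT applicable: multiplying an n×n Toeplitz matrix by a vector costs O(n log n) once the matrix is embedded in a 2n×2n circulant matrix, which the FFT diagonalizes. A minimal, self-contained sketch of that primitive (not the paper's full kernelized-attention pipeline; names are illustrative):

```python
import numpy as np

def toeplitz_matvec_fft(c, r, v):
    """Multiply a Toeplitz matrix by a vector in O(n log n) via FFT.

    c: first column (length n), r: first row (length n, with r[0] == c[0]),
    v: vector of length n. The Toeplitz matrix is embedded into a circulant
    matrix of size 2n whose action is a circular convolution.
    """
    n = len(v)
    # first column of the 2n x 2n circulant embedding
    circ = np.concatenate([c, [0.0], r[:0:-1]])
    fft_len = len(circ)                                   # 2n
    prod = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(v, fft_len))
    return prod[:n].real
```

In the kernelized-attention setting, the relative-position biases supply `c` and `r`, and `v` ranges over the columns of the feature/value products.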
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
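The core of this architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, as stated in the paper. A minimal single-head NumPy rendering (mask handling simplified):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v); mask: boolean (n_q, n_k),
    True where attention is allowed."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V
```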
Transformers have achieved success in both the language and vision domains. However, scaling them to long sequences, such as long documents or high-resolution images, is prohibitively expensive because the self-attention mechanism has quadratic time and memory complexity with respect to the input sequence length. In this paper, we propose Long-Short Transformer (Transformer-LS), an efficient self-attention mechanism for modeling long sequences with linear complexity for both language and vision tasks. It aggregates a novel long-range attention with dynamic projection to model distant correlations and a short-term attention to capture fine-grained local correlations. We propose a dual normalization strategy to account for the scale mismatch between the two attention mechanisms. Transformer-LS can be applied to both autoregressive and bidirectional models without additional complexity. Our method outperforms state-of-the-art models on multiple tasks in the language and vision domains, including the Long Range Arena benchmark, autoregressive language modeling, and ImageNet classification. For instance, Transformer-LS achieves 0.97 test BPC on enwik8 using half the number of parameters of previous methods, while being faster and able to handle sequences 3x as long as its full-attention version on the same hardware. On ImageNet, it obtains state-of-the-art results (e.g., a moderately sized 55.8M model trained only on 224x224 ImageNet-1K achieves 84.1% top-1 accuracy) while being more scalable on high-resolution images. The source code and models are released at https://github.com/nvidia/transformer-ls.
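A rough sketch of the long/short split under stated assumptions: the long-range branch compresses keys and values to r landmark positions with a data-dependent (dynamic) projection, and each query additionally attends to a small local window. The dual normalization used in the paper is omitted and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def long_short_attention(Q, K, V, W_proj, window=8):
    """Q, K, V: (n, d); W_proj: (d, r) producing dynamic projection weights."""
    n, d = Q.shape
    P = softmax(K @ W_proj, axis=0)        # (n, r): dynamic projection weights
    K_long, V_long = P.T @ K, P.T @ V      # (r, d): compressed global memory
    half = window // 2
    out = np.empty_like(V)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        keys = np.concatenate([K_long, K[lo:hi]])   # global + local keys
        vals = np.concatenate([V_long, V[lo:hi]])
        out[i] = softmax(Q[i] @ keys.T / np.sqrt(d)) @ vals
    return out
```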
Transformer models have achieved superior performance in various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits its practicality for long sequences. There are existing attention variants that improve the computational efficiency, but they have limited ability to effectively compute global information. In parallel to Transformer models, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for $\underline{\textbf{S}}$tate s$\underline{\textbf{P}}$ace $\underline{\textbf{A}}$ugmente$\underline{\textbf{D}}$ Transform$\underline{\textbf{E}}$r. Specifically, we augment a SSM into the bottom layer of SPADE, and we employ efficient local attention methods for the other layers. The SSM augments global information, which complements the lack of long-range dependency issue in local attention methods. Experimental results on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
The Transformer architecture is now central to sequence modeling tasks. At its heart is the attention mechanism, which enables effective modeling of long-term dependencies in a sequence. Recently, Transformers have been successfully applied to the computer vision domain, where 2D images are first segmented into patches and then treated as 1D sequences. Such linearization, however, impairs the notion of spatial locality in images, which carries important visual clues. To bridge this gap, we propose ripple attention, a sub-quadratic attention mechanism for vision transformers. Built upon recent kernel-based efficient attention mechanisms, we design a novel dynamic programming algorithm that weighs the contributions of different tokens to a query according to their relative spatial distances in the 2D space, in linear observed time. Extensive experiments and analyses demonstrate the effectiveness of ripple attention on various visual tasks.
Recently, random feature attention (RFA) was proposed to approximate softmax attention in linear time and space complexity by linearizing the exponential kernel. In this paper, we first propose a novel perspective for understanding the bias of this approximation by recasting RFA as a self-normalized importance sampler. This perspective further sheds light on an unbiased estimator of the whole softmax attention, called randomized attention (RA). RA constructs positive random features via query-specific distributions and enjoys greatly improved approximation fidelity, albeit exhibiting quadratic complexity. By combining the expressiveness of RA and the efficiency of RFA, we develop a novel linear-complexity self-attention mechanism called linear randomized attention (LARA). Extensive experiments across various domains demonstrate that RA and LARA significantly improve the performance of RFA.
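For context, a minimal sketch of the RFA baseline being analyzed: positive random features linearize the exponential kernel so attention is computed in O(n). This is the standard positive-random-feature construction, not the paper's RA or LARA estimators; names are illustrative.

```python
import numpy as np

def positive_random_features(x, W):
    """phi(x) such that E[phi(q) . phi(k)] = exp(q . k) (softmax kernel).
    x: (n, d); W: (m, d) with rows drawn i.i.d. from N(0, I)."""
    m = W.shape[0]
    return np.exp(x @ W.T - 0.5 * (x ** 2).sum(-1, keepdims=True)) / np.sqrt(m)

def rfa_attention(Q, K, V, num_features=128, seed=0):
    """Linear-complexity random-feature approximation of softmax attention."""
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_features, d))
    scale = d ** -0.25                         # split 1/sqrt(d) between q and k
    phi_q = positive_random_features(Q * scale, W)   # (n, m)
    phi_k = positive_random_features(K * scale, W)   # (n, m)
    kv = phi_k.T @ V                           # (m, d_v): summed once, O(n)
    z = phi_k.sum(axis=0)                      # (m,): normalizer statistics
    return (phi_q @ kv) / (phi_q @ z)[:, None]
```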
Conformer has proven to be effective in many speech processing tasks. It combines the benefits of extracting local dependencies using convolutions and global dependencies using self-attention. Inspired by this, we propose a more flexible, interpretable, and customizable encoder alternative, Branchformer, with parallel branches for modeling various ranges of dependencies in end-to-end speech processing. In each encoder layer, one branch employs self-attention or its variants to capture long-range dependencies, while the other branch utilizes an MLP module with convolutional gating (cgMLP) to extract local relationships. We conduct experiments on several speech recognition and spoken language understanding benchmarks. Results show that our model outperforms both the Transformer and cgMLP, and matches the state-of-the-art results achieved by Conformer. Furthermore, thanks to the two-branch architecture, we show various strategies to reduce computation, including the ability to have variable inference complexity in a single trained model. The weights learned for merging the branches indicate how local and global dependencies are utilized in different layers, which benefits model design.
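A simplified sketch of one encoder layer's two branches, assuming a generic attention callable for the global branch and a ReLU in place of the GELU used in cgMLP; normalization is dropped and the learned merge is reduced to a fixed weighted sum.

```python
import numpy as np

def depthwise_conv1d(x, kernel):
    """x: (n, h), kernel: (k, h); 'same'-padded depthwise convolution."""
    n, _ = x.shape
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(k):
        out += xp[i:i + n] * kernel[i]
    return out

def cgmlp_branch(x, W_up, conv_kernel, W_down):
    """Convolutional gating MLP: split the up-projection in half and gate
    one half with a depthwise convolution of the other half."""
    h = np.maximum(x @ W_up, 0.0)              # (n, 2h); GELU in the paper
    a, b = np.split(h, 2, axis=-1)
    return (a * depthwise_conv1d(b, conv_kernel)) @ W_down

def branchformer_layer(x, attn_branch, W_up, conv_kernel, W_down, w=(0.5, 0.5)):
    """Merge a global (attention) branch and a local (cgMLP) branch."""
    g = attn_branch(x)                         # e.g. self-attention over x
    l = cgmlp_branch(x, W_up, conv_kernel, W_down)
    return x + w[0] * g + w[1] * l             # residual + weighted merge
```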
Transformer models are permutation equivariant. To supply the order and type information of the input tokens, position and segment embeddings are usually added to the input. Recent works have proposed variations of positional encodings, with relative position encodings achieving better performance. Our analysis shows that the gain actually comes from moving positional information from the input into the attention layer. Motivated by this, we introduce Decoupled Positional Attention for Transformers (DIET), a simple yet effective mechanism to encode position and segment information into Transformer models. The proposed method has faster training and inference time while achieving competitive performance on the GLUE, XTREME, and WMT benchmarks. We further generalize our method to long-range Transformers and show performance gains.
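One common way to "move position into the attention layer" is an additive bias on the attention logits that depends only on relative offsets, with no positional embeddings added to the input. The sketch below shows that general pattern; the paper studies several decoupled variants, so treat the exact form as an assumption.

```python
import numpy as np

def relative_bias(n, table):
    """Build an (n, n) bias matrix from a learned table of length 2n-1
    indexed by the offset i - j."""
    idx = np.arange(n)[:, None] - np.arange(n)[None, :] + (n - 1)
    return table[idx]

def decoupled_position_attention(Q, K, V, pos_bias):
    """Attention where position enters only through an additive bias on the
    logits, rather than through embeddings added to the input tokens."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1]) + pos_bias
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V
```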
Self-attention mechanisms model long-range context by using pairwise attention between all input tokens. In doing so, they assume a fixed attention granularity defined by individual tokens (e.g., text characters or image pixels), which may not be optimal for modeling complex dependencies at higher levels. In this paper, we propose ContextPool to address this problem by adapting the attention granularity for each token. Inspired by the success of ConvNets that are combined with pooling to capture long-range dependencies, we learn to pool neighboring features for each token before computing attention in a given attention layer. The pooling weights and support size are adaptively determined, allowing the pooled features to encode meaningful context at varying scales. We show that ContextPool makes attention models more expressive, often achieving strong performance with fewer layers and thus significantly reduced cost. Experiments validate that our ContextPool module, when plugged into Transformer models, matches or surpasses state-of-the-art performance using less compute on several language and image benchmarks, outperforms recent works with learned context sizes or sparse attention patterns, and is also applicable to ConvNets for efficient feature learning.
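A toy sketch of the pooling-before-attention idea with a fixed window and learned per-token weights; the adaptive support size described above is omitted, and the scoring projection `W_score` is a hypothetical stand-in.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def context_pool(x, W_score, window=5):
    """Pool neighbouring features for every token with learned weights
    before attention is computed.  x: (n, d); W_score: (d, 1)."""
    n, _ = x.shape
    half = window // 2
    scores = (x @ W_score).squeeze(-1)        # (n,): per-token pooling logits
    pooled = np.empty_like(x)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        w = softmax(scores[lo:hi])
        pooled[t] = w @ x[lo:hi]
    return pooled                             # fed to the attention layer
```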
Multi-head attention is a driving force behind state-of-the-art Transformers, which achieve remarkable performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that for many applications those attention heads learn redundant embeddings, and most of them can be removed without degrading the performance of the model. Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel Transformer architecture that replaces redundant heads with a mixture of keys at each head. These mixtures of keys follow a Gaussian mixture model and allow each attention head to focus efficiently on different parts of the input sequence. Compared to its conventional Transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute, while achieving comparable or better accuracy across tasks. Transformer-MGK can also be easily extended to linear attention. We empirically demonstrate the advantages of Transformer-MGK in a range of practical applications, including language modeling and tasks involving very long sequences. On the WikiText-103 and Long Range Arena benchmarks, Transformer-MGKs with 4 heads attain comparable or better performance than baseline Transformers with 8 heads.
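A sketch of attention scores computed from a mixture of Gaussian keys per position: the score of query i at position j is log Σ_m π_jm N(q_i; k_jm, σ²I), up to terms constant in j. The shapes, the shared isotropic σ, and the direct softmax over these scores are simplifying assumptions for illustration.

```python
import numpy as np

def mgk_attention(Q, K_mix, V, log_pi, sigma=1.0):
    """Attention with a mixture of Gaussian keys per position.

    Q: (n, d); K_mix: (n, M, d) holds M keys per position; V: (n, d_v);
    log_pi: (n, M) log mixture weights.
    """
    diff = Q[:, None, None, :] - K_mix[None, :, :, :]     # (n, n, M, d)
    sq = (diff ** 2).sum(-1)                              # squared distances
    comp = log_pi[None, :, :] - sq / (2.0 * sigma ** 2)   # per-component score
    scores = np.logaddexp.reduce(comp, axis=-1)           # (n, n) mixture score
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V
```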
Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.
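A minimal sketch of the depth recurrence: the same block (same parameters) revises all positions in parallel at every step, with a per-step signal added before each revision. The adaptive halting (ACT) mentioned above is omitted, and `step_embedding` is a hypothetical helper.

```python
import numpy as np

def universal_transformer_encoder(x, block, num_steps, step_embedding):
    """Apply one weight-tied block recurrently over depth ("parallel in time").

    x: (n, d) token representations; block: callable that refines all
    positions in parallel (self-attention + transition function);
    step_embedding(t): (d,) timestep signal added before every revision.
    """
    h = x
    for t in range(num_steps):
        h = block(h + step_embedding(t))   # same parameters at every step
    return h
```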
State space models have been shown to be effective at modeling long-range dependencies, especially on sequence classification tasks. In this work we focus on autoregressive sequence modeling over English books, GitHub source code, and ArXiv mathematics articles. Based on recent developments around the effectiveness of gated activation functions, we propose a new layer named Gated State Space (GSS) and show that it trains significantly faster than the diagonal version of S4 (i.e., DSS) on TPUs, is fairly competitive with several well-tuned Transformer-based baselines, and exhibits zero-shot generalization to longer inputs, while being straightforward to implement. Finally, we show that leveraging self-attention to model local dependencies improves the performance of GSS even further.
In this paper, we propose a novel architecture, the Enhanced Interactive Transformer (EIT), to address the issue of head degradation in self-attention mechanisms. Our approach replaces the traditional multi-head self-attention mechanism with the Enhanced Multi-Head Attention (EMHA) mechanism, which relaxes the one-to-one mapping constraint among queries and keys, allowing each query to attend to multiple keys. Furthermore, we introduce two interaction models, Inner-Subspace Interaction and Cross-Subspace Interaction, to fully utilize the many-to-many mapping capabilities of EMHA. Extensive experiments on a wide range of tasks (e.g. machine translation, abstractive summarization, grammar correction, language modelling and brain disease automatic diagnosis) show its superiority with a very modest increase in model size.
Transformers are built upon multi-head scaled dot-product attention and positional encoding, which aim to learn feature representations and token dependencies. In this work, we focus on enhancing distinctive representations by learning to augment the feature maps with the self-attention mechanism in Transformers. Specifically, we propose horizontal attention to re-weight the multi-head outputs of the scaled dot-product attention before dimensionality reduction, and vertical attention to adaptively re-calibrate channel-wise feature responses by explicitly modeling inter-dependencies among different channels. We demonstrate that Transformer models equipped with the two attentions have high generalization capability across different supervised learning tasks, with a very small additional computational cost overhead. The proposed horizontal and vertical attentions are highly modular and can be plugged into various Transformer models to further improve performance. Our code is available in the supplementary material.
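A rough sketch under stated assumptions: "horizontal" re-weighting is rendered as a sigmoid gate over the head outputs before the output projection, and "vertical" re-calibration as a squeeze-and-excitation-style channel gate; the exact gating functions in the paper may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def horizontal_attention(head_outputs, W_h):
    """Re-weight the H head outputs before concatenation/projection.
    head_outputs: (H, n, d_head); W_h: (H * d_head, H) scoring matrix."""
    H, n, dh = head_outputs.shape
    concat = head_outputs.transpose(1, 0, 2).reshape(n, H * dh)   # (n, H*dh)
    gate = sigmoid(concat @ W_h)                                  # (n, H)
    reweighted = head_outputs * gate.T[:, :, None]
    return reweighted.transpose(1, 0, 2).reshape(n, H * dh)

def vertical_attention(x, W1, W2):
    """Channel-wise re-calibration of features from pooled global context.
    x: (n, d); W1: (d, d_r); W2: (d_r, d)."""
    squeezed = x.mean(axis=0)                             # (d,) global context
    gate = sigmoid(np.maximum(squeezed @ W1, 0.0) @ W2)   # (d,) channel gate
    return x * gate[None, :]
```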
A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long-range dependencies, they still struggle to scale to very long sequences of 10,000 or more steps. A promising recent approach proposed modeling sequences by simulating the fundamental state space model (SSM) \( x'(t) = Ax(t) + Bu(t), \; y(t) = Cx(t) + Du(t) \), and showed that for appropriate choices of the state matrix \( A \), this system could handle long-range dependencies mathematically and empirically. However, this method has prohibitive computation and memory requirements, rendering it infeasible as a general sequence modeling solution. We propose the Structured State Space sequence model (S4) based on a new parameterization for the SSM, and show that it can be computed much more efficiently than prior approaches while preserving their theoretical strengths. Our technique involves conditioning \( A \) with a low-rank correction, allowing it to be diagonalized stably and reducing the SSM to the well-studied computation of a Cauchy kernel. S4 achieves strong empirical results across a diverse range of established benchmarks, including (i) 91% accuracy on sequential CIFAR-10 with no data augmentation or auxiliary losses, on par with a larger 2-D ResNet, (ii) substantially closing the gap to Transformers on image and language modeling tasks while performing generation 60x faster, and (iii) state of the art on every task from the Long Range Arena benchmark, including solving the challenging Path-X task of length 16k that all prior work fails on, while being as efficient as all competitors.
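To make the SSM-as-sequence-model concrete, here is a naive sketch that discretizes \( (A, B, C) \) with the bilinear transform, materializes the convolution kernel \( \overline{K} = (C\overline{B},\, C\overline{A}\,\overline{B},\, C\overline{A}^{2}\overline{B}, \dots) \), and applies it with an FFT. S4's actual contribution is computing this kernel efficiently through its structured (diagonal-plus-low-rank) parameterization, which this sketch does not implement.

```python
import numpy as np

def discretize(A, B, step):
    """Bilinear (Tustin) discretization of x'(t) = A x(t) + B u(t).
    A: (N, N), B: (N, 1), step: scalar timescale."""
    N = A.shape[0]
    I = np.eye(N)
    left = np.linalg.inv(I - (step / 2.0) * A)
    A_bar = left @ (I + (step / 2.0) * A)
    B_bar = (left * step) @ B
    return A_bar, B_bar

def ssm_convolution_kernel(A, B, C, step, length):
    """Naively materialize K = (C B_bar, C A_bar B_bar, C A_bar^2 B_bar, ...).
    C: (1, N)."""
    A_bar, B_bar = discretize(A, B, step)
    K, x = [], B_bar
    for _ in range(length):
        K.append((C @ x).item())
        x = A_bar @ x
    return np.array(K)

def ssm_apply(u, K):
    """y = K * u as a causal convolution, computed with the FFT."""
    n = len(u)
    fft_len = 2 * n
    y = np.fft.irfft(np.fft.rfft(K, fft_len) * np.fft.rfft(u, fft_len), fft_len)
    return y[:n]
```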
Transformers do not scale very well to long sequence lengths largely because of quadratic self-attention complexity. In the recent months, a wide spectrum of efficient, fast Transformers have been proposed to tackle this problem, more often than not claiming superior or comparable model quality to vanilla Transformer models. To this date, there is no well-established consensus on how to evaluate this class of models. Moreover, inconsistent benchmarking on a wide spectrum of tasks and datasets makes it difficult to assess relative model quality amongst many models. This paper proposes a systematic and unified benchmark, Long-Range Arena, specifically focused on evaluating model quality under long-context scenarios. Our benchmark is a suite of tasks consisting of sequences ranging from 1K to 16K tokens, encompassing a wide range of data types and modalities such as text, natural, synthetic images, and mathematical expressions requiring similarity, structural, and visual-spatial reasoning. We systematically evaluate ten well-established long-range Transformer models (Reformers, Linformers, Linear Transformers, Sinkhorn Transformers, Performers, Synthesizers, Sparse Transformers, and Longformers) on our newly proposed benchmark suite. Long-Range Arena paves the way towards better understanding this class of efficient Transformer models, facilitates more research in this direction, and presents new challenging tasks to tackle. Our benchmark code will be released at https://github.com/google-research/long-range-arena.
Attention-based neural networks, such as Transformers, have become ubiquitous in numerous applications, including computer vision, natural language processing, and time-series analysis. In all kinds of attention networks, the attention maps are crucial as they encode semantic dependencies between input tokens. However, most existing attention networks perform modeling or reasoning based on representations, wherein the attention maps of different layers are learned separately without explicit interactions. In this paper, we propose a novel and generic evolving attention mechanism, which directly models the evolution of inter-token relationships through a chain of residual convolutional modules. The major motivations are twofold. On the one hand, the attention maps in different layers share transferable knowledge, thus adding a residual connection can facilitate the information flow of inter-token relationships across layers. On the other hand, there is naturally an evolutionary trend among attention maps at different abstraction levels, so it is beneficial to exploit a dedicated convolution-based module to capture this process. Equipped with the proposed mechanism, the convolution-enhanced evolving attention networks achieve superior performance in various applications, including time-series representation, natural language understanding, machine translation, and image classification. Especially on time-series representation tasks, Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer outperforms state-of-the-art models significantly, achieving an average of 17% improvement compared to the best SOTA. To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps. Our implementation is available at https://github.com/pkuyym/EvolvingAttention
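A toy sketch of evolving one layer's attention maps from the previous layer's via a residual convolution; a single shared 2-D kernel applied per head stands in for the paper's convolution module over the head dimension, and the mixing weight `alpha` is an illustrative assumption.

```python
import numpy as np

def evolving_attention_map(prev_maps, curr_logits, conv_kernel, alpha=0.5):
    """Evolve attention maps across layers with a residual convolution.

    prev_maps: (H, n, n) attention maps from the previous layer;
    curr_logits: (H, n, n) raw Q K^T / sqrt(d) logits of the current layer;
    conv_kernel: (k, k) kernel applied to each head's map.
    """
    H, n, _ = prev_maps.shape
    k = conv_kernel.shape[0]
    pad = k // 2
    evolved = np.empty_like(prev_maps)
    for h in range(H):
        padded = np.pad(prev_maps[h], pad)
        out = np.zeros((n, n))
        for i in range(k):
            for j in range(k):
                out += conv_kernel[i, j] * padded[i:i + n, j:j + n]
        evolved[h] = out
    logits = alpha * evolved + (1.0 - alpha) * curr_logits
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)
```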
Multi-scale feature hierarchies have witnessed success in the computer vision area. This further motivates researchers to design multi-scale Transformers for natural language processing, mostly based on the self-attention mechanism, for example by restricting the receptive field across heads or extracting local fine-grained features via convolutions. However, most existing works directly model local features while ignoring word boundary information, which leads to redundant and ambiguous attention distributions and a lack of interpretability. In this work, we define these scales over different linguistic units, including sub-words, words, and phrases. We build a multi-scale Transformer model by establishing relations among the scales based on word-boundary information and phrase-level prior knowledge. The proposed Universal Multi-Scale Transformer, namely UMST, is evaluated on two sequence generation tasks. Notably, it yields consistent performance gains over strong baselines on several test sets without sacrificing efficiency.