Design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application to modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with an (exponential) moving average to incorporate the inductive bias of position-aware local dependencies into the attention mechanism. We further propose a variant with linear time and space complexity that incurs only minimal quality loss, obtained by splitting the whole sequence into multiple chunks of fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.
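To make the moving-average idea above concrete, the following is a minimal sketch (toy shapes, with per-dimension decay and damping as assumptions, not the authors' implementation) of a damped exponential moving average applied to token embeddings before a gated attention layer:

```python
# A minimal sketch of a per-dimension damped EMA over a token sequence.
# Shapes, the damping factor, and the recurrence details are assumptions.
import numpy as np

def ema_over_sequence(x: np.ndarray, alpha: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Damped EMA: y_t = alpha * x_t + (1 - alpha * delta) * y_{t-1}.

    x:     (seq_len, dim) token embeddings
    alpha: (dim,) decay weights in (0, 1)
    delta: (dim,) damping factors in (0, 1)
    """
    y = np.zeros_like(x)
    prev = np.zeros(x.shape[1])
    for t in range(x.shape[0]):
        prev = alpha * x[t] + (1.0 - alpha * delta) * prev
        y[t] = prev
    return y

# Usage on random data; the EMA output would then feed the gated attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
out = ema_over_sequence(x, alpha=np.full(4, 0.5), delta=np.full(4, 0.9))
print(out.shape)  # (8, 4)
```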
User-generated social media data is constantly changing as new trends influence online discussions, leading to test-data distribution shift for social media NLP applications. In addition, training data is often subject to change as user data is deleted. Most current NLP systems are static and rely on fixed training data; as a result, they cannot adapt to temporal change, both test distribution shift and training data deletion, without frequent, costly re-training. In this paper, we study temporal adaptation through the task of longitudinal hashtag prediction and propose a non-parametric technique as a simple but effective solution: non-parametric classifiers use a datastore that can be updated, either to adapt to test distribution shift or to training data deletion, without re-training. We release a new benchmark dataset consisting of 713M tweets from 2021 together with their hashtags, divided into consecutive temporal buckets. We compare parametric neural hashtag classification and hashtag generation models, which require re-training to adapt, against a non-parametric, training-free dense retrieval method that returns the nearest neighbors' hashtags based on text-embedding distance. On our longitudinal Twitter dataset, we find that dense nearest-neighbor retrieval achieves a 64.12% relative performance gain over the best parametric baseline on test sets that exhibit distribution shift, without requiring gradient-based re-training. Furthermore, we show that our datastore approach is particularly well suited to dynamically deleted user data, with negligible computational cost and performance loss. Our novel benchmark dataset and empirical analysis can support future research on the important challenges that arise when deploying AI systems on real-world user data.
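The non-parametric idea above can be pictured as a datastore of (text embedding, hashtag) pairs that supports insertion and deletion without re-training, with prediction by nearest-neighbor majority vote. The embedding format and voting rule in this sketch are illustrative assumptions, not the paper's exact setup:

```python
# A minimal sketch of an updatable hashtag datastore with kNN prediction.
import numpy as np
from collections import Counter

class HashtagDatastore:
    def __init__(self):
        self.keys, self.tags = [], []

    def add(self, embedding: np.ndarray, hashtag: str) -> None:
        self.keys.append(embedding)
        self.tags.append(hashtag)

    def delete_user_data(self, indices: set) -> None:
        # Deletion requires no re-training: simply drop the entries.
        self.keys = [k for i, k in enumerate(self.keys) if i not in indices]
        self.tags = [t for i, t in enumerate(self.tags) if i not in indices]

    def predict(self, query: np.ndarray, k: int = 5) -> str:
        # Majority vote over the k nearest neighbors by embedding distance.
        dists = np.linalg.norm(np.stack(self.keys) - query, axis=1)
        nearest = np.argsort(dists)[:k]
        return Counter(self.tags[i] for i in nearest).most_common(1)[0][0]

ds = HashtagDatastore()
ds.add(np.array([0.9, 0.1]), "#food")
ds.add(np.array([0.8, 0.2]), "#food")
ds.add(np.array([0.1, 0.9]), "#crypto")
print(ds.predict(np.array([0.85, 0.15]), k=2))  # "#food"
```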
One of the most impressive results of recent NLP history is the ability of pre-trained language models to solve new tasks in a zero-shot setting. To achieve this, NLP tasks are framed as natural language prompts, generating a response indicating the predicted output. Nonetheless, the performance in such settings often lags far behind its supervised counterpart, suggesting a large space for potential improvement. In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance. Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency, encouraging consistent predictions over this diverse set of prompts. Our method makes it possible to fine-tune the model either with extra unlabeled training data, or directly on test input at inference time in an unsupervised manner. In experiments, our approach outperforms the state-of-the-art zero-shot learner, T0 (Sanh et al., 2022), on 9 out of 11 datasets across 4 NLP tasks by up to 10.6 absolute points in terms of accuracy. The gains are often attained with a small number of unlabeled examples.
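One way to picture the prompt-consistency regularizer described above is as a divergence penalty between the answer distributions the model produces under two different prompts for the same input. The symmetric-KL form and the toy logits below are assumptions for illustration, not necessarily the paper's exact objective:

```python
# A minimal sketch of a consistency loss between predictions from two prompts.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def symmetric_kl(p, q, eps=1e-9):
    # Symmetric KL divergence between two answer distributions.
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Logits a (hypothetical) model assigns to the answer choices under two prompts.
logits_prompt_a = np.array([2.0, 0.5, -1.0])
logits_prompt_b = np.array([1.5, 1.0, -0.5])
consistency_loss = symmetric_kl(softmax(logits_prompt_a), softmax(logits_prompt_b))
print(consistency_loss)  # minimized when the two prompts yield the same prediction
```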
Retrieval-based language models (R-LM) estimate the probability of natural language text by combining a standard language model (LM) with examples retrieved from an external datastore at test time. While effective, a major bottleneck of using these models in practice is the computationally costly datastore search, which can be performed as frequently as every time step. In this paper, we present RetoMaton, a retrieval automaton, based on (1) saving pointers between consecutive datastore entries and (2) clustering entries into "states". This effectively results in a weighted finite automaton built on top of the datastore, rather than representing the datastore as a flat list. The creation of the automaton is unsupervised, and a RetoMaton can be constructed from any text collection: either the original training corpus or another domain. Traversing this automaton at inference time, in parallel with LM inference, reduces perplexity by up to 1.85, or alternatively saves up to 83% of the nearest-neighbor searches of $k$NN-LM (Khandelwal et al., 2020) without hurting perplexity. Our code and trained models are available at https://github.com/neulab/retomaton.
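A heavily simplified sketch of the pointer idea behind RetoMaton (clustering into states is omitted, and this is not the released implementation): each datastore entry stores the token that followed it and a pointer to the next entry, so after one retrieval the model can follow pointers and only fall back to a full nearest-neighbor search when the pointers no longer match the generated text:

```python
# A minimal sketch of pointer-following over a flat datastore (toy data).
import numpy as np

datastore_keys = np.random.randn(1000, 16)              # context vectors
datastore_next_token = np.random.randint(0, 50, 1000)   # token that followed each context
datastore_pointer = np.arange(1, 1001) % 1000           # entry i points to entry i+1

def knn_search(query, k=4):
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    return np.argsort(dists)[:k]

def step(query, generated_token, carried_entries):
    """Return datastore entries for this step, avoiding a search when possible."""
    # Keep pointers whose stored next-token matches what was actually generated.
    survivors = [datastore_pointer[i] for i in carried_entries
                 if datastore_next_token[i] == generated_token]
    if survivors:                        # cheap path: follow saved pointers
        return survivors
    return list(knn_search(query))       # fallback: full nearest-neighbor search

entries = list(knn_search(np.random.randn(16)))          # initial full search
entries = step(np.random.randn(16), generated_token=7, carried_entries=entries)
print(entries)
```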
Fine-tuning large pre-trained language models on downstream tasks has become the de-facto learning paradigm in NLP. However, conventional approaches fine-tune all the parameters of the pre-trained model, which becomes prohibitive as the model size and the number of tasks grow. Recent work has proposed a variety of parameter-efficient transfer learning methods that only fine-tune a small number of (extra) parameters while attaining strong performance. While effective, the critical ingredients for success and the connections among the various methods are poorly understood. In this paper, we break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them. Specifically, we re-frame them as modifications to specific hidden states in pre-trained models, and define a set of design dimensions along which the different methods vary, such as the function used to compute the modification and the position at which the modification is applied. Through comprehensive empirical studies across machine translation, text summarization, language understanding, and text classification benchmarks, we utilize the unified view to identify important design choices in previous methods. Furthermore, our unified framework enables the transfer of design elements across different methods; as a result, we are able to instantiate new parameter-efficient fine-tuning methods that tune fewer parameters than previous methods while being more effective, achieving results comparable to fine-tuning all parameters on all four tasks.
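The unified view above can be pictured as adding a learned modification to a chosen hidden state, h <- h + f(x). Below is a minimal sketch using an adapter-style bottleneck as one illustrative instantiation; the dimensions and scaling are assumptions:

```python
# A minimal sketch of the "modify a hidden state" view of parameter-efficient tuning.
import numpy as np

def modification(x: np.ndarray, w_down: np.ndarray, w_up: np.ndarray, scale: float) -> np.ndarray:
    """Bottleneck modification: project down, apply ReLU, project up, then scale."""
    return scale * (np.maximum(x @ w_down, 0.0) @ w_up)

rng = np.random.default_rng(0)
dim, bottleneck = 16, 4
x = rng.normal(size=(dim,))                 # input fed to the modification function
h = rng.normal(size=(dim,))                 # hidden state being modified
w_down = rng.normal(size=(dim, bottleneck)) * 0.1   # only these small matrices are trained
w_up = rng.normal(size=(bottleneck, dim)) * 0.1
h_modified = h + modification(x, w_down, w_up, scale=1.0)
print(h_modified.shape)  # (16,)
```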
Non-parametric neural language models (NLMs) learn predictive distributions of text by utilizing an external datastore, which allows them to learn through explicit memorization of training datapoints. While effective, these models often require retrieval from a large datastore at test time, significantly increasing the inference overhead and thus limiting the deployment of non-parametric NLMs in practical applications. In this paper, we take the recently proposed $k$-nearest-neighbor language model (Khandelwal et al., 2020) as an example and explore methods for improving its efficiency along various dimensions. Experiments on the standard WikiText-103 benchmark and domain-adaptation datasets show that our methods achieve up to a 6x speed-up in inference while retaining comparable performance. The empirical analysis we present can provide guidelines for future research seeking to develop or deploy more efficient non-parametric NLMs.
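For context, here is a minimal sketch of the $k$NN-LM interpolation that this line of work starts from, plus one illustrative efficiency lever (skipping retrieval when the base LM is already confident). The interpolation weight and threshold are assumptions, not the paper's tuned values:

```python
# A minimal sketch of kNN-LM interpolation with an adaptive-retrieval shortcut.
import numpy as np

def knn_lm_step(p_lm: np.ndarray, p_knn: np.ndarray, lam: float = 0.25) -> np.ndarray:
    """p(w | context) = lam * p_knn + (1 - lam) * p_lm."""
    return lam * p_knn + (1.0 - lam) * p_lm

def maybe_retrieve(p_lm: np.ndarray, retrieve_fn, threshold: float = 0.9) -> np.ndarray:
    # Skip the costly datastore search when the base LM is already confident.
    if p_lm.max() >= threshold:
        return p_lm
    return knn_lm_step(p_lm, retrieve_fn())

p_lm = np.array([0.7, 0.2, 0.1])
p_final = maybe_retrieve(p_lm, retrieve_fn=lambda: np.array([0.5, 0.4, 0.1]))
print(p_final)  # interpolated next-token distribution
```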
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: they can improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
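The allocation loop of an index policy can be sketched as follows. Note that the index used here is a simple mean-plus-bonus stand-in, not the Gittins Index computation described above (which requires solving a dynamic program); the sketch only illustrates how an index rule allocates participants between arms with exponentially distributed rewards:

```python
# A minimal sketch of index-based adaptive allocation with exponential rewards.
# `arm_index` is a placeholder index, NOT the Gittins Index.
import numpy as np

rng = np.random.default_rng(1)
true_rates = [1.0, 2.0]          # exponential rate of each arm (mean reward = 1 / rate)
counts = np.zeros(2)
sums = np.zeros(2)

def arm_index(i: int) -> float:
    # Placeholder index: observed mean reward plus a simple exploration bonus.
    if counts[i] == 0:
        return np.inf
    return sums[i] / counts[i] + 1.0 / np.sqrt(counts[i])

for t in range(200):
    arm = int(np.argmax([arm_index(i) for i in range(2)]))   # allocate next participant
    reward = rng.exponential(1.0 / true_rates[arm])
    counts[arm] += 1
    sums[arm] += reward

print(counts, sums / counts)  # allocation counts and empirical mean rewards
```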
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such datasets are usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet-pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch-embedding tokens under a different perturbation. To make the most of the Transformer given limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer is required to identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens, simultaneously performing difficulty measurement and maintaining the consistency of the self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification, compared to ImageNet-pretrained weights and state-of-the-art self-supervised learning approaches.
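A minimal BYOL-style sketch of the online/target two-branch setup (the Transformer backbone and the auxiliary difficulty-ranking task are omitted, and the linear "networks" are toy stand-ins): the online branch predicts the target branch's representation of a differently perturbed view, and the target weights track the online weights via an exponential moving average:

```python
# A minimal sketch of online/target self-supervised representation learning.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w_online = rng.normal(size=(dim, dim)) * 0.1
w_target = w_online.copy()

def perturb(x):
    # Stand-in for the paper's token perturbations.
    return x + rng.normal(scale=0.1, size=x.shape)

x = rng.normal(size=(dim,))                 # a patch-embedding token (toy)
online_repr = perturb(x) @ w_online         # online view
target_repr = perturb(x) @ w_target         # target view (no gradient in practice)

# Prediction loss: the online branch should match the target representation.
loss = np.mean((online_repr - target_repr) ** 2)

# Momentum update of the target branch toward the online branch.
momentum = 0.99
w_target = momentum * w_target + (1.0 - momentum) * w_online
print(loss)
```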
Text clustering and topic extraction are two important tasks in text mining, and they are usually performed separately. For topic extraction to facilitate clustering, we can first project texts into a topic space and then run a clustering algorithm to obtain clusters. For clustering to promote topic extraction, we can first obtain clusters with a clustering algorithm and then extract cluster-specific topics. However, this naive strategy ignores the fact that text clustering and topic extraction are strongly correlated and follow a chicken-and-egg relationship; performing them separately prevents them from benefiting each other and from achieving the best overall performance. In this paper, we propose an unsupervised text clustering and topic extraction framework (ClusTop) which integrates text clustering and topic extraction into a unified framework and can achieve high-quality clustering results while extracting topics from each cluster simultaneously. Our framework includes four components: enhanced language model training, dimensionality reduction, clustering, and topic extraction, where the enhanced language model can be viewed as a bridge between clustering and topic extraction. On the one hand, it provides text embeddings with a strong cluster structure, which facilitates effective text clustering; on the other hand, its self-attention architecture attends closely to topic-related words, which aids topic extraction. Moreover, the training of the enhanced language model is unsupervised. Experiments on two datasets demonstrate the effectiveness of our framework and provide benchmarks for different model combinations within this framework.
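The embed, reduce, cluster, then extract-topic-words pipeline can be sketched with off-the-shelf components standing in for the enhanced language model and the paper's actual modules (TF-IDF, truncated SVD, and k-means here are illustrative substitutes):

```python
# A minimal sketch of an embed -> reduce -> cluster -> topic-words pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
import numpy as np

docs = ["the cat sat on the mat", "dogs and cats are pets",
        "stocks fell sharply today", "the market rallied after earnings"]

vec = TfidfVectorizer(stop_words="english")
x = vec.fit_transform(docs)                                   # text embeddings (TF-IDF stand-in)
x_reduced = TruncatedSVD(n_components=2).fit_transform(x)     # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x_reduced)

terms = np.array(vec.get_feature_names_out())
for c in range(2):
    # Topic words: highest mean TF-IDF terms within each cluster.
    rows = np.where(labels == c)[0]
    cluster_mean = np.asarray(x[rows].mean(axis=0)).ravel()
    print(c, terms[cluster_mean.argsort()[::-1][:3]])
```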
This paper presents technologies for predicting a user's next intent with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models users' historical behaviors, the rich content users interact with, and the relations between them. We further introduce a Transformer-based model which integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system can effectively enhance the performance of downstream tasks while retaining explainability.
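One simple way to picture combining a learned next-intent scorer with expert rules from a knowledge graph is to let rule-derived candidate intents mask the scores, so only intents the graph permits can be predicted. The scorer, intents, and rule table below are hypothetical placeholders, not Alipay's deployed system:

```python
# A minimal sketch of rule-constrained next-intent prediction.
import numpy as np

intents = ["order_food", "book_movie", "pay_bill", "hail_taxi"]
# Hypothetical rule: after this observed behavior, the graph allows these intents.
kg_allowed = {"browsed_restaurants": {"order_food", "hail_taxi"}}

def score_intents(behavior_sequence):
    # Placeholder for a Transformer over the user's historical behaviors.
    rng = np.random.default_rng(len(behavior_sequence))
    return rng.normal(size=len(intents))

def predict_next_intent(behavior_sequence):
    scores = score_intents(behavior_sequence)
    allowed = kg_allowed.get(behavior_sequence[-1], set(intents))
    mask = np.array([0.0 if name in allowed else -np.inf for name in intents])
    return intents[int(np.argmax(scores + mask))]

print(predict_next_intent(["opened_app", "browsed_restaurants"]))
```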