Previous studies observed that finetuned models may be better base models than the vanilla pretrained model. Such a model, finetuned on some source dataset, may provide a better starting point for a new finetuning process on a desired target dataset. Here, we perform a systematic analysis of this intertraining scheme, over a wide range of English classification tasks. Surprisingly, our analysis suggests that the potential intertraining gain can be analyzed independently for the target dataset under consideration, and for a base model being considered as a starting point. This is in contrast to the current perception that the alignment between the target dataset and the source dataset used to generate the base model is a major factor in determining intertraining success. We analyze different aspects that contribute to each. Furthermore, we leverage our analysis to propose a practical and efficient approach to determine if and how to select a base model in real-world settings. Lastly, we release a continuously updated ranking of the best models in the HuggingFace hub per architecture at https://ibm.github.io/model-recycling/.
Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, the ColD Fusion-based model outperforms RoBERTa by 2.45 points on average without any changes to the architecture.
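To make the recycling loop concrete, here is a minimal sketch of one collaborative round, assuming the hub fuses contributor checkpoints by plain parameter averaging over their state dicts; the paper's actual fusion rule, scheduling, and checkpoint handling may differ.

```python
import torch

def cold_fusion_round(shared_sd, contributor_sds):
    """One collaborative round (sketch): each contributor finetunes the shared model locally
    and sends back only its weights; the hub fuses them by averaging -- no raw data is shared."""
    fused = {}
    for k, v in shared_sd.items():
        if torch.is_floating_point(v):
            fused[k] = torch.stack([sd[k] for sd in contributor_sds]).mean(dim=0)
        else:
            fused[k] = v.clone()                  # leave integer buffers untouched
    return fused

# Toy usage: three contributors return finetuned copies of a two-parameter "model".
shared = {"w": torch.zeros(2), "steps": torch.tensor(0)}
contribs = [{"w": torch.full((2,), float(i)), "steps": torch.tensor(0)} for i in range(1, 4)]
print(cold_fusion_round(shared, contribs)["w"])   # tensor([2., 2.])
```

The fused checkpoint would then replace the shared model for the next round, which is what makes the loop iterative.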
For most natural language processing tasks, the dominant practice is to finetune large pretrained transformer models (e.g., BERT) using smaller downstream datasets. Despite the success of this approach, it remains unclear to what extent these gains are attributable to the massive background corpora employed for pretraining versus to the pretraining objectives themselves. This paper introduces a large-scale study of self-pretraining, in which the same (downstream) training data is used for both pretraining and finetuning. In experiments addressing both ELECTRA and RoBERTa models and 10 distinct downstream datasets, we observe that self-pretraining rivals standard pretraining on the BookWiki corpus (despite using around $10\times$-$500\times$ less data), outperforming the latter on 7 and 5 datasets, respectively. Surprisingly, these task-specific pretrained models often perform well on other tasks, including the GLUE benchmark. Our results suggest that, in many scenarios, performance gains attributable to pretraining are driven primarily by the pretraining objective itself and are not always attributable to the incorporation of large-scale datasets. These findings are especially relevant given concerns about intellectual property and offensive content in web-scale pretraining data.
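Since the claim is that the masked-language-modeling objective itself, applied to the downstream corpus, drives much of the gain, the sketch below shows the BERT-style masking step such a self-pretraining stage would apply to the task's own text; the 15%/80/10/10 proportions are the usual defaults and an assumption here, not a detail taken from the paper.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, special_mask, mlm_prob=0.15):
    """BERT-style masking: sample mlm_prob of the non-special tokens as MLM targets;
    of those, ~80% become the mask token, ~10% a random token, ~10% stay unchanged."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    prob = torch.full(labels.shape, mlm_prob)
    prob.masked_fill_(special_mask, 0.0)                   # never mask special tokens
    targets = torch.bernoulli(prob).bool()
    labels[~targets] = -100                                # ignored by the cross-entropy loss
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & targets
    input_ids[replaced] = mask_token_id
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & targets & ~replaced
    input_ids[randomized] = torch.randint(vocab_size, labels.shape)[randomized]
    return input_ids, labels

# Toy usage: a batch of two "sentences" over a 100-token vocabulary (mask id 1 is arbitrary).
ids = torch.randint(5, 100, (2, 10))
special = torch.zeros_like(ids, dtype=torch.bool)
masked_ids, mlm_labels = mask_tokens(ids, mask_token_id=1, vocab_size=100, special_mask=special)
```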
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase. However, the out-of-distribution (OOD) generalization problem remains a challenge in many NLP tasks, limiting the real-world deployment of these methods. This paper presents the first attempt at creating a unified benchmark named GLUE-X for evaluating OOD robustness in NLP models, highlighting the importance of OOD robustness and providing insights on how to measure the robustness of a model and how to improve it. The benchmark includes 13 publicly available datasets for OOD testing, and evaluations are conducted on 8 classic NLP tasks over 19 popularly used PLMs. Our findings confirm the need for improved OOD accuracy in NLP tasks, as significant performance degradation was observed in all settings compared to in-distribution (ID) accuracy.
Research on emotion analysis is scattered across different label formats (e.g., polarity types, basic emotion categories, and affective dimensions), language levels (word vs. sentence vs. discourse) and, of course, natural languages (a few well-resourced, many more under-resourced) and text genres (e.g., product reviews, tweets, news). The resulting heterogeneity makes data and software developed under these conflicting constraints hard to compare and challenging to integrate. To address this unsatisfactory state of affairs, we propose a training scheme that learns a shared latent representation of emotion independent of different label formats, natural languages, and even different model architectures. Experiments on a wide range of datasets indicate that this approach yields the desired interoperability without penalizing prediction quality. Code and data are archived under DOI 10.5281/zenodo.5466068.
In recent years, pretrained language models have revolutionized the NLP world, achieving state-of-the-art performance across various downstream tasks. However, in many cases these models do not perform well when labeled data is scarce and the model is expected to operate in a zero- or few-shot setting. Recently, several works have shown that continued pretraining, or performing a second pretraining phase better aligned with the downstream task, can lead to improved results, especially in scarce-data settings. Here, we propose to leverage sentiment-carrying discourse markers to generate large-scale weakly-labeled data, which in turn can be used to adapt language models for sentiment analysis. Extensive experimental results show the value of our approach on various benchmark datasets, including the finance domain. Code, models and data are available at https://github.com/ibm/tslm-discourse-markers.
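As a toy illustration of the weak-labeling idea, the sketch below assigns a weak sentiment label to sentences that open with a sentiment-carrying discourse marker; the marker lexicons and the labeling rule are hypothetical stand-ins rather than the ones used in the paper.

```python
# Hypothetical marker lexicons; the paper's markers and rules may differ.
POSITIVE_MARKERS = {"fortunately", "thankfully", "happily"}
NEGATIVE_MARKERS = {"unfortunately", "sadly", "regrettably"}

def weak_sentiment_label(sentence):
    """Return a weak label when the sentence opens with a sentiment-carrying discourse
    marker, and None otherwise (unmarked sentences are simply left unlabeled)."""
    tokens = sentence.strip().lower().split()
    first = tokens[0].rstrip(",") if tokens else ""
    if first in POSITIVE_MARKERS:
        return "positive"
    if first in NEGATIVE_MARKERS:
        return "negative"
    return None

corpus = ["Thankfully, the refund arrived within a day.",
          "Sadly, the battery died after a week.",
          "The package arrived on Tuesday."]
weak_data = [(s, weak_sentiment_label(s)) for s in corpus if weak_sentiment_label(s)]
print(weak_data)   # two weakly labeled examples; the third sentence is skipped
```

The resulting weakly labeled corpus is what the second-phase adaptation of the language model would train on.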
Keeping the performance of language technologies optimal as time passes is of great practical interest. Here, we examine temporal effects on system performance, establishing more nuanced terminology for discussing the topic and appropriate experimental design to support investigation of the observed phenomena. We present a set of experiments with systems powered by large neural pretrained representations for English, demonstrating that {\em temporal model deterioration} is not as big a concern as some prior work suggests, with some models in fact improving when tested on data drawn from a later time period. However, {\em temporal domain adaptation} is beneficial: systems trained on temporally more recent data perform better on a given time period. Our experiments show that the distinction between temporal model deterioration and temporal domain adaptation becomes salient when pretrained representations are used. Finally, we examine the efficacy of two approaches to temporal domain adaptation that require no human annotation of new data, with self-labeling proving superior to continued pretraining. Notably, for named entity recognition, self-labeling leads to better temporal adaptation than human annotation.
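A minimal sketch of the self-labeling recipe follows, with a simple scikit-learn classifier standing in for the neural models: the system trained on the earlier period annotates the later, unlabeled period and is retrained on the union. Filtering pseudo-labels by confidence is a common refinement that is omitted here and may not match the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical data: labeled texts from an earlier period, unlabeled texts from a later one.
old_texts, old_labels = ["good service overall", "terrible delay again"], [1, 0]
new_texts = ["smooth experience", "awful wait at pickup"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(old_texts, old_labels)                      # train on the earlier time period

# Self-labeling: the old-period model annotates the new period, then we retrain on both.
pseudo_labels = model.predict(new_texts)
model.fit(old_texts + new_texts, list(old_labels) + list(pseudo_labels))
```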
Changing how pre-trained models behave -- e.g., improving their performance on a downstream task or mitigating biases learned during pre-training -- is a common practice when developing machine learning systems. In this work, we propose a new paradigm for steering the behavior of neural networks, centered around \textit{task vectors}. A task vector specifies a direction in the weight space of a pre-trained model, such that movement in that direction improves performance on the task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined together through arithmetic operations such as negation and addition, and the behavior of the resulting model is steered accordingly. Negating a task vector decreases performance on the target task, with little change in model behavior on control tasks. Moreover, adding task vectors together can improve performance on multiple tasks at once. Finally, when tasks are linked by an analogy relationship of the form ``A is to B as C is to D", combining task vectors from three of the tasks can improve performance on the fourth, even when no data from the fourth task is used for training. Overall, our experiments with several models, modalities and tasks show that task arithmetic is a simple, efficient and effective way of editing models.
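The arithmetic itself is easy to express over model state dicts; the sketch below, assuming PyTorch checkpoints and one scaling coefficient per vector, shows how a task vector is built and how negation and addition are applied. The synthetic state dicts in the usage lines are placeholders for real checkpoints.

```python
import torch

def task_vector(pretrained_sd, finetuned_sd):
    """tau = theta_finetuned - theta_pretrained, per floating-point parameter tensor."""
    return {k: finetuned_sd[k] - pretrained_sd[k]
            for k, v in pretrained_sd.items() if torch.is_floating_point(v)}

def apply_task_vectors(pretrained_sd, vectors, coeffs):
    """theta_new = theta_pretrained + sum_i lambda_i * tau_i; negation is a coefficient of -1."""
    new_sd = {k: v.clone() for k, v in pretrained_sd.items()}
    for tau, lam in zip(vectors, coeffs):
        for k, delta in tau.items():
            new_sd[k] = new_sd[k] + lam * delta
    return new_sd

# Usage sketch with synthetic state dicts (real ones would come from torch.load on checkpoints).
theta_pre = {"w": torch.zeros(2, 2)}
theta_a = {"w": torch.ones(2, 2)}
tau_a = task_vector(theta_pre, theta_a)
forget_a = apply_task_vectors(theta_pre, [tau_a], [-1.0])          # negation steers away from task A
multi = apply_task_vectors(theta_pre, [tau_a, tau_a], [0.5, 0.5])  # addition combines task vectors
```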
Transfer learning, where a model is first pre-trained on a data-rich task before being finetuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
The lifelong learning paradigm in machine learning is an attractive alternative, not only because of its resemblance to biological learning but also because of its potential to reduce energy waste by avoiding excessive model re-training. A key challenge to this paradigm is the phenomenon of catastrophic forgetting. With the increasing popularity and success of pre-trained models in machine learning, we pose the question: What role does pre-training play in lifelong learning, specifically with respect to catastrophic forgetting? We investigate existing methods in the context of large pre-trained models and evaluate their performance on a variety of text and image classification tasks, including a large-scale study using a novel dataset of 15 diverse NLP tasks. Across all settings, we observe that generic pre-training implicitly alleviates the effects of catastrophic forgetting when learning multiple tasks sequentially, compared to randomly initialized models. We then further investigate why pre-training alleviates forgetting in this setting. We study this phenomenon by analyzing the loss landscape, finding that pre-trained weights appear to ease forgetting by leading to wider minima. Based on this insight, we propose jointly optimizing the current task loss and the sharpness of the loss basin in order to explicitly encourage wider basins during sequential fine-tuning. We show that this optimization approach yields performance comparable to that of task-sequential continual learning methods across multiple settings, without the need to retain a memory that scales with the number of tasks.
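One standard way to jointly optimize the current task loss and the sharpness of the loss basin is a sharpness-aware (SAM-style) update, sketched below as a generic stand-in; it is not necessarily the exact objective the authors use.

```python
import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One sharpness-aware update: perturb the weights toward higher loss within an L2 ball
    of radius rho, then apply the gradient computed at that perturbed point."""
    x, y = batch
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()                    # gradient at the current weights
    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)                                  # climb to the (approximate) worst point
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()                    # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                                  # restore the original weights
    optimizer.step()                                   # sharpness-aware update

# Toy usage: one step on a small regression model (a stand-in for sequential finetuning).
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
sam_step(model, torch.nn.functional.mse_loss, (x, y), opt)
```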
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
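The core pre-training objective can be written compactly: the sketch below shows the symmetric contrastive (InfoNCE) loss over a batch of image and text embeddings, with a fixed temperature standing in for CLIP's learned one.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over an (image, text) batch: matching pairs sit on the diagonal
    of the similarity matrix, and every other pair in the batch acts as a negative."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature        # [B, B] cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)              # image -> text direction
    loss_t = F.cross_entropy(logits.t(), targets)          # text -> image direction
    return (loss_i + loss_t) / 2

# Toy usage with random embeddings standing in for encoder outputs.
print(clip_contrastive_loss(torch.randn(16, 512), torch.randn(16, 512)))
```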
In circumstances where the performance of text classification models drops over time due to changes in the data, the development of models whose performance persists over time is important. The ability to predict a model's performance over time can help design models that remain effective for longer periods. In this paper, we study this problem by evaluating the capacity of various language models and classification algorithms to persist over time, and how dataset characteristics can help predict the temporal stability of different models. We perform longitudinal classification experiments on three datasets spanning between 6 and 19 years, involving diverse tasks and types of data. We find that it is possible to estimate how well a model will retain its performance over time based on (i) the model's performance over a restricted time period and its extrapolation to a longer period, and (ii) linguistic characteristics of the dataset, such as the degree of familiarity between subsets from different years. The findings of these experiments have important implications for the design of text classification models that aim to preserve performance over time.
In multi-label learning, a particular case of multi-task learning where a single data point is associated with multiple target labels, it has been widely assumed in the literature that, to obtain the best accuracy, the dependence among the labels should be modeled explicitly. This premise has led to a proliferation of methods for learning and predicting labels jointly, such that, for example, the prediction of one label influences the predictions of other labels. Even though it is now acknowledged that, in many contexts, a model of dependence is not required for optimal performance, such models continue to outperform independent models in some cases, hinting at explanations for their performance other than label dependence, which the literature has only recently begun to untangle. Leveraging and extending recent findings, we turn the original premise of multi-label learning on its head and specifically address the problem of joint modeling in cases where there is no measurable dependence among task labels; for example, when task labels come from separate problem domains. We carry insights from this study over to building a transfer learning approach that challenges the long-held assumption that the transferability of tasks stems from measures of similarity between the source and target domains or models. This allows us to design and test a transfer learning method that is model-driven rather than purely data-driven, and that is moreover black-box and model-agnostic (any base model class can be considered). We show that, in essence, we can create task dependence based on source model capacity. The results we obtain have important implications and provide clear directions for future work in both the multi-label and transfer learning domains.
Despite the recent success of multi-task learning and transfer learning in natural language processing (NLP), the effect of scaling up the number of tasks during pre-training has rarely been studied. As a step towards this goal, this paper introduces ExMix (Extreme Mixture): a massive collection of 107 supervised NLP tasks spanning diverse domains and task families. Using ExMix, we study the effect of multi-task pre-training at the largest scale to date and analyze co-training transfer among common families of tasks. Through this analysis, we show that manually curating an ideal set of tasks for multi-task pre-training is not straightforward, and that multi-task scaling can improve models on its own. Finally, we propose ExT5: a model pre-trained with a multi-task objective combining self-supervised span denoising and supervised ExMix. Through extensive experiments, we show that ExT5 outperforms strong T5 baselines on SuperGLUE, GEM, Rainbow, and Closed-Book QA tasks, as well as on several tasks outside of ExMix. ExT5 also significantly improves sample efficiency during pre-training.
Automated Program Repair (APR) is defined as the process of fixing a bug/defect in the source code by an automated tool. APR tools have recently achieved promising results by leveraging state-of-the-art Neural Language Processing (NLP) techniques. APR tools such as TFix and CodeXGLUE, which combine text-to-text transformers with software-specific techniques, currently outperform alternatives. However, in most APR studies the train and test sets are chosen from the same set of projects. In reality, however, APR models are meant to generalize to new and different projects. Therefore, there is a potential threat that reported APR models with high effectiveness perform poorly when the characteristics of the new project or its bugs differ from those of the training set (domain shift). In this study, we first define and measure the domain shift problem in automated program repair. We then propose a domain adaptation framework that can adapt an APR model to a given target project. We conduct an empirical study with three domain adaptation methods, FullFineTuning, TuningWithLightWeightAdapterLayers, and CurriculumLearning, using two state-of-the-art domain adaptation tools (TFix and CodeXGLUE) and two APR models on 611 bugs from 19 projects. The results show that our proposed framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by 23.4%. Another contribution of this study is a data synthesis method to address the lack of labelled data in APR. We leverage transformers to create a bug generator model. We use the generated synthetic data to domain adapt TFix and CodeXGLUE on the projects with no data (zero-shot learning), which results in an average improvement of 5.76% and 24.42% for TFix and CodeXGLUE, respectively.
In NLP, a large number of tasks involve pairwise comparison between two sequences (e.g., sentence similarity and paraphrase identification). Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders. Bi-encoders produce fixed-dimensional sentence representations and are computationally efficient; however, they usually underperform cross-encoders. Cross-encoders can leverage their attention heads to exploit inter-sentence interactions for better performance, but they require task fine-tuning and are computationally more expensive. In this paper, we present a completely unsupervised sentence representation model, termed Trans-Encoder, that combines the two learning paradigms into an iterative joint framework to simultaneously learn enhanced bi- and cross-encoders. Specifically, on top of a pre-trained language model (PLM), we first convert it into an unsupervised bi-encoder, and then alternate between the bi- and cross-encoder task formulations. In each alternation, one task formulation produces pseudo-labels that serve as learning signals for the other task formulation. We then propose an extension that conducts this self-distillation on multiple PLMs in parallel and uses the average of their pseudo-labels for mutual distillation. Trans-Encoder creates, to the best of our knowledge, the first completely unsupervised cross-encoder as well as a state-of-the-art unsupervised bi-encoder for sentence similarity. Both the bi-encoder and cross-encoder formulations of Trans-Encoder outperform recently proposed state-of-the-art unsupervised sentence encoders such as Mirror-BERT and SimCSE by up to 5% on the sentence similarity benchmarks.
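The alternation can be sketched with toy encoders standing in for the PLM-based ones: in each round, one formulation's scores serve as regression targets for the other. The architectures, losses, and loop lengths below are illustrative only, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiEncoder(nn.Module):
    """Encodes each input separately and scores pairs by cosine similarity."""
    def __init__(self, dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, a, b):
        return F.cosine_similarity(self.enc(a), self.enc(b), dim=-1)

class CrossEncoder(nn.Module):
    """Scores a pair jointly from the concatenated inputs."""
    def __init__(self, dim=32):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
    def forward(self, a, b):
        return self.scorer(torch.cat([a, b], dim=-1)).squeeze(-1)

def distill(student, teacher, pairs, steps=100, lr=1e-3):
    """Train the student to regress onto the teacher's scores (pseudo-labels)."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    a, b = pairs
    for _ in range(steps):
        with torch.no_grad():
            target = teacher(a, b)
        loss = F.mse_loss(student(a, b), target)
        opt.zero_grad(); loss.backward(); opt.step()

# Alternating self-distillation over random "sentence pair" features (illustrative data).
a, b = torch.randn(256, 32), torch.randn(256, 32)
bi, cross = BiEncoder(), CrossEncoder()
for _ in range(3):
    distill(cross, bi, (a, b))     # bi-encoder scores teach the cross-encoder
    distill(bi, cross, (a, b))     # cross-encoder scores teach the bi-encoder
```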
Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite question: How far can we get with a single GPU in just one day? We investigate the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU. Aside from re-analyzing nearly all components of the pretraining pipeline for this scenario and providing a modified pipeline with performance close to BERT, we investigate why scaling down is hard, and which modifications actually improve performance in this scenario. We provide evidence that even in this constrained setting, performance closely follows scaling laws observed in large-compute settings. Through the lens of scaling laws, we categorize a range of recent improvements to training and architecture and discuss their merit and practical applicability (or lack thereof) for the limited compute setting.
For natural language processing systems, two kinds of evidence support the use of text representations from neural language models pretrained on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia). On the other hand, the lack of grounded supervision calls into question how well these representations can ever capture meaning (Bender and Koller, 2020). We apply novel probes to recent language models -- specifically focusing on predicate-argument structure as operationalized by semantic dependencies (Ivanova et al., 2012) -- and find that, unlike syntax, semantics is not brought to the surface by today's pretrained models. We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning, yielding benefits on natural language understanding (NLU) tasks in the GLUE benchmark. This approach demonstrates the potential for general-purpose (rather than task-specific) linguistic supervision, above and beyond conventional pretraining and finetuning. Several diagnostics help to localize the benefits of our approach.
To understand neural network behavior, recent works quantitatively compare the representations learned by different networks using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on basic observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework for approaching this question through a concrete test: measures should have sensitivity to changes that affect functional behavior, and specificity against changes that do not. We quantify this through a variety of functional behaviors, including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge for further improvement.
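For concreteness, linear CKA, one of the most common of these measures, can be computed in a few lines; the sketch below follows the standard formulation (HSIC with linear kernels, normalized) and includes a small sanity check that it is invariant to orthogonal rotations of the features.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between representations X [n, d1] and Y [n, d2]."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic_xy = np.linalg.norm(X.T @ Y, "fro") ** 2       # cross-covariance energy
    hsic_xx = np.linalg.norm(X.T @ X, "fro") ** 2
    hsic_yy = np.linalg.norm(Y.T @ Y, "fro") ** 2
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)

# Sanity check: representations that differ only by an orthogonal rotation give CKA ~ 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(linear_cka(X, X @ Q))   # ~1.0
```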
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks but is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.