Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning trains on a mixture of supervised datasets and produces improved performance compared to self-supervised pretraining. Until now, massively multitask learning required simultaneous access to all datasets in the mixture and heavy compute resources that are only available to well-resourced teams. In this paper, we propose ColD Fusion, a method that provides the benefits of multitask learning but leverages distributed computation and requires limited communication and no sharing of data. Consequently, ColD Fusion can create a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based on. We show that ColD Fusion yields comparable benefits to multitask pretraining by producing a model that (a) attains strong performance on all of the datasets it was multitask trained on and (b) is a better starting point for finetuning on unseen datasets. We find that ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, the ColD Fusion-based model outperforms RoBERTa by 2.45 points on average, without any changes to the architecture.
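A minimal sketch of the collaborative loop described above, under the assumption that the fusion step is a plain parameter average of the contributors' models; `finetune_fn` and the equal weighting of contributions are placeholders, and the actual ColD Fusion procedure may fuse models differently.

```python
import copy

import torch


def fuse_contributions(base_model, contributor_models):
    """Average the parameters of models finetuned from the same base (assumed fusion rule)."""
    fused_state = copy.deepcopy(base_model.state_dict())
    for name in fused_state:
        stacked = torch.stack(
            [m.state_dict()[name].float() for m in contributor_models]
        )
        fused_state[name] = stacked.mean(dim=0).to(fused_state[name].dtype)
    base_model.load_state_dict(fused_state)
    return base_model


def cold_fusion_round(base_model, datasets, finetune_fn):
    """One round of the loop: contributors finetune locally, then their models are fused."""
    contributors = [
        finetune_fn(copy.deepcopy(base_model), dataset) for dataset in datasets
    ]
    return fuse_contributions(base_model, contributors)
```

Only model weights cross the contributor boundary in this sketch, which is what keeps communication limited and the data private.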
Question answering models commonly have access to two sources of "knowledge" during inference time: (1) parametric knowledge - the factual knowledge encoded in the model weights, and (2) contextual knowledge - external knowledge (e.g., a Wikipedia passage) given to the model to generate a grounded answer. Having these two sources of knowledge entangled together is a core issue for generative QA models, as it is unclear whether the answer stems from the given non-parametric knowledge or not. This lack of clarity has implications for trust, interpretability and factuality. In this work, we propose a new paradigm in which QA models are trained to disentangle the two sources of knowledge. Using counterfactual data augmentation, we introduce a model that predicts two answers for a given question: one based on the given contextual knowledge and one based on parametric knowledge. Our experiments on the Natural Questions dataset show that this approach improves the performance of QA models by making them more robust to knowledge conflicts between the two knowledge sources, while generating useful disentangled answers.
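A minimal sketch of the counterfactual augmentation idea, assuming a seq2seq QA model trained to emit both answers in a single target string; the function name and the target format are illustrative, not the paper's specification.

```python
def make_counterfactual_example(question, passage, gold_answer, substitute_answer):
    """Swap the gold answer in the passage so contextual and parametric knowledge disagree."""
    counterfactual_passage = passage.replace(gold_answer, substitute_answer)
    source = f"question: {question} context: {counterfactual_passage}"
    # The model is supervised to separate what the passage says from what it "remembers".
    target = f"contextual: {substitute_answer} parametric: {gold_answer}"
    return source, target


source, target = make_counterfactual_example(
    question="Who wrote Hamlet?",
    passage="Hamlet is a tragedy written by William Shakespeare around 1600.",
    gold_answer="William Shakespeare",
    substitute_answer="Christopher Marlowe",
)
```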
Previous studies observed that finetuned models may be better base models than the vanilla pretrained model. Such a model, finetuned on some source dataset, may provide a better starting point for a new finetuning process on a desired target dataset. Here, we perform a systematic analysis of this intertraining scheme over a wide range of English classification tasks. Surprisingly, our analysis suggests that the potential intertraining gain can be analyzed independently for the target dataset under consideration and for the base model being considered as a starting point. This is in contrast to the current perception that the alignment between the target dataset and the source dataset used to generate the base model is a major factor in determining intertraining success. We analyze the different aspects that contribute to each. Furthermore, we leverage our analysis to propose a practical and efficient approach for determining if and how to select a base model in real-world settings. Lastly, we release a continuously updated ranking of the best models in the HuggingFace hub per architecture: https://ibm.github.io/model-recycling/.
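A hedged sketch of the practical recipe implied by this analysis: score each candidate base model once, independently of the eventual target task, by its average finetuning gain over a small set of probe datasets; `finetune_and_eval` and `baseline_scores` are hypothetical placeholders, not the paper's exact selection procedure.

```python
from statistics import mean


def rank_base_models(candidates, probe_datasets, finetune_and_eval, baseline_scores):
    """Rank candidate base models by a target-independent average finetuning gain."""
    ranking = []
    for base in candidates:
        gains = [
            finetune_and_eval(base, dataset) - baseline_scores[dataset]
            for dataset in probe_datasets
        ]
        ranking.append((base, mean(gains)))
    # The best-ranked base model is then reused as the starting point for new targets.
    return sorted(ranking, key=lambda item: item[1], reverse=True)
```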
Text classification can be useful in many real-world scenarios, saving end users a great deal of time. However, building a custom classifier typically requires coding skills and ML knowledge, which poses a significant barrier for many potential users. To lower this barrier, we introduce Label Sleuth, a free open-source system for labeling text and creating text classifiers. The system is unique in that it (a) is a no-code system, (b) takes users to a working classifier within a few hours, and (c) is open for configuration and extension by developers. By open-sourcing Label Sleuth, we hope to build a community of users and developers that will broaden the utilization of NLP models.
We present the task of PreQuEL, Pre-(Quality-Estimation) Learning. A PreQuEL system predicts how well a given sentence will be translated, without recourse to the actual translation, thus eschewing unnecessary resource allocation when translation quality is bound to be low. PreQuEL can be defined relative to a given MT system (e.g., some industry service) or generally relative to the state-of-the-art. From a theoretical perspective, PreQuEL places the focus on the source text, tracing properties, possibly linguistic features, that make a sentence harder to machine translate. We develop a baseline model for the task and analyze its performance. We also develop a data augmentation method (from parallel corpora) that improves results substantially. We show that this augmentation method can improve the performance of the Quality-Estimation task as well. We investigate the properties of the input text that our model is sensitive to by testing it on challenge sets and different languages. We conclude that it is aware of syntactic and semantic distinctions, and that it correlates with, and even over-emphasizes, the importance of standard NLP features.
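A minimal sketch of what a PreQuEL-style baseline could look like, assuming a regression head on a multilingual encoder that sees only the source sentence; the encoder choice and head are assumptions, not the paper's exact baseline, and the head would of course have to be finetuned on (source sentence, quality score) pairs before its outputs are meaningful.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "xlm-roberta-base"  # assumed encoder; the paper's baseline may differ

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1, problem_type="regression"
)


def predict_translatability(source_sentence):
    """Predict an expected translation-quality score from the source text alone."""
    inputs = tokenizer(source_sentence, return_tensors="pt", truncation=True)
    return model(**inputs).logits.squeeze().item()
```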
Data exploration is an important step in every data science and machine learning project, including those involving text data. We provide a novel language tool, in the form of a publicly available Python library, for extracting patterns from text data. The library integrates the first public implementation of the existing GRASP algorithm. It allows users to extract patterns using a variety of general-purpose built-in linguistic attributes (such as hypernyms, part-of-speech tags and syntactic dependency labels), as envisioned by the original algorithm, as well as domain-specific custom attributes, which can be incorporated into the library by implementing two functions. The library is equipped with a web-based interface that empowers human users to conveniently explore the data through the extracted patterns, using complementary pattern-centric and example-centric views: the former includes a natural-language description and statistics for each extracted pattern; the latter shows the application of each extracted pattern to the training examples. We demonstrate the usefulness of the library for classification (spam detection and argument mining), model analysis (machine translation) and artifact discovery in datasets (SNLI and 20 Newsgroups).
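A hedged sketch of the underlying idea only, not the library's actual API: represent each example as a sequence of linguistic attributes and count which short attribute patterns separate positive from negative examples, with spaCy standing in as the supplier of the part-of-speech and dependency attributes mentioned above.

```python
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")


def attribute_ngrams(text, n=2):
    """Yield short patterns over part-of-speech and dependency-label attributes."""
    attrs = [f"{tok.pos_}/{tok.dep_}" for tok in nlp(text)]
    return [" ".join(attrs[i:i + n]) for i in range(len(attrs) - n + 1)]


def top_patterns(positive_texts, negative_texts, k=10):
    """Rank patterns by how much more often they appear in positive examples."""
    pos = Counter(p for text in positive_texts for p in attribute_ngrams(text))
    neg = Counter(p for text in negative_texts for p in attribute_ngrams(text))
    scores = {p: pos[p] - neg.get(p, 0) for p in pos}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```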
Vertical distributed learning exploits the local features collected by multiple learning workers to form a better global model. However, the data exchange between the workers and the model aggregator for parameter training incurs a heavy communication burden, especially when the learning system is built upon capacity-constrained wireless networks. In this paper, we propose a novel hierarchical distributed learning framework, where each worker separately learns a low-dimensional embedding of its locally observed data. The workers then perform communication-efficient distributed max-pooling to transmit the synthesized inputs to the aggregator. For data exchange over a shared wireless channel, we propose an opportunistic carrier-sensing-based protocol to implement the max operation over the output data of all learning workers. Our simulation experiments show that the proposed learning framework achieves almost the same model accuracy as learning from the concatenation of all workers' raw outputs, while requiring a communication load that is independent of the number of workers.
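A minimal numerical sketch of the framework described above, in which each worker maps its locally observed features to a low-dimensional embedding and the aggregator keeps only the element-wise maximum; the wireless carrier-sensing protocol itself is not modeled here, and the random linear embeddings are stand-ins for the workers' learned encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

num_workers, local_dim, embed_dim = 4, 16, 8

# Each worker holds its own (stand-in) embedding of its locally observed features.
worker_weights = [rng.normal(size=(local_dim, embed_dim)) for _ in range(num_workers)]
local_features = [rng.normal(size=local_dim) for _ in range(num_workers)]

# Workers transmit only their low-dimensional embeddings, never the raw features.
embeddings = np.stack([x @ w for x, w in zip(local_features, worker_weights)])

# The aggregator observes only the element-wise maximum across workers, which is the
# quantity the opportunistic carrier-sensing protocol would compute over the shared
# channel, so the uplink load does not grow with the number of workers.
aggregated = embeddings.max(axis=0)
print(aggregated.shape)  # (8,)
```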