In this paper, we present ExtremeBERT, a toolkit for accelerating and customizing BERT pretraining. Our goal is to provide an easy-to-use BERT pretraining toolkit for the research community and industry, so that pretraining popular language models on customized datasets becomes affordable with limited resources. Experiments show that, to achieve the same or better GLUE scores, the time cost of our toolkit is over $6\times$ less for BERT Base and $9\times$ less for BERT Large compared with the original BERT paper. The documentation and code are released at https://github.com/extreme-bert/extreme-bert under the Apache-2.0 license.
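To make the workflow concrete, the sketch below shows a generic masked-language-model pretraining run on a customized text corpus using the Hugging Face `transformers`/`datasets` APIs; it illustrates the kind of job the toolkit accelerates rather than ExtremeBERT's own interface, and the corpus path and hyperparameters are placeholders.

```python
# Illustrative only: a generic MLM pretraining loop with Hugging Face APIs,
# not ExtremeBERT's interface. Corpus path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM(BertConfig())  # randomly initialized BERT Base

raw = load_dataset("text", data_files={"train": "my_corpus.txt"})  # customized corpus (placeholder)
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-pretrain", per_device_train_batch_size=32,
                         max_steps=10_000, learning_rate=1e-4)
Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()
```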
Recent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models, and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks. Transformers is an open-source library with the goal of opening up these advances to the wider machine learning community. The library consists of carefully engineered state-of-the-art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community. Transformers is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. The library is available at https://github.com/huggingface/transformers.
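As a concrete illustration of the unified API, the minimal sketch below loads one community checkpoint by name and runs a single forward pass; the checkpoint name is just one example from the model hub.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load a pretrained checkpoint and its tokenizer by name from the model hub.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Pretraining makes model capacity usable across tasks.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```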
The rapid proliferation of machine learning models across domains and deployment settings has given rise to communities (e.g., industry practitioners) that seek to benchmark models across tasks and objectives of personal value. Unfortunately, these users cannot use standard benchmark results to perform such value-driven comparisons, as traditional benchmarks evaluate models on a single objective (e.g., average accuracy) and fail to provide a standardized training framework that controls for confounding variables (e.g., computational budget), making fair comparisons difficult. To address these challenges, we introduce the open-source Ludwig Benchmarking Toolkit (LBT), a personalized benchmarking toolkit for running end-to-end benchmark studies (from hyperparameter optimization to evaluation) across an easily extensible set of tasks, deep learning models, datasets, and evaluation metrics. LBT provides a configurable interface for controlling training and customizing evaluation, a standardized training framework for eliminating confounding variables, and support for multi-objective evaluation. We demonstrate how LBT can be used to create personalized benchmark studies with a large-scale comparative analysis for text classification across 7 models and 9 datasets. We explore the trade-offs between inference latency and performance, the relationships between dataset attributes and performance, and the effects of pretraining on convergence and robustness, showing how LBT can be used to satisfy various benchmarking objectives.
Recent trends in language modeling have focused on increasing performance through scaling, and have resulted in an environment where training language models is out of reach for most researchers and practitioners. While most in the community are asking how to push the limits of extreme computation, we ask the opposite question: How far can we get with a single GPU in just one day? We investigate the downstream performance achievable with a transformer-based language model trained completely from scratch with masked language modeling for a single day on a single consumer GPU. Aside from re-analyzing nearly all components of the pretraining pipeline for this scenario and providing a modified pipeline with performance close to BERT, we investigate why scaling down is hard, and which modifications actually improve performance in this scenario. We provide evidence that even in this constrained setting, performance closely follows scaling laws observed in large-compute settings. Through the lens of scaling laws, we categorize a range of recent improvements to training and architecture and discuss their merit and practical applicability (or lack thereof) for the limited compute setting.
Many prior language modeling efforts have shown that pre-training on an in-domain corpus can significantly improve performance on downstream domain-specific NLP tasks. However, the difficulties associated with collecting enough in-domain data might discourage researchers from approaching this pre-training task. In this paper, we conducted a series of experiments by pre-training Bidirectional Encoder Representations from Transformers (BERT) with different sizes of biomedical corpora. The results demonstrate that pre-training on a relatively small amount of in-domain data (4GB) with limited training steps can lead to better performance on downstream domain-specific NLP tasks compared with fine-tuning models pre-trained on general corpora.
The BERT family of neural language models has become highly popular due to its ability to provide rich, context-sensitive token encodings for sequences of text that generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.
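To see how the choice of subword tokenisation model plays out in practice, the small comparison below tokenises an Irish sentence with a monolingual checkpoint and with multilingual BERT; the gaBERT model identifier is an assumption about how the released checkpoint is named on the model hub, so substitute the identifier from the official release.

```python
from transformers import AutoTokenizer

# Model identifier for gaBERT is assumed; replace with the identifier from the official release.
ga_tok = AutoTokenizer.from_pretrained("DCU-NLP/bert-base-irish-cased-v1")  # assumed gaBERT id
mb_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

sentence = "Tá an aimsir go hálainn inniu."  # "The weather is lovely today."
print("gaBERT:", ga_tok.tokenize(sentence))
print("mBERT :", mb_tok.tokenize(sentence))
# A monolingual vocabulary typically splits Irish words into fewer, more meaningful subwords.
```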
Machine Learning for Source Code (ML4Code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. With this in mind, we introduce JEMMA, an Extensible Java Dataset for ML4Code Applications, which is a large-scale, diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is to lower the barrier to entry in ML4Code by providing the building blocks to experiment with source code models and tasks. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. JEMMA is also extensible, allowing users to add new properties and representations to the dataset and to evaluate tasks on them. Thus, JEMMA becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. To demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that JEMMA is designed to help with.
Code generation is a longstanding challenge that aims to generate a code snippet based on a natural language description. Usually, expensive text-code paired data are essential for training code generation models. Recently, thanks to the success of pre-training techniques, large language models trained on large-scale unlabelled code corpora have performed well in code generation. In this paper, we investigate how to leverage an unlabelled code corpus to train a model for library-oriented code generation. Since it is common practice for programmers to reuse third-party libraries, text-code paired data are hard to obtain due to the huge number of libraries. We observe that library-oriented code snippets are more likely to share similar code sketches. Hence, we present CERT with two steps: a sketcher generates a sketch, and then a generator fills in the details of the sketch. Both the sketcher and the generator are continually pre-trained upon a base model using unlabelled data. Furthermore, we craft two benchmarks named PandasEval and NumpyEval to evaluate library-oriented code generation. Experimental results demonstrate the impressive performance of CERT. For example, it surpasses the base model by an absolute 15.67% improvement in terms of pass@1 on PandasEval. Our work is available at https://github.com/microsoft/pycodegpt.
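A rough sketch of the sketcher-then-generator call structure is shown below; it uses `gpt2` purely as a stand-in checkpoint (the released CERT models are continually pre-trained code models), so the outputs are only meant to illustrate the two-stage decomposition.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in checkpoints ("gpt2") purely to illustrate the two-stage call structure;
# in CERT the sketcher and generator are continually pre-trained code models.
tok = AutoTokenizer.from_pretrained("gpt2")
sketcher = AutoModelForCausalLM.from_pretrained("gpt2")
generator = AutoModelForCausalLM.from_pretrained("gpt2")

def complete(model, prompt, max_new_tokens=64):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

problem = "# Compute the mean of column 'price' in a pandas DataFrame df\n"
sketch = complete(sketcher, problem)          # stage 1: generate an anonymized code sketch
code = complete(generator, problem + sketch)  # stage 2: fill in the details of the sketch
print(code)
```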
Training and evaluating language models increasingly requires the construction of meta-datasets: diverse, curated data collections with clear provenance. Natural language prompting has recently highlighted the benefits of meta-dataset curation by converting existing, supervised datasets into a diversity of novel pretraining tasks, improving zero-shot generalization. While these data-centric approaches have been successful for general-domain text, translating them to biomedical language modeling remains challenging, as labeled biomedical datasets are significantly underrepresented in popular data hubs. To address this challenge, we introduce BigBio, a community library of 126+ biomedical NLP datasets, currently covering 12 task categories and 10+ languages. BigBio facilitates reproducible meta-dataset curation via programmatic access to datasets and their metadata, and is compatible with current platforms for prompt engineering and end-to-end few-/zero-shot language model evaluation. We discuss our process for task schema harmonization, data auditing, and contribution guidelines, and outline two illustrative use cases: zero-shot evaluation of biomedical prompts and large-scale, multi-task learning. BigBio is an ongoing community effort and is available at https://github.com/bigscience-workshop/biomedical.
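Programmatic access in the style described above can be sketched with the Hugging Face `datasets` loader; the dataset and schema configuration names below are assumptions about one of the harmonized datasets and should be checked against the BigBio catalogue.

```python
from datasets import load_dataset

# Dataset and config names are assumptions; consult the BigBio catalogue for exact identifiers.
ds = load_dataset("bigbio/bc5cdr", name="bc5cdr_bigbio_kb", split="train")
print(ds[0].keys())  # harmonized knowledge-base schema fields (e.g., passages, entities)
```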
This paper presents the OPUS ecosystem with a focus on the development of open machine translation models and tools, and their integration into end-user applications, development platforms and professional workflows. We discuss our on-going mission of increasing language coverage and translation quality, and also describe on-going work on the development of modular translation models and speed-optimized compact solutions for real-time translation on regular desktops and small devices.
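Many OPUS-MT models are released as ready-to-use checkpoints; a minimal sketch of running one of them for English-to-German translation through the `transformers` pipeline is shown below (the checkpoint name is one publicly released example).

```python
from transformers import pipeline

# "Helsinki-NLP/opus-mt-en-de" is one of the publicly released OPUS-MT checkpoints.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
print(translator("Open machine translation models for everyone.")[0]["translation_text"])
```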
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
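The five-component split can be pictured with a configuration sketch like the following; the key names and module identifiers are illustrative assumptions rather than the toolkit's exact configuration schema.

```python
# Hypothetical configuration illustrating the modular five-component design;
# key names and module choices are assumptions, not the toolkit's exact schema.
bert_like_model = {
    "embedding": ["word", "pos", "seg"],   # token, position, and segment embeddings
    "encoder": "transformer",              # bidirectional Transformer encoder
    "target_embedding": None,              # only used by encoder-decoder models
    "decoder": None,                       # only used by encoder-decoder models
    "target": ["mlm", "sp"],               # masked LM + sentence prediction objectives
}
```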
Recent work has demonstrated that pre-training in-domain language models can boost performance when adapting to a new domain. However, the costs associated with pre-training raise an important question: given a fixed budget, what steps should an NLP practitioner take to maximize performance? In this paper, we study domain adaptation under budget constraints and frame it as a customer's choice problem between data annotation and pre-training. Specifically, we measure the annotation cost of three procedural text datasets and the pre-training cost of three in-domain language models. We then evaluate the utility of different combinations of pre-training and data annotation under varying budget constraints to assess which combination strategy works best. We find that, for small budgets, spending all funds on annotation leads to the best performance; once the budget becomes large enough, a combination of data annotation and in-domain pre-training works better. We therefore suggest that task-specific data annotation should be part of an economical strategy when adapting an NLP model to a new domain.
Today's software is bloated, leading to significant resource wastage. This bloat is prevalent across the entire software stack, from the operating system all the way to software backends, frontends, and web pages. In this paper, we study how prevalent bloat is in machine learning containers. We develop MMLB, a framework to analyze bloat in machine learning containers, measuring the amount of bloat that exists at the container and package levels. Our tool quantifies the sources of bloat and removes them. We integrate our tool with vulnerability analysis tools to measure how bloat affects container vulnerabilities. We experimentally study 15 machine learning containers from the official TensorFlow, PyTorch, and NVIDIA container registries under different tasks (i.e., training, tuning, and serving). Our findings show that machine learning containers contain bloat encompassing up to 80\% of the container size. We find that debloating machine learning containers speeds up provisioning times by up to $3.7\times$ and removes up to 98\% of all vulnerabilities detected by vulnerability analysis tools such as Grype. Finally, we relate our results to the larger discussion about technical debt in machine learning systems.
Large transformer-based language models have shown outstanding performance in natural language processing. Considering the transferability of the knowledge these models acquire in one domain to related domains, and the closeness of natural languages to high-level programming languages such as C/C++, this work studies how to leverage (large) transformer-based language models for detecting software vulnerabilities and how good these models are at vulnerability detection tasks. In this regard, a systematic (cohesive) framework is first presented, detailing source code translation, model preparation, and inference. An empirical analysis is then conducted on a software vulnerability dataset of C/C++ source code containing multiple vulnerability types corresponding to library function calls, pointer usage, array usage, and arithmetic expressions. Our empirical results demonstrate the good performance of the language models in vulnerability detection. Moreover, these language models achieve better performance metrics, such as F1-score, than contemporary models, namely bidirectional long short-term memory and bidirectional gated recurrent units. Experimenting with language models is always challenging because of the required computing resources, platforms, libraries, and dependencies. Thus, this paper also analyzes popular platforms for efficiently fine-tuning these models and presents recommendations for choosing among them.
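A minimal sketch of the model-preparation-and-inference step is given below, assuming a pretrained code encoder is fine-tuned as a binary vulnerable/not-vulnerable classifier; the checkpoint name, label set, and example snippet are assumptions rather than the paper's exact setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed setup: a code-pretrained encoder with a 2-way classification head
# (vulnerable vs. not vulnerable); in practice the head is fine-tuned on labeled C/C++ functions.
tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/codebert-base", num_labels=2)

snippet = "void copy(char *dst, const char *src) { strcpy(dst, src); }"
inputs = tok(snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (not meaningful until the head is fine-tuned)
```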
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language solutions given a natural language problem description. We train PanGu-Coder using a two-stage strategy: the first stage employs Causal Language Modelling (CLM) to pre-train on raw programming language data, while the second stage uses a combination of Causal Language Modelling and Masked Language Modelling (MLM) training objectives that focus on the downstream task of text-to-code generation and train on loosely curated pairs of natural language program definitions and code functions. Finally, we discuss PanGu-Coder-FT, which is fine-tuned on a combination of competitive programming problems and code with continuous integration tests. We evaluate PanGu-Coder with a focus on whether it generates functionally correct programs and demonstrate that it achieves equivalent or better performance than similarly sized models such as Codex, while attending over a smaller context window and training on less data.
Given a natural language description of a user's demands, the NL2Code task aims to generate code that addresses those demands. This is a critical but challenging task that mirrors the capabilities of AI-powered programming. The NL2Code task is inherently versatile, diverse, and complex. For example, a demand can be described in different languages, in different formats, and at different levels of granularity. This inspired us to conduct this survey of NL2Code. In this survey, we focus on how neural networks (NNs) solve NL2Code. We first propose a comprehensive framework that is able to cover all studies in this field. Then, we parse the existing studies into this framework in depth. We create an online website to record the parsing results, which tracks existing and recent NL2Code progress. In addition, we summarize the current challenges of NL2Code as well as its future directions. We hope that this survey can foster the evolution of this field.
Source code is essential for researchers to reproduce the methods and replicate the results of artificial intelligence (AI) papers. Some organizations and researchers manually collect AI papers with available source code to contribute to the AI community. However, manual collection is a labor-intensive and time-consuming task. To address this issue, we propose a method to automatically identify papers with available source code and extract their source code repository URLs. With this method, we find that 20.5% of the regular papers of 10 top AI conferences published from 2010 to 2019 are identified as papers with available source code, and that 8.1% of these source code repositories are no longer accessible. We also create the XMU NLP Lab README Dataset, the largest dataset of labeled README files for source code documentation research. Through this dataset, we find that quite a few README files provide no installation instructions or usage tutorials. Further, a large-scale comprehensive statistical analysis is conducted to give a general picture of the source code of AI conference papers. The proposed solution can also go beyond AI conference papers to analyze other scientific papers from both journals and conferences to shed light on more domains.
Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on the domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee privacy of pre-training data and, at the same time, achieve better downstream performance on legal tasks without the need for additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that under specific training configurations we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.
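A minimal sketch of differentially private training with DP-SGD is shown below, using the Opacus library as one possible implementation; the toy model, data, and privacy hyperparameters are placeholders rather than the paper's configuration.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy stand-in for a transformer: DP-SGD wraps the model, optimizer, and data loader
# so that per-sample gradients are clipped and noised. Hyperparameters are placeholders.
model = torch.nn.Linear(768, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data = DataLoader(TensorDataset(torch.randn(256, 768), torch.randint(0, 2, (256,))), batch_size=32)

engine = PrivacyEngine()
model, optimizer, data = engine.make_private(
    module=model, optimizer=optimizer, data_loader=data,
    noise_multiplier=1.0, max_grad_norm=1.0)

for x, y in data:
    optimizer.zero_grad()
    torch.nn.functional.cross_entropy(model(x), y).backward()
    optimizer.step()
print("epsilon spent:", engine.get_epsilon(delta=1e-5))
```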
Foundation models are becoming the mainstream deep learning technology. Pretraining a foundation model is always time-consuming due to the large scale of both the model parameters and the training dataset. Besides being computing-intensive, the training process is extremely memory-intensive and communication-intensive. These features make it necessary to apply 3D parallelism, which integrates data parallelism, pipeline model parallelism, and tensor model parallelism, to achieve high training efficiency. To achieve this goal, some custom software frameworks such as Megatron-LM and DeepSpeed have been developed. However, current 3D parallelism frameworks still suffer from two issues: i) they are not transparent to model developers, who have to manually modify the model to parallelize training; ii) their utilization of computation, GPU memory, and network bandwidth is insufficient. We propose Merak, an automated 3D parallelism deep learning training framework with high resource utilization. Merak deploys automatically with an automatic model partitioner, which uses a graph sharding algorithm on a proxy representation of the model. Merak also presents a non-intrusive API for scaling out foundation model training with minimal code modification. In addition, we design a high-performance 3D parallel runtime engine in Merak. It uses several techniques to exploit available training resources, including a shifted critical path pipeline schedule that brings higher computation utilization, stage-aware recomputation that makes use of idle worker memory, and sub-pipelined tensor model parallelism that overlaps communication and computation. Experiments on 64 GPUs show that Merak can speed up training over state-of-the-art 3D parallelism frameworks on models with 1.5, 2.5, 8.3, and 20 billion parameters by up to 1.42x, 1.39x, 1.43x, and 1.61x, respectively.
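As a back-of-the-envelope check on how the three parallelism dimensions fit together (independent of Merak's own API), the data-, pipeline-, and tensor-parallel degrees must multiply to the number of GPUs; the small helper below enumerates valid 3D layouts for the 64-GPU setting mentioned above.

```python
from itertools import product

def valid_3d_layouts(world_size):
    """Enumerate (data, pipeline, tensor) parallel degrees whose product covers all GPUs."""
    return [(dp, pp, tp)
            for dp, pp, tp in product(range(1, world_size + 1), repeat=3)
            if dp * pp * tp == world_size]

# For a 64-GPU job, one common balanced layout is dp=4, pp=4, tp=4.
layouts = valid_3d_layouts(64)
print(len(layouts), "possible layouts;", (4, 4, 4) in layouts)
```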