Code completion is a valuable research topic in both academia and industry. Recently, large-scale mono-programming-lingual (MonoPL) pre-trained models have been proposed to boost code completion performance. However, code completion for low-resource programming languages (PLs) remains difficult under the data-driven paradigm, even though plenty of developers use such languages. Moreover, few studies have explored the effects of multi-programming-lingual (MultiPL) pre-training on code completion, especially its impact on low-resource programming languages. To this end, we propose MultiCoder to enhance low-resource code completion via MultiPL pre-training and MultiPL Mixture-of-Experts (MoE) layers. We further propose a novel PL-level MoE routing strategy (PL-MoE) for improving code completion on all PLs. Experimental results on CodeXGLUE and MultiCC demonstrate that 1) the proposed MultiCoder significantly outperforms MonoPL baselines on low-resource programming languages, and 2) the PL-MoE module further boosts performance on six programming languages. In addition, we analyze the effects of the proposed method in detail and explore its effectiveness in a variety of scenarios.
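The abstract does not detail how PL-level routing works; below is a minimal, hedged sketch of one plausible reading, in which every token of a sequence is routed to the expert dedicated to that sequence's programming language instead of being gated per token. All class and argument names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PLMoELayer(nn.Module):
    """Sketch of PL-level MoE routing (assumed): all tokens in a sequence
    go to the expert assigned to the sequence's programming language,
    rather than being routed individually by a token-level gate."""
    def __init__(self, d_model: int, d_ff: int, num_pls: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_pls)
        )

    def forward(self, x: torch.Tensor, pl_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); pl_ids: (batch,) language id per sequence
        out = torch.empty_like(x)
        for pl in pl_ids.unique():
            mask = pl_ids == pl
            out[mask] = self.experts[int(pl)](x[mask])
        return out
```

Routing at the PL level makes the expert assignment deterministic per language, trading per-token flexibility for a simpler, balanced assignment.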
Sparsely activated transformers, such as Mixture of Experts (MoE), have attracted great interest due to their outrageous scaling capability, which enables dramatic increases in model size without significant increases in computational cost. To achieve this, MoE models replace the feed-forward sub-layer in the transformer with a Mixture-of-Experts sub-layer and use a gating network to route each token to its designated experts. Since the common practice for efficiently training such models requires distributing experts and tokens across different machines, this routing strategy often incurs a huge cross-machine communication cost, because tokens and their assigned experts may reside on different machines. In this paper, we propose Gating Dropout, which allows tokens to ignore the gating network and stay on their local machine, thus reducing cross-machine communication. Similar to traditional dropout, we also show that Gating Dropout has a regularization effect during training, leading to improved generalization performance. We validate the effectiveness of Gating Dropout on multilingual machine translation tasks. Our results show that Gating Dropout improves state-of-the-art MoE models with faster wall-clock convergence and better BLEU scores across a variety of model sizes and datasets.
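A minimal single-machine sketch of the gating-dropout idea described above, with the distributed machinery omitted and all names assumed: with probability p during training, a token bypasses the gate and is assigned to a fixed "local" expert, which in a multi-machine deployment avoids the cross-machine exchange.

```python
import torch
import torch.nn as nn

class GatingDropout(nn.Module):
    """Sketch (names assumed): returns the expert index chosen for each
    token; with probability p a token ignores the gate and stays with a
    fixed local expert, so it never has to leave its machine."""
    def __init__(self, d_model: int, num_experts: int,
                 local_expert: int = 0, p: float = 0.2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.local_expert = local_expert
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model) -> one expert index per token
        routed = self.gate(x).argmax(dim=-1)
        if self.training:
            drop = torch.rand(x.size(0), device=x.device) < self.p
            routed = torch.where(
                drop, torch.full_like(routed, self.local_expert), routed)
        return routed
```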
Sparse Mixture of Experts (MoE) has attracted great interest due to its promising scaling capability with affordable computational overhead. MoE converts dense layers into sparse experts and utilizes a gated routing network to activate experts conditionally. However, as the number of experts grows, MoE with its outrageous parameter count suffers from overfitting and sparse data allocation. Such problems are especially severe on tasks with limited data, hindering MoE models from improving performance through scaling. In this work, we propose Mixture of Expert Clusters (MoEC), a general approach that enables the expert layers to learn more diverse and appropriate knowledge by imposing variance-based constraints on the routing stage. We further propose a cluster-level expert dropout strategy designed specifically for the expert cluster structure. Our experiments show that MoEC improves performance on machine translation and natural language understanding tasks, and raises the performance upper bound for scaling up experts under limited data. We also verify that MoEC plays a positive role in mitigating overfitting and sparse data allocation.
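The abstract gives neither the variance constraint nor the dropout in closed form; the following is a hypothetical sketch of cluster-level expert dropout under the stated idea: restrict each token's routing to the experts of its cluster, then randomly silence cluster members during training. Function and argument names are invented for illustration.

```python
import torch

def cluster_level_expert_dropout(gate_logits: torch.Tensor,
                                 expert_cluster: torch.Tensor,
                                 token_cluster: torch.Tensor,
                                 p: float = 0.3) -> torch.Tensor:
    """Hypothetical sketch: gate_logits (tokens, experts); expert_cluster
    (experts,) holds each expert's cluster id; token_cluster (tokens,)
    holds the cluster each token routes within."""
    neg_inf = torch.finfo(gate_logits.dtype).min
    # 1) restrict each token to the experts of its own cluster
    in_cluster = expert_cluster.unsqueeze(0) == token_cluster.unsqueeze(1)
    logits = gate_logits.masked_fill(~in_cluster, neg_inf)
    # 2) randomly silence experts inside the cluster during training
    keep = torch.rand_like(gate_logits) > p
    # never silence a token's best expert, so routing always has a target
    rows = torch.arange(logits.size(0), device=logits.device)
    keep[rows, logits.argmax(dim=1)] = True
    return logits.masked_fill(~keep, neg_inf)
```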
Cross-lingual speech adaptation aims to solve the problem of leveraging multiple rich-resource languages to build models for a low-resource target language. Since low-resource languages have limited training data, speech recognition models can easily overfit. In this paper, we propose to use adapters to investigate the performance of multiple adapters for parameter-efficient cross-lingual speech adaptation. Based on our previous MetaAdapter, which implicitly leverages adapters, we propose a novel algorithm called SimAdapter that explicitly learns knowledge from adapters. Our algorithms leverage adapters, which can be easily integrated into the Transformer structure: MetaAdapter uses meta-learning to transfer general knowledge from the training data to the test language, while SimAdapter learns the similarities between the source and target languages during adapter fine-tuning. We conduct extensive experiments on five low-resource languages from the Common Voice dataset. The results demonstrate that our MetaAdapter and SimAdapter methods can reduce WER by 2.98% and 2.55% respectively, with only 2.5% and 15.5% of the trainable parameters of a strong full-model fine-tuning baseline. Moreover, we show that the two novel algorithms can be integrated for even better performance, with a relative WER reduction of up to 3.55%.
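A minimal sketch of SimAdapter-style fusion, under the assumption (consistent with the description above) that hidden states attend over the outputs of several source-language adapters to learn language similarity; dimensions and names are assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bottleneck(d_model: int, d_bottleneck: int = 32) -> nn.Module:
    # a standard bottleneck adapter: down-project, nonlinearity, up-project
    return nn.Sequential(nn.Linear(d_model, d_bottleneck), nn.ReLU(),
                         nn.Linear(d_bottleneck, d_model))

class SimAdapterFusion(nn.Module):
    """Sketch: hidden states attend over the outputs of several
    source-language adapters; the attention weights play the role of
    learned source-target language similarities."""
    def __init__(self, d_model: int, adapters: nn.ModuleList):
        super().__init__()
        self.adapters = adapters          # one adapter per source language
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, d_model)
        outs = torch.stack([a(h) for a in self.adapters], dim=2)  # (b,s,n,d)
        q = self.query(h).unsqueeze(2)                            # (b,s,1,d)
        k = self.key(outs)                                        # (b,s,n,d)
        attn = F.softmax((q * k).sum(-1) / h.size(-1) ** 0.5, dim=-1)
        return h + (attn.unsqueeze(-1) * outs).sum(dim=2)

# usage sketch: fuse five frozen source-language adapters
fusion = SimAdapterFusion(256, nn.ModuleList(bottleneck(256) for _ in range(5)))
```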
Sparsely gated Mixture of Experts (MoE) models have been shown to be a compute-efficient method to scale model capacity for multilingual machine translation. However, for low-resource tasks, MoE models severely over-fit. We show effective regularization strategies, namely dropout techniques for MoE layers in EOM and FOM, Conditional MoE Routing and Curriculum Learning methods that prevent over-fitting and improve the performance of MoE models on low-resource tasks without adversely affecting high-resource tasks. On a massively multilingual machine translation benchmark, our strategies result in about +1 chrF++ improvement in very low resource language pairs. We perform an extensive analysis of the learned MoE routing to better understand the impact of our regularization methods and how we can improve them.
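Of the strategies listed, Conditional MoE Routing lends itself to a compact illustration. The sketch below shows one common formulation, a learned per-token gate that mixes a shared dense FFN with the MoE output, so low-resource tokens can lean on the shared path; the exact gate used in the paper may differ.

```python
import torch
import torch.nn as nn

class ConditionalMoERouting(nn.Module):
    """Hedged sketch of Conditional MoE Routing: a per-token sigmoid gate
    interpolates between a shared dense FFN and the sparse MoE layer.
    Module wiring is assumed, not taken from the paper."""
    def __init__(self, d_model: int, dense_ffn: nn.Module, moe_layer: nn.Module):
        super().__init__()
        self.cmr_gate = nn.Linear(d_model, 1)
        self.dense_ffn = dense_ffn
        self.moe_layer = moe_layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); g in (0, 1) per token
        g = torch.sigmoid(self.cmr_gate(x))
        return g * self.moe_layer(x) + (1 - g) * self.dense_ffn(x)
```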
Software engineers working with the same programming language (PL) may speak different natural languages (NLs) and vice versa, erecting huge barriers to communication and working efficiency. Recent studies have demonstrated the effectiveness of generative pre-training in computer programs, yet they are always English-centric. In this work, we step towards bridging the gap between multilingual NLs and multilingual PLs for large language models (LLMs). We release ERNIE-Code, a unified pre-trained language model for 116 NLs and 6 PLs. We employ two methods for universal cross-lingual pre-training: span-corruption language modeling that learns patterns from monolingual NL or PL; and pivot-based translation language modeling that relies on parallel data of many NLs and PLs. Extensive results show that ERNIE-Code outperforms previous multilingual LLMs for PL or NL across a wide range of end tasks of code intelligence, including multilingual code-to-text, text-to-code, code-to-code, and text-to-text generation. We further show its advantage of zero-shot prompting on multilingual code summarization and text-to-text translation. We will make our code and pre-trained models publicly available.
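The span-corruption objective mentioned above can be illustrated with a small, T5-style input-construction routine; ERNIE-Code's exact corruption parameters and sentinel vocabulary are not given, so the values below are assumptions.

```python
import random

def span_corrupt(tokens: list, mask_ratio: float = 0.15, mean_span: int = 3):
    """Sketch of span-corruption input construction (T5-style, values
    assumed): contiguous spans are replaced with sentinel tokens in the
    source, and the target lists each sentinel followed by the span it hides."""
    n_to_mask = max(1, int(len(tokens) * mask_ratio))
    source, target = [], []
    masked, i, sid = 0, 0, 0
    while i < len(tokens):
        # start a span with probability mask_ratio / mean_span, so roughly
        # mask_ratio of all tokens end up masked
        if masked < n_to_mask and random.random() < mask_ratio / mean_span:
            span = min(mean_span, len(tokens) - i)
            sentinel = f"<extra_id_{sid}>"
            source.append(sentinel)
            target.append(sentinel)
            target.extend(tokens[i:i + span])
            i += span
            masked += span
            sid += 1
        else:
            source.append(tokens[i])
            i += 1
    return source, target

# usage sketch on a code snippet (PL) tokenized by whitespace
src, tgt = span_corrupt("def add ( a , b ) : return a + b".split())
```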
In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely activated model with an outrageous number of parameters but a constant computational cost. However, despite several notable successes, complexity, communication costs, and training instability have hindered the widespread adoption of MoE; we address these areas with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help tame the instabilities, and we show that large sparse models can, for the first time, be trained in lower-precision (bfloat16) formats. We design models based on T5-Base and T5-Large that obtain up to 7x speedups in pre-training with the same computational resources. These improvements extend to multilingual settings, where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training models of up to a trillion parameters on the "Colossal Clean Crawled Corpus", achieving a 4x speedup over the T5-XXL model.
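The core of the simplified routing described above is top-1 expert selection with the output scaled by the gate probability, so the router still receives gradient. A compact sketch, with capacity limits and the load-balancing loss omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    """Sketch of Switch-style top-1 routing: each token is processed by a
    single expert, and the expert output is scaled by the gate probability."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)
        top_p, top_i = probs.max(dim=-1)          # one expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top_i == e
            if sel.any():
                out[sel] = top_p[sel].unsqueeze(1) * expert(x[sel])
        return out
```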
Pre-trained models have been shown effective in many code intelligence tasks. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, since the inputs of pre-training and downstream tasks are in different forms, it is hard to fully explore the knowledge of pre-trained models. Besides, the performance of fine-tuning strongly relies on the amount of downstream data, while in practice scenarios with scarce data are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new paradigm for tuning, alleviates the above issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this paper, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with three code intelligence tasks: defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all three tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU score of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, we can adapt prompt tuning to code intelligence tasks to achieve better performance, especially when task-specific data is scarce.
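Prompt tuning as described comes in hard (template) and soft (trainable vector) variants; the sketch below shows the soft variant, where trainable prompt vectors are prepended to the token embeddings while the pre-trained weights stay frozen. Names and the prompt length are assumptions.

```python
import torch
import torch.nn as nn

class SoftPromptEmbedding(nn.Module):
    """Sketch of soft prompt tuning: only the prompt vectors are trained;
    the pre-trained embedding table is frozen."""
    def __init__(self, embed: nn.Embedding, prompt_len: int = 20):
        super().__init__()
        self.embed = embed
        self.embed.weight.requires_grad_(False)   # freeze pre-trained weights
        d = embed.embedding_dim
        self.prompt = nn.Parameter(torch.randn(prompt_len, d) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # input_ids: (batch, seq) -> (batch, prompt_len + seq, d)
        tok = self.embed(input_ids)
        batch = tok.size(0)
        return torch.cat(
            [self.prompt.unsqueeze(0).expand(batch, -1, -1), tok], dim=1)
```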
We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both "bimodal" data of NL-PL pairs and "unimodal" data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
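The replaced-token-detection objective mentioned above reduces to a per-position binary classification; a minimal sketch of the loss, with tensor shapes assumed:

```python
import torch
import torch.nn.functional as F

def rtd_loss(disc_logits: torch.Tensor, input_ids: torch.Tensor,
             original_ids: torch.Tensor) -> torch.Tensor:
    """Sketch of the replaced-token-detection loss: a discriminator scores
    every position and is trained to predict whether the token there was
    replaced by a generator sample.
    disc_logits: (batch, seq) raw scores; input_ids holds the possibly
    replaced tokens and original_ids the ground truth."""
    labels = (input_ids != original_ids).float()  # 1 where the token was replaced
    return F.binary_cross_entropy_with_logits(disc_logits, labels)
```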
Machine learning models trained on large amounts of open-source software data are now an interesting approach to automating many software engineering tasks. Several SE tasks have been subjected to this approach, with performance gradually improving over the past few years thanks to better models and training methods. More, and more diverse, clean, labeled data is better for training; but constructing high-quality datasets is time-consuming and challenging. Ways of augmenting the volume and diversity of clean, labeled data generally have wide applicability. For some languages (e.g., Ruby) labeled data is less abundant; in others (e.g., JavaScript) the available data may focus more on certain application domains and thus be less diverse. As a way around such data bottlenecks, we present evidence suggesting that human-written code in different languages (performing the same function) is rather similar, and in particular preserves identifier naming patterns; we further present evidence suggesting that identifiers are a very important element of training data for software engineering tasks. We leverage this rather fortuitous phenomenon to find evidence that the available multilingual training data (across different languages) can be used to amplify performance. We study this for three different tasks: code summarization, code retrieval, and function naming. We note that this data-augmenting approach is broadly compatible with different tasks, languages, and machine learning models.
As pre-trained language models become more resource-demanding, the inequality between resource-rich languages such as English and resource-scarce languages is worsening. This can be attributed to the fact that the amount of available training data in each language follows a power-law distribution, and most languages belong to the long tail of that distribution. Some research areas attempt to mitigate this problem. For example, in cross-lingual transfer learning and multilingual training, the goal is to benefit long-tail languages via the knowledge acquired from resource-rich languages. Although successful, existing work has mainly focused on experimenting with as many languages as possible. As a result, targeted in-depth analysis is mostly absent. In this study, we focus on a single low-resource language and perform extensive evaluation and probing experiments using cross-lingual post-training (XPT). To make the transfer scenario challenging, we choose Korean as the target language, as it is a language isolate and thus shares almost no typology with English. Results show that XPT not only outperforms, or performs on par with, monolingual models trained with orders of magnitude more data, but is also highly efficient in the transfer process.
It is challenging to train and deploy Transformer language models (LMs) for second-pass re-ranking in hybrid speech recognition for low-resource languages, due to (1) data scarcity in low-resource languages, (2) the expensive computing cost of training and refreshing 100+ monolingual models, and (3) hosting inefficiency given sparse traffic. In this study, we present a new way to group multiple low-resource locales together and optimize the performance of multilingual Transformer LMs in ASR. Our locale-group multilingual Transformer LMs outperform traditional multilingual LMs while reducing maintenance costs and operating expenses. Moreover, for low-resource but high-traffic locales where deploying monolingual models is feasible, we show that fine-tuning our locale-group multilingual LMs produces better monolingual LM candidates than baseline monolingual LMs.
Pre-trained models have achieved remarkable success in natural language processing (NLP). However, existing pre-training methods underutilize the benefits of language understanding for generation. Inspired by the idea of Generative Adversarial Networks (GANs), we propose a GAN-style model for encoder-decoder pre-training by introducing an auxiliary discriminator, unifying the abilities of language understanding and generation in a single model. Our model, named GanLM, is trained with two pre-training objectives: replaced token detection and replaced token denoising. Specifically, given masked source sentences, the generator outputs the target distribution and the discriminator predicts whether the tokens sampled from that distribution are incorrect. The target sentence is replaced with the misclassified tokens to construct a noisy previous context, which is used to generate the gold sentence. In general, both tasks improve the ability of language understanding and generation by selectively using the denoising data. Extensive experiments on language generation benchmarks show that GanLM, with its powerful language understanding capability, outperforms various strong pre-trained language models (PLMs) and achieves state-of-the-art performance.
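One plausible reading of the noisy-context construction described above, sketched with assumed tensor names: positions the discriminator misclassifies keep the generator's sample, the rest keep the gold token, and the decoder then reconstructs the gold sentence from this corrupted context.

```python
import torch

def build_noisy_context(gold_ids: torch.Tensor,
                        sampled_ids: torch.Tensor,
                        disc_pred_replaced: torch.Tensor) -> torch.Tensor:
    """Hedged sketch (one plausible reading of the abstract):
    gold_ids / sampled_ids: (batch, seq) gold and generator-sampled tokens;
    disc_pred_replaced: (batch, seq) bool, the discriminator's 'replaced'
    predictions. Misclassified positions keep the sampled token."""
    truly_replaced = gold_ids != sampled_ids
    misclassified = disc_pred_replaced != truly_replaced
    return torch.where(misclassified, sampled_ids, gold_ids)
```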
Mixture of Experts (MoE) is becoming popular due to its success in improving model quality, especially in Transformers. By sparsely gating tokens to a few experts, each of which only contains part of the full model, MoE keeps the per-token computation roughly constant while significantly increasing model size, thereby effectively scaling neural networks. However, we find that the current approach of jointly training experts and the sparse gate introduces a negative impact on model accuracy, diminishing the efficiency of expensive large-scale model training. In this work, we propose the Dense-To-Sparse gate (DTS-Gate) for MoE training. Specifically, instead of using a permanently sparse gate, DTS-Gate begins as a dense gate that routes tokens to all experts, then gradually and adaptively becomes sparser, routing to fewer experts. MoE with DTS-Gate naturally decouples the training of experts from the training of the sparse gate by first training all experts and then learning the sparse gate. Experiments show that, compared with the state-of-the-art Switch gate in a GPT-MoE (1.5B) model on the OpenWebText dataset (40GB), DTS-Gate obtains a 2.0x speedup to reach the same validation perplexity, as well as higher FLOPs efficiency with a 1.42x speedup.
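A hedged sketch of the dense-to-sparse idea: the gate starts dense, with every expert receiving a softmax weight, and the number of active experts is annealed down over training. The actual DTS-Gate schedule is adaptive rather than this fixed linear one; names and the schedule are assumptions.

```python
import torch
import torch.nn.functional as F

def dts_gate(logits: torch.Tensor, step: int, total_steps: int,
             min_k: int = 1) -> torch.Tensor:
    """Sketch of a dense-to-sparse gate: returns per-token expert weights.
    Early on all experts get weight (dense); later only the top-k survive,
    with k annealed linearly down to min_k (schedule assumed)."""
    num_experts = logits.size(-1)
    frac = min(step / total_steps, 1.0)
    k = max(min_k, round(num_experts * (1 - frac)))
    topk = logits.topk(k, dim=-1)
    masked = torch.full_like(logits, float('-inf')).scatter(
        -1, topk.indices, topk.values)
    return F.softmax(masked, dim=-1)   # pruned experts get exactly zero weight
```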
Pre-trained language models are trained on large-scale unsupervised data, and they can be fine-tuned on small-scale labeled datasets to achieve good results. Multilingual pre-trained language models can be trained on multiple languages and understand multiple languages at the same time. At present, research on pre-trained models mainly focuses on rich-resource languages, while there is relatively little research on low-resource languages such as minority languages, and public multilingual pre-trained language models do not work well for minority languages. Therefore, this paper constructs a multilingual pre-trained language model named MiLMo that performs better on minority language tasks, including Mongolian, Tibetan, Uyghur, Kazakh and Korean. To solve the problem of dataset scarcity for minority languages and to verify the effectiveness of the MiLMo model, this paper constructs a minority multilingual text classification dataset named MiTC and trains a word2vec model for each language. By comparing the word2vec models and the pre-trained model on the text classification task, this paper provides an optimal scheme for downstream-task research on minority languages. The final experimental results show that the pre-trained model performs better than the word2vec models and achieves the best results in minority multilingual text classification. The multilingual pre-trained language model MiLMo, the multilingual word2vec models and the multilingual text classification dataset MiTC are published at https://milmo.cmli-nlp.com.
Sparse expert models are a thirty-year-old concept re-emerging as a popular architecture in deep learning. This class of architectures encompasses Mixture-of-Experts, Switch Transformers, Routing Networks, BASE layers, and others, all with the unifying idea that each example is acted on by a subset of the parameters. In doing so, sparsity decouples the parameter count from the compute per example, allowing for extremely large but efficient models. The resulting models have demonstrated significant improvements across diverse domains such as natural language processing, computer vision, and speech recognition. We review the concept of sparse expert models, provide a basic description of the common algorithms, contextualize the advances in the deep learning era, and conclude by highlighting areas for future work.
Recently, the Mixture-of-Experts (MoE for short) architecture has achieved remarkable success in increasing the model capacity of large-scale language models. However, MoE requires incorporating significantly more parameters than the base model being extended. In this paper, we propose building a parameter-efficient MoE architecture by sharing information across experts. We adopt the matrix product operator (MPO, a tensor decomposition from quantum many-body physics) to reconstruct the parameter matrix in the expert layer, and increase the model capacity of pre-trained language models by sharing the parameters of the central tensor (containing the core information) among different experts, while enabling specificity through the auxiliary tensors (complementing the central tensor) of different experts. To address the unbalanced optimization issue, we further design a gradient mask strategy for the MPO-based MoE architecture. Extensive experiments based on T5 and GPT-2 show improved performance and efficiency of the pre-trained language model (a 27.2x reduction in total parameters with superior model performance, compared with Switch Transformers). Our code is publicly available at https://github.com/RUCAIBox/MPOE.
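The full MPO decomposition is a tensor-train chain; the simplified low-rank sketch below only illustrates the sharing pattern the abstract describes: a large central factor shared across experts, with small per-expert auxiliary factors providing specialization. All shapes and names are assumed.

```python
import torch
import torch.nn as nn

class SharedCoreExperts(nn.Module):
    """Simplified sketch of MPO-style sharing (not the real decomposition):
    each expert's weight is factored as a_in[e] @ core @ a_out[e], where
    the central factor `core` is shared by all experts and only the small
    auxiliary factors are expert-specific."""
    def __init__(self, d_in: int, d_out: int, rank: int, num_experts: int):
        super().__init__()
        self.core = nn.Parameter(torch.randn(rank, rank) / rank ** 0.5)  # shared
        self.a_in = nn.Parameter(torch.randn(num_experts, d_in, rank) / d_in ** 0.5)
        self.a_out = nn.Parameter(torch.randn(num_experts, rank, d_out) / rank ** 0.5)

    def forward(self, x: torch.Tensor, expert: int) -> torch.Tensor:
        # reconstruct this expert's weight on the fly, then apply it
        w = self.a_in[expert] @ self.core @ self.a_out[expert]  # (d_in, d_out)
        return x @ w
```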
Multilingual neural machine translation (MNMT), trained on multiple language pairs, has attracted considerable attention because it shares knowledge among multiple languages with fewer model parameters and lower training costs. Nonetheless, multilingual training suffers from language interference degeneration in the shared parameters because of negative interference among different translation directions, especially on high-resource languages. In this paper, we propose a multilingual translation model with High-resource Language-specific Training (HLT-MT) to alleviate the negative interference, which adopts two-stage training with a language-specific selection mechanism. Specifically, we first train the multilingual model only with high-resource pairs and select language-specific modules at the top of the decoder to enhance the translation quality of high-resource directions. Next, the model is further trained on all available corpora to transfer knowledge from high-resource languages (HRLs) to low-resource languages (LRLs). Experimental results show that HLT-MT outperforms various strong baselines on the WMT-10 and OPUS-100 benchmarks. Furthermore, analytic experiments validate the effectiveness of our method in mitigating the negative interference in multilingual training.
Recent work on unsupervised machine translation (UMT) suggests that competent unsupervised translation of low-resource and unrelated languages, such as Nepali or Sinhala, is only possible if the model is trained in a massive multilingual environment where these low-resource languages are mixed with high-resource counterparts. Nonetheless, while high-resource languages greatly help kick-start the target low-resource translation task, the language discrepancy between them may hinder further improvement. In this work, we propose a simple refinement procedure to disentangle languages from a pre-trained multilingual UMT model so that it focuses only on the target low-resource task. Our method achieves the state of the art in the fully unsupervised translation tasks of Nepali, Sinhala, Gujarati, Latvian, Estonian and Kazakh, with BLEU score gains of 3.5, 3.3, 3.3, 4.1, 4.2, and 3.3, respectively. Our codebase is available at https://github.com/nxphi47/refine_unsup_multilingual_mt
While large pre-trained models have transformed the field of natural language processing (NLP), the high training cost and low cross-lingual availability of such models prevent the new advances from being equally shared by users across all languages, especially the less spoken ones. To promote equal opportunities for all language speakers in NLP research and to reduce energy consumption for sustainability, this study proposes an effective and energy-efficient framework, GreenPLM, that uses bilingual lexicons to directly translate language models of one language into other languages at (almost) no additional cost. We validate this approach in 18 languages and show that this framework is comparable to, if not better than, other heuristics trained with high cost. In addition, when given a low computational cost (2.5%), the framework outperforms the original monolingual language models in six out of seven tested languages. We release language models in 50 languages translated from English and the source code here.
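A hedged sketch of the lexicon-based translation idea: initialize a target-language embedding table by copying, for each target word, the source embedding of its lexicon translation, falling back to the mean source embedding for out-of-lexicon words. The paper's exact procedure is not specified here; this is an assumed minimal version.

```python
import torch

def translate_embeddings(src_emb: torch.Tensor, lexicon: dict,
                         src_vocab: dict, tgt_vocab: dict) -> torch.Tensor:
    """Sketch (procedure assumed): src_emb is (|src_vocab|, d); lexicon maps
    target words to source words; src_vocab / tgt_vocab map words to row ids.
    Returns a (|tgt_vocab|, d) embedding table for the target language."""
    # fallback: every target word starts at the mean source embedding
    tgt_emb = src_emb.mean(dim=0, keepdim=True).repeat(len(tgt_vocab), 1)
    for tgt_word, tgt_id in tgt_vocab.items():
        src_word = lexicon.get(tgt_word)
        if src_word is not None and src_word in src_vocab:
            tgt_emb[tgt_id] = src_emb[src_vocab[src_word]]
    return tgt_emb
```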