As the complexity of modern software continues to escalate, software engineering has become an increasingly daunting and error-prone endeavor. In recent years, the field of Neural Code Intelligence (NCI) has emerged as a promising solution, leveraging the power of deep learning techniques to tackle analytical tasks on source code with the goal of improving programming efficiency and minimizing human errors within the software industry. Pretrained language models have become a dominant force in NCI research, consistently delivering state-of-the-art results across a wide range of tasks, including code summarization, generation, and translation. In this paper, we present a comprehensive survey of the NCI domain, including a thorough review of pretraining techniques, tasks, datasets, and model architectures. We hope this paper will serve as a bridge between the natural language and programming language communities, offering insights for future research in this rapidly evolving field.
Natural language processing for programming, which aims to use NLP techniques to assist programming, has experienced an explosion in recent years. However, there is no literature that systematically reviews related work across the full spectrum. In this paper, we comprehensively investigate existing work, ranging from early deductive models to the latest competition-level models. Another advantage of this paper is the completeness of its technique categorization, which makes it easy to locate and compare future works.
Programming language processing (PLP) using machine learning has achieved extensive improvements over the past few years, and a growing number of people are interested in exploring this promising field. However, it is difficult for new researchers and developers to find the right components to build their own machine learning pipelines, given the diverse range of PLP tasks to be solved, the large number of datasets and models that have been released, and the complex set of compilers or tools involved. To improve the findability, accessibility, interoperability, and reusability (FAIRness) of machine learning components, we collect and analyze a set of representative papers in the field of machine-learning-based PLP. We then identify and characterize the key concepts, including PLP tasks, model architectures, and supporting tools. Finally, we show several examples of using reusable components to build machine learning pipelines that solve a set of PLP tasks.
We present PanGu-Coder, a pretrained decoder-only language model that adopts the PanGu-Alpha architecture for text-to-code generation, i.e., synthesizing programming language solutions given natural language problem descriptions. We train PanGu-Coder with a two-stage strategy: the first stage uses causal language modeling (CLM) to pretrain on raw programming language data, while the second stage uses a combined training objective of causal language modeling and masked language modeling (MLM), focusing on the downstream task of text-to-code generation and training on loosely curated pairs of natural language problem definitions and code functions. Finally, we discuss PanGu-Coder-FT, which is fine-tuned on a combination of competitive programming problems and code with continuous integration tests. We evaluate PanGu-Coder with a focus on whether it generates functionally correct programs, and show that it achieves equivalent or better performance than similarly sized models such as Codex while attending to a smaller context window and training on less data.
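As a rough illustration of the first-stage objective, the following sketch computes a causal language modeling loss for a batch of code token IDs in PyTorch. The toy decoder-only model and its sizes are placeholders, not the PanGu-Alpha architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of the stage-one causal language modeling (CLM) objective on raw code
# tokens. The toy decoder below is NOT the PanGu-Alpha architecture; sizes are placeholders.
VOCAB_SIZE, D_MODEL, SEQ_LEN = 1000, 64, 32

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
decoder = nn.TransformerEncoder(layer, num_layers=2)
lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)

def clm_loss(token_ids: torch.Tensor) -> torch.Tensor:
    """Next-token prediction: every position is trained to predict the token that follows it."""
    seq_len = token_ids.size(1)
    causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
    hidden = decoder(embed(token_ids), mask=causal_mask)
    logits = lm_head(hidden)
    return nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, VOCAB_SIZE),   # predictions for positions 0..L-2
        token_ids[:, 1:].reshape(-1),             # gold next tokens at positions 1..L-1
    )

batch = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN))  # stand-in for tokenized source code
print(clm_loss(batch).item())
```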
Software engineers working with the same programming language (PL) may speak different natural languages (NLs) and vice versa, erecting huge barriers to communication and working efficiency. Recent studies have demonstrated the effectiveness of generative pre-training in computer programs, yet they are always English-centric. In this work, we step towards bridging the gap between multilingual NLs and multilingual PLs for large language models (LLMs). We release ERNIE-Code, a unified pre-trained language model for 116 NLs and 6 PLs. We employ two methods for universal cross-lingual pre-training: span-corruption language modeling that learns patterns from monolingual NL or PL; and pivot-based translation language modeling that relies on parallel data of many NLs and PLs. Extensive results show that ERNIE-Code outperforms previous multilingual LLMs for PL or NL across a wide range of end tasks of code intelligence, including multilingual code-to-text, text-to-code, code-to-code, and text-to-text generation. We further show its advantage of zero-shot prompting on multilingual code summarization and text-to-text translation. We will make our code and pre-trained models publicly available.
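To make the first pre-training objective concrete, here is a minimal, standard-library-only sketch of span-corruption language modeling: contiguous token spans are replaced by sentinel markers in the input, and the target asks the model to reconstruct them. The sentinel naming and span sizes are illustrative assumptions, not ERNIE-Code's exact configuration.

```python
import random

def span_corrupt(tokens, corruption_rate=0.15, mean_span_len=3, seed=0):
    """Replace random contiguous spans with sentinels; return (input, target) token lists."""
    rng = random.Random(seed)
    n_to_mask = max(1, int(len(tokens) * corruption_rate))
    masked = set()
    while len(masked) < n_to_mask:
        start = rng.randrange(len(tokens))
        for i in range(start, min(len(tokens), start + mean_span_len)):
            masked.add(i)

    inputs, targets, sentinel = [], [], 0
    i = 0
    while i < len(tokens):
        if i in masked:
            # Each corrupted span is replaced by one sentinel in the input and
            # reproduced after the same sentinel in the target (T5-style).
            inputs.append(f"<extra_id_{sentinel}>")
            targets.append(f"<extra_id_{sentinel}>")
            while i < len(tokens) and i in masked:
                targets.append(tokens[i])
                i += 1
            sentinel += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

code = "def add ( a , b ) : return a + b".split()  # works the same for NL or PL tokens
src, tgt = span_corrupt(code)
print(src)
print(tgt)
```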
Recently, there has been a surge of interest in automating software engineering tasks with deep learning. This work addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most state-of-the-art deep learning models for code generation use training strategies that were primarily designed for natural language. However, understanding and generating code requires a more rigorous grasp of code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model in which the encoder and decoder are trained to recognize the syntax and data flow of the source and target code, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but also ensure that our decoder preserves the syntax and data flow of the target code by introducing two auxiliary tasks: AST path prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that enhances the quality of generated code by modeling target syntax and data flow. The proposed structure-aware encoder-decoder model achieves state-of-the-art performance on the code translation and text-to-code generation tasks of the CodeXGLUE benchmark.
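The data-flow signal used by such structure-aware models can be approximated very simply. The sketch below uses Python's ast module to extract variable definition-to-use edges from straight-line code, the kind of graph a structure-aware encoder or an auxiliary data-flow prediction task could consume; it is an illustrative simplification, not the authors' extraction pipeline.

```python
import ast

def def_use_edges(source: str):
    """Return (def_line, use_line, variable) edges for straight-line Python code."""
    tree = ast.parse(source)
    last_def = {}   # variable name -> line where it was last assigned
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                last_def[node.id] = node.lineno
            elif isinstance(node.ctx, ast.Load) and node.id in last_def:
                edges.append((last_def[node.id], node.lineno, node.id))
    return edges

snippet = """
x = 1
y = x + 2
z = x * y
"""
print(def_use_edges(snippet))
# -> [(2, 3, 'x'), (2, 4, 'x'), (3, 4, 'y')]
```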
GitHub commits, which record code changes together with natural language messages describing them, play a crucial role in helping software developers understand software evolution. To foster the development of the open-source software community, we collect a commit benchmark comprising 7.9 million commits across 7 programming languages. Based on this benchmark, we present CommitBART, a large pretrained encoder-decoder Transformer model for GitHub commits. The model is pretrained with six pretraining tasks in three categories (denoising objectives, cross-modal generation, and contrastive learning) to learn representations of commit fragments. Furthermore, we unify a "commit intelligence" framework with one understanding task and three generation tasks for commits. Comprehensive experiments on these tasks show that CommitBART significantly outperforms previous pretrained models for code. Further analysis also reveals that each pretraining task enhances model performance. We encourage follow-up researchers to contribute more commit-related downstream tasks to our framework in the future.
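One of the pretraining categories above is contrastive learning. Below is a minimal sketch of an InfoNCE-style objective over paired commit-message and code-diff embeddings; the linear "encoders" and feature sizes are stand-ins purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the commit-message and code-diff encoders.
msg_encoder = nn.Linear(128, 64)
diff_encoder = nn.Linear(128, 64)

def info_nce(msg_feats, diff_feats, temperature=0.07):
    """Pull matching (message, diff) pairs together, push non-matching pairs apart."""
    msg = F.normalize(msg_encoder(msg_feats), dim=-1)
    diff = F.normalize(diff_encoder(diff_feats), dim=-1)
    logits = msg @ diff.t() / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(msg.size(0))             # the i-th message matches the i-th diff
    return F.cross_entropy(logits, labels)

msgs = torch.randn(16, 128)   # pretend message features for 16 commits
diffs = torch.randn(16, 128)  # pretend diff features for the same 16 commits
print(info_nce(msgs, diffs).item())
```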
Given a natural language description of the user's demands, the NL2Code task aims to generate code that addresses those demands. This is a critical but challenging task that mirrors the capabilities of AI-powered programming. The NL2Code task is inherently versatile, diverse, and complex. For example, a demand can be described in different languages, in different formats, and at different levels of granularity. This inspired us to conduct this survey of NL2Code. In this survey, we focus on how neural networks (NNs) solve NL2Code. We first propose a comprehensive framework that is able to cover all studies in this field. Then, we parse the existing studies into this framework in depth. We create an online website to record the parsing results, which tracks existing and recent NL2Code progress. In addition, we summarize the current challenges of NL2Code as well as its future directions. We hope that this survey can foster the evolution of this field.
Machine Learning for Source Code (ML4Code) is an active research field in which extensive experimentation is needed to discover how to best use source code's richly structured information. With this in mind, we introduce JEMMA, an Extensible Java Dataset for ML4Code Applications, which is a large-scale, diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is to lower the barrier to entry in ML4Code by providing the building blocks to experiment with source code models and tasks. JEMMA comes with a considerable amount of pre-processed information such as metadata, representations (e.g., code tokens, ASTs, graphs), and several properties (e.g., metrics, static analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2 million classes and over 8 million methods. JEMMA is also extensible, allowing users to add new properties and representations to the dataset and evaluate tasks on them. Thus, JEMMA becomes a workbench that researchers can use to experiment with novel representations and tasks operating on source code. To demonstrate the utility of the dataset, we also report results from two empirical studies on our data, ultimately showing that significant work lies ahead in the design of context-aware source code models that can reason over a broader network of source code entities in a software project, the very task that JEMMA is designed to help with.
Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.
The problem of reversing the compilation process, decompilation, is an important tool in reverse engineering of computer software. Recently, researchers have proposed using techniques from neural machine translation to automate the process in decompilation. Although such techniques hold the promise of targeting a wider range of source and assembly languages, to date they have primarily targeted C code. In this paper we argue that existing neural decompilers have achieved higher accuracy at the cost of requiring language-specific domain knowledge such as tokenizers and parsers to build an abstract syntax tree (AST) for the source language, which increases the overhead of supporting new languages. We explore a different tradeoff that, to the extent possible, treats the assembly and source languages as plain text, and show that this allows us to build a decompiler that is easily retargetable to new languages. We evaluate our prototype decompiler, Beyond The C (BTC), on Go, Fortran, OCaml, and C, and examine the impact of parameters such as tokenization and training data selection on the quality of decompilation, finding that it achieves comparable decompilation results to prior work in neural decompilation with significantly less domain knowledge. We will release our training data, trained decompilation models, and code to help encourage future research into language-agnostic decompilation.
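In the same plain-text spirit, a training pair for such a decompiler can be produced by compiling a function and pairing its assembly with the original source, with both sides tokenized as flat text. The sketch below does this for C using gcc (assumed to be installed) and a trivial whitespace tokenizer, which is far simpler than the paper's actual data pipeline.

```python
import pathlib
import subprocess
import tempfile

C_SOURCE = "int add(int a, int b) { return a + b; }\n"

def source_assembly_pair(c_code: str):
    """Compile a C snippet to assembly and return (source_tokens, asm_tokens) as plain text."""
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "f.c"
        asm = pathlib.Path(tmp) / "f.s"
        src.write_text(c_code)
        subprocess.run(["gcc", "-S", "-O0", str(src), "-o", str(asm)], check=True)
        # Treat both sides as flat token sequences -- no parser, no AST, no language-specific tooling.
        return c_code.split(), asm.read_text().split()

src_tokens, asm_tokens = source_assembly_pair(C_SOURCE)
print(len(src_tokens), "source tokens ->", len(asm_tokens), "assembly tokens")
```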
We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both "bimodal" data of NL-PL pairs and "unimodal" data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
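A toy version of the replaced token detection setup looks like this: a simple "generator" proposes plausible substitutes at a few positions, and binary labels mark which tokens were replaced so a discriminator can learn to detect them. The unigram generator and word-level tokens are simplifying assumptions; CodeBERT uses learned generators over subword tokens.

```python
import random
from collections import Counter

corpus = [
    "def add ( a , b ) : return a + b".split(),
    "def sub ( a , b ) : return a - b".split(),
]

# A trivially simple "generator": sample replacements from corpus unigram frequencies.
unigram = Counter(tok for sent in corpus for tok in sent)
vocab, weights = zip(*unigram.items())

def replaced_token_detection_example(tokens, replace_prob=0.15, seed=0):
    """Return (corrupted_tokens, labels) where label 1 marks a replaced token."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            # Note: a real setup would relabel accidental identical replacements as original.
            corrupted.append(rng.choices(vocab, weights=weights, k=1)[0])
            labels.append(1)   # discriminator should flag this position
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

print(replaced_token_detection_example(corpus[0]))
```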
Program synthesis or code generation aims to generate programs that satisfy a problem specification. Recent approaches using large-scale pretrained language models (LMs) have shown promising results, but they have some critical limitations. In particular, they often follow a standard supervised fine-tuning procedure and train code generation models only from pairs of natural language problem descriptions and ground-truth programs. This paradigm largely ignores some important but potentially useful signals in the problem specification, such as unit tests, and thus often leads to poor performance on complex unseen coding tasks. To address these limitations, we propose CodeRL, a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning (RL). Specifically, during training we treat the code-generating LM as an actor network and introduce a critic network that is trained to predict the functional correctness of generated programs and provide dense feedback signals to the actor. During inference, we introduce a new generation procedure with a critic sampling strategy, which allows the model to automatically regenerate programs based on feedback from example unit tests and critic scores. For the model backbone, we extend CodeT5's encoder-decoder architecture with enhanced learning objectives, larger model sizes, and better pretraining data. Our method not only achieves new SOTA results on the challenging APPS benchmark, but also shows strong zero-shot transfer capability, with new SOTA results on the simpler MBPP benchmark.
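The feedback loop described above ultimately grounds out in running candidate programs against unit tests. Below is a heavily simplified, sandbox-free sketch of turning test outcomes into a scalar reward that a policy-gradient update could use; the reward levels echo the compile-error / runtime-error / failing / passing distinction, but the executor and values are illustrative assumptions.

```python
import subprocess
import sys
import tempfile

def unit_test_reward(program: str, test_code: str, timeout: float = 5.0) -> float:
    """Run a candidate program plus its tests in a subprocess and map the outcome to a reward."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program + "\n" + test_code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return -0.6                      # treated like a runtime failure
    if result.returncode == 0:
        return 1.0                       # all assertions passed
    if b"SyntaxError" in result.stderr:
        return -1.0                      # program does not even parse
    if b"AssertionError" in result.stderr:
        return -0.3                      # runs but fails the tests
    return -0.6                          # other runtime error

candidate = "def add(a, b):\n    return a + b"
tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0"
print(unit_test_reward(candidate, tests))
# A REINFORCE-style update would then scale the sequence log-likelihood by (reward - baseline),
# with the dense per-token weights coming from the learned critic.
```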
Pretrained generative language models for source code (e.g., PLBART, CodeT5, SPT-Code) have produced strong results on multiple tasks, including code generation and translation, over the past few years. These models adopt different pretraining objectives to learn the statistics of code construction from very large-scale corpora in a self-supervised manner. The success of pretrained models largely depends on these pretraining objectives. This paper proposes a new pretraining objective, "naturalizing" source code, which exploits the bimodal, dual-channel (formal and natural channel) nature of code. Unlike natural language, the bimodal, dual-channel nature of code allows us to generate semantically equivalent code at scale. We introduce six classes of semantic-preserving transformations to produce unnatural forms of code, and then force our model to recover the more natural original programs written by developers. Learning to generate equivalent but more natural code from a large corpus of open-source code, without explicit manual supervision, helps the model learn to both ingest and generate code. We fine-tune our model on three generative software engineering tasks: code generation, code translation, and code refinement, with limited human-curated labeled data, and achieve state-of-the-art performance rivaling CodeT5. We show that our pretrained model is especially competitive in zero-shot and few-shot learning and is better at learning code properties (e.g., syntax, data flow).
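One simple transformation in the same spirit as those described above is systematic variable renaming, which preserves behavior while making the code less "natural". The Python ast sketch below applies it; it is an illustrative example rather than one of the paper's six transformation classes verbatim.

```python
import ast
import builtins

def rename_variables(source: str) -> str:
    """Semantic-preserving transform: rewrite locally defined names to VAR_0, VAR_1, ..."""
    tree = ast.parse(source)

    # Pass 1: collect names the code itself defines (arguments and assignment targets),
    # excluding builtins so calls like sum() and len() keep their meaning.
    local_names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.arg):
            local_names.add(node.arg)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            local_names.add(node.id)
    local_names -= set(dir(builtins))
    mapping = {name: f"VAR_{i}" for i, name in enumerate(sorted(local_names))}

    # Pass 2: apply the mapping to argument and name nodes only.
    for node in ast.walk(tree):
        if isinstance(node, ast.arg) and node.arg in mapping:
            node.arg = mapping[node.arg]
        elif isinstance(node, ast.Name) and node.id in mapping:
            node.id = mapping[node.id]
    return ast.unparse(tree)

natural = """
def average(values):
    total = sum(values)
    return total / len(values)
"""
print(rename_variables(natural))
# The pre-training task then asks the model to recover the original, developer-written
# ("natural") version from this de-naturalized form.
```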
Pretrained language models have proven effective in many software-related generation tasks; however, they are not well suited for editing tasks, as they are not designed to reason about edits. To address this, we propose a novel pretraining objective that explicitly models edits and use it to build CoditT5, a large language model for software-related editing tasks that is pretrained on a large amount of source code and natural language comments. We fine-tune it on various downstream editing tasks, including comment updating, bug fixing, and automated code review. By outperforming purely generation-based models, we demonstrate the generality of our approach and its suitability for editing tasks. We also show how a purely generation-based model and our edit-based model can complement each other through a simple reranking strategy, with which we achieve state-of-the-art performance on the three downstream editing tasks.
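The edit-aware output format can be illustrated by a small helper that turns an (old, new) pair into an explicit edit plan followed by the full target sequence, computed here with difflib. The <DEL>/<INS>/<SEP> tags are made up for illustration and are not CoditT5's exact vocabulary.

```python
import difflib

def edit_plan(old_tokens, new_tokens):
    """Produce an explicit edit plan (<DEL>/<INS> spans) followed by the full target sequence."""
    plan = []
    matcher = difflib.SequenceMatcher(a=old_tokens, b=new_tokens)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):
            plan += ["<DEL>"] + old_tokens[i1:i2]
        if op in ("replace", "insert"):
            plan += ["<INS>"] + new_tokens[j1:j2]
    return plan + ["<SEP>"] + new_tokens

old = "Returns the sum of a and b .".split()
new = "Returns the product of a and b .".split()
print(" ".join(edit_plan(old, new)))
# <DEL> sum <INS> product <SEP> Returns the product of a and b .
```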
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), which aims to analyze and understand people's opinions at the aspect level, has attracted considerable interest over the last decade. To handle ABSA in different scenarios, various tasks have been introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years to capture more complete aspect-level sentiment information. However, a systematic review of the various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill with this survey. More specifically, we provide a new taxonomy for ABSA that organizes existing studies along the axes of the concerned sentiment elements, with an emphasis on recent advances in compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which has raised ABSA performance to a new level. In addition, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss open challenges, outlining potential future directions for ABSA.
Graph neural networks have been shown to produce impressive results for a variety of software engineering tasks. However, existing techniques still have two issues: (1) long-range dependencies and (2) different code components are treated as equal when they should not be. To address these issues, we propose a method to represent code as a hierarchy (Code Hierarchy), in which different code components are represented separately at various levels of granularity. Then, to process the representation at each level, we design a novel network architecture, Echelon, which combines the strengths of heterogeneous graph Transformer networks and tree-based convolutional neural networks to learn abstract syntax trees enriched with code dependency information. We also propose a novel pretraining objective called missing subtree prediction to complement our code hierarchy. Evaluation results show that our method significantly outperforms other baselines on three tasks: any-code completion, code classification, and code clone detection.
In this paper, we leverage low-level compiler intermediate representations (IR) to improve code translation. Traditional transpilers rely on syntactic information and handcrafted rules, which limits their applicability and produces unnatural code. Applying neural machine translation (NMT) methods to code has successfully broadened the set of programs for which natural translations can be obtained. However, these methods treat code as a sequence of text tokens and still do not sufficiently distinguish between similar pieces of code that have different semantics in different languages. The result is low-quality translations, which reduces the practicality of NMT and underlines the need for approaches that significantly improve its accuracy. Here, we propose to augment code translation with IRs, specifically LLVM IR, and report results on the C++, Java, Rust, and Go languages. Our method improves the state of the art in unsupervised code translation, increasing the number of correct translations by 11% on average, and by up to 79% for the Java-Rust pair. We extend the previous test set for code translation by adding hundreds of Go and Rust functions. In addition, we study the problems of IR decompilation, generating programming source code from IR, and using IRs as an intermediary pivot for translation.
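A minimal way to obtain the (source, IR) pairs that such an approach trains on is to ask clang to emit textual LLVM IR for each snippet. The sketch below does this for a C++ function; it assumes clang++ is installed and is, of course, a drastic simplification of the paper's data pipeline.

```python
import pathlib
import subprocess
import tempfile

CPP_SOURCE = "int square(int x) { return x * x; }\n"

def source_ir_pair(cpp_code: str):
    """Compile a C++ snippet to textual LLVM IR and return the (source, ir) strings."""
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "f.cpp"
        ir = pathlib.Path(tmp) / "f.ll"
        src.write_text(cpp_code)
        # -S -emit-llvm produces human-readable IR; -O0 keeps it close to the source structure.
        subprocess.run(["clang++", "-S", "-emit-llvm", "-O0", str(src), "-o", str(ir)], check=True)
        return cpp_code, ir.read_text()

source, llvm_ir = source_ir_pair(CPP_SOURCE)
print(llvm_ir.splitlines()[0])  # module header of the emitted IR
# The same IR can serve as an extra training signal for one language
# or as the intermediary pivot when translating between two languages.
```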
Automated program repair (APR) aims to automatically fix bugs in source code. Recently, with advances in the field of deep learning (DL), neural program repair (NPR) research has risen, which formulates APR as a translation task from buggy code to correct code and adopts neural networks based on the encoder-decoder architecture. Compared to other APR techniques, NPR approaches have a great advantage in applicability because they do not require any specification (i.e., a test suite). Although NPR has been a hot research direction, there is not yet any overview of this field. To help interested readers understand the architectures, challenges, and corresponding solutions of existing NPR systems, we conduct a literature review of recent studies in this paper. We first introduce the background knowledge of this field. Next, for ease of understanding, we decompose the NPR process into a series of modules and elaborate on the various design choices for each module. Furthermore, we identify several challenges and discuss the implications of existing solutions. Finally, we conclude and provide some promising directions for future research.
Pretrained models have been effective in many code intelligence tasks. These models are pretrained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, because the inputs of pretraining and downstream tasks take different forms, it is hard to fully exploit the knowledge of pretrained models. Moreover, the performance of fine-tuning strongly depends on the amount of downstream data, while in practice scenarios with scarce data are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new tuning paradigm, alleviates the above problems and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively little data. In this paper, we empirically evaluate the usage and effect of prompt tuning on code intelligence tasks. We conduct prompt tuning on the popular pretrained models CodeBERT and CodeT5 and experiment with three code intelligence tasks: defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all three tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU score of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, we can adapt prompt tuning to code intelligence tasks to achieve better performance, especially when task-specific data is scarce.
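As a rough sketch of the "prompts inserted during tuning", the snippet below implements soft (continuous) prompt tuning: a small set of trainable prompt vectors is prepended to the token embeddings while the pretrained encoder stays frozen, so only the prompt parameters and a small task head receive gradients. The toy encoder merely stands in for CodeBERT or CodeT5.

```python
import torch
import torch.nn as nn

D_MODEL, N_PROMPT, VOCAB = 64, 8, 1000

# Frozen stand-in for a pretrained code model's embedding layer and encoder.
token_embed = nn.Embedding(VOCAB, D_MODEL)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True), num_layers=2
)
for p in list(token_embed.parameters()) + list(encoder.parameters()):
    p.requires_grad = False

# The only new trainable parameters: soft prompt vectors and a small task head
# (a binary head here, e.g. for defect prediction).
soft_prompt = nn.Parameter(torch.randn(N_PROMPT, D_MODEL) * 0.02)
head = nn.Linear(D_MODEL, 2)

def forward(token_ids: torch.Tensor) -> torch.Tensor:
    embeds = token_embed(token_ids)                                  # (B, L, D)
    prompts = soft_prompt.unsqueeze(0).expand(token_ids.size(0), -1, -1)
    hidden = encoder(torch.cat([prompts, embeds], dim=1))            # prepend soft prompts
    return head(hidden[:, 0])                                        # classify from the first position

batch = torch.randint(0, VOCAB, (4, 16))           # stand-in for tokenized code snippets
loss = nn.functional.cross_entropy(forward(batch), torch.tensor([0, 1, 0, 1]))
loss.backward()                                     # gradients flow only to soft_prompt and head
print(soft_prompt.grad is not None, head.weight.grad is not None)
```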