Pre-trained language models for programming languages have shown a powerful ability in processing many Software Engineering (SE) tasks, e.g., program synthesis, code completion, and code search. However, it remains to be seen what is behind their success. Recent studies have examined how pre-trained models can effectively learn syntax information based on Abstract Syntax Trees (ASTs). In this paper, we investigate what role the self-attention mechanism plays in understanding code syntax and semantics, based on ASTs and static analysis. We focus on a well-known representative code model, CodeBERT, and study how it learns code syntax and semantics through the self-attention mechanism and Masked Language Modelling (MLM) at the token level. We propose a group of probing tasks to analyze CodeBERT. Based on ASTs and static analysis, we establish the relationships among the code tokens. First, our results show that CodeBERT can acquire syntax and semantics knowledge through self-attention and MLM. Second, we demonstrate that the self-attention mechanism pays more attention to tokens connected by dependence relationships than to other tokens. Different attention heads play different roles in learning code semantics, and we show that some of them are weak at encoding code semantics. Different layers have different competencies in representing different code properties: deep CodeBERT layers can encode the semantic information that requires complex inference in the code context. More importantly, we show that our analysis is helpful and leverage our conclusions to improve CodeBERT. We point out an alternative approach for pre-training models, one that makes full use of the current pre-training strategy, i.e., MLM, to learn code syntax and semantics, instead of combining features from different code data formats, e.g., data flow, run-time states, and program outputs.
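As an illustration of the token-level probing described above, the sketch below (ours, not the paper's released code) measures whether CodeBERT's self-attention assigns more weight to token pairs that share a dependence relationship than to token pairs in general. The example snippet and the `related_pairs` indices are hypothetical placeholders for relations that would normally be extracted from the AST or static analysis.

```python
# A probing-style measurement sketched by us (not the paper's released code):
# compare the attention CodeBERT assigns to dependence-related token pairs
# against the attention assigned to token pairs in general, per layer.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)
model.eval()

code = "def add(a, b): return a + b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions   # tuple of (1, heads, seq, seq), one entry per layer

# Hypothetical token-index pairs that share a dependence relationship; in the
# paper such pairs would be derived from the AST and static analysis.
related_pairs = [(4, 10), (6, 12)]

for layer_idx, layer_att in enumerate(attentions):
    att = layer_att[0].mean(dim=0)            # average over heads -> (seq, seq)
    related = torch.stack([att[i, j] for i, j in related_pairs]).mean()
    print(f"layer {layer_idx}: related pairs {related.item():.4f} vs all pairs {att.mean().item():.4f}")
```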
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Thanks to its competitive performance in multiple industrial application domains, deep learning (DL) plays an increasingly important role in our daily lives. As the core of DL-based systems, deep neural networks (DNNs) automatically learn knowledge from carefully collected and organized training data to gain the ability to predict the labels of unseen data. Similar to traditional software systems, which require thorough testing, DNNs also need to be carefully evaluated to ensure that the quality of the trained model meets the requirements. In practice, the de facto standard for evaluating DNN quality in industry is to check their performance (accuracy) on a collected, labeled test dataset. However, preparing such labeled data is often not easy, partly because the labeling effort is enormous, i.e., data labeling is labor-intensive, especially with massive new incoming unlabeled data every day. Recent studies show that test selection for DNNs is a promising direction to tackle this issue, by selecting minimal representative data to label and using them to assess the model. However, it still requires human effort and cannot be automatic. In this paper, we propose a novel technique, named Aries, that can estimate the performance of DNNs on new unlabeled data using only information obtained from the original test data. The key insight behind our technique is that the model should have similar prediction accuracy on data that have a similar distance to the decision boundary. We performed a large-scale evaluation of our technique on 13 types of data transformation methods. The results demonstrate the usefulness of our technique: the accuracy estimated by Aries is only 0.03%-2.60% (on average 0.61%) away from the true accuracy. Besides, Aries also outperforms the state-of-the-art selection-labeling-based methods in most (128) cases.
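The core estimation idea lends itself to a short sketch. The code below is a minimal illustration of the stated insight, not the authors' Aries implementation: it uses the top-two softmax margin as a proxy for distance to the decision boundary, learns a margin-bucket-to-accuracy mapping on the labeled original test set, and estimates accuracy on new unlabeled data from its bucket occupancy alone.

```python
# Minimal sketch of labeling-free accuracy estimation (our illustration of the idea,
# not the Aries code): data with a similar margin, i.e. a similar distance to the
# decision boundary, is assumed to have similar accuracy.
import numpy as np

def margins(probs):                              # probs: (n, num_classes) softmax outputs
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]               # small margin ~ close to the boundary

def estimate_accuracy(probs_old, labels_old, probs_new, n_buckets=10):
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    old_bucket = np.clip(np.digitize(margins(probs_old), edges) - 1, 0, n_buckets - 1)
    new_bucket = np.clip(np.digitize(margins(probs_new), edges) - 1, 0, n_buckets - 1)
    correct_old = probs_old.argmax(axis=1) == labels_old

    estimate = 0.0
    for b in range(n_buckets):
        frac_new = np.mean(new_bucket == b)      # share of the new data falling in this bucket
        in_b = old_bucket == b
        acc_b = correct_old[in_b].mean() if in_b.any() else correct_old.mean()
        estimate += frac_new * acc_b             # same-margin data, assumed same accuracy
    return estimate
```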
Over the past few years, deep learning (DL) has been continuously expanding its applications and has become a driving force for large-scale source code analysis in the big-code era. Distribution shift, where the test set follows a different distribution from the training set, causes unexpected accuracy degradation and threatens such models. Although recent progress has been made on distribution-shift benchmarking in fields such as computer vision and natural language processing, progress on distribution-shift analysis and benchmarking for source code tasks remains limited, despite great demand given the volume of source code and its role as the foundation of almost all industrial sectors. To fill this gap, this paper proposes CodeS, a distribution-shift benchmark dataset for source code learning. Specifically, CodeS supports two programming languages (Java and Python) and five types of code distribution shifts (task, programmer, time-stamp, token, and CST). To the best of our knowledge, we are the first to define code-representation-based distribution shifts. In the experiments, we first evaluate the effectiveness of existing out-of-distribution (OOD) detectors and the reasonableness of the distribution-shift definitions, and then measure the model generalization of popular code learning models (e.g., CodeBERT) on the classification task. The results show that 1) OOD detectors based only on softmax scores perform well on code, 2) distribution shift causes accuracy degradation in all code classification models, 3) representation-based distribution shifts have a higher impact on the models than the other shifts, and 4) pre-trained models are more resistant to distribution shifts. We make CodeS publicly available, enabling follow-up research on the quality assessment of code learning models.
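For reference, the softmax-score OOD detector mentioned in result 1) can be sketched in a few lines; this is a generic maximum-softmax-probability detector, and the threshold value is an illustrative choice rather than one taken from the paper.

```python
# Generic maximum-softmax-probability OOD detector (a sketch, not the paper's code):
# samples whose top softmax probability falls below a threshold are flagged as OOD.
import numpy as np

def max_softmax_score(logits):
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def detect_ood(logits, threshold=0.7):                    # threshold is illustrative
    return max_softmax_score(logits) < threshold           # True -> flagged as out-of-distribution
```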
Code search aims to retrieve relevant code fragments based on natural-language queries, so as to improve software productivity and quality. However, automated code search is challenging due to the semantic gap between source code and queries. Most existing approaches mainly consider the sequential information for embedding, while the structural information behind the text is not fully considered. In this paper, we design a novel neural network framework, named GraphSearchNet, to enable effective and accurate source code search by jointly learning the enriched semantics of both source code and queries. Specifically, we propose to encode source code and queries into two graphs, with a bidirectional GGNN (BiGGNN) capturing the local structural information of the graphs. Furthermore, we enhance BiGGNN with effective multi-head attention to supplement the global dependencies that BiGGNN misses. Extensive experiments on the Java and Python datasets show that GraphSearchNet outperforms current state-of-the-art works.
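A hedged sketch of the retrieval step under this setup is shown below; it assumes that graph-level embeddings for code snippets and queries are already available (in the paper these would come from BiGGNN augmented with multi-head attention) and simply ranks code by cosine similarity to the query.

```python
# Retrieval sketch under an assumed interface: given precomputed graph-level
# embeddings for code snippets and a query, rank code by cosine similarity.
import torch
import torch.nn.functional as F

def rank_code(query_emb: torch.Tensor, code_embs: torch.Tensor, top_k: int = 5):
    # query_emb: (d,), code_embs: (n, d)
    sims = F.cosine_similarity(query_emb.unsqueeze(0), code_embs, dim=1)
    return sims.topk(min(top_k, code_embs.size(0))).indices   # indices of best-matching snippets
```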
To date, statistical type inference systems rely heavily on supervised learning approaches, which require laborious manual effort to collect and label large amounts of data. Most Turing-complete imperative languages share similar control- and data-flow structures, which makes it possible to transfer knowledge learned from one language to another. In this paper, we propose a cross-lingual transfer learning framework, PLATO, for statistical type inference, which allows us to leverage prior knowledge learned from the labeled dataset of one language and transfer it to the dataset of another, e.g., from Python to JavaScript, from Java to JavaScript, etc. PLATO is powered by a novel kernelized attention mechanism that constrains the attention scope of the backbone transformer model, so that the model is forced to base its predictions on features universally shared across languages. In addition, we propose syntax enhancement to augment the learning of the feature overlap between language domains. Furthermore, PLATO can also be used to improve the performance of conventional supervised type inference by introducing cross-lingual augmentation, which enables the model to learn more general features from multiple languages. We evaluated PLATO under two settings: 1) under the cross-domain scenario, where the target-language data is unlabeled or partially labeled, the results show that PLATO outperforms state-of-the-art domain-transfer techniques, e.g., it improves the Python-to-TypeScript baseline by +14.6%@EM and +18.6%@weighted-F1; and 2) under the conventional monolingual supervised scenario, PLATO improves the Python baseline by +4.10%@EM and +1.90%@weighted-F1 with the introduced cross-lingual augmentation.
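The idea of constraining a transformer's attention scope can be illustrated with a generic masked-attention sketch; this shows only the general mechanism, and PLATO's actual kernelized formulation differs.

```python
# Generic sketch of constraining attention scope with a binary mask (the spirit of
# the mechanism; not PLATO's kernelized formulation): positions outside the allowed
# scope receive -inf logits before the softmax, so the model cannot attend to them.
import torch
import torch.nn.functional as F

def scoped_attention(q, k, v, allowed):      # q, k, v: (seq, d); allowed: (seq, seq) bool
    logits = q @ k.transpose(0, 1) / (q.size(-1) ** 0.5)
    logits = logits.masked_fill(~allowed, float("-inf"))
    # each query position must keep at least one allowed key, or the softmax row is undefined
    return F.softmax(logits, dim=-1) @ v
```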
High-fidelity generation and high-accuracy detection of DeepFake images are currently in an arms race. We believe that producing highly realistic and "detection-evasive" DeepFakes can serve the ultimate goal of improving the capability of future generations of DeepFake detectors. In this paper, we propose a simple yet powerful pipeline that reduces the artifact patterns of fake images, without hurting image quality, by performing implicit spatial-domain notch filtering. We first show that frequency-domain notch filtering, although effective in principle for removing the periodic noise patterns, is infeasible for our task because of the manual design required for the notch filters. We therefore resort to a learning-based approach that reproduces the notch-filtering effect, but only in the spatial domain. We add overwhelming spatial noise to break the periodic noise patterns and apply deep image filtering to reconstruct the noise-free fake images; we name our method DeepNotch. Deep image filtering provides a dedicated filter for each pixel of the noisy image, producing filtered images with high fidelity compared with their DeepFake counterparts. Moreover, we also use the semantic information of the image to generate an adversarial guidance map for adding noise intelligently. Our large-scale evaluation on three representative state-of-the-art DeepFake detectors (tested on 16 types of DeepFakes) demonstrates that our technique significantly reduces the accuracy of these three fake-image detection methods, by 36.79% on average and up to 97.02% in the best case.
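Under heavy assumptions, the two-step spatial pipeline can be sketched as follows; the Gaussian filter here is only a stand-in for the paper's deep image filtering network, which predicts a dedicated kernel per pixel, and the noise level is an illustrative choice.

```python
# Heavily simplified sketch of the two-step spatial pipeline (our illustration):
# add strong spatial noise to break the periodic artifact patterns, then rebuild a
# noise-free image. The Gaussian filter is only a placeholder for deep image filtering.
import numpy as np
from scipy.ndimage import gaussian_filter

def break_and_rebuild(fake_img: np.ndarray, noise_std: float = 0.15) -> np.ndarray:
    # fake_img: 2-D grayscale array with values in [0, 1] (an assumption for simplicity)
    noisy = np.clip(fake_img + np.random.normal(0.0, noise_std, fake_img.shape), 0.0, 1.0)
    return np.clip(gaussian_filter(noisy, sigma=1.0), 0.0, 1.0)   # placeholder reconstruction
```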
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
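A sketch of a GRN layer following the description above is given below; the channels-last layout and the exact aggregation and normalization choices are assumptions on our part, and the released ConvNeXt V2 code should be consulted for the authors' implementation.

```python
# Sketch of a Global Response Normalization (GRN) layer as described for ConvNeXt V2
# (assumptions: channels-last layout, L2 spatial aggregation, mean-based normalization):
# aggregate a global per-channel response, normalize it across channels, and use it
# to recalibrate the features, with a residual connection.
import torch
import torch.nn as nn

class GRN(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):                                       # x: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)       # global response per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)    # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x            # recalibrate + residual
```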
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
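For orientation, the generic equality-constrained SQP subproblem that such a method solves at each iterate can be written as follows; the notation is ours, with the stochastic gradient estimate coming from the first-order oracle and a positive-definite model Hessian.

```latex
% Generic equality-constrained SQP step at iterate x_k (notation ours):
%   \bar{g}_k : stochastic gradient estimate from the first-order oracle
%   H_k       : positive-definite model of the Hessian of the Lagrangian
\begin{aligned}
d_k &\in \arg\min_{d \in \mathbb{R}^n} \; \bar{g}_k^{\top} d + \tfrac{1}{2}\, d^{\top} H_k d
\quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^{\top} d = 0, \\
x_{k+1} &= x_k + \alpha_k d_k, \quad \text{with } \alpha_k \text{ chosen by the step search using stochastic function estimates.}
\end{aligned}
```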
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch, with a disjoint regulation that raises the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training-efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train a ViT with half the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear-probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and models will be made public at https://github.com/mx-mark/DMJD.
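One plausible instantiation of the disjoint-masking step is sketched below; it is an assumption on our part rather than the authors' exact scheme: the visible token sets of the views are carved out of one shared random permutation, so they are pairwise disjoint while each view keeps the same masking rate.

```python
# One plausible instantiation of disjoint masking (an assumption, not the authors'
# exact scheme): visible token sets of the views come from a single shared random
# permutation, so every token is visible in at most one view and serves as a
# reconstruction target in the others, while each view keeps the same masking rate.
import numpy as np

def disjoint_masked_views(num_tokens: int, mask_ratio: float = 0.75, num_views: int = 4, rng=None):
    rng = rng or np.random.default_rng()
    visible_per_view = int(num_tokens * (1.0 - mask_ratio))
    assert visible_per_view * num_views <= num_tokens, "visible sets cannot stay disjoint at this ratio"
    order = rng.permutation(num_tokens)
    masks = []
    for v in range(num_views):
        visible = order[v * visible_per_view:(v + 1) * visible_per_view]
        mask = np.ones(num_tokens, dtype=bool)        # True = masked / reconstruction target
        mask[visible] = False
        masks.append(mask)
    return masks                                       # one boolean mask per view
```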