The World Wide Web has grown to be a primary source of information for millions of people. Due to the size of the Web, search engines have become the major access point for this information. However, commercial search engines use hidden algorithms that put the integrity of their results in doubt, collect user data that raises privacy concerns, and target the general public, thus failing to serve the needs of specific search users. Open source search, like open source operating systems, offers alternatives. The goal of the Open Source Information Retrieval Workshop (OSIR) is to bring together practitioners developing open source search technologies in the context of a premier IR research conference, to share their recent advances, and to coordinate their strategy and research plans. The intent is to foster community-based development, to promote distribution of transparent Web search tools, and to strengthen interaction with the IR research community. A workshop on Open Source Web Information Retrieval was held last year in Compiègne, France as part of WI 2005. The focus of this workshop has been broadened to the whole open source information retrieval community. We want to thank all the authors of the submitted papers, the members of the program committee, and the several reviewers whose contributions have resulted in these high-quality proceedings.

ABSTRACT There has been a resurgence of interest in index maintenance (or incremental indexing) in the academic community in the last three years. Most of this work focuses on how to build indexes as quickly as possible, given the need to run queries during the build process. Our work is based on a different set of assumptions than previous work. First, we focus on latency instead of throughput: we aim to reduce both index latency (the time between when a new document is available to be indexed and when it is available to be queried) and query latency (the time an incoming query must wait because of index processing). Additionally, we assume that users are unwilling to tune parameters to make the system more efficient. We show how this set of assumptions has driven the development of the Indri index maintenance strategy, and describe the details of our implementation.
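As a rough illustration of the incremental-indexing pattern the abstract describes, the minimal sketch below (plain Python, not Indri's actual implementation) keeps postings in an in-memory structure so that a document becomes queryable as soon as it is added, which is exactly the index-latency concern discussed above:

```python
# A minimal sketch (not Indri's actual code) of incremental indexing:
# new documents become queryable immediately on insertion, so index
# latency (time from "available to index" to "available to query") is low.
from collections import defaultdict

class IncrementalIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # term -> set of doc ids
        self.next_id = 0

    def add(self, text):
        """Index a document; it is queryable as soon as this returns."""
        doc_id = self.next_id
        self.next_id += 1
        for term in text.lower().split():
            self.postings[term].add(doc_id)
        return doc_id

    def query(self, *terms):
        """Return ids of documents containing every query term."""
        sets = [self.postings.get(t.lower(), set()) for t in terms]
        return set.intersection(*sets) if sets else set()

index = IncrementalIndex()
index.add("open source information retrieval")
index.add("index maintenance strategies")
print(index.query("index", "maintenance"))  # {1}
```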
Every programmer has a characteristic style, ranging from preferences about identifier naming to preferences about object relationships and design patterns. Coding conventions define a consistent syntactic style, fostering readability and hence maintainability. When collaborating, programmers strive to obey a project's coding conventions. However, one third of reviews of changes contain feedback about coding conventions, indicating that programmers do not always follow them and that project members care deeply about adherence. Unfortunately, programmers are often unaware of coding conventions because inferring them requires a global view, one that aggregates the many local decisions programmers make and identifies emergent consensus on style. We present NATURALIZE, a framework that learns the style of a codebase and suggests revisions to improve stylistic consistency. NATURALIZE builds on recent work in applying statistical natural language processing to source code. We apply NATURALIZE to suggest natural identifier names and formatting conventions. We present four tools focused on ensuring natural code during development and release management, including code review. NATURALIZE achieves 94% accuracy in its top suggestions for identifier names and can even transfer knowledge about conventions across projects, leveraging a corpus of 10,968 open source projects. We used NATURALIZE to generate 18 patches for 5 open source projects: 14 were accepted.
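The core statistical idea, scoring candidate identifier names by how probable they make the surrounding token stream under a language model trained on the codebase, can be sketched as follows; the corpus, context, and candidates here are hypothetical toys, not the NATURALIZE implementation:

```python
# Toy illustration of n-gram-based name suggestion: the candidate name that
# makes the surrounding code most probable under a bigram model wins.
from collections import Counter

def train_bigram(tokens):
    unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    vocab = len(set(tokens))
    def prob(prev, cur):  # add-one smoothing
        return (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
    return prob

# Hypothetical training corpus: the project's existing token stream.
corpus = "for item in items : process ( item ) for item in items : emit ( item )".split()
prob = train_bigram(corpus)

def score(candidate, context=("for", "_", "in", "items")):
    toks = [candidate if t == "_" else t for t in context]
    p = 1.0
    for a, b in zip(toks, toks[1:]):
        p *= prob(a, b)
    return p

for name in ["item", "x", "element"]:
    print(name, score(name))  # "item" scores highest on this corpus
```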
This report describes 18 projects that explored how commercial cloud computing services could be used for scientific computing at national laboratories. The demonstrations ranged from deploying proprietary software in a cloud environment to leveraging established cloud-based analytics workflows for processing scientific datasets. Overall, the projects were very successful, and collectively they indicate that cloud computing can be a valuable computational resource for scientific computing at national laboratories.
Source code authorship attribution is a significant privacy threat to anonymous code contributors. However, it may also enable attribution of successful attacks from code left behind on an infected system, or aid in resolving copyright, copyleft, and plagiarism issues in programming. In this work, we investigate machine learning methods to de-anonymize source code authors of C/C++ using coding style. Our Code Stylometry Feature Set is a novel representation of coding style in source code, built from properties derived from abstract syntax trees. Our random forest and abstract syntax tree-based approach attributes more authors (1,600 and 250) with significantly higher accuracy (94% and 98%) on a larger data set (Google Code Jam) than has previously been achieved. Furthermore, these novel features are robust, difficult to obfuscate, and can be used in other programming languages, such as Python. We also find that (i) code resulting from difficult programming tasks is easier to attribute than code from easier tasks, and (ii) skilled programmers (who can complete the more difficult tasks) are easier to attribute than less skilled programmers.
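A heavily simplified sketch of such an attribution pipeline appears below; the lexical features are crude stand-ins for the AST-derived Code Stylometry Feature Set, and the code samples and author labels are hypothetical:

```python
# Simplified attribution sketch: a few lexical style features per source
# file, fed to a random forest. The real feature set also includes
# AST-derived features; these proxies are for illustration only.
from sklearn.ensemble import RandomForestClassifier

def style_features(code: str):
    lines = code.splitlines() or [""]
    return [
        sum(len(l) for l in lines) / len(lines),         # mean line length
        code.count("\t") / max(len(code), 1),            # tab density
        sum(l.strip().startswith("//") for l in lines),  # comment lines
        code.count("{\n"),                               # brace placement style
    ]

samples = ["int main(){\n\treturn 0;\n}", "int main()\n{\n  // entry\n  return 0;\n}"]
labels = ["author_a", "author_b"]  # hypothetical authors
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([style_features(s) for s in samples], labels)
print(clf.predict([style_features("int x(){\n\treturn 1;\n}")]))
```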
Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques in which everything used to represent terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93% of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.
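The paper's detector compares learned (deep) representations of fragments; as a hedged stand-in for that idea, the sketch below compares bag-of-token vectors with cosine similarity and flags pairs above an illustrative threshold, which conveys the pairwise-similarity framing but none of the learned features:

```python
# Stand-in for clone detection via fragment similarity: bag-of-token
# vectors and cosine similarity instead of the paper's learned embeddings.
import math
from collections import Counter

def vectorize(code):
    return Counter(code.split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

frag1 = "int sum = 0 ; for ( int i = 0 ; i < n ; i ++ ) sum += a [ i ] ;"
frag2 = "int total = 0 ; for ( int j = 0 ; j < n ; j ++ ) total += a [ j ] ;"
sim = cosine(vectorize(frag1), vectorize(frag2))
print(f"similarity = {sim:.2f}, clone = {sim > 0.6}")  # threshold is illustrative
```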
Change-based code review is widely used in industrial software development. Consequently, research on tools that help reviewers achieve better review performance can have a large impact. We analyze one possibility for providing cognitive support to reviewers: determining the importance of change parts for review, specifically determining which parts of a code change can be left out of the review without harm. To determine the importance of change parts, we extract data from software repositories and, based on this data, build prediction models for review remarks. The approach is discussed in detail. To assemble the input data, we propose a new algorithm to trace review remarks back to their triggers. We applied our approach in a medium-sized software company. In this company, we can avoid reviewing 25% of the change parts and 23% of the changed Java source code lines while missing only about 1% of the review remark triggers. Nonetheless, we also observe serious limitations of the attempted approach: a large share of the savings is due to simple syntactic rules, noise in the data hinders the mining of better prediction models, and the approach was adopted by only some of the developers in the case company. Besides the main results on mining and predicting review remark triggers, we also report experiences with a novel multi-objective and interactive rule mining approach. An anonymized dataset from the company and an implementation of the devised algorithms are also available.
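A hedged sketch of the core prediction step, training a classifier on repository-mined features of change parts and skipping review for parts predicted to be remark-free, is shown below; the features, data, and threshold are hypothetical stand-ins, not the company's actual model:

```python
# Sketch of the idea: learn from past reviews which change parts trigger
# remarks, then skip review for parts predicted to be safe. All data here
# is a hypothetical stand-in for the repository-mined data in the abstract.
from sklearn.ensemble import GradientBoostingClassifier

# Each change part: (changed lines, is test code, is formatting-only change)
X = [[120, 0, 0], [3, 1, 0], [2, 0, 1], [80, 0, 0], [1, 0, 1], [40, 0, 0]]
y = [1, 0, 0, 1, 0, 1]  # 1 = a review remark was triggered historically

model = GradientBoostingClassifier(random_state=0).fit(X, y)
new_parts = [[150, 0, 0], [2, 0, 1]]
for part, p in zip(new_parts, model.predict_proba(new_parts)[:, 1]):
    print(part, "review" if p > 0.1 else "skip")  # conservative cutoff
```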
We introduce the computational network (CN), a unified framework for describing arbitrary learning machines, such as deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), logistic regression, and maximum entropy models, that can be illustrated as a series of computational steps. A CN is a directed graph in which each leaf node represents an input value or a parameter and each non-leaf node represents a matrix operation upon its children. We describe algorithms to carry out forward computation and gradient calculation in a CN and introduce the most popular computation node types used in a typical CN. We further introduce the computational network toolkit (CNTK), an implementation of CN that supports both GPU and CPU. We describe the architecture and key components of CNTK, the command-line options for using CNTK, and the network definition and model editing language, and provide sample setups for acoustic models, language models, and spoken language understanding. We also describe the Argon speech recognition decoder as an example of integration with CNTK.
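The CN abstraction, leaf nodes holding values or parameters and interior nodes applying operations, with forward evaluation and reverse-mode gradient calculation, can be sketched in a few lines; this is a toy illustration of the concept, not CNTK's implementation:

```python
# Toy computational network: leaves are inputs/parameters, interior nodes
# are operations; backprop applies the chain rule in reverse topological order.
class Node:
    def __init__(self, value=None, children=(), backward=None):
        self.value, self.children, self.grad = value, children, 0.0
        self._backward = backward or (lambda: None)

def mul(a, b):
    out = Node(a.value * b.value, (a, b))
    def backward():
        a.grad += b.value * out.grad
        b.grad += a.value * out.grad
    out._backward = backward
    return out

def add(a, b):
    out = Node(a.value + b.value, (a, b))
    def backward():
        a.grad += out.grad
        b.grad += out.grad
    out._backward = backward
    return out

def backprop(root):
    order, seen = [], set()
    def visit(n):  # topological order of the graph
        if id(n) not in seen:
            seen.add(id(n))
            for c in n.children:
                visit(c)
            order.append(n)
    visit(root)
    root.grad = 1.0
    for n in reversed(order):
        n._backward()

x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = add(mul(w, x), b)  # forward pass: y = w*x + b
backprop(y)
print(y.value, w.grad, x.grad)  # 7.0, 2.0, 3.0
```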
In this paper we introduce a new approach for learning precise and general probabilistic models of code based on decision tree learning. Our approach directly benefits an emerging class of statistical programming tools which leverage probabilistic models of code learned over large codebases (e.g., GitHub) to make predictions about new programs (e.g., code completion, repair, etc.). The key idea is to phrase the problem of learning a probabilistic model of code as learning a decision tree in a domain-specific language over abstract syntax trees (called TGEN). This allows us to condition the prediction of a program element on a dynamically computed context. Further, our problem formulation enables us to easily instantiate known decision tree learning algorithms such as ID3, but also to obtain new variants we refer to as ID3+ and E13, not previously explored and ones that outperform ID3 in prediction accuracy. Our approach is general and can be used to learn a probabilistic model of any programming language. We implemented our approach in a system called DEEP3 and evaluated it on the challenging task of learning probabilistic models of JavaScript and Python. Our experimental results indicate that DEEP3 predicts elements of JavaScript and Python code with precision above 82% and 69%, respectively. Further, DEEP3 often significantly outperforms state-of-the-art approaches in overall prediction accuracy.
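The ID3 ingredient the abstract builds on, picking the context feature with the highest information gain for predicting the next program element, can be sketched as below; the contexts and labels are hypothetical, and the TGEN DSL and the ID3+/E13 variants are not modeled:

```python
# The split-selection core of ID3: pick the context feature with the
# highest information gain for predicting the next program element.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, feature):
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[feature], []).append(lab)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Hypothetical contexts: (parent AST node, previous token) -> next element.
rows = [("Call", "."), ("Call", "("), ("Assign", "="), ("Assign", "=")]
labels = ["method", "arg", "expr", "expr"]
for f, name in [(0, "parent"), (1, "prev_token")]:
    print(name, round(info_gain(rows, labels, f), 3))  # prev_token wins here
```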
The tens of thousands of high-quality open source software projects on the Internet raise the exciting possibility of studying software development by finding patterns across truly large source code repositories. This could enable new tools for developing code, encouraging reuse, and navigating large projects. In this paper, we build the first giga-token probabilistic language model of source code, based on 352 million lines of Java. This is 100 times the scale of the pioneering work by Hindle et al. The giga-token model is significantly better at the code suggestion task than previous models. More broadly, our approach provides a new "lens" for analyzing software projects, enabling new complexity metrics based on statistical analysis of large corpora. We call these metrics data-driven complexity metrics. We propose new metrics that measure the complexity of a code module and the topical centrality of a module to a software project. In particular, it is possible to distinguish reusable utility classes from classes that are part of a program's core logic based solely on general information-theoretic criteria.
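One way to make the data-driven complexity idea concrete: score a token stream by its cross-entropy under an n-gram model trained on a corpus, so that more typical code scores lower. The sketch below uses a smoothed bigram model on a toy corpus and is only an assumption-laden illustration of the metric family, not the paper's exact model:

```python
# Cross-entropy of a token stream under a smoothed bigram model trained on
# a corpus: lower means more "typical" (less surprising) code.
import math
from collections import Counter

def cross_entropy(train_tokens, test_tokens):
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    vocab = len(set(train_tokens)) + 1
    total = 0.0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)  # add-one smoothing
        total -= math.log2(p)
    return total / max(len(test_tokens) - 1, 1)

corpus = "public void run ( ) { this . run ( ) ; }".split() * 50
typical = "public void run ( ) { }".split()
unusual = "synchronized strictfp native run }".split()
print(cross_entropy(corpus, typical) < cross_entropy(corpus, unusual))  # True
```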
Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects.
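A minimal numpy sketch of one message-passing step over a program graph appears below: each node aggregates neighbor states through an edge transform and updates via a simplified gate. Shapes, edges, and initializations are illustrative; a real GGNN uses per-edge-type transforms and a full GRU cell:

```python
# One illustrative message-passing step over a small program graph.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 8
h = rng.normal(size=(n_nodes, dim))        # node states (e.g., token embeddings)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # e.g., NextToken / LastUse edges
W_msg = rng.normal(size=(dim, dim)) * 0.1  # edge transform (one edge type shown)
W_z, U_z = [rng.normal(size=(dim, dim)) * 0.1 for _ in range(2)]

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Aggregate incoming messages per node.
m = np.zeros_like(h)
for src, dst in edges:
    m[dst] += h[src] @ W_msg

# Simplified gated update (a full GGNN uses a complete GRU cell here).
z = sigmoid(m @ W_z + h @ U_z)
h = (1 - z) * h + z * np.tanh(m)
print(h.shape)  # (4, 8)
```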
Tree-based machine learning models such as random forests, decision trees, and gradient boosted trees are among the most popular non-linear predictive models in use today, yet comparatively little attention has been paid to explaining their predictions. Here we significantly improve the interpretability of tree models through three main contributions: (1) the first polynomial-time algorithm to compute optimal explanations based on game theory; (2) a new type of explanation that directly measures local feature interaction effects; and (3) a new set of tools for understanding global model structure based on combining many local explanations of individual predictions. We apply these tools to three medical machine learning problems and show how combining many high-quality local explanations allows us to represent global structure while retaining local faithfulness to the original model. These tools enable us to (i) identify high-magnitude but low-frequency non-linear mortality risk factors in the general US population, (ii) highlight distinct population subgroups with shared risk characteristics, (iii) identify non-linear interaction effects among chronic kidney disease risk factors, and (iv) monitor a machine learning model deployed in a hospital by identifying which features are degrading the model's performance over time. Given the popularity of tree-based machine learning models, these improvements to their interpretability have implications across a broad set of domains.
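One widely used open-source implementation of polynomial-time explanations for tree ensembles is the `shap` package's TreeExplainer; the short usage sketch below runs it on synthetic data and also pulls the pairwise interaction values mentioned above (assuming `shap` and scikit-learn are installed):

```python
# Usage sketch of TreeExplainer on synthetic data: local per-feature
# attributions plus pairwise feature-interaction effects.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] * 2 + X[:, 1] * X[:, 2]  # includes an interaction term
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)                     # local attributions
interactions = explainer.shap_interaction_values(X[:10])   # pairwise effects
print(shap_values.shape, interactions.shape)               # (200, 4), (10, 4, 4)
```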