Over the past three years, automated scoring engines have been used to score approximately five million test takers. This number has grown further with COVID-19 and the accompanying automation of education and testing. Despite such widespread use, the literature on AI-based testing is severely lacking. Most papers that propose new models rely solely on quadratic weighted kappa (QWK) based agreement with human raters to show model efficacy. However, this effectively ignores the highly multi-feature nature of essay scoring: essay scores depend on features such as coherence, grammar, relevance, sufficiency, and vocabulary. To date, no study has tested automated essay scoring (AES) systems holistically on all of these features. With this motivation, we propose a model-agnostic adversarial evaluation scheme, with accompanying metrics, for AES systems, to test their natural language understanding capabilities and overall robustness. We evaluate the current state-of-the-art AES models using the proposed scheme and report results on five recent models, which range from feature-based approaches to the latest deep learning algorithms. We find that AES models are highly overstable: even heavy modifications (up to 25%) with content unrelated to the question do not decrease the scores the models produce, while on average irrelevant content increases the scores, suggesting that model evaluation strategies and rubrics should be reconsidered. We also ask 200 human raters to score both original and adversarial responses, to see whether humans can detect the difference between the two and whether they agree with the scores assigned by the automated scorers.
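Since QWK is the headline agreement metric in this literature, a minimal sketch of how it is commonly computed may be helpful. This is a generic NumPy implementation of the standard definition, not code from any of the papers summarized here.

import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, num_ratings):
    # Observed agreement matrix O[i, j]: count of essays rated i by A and j by B.
    observed = np.zeros((num_ratings, num_ratings))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    # Expected matrix E: outer product of the two rating histograms,
    # scaled to the same total count as O.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    # Quadratic disagreement weights: distant rating pairs are penalized more.
    idx = np.arange(num_ratings)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (num_ratings - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Example: one disagreement out of four essays on a 0-3 scale.
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 2], num_ratings=4))  # ~0.875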
Code generation models have achieved impressive performance. However, they tend to be brittle as slight edits to a prompt could lead to very different generations; these robustness properties, critical for user experience when deployed in real-life applications, are not well understood. Most existing works on robustness in text or code tasks have focused on classification, while robustness in generation tasks is an uncharted area and to date there is no comprehensive benchmark for robustness in code generation. In this paper, we propose ReCode, a comprehensive robustness evaluation benchmark for code generation models. We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format. They are carefully designed to be natural in real-life coding practice, preserve the original semantic meaning, and thus provide multifaceted assessments of a model's robustness performance. With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt. In addition, we define robustness metrics for code generation models considering the worst-case behavior under each type of perturbation, taking advantage of the fact that executing the generated code can serve as objective evaluation. We demonstrate ReCode on SOTA models using HumanEval, MBPP, as well as function completion tasks derived from them. Interesting observations include: better robustness for CodeGen over InCoder and GPT-J; models are most sensitive to syntax perturbations; more challenging robustness evaluation on MBPP over HumanEval.
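To make the flavor of these transformations concrete, below is a toy semantics-preserving perturbation of a Python prompt (a local variable rename). ReCode's actual transformations are more numerous and carefully designed, so this is only an illustrative stand-in; the benchmark's metrics then aggregate the worst-case outcome across such perturbed variants of each prompt.

import ast

class RenameVariable(ast.NodeTransformer):
    # Renames every occurrence of a local name while leaving semantics intact.
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

prompt = '''
def total(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc
'''

perturbed = ast.unparse(RenameVariable("acc", "running_sum").visit(ast.parse(prompt)))
print(perturbed)  # same behavior, different surface form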
While pre-trained language models (LM) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file, i.e., in-file context, but ignore the rich semantics in other files within the same project, i.e., cross-file context, a critical source of information that is especially useful in modern modular software development. Such overlooking constrains code language models' capacity in code completion, leading to unexpected behaviors such as generating hallucinated class member functions or function calls with unexpected arguments. In this work, we develop a cross-file context finder tool, CCFINDER, that effectively locates and retrieves the most relevant cross-file context. We propose CoCoMIC, a framework that incorporates cross-file context to learn the in-file and cross-file context jointly on top of pretrained code LMs. CoCoMIC successfully improves the existing code LM with a 19.30% relative increase in exact match and a 15.41% relative increase in identifier matching for code completion when the cross-file context is provided.
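As a rough illustration of the cross-file retrieval idea (this is not CCFINDER's actual mechanism, and all names below are invented), one could scan sibling project files for definitions of imported symbols and prepend them to the in-file prompt:

import ast
import pathlib

def imported_names(source):
    # Names the current file imports from elsewhere in the project.
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom):
            names.update(alias.name for alias in node.names)
    return names

def cross_file_context(project_dir, current_source):
    wanted = imported_names(current_source)
    snippets = []
    for path in pathlib.Path(project_dir).rglob("*.py"):
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, (ast.FunctionDef, ast.ClassDef)) and node.name in wanted:
                snippets.append(ast.unparse(node))  # a retrieved cross-file definition
    return "\n\n".join(snippets)

A real system would rank candidates by relevance and respect the model's context budget; the point here is only that retrieved definitions give the code LM the signatures it would otherwise hallucinate.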
Inferring reward functions from human behavior is at the center of value alignment - aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This begs the question: how accurate do these models need to be in order for the reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic error in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if as our models improve, we can have a guarantee that reward accuracy also improves, this would show the benefit of more work on the modeling side. We study this question both theoretically and empirically. We do show that it is unfortunately possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we are also able to identify reasonable assumptions under which the reward inference error can be bounded linearly in the error in the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data.
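For concreteness, a common human model in this line of work is the Boltzmann-rational model, under which reward inference reduces to maximum-likelihood estimation. The abstract does not name the specific model analyzed, so the following is only a representative formulation:

P(a \mid s; \theta) = \frac{\exp\big(\beta\, Q_\theta(s, a)\big)}{\sum_{a'} \exp\big(\beta\, Q_\theta(s, a')\big)},
\qquad
\hat{\theta} = \arg\max_{\theta} \sum_{(s, a) \in \mathcal{D}} \log P(a \mid s; \theta).

Misspecification enters through the assumed rationality \beta and the form of Q_\theta; the question studied is when such modeling errors translate only linearly, rather than catastrophically, into errors in \hat{\theta}.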
In unstructured environments, robots run the risk of unexpected collisions. How well they react to these events is determined by how transparent they are to collisions. Transparency is affected by structural properties as well as sensing and control architectures. In this paper, we propose the collision reflex metric as a way to formally quantify transparency. It is defined as the total impulse transferred in collision, which determines the collision mitigation capabilities of a closed-loop robotic system, taking into account structure, sensing, and control. We analyze the effect of motor scaling, stiffness, and configuration on the collision reflex of a system using an analytical model. Physical experiments using the move-until-touch behavior are conducted to compare the collision reflex of direct-drive and quasi-direct-drive actuators and robotic hands (Schunk WSG-50 and Dexterous DDHand). For transparent systems, we see a counter-intuitive trend: the impulse may be lower at higher pre-impact velocities.
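In its simplest form, the quantity the metric is built on is the mechanical impulse of the contact force, which equals the momentum transferred during the collision:

J = \int_{t_0}^{t_1} F(t)\, \mathrm{d}t = \Delta p

A lower total impulse for a given collision means the closed-loop system (structure, sensing, and control together) mitigates the contact more gracefully, i.e., is more transparent.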
The Resource Description Framework (RDF) and Property Graphs (PG) are two of the most commonly used data models for representing, storing, and querying graph data. We present the Expressive Reasoning Graph Store (ERGS), a graph store built on top of JanusGraph (a property graph store) that also allows storing and querying RDF datasets. First, we describe how RDF data can be translated into a property graph representation, and then describe a query translation module that converts SPARQL queries into a series of Gremlin traversals. The converter and translator developed in this way allow any Apache TinkerPop compliant graph database to store and query RDF datasets. We demonstrate the effectiveness of the proposed approach using JanusGraph as the underlying property graph store and compare its performance with standard RDF systems.
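The core of the mapping can be sketched in a few lines: subjects and objects become vertices, and each triple becomes an edge labeled by its predicate, so a SPARQL triple pattern corresponds roughly to a Gremlin traversal. The toy Python below only illustrates this correspondence; ERGS's actual converter and translator are considerably richer than this.

# Each RDF triple (subject, predicate, object) becomes a labeled edge.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "worksFor", "acme"),
]

vertices, edges = set(), []
for subj, pred, obj in triples:
    vertices.update((subj, obj))
    edges.append({"out": subj, "label": pred, "in": obj})

# A SPARQL pattern like `?x :knows ?y` maps roughly to the Gremlin
# traversal g.V().outE('knows'); here we evaluate it by hand.
matches = [(e["out"], e["in"]) for e in edges if e["label"] == "knows"]
print(matches)  # [('alice', 'bob')]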
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters. We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements. We posit that a more efficient alternative is to provide the model with explicit access to contextually relevant structured knowledge and to train it to use that knowledge. We present LM-CORE, a general framework to achieve this, which allows decoupling language model training from the external knowledge source and allows the latter to be updated without affecting the already trained model. Experimental results show that LM-CORE, having access to external knowledge, achieves significant and robust outperformance over state-of-the-art knowledge-enhanced language models on knowledge probing tasks; can effectively handle knowledge updates; and performs well on two downstream tasks. We also present a thorough error analysis highlighting the successes and failures of LM-CORE.
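A minimal sketch of the decoupling idea follows, with a placeholder retriever over a toy knowledge store; none of this is LM-CORE's actual retrieval or training procedure.

# External knowledge lives outside the model and can be edited freely.
knowledge_store = {
    "Marie Curie": ["Marie Curie won the Nobel Prize in Physics in 1903."],
    "JanusGraph": ["JanusGraph is a distributed graph database."],
}

def retrieve(query, store, k=1):
    # Toy relevance: entity-name overlap. A real system would use a
    # structured knowledge-base lookup or a learned retriever.
    hits = [fact for entity, facts in store.items() if entity in query for fact in facts]
    return hits[:k]

def build_input(query, store):
    # The LM is trained to condition on retrieved knowledge, not to store it.
    context = " ".join(retrieve(query, store))
    return f"knowledge: {context} question: {query}"

print(build_input("Which prize did Marie Curie win?", knowledge_store))

Because facts live in the store rather than in the weights, updating knowledge is an edit to the store and requires no retraining, which is exactly the decoupling the framework targets.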
Perceptual hashing maps images with identical semantic content to the same $n$-bit hash value, while mapping semantically different images to different hashes. These algorithms have important applications in cybersecurity, such as copyright infringement detection, content fingerprinting, and surveillance. Apple's NeuralHash is one such system, designed to detect the presence of illegal content on user devices without compromising consumer privacy. We make the surprising discovery that NeuralHash is linear, which motivates the development of novel black-box attacks that can (i) evade detection of "illegal" images, (ii) generate near-collisions, and (iii) leak information about hashed images, all without access to the model parameters. These vulnerabilities pose serious threats to NeuralHash's security goals; to address them, we propose a simple fix using classical cryptographic standards.
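A toy demonstration of why linearity is dangerous for a perceptual hash: if the pre-threshold map is linear, blending a controlled image toward a target steers its hash toward the target's, yielding near-collisions without any access to model internals. The stand-in sign hash below is purely illustrative and is not NeuralHash.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((96, 1024))            # stand-in linear feature map
hash_bits = lambda x: (W @ x > 0).astype(int)  # n-bit sign hash

source = rng.standard_normal(1024)  # image the attacker controls
target = rng.standard_normal(1024)  # image whose hash is being chased

# Blend toward the target until the hashes collide; by linearity the
# pre-threshold features interpolate, so a partial blend often suffices.
for alpha in np.linspace(0, 1, 101):
    blended = (1 - alpha) * source + alpha * target
    if np.array_equal(hash_bits(blended), hash_bits(target)):
        print(f"hashes collide at alpha = {alpha:.2f}")
        break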
Variational inference has recently emerged as a popular alternative to classical Markov chain Monte Carlo (MCMC) in large-scale Bayesian inference. The core idea of variational inference is to trade statistical accuracy for computational efficiency: it aims to approximate the posterior at a reduced computational cost, but may compromise statistical accuracy. In this work, we study this statistical-computational trade-off through a case study in inferential model selection. Focusing on Gaussian inferential models (also known as variational approximating families) with diagonal plus low-rank precision matrices, we initiate a theoretical study of the trade-off in two aspects: Bayesian posterior inference error and frequentist uncertainty quantification error. From the Bayesian posterior inference perspective, we characterize the error of the variational posterior relative to the exact posterior. We prove that, given a fixed computation budget, a lower-rank inferential model produces variational posteriors with a higher statistical approximation error but a lower computational error; it reduces the variance in stochastic optimization and, in turn, accelerates convergence. From the frequentist uncertainty quantification perspective, we consider the precision matrix of the variational posterior as an uncertainty estimate. We find that, relative to the true asymptotic precision, the variational approximation suffers from an additional statistical error originating from the sampling uncertainty in the data; moreover, this statistical error becomes the dominant factor as the computation budget increases. As a consequence, for small datasets, the inferential model need not be full-rank to achieve optimal estimation error. We finally demonstrate these statistical and computational trade-offs across inferential models in empirical studies, corroborating the theoretical findings.
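Concretely, the inferential models in question constrain the Gaussian precision matrix to a diagonal-plus-low-rank form; the notation below is one plausible rendering of such a family (the paper's exact parameterization may differ), with the rank r acting as the knob that trades statistical accuracy against computational cost:

q(\theta) = \mathcal{N}(\mu, \Sigma),
\qquad
\Sigma^{-1} = D + U U^{\top},
\quad D \text{ diagonal}, \; U \in \mathbb{R}^{d \times r}, \; r \ll d.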
An understanding of the relationship between the behavior of RAS and RAF proteins and the local lipid environment in the cell membrane is critical for understanding the mechanisms underlying cancer formation. In this work, we employ deep learning (DL) to learn this relationship by predicting the protein localization states of RAS and RAS-RAF protein complexes on lipid membranes, based on the lipid densities around the protein domains obtained from coarse-grained (CG) molecular dynamics (MD) simulations. Our DL model can predict six protein states with an overall accuracy of over 80%. The findings of this work offer new insights into how proteins modulate the lipid environment, which in turn can help design novel therapies to modulate such interactions in mechanisms associated with cancer development.
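As a rough sketch of the kind of model described, consider a small convolutional classifier mapping a grid of per-lipid-type densities around a protein domain to six state logits. The architecture, shapes, and choice of PyTorch here are all invented for illustration; the abstract does not specify the paper's actual model.

import torch
import torch.nn as nn

class LipidStateClassifier(nn.Module):
    def __init__(self, n_lipid_types=8, n_states=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_lipid_types, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_states),
        )

    def forward(self, density_grid):  # (batch, n_lipid_types, height, width)
        return self.net(density_grid)

logits = LipidStateClassifier()(torch.randn(4, 8, 32, 32))
print(logits.shape)  # torch.Size([4, 6])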