In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Text-to-SQL semantic parsing is an important NLP task, which greatly facilitates the interaction between users and databases and is a key component in many human-computer interaction systems. Much recent progress in text-to-SQL has been driven by large-scale datasets, but most of them are centered on English. In this work, we present MultiSpider, the largest multilingual text-to-SQL dataset, which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). Upon MultiSpider, we further identify the lexical and structural challenges of text-to-SQL (caused by specific language properties and dialectal usage) and their intensity across different languages. Experimental results under three typical settings (zero-shot, monolingual, and multilingual) reveal a 6.1% absolute drop in accuracy in non-English languages. Qualitative and quantitative analyses are conducted to understand the reason for the performance drop in each language. Besides the dataset, we also propose a simple schema augmentation framework SAVe (Schema-Augmentation-with-Verification), which significantly boosts the overall performance by about 1.8% and closes 29.5% of the performance gap across languages.
In this paper, we present a pure-Python open-source library, called PyPop7, for black-box optimization (BBO). It provides a unified and modular interface for more than 60 versions and variants of different black-box optimization algorithms, particularly population-based optimizers, which can be classified into 12 popular families: Evolution Strategies (ES), Natural Evolution Strategies (NES), Estimation of Distribution Algorithms (EDA), Cross-Entropy Method (CEM), Differential Evolution (DE), Particle Swarm Optimizer (PSO), Cooperative Coevolution (CC), Simulated Annealing (SA), Genetic Algorithms (GA), Evolutionary Programming (EP), Pattern Search (PS), and Random Search (RS). It also provides many examples, interesting tutorials, and full-fledged API documentation. Through this new library, we expect to provide a well-designed platform for benchmarking of optimizers and to promote their real-world applications, especially for large-scale BBO. Its source code and documentation are available at https://github.com/Evolutionary-Intelligence/pypop and https://pypop.readthedocs.io/en/latest, respectively.
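To make the black-box-optimization setting concrete, here is a minimal pure-Python random-search loop written in a problem/options dictionary style loosely inspired by the description above. The names (`fitness_function`, `ndim`, `max_evaluations`, etc.) are illustrative assumptions for this sketch, not PyPop7's actual API.

```python
import random

def random_search(problem, options):
    """Minimal random-search BBO loop (an illustrative sketch,
    not PyPop7's API). Samples points uniformly in a box and
    keeps the best one seen so far."""
    rng = random.Random(options.get("seed"))
    f = problem["fitness_function"]
    lo, hi, d = problem["lower"], problem["upper"], problem["ndim"]
    best_x, best_y = None, float("inf")
    for _ in range(options["max_evaluations"]):
        x = [rng.uniform(lo, hi) for _ in range(d)]
        y = f(x)
        if y < best_y:  # minimization
            best_x, best_y = x, y
    return {"best_x": best_x, "best_y": best_y}

# Usage: minimize the sphere function in 5 dimensions.
sphere = lambda x: sum(v * v for v in x)
result = random_search(
    {"fitness_function": sphere, "ndim": 5, "lower": -5.0, "upper": 5.0},
    {"max_evaluations": 2000, "seed": 0},
)
```

Random Search is the weakest of the 12 families listed above, but it shows the common shape of a BBO interface: the optimizer only ever calls the fitness function, with no access to gradients.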
We propose an accurate and efficient normal estimation method that can deal with noise and non-uniform density of unstructured 3D point clouds. Unlike existing methods that directly take patches and ignore local neighborhood relationships, which make them susceptible to challenging regions such as sharp edges, we propose to learn graph convolutional feature representations for normal estimation, which emphasize more local neighborhood geometry and effectively encode intrinsic relationships. Additionally, we design a novel adaptive module based on the attention mechanism to integrate point features with their neighboring features, further enhancing the robustness of the proposed normal estimator against point density variations. To make it more discriminative, we introduce a multi-scale architecture in the graph block to learn richer geometric features. Our method outperforms competitors with state-of-the-art accuracy on various benchmark datasets and is quite robust against noise, outliers, as well as density variations.
Recently, generalization on out-of-distribution (OOD) data with correlation shift has attracted great attention. The correlation shift is caused by spurious attributes that correlate with the class label, as the correlation between them may vary in the training and test data. For such a problem, we show that given the class label, a model that is conditionally independent of the spurious attributes is generalizable. Based on this, a metric, Conditional Spurious Variation (CSV), which controls the OOD generalization error, is proposed to measure such conditional independence. To improve OOD generalization, we regularize the training process with the proposed CSV. Under mild assumptions, our training objective can be formulated as a nonconvex-concave mini-max problem. An algorithm with a provable convergence rate is proposed to solve the problem. Extensive empirical results verify the efficacy of our algorithm in improving OOD generalization.
Large-scale protein language models (PLMs) have improved performance in protein prediction tasks, ranging from 3D structure prediction to various function predictions. In particular, AlphaFold, a groundbreaking AI system, could potentially reshape structural biology. However, the utility of AlphaFold's PLM module, Evoformer, beyond structure prediction has not yet been explored. In this paper, we investigate the representation ability of three popular PLMs: ESM-1b (single sequence), MSA-Transformer (multiple sequence alignment), and Evoformer (structure), with a special focus on Evoformer. Specifically, we aim to answer the following key questions: (i) Does Evoformer, trained as part of AlphaFold, produce representations amenable to predicting protein function? (ii) If yes, can Evoformer replace ESM-1b and MSA-Transformer? (iii) How much do these PLMs rely on evolution-related protein data? In this regard, are they complementary to each other? We compare these models by empirical study, along with new insights and conclusions. Finally, we release code and datasets for reproducibility.
Structural re-parameterization (Rep) methods have achieved significant performance improvements on traditional convolutional networks. Most current Rep methods rely on prior knowledge to select the re-parameterization operations. However, the performance of such architectures is limited by the operation types and the prior knowledge. To break through this limitation, in this work, an improved re-parameterization search space is designed, which includes more types of re-parameterization operations. Specifically, this search space can further improve the performance of convolutional networks. To effectively explore this search space, an automatic re-parameterization enhancement strategy based on neural architecture search (NAS) is designed, which can search for excellent re-parameterization architectures. In addition, we visualize the output features of the architectures to analyze the reasons for the formation of the re-parameterization architectures. On public datasets, we achieve better results. Under the same training conditions as ResNet, we improve the accuracy of ResNet-50 by 1.82% on ImageNet-1k.
Knowledge graphs (KGs), which represent facts as triples, have been widely adopted in many applications. Reasoning tasks such as link prediction and rule induction are important for the development of KGs. Knowledge graph embeddings (KGEs), which embed entities and relations of a KG into continuous vector spaces, have been proposed for these reasoning tasks and proven to be effective and robust. But the plausibility and feasibility of applying and deploying KGEs in real-world applications have not been well explored. In this paper, we discuss and report our experiences of deploying KGEs in a real domain application: e-commerce. We first identify three important desiderata for e-commerce KG systems: 1) attentive reasoning, reasoning over a few target relations of more concern rather than all; 2) explanation, providing explanations for a prediction to help both users and business operators understand why the prediction is made; 3) transferable rules, generating reusable rules to accelerate the deployment of a KG to new systems. While no existing KGE could meet all these desiderata, we propose a novel one, an explainable knowledge graph attention network, that makes predictions through modeling correlations between triples rather than purely relying on their head entity, relation, and tail entity embeddings. It can automatically select attentive triples for prediction and record their contributions, from which explanations can be easily provided and transferable rules can be efficiently produced. We empirically show that our method is capable of meeting all three desiderata in our e-commerce application and outperforms typical baselines on datasets from real domain applications.
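The "attentive triples with recorded contributions" idea above can be illustrated with a toy computation: softmax-normalize relevance scores of candidate supporting triples, then reuse the weights both for the aggregated prediction and as per-triple contributions for explanation. The function and variable names here are hypothetical; this is not the paper's actual network.

```python
import math

def attentive_triple_score(candidate_scores):
    """Toy sketch of attention over supporting triples (hypothetical,
    not the paper's model): softmax-normalize the relevance scores of
    candidate triples; the weights serve both for the aggregated
    prediction and as per-triple 'contributions' for explanation."""
    exps = [math.exp(s) for s in candidate_scores]
    z = sum(exps)
    weights = [e / z for e in exps]                      # attention weights
    aggregated = sum(w * s for w, s in zip(weights, candidate_scores))
    return aggregated, weights                           # weights double as explanations

# Usage: three candidate triples with relevance scores 2.0, 0.5, -1.0.
score, contributions = attentive_triple_score([2.0, 0.5, -1.0])
```

Because the contributions sum to one and are recorded per triple, the highest-weighted triples can be surfaced directly as an explanation, and frequently attended triple patterns can be distilled into transferable rules.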
This letter presents a complete framework for multi-robot exploration under communication restriction, combining meeting-based (rendezvous) coordination. Considering that communication is limited in both bandwidth and range in the real world, we propose a lightweight environment representation method and an efficient cooperative exploration strategy. For lower bandwidth, each robot utilizes specific polytopes to maintain free space and super frontier information (SFI) as the source for exploration decision-making. To reduce repeated exploration, we develop a mission-based protocol that drives the robots to share their collected information in a stable rendezvous manner. We also design a complete path planning scheme for both the centralized and decentralized cases. To validate that our framework is practical and generic, we present extensive benchmarks and deploy the system on multi-UGV and multi-UAV platforms.
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult to inductively infer by KGEs. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework AnKGE to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. In order to combine the inductive inference capability of the original KGE model with the analogical inference capability enhanced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
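The score interpolation described above can be sketched as a convex combination of the base KGE score with the per-level analogy scores. This is a hedged illustration, not AnKGE's exact formula: the raw weights are softmax-normalized here to stand in for the learned adaptive weights, and all names are assumptions of this sketch.

```python
import math

def ankge_style_score(base_score, level_scores, raw_weights):
    """Toy sketch of interpolating a base KGE score with entity-,
    relation-, and triple-level analogy scores (not the paper's exact
    formula). raw_weights has one entry for the base score followed by
    one per analogy level; softmax normalization makes the combination
    convex, standing in for the learned adaptive weights."""
    scores = [base_score] + list(level_scores)
    exps = [math.exp(w) for w in raw_weights]
    z = sum(exps)
    return sum((e / z) * s for e, s in zip(exps, scores))

# Usage: equal raw weights give the plain average of the four scores.
blended = ankge_style_score(1.0, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0])
```

With equal raw weights the result reduces to the mean of the four scores; shifting weight toward the base score recovers purely inductive prediction, while shifting it toward the level scores leans on analogical inference.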