In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Text-to-SQL semantic parsing is an important NLP task that greatly facilitates interaction between users and databases and is a key component of many human-computer interaction systems. Much recent progress in text-to-SQL has been driven by large-scale datasets, but most of them are centered on English. In this work, we present MultiSpider, the largest multilingual text-to-SQL dataset, covering seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). Building on MultiSpider, we further identify the lexical and structural challenges of text-to-SQL (caused by language-specific properties and dialectal expressions) and their intensity across languages. Experimental results under three typical settings (zero-shot, monolingual, and multilingual) reveal a 6.1% absolute drop in accuracy for non-English languages. Qualitative and quantitative analyses are conducted to understand the reasons for the performance drop in each language. Besides the dataset, we also propose a simple schema augmentation framework, SAVe (Schema-Augmentation-with-Verification), which significantly boosts the overall performance by about 1.8% and closes 29.5% of the performance gap across languages.
Table-and-text hybrid question answering (HybridQA) is a widely used and challenging NLP task commonly applied in the financial and scientific domains. Early research focused on migrating methods from other QA tasks to HybridQA, while with further research, more and more HybridQA-specific methods have been presented. Despite the rapid development of HybridQA, no systematic survey yet summarizes the main techniques to advance further research. We therefore present this work to summarize the current HybridQA benchmarks and methods, and then analyze the challenges and future directions of this task. The contributions of this paper are threefold: (1) to the best of our knowledge, the first survey of HybridQA, covering benchmarks, methods, and challenges; (2) a systematic investigation and reasoned comparison of existing systems that articulates their advantages and shortcomings; (3) a detailed analysis of challenges along four important dimensions to shed light on future directions.
Cross-speaker style transfer in speech synthesis aims to transfer a style from a source speaker to synthesised speech in a target speaker's timbre. Most previous approaches rely on data with style labels, but manually annotated labels are expensive and not always reliable. To address this problem, we propose Style-Label-Free, a cross-speaker style transfer method that realizes style transfer from a source speaker to a target speaker without style labels. First, a reference encoder based on a quantized variational autoencoder (Q-VAE) and a style bottleneck is designed to extract discrete style representations. Second, a speaker-wise batch normalization layer is proposed to reduce source-speaker leakage. To improve the style extraction ability of the reference encoder, a style-invariant and contrastive data augmentation method is proposed. Experimental results show that the method outperforms the baseline. We provide a website with audio samples.
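To make the speaker-wise batch normalization idea concrete, here is a minimal PyTorch sketch of one plausible reading: each speaker keeps its own BatchNorm statistics and affine parameters, so speaker-dependent statistics are normalized out of the style representation. The class name, shapes, and per-speaker routing are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SpeakerWiseBatchNorm(nn.Module):
    """Per-speaker batch normalization: each speaker keeps its own
    BatchNorm1d statistics and affine parameters, so speaker-dependent
    statistics are normalized out of the style representation."""

    def __init__(self, num_features: int, num_speakers: int):
        super().__init__()
        self.norms = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_speakers)
        )

    def forward(self, x: torch.Tensor, speaker_ids: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); speaker_ids: (batch,) integer speaker indices.
        # Assumes each speaker contributes at least two samples per training batch.
        out = torch.empty_like(x)
        for spk in speaker_ids.unique():
            mask = speaker_ids == spk
            out[mask] = self.norms[int(spk)](x[mask])
        return out
```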
Pre-trained Language Models (PLMs) have become representative foundation models in natural language processing. Most PLMs are trained with linguistics-agnostic pre-training tasks on the surface form of text, such as the masked language model (MLM). To further empower PLMs with richer linguistic features, in this paper we propose a simple but effective way to learn linguistic features for pre-trained language models. We propose LERT, a pre-trained language model trained on three types of linguistic features alongside the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carried out extensive experiments on ten Chinese NLU tasks, and the results show that LERT brings significant improvements over various comparable baselines. We also conduct analytical experiments on various linguistic aspects, and the results confirm that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT.
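Since the abstract does not spell out how the three linguistic tasks are combined with MLM, the following PyTorch sketch shows one generic multi-task formulation: token-level classification heads whose losses are added to the MLM loss with per-task weights. The class name, feature types, label sizes, and weighting scheme are placeholders, not LERT's exact LIP recipe.

```python
import torch.nn as nn

class MultiTaskPretrainingHeads(nn.Module):
    """MLM head plus auxiliary token-level linguistic prediction heads,
    combined into a single weighted pre-training loss."""

    def __init__(self, hidden_size: int, vocab_size: int, linguistic_label_sizes):
        super().__init__()
        self.mlm_head = nn.Linear(hidden_size, vocab_size)
        self.linguistic_heads = nn.ModuleList(
            nn.Linear(hidden_size, n) for n in linguistic_label_sizes
        )
        self.loss_fn = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks ignored positions

    def forward(self, hidden, mlm_labels, linguistic_labels, task_weights):
        # hidden: (batch, seq, hidden_size); each label tensor: (batch, seq).
        loss = self.loss_fn(self.mlm_head(hidden).flatten(0, 1), mlm_labels.flatten())
        for head, labels, w in zip(self.linguistic_heads, linguistic_labels, task_weights):
            loss = loss + w * self.loss_fn(head(hidden).flatten(0, 1), labels.flatten())
        return loss
```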
Prompting methods are regarded as one of the key advances in few-shot natural language processing. Recent research has moved from discrete-token-based "hard prompts" to continuous "soft prompts", which use learnable vectors as pseudo prompt tokens and achieve better performance. Although they show promising prospects, these soft-prompting methods are observed to rely heavily on good initialization to take effect. Unfortunately, obtaining a perfect initialization for soft prompts requires an understanding of the language model's inner workings and elaborate design, which is by no means easy and must be restarted from scratch for each new task. To address this issue, we propose a generalized soft-prompting method called MetaPrompting, which adopts the well-recognized model-agnostic meta-learning algorithm to automatically find a better prompt initialization that enables fast adaptation to new prompting tasks. MetaPrompting addresses this initialization problem and brings significant improvements on four different datasets (an accuracy gain of 6 points in the 1-shot setting), achieving new state-of-the-art performance.
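As a rough illustration of how model-agnostic meta-learning can be applied to soft prompt initialization, the sketch below performs one inner adaptation step on a support batch and then updates the shared prompt initialization on a query batch. The function names, single inner step, and learning rates are assumptions, not MetaPrompting's exact procedure.

```python
import torch

def meta_learn_prompt_init(prompt, task_stream, prompt_loss,
                           inner_lr=1e-2, meta_lr=1e-3, meta_steps=1000):
    """MAML-style meta-learning of a soft prompt initialization.

    prompt:      learnable pseudo-token embeddings (a leaf tensor with
                 requires_grad=True, e.g. an nn.Parameter).
    task_stream: iterator yielding (support_batch, query_batch) pairs.
    prompt_loss: callable (prompt, batch) -> scalar loss of the frozen LM
                 conditioned on the prompt.
    """
    meta_opt = torch.optim.Adam([prompt], lr=meta_lr)
    for _ in range(meta_steps):
        support, query = next(task_stream)
        # Inner loop: one adaptation step on the support set.
        grad = torch.autograd.grad(prompt_loss(prompt, support), prompt,
                                   create_graph=True)[0]
        adapted = prompt - inner_lr * grad
        # Outer loop: update the shared initialization on the query set.
        meta_opt.zero_grad()
        prompt_loss(adapted, query).backward()
        meta_opt.step()
    return prompt
```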
In reinforcement learning, Monte Carlo algorithms update the Q-function by averaging episodic returns. In the Monte Carlo UCB (MC-UCB) algorithm, the action taken in each state is the one that maximizes the Q-function plus a UCB exploration term, which biases the choice toward less-frequently selected actions. Although significant work has been done on establishing regret bounds for MC-UCB, most of it has focused on finite-horizon versions of the problem, in which each episode terminates after a constant number of steps. For such finite-horizon problems, the optimal policy depends on both the current state and the time within the episode. However, for many natural episodic problems, such as games like Go and chess and robotic tasks, episodes have random length and the optimal policy is stationary. For such environments, it is an open question whether the Q-function in MC-UCB converges to the optimal Q-function. We conjecture that, unlike Q-learning, it does not converge for all MDPs. Nevertheless, we show that for a large class of MDPs, which includes stochastic MDPs such as blackjack and deterministic MDPs such as Go, the Q-function in MC-UCB converges almost surely to the optimal Q-function. An immediate corollary of this result is that it also converges almost surely for all finite-horizon MDPs. We also provide numerical experiments that offer further insight into MC-UCB.
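For concreteness, a standard UCB-style action-selection rule of the kind described (the exact exploration bonus and counts used by MC-UCB may differ) is

\[
a(s) = \arg\max_{a}\Bigl[\, Q(s,a) + c\sqrt{\frac{\ln N(s)}{N(s,a)}} \,\Bigr],
\]

where $N(s)$ counts visits to state $s$, $N(s,a)$ counts how often action $a$ has been taken in $s$, and $c>0$ controls exploration; the bonus term biases selection toward less-frequently chosen actions.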
In this paper, we present an overview of CTC 2021, a Chinese text correction task for native speakers. We describe the task definition in detail, as well as the data for training and evaluation. We also summarize the approaches investigated by the participants of this task. We hope the datasets collected and annotated for this task can facilitate and expedite future development in this research field. Therefore, the pseudo training data, gold-standard validation data, and the entire leaderboard are publicly available online at https://destwang.github.io/ctc2021-explorer/.
Cervical abnormal cell detection is a challenging task, since the morphological differences between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists always take surrounding cells as references and make careful comparisons to identify its abnormality. To mimic these clinical behaviors, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both cell-to-cell and cell-to-global-image contextual relationships are exploited to enhance the features of each region-of-interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are also investigated. We establish strong baselines using single-head or double-head Faster R-CNN with a feature pyramid network (FPN), and integrate our RRAM and GRAM into them to validate the effectiveness of the proposed modules. Experiments on a large cervical cell detection dataset consisting of 40,000 cytology images show that introducing either RRAM or GRAM achieves better average precision (AP) than the baseline methods. Moreover, when cascading RRAM and GRAM, our method outperforms state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancement scheme can facilitate both image-level and smear-level classification. The code and trained models are publicly available at https://github.com/cviu-csu/cr4cacd.
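To illustrate the kind of RoI feature enhancement described (attention over surrounding RoIs and, optionally, a global image feature), here is a hedged PyTorch sketch; the class name, single multi-head attention layer, residual combination, and shapes are assumptions rather than the exact RRAM/GRAM design.

```python
from typing import Optional

import torch
import torch.nn as nn

class RoIContextAttention(nn.Module):
    """Enhance each RoI feature by attending over the other RoI features of
    the same image and, optionally, a global image feature."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, roi_feats: torch.Tensor,
                global_feat: Optional[torch.Tensor] = None) -> torch.Tensor:
        # roi_feats: (num_rois, dim) for one image; global_feat: (dim,) or None.
        query = roi_feats.unsqueeze(0)                          # (1, num_rois, dim)
        context = query if global_feat is None else torch.cat(
            [query, global_feat.view(1, 1, -1)], dim=1)
        enhanced, _ = self.attn(query, context, context)
        return roi_feats + enhanced.squeeze(0)                  # residual enhancement
```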
Autoregressive generative models are commonly used, especially for tasks involving sequential data. However, they are plagued by a number of inherent flaws stemming from the intrinsic characteristics of chain-style conditional modeling (e.g., exposure bias or a lack of long-range coherence), which severely limit their ability to model distributions correctly. In this paper, we propose a unique method, termed E-ARM, for training autoregressive generative models that takes advantage of a well-designed energy-based learning objective. By leveraging the extra degree of freedom of the softmax operation, we can make the autoregressive model itself an energy-based model for measuring the likelihood of the input, without introducing any extra parameters. Furthermore, we show that E-ARM can be trained efficiently and is able to alleviate the exposure bias problem and increase the temporal coherence of autoregressive generative models. Extensive empirical results, covering benchmarks such as language modeling, neural machine translation, and image generation, demonstrate the effectiveness of the proposed approach.
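A hedged sketch of the core trick, reading an energy off the autoregressive logits via the additive degree of freedom that softmax ignores, is given below; the function name and the specific energy definition illustrate the general idea only and are not E-ARM's exact training objective.

```python
import torch
import torch.nn.functional as F

def sequence_energy(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Read an unnormalized energy off autoregressive logits.

    Softmax is invariant to adding a per-step constant to the logits, so the
    otherwise unused log-normalizer logsumexp(logits) can be recycled as part
    of an energy score without introducing extra parameters.
    """
    # logits: (batch, seq_len, vocab); tokens: (batch, seq_len) of token ids.
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    log_z = torch.logsumexp(logits, dim=-1)          # the "free" degree of freedom
    # token_log_probs + log_z equals the raw logit of each observed token.
    return -(token_log_probs + log_z).sum(dim=-1)    # lower energy = more likely sequence
```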