While pre-trained Chinese language models have demonstrated impressive performance on a wide range of NLP tasks, the Chinese Spell Checking (CSC) task remains a challenge. Previous research has explored using information such as glyphs and phonetics to improve the ability to distinguish misspelled characters, with good results. However, the generalization ability of these models is not well understood: it is unclear whether they incorporate glyph-phonetic information and, if so, whether this information is fully utilized. In this paper, we aim to better understand the role of glyph-phonetic information in the CSC task and suggest directions for improvement. Additionally, we propose a new, more challenging, and practical setting for testing the generalizability of CSC models. All code is made publicly available.
translated by 谷歌翻译
Due to the ambiguity of homophones, Chinese Spell Checking (CSC) has widespread applications. Existing systems typically utilize BERT for text encoding. However, CSC requires the model to account for both phonetic and graphemic information. To adapt BERT to the CSC task, we propose a token-level self-distillation contrastive learning method. We employ BERT to encode both the corrupted sentence and the corresponding correct sentence. Then, we use a contrastive learning loss to regularize the hidden states of corrupted tokens to be closer to their counterparts in the correct sentence. On three CSC datasets, we confirm that our method provides significant improvements over baselines.
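The token-level regularization described above can be sketched as an InfoNCE-style objective. This is a minimal pure-Python illustration, not the paper's implementation; the temperature `tau` and the use of other in-sentence positions as negatives are assumptions:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def token_contrastive_loss(corrupted_states, correct_states, tau=0.1):
    """InfoNCE-style loss: pull each corrupted token's hidden state toward
    its counterpart in the correct sentence (positive) and away from the
    states at other positions (negatives)."""
    n = len(corrupted_states)
    total = 0.0
    for i in range(n):
        sims = [math.exp(cosine(corrupted_states[i], correct_states[j]) / tau)
                for j in range(n)]
        total += -math.log(sims[i] / sum(sims))
    return total / n
```

When corrupted tokens' states already match their correct counterparts, the loss is near zero; a misaligned pairing drives it up.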
Owing to recent advances in natural language processing, several works have applied BERT's pre-trained masked language model (MLM) to post-correction for speech recognition. However, existing pre-trained models only consider semantic correction while ignoring the phonetic features of words. Semantic-only post-correction therefore degrades performance, since homophone errors are fairly common in Chinese ASR. In this paper, we propose a novel approach that collectively exploits contextualized representations and the phonetic information between an error and its replacement candidates to alleviate the error rate of Chinese ASR. Experimental results on real-world speech recognition datasets show that our proposed method achieves a significantly lower CER than baseline models that use a pre-trained BERT MLM as the corrector.
The Chinese Spell Checking (CSC) task aims to detect and correct Chinese spelling errors. Recent research has focused on introducing confusion sets to enhance CSC models with character similarity, neglecting the context of characters, which carries richer information. To make better use of contextual similarity, we propose a simple yet effective curriculum learning framework for the CSC task. With our model-agnostic framework, existing CSC models are trained in the way humans learn Chinese characters and achieve further improvement. Extensive experiments and detailed analyses on the widely used SIGHAN datasets show that our method outperforms previous state-of-the-art methods.
Pre-trained language models (PLMs) have achieved remarkable performance gains on many downstream tasks in natural language understanding. Various Chinese PLMs have been proposed to learn better Chinese representations. However, most current models use Chinese characters as input and cannot encode the semantic information contained in Chinese words. While recent pre-trained models incorporate both words and characters, they often suffer from insufficient semantic interaction and fail to capture the semantic relations between words and characters. To address these issues, we propose a simple yet effective PLM named CLOWER, which adopts contrastive learning over word and character representations. In particular, CLOWER implicitly encodes coarse-grained information (i.e., words) into fine-grained representations (i.e., characters) through contrastive learning over multi-grained information. CLOWER is of great value in realistic scenarios because it can be easily incorporated into any existing fine-grained PLM without modifying production pipelines. Extensive experiments on a range of downstream tasks demonstrate that CLOWER outperforms several state-of-the-art baselines.
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we first introduce the whole word masking (WWM) strategy for Chinese BERT, along with a series of Chinese pre-trained language models. We then propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways. In particular, we propose a new masking strategy called MLM as correction (Mac). To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc. We carried out extensive experiments on ten Chinese NLP tasks to evaluate the created Chinese pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate details with several findings that may help future research. We open-source our pre-trained language models to further facilitate our research community. Resources are available at: https://github.com/ymcui/chinese-bert-wwm
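The whole word masking strategy above can be illustrated on pre-segmented text. A minimal sketch, assuming the input is already word-segmented; the `mask_rate` and `[MASK]` conventions follow standard BERT practice, and the seeded RNG is only for reproducibility:

```python
import random

def whole_word_mask(words, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Whole word masking: sampling happens at the word level, so when a
    word is chosen, every character in it is masked together, instead of
    masking characters independently."""
    rng = random.Random(seed)
    output = []
    for word in words:
        if rng.random() < mask_rate:
            output.extend(mask_token for _ in word)  # mask the whole word
        else:
            output.extend(word)  # keep each character as its own token
    return output
```

The key property is that `[MASK]` tokens always appear in runs that cover complete words, which makes the cloze task harder than character-level masking.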
Tokenization is fundamental to pre-trained language models (PLMs). Existing tokenization methods for Chinese PLMs typically treat each character as an indivisible token. However, they ignore the unique feature of the Chinese writing system, where additional linguistic information exists below the character level, i.e., at the sub-character level. To utilize such information, we propose sub-character (SubChar for short) tokenization. Specifically, we first encode the input text by converting each Chinese character into a short sequence based on its glyph or pronunciation, and then construct the vocabulary based on the encoded text with sub-word tokenization. Experimental results show that SubChar tokenizers have two main advantages over existing tokenizers: 1) they can tokenize inputs into much shorter sequences, thus improving computational efficiency; 2) pronunciation-based SubChar tokenizers can encode Chinese homophones into the same transliteration sequences and produce the same tokenization output, hence being robust to all homophone misspellings. Meanwhile, models trained with SubChar tokenizers perform competitively on downstream tasks. We release our code at https://github.com/thunlp/subchartoken to facilitate future work.
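A toy illustration of the pronunciation-based encoding step described above: each character is mapped to a short transliteration before sub-word tokenization. The `PINYIN` table is a stand-in for a full pronunciation lexicon, and the `#` separator is an illustrative assumption:

```python
# Toy pronunciation table; a real system would use a complete pinyin
# lexicon (these entries and the separator are illustrative assumptions).
PINYIN = {"他": "ta1", "她": "ta1", "们": "men5"}

def pronunciation_encode(text, sep="#"):
    """Pronunciation-based sub-character encoding: each character becomes
    a short transliteration sequence. Homophones ('他'/'她') map to the
    same output, so homophone misspellings yield identical token streams."""
    return sep.join(PINYIN.get(ch, ch) for ch in text)
```

Because homophones collapse to identical encodings, a downstream model never even sees a difference between a homophone misspelling and the intended character.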
The task of Chinese Spelling Check (CSC) aims to detect and correct spelling errors that can be found in text. Because manually annotating a high-quality dataset is expensive and time-consuming, training datasets are usually small (e.g., SIGHAN15 contains only 2,339 samples for training), so supervised learning-based models usually suffer from data sparsity and overfitting problems, especially in the era of big language models. In this paper, we are dedicated to investigating the unsupervised paradigm to address the CSC problem, and we propose a framework named UChecker for unsupervised spelling error detection and correction. Masked pre-trained language models such as BERT are introduced as the backbone model, considering their powerful language diagnosis capability. Benefiting from various and flexible masking operations, we propose a confusion-set-guided masking strategy to fine-tune the masked language model, further improving the performance of unsupervised detection and correction. Experimental results on standard datasets demonstrate the effectiveness of our proposed model UChecker in terms of character-level and sentence-level accuracy, precision, recall, and F1 on both the spelling error detection and correction tasks.
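A sketch of what a confusion-set-guided masking step might look like: instead of replacing sampled characters with a generic mask token, they are swapped with members of their confusion set, so the masked language model learns to recover realistic misspellings. The `CONFUSION` table and sampling scheme are illustrative assumptions, not UChecker's actual training code:

```python
import random

# Toy confusion set; real confusion sets list visually or phonetically
# similar characters for each entry (these pairs are illustrative).
CONFUSION = {"的": ["地", "得"], "在": ["再"]}

def confusion_mask(sentence, rate=0.15, seed=0):
    """Confusion-set guided masking: sampled characters are replaced by
    a member of their confusion set rather than by [MASK]."""
    rng = random.Random(seed)
    out = []
    for ch in sentence:
        if ch in CONFUSION and rng.random() < rate:
            out.append(rng.choice(CONFUSION[ch]))
        else:
            out.append(ch)
    return "".join(out)
```

The corrupted sentence keeps its length and stays superficially plausible, which is exactly the kind of input a CSC model must repair.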
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
Discriminative pre-trained language models (PLMs) learn to predict original texts from intentionally corrupted ones. Taking the former texts as positive and the latter as negative samples, the PLM can be trained effectively for contextualized representation. However, the training of such PLMs relies heavily on the quality of the automatically constructed samples. Existing PLMs simply treat all corrupted texts as equally negative without any examination, which inevitably causes the resulting models to suffer from the false-negative issue, where training is carried out on pseudo-negative data, leading to less efficiency and less robustness in the resulting PLMs. In this work, on the basis of defining the false-negative issue in discriminative PLMs, which has long been ignored, we design enhanced pre-training methods to counteract false-negative predictions and encourage pre-training of language models on true negatives by correcting the harmful gradient updates caused by false-negative predictions. Experimental results on the GLUE and SQuAD benchmarks show that our counter-false-negative pre-training methods indeed bring about better performance together with stronger robustness.
This report describes a pre-trained language model, Erlangshen, with propensity-corrected loss, which won first place in the CLUE Semantic Matching Challenge. In the pre-training stage, we construct a dynamic masking strategy based on knowledge from masked language modeling (MLM) with whole word masking. Furthermore, by observing the specific structure of the dataset, the pre-trained Erlangshen applies a propensity-corrected loss (PCL) in the fine-tuning stage. Overall, we achieve 72.54 points in F1 score and 78.90 points in accuracy on the test set. Our code is publicly available at: https://github.com/idea-ccnl/fengshenbang-lm/tree/hf-ds/fengshen/examples/clue_sim.
Most Chinese pre-trained models take characters as the basic unit for downstream tasks. However, these models ignore the information carried by words, leading to the loss of some important semantics. In this paper, we propose a new method to exploit word structure and integrate lexical semantics into the character representations of pre-trained models. Specifically, we project a word's embedding into its internal characters' embeddings according to similarity weights. To strengthen word boundary information, we mix the representations of the internal characters within a word. After that, we apply a word-to-character alignment attention mechanism to emphasize important characters by masking unimportant ones. Moreover, to reduce the error propagation caused by word segmentation, we propose an ensemble approach to combine the segmentation results given by different tokenizers. Experimental results show that our approach outperforms the basic pre-trained models BERT, BERT-wwm, and ERNIE on different Chinese NLP tasks: sentiment classification, sentence pair matching, natural language inference, and machine reading comprehension. We conduct further analysis to demonstrate the effectiveness of each component of our model.
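The similarity-weighted projection of a word embedding into its internal characters could look roughly like this. The dot-product scoring, softmax normalization, and interpolation factor `alpha` are simplifying assumptions for illustration; the paper's actual weighting is learned:

```python
import math

def fuse_word_into_chars(char_vecs, word_vec, alpha=0.5):
    """Mix a word embedding into its internal characters' embeddings,
    weighting each character by softmax-normalized char-word similarity."""
    # Dot-product similarity between each character and the word.
    scores = [sum(c * w for c, w in zip(cv, word_vec)) for cv in char_vecs]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Interpolate: characters most similar to the word absorb more of it.
    return [[(1 - alpha * wt) * c + alpha * wt * w
             for c, w in zip(cv, word_vec)]
            for cv, wt in zip(char_vecs, weights)]
```

With `alpha=0` the character embeddings pass through unchanged, so the fusion strength is tunable per model.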
Standard pre-trained language models operate on sequences of subword tokens without direct access to the characters that make up each token's string representation. We probe the embedding layer of pre-trained language models and show that models learn the internal character composition of whole-word and subword tokens to a surprising extent, without ever seeing characters paired with tokens. Our results show that the embedding layer of RoBERTa holds enough information to accurately spell out a third of the vocabulary and achieve high average character n-gram overlap across all token types. We further test whether enriching subword models with additional character information can improve language modeling, and observe that this method has a nearly identical learning curve to training without spelling-based enrichment. Overall, our results suggest that the language modeling objective incentivizes models to implicitly learn some notion of spelling, and that explicitly teaching models how to spell does not appear to enhance their performance on such tasks.
Pre-trained Language Model (PLM) has become a representative foundation model in the natural language processing field. Most PLMs are trained with linguistic-agnostic pre-training tasks on the surface form of the text, such as the masked language model (MLM). To further empower the PLMs with richer linguistic features, in this paper, we aim to propose a simple but effective way to learn linguistic features for pre-trained language models. We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy. We carried out extensive experiments on ten Chinese NLU tasks, and the experimental results show that LERT could bring significant improvements over various comparable baselines. Furthermore, we also conduct analytical experiments in various linguistic aspects, and the results prove that the design of LERT is valid and effective. Resources are available at https://github.com/ymcui/LERT
The lack of labeled data is one of the significant bottlenecks for Chinese Spelling Check (CSC). Existing research uses automatic generation methods that exploit unlabeled data to expand the supervised corpus. However, there is a big gap between real input scenarios and automatically generated corpora. Thus, we develop a competitive general speller, ECSpell, which adopts the Error Consistent masking strategy to create data for pre-training. This error-consistency masking strategy is used to specify the error types of automatically generated sentences so that they are consistent with real scenes. The experimental results indicate that our model outperforms previous state-of-the-art models on the general benchmark. Moreover, spellers often work within a particular domain in real life. Due to the many uncommon domain terms, experiments on the domain-specific datasets we built show that general models perform terribly. Inspired by the common practice of input methods, we propose to add an alterable user dictionary to handle the zero-shot domain adaptation problem. Specifically, we attach a User Dictionary guided inference module (UD) to a general token-classification-based speller. Our experiments demonstrate that ECSpell$^{UD}$, namely ECSpell combined with UD, surpasses all the other baselines by a large margin, even approaching its performance on the general benchmark.
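A hypothetical sketch of user-dictionary-guided inference: among candidate corrections scored by a general speller, boost any candidate containing an in-domain term from the user dictionary, then pick the best. The scoring scheme and `bonus` term are illustrative, not ECSpell's actual UD module:

```python
def ud_rescore(candidates, user_dict, bonus=2.0):
    """User-dictionary guided rescoring: candidates are (text, score)
    pairs; a candidate containing any dictionary term gets a bonus,
    biasing inference toward in-domain corrections."""
    def score(cand):
        text, base = cand
        hit = any(term in text for term in user_dict)
        return base + (bonus if hit else 0.0)
    return max(candidates, key=score)[0]
```

Because the dictionary is consulted only at inference time, it can be swapped per domain without retraining the speller, which is what enables the zero-shot adaptation described above.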
Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM) for effectively understanding and representing mathematical problems. Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols, and formulas in the problem statements. Typically, it requires complex mathematical logic and background knowledge to solve mathematical problems. Considering the complex nature of mathematical texts, we design a novel curriculum pre-training approach to improve the learning of mathematical PLMs, consisting of basic and advanced courses. Specifically, we first perform token-level pre-training based on a position-biased masking strategy, and then design logic-based pre-training tasks that aim to recover shuffled sentences and formulas, respectively. Finally, we introduce a more difficult pre-training task that enforces the PLM to detect and correct errors in its generated solutions. We conduct extensive experiments on offline evaluation (including nine math-related tasks) and online A/B tests. Experimental results demonstrate the effectiveness of our approach compared with a number of competitive baselines. Our code is available at: https://github.com/rucaibox/jiuzhang
Transformer-based language models have recently achieved remarkable results on many natural language tasks. However, leaderboard performance is typically achieved by leveraging massive amounts of training data, and rarely by encoding explicit linguistic knowledge into neural models. This has led many to question the relevance of linguistics for modern natural language processing. In this thesis, I present several case studies to illustrate that theoretical linguistics and neural language models remain relevant to each other. First, language models are useful to linguists by providing an objective tool to measure semantic distance, which is difficult to do using traditional methods. On the other hand, linguistic theory contributes to language modeling research by providing frameworks and data sources to probe our language models for specific aspects of language understanding. This thesis contributes three studies that explore different aspects of the syntax-semantics interface in language models. In the first part of the thesis, I apply language models to the problem of word class flexibility. Using mBERT as a source of semantic distance measurements, I present evidence in favor of analyzing word class flexibility as a directional process. In the second part of the thesis, I propose a method to measure surprisal at intermediate layers of language models. My experiments show that sentences containing morphosyntactic anomalies trigger surprisal earlier in language models than semantic and commonsense anomalies do. Finally, in the third part of the thesis, I adapt several psycholinguistic studies to show that language models contain knowledge of argument structure constructions. In summary, my thesis develops new connections between natural language processing, linguistic theory, and psycholinguistics to offer fresh perspectives for the interpretation of language models.
Lexicon information and pre-trained models such as BERT have been combined to explore Chinese sequence labeling tasks due to their respective strengths. However, existing methods fuse lexicon features only via a shallow and randomly initialized sequence layer and do not integrate them into the bottom layers of BERT. In this paper, we propose Lexicon Enhanced BERT (LEBERT) for Chinese sequence labeling, which integrates external lexicon knowledge into the BERT layers directly through a Lexicon Adapter layer. Compared with existing methods, our model facilitates deep lexicon knowledge fusion at the lower layers of BERT. Experiments on ten Chinese datasets covering named entity recognition, word segmentation, and part-of-speech tagging show that LEBERT achieves state-of-the-art results.
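A minimal sketch of a lexicon-adapter step: attend from a character's hidden state over the lexicon word vectors that cover it, then add the attention-pooled word feature back to the character state. LEBERT additionally applies learned projections, gating, and layer normalization, all of which are omitted in this illustration:

```python
import math

def lexicon_adapter(char_vec, word_vecs):
    """Fuse matched lexicon word vectors into one character's hidden
    state via dot-product attention and a residual addition."""
    if not word_vecs:
        return char_vec  # no matched lexicon words: pass through
    # Dot-product attention scores between the character and each word.
    scores = [sum(c * w for c, w in zip(char_vec, wv)) for wv in word_vecs]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    # Softmax-pool the word vectors, then fuse with a residual addition.
    pooled = [sum((e / z) * wv[d] for e, wv in zip(exps, word_vecs))
              for d in range(len(char_vec))]
    return [c + p for c, p in zip(char_vec, pooled)]
```

Applying such a step inside the lower encoder layers, rather than after the encoder, is what the abstract means by deep lexicon knowledge fusion.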
Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, the Pixel-based Encoder of Language, which suffers from neither of these issues. PIXEL is a pre-trained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pre-train an 86M-parameter PIXEL model on the same English data as BERT and evaluate it on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks on scripts that are not found in the pre-training data, but PIXEL is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust to noisy text inputs than BERT, further confirming the benefits of modeling language with pixels.