Bangla typing is mostly performed using an English keyboard and can be highly erroneous due to the presence of compound and similarly pronounced letters. Correcting a misspelled word requires an understanding of both word typing patterns and the context in which the word is used. We propose a specialized BERT model, BSpell, targeted at word-for-word correction at the sentence level. BSpell contains a trainable CNN sub-model named SemanticNet along with a specialized auxiliary loss. This allows BSpell to specialize in the highly inflected Bangla vocabulary in the presence of spelling errors. We further propose a hybrid pretraining scheme that combines word-level and character-level masking. Utilizing this pretraining scheme, BSpell achieves 91.5% accuracy on a real-life Bangla spelling correction validation set. A detailed comparison on two Bangla and one Hindi spelling correction dataset shows the superiority of the proposed BSpell over existing spell checkers.
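To make the hybrid masking idea concrete, here is a minimal sketch of a scheme that masks some whole words and some characters inside the remaining words. The mask tokens and ratios are illustrative choices, not the exact recipe used for BSpell.

```python
# A minimal sketch of hybrid word/character masking over a whitespace-
# tokenized corpus; tokens and ratios are illustrative assumptions.
import random

WORD_MASK = "[MASK]"   # hypothetical word-level mask token
CHAR_MASK = "#"        # hypothetical character-level mask symbol

def hybrid_mask(sentence, word_ratio=0.10, char_ratio=0.05, seed=None):
    """Mask some whole words and, in the remaining words, some characters."""
    rng = random.Random(seed)
    masked = []
    for word in sentence.split():
        if rng.random() < word_ratio:
            masked.append(WORD_MASK)               # word-level masking
        else:
            chars = [CHAR_MASK if rng.random() < char_ratio else c
                     for c in word]                # character-level masking
            masked.append("".join(chars))
    return " ".join(masked)

# Ratios raised here only to make the masking visible in a short demo.
print(hybrid_mask("spelling correction needs context",
                  word_ratio=0.3, char_ratio=0.2, seed=1))
```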
While there has been a large body of work on language modeling (LM) for high-resource languages such as English and Chinese, the area remains unexplored for low-resource languages like Bangla and Hindi. We propose an end-to-end trainable, memory-efficient CNN architecture named CoCNN to handle characteristics specific to Bangla and Hindi, such as high inflection, morphological richness, flexible word order, and phonetic spelling errors. In particular, we introduce two learnable convolutional sub-models at the word and sentence levels that are trained end to end. We show that state-of-the-art (SOTA) Transformer models, including pretrained BERT, do not necessarily yield the best performance for Bangla and Hindi. CoCNN outperforms pretrained BERT with 16x fewer parameters, and it achieves much better performance than SOTA LSTM models on multiple real-world datasets. This is the first study of the effectiveness of different architectures from three deep learning paradigms - convolutional, recurrent, and Transformer neural networks - for modeling two widely used languages, Bangla and Hindi.
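A sketch of what a word-level convolutional sub-model of this kind might look like: a character CNN that builds one vector per word, whose outputs a sentence-level CNN would then consume. Layer sizes and the pooling choice are assumptions for illustration.

```python
# A sketch of a char-CNN word encoder in the spirit of the coordinated
# sub-models described above; dimensions are illustrative.
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    def __init__(self, n_chars=128, char_dim=32, word_dim=64, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size=kernel, padding=1)

    def forward(self, char_ids):            # (batch, word_len)
        x = self.embed(char_ids)            # (batch, word_len, char_dim)
        x = self.conv(x.transpose(1, 2))    # (batch, word_dim, word_len)
        return x.max(dim=2).values          # max-pool over characters

enc = CharCNNWordEncoder()
word_vec = enc(torch.randint(0, 128, (2, 10)))   # two words, 10 chars each
print(word_vec.shape)                            # torch.Size([2, 64])
```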
Spelling error correction is the task of identifying and rectifying misspelled words in texts. It is an active research topic in Natural Language Processing because of its numerous applications in human language understanding. The phonetically or visually similar yet semantically distinct characters make it an arduous task in any language. Earlier efforts on spelling error correction in Bangla and resource-scarce Indic languages focused on rule-based, statistical, and machine learning-based methods, which we found rather inefficient. In particular, machine learning-based approaches, which exhibit superior performance to rule-based and statistical methods, are ineffective as they correct each character regardless of its appropriateness. In this work, we propose a novel detector-purificator-corrector framework based on denoising transformers that addresses these issues. Moreover, we present a method for large-scale corpus creation from scratch, which in turn resolves the resource limitation problem for any left-to-right scripted language. The empirical outcomes demonstrate the effectiveness of our approach, which outperforms previous state-of-the-art methods by a significant margin for Bangla spelling error correction. The models and corpus are publicly available at https://tinyurl.com/DPCSpell.
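A schematic, toy rendering of the detector-purificator-corrector flow; the real stages are denoising transformers, and the stage interfaces here are assumptions made purely to show how the three pieces compose.

```python
# A toy schematic of the detector -> purificator -> corrector pipeline;
# the three functions are stand-ins, not the paper's models.
MASK = "<mask>"

def detect(chars):
    """Toy detector: flag positions believed to be erroneous (here: digits)."""
    return [c.isdigit() for c in chars]

def purify(chars, flags):
    """Replace flagged characters with a mask so the corrector sees clean context."""
    return [MASK if bad else c for c, bad in zip(chars, flags)]

def correct(purified):
    """Toy corrector: fill each mask (a real model would predict characters)."""
    return [("a" if c == MASK else c) for c in purified]

chars = list("b4ngla")
flags = detect(chars)
print("".join(correct(purify(chars, flags))))   # -> "bangla"
```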
The task of Chinese Spelling Check (CSC) aims to detect and correct spelling errors found in text. Manually annotating a high-quality dataset is expensive and time-consuming, so training datasets are usually very small (e.g., SIGHAN15 contains only 2,339 samples for training), and learning-based models therefore suffer from data sparsity and over-fitting, especially in the era of large language models. In this paper, we investigate the unsupervised paradigm for the CSC problem and propose a framework named uChecker for unsupervised spelling error detection and correction. Masked pretrained language models such as BERT are introduced as the backbone model, considering their powerful language-diagnosis capability. Benefiting from various and flexible masking operations, we propose a confusion-set-guided masking strategy to fine-train the masked language model, further improving the performance of unsupervised detection and correction. Experimental results on standard datasets demonstrate the effectiveness of the proposed uChecker in terms of character-level and sentence-level accuracy, precision, recall, and F1 on the spelling error detection and correction tasks, respectively.
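A minimal sketch of confusion-set-guided masking: instead of always substituting [MASK], some positions are replaced by a visually or phonetically confusable character, so the masked LM learns to diagnose realistic errors. The confusion set here is a toy example, not the resource used by uChecker.

```python
# Confusion-set-guided masking sketch; the confusion set and the
# ratios are toy assumptions for illustration.
import random

CONFUSION = {"的": ["地", "得"], "在": ["再"], "他": ["她", "它"]}  # toy set

def confusion_mask(chars, mask_ratio=0.15, confuse_prob=0.5, seed=0):
    rng = random.Random(seed)
    out, labels = [], []
    for c in chars:
        if rng.random() < mask_ratio:
            labels.append(c)                      # gold character to recover
            if c in CONFUSION and rng.random() < confuse_prob:
                out.append(rng.choice(CONFUSION[c]))  # confusable substitute
            else:
                out.append("[MASK]")
        else:
            labels.append(None)
            out.append(c)
    return out, labels

# Ratios set to 1.0 only so the demo substitutes every position.
noisy, gold = confusion_mask(list("他在看书的时候"), mask_ratio=1.0, confuse_prob=1.0)
print(noisy, gold)
```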
Lexicon information and pre-trained models such as BERT have been combined to explore Chinese sequence labeling tasks due to their respective strengths. However, existing methods fuse lexicon features only via a shallow, randomly initialized sequence layer and do not integrate them into the bottom layers of BERT. In this paper, we propose Lexicon Enhanced BERT (LEBERT) for Chinese sequence labeling, which integrates external lexicon knowledge directly into BERT layers through a Lexicon Adapter layer. Compared with existing methods, our model facilitates deep lexicon knowledge fusion in the lower layers of BERT. Experiments on ten Chinese datasets covering three tasks - named entity recognition, word segmentation, and part-of-speech tagging - show that LEBERT achieves state-of-the-art results.
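A sketch of a lexicon adapter in this spirit: each character's hidden state attends over the embeddings of dictionary words matched at that position, and the pooled word feature is added back residually. Dimensions and the attention form are simplified assumptions, not LEBERT's exact module.

```python
# Lexicon adapter sketch: fuse matched-word embeddings into per-character
# BERT hidden states via attention; dimensions are illustrative.
import torch
import torch.nn as nn

class LexiconAdapter(nn.Module):
    def __init__(self, hidden=768, word_dim=200):
        super().__init__()
        self.proj = nn.Linear(word_dim, hidden)   # align word space to BERT space

    def forward(self, char_hidden, word_embs):
        # char_hidden: (seq, hidden); word_embs: (seq, n_words, word_dim),
        # the embeddings of lexicon words matched at each character position.
        words = self.proj(word_embs)                         # (seq, n_words, hidden)
        scores = torch.einsum("sh,snh->sn", char_hidden, words)
        attn = scores.softmax(dim=-1).unsqueeze(-1)          # (seq, n_words, 1)
        fused = (attn * words).sum(dim=1)                    # (seq, hidden)
        return char_hidden + fused                           # residual fusion

adapter = LexiconAdapter()
out = adapter(torch.randn(6, 768), torch.randn(6, 3, 200))
print(out.shape)   # torch.Size([6, 768])
```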
Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but also orthographic and semantic errors, such as misspellings and word choice errors respectively. The field has seen significant progress in the last decade, motivated in part by a series of five shared tasks, which drove the development of rule-based methods, statistical classifiers, statistical machine translation, and finally neural machine translation systems, which represent the current dominant state of the art. In this survey paper, we condense the field into a single article and first outline some of the linguistic challenges of the task, introduce the most popular datasets that are available to researchers (for both English and other languages), and summarise the various methods and techniques that have been developed with a particular focus on artificial error generation. We next describe the many different approaches to evaluation as well as concerns surrounding metric reliability, especially in relation to subjective human judgements, before concluding with an overview of recent progress and suggestions for future work and remaining challenges. We hope that this survey will serve as a comprehensive resource for researchers who are new to the field or who want to be kept apprised of recent developments.
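A toy illustration of the rule-based artificial error generation the survey highlights: corrupt clean sentences with common error types such as dropped prepositions and agreement mistakes. The rules below are illustrative, not drawn from any specific surveyed system.

```python
# Toy rule-based error injection for GEC data augmentation; the rule set
# and probabilities are illustrative assumptions.
import random

PREPOSITIONS = {"in", "on", "at", "of", "to"}

def inject_errors(sentence, drop_prob=0.3, seed=1):
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        if tok.lower() in PREPOSITIONS and rng.random() < drop_prob:
            continue                      # simulate a missing preposition
        if tok == "has" and rng.random() < drop_prob:
            tok = "have"                  # simulate subject-verb disagreement
        out.append(tok)
    return " ".join(out)

print(inject_errors("The field has seen progress in the last decade"))
```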
The lack of labeled data is one of the significant bottlenecks for Chinese Spelling Check (CSC). Existing research uses automatic generation methods that exploit unlabeled data to expand the supervised corpus. However, there is a big gap between real input scenarios and automatically generated corpora. Thus, we develop a competitive general speller, ECSpell, which adopts an Error Consistent masking strategy to create data for pretraining. This error-consistent masking strategy is used to specify the error types of automatically generated sentences so that they are consistent with real scenes. The experimental results indicate that our model outperforms previous state-of-the-art models on the general benchmark. Moreover, spellers often work within a particular domain in real life. Because of the many uncommon domain terms, experiments on the domain-specific datasets we built show that general models perform terribly. Inspired by the common practice of input methods, we propose adding an alterable user dictionary to handle the zero-shot domain adaptation problem. Specifically, we attach a User Dictionary guided inference module (UD) to a general token-classification-based speller. Our experiments demonstrate that ECSpell$^{UD}$, namely ECSpell combined with UD, surpasses all the other baselines by a large margin, even approaching the performance on the general benchmark.
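A minimal sketch of user-dictionary-guided inference in the spirit of the UD module: when the decoded text matches a prefix of a user-dictionary term, the logits of characters that continue that term are boosted. The boost value and the matching scheme are assumptions for illustration.

```python
# User-dictionary-guided logit adjustment sketch; the boost and matching
# rule are illustrative assumptions, not the paper's exact mechanism.
import torch

def ud_guided_step(logits, prefix, user_dict, vocab, boost=2.0):
    """logits: (vocab,) scores for the next character; prefix: decoded so far."""
    adjusted = logits.clone()
    for term in user_dict:
        for start in range(len(prefix) + 1):
            frag = prefix[start:]
            if term.startswith(frag) and len(frag) < len(term):
                nxt = term[len(frag)]               # character continuing the term
                if nxt in vocab:
                    adjusted[vocab[nxt]] += boost   # favour in-domain terms
    return adjusted

vocab = {c: i for i, c in enumerate("abcdefgh")}
out = ud_guided_step(torch.zeros(len(vocab)), "ab", {"abe"}, vocab)
print(out)   # entries for 'a' (a term start) and 'e' (the continuation) boosted
```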
Ensuring proper punctuation and letter casing is a key pre-processing step for applying complex natural language processing algorithms. This is especially significant for text sources that lack punctuation and casing, such as the raw output of automatic speech recognition systems. In addition, short text messages and micro-blogging platforms offer unreliable and often erroneous punctuation and casing. This survey offers an overview of both historical and state-of-the-art techniques for restoring punctuation and correcting word casing. Furthermore, current challenges and research directions are highlighted.
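Punctuation restoration is commonly framed as sequence tagging: each token is labeled with the punctuation (if any) that should follow it, and casing can be recovered from the predicted sentence boundaries. A toy illustration of applying such tags follows; the label set is an assumption.

```python
# Applying per-token punctuation tags and recovering casing; the tag
# inventory is an illustrative assumption.
TAGS = {"O": "", "COMMA": ",", "PERIOD": ".", "QUESTION": "?"}

def apply_tags(tokens, tags, capitalize_after=(".", "?")):
    out, cap = [], True
    for tok, tag in zip(tokens, tags):
        tok = tok.capitalize() if cap else tok
        punct = TAGS[tag]
        out.append(tok + punct)
        cap = punct in capitalize_after   # capitalize after sentence-final marks
    return " ".join(out)

tokens = ["hello", "how", "are", "you"]
tags = ["COMMA", "O", "O", "QUESTION"]
print(apply_tags(tokens, tags))   # -> "Hello, how are you?"
```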
The rapid advancement of AI technology has made text generation tools like GPT-3 and ChatGPT increasingly accessible, scalable, and effective. This can pose a serious threat to the credibility of various forms of media if these technologies are used for plagiarism, including in scientific literature and news sources. Despite the development of automated methods for paraphrase identification, detecting this type of plagiarism remains a challenge due to the disparate nature of the datasets on which these methods are trained. In this study, we review traditional and current approaches to paraphrase identification and propose a refined typology of paraphrases. We also investigate how this typology is represented in popular datasets and how under-representation of certain types of paraphrases impacts detection capabilities. Finally, we outline new directions for future research and datasets in the pursuit of more effective paraphrase detection using AI.
The massive spread of hate speech - hateful content targeted at specific subpopulations - is a problem of critical social importance. Automated methods for hate speech detection typically employ state-of-the-art deep learning (DL) text classifiers: very large pretrained neural language models with over 100 million parameters, adapted to the hate speech detection task using relevant labeled datasets. Unfortunately, only a few labeled datasets of limited size are available for this purpose. We make several contributions with high potential for advancing this state of affairs. We present HyperNetworks for hate speech detection, a special class of DL networks whose weights are regulated by a small-scale auxiliary network. These architectures operate at the character level rather than the word level, and are several orders of magnitude smaller than the popular DL classifiers. We further show that training hate detection classifiers with large amounts of automatically generated examples, in a procedure known as data augmentation, is generally beneficial, and that this practice especially boosts the performance of the proposed HyperNetworks. In fact, using this approach we achieve performance comparable to or better than state-of-the-art language models that are pretrained and orders of magnitude larger, as evaluated on five public hate speech datasets.
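A minimal sketch of the hypernetwork idea: a small auxiliary network emits the weights of the main layer conditioned on a context embedding, rather than those weights being ordinary trained parameters. Sizes are illustrative, not the paper's configuration.

```python
# Hypernetwork layer sketch: the weights of a linear layer are generated
# by an auxiliary network; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    def __init__(self, in_dim=16, out_dim=8, ctx_dim=4):
        super().__init__()
        # auxiliary network: context -> flattened weight matrix and bias
        self.weight_gen = nn.Linear(ctx_dim, in_dim * out_dim)
        self.bias_gen = nn.Linear(ctx_dim, out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, ctx):
        W = self.weight_gen(ctx).view(self.out_dim, self.in_dim)
        b = self.bias_gen(ctx)
        return x @ W.t() + b    # main layer applied with generated weights

layer = HyperLinear()
y = layer(torch.randn(2, 16), torch.randn(4))
print(y.shape)   # torch.Size([2, 8])
```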
Most Chinese pre-trained models take the character as the basic unit for downstream tasks. However, these models ignore the information carried by words, leading to the loss of some important semantics. In this paper, we propose a new method to exploit word structure and integrate lexical semantics into the character representations of pre-trained models. Specifically, we project a word's embedding into its internal characters' embeddings according to similarity weights. To strengthen word boundary information, we mix the representations of the internal characters within a word. After that, we apply a word-to-character alignment attention mechanism to emphasize important characters by masking unimportant ones. Moreover, to reduce error propagation caused by word segmentation, we propose an ensemble approach that combines the segmentation results given by different tokenizers. Experimental results show that our approach achieves better performance than the basic pre-trained models BERT, BERT-wwm, and ERNIE on different Chinese NLP tasks: sentiment classification, sentence pair matching, natural language inference, and machine reading comprehension. We conduct further analysis to demonstrate the effectiveness of each component of our model.
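A sketch of folding a word embedding into its characters' embeddings with similarity weights, in the spirit of the alignment idea above. The similarity function and mixing rule are simplified assumptions.

```python
# Weighted injection of a word embedding into its characters' embeddings;
# cosine similarity and the additive mix are illustrative choices.
import torch
import torch.nn.functional as F

def fuse_word_into_chars(char_embs, word_emb):
    # char_embs: (n_chars, dim); word_emb: (dim,)
    sims = F.cosine_similarity(char_embs, word_emb.unsqueeze(0), dim=-1)
    weights = sims.softmax(dim=0).unsqueeze(-1)      # per-character weight
    return char_embs + weights * word_emb            # weighted injection

chars = torch.randn(3, 64)       # e.g., the three characters of one word
word = torch.randn(64)
print(fuse_word_into_chars(chars, word).shape)   # torch.Size([3, 64])
```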
Scene text spotting is of great importance to the computer vision community due to its wide variety of applications. Recent methods attempt to introduce linguistic knowledge for challenging recognition rather than pure visual classification. However, how to effectively model the linguistic rules in end-to-end deep networks remains a research challenge. In this paper, we argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) language models with noisy input. Correspondingly, we propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting. Firstly, the autonomous design enforces explicit language modeling by decoupling the recognizer into a vision model and a language model and blocking gradient flow between the two models. Secondly, a novel bidirectional cloze network (BCN) is proposed as the language model, based on bidirectional feature representation. Thirdly, we propose an iterative-correction execution manner for the language model which can effectively alleviate the impact of noisy input. Finally, to polish ABINet++ for long text recognition, we propose to aggregate horizontal features by embedding Transformer units inside a U-Net, and design a position and content attention module which integrates character order and content to attend to character features precisely. ABINet++ achieves state-of-the-art performance on both scene text recognition and scene text spotting benchmarks, which consistently demonstrates the superiority of our method in various environments, especially on low-quality images. Besides, extensive experiments in both English and Chinese also prove that a text spotter incorporating our language modeling method can significantly improve its performance in both accuracy and speed compared with commonly used attention-based recognizers.
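A toy rendering of the iterative-correction loop: the language model repeatedly refines the current prediction until it stops changing or an iteration budget is reached. Both models below are stand-ins, not ABINet++'s networks.

```python
# Iterative correction loop sketch with toy stand-in models.
def vision_model(image):
    return "corect1on"                 # noisy visual prediction (toy)

def language_model(text):
    fixes = {"corect1on": "corection", "corection": "correction"}
    return fixes.get(text, text)       # one refinement step (toy)

def iterative_correct(image, max_iters=3):
    pred = vision_model(image)
    for _ in range(max_iters):
        refined = language_model(pred)
        if refined == pred:            # converged: no further change
            break
        pred = refined
    return pred

print(iterative_correct(None))         # -> "correction"
```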
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we first introduce the whole word masking (WWM) strategy for Chinese BERT, along with a series of Chinese pre-trained language models. We then propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways. In particular, we propose a new masking strategy called MLM as correction (Mac). To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc. We carried out extensive experiments on ten Chinese NLP tasks to evaluate the created Chinese pre-trained language models as well as the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and we also ablate details with several findings that may help future research. We open-source our pre-trained language models to further facilitate the research community. Resources are available at https://github.com/ymcui/chinese-bert-wwm
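A small sketch of the MLM-as-correction idea: instead of replacing a masked word with [MASK], substitute a similar word, so pretraining resembles error correction. The similar-word source here is a toy dictionary, an assumption in place of the synonym tool a real system would use.

```python
# MLM-as-correction style masking sketch; the similar-word lookup and
# ratio are toy assumptions.
import random

SIMILAR = {"快乐": ["高兴"], "美丽": ["漂亮"]}   # toy similar-word lookup

def mac_mask(tokens, ratio=0.15, seed=0):
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < ratio and tok in SIMILAR:
            corrupted.append(rng.choice(SIMILAR[tok]))   # similar-word substitute
            labels.append(tok)                           # model must restore it
        else:
            corrupted.append(tok)
            labels.append(None)
    return corrupted, labels

print(mac_mask(["今天", "我", "很", "快乐"], ratio=1.0))   # ratio=1.0 for the demo
```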
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
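A minimal sketch of the fine-tuning recipe described above: the pretrained encoder plus one added classification head. It is shown here via the Hugging Face `transformers` library, which postdates the paper but is a convenient way to reproduce the recipe.

```python
# Fine-tuning BERT with one added output layer, sketched with the
# Hugging Face transformers library (downloads pretrained weights).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # BERT encoder + one classification head

inputs = tokenizer("BERT is conceptually simple.", return_tensors="pt")
labels = torch.tensor([1])
loss = model(**inputs, labels=labels).loss   # fine-tuning loss for one example
loss.backward()
print(float(loss))
```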
Named Entity Recognition and Intent Classification are among the most important subfields of the field of Natural Language Processing. Recent research has led to the development of faster, more sophisticated and efficient models to tackle the problems posed by those two tasks. In this work we explore the effectiveness of two separate families of Deep Learning networks for those tasks: Bidirectional Long Short-Term Memory networks and Transformer-based networks. The models were trained and tested on the ATIS benchmark dataset for both English and Greek languages. The purpose of this paper is to present a comparative study of the two groups of networks for both languages and showcase the results of our experiments. The models, being the current state-of-the-art, yielded impressive results and achieved high performance.
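A compact sketch of a BiLSTM token tagger of the kind compared above, producing per-token entity labels. Sizes and the label count are illustrative.

```python
# BiLSTM sequence tagger sketch; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=64, n_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)   # both directions concatenated

    def forward(self, token_ids):                  # (batch, seq)
        h, _ = self.lstm(self.embed(token_ids))    # (batch, seq, 2*hidden)
        return self.out(h)                         # per-token tag logits

tagger = BiLSTMTagger()
print(tagger(torch.randint(0, 1000, (2, 7))).shape)   # torch.Size([2, 7, 5])
```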
Motivated by the seemingly high accuracy levels of machine learning models employed for Moldavian versus Romanian dialect identification and by the growing research interest in this topic, we provide a follow-up on the Moldavian versus Romanian Cross-dialect Topic identification (MRC) shared task of the VarDial 2019 evaluation campaign. The shared task included two sub-task types: one that consisted of discriminating between the Moldavian and Romanian dialects, and one that consisted of classifying documents by topic across the two Romanian dialects. Participants achieved impressive scores; for example, the top model for Moldavian versus Romanian dialect identification obtained a macro F1 score of 0.895. We conduct a subjective evaluation with human annotators, showing that humans attain much lower accuracy rates than machine learning (ML) models. Hence, it remains unclear why the methods proposed by participants attain such high accuracy rates. Our goal is to understand (i) why the proposed methods work so well (by visualizing the discriminative features) and (ii) to what extent these methods can keep their high accuracy levels, e.g., when we shorten the text samples to single sentences or use tweets at inference time. A secondary goal of our work is to propose an improved ML model based on ensemble learning. Our experiments show that ML models can accurately identify the dialects, even at the sentence level and across different domains (news articles versus tweets). We also analyze the most discriminative features of the best-performing models, offering some explanation of the decisions taken by these models. Interestingly, we learn new dialectal patterns previously unknown to us or to our human annotators. Furthermore, we conduct experiments showing that the ML performance on the MRC shared task can be improved through an ensemble based on stacking.
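A brief sketch of a stacking ensemble of the kind proposed above, using scikit-learn's StackingClassifier over character n-gram features. The base models, features, and toy data are illustrative stand-ins for the paper's systems.

```python
# Stacking ensemble sketch for dialect identification; base learners,
# features, and data are illustrative assumptions.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["exemplu de propozitie", "alt exemplu scurt", "inca o fraza", "text nou"]
labels = [0, 1, 0, 1]   # toy dialect labels

features = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
stack = StackingClassifier(
    estimators=[("svm", LinearSVC()), ("lr", LogisticRegression())],
    final_estimator=LogisticRegression(),   # meta-learner over base predictions
    cv=2)                                   # small cv for the tiny toy dataset
clf = make_pipeline(features, stack)
clf.fit(texts, labels)
print(clf.predict(["exemplu nou"]))
```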
With growing amounts of available textual data, the development of algorithms capable of automatically analyzing, categorizing, and summarizing these data has become a necessity. In this research, we present a novel algorithm for keyword identification, i.e., the extraction of one- or multi-word phrases representing key aspects of a given document, called the Transformer-based Neural Tagger for Keyword IDentification (TNT-KID). By adapting the transformer architecture to the specific task at hand and leveraging language model pretraining on a domain-specific corpus, the model is able to overcome the deficiencies of both supervised and unsupervised state-of-the-art approaches by offering competitive and robust performance on a variety of datasets while requiring only a fraction of the manually labeled data required by the best-performing systems. This study also offers a thorough error analysis with valuable insights into the inner workings of the model, as well as an ablation study measuring the influence of specific components of the keyword identification workflow on the overall performance.
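A small sketch of the final step such a neural tagger needs: decoding per-token keyword tags into phrases, with contiguous positively tagged tokens joined into multi-word keyphrases. The binary tag scheme is an illustrative assumption.

```python
# Decoding keyword phrases from per-token tags; the tag scheme is a
# simplifying assumption for illustration.
def decode_keywords(tokens, is_keyword):
    phrases, current = [], []
    for tok, kw in zip(tokens, is_keyword):
        if kw:
            current.append(tok)          # extend the current phrase
        elif current:
            phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["neural", "keyword", "extraction", "with", "transformers"]
tags = [False, True, True, False, True]
print(decode_keywords(tokens, tags))   # ['keyword extraction', 'transformers']
```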
Spelling error correction is one of the topics with a long history in natural language processing. Although previous studies have achieved remarkable results, challenges remain. In Vietnamese, the state-of-the-art method for the task infers a syllable's context from its adjacent syllables. However, the accuracy of this method can be unsatisfactory because the model may lose context if two (or more) spelling mistakes stand next to each other. In this paper, we propose a novel method for correcting Vietnamese spelling errors. We tackle both mistyped and misspelled errors using a deep learning model. In particular, the embedding layer is powered by the byte pair encoding technique. The sequence-to-sequence model based on the Transformer architecture distinguishes our approach from previous work on the same problem. In our experiments, we train the model on a large synthetic dataset into which spelling errors are randomly introduced, and we test the performance of the proposed method on a realistic dataset containing 11,202 human-made misspellings in 9,341 different Vietnamese sentences. The experimental results show that our method achieves encouraging performance, detecting 86.8% of errors and correcting 81.5%, improvements of 5.6% and 2.2%, respectively, over the state-of-the-art approach.
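A toy illustration of the byte pair encoding step that powers the embedding layer above: repeatedly merge the most frequent adjacent symbol pair. Real systems learn merges on a large corpus; this tiny word list is a stand-in.

```python
# Toy BPE merge learning; corpus and merge count are illustrative.
from collections import Counter

def learn_bpe(words, n_merges=3):
    seqs = [list(w) for w in words]
    merges = []
    for _ in range(n_merges):
        pairs = Counter(
            (s[i], s[i + 1]) for s in seqs for i in range(len(s) - 1))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]   # most frequent adjacent pair
        merges.append(a + b)
        for s in seqs:
            i = 0
            while i < len(s) - 1:
                if s[i] == a and s[i + 1] == b:
                    s[i:i + 2] = [a + b]      # merge the pair in place
                else:
                    i += 1
    return merges, seqs

print(learn_bpe(["loi", "loi", "sai", "sai", "sai"]))
```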
In this paper, we investigate the application of end-to-end and multi-module frameworks for grapheme-to-phoneme (G2P) conversion for the Persian language. The results show that our proposed multi-module G2P system outperforms our end-to-end systems in terms of both accuracy and speed. The system consists of a pronunciation dictionary used as a look-up table, together with separate models for handling homographs, OOV words, and ezafe in Persian, created using GRU and Transformer architectures. The system operates at the sequence level rather than the word level, which allows it to effectively capture the unwritten relations between words (cross-word information) needed for homograph disambiguation and ezafe recognition without any pre-processing. After evaluation, our system achieved 94.48% word-level accuracy, outperforming previous Persian G2P systems.
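A schematic of the multi-module flow: consult the pronunciation lexicon first and fall back to a trained model for OOV words. The lexicon entries and the fallback model are illustrative placeholders.

```python
# Dictionary lookup with neural fallback, schematically; entries and the
# fallback are toy placeholders, not the paper's resources.
LEXICON = {"salam": "s a l A m"}   # toy pronunciation dictionary

def neural_g2p(word):
    """Stand-in for the GRU/Transformer OOV model."""
    return " ".join(word)          # trivially spell out the letters

def g2p(word):
    return LEXICON.get(word) or neural_g2p(word)

print(g2p("salam"))   # lexicon hit
print(g2p("ketab"))   # OOV word: fall back to the model
```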
Language models are defined over a finite set of inputs, which creates a vocabulary bottleneck when we attempt to scale the number of supported languages. Tackling this bottleneck results in a trade-off between what can be represented in the embedding matrix and computational issues in the output layer. This paper introduces PIXEL, a pixel-based language encoder that suffers from neither of these issues. PIXEL is a pretrained language model that renders text as images, making it possible to transfer representations across languages based on orthographic similarity or the co-activation of pixels. PIXEL is trained to reconstruct the pixels of masked patches instead of predicting a distribution over tokens. We pretrain an 86M-parameter PIXEL model on the same English data as BERT and evaluate it on syntactic and semantic tasks in typologically diverse languages, including various non-Latin scripts. We find that PIXEL substantially outperforms BERT on syntactic and semantic processing tasks for scripts not found in the pretraining data, but is slightly weaker than BERT when working with Latin scripts. Furthermore, we find that PIXEL is more robust than BERT to noisy text inputs, further confirming the benefits of modeling language with pixels.
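A toy sketch of the patch-masking objective: given a rendered text image, hide random patches and train to reconstruct their pixels. The rendering step is faked with random pixels here; PIXEL renders real text.

```python
# Patch masking over a rendered text strip; the "render" is a random
# array stand-in, and the patch size/ratio are illustrative.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((16, 256))          # stand-in for a rendered text strip
patch_w = 16
n_patches = image.shape[1] // patch_w

mask = rng.random(n_patches) < 0.25    # mask ~25% of patches
masked = image.copy()
for i in np.where(mask)[0]:
    masked[:, i * patch_w:(i + 1) * patch_w] = 0.0   # hide the patch

# A reconstruction loss would compare predictions to the hidden pixels:
target = image[:, np.repeat(mask, patch_w)]          # pixels to reconstruct
print(masked.shape, target.shape)
```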