One of the important topics in the research field of Chinese classical poetry is the analysis of poetic style. By examining the relevant works of previous dynasties, researchers have judged poetic style mostly by subjective impression, referring to earlier evaluations that have become settled conclusions. Although this method of judgment is often effective, it can introduce errors. This paper builds the most complete dataset of Chinese classical poetry to date, trains a BART-poem pre-trained model on this dataset, and proposes a generally applicable method for judging poetic style based on this BART-poem model, innovatively introducing deep learning into the field of computational stylistics and providing a new research method for the study of classical poetry. The paper applies this method to the problem of identifying Tang-style and Song-style poetry, taking as research objects the poetry schools considered to have a relatively clear and consistent poetic style, such as the Hongzheng Qizi and Jiajing Qizi, the Jiangxi poetry school, and the Tongguang poetry school, and testing on the poems of their representative poets. Experiments show that the model's judgments of the tested poems are basically consistent with the conclusions given by critics of previous dynasties, verify some avant-garde judgments of Mr. Qian Zhongshu, and effectively solve the task of Tang- and Song-style poetry recognition.
To test whether artificial intelligence can create qualified classical poetry like humans, the author proposes a study of Chinese classical poetry generation based on pre-trained models. This paper mainly uses BART and other pre-trained models, proposes FS2TEXT and RR2TEXT to generate metrical-poetry text and even poetry text in a specific style, and addresses the problem that the relevance of the generated poetry text to the user's writing intention gradually decreases. To test the model's results, the authors selected works by ancient poets, combined them with the output of the BART poetry model, and developed a set of AI-poetry Turing questions, which were reviewed by a group of poets and poetry-writing researchers. There were more than 600 participants, and the final results showed that high-level poetry lovers could not distinguish AI-generated work from human work, which indicates that the model's output is not significantly different from human writing. The poetry generation model studied by the author produces works that cannot be distinguished from those of advanced scholars. The number of modern Chinese poets has reached five million, yet many of them lack language ability and skills as a result of their early education, and many lack creative inspiration; the author's model can help them. They can consult the model when choosing words and phrases, write pieces based on the poems they already have, and go on to write their own poems. The importance of poetry lies in the author's thoughts and reflections. It does not matter how good AI poetry is; what matters is that people see it and are inspired by it.
The definition generation task aims to automatically generate a definition for a word in a specific context. However, owing to the lack of datasets with different complexity levels, the definitions produced by models tend to stay at the same complexity level. This paper proposes the novel task of generating definitions for words at controllable complexity levels. Correspondingly, we introduce COMPILING, a dataset that gives detailed information about Chinese definitions, in which each definition is labeled with its complexity level. The COMPILING dataset includes 74,303 words and 106,882 definitions. To the best of our knowledge, it is the largest dataset for the Chinese definition generation task. We select various representative generation methods as baselines for this task and conduct evaluations, which illustrate that our dataset plays an outstanding role in assisting models to generate definitions at different complexity levels. We believe the COMPILING dataset will benefit further research on complexity-controllable definition generation.
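As a rough illustration of complexity-controllable generation, the sketch below conditions a seq2seq model on a complexity control token prepended to the input; the checkpoint name, level tokens, and input format are illustrative assumptions, not the COMPILING authors' exact setup.

```python
# A minimal sketch, assuming a Chinese BART checkpoint and made-up level
# tokens; not the COMPILING paper's exact training setup.
from transformers import BertTokenizer, BartForConditionalGeneration

MODEL = "fnlp/bart-base-chinese"  # assumed checkpoint choice
tokenizer = BertTokenizer.from_pretrained(MODEL)  # this BART uses a BERT tokenizer
model = BartForConditionalGeneration.from_pretrained(MODEL)

# One control token per complexity level, prepended to the source text.
levels = ["<lv1>", "<lv2>", "<lv3>"]
tokenizer.add_special_tokens({"additional_special_tokens": levels})
model.resize_token_embeddings(len(tokenizer))

def generate_definition(word: str, context: str, level: str) -> str:
    """Condition generation on a target complexity-level token."""
    ids = tokenizer(f"{level} {word}: {context}",
                    return_tensors="pt", truncation=True).input_ids
    out = model.generate(ids, max_length=64, num_beams=4)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# After fine-tuning on (word, context, level, definition) tuples, the same
# word can be defined at different complexity levels by swapping the token.
print(generate_definition("苹果", "他吃了一个苹果。", "<lv1>"))
```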
Future work sentences (FWS) are the particular sentences in academic papers that describe the authors' proposed follow-up research directions. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Existing work on the automatic identification of future work sentences is relatively scarce, and current research cannot accurately identify FWS in academic papers, and thus cannot support large-scale data mining. Furthermore, the content of future work has many aspects, and subdividing this content facilitates the analysis of specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study: FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli naive Bayes model has the best performance in the automatic recognition task, with the macro F1 reaching 90.73%, and the SciBERT model has the best performance in the automatic classification task, with the weighted average F1 reaching 72.63%. Finally, we extract keywords from FWS to gain a deeper understanding of the key content they describe, and we also demonstrate that the content set out in FWS is reflected in subsequent research by measuring the similarity between future work sentences and abstracts.
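As a hedged sketch of the recognition step, the following treats FWS identification as binary sentence classification with Bernoulli naive Bayes; the toy sentences and bag-of-words feature setup are illustrative, not the paper's exact pipeline.

```python
# A minimal sketch, assuming a binary bag-of-words representation; the
# paper's actual features and preprocessing are not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

train_sents = [
    "In future work, we will extend the model to new domains.",
    "The encoder consists of six transformer layers.",
]
train_labels = [1, 0]  # 1 = future work sentence, 0 = other

# BernoulliNB models binary feature presence, hence binary=True.
clf = make_pipeline(
    CountVectorizer(binary=True, ngram_range=(1, 2)),
    BernoulliNB(),
)
clf.fit(train_sents, train_labels)

# Predicted labels for unseen sentences; 1 marks a detected FWS.
print(clf.predict(["We plan to explore multilingual settings."]))
```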
In this paper, we take advantage of previous pre-trained models (PTMs) and propose a novel Chinese Pre-trained Unbalanced Transformer (CPT). Different from previous Chinese PTMs, CPT is designed to utilize the shared knowledge between natural language understanding (NLU) and natural language generation (NLG) to boost performance. CPT consists of three parts: a shared encoder, an understanding decoder, and a generation decoder. Two specific decoders with a shared encoder are pre-trained with masked language modeling (MLM) and denoising auto-encoding (DAE) tasks, respectively. With the partially shared architecture and multi-task pre-training, CPT can (1) learn specific knowledge of both NLU and NLG tasks with the two decoders and (2) be fine-tuned flexibly to fully exploit the model's potential. Moreover, the unbalanced Transformer saves computational and storage costs, which makes CPT competitive and greatly accelerates inference for text generation. Experimental results on a variety of Chinese NLU and NLG tasks demonstrate the effectiveness of CPT.
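The following is a structural sketch of the unbalanced layout described above: a deep shared encoder with one shallow decoder each for NLU and NLG. Layer counts, dimensions, and pre-training details are illustrative assumptions, not the released CPT implementation.

```python
# A structural sketch of a partially shared, unbalanced transformer;
# sizes are illustrative assumptions, not CPT's released configuration.
import torch
import torch.nn as nn

class CPTSketch(nn.Module):
    def __init__(self, vocab=21128, d=768, enc_layers=10, dec_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        enc = nn.TransformerEncoderLayer(d, nhead=12, batch_first=True)
        dec = nn.TransformerDecoderLayer(d, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, enc_layers)  # shared, deep
        self.u_dec = nn.TransformerDecoder(dec, dec_layers)    # NLU head (MLM)
        self.g_dec = nn.TransformerDecoder(dec, dec_layers)    # NLG head (DAE)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, src_ids, tgt_ids, mode="generation"):
        memory = self.encoder(self.embed(src_ids))
        decoder = self.g_dec if mode == "generation" else self.u_dec
        hidden = decoder(self.embed(tgt_ids), memory)
        return self.lm_head(hidden)

model = CPTSketch()
src = torch.randint(0, 21128, (2, 32))
tgt = torch.randint(0, 21128, (2, 16))
print(model(src, tgt, mode="understanding").shape)  # (2, 16, 21128)
```

Because both decoders are shallow, generation pays the cost of the deep encoder only once per input, which is one plausible source of the inference speed-up the abstract mentions.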
Neural machine translation (NMT) has attracted wide attention due to its impressive quality. Beyond quality, controlling translation style is also an important demand for many languages. Previous related studies mainly focus on controlling formality and achieve some improvements. However, they still face two challenges. The first is the evaluation limitation: style contains abundant information including lexis, syntax, and more, but only formality is well studied. The second is the heavy reliance on iterative fine-tuning when new styles are required. Correspondingly, this paper contributes in terms of both benchmark and approach. First, we revisit this task and propose a multiway stylized machine translation (MSMT) benchmark, which includes multiple categories of styles in four language directions to push the boundary of this task. Second, we propose a method named style activation prompt (StyleAP), which retrieves prompts from a stylized monolingual corpus and needs no extra fine-tuning. Experiments show that StyleAP can effectively control the style of translation and achieves remarkable performance. All of our data and code are released at https://github.com/IvanWang0730/StyleAP.
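As a minimal sketch of retrieval-based style prompting in the spirit of StyleAP, the snippet below fetches the most similar sentence from a stylized monolingual corpus and prepends it as a prompt; the TF-IDF retriever and prompt format are illustrative assumptions, not the paper's recipe.

```python
# A minimal sketch, assuming TF-IDF retrieval and a simple prompt
# concatenation; not StyleAP's exact retrieval or prompting scheme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stylized_corpus = [
    "Thou art more lovely and more temperate.",
    "Hark, what light through yonder window breaks?",
]

vectorizer = TfidfVectorizer().fit(stylized_corpus)
corpus_vecs = vectorizer.transform(stylized_corpus)

def style_prompt(draft_translation: str) -> str:
    """Prepend the most similar stylized sentence as an in-context prompt."""
    sims = cosine_similarity(vectorizer.transform([draft_translation]),
                             corpus_vecs)[0]
    exemplar = stylized_corpus[sims.argmax()]
    # The prompted input would then be fed to the (frozen) NMT model,
    # so no extra fine-tuning is needed when a new style is added.
    return f"{exemplar} </s> {draft_translation}"

print(style_prompt("You are more beautiful and more gentle."))
```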
Traditional Chinese medicine (TCM) is a natural, safe, and effective therapy that has spread and been applied worldwide. The unique TCM diagnosis and treatment system requires a comprehensive analysis of a patient's symptoms hidden in clinical records written in free text. Prior studies have shown that this system can be informatized and made intelligent with the aid of artificial intelligence (AI) technologies such as natural language processing (NLP). However, existing datasets lack sufficient quality or quantity to support the further development of data-driven AI technologies in TCM. Therefore, in this paper, we focus on the core task of the TCM diagnosis and treatment system, syndrome differentiation (SD), and we introduce the first public large-scale dataset for SD, called TCM-SD. Our dataset contains 54,152 real-world clinical records covering 148 syndromes. Furthermore, we collect a large-scale unlabeled text corpus in the TCM domain and propose a domain-specific pre-trained language model called ZY-BERT. We conducted experiments using deep neural networks to establish strong performance baselines, reveal various challenges in SD, and demonstrate the potential of domain-specific pre-trained language models. Our study and analysis reveal opportunities for incorporating computer science and linguistics knowledge to explore the empirical validity of TCM theories.
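A minimal sketch of syndrome differentiation framed as multi-class text classification follows; here bert-base-chinese stands in for the domain-specific ZY-BERT (whose checkpoint name is not assumed), and the 148-way label space mirrors the dataset description above.

```python
# A minimal sketch, assuming a generic Chinese BERT; swapping in ZY-BERT
# would follow the same pattern once its checkpoint is available.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_SYNDROMES = 148  # TCM-SD covers 148 syndromes
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=NUM_SYNDROMES)

record = "患者恶寒发热，无汗，头身疼痛，脉浮紧。"  # free-text clinical record
inputs = tokenizer(record, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(int(logits.argmax(-1)))  # predicted syndrome id (untrained head here)
```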
Academic literature in the social sciences records human civilization and studies human social problems. With the large-scale growth of this literature, ways to quickly find existing research on relevant issues have become an urgent demand for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific text can improve the performance of natural language processing tasks in those fields. However, there is no pre-trained language model for the social sciences, so this paper proposes pre-trained models based on a large number of abstracts from Social Science Citation Index (SSCI) journals. The models, which are available on GitHub (https://github.com/s-t-full-text-knowledge-mining/ssci-bert), show excellent performance on discipline classification and abstract structure-function recognition tasks with social science literature.
Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that are more natural and better meet the specific constraints of practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing the generation of more diverse and fluent text. However, due to the low interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches has emerged in the past three to four years, targeting different CTG tasks that may require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize CTG techniques from the perspective of PLMs. We hope it can help researchers in related fields to quickly track the academic frontier, providing them with a landscape of the area and a roadmap for future research.
Non-parallel text style transfer is an important task in natural language generation. However, previous studies concentrate on the token or sentence level, such as sentence sentiment and formality transfer, and neglect long-text style transfer at the discourse level. Long texts usually involve more complicated author linguistic preferences, such as discourse structures, than sentences do. In this paper, we formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style while maintaining the source semantics. To tackle this problem, we propose a generation model named StoryTrans, which leverages discourse representations to capture source content information and transfer it to target styles with learnable style embeddings. We use an additional training objective to disentangle stylistic features of the literary text from the learned discourse representations, preventing the model from degenerating into an auto-encoder. Moreover, to enhance content preservation, we design a mask-and-fill framework to explicitly fuse style-specific keywords of the source text into generation. Furthermore, we construct new datasets for this task in Chinese and English, respectively. Extensive experiments show that our model outperforms strong baselines in overall performance on style transfer and content preservation.
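The toy sketch below illustrates the masking half of a mask-and-fill scheme of this kind: style-bearing tokens are replaced with placeholders so that a generator conditioned on a target-style embedding can re-fill them; the keyword list and mask format are illustrative assumptions, not StoryTrans's implementation.

```python
# A toy sketch of mask-and-fill masking; the keyword scorer and the
# downstream fill model are illustrative assumptions.
def mask_style_keywords(text: str, style_keywords: set) -> str:
    """Replace style-bearing tokens with a mask placeholder."""
    tokens = text.split()
    return " ".join("<mask>" if t.strip(".,") in style_keywords else t
                    for t in tokens)

source = "The gloomy rain whispered along the dreary street."
keywords = {"gloomy", "whispered", "dreary"}  # e.g. scored by TF-IDF
print(mask_style_keywords(source, keywords))
# -> "The <mask> rain <mask> along the <mask> street."
# A seq2seq model conditioned on a target-style embedding would then fill
# the masks, while the unmasked content words anchor the source semantics.
```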
Pre-trained language models (PLMs) have achieved remarkable success in natural language generation (NLG) tasks. To date, most PLMs are pre-trained in an unsupervised fashion using large-scale general corpora. Meanwhile, compared with unsupervised models, supervised pre-trained models have increasingly shown superior performance with less data. Motivated by the success of supervised pre-training, we propose multi-task supervised pre-training (MVP) for natural language generation. To pre-train the text generation model MVP, we collect a labeled pre-training corpus of 45 datasets from seven generation tasks. For each task, we further pre-train specific soft prompts to stimulate the model's capacity for performing that particular task. Extensive experiments have demonstrated the effectiveness of our supervised pre-training on a number of NLG tasks, and our general method achieves state-of-the-art performance on 12 of 17 datasets.
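As a hedged sketch of the task-specific soft prompts mentioned above, the module below prepends a small matrix of learnable vectors to the input embeddings; the prompt length, dimensions, and per-task instantiation are illustrative assumptions rather than MVP's exact configuration.

```python
# A minimal sketch of task-specific soft prompts; one instance would be
# trained per generation task, with sizes chosen for illustration only.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len=50, d_model=768):
        super().__init__()
        # Learnable prompt vectors, trained while stimulating one task.
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

summarization_prompt = SoftPrompt()        # one instance per task
embeds = torch.randn(2, 128, 768)          # stand-in token embeddings
print(summarization_prompt(embeds).shape)  # torch.Size([2, 178, 768])
```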
In scientific research, methods are an indispensable means of solving scientific problems, as well as key research objects in their own right. With the development of science, many scientific methods are being proposed, modified, and used. Authors describe the details of methods in abstracts and body text, and the key entities in academic literature that reflect method names are called method entities. Exploring the various method entities in a large volume of academic literature helps scholars understand existing methods, select appropriate methods for research tasks, and propose new methods. Moreover, the evolution of method entities can reveal the development of disciplines and facilitate knowledge discovery. Therefore, this paper offers a systematic review of methodological and empirical works, focusing on the extraction of method entities from full-text academic literature and on efforts to build knowledge services using these extracted method entities. Definitions of the key concepts involved in this review are first proposed. Based on these definitions, we systematically review the approaches and metrics for extracting and evaluating method entities, with a focus on the pros and cons of each approach. We also survey how the extracted method entities are used to build new applications. Finally, the limitations of existing works and potential next steps are discussed.
Nowadays, foundation models have become one of the fundamental infrastructures of artificial intelligence, paving the way to general intelligence. However, reality presents two urgent challenges: existing foundation models are dominated by the English-language community, and users are often given limited resources and thus cannot always use foundation models. To support the development of the Chinese-language community, we introduce an open-source project called Fengshenbang, led by the research center for Cognitive Computing and Natural Language (CCNL). Our project has comprehensive capabilities, including large pre-trained models, user-friendly APIs, benchmarks, datasets, and more. We wrap all of these into three sub-projects: the Fengshenbang Models, the Fengshen Framework, and the Fengbang Benchmarks. Fengshenbang's open-source roadmap aims to re-evaluate the open-source community of Chinese pre-trained large-scale models and to promote the development of the entire Chinese large-model community. We also want to build a user-centered open-source ecosystem that allows individuals to access the models they need to match their computing resources. Furthermore, we invite companies, universities, and research institutions to collaborate with us in building the ecosystem of large-scale open-source models. We hope this project will become the foundation of Chinese cognitive intelligence.
Purpose: The purpose of this paper is to explore which academic article structures referees pay more attention to, which specific content peer review comments (PRCs) focus on, and whether the distribution of PRCs is related to citations. Design/methodology/approach: First, section titles and the feature words of a Hierarchical Attention Network (HAN) model are used to identify academic article structures. Second, the distribution of PRCs across different structures is analyzed according to position information extracted by rules. Third, the distribution of PRC feature words extracted via chi-square tests and TF-IDF across different structures is analyzed. Finally, four correlation analysis methods are used to examine whether the distribution of PRCs across structures is correlated with citations. Findings: The count of PRCs distributed in the Materials and Methods and Results sections far exceeds that of the Introduction and Discussion structures, indicating that referees pay more attention to the materials, methods, and results. The distribution of PRC feature words across structures is clearly different, which can reflect the content that referees focus on. There is no correlation between the distribution of PRCs across structures and citations. Research limitations/implications: Because referees write peer review reports differently, the rules used to extract position information cannot cover all PRCs. Originality/value: This paper identifies a pattern in the distribution of PRCs across academic article structures, providing evidence for a long-standing empirical understanding. It also offers insight into academic writing: researchers should ensure the scientific rigor of their methods and the reliability of their results in order to gain high recognition from referees.
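As a rough sketch of the feature-word analysis, the snippet below scores terms in peer review comments against the article structure they target using a chi-square test; the toy comments and labels are illustrative, not the study's corpus or exact procedure.

```python
# A small sketch, assuming a term-count matrix and structure labels;
# TF-IDF weighting could be substituted for raw counts in the same way.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

comments = ["The sample size in the methods is too small.",
            "The results lack a significance test.",
            "The introduction motivates the problem well."]
sections = ["methods", "results", "introduction"]  # structure each PRC targets

vec = CountVectorizer()
X = vec.fit_transform(comments)
scores, _ = chi2(X, sections)  # higher score = more section-discriminative

terms = vec.get_feature_names_out()
print(sorted(zip(scores, terms), reverse=True)[:5])
```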
Pre-trained language models are trained on large-scale unsupervised data, and they can be fine-tuned on small-scale labeled datasets to achieve good results. Multilingual pre-trained language models can be trained on multiple languages and understand those languages at the same time. At present, research on pre-trained models mainly focuses on rich-resource languages, while there is relatively little research on low-resource languages such as minority languages, and public multilingual pre-trained language models do not work well for minority languages. Therefore, this paper constructs a multilingual pre-trained language model named MiLMo that performs better on minority-language tasks, covering Mongolian, Tibetan, Uyghur, Kazakh, and Korean. To address the scarcity of minority-language datasets and verify the effectiveness of the MiLMo model, this paper also constructs a minority multilingual text classification dataset named MiTC and trains a word2vec model for each language. By comparing the word2vec models and the pre-trained model on the text classification task, this paper provides an optimal scheme for downstream-task research on minority languages. The final experimental results show that the performance of the pre-trained model is better than that of the word2vec models, and it achieves the best results in minority multilingual text classification. The multilingual pre-trained language model MiLMo, the multilingual word2vec models, and the multilingual text classification dataset MiTC are published at https://milmo.cmli-nlp.com.
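A minimal sketch of the word2vec baseline used in such comparisons appears below: averaged word vectors feed a linear classifier, against which a fine-tuned pre-trained model would then be compared; the toy corpus and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of a word2vec text classification baseline; the toy
# tokenized documents and labels stand in for MiTC-style data.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

docs = [["монгол", "хэл", "мэдээ"], ["ᠮᠣᠩᠭᠣᠯ", "ᠰᠣᠨᠢᠨ"]]  # tokenized texts
labels = [0, 1]

w2v = Word2Vec(docs, vector_size=100, min_count=1, epochs=20)

def doc_vector(tokens):
    """Average the vectors of in-vocabulary tokens."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

X = np.stack([doc_vector(d) for d in docs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))  # baseline predictions to compare against MiLMo
```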
The Chinese character riddle is a challenging riddle game that takes a single character as its solution. The riddle describes the pronunciation, shape, and meaning of the solution character with rhetorical techniques. In this paper, we propose a Chinese character riddle dataset covering the majority of common simplified Chinese characters, built by crawling riddles from the Web and generating brand-new ones. In the generation stage, we provide the generation model with the phonetic alphabet, glyph, and meaning of the solution character, and obtain multiple riddle descriptions for each tested character. The generated riddles are then manually filtered, and the final dataset, CC-Riddle, is composed of both human-written riddles and filtered generated riddles. Furthermore, we build a character riddle QA system based on the dataset and find that existing models struggle to solve such tricky questions. CC-Riddle is now publicly available.
Scientific literature serves as a high-quality corpus, supporting a great deal of natural language processing (NLP) research. However, existing datasets center around English, which restricts the development of Chinese scientific NLP. In this work, we present CSL, a large-scale Chinese Scientific Literature dataset, which contains the titles, abstracts, keywords, and academic fields of 396k papers. To our knowledge, CSL is the first scientific document dataset in Chinese. CSL can serve as a Chinese corpus. Moreover, this semi-structured data is a form of natural annotation that can constitute many supervised NLP tasks. Based on CSL, we present a benchmark to evaluate the performance of models across scientific-domain tasks, namely summarization, keyword generation, and text classification. We analyze the behavior of existing text-to-text models on the evaluation tasks and reveal the challenges of Chinese scientific NLP tasks, providing a valuable reference for future research. Data and code are available at https://github.com/ydli-ai/csl
This research provides the first comprehensive analysis of the performance of pre-trained language models for Sinhala text classification. We test a set of different Sinhala text classification tasks, and our analysis shows that among the pre-trained multilingual models that include Sinhala (XLM-R, LaBSE, and LASER), XLM-R is by far the best model for Sinhala text classification. We also pre-train two RoBERTa-based monolingual Sinhala models, which far outperform the existing pre-trained language models for Sinhala. We show that, when fine-tuned, these pre-trained language models set a very strong baseline for Sinhala text classification and are robust in situations where labeled data is insufficient for fine-tuning. We further provide a set of recommendations for using pre-trained models for Sinhala text classification. We also introduce new annotated datasets useful for future research on Sinhala text classification and publicly release our pre-trained models.
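As one hedged illustration of the low-labeled-data setting, the sketch below uses XLM-R as a frozen feature extractor for Sinhala text, mean-pooling token states into sentence vectors for any downstream classifier; freezing the encoder is an illustrative choice, not the paper's exact protocol.

```python
# A minimal sketch, assuming frozen XLM-R features; full fine-tuning is
# the stronger option when enough labeled data is available.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")
encoder.eval()  # frozen feature extractor

@torch.no_grad()
def embed(texts):
    """Mean-pool the final hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

vecs = embed(["මෙය සිංහල වාක්‍යයකි."])  # "This is a Sinhala sentence."
print(vecs.shape)  # torch.Size([1, 768]); feed these to any classifier
```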
The study presented here provides digital insights into the ghazal, the most admired genre of Urdu poetry. Exploring a corpus of 4,754 ghazals, the study examines the major features of the Urdu ghazal that make it more popular and admired than other forms. A detailed explanation is provided, along with the types of words used to express love, nature, birds, flowers, and so on. The way poets address, within their poems, the person to whom the poem is devoted is also considered. Poetic styles are analyzed numerically using multidimensional scaling to reveal the lexical diversity and the similarities/differences of poetic works that have drawn critics' attention, such as those of Iqbal and Ghalib, and Mir Taqi Mir and Mir Dard. The analysis produced here is particularly beneficial for computational stylistics, neurocognitive poetics, and sentiment analysis.
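A small sketch of stylometric scaling of this kind follows: each poet is represented by relative frequencies of common words, and multidimensional scaling projects the pairwise distances into two dimensions; the four-poet frequency matrix is a toy stand-in, not the study's data.

```python
# A small sketch, assuming word-frequency profiles per poet; the numbers
# are invented for illustration only.
import numpy as np
from sklearn.manifold import MDS

poets = ["Iqbal", "Ghalib", "Mir Taqi Mir", "Mir Dard"]
# Rows: poets; columns: relative frequencies of frequent words.
freqs = np.array([[0.021, 0.015, 0.008],
                  [0.019, 0.017, 0.007],
                  [0.011, 0.024, 0.013],
                  [0.012, 0.022, 0.012]])

coords = MDS(n_components=2, random_state=0).fit_transform(freqs)
for poet, (x, y) in zip(poets, coords):
    print(f"{poet}: ({x:.3f}, {y:.3f})")  # nearby points = similar styles
```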
Lyric interpretations can help people quickly understand songs and their lyrics, and can also make it easier to manage, retrieve, and discover songs within ever-growing music archives. In this paper, we propose BART-fusion, a novel model for generating lyric interpretations from lyrics and music audio, which combines a large-scale pre-trained language model with an audio encoder. We employ a cross-modal attention module to incorporate audio representations into lyric representations, helping the pre-trained language model understand songs from the audio perspective while preserving the language model's original generative performance. We also release the Song Interpretation Dataset, a new large-scale dataset for training and evaluating our model. Experimental results show that the additional audio information helps our model better understand words and music and generate precise and fluent interpretations. A further experiment on cross-modal music retrieval shows that the interpretations generated by BART-fusion can also help people retrieve music more accurately than the original BART.
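The module below is a structural sketch of the cross-modal attention idea: lyric token states attend over audio frame embeddings, and a residual connection preserves the original text representation; the dimensions and single-layer design are illustrative assumptions, not BART-fusion's released configuration.

```python
# A structural sketch of cross-modal attention between lyric and audio
# representations; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, lyric_states, audio_states):
        # Queries come from lyrics; keys/values from the audio encoder.
        fused, _ = self.attn(lyric_states, audio_states, audio_states)
        return self.norm(lyric_states + fused)  # residual preserves text

layer = CrossModalAttention()
lyrics = torch.randn(1, 64, 768)   # BART encoder states for lyric tokens
audio = torch.randn(1, 200, 768)   # projected audio frame embeddings
print(layer(lyrics, audio).shape)  # torch.Size([1, 64, 768])
```

The residual plus layer norm is one simple way to realize the abstract's stated goal of adding audio grounding without disturbing the language model's original generative behavior.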