In recent years, due to the high availability of electronic documents on the web, plagiarism has become a serious challenge, especially among scholars. Various plagiarism detection systems have been developed to prevent text reuse and to confront plagiarism. While detecting duplicated text in academic manuscripts is relatively easy, finding patterns of text reuse that have been semantically altered is of great importance. Another important issue is handling less-resourced languages, which have few texts available for training purposes and for which NLP tools and applications perform poorly. In this paper, we introduce Hamtajoo, a Persian plagiarism detection system for academic manuscripts. We describe the overall structure of the system along with the algorithms used in each stage. To evaluate the performance of the proposed system, we used a plagiarism detection corpus that complies with the PAN standard.
Logic Mill is a scalable and openly accessible software system that identifies semantically similar documents within either one domain-specific corpus or multi-domain corpora. It uses advanced Natural Language Processing (NLP) techniques to generate numerical representations of documents. Currently it leverages a large pre-trained language model to generate these document representations. The system focuses on scientific publications and patent documents and contains more than 200 million documents. It is easily accessible via a simple Application Programming Interface (API) or via a web interface. Moreover, it is continuously being updated and can be extended to text corpora from other domains. We see this system as a general-purpose tool for future research applications in the social sciences and other domains.
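As an illustration of the underlying idea of generating document representations with a pre-trained language model and comparing them, here is a minimal sketch in Python. It is not the Logic Mill codebase or its API; the model checkpoint and the example documents are placeholders.

```python
# Minimal sketch: embed documents with a pre-trained encoder and rank them by
# cosine similarity to a query document. Model name and texts are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder pre-trained encoder

documents = [
    "A method for detecting semantically similar patent claims.",
    "A corpus of scientific text reuse in open-access publications.",
    "A recipe collection for vegetarian cooking.",
]
query = "Finding patents that are semantically close to a given scientific paper."

doc_emb = model.encode(documents, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, doc_emb)[0]   # cosine similarity to each document
best = int(scores.argmax())
print(documents[best], float(scores[best]))    # most similar document and its score
```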
Plagiarism is claiming someone else's work as one's own, without proper credit and citation. This is a survey paper that reviews several notable research papers and compares their work on plagiarism detection. Nowadays, plagiarism detection has become one of the most interesting and critical research topics in natural language processing. We review earlier research papers on different types of plagiarism detection along with their models and algorithms, and compare the accuracy reported in these papers. There are several approaches to detecting plagiarism in different languages, and several algorithms can be used for it, such as similarity-based and corpus-based methods, CL-CNG, LSI, Levenshtein distance, and others. We analyzed these papers and found that they use different types of algorithms to detect plagiarism, and that some algorithms provide better output and accuracy for plagiarism detection than others. We review a number of papers on plagiarism and discuss the pros and cons of their models. We also present a proposed plagiarism detection approach based on sense separation and word separation, which constructs sentences according to synonyms and compares them with any source. A sketch of one of the algorithms named above is given after this abstract.
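As a concrete illustration of one of the algorithms listed above, here is a minimal sketch of Levenshtein-distance-based similarity between a suspicious sentence and a source sentence. It is not any surveyed paper's implementation; the example sentences and the idea of thresholding the score are illustrative.

```python
# Minimal sketch: edit-distance similarity between two sentences.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalize edit distance into a 0..1 similarity score."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

source = "Plagiarism is the act of using another person's work without credit."
suspect = "Plagiarism is using someone else's work without giving credit."
print(f"similarity = {similarity(source, suspect):.2f}")  # flag if above a chosen threshold
```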
Identifying cross-lingual plagiarism is challenging, especially for distant language pairs and sense-for-sense translations. We introduce the new multilingual retrieval model Cross-Language Ontology-Based Similarity Analysis (CL-OSA) for this task. CL-OSA represents documents as entity vectors obtained from the open knowledge graph Wikidata. Unlike other approaches, CL-OSA requires neither computationally expensive machine translation nor pre-training on comparable or parallel corpora. It reliably disambiguates homonyms and scales to Web-scale document collections. We show that CL-OSA outperforms state-of-the-art methods for retrieving candidate documents from five large, topically diverse test corpora, including distant language pairs such as Japanese-English. For identifying cross-lingual plagiarism at the character level, CL-OSA primarily improves the detection of sense-for-sense translations. For these challenging cases, CL-OSA's performance in terms of the well-established PlagDet score exceeds that of its best competitor by more than a factor of two. The code and data of our study are openly available.
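To make the retrieval step concrete, here is a minimal sketch that ranks candidate source documents by cosine similarity between sparse entity vectors. The entity linking and weighting, which are the core of CL-OSA, are not shown; all entity IDs, weights, and document names below are placeholders.

```python
# Minimal sketch: rank candidates by cosine similarity of entity vectors.
from math import sqrt

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse entity vectors."""
    dot = sum(w * v.get(e, 0.0) for e, w in u.items())
    nu = sqrt(sum(w * w for w in u.values()))
    nv = sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Entity vectors keyed by (placeholder) Wikidata entity IDs.
suspicious = {"Q101": 0.8, "Q202": 0.5, "Q303": 0.2}
candidates = {
    "source_ja_1": {"Q101": 0.7, "Q202": 0.6},  # shares entities with the suspicious document
    "source_en_2": {"Q909": 0.9},               # topically unrelated
}

ranked = sorted(candidates, key=lambda d: cosine(suspicious, candidates[d]), reverse=True)
print(ranked)  # candidate source documents ordered by entity-vector similarity
```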
We present the Webis-STEREO-21 dataset, a massive collection of scientific text reuse in open-access publications. It contains more than 91 million cases of reused text passages found in 4.2 million unique open-access publications. Featuring high coverage of scientific disciplines and varieties of reuse, as well as comprehensive metadata to contextualize each case, our dataset addresses the most salient shortcomings of previous datasets on scientific writing. Webis-STEREO-21 allows a wide range of research questions from different scientific backgrounds to be tackled, facilitating both qualitative and quantitative analysis of the phenomenon as well as, for the first time, grounding the base rate of text reuse in scientific publications.
In academia, plagiarism is certainly not an emerging concern, but it has reached a larger magnitude with the popularity of the Internet and easy access to global content sources, making human intervention insufficient. Nevertheless, thanks to computer-assisted plagiarism detection, plagiarism is far from being an unsolved problem; it is currently an active research area that falls within the fields of Information Retrieval (IR) and Natural Language Processing (NLP). Many software solutions have emerged to help with this task, and this paper gives an overview of plagiarism detection systems for use in Arabic, French, and English academic and educational settings. A comparison is carried out among eight systems in terms of their features, usability, and technical aspects, as well as their performance in detecting three levels of obfuscation from different sources: verbatim, paraphrase, and cross-language plagiarism. An examination of the technical forms of plagiarism is also conducted in the context of this study. In addition, a survey of plagiarism typologies and classifications proposed by different authors is provided.
Academic research is an exploratory activity aimed at solving problems that have never been resolved before. By this nature, each academic research work is required to perform a literature review to distinguish its novelties from what has already been addressed by prior work. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to automatically generate the "Related Work" section. Although this task was proposed over ten years ago, it was only recently cast as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation are not yet standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects, in order to give readers insight into the progress of state-of-the-art research and into how future studies can be conducted. We also survey related research fields that we suggest future work should consider integrating.
With the development of Internet technology, the phenomenon of information overload has become increasingly evident, and users must spend a great deal of time obtaining the information they need. Keyphrases that summarize document information are very helpful for users to quickly acquire and understand documents. For academic resources, most existing studies extract keyphrases from the title and abstract. We find that the title information contained in references also includes author-assigned keyphrases. Therefore, this paper uses reference information and applies two typical unsupervised extraction methods (TF*IDF and TextRank), two representative traditional supervised learning algorithms (Naïve Bayes and Conditional Random Fields), and a supervised deep learning model (BiLSTM-CRF) to analyze the specific effect of reference information on keyphrase extraction, aiming to improve the quality of keyphrase recognition by expanding the source text. The experimental results show that reference information can improve the precision, recall, and F1 of automatic keyphrase extraction to a certain extent. This demonstrates the usefulness of reference information for keyphrase extraction from academic papers and provides a new idea for subsequent research on automatic keyphrase extraction.
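As an illustration of the "expand the source text with reference titles" idea, here is a minimal sketch using a plain TF*IDF ranking, one of the unsupervised baselines named above. The documents, reference titles, and scikit-learn usage are illustrative and not the paper's exact setup.

```python
# Minimal sketch: rank candidate keyphrases of a title+abstract expanded with reference titles.
from sklearn.feature_extraction.text import TfidfVectorizer

background_corpus = [
    "graph neural networks for text classification",
    "a survey of keyphrase extraction from scholarly documents",
    "neural machine translation with attention",
]
title_abstract = "automatic keyphrase extraction from academic papers using reference information"
reference_titles = "keyphrase extraction with TextRank; supervised keyphrase extraction with CRF"

expanded = title_abstract + " " + reference_titles           # the expanded source text

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
vectorizer.fit(background_corpus + [expanded])               # IDF estimated over a small background corpus
scores = vectorizer.transform([expanded]).toarray()[0]
terms = vectorizer.get_feature_names_out()

top = sorted(zip(terms, scores), key=lambda t: t[1], reverse=True)[:5]
print(top)  # highest-scoring candidate keyphrases for the expanded document
```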
The design of complex engineering systems is a long and articulated process that depends heavily on the expertise and professional judgment of engineers. Consequently, the typical pitfalls of activities involving human factors often manifest themselves: lack of completeness or exhaustiveness of the analysis, inconsistencies across design choices or documents, and implicit subjectivity. A methodology is proposed to assist systems engineers in automatically generating system diagrams from unstructured natural language text. Natural Language Processing (NLP) techniques are used to extract entities and their relationships from textual resources available within an organization (e.g., specifications, manuals, technical reports, maintenance reports) and to convert them into Systems Modeling Language (SysML) diagrams, with a specific focus on structure and requirement diagrams. The aim is to provide users with a more standardized, comprehensive, and automated starting point, from which the diagrams can subsequently be refined and adapted to their needs. The proposed methodology is flexible and open-domain. It consists of six steps that leverage open-access tools and lead to the automatic generation of SysML diagrams without intermediate modeling requirements, relying only on the user's specification of a set of parameters. The applicability and benefits of the proposed methodology are shown through six case studies with textual sources as inputs, benchmarked against manually defined diagram elements.
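As an illustration of only the first step of such a pipeline, here is a minimal sketch that pulls candidate entities and subject-verb-object relations out of a requirement-like sentence with spaCy. Mapping the resulting triples onto SysML block and requirement diagrams (the later steps of the methodology) is not shown, and the example sentence is invented.

```python
# Minimal sketch: entity and relation extraction from one requirement sentence.
import spacy

nlp = spacy.load("en_core_web_sm")  # small general-purpose English model
doc = nlp("The cooling subsystem shall regulate the battery temperature.")

entities = [chunk.text for chunk in doc.noun_chunks]             # candidate system elements
relations = []
for token in doc:
    if token.pos_ == "VERB":
        subj = [w.text for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
        obj = [w.text for w in token.rights if w.dep_ in ("dobj", "obj")]
        if subj and obj:
            relations.append((subj[0], token.lemma_, obj[0]))     # (subject, relation, object) triple

print(entities)   # e.g. ['The cooling subsystem', 'the battery temperature']
print(relations)  # e.g. [('subsystem', 'regulate', 'temperature')]
```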
We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of retrieving relevant documents (and sentences) from Wikipedia and verifying whether the information in those documents supports or refutes the given claim. This task is essential and can be a building block for applications such as fake news detection and medical claim verification. In this paper, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing their performance on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis shows that sampling negative sentences is important for improving performance and reducing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
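To make the point about negative sampling concrete, here is a minimal sketch of a pairwise hinge loss for the sentence retrieval component: the model should score an evidence sentence above a few randomly sampled non-evidence sentences by a margin. It is not any surveyed system's code; the scoring values and margin are placeholders.

```python
# Minimal sketch: hinge loss with sampled negative sentences for sentence retrieval.
import random

def hinge_loss(pos_score: float, neg_scores: list, margin: float = 1.0) -> float:
    """Average margin violation between one positive and its sampled negatives."""
    return sum(max(0.0, margin - pos_score + n) for n in neg_scores) / len(neg_scores)

# Hypothetical model scores for candidate sentences from a retrieved Wikipedia page.
scores = {"evidence": 2.3, "s1": 0.4, "s2": 1.9, "s3": -0.2, "s4": 0.8}
negatives = random.sample([k for k in scores if k != "evidence"], k=2)  # sample a few negatives
loss = hinge_loss(scores["evidence"], [scores[n] for n in negatives])
print(f"sampled negatives: {negatives}, loss = {loss:.2f}")
```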
In scientific research, methods are an indispensable means of solving scientific problems and are themselves critical research objects. As science develops, many scientific methods are being proposed, modified, and used. Authors describe the details of a method in the abstract and body text, and the key entities in academic literature that reflect the names of methods are called method entities. Exploring the various method entities in a large volume of academic literature helps scholars understand existing methods, select appropriate methods for their research tasks, and propose new ones. Moreover, the evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery. Therefore, this paper offers a systematic review of methodological and empirical work, focusing on the extraction of method entities from full-text academic literature and on efforts to build knowledge services using these extracted method entities. Definitions of the key concepts involved in this review are given first. Based on these definitions, we systematically review the approaches and metrics used to extract and evaluate method entities, focusing on the pros and cons of each approach. We also survey how extracted method entities are used to build new applications. Finally, the limitations of existing work and potential next steps are discussed.
Automatic keyword extraction (AKE) has gained more importance with the increasing amount of digital textual data that modern computing systems process. It has various applications in information retrieval (IR) and natural language processing (NLP), including text summarisation, topic analysis and document indexing. This paper proposes a simple but effective post-processing-based universal approach to improve the performance of any AKE method, via an enhanced level of semantic-awareness supported by PoS-tagging. To demonstrate the performance of the proposed approach, we considered word types retrieved from a PoS-tagging step and two representative sources of semantic information -- specialised terms defined in one or more context-dependent thesauri, and named entities in Wikipedia. The above three steps can be simply added to the end of any AKE method as part of a post-processor, which re-evaluates all candidate keywords following some context-specific and semantic-aware criteria. For five state-of-the-art (SOTA) AKE methods, our experimental results with 17 selected datasets showed that the proposed approach improved their performances both consistently (up to 100% in terms of improved cases) and significantly (between 10.2% and 53.8%, with an average of 25.8%, in terms of F1-score and across all five methods), especially when all three enhancement steps are used. Our results have profound implications considering the ease of applying our proposed approach to any AKE method and of further extending it.
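As an illustration of the post-processing idea, here is a minimal sketch that re-scores candidate keywords produced by any AKE backend using their PoS pattern and membership in a small domain thesaurus. The boost values, thesaurus, and candidates are illustrative; the Wikipedia named-entity source used in the paper is omitted here.

```python
# Minimal sketch: semantic-aware re-evaluation of candidate keywords.
import nltk
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")  # one-time setup

thesaurus = {"keyword extraction", "document indexing"}            # hypothetical context-dependent thesaurus
candidates = {"keyword extraction": 0.41, "is proposed": 0.39,     # scores from any AKE backend
              "document indexing": 0.30, "the results": 0.25}

def rescore(phrase: str, score: float) -> float:
    tags = [t for _, t in nltk.pos_tag(nltk.word_tokenize(phrase))]
    noun_like = tags[-1].startswith("NN")             # keep phrases that end in a noun
    in_thesaurus = phrase.lower() in thesaurus
    return score * (1.0 if noun_like else 0.0) * (1.5 if in_thesaurus else 1.0)

reranked = sorted(((p, rescore(p, s)) for p, s in candidates.items()),
                  key=lambda x: x[1], reverse=True)
print(reranked)  # candidate keywords after the semantic-aware post-processing
```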
The output of scientific publications is growing exponentially. Consequently, keeping track of trends and changes is increasingly challenging. Understanding scientific documents is an important step for downstream tasks such as knowledge graph construction, text mining, and discipline classification. In this workshop, we aim at a better understanding of keyword and keyphrase extraction from the abstracts of scientific publications.
When medical researchers conduct a systematic review (SR), screening studies is the most time-consuming process: researchers read thousands of medical articles and manually label them as relevant or irrelevant. Screening prioritisation (i.e., document ranking) is an approach that assists researchers by providing a ranking of the documents in which relevant documents are ranked higher than irrelevant ones. Seed-driven document ranking (SDR) uses a known relevant document (i.e., a seed) as a query and generates such rankings. Previous SDR work tried to identify different term weights in the query document and used them in a retrieval model to compute ranking scores. Alternatively, we formulate the SDR task as finding documents similar to the query document and generate rankings based on similarity scores. We propose a document matching measure named Mirror Matching, which computes matching scores between medical abstract texts by incorporating common writing patterns such as background, method, results, and conclusion. We conduct experiments on the CLEF eHealth 2019 Task 2 TAR dataset, and the empirical results show that this simple approach achieves higher performance than traditional and neural retrieval models on average precision and precision-focused metrics.
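To illustrate the idea of section-aware matching, here is a minimal sketch that compares a seed abstract and a candidate abstract section by section (background, method, results, conclusion) and averages the per-section scores. The token-overlap measure and equal weighting are simplifications of my own, not the paper's Mirror Matching formula, and the abstracts are invented.

```python
# Minimal sketch: section-aware similarity between two structured abstracts.
def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two section texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def section_aware_score(seed: dict, candidate: dict) -> float:
    sections = ("background", "method", "results", "conclusion")
    return sum(overlap(seed.get(s, ""), candidate.get(s, "")) for s in sections) / len(sections)

seed = {"background": "screening for systematic reviews is time consuming",
        "method": "we rank documents by similarity to a seed study",
        "results": "ranking reduces the screening workload",
        "conclusion": "seed driven ranking assists reviewers"}
candidate = {"background": "systematic review screening is a time consuming process",
             "method": "documents are ranked against a known relevant study",
             "results": "the workload for screening is reduced",
             "conclusion": "document ranking helps reviewers"}

print(f"section-aware score = {section_aware_score(seed, candidate):.2f}")
```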
Most low-resource languages do not have the necessary resources to create even a substantial monolingual corpus. These languages may often be found in government proceedings but mainly in Portable Document Format (PDF) that contains legacy fonts. Extracting text from these documents to create a monolingual corpus is challenging due to legacy font usage and printer-friendly encoding, which are not optimized for text extraction. Therefore, we propose a simple, automatic, and novel idea that can scale for Tamil, Sinhala, and English languages, and many documents along with parallel corpora. Since Tamil and Sinhala are low-resource languages, we improved the performance of Tesseract by employing LSTM-based training on more than 20 legacy fonts to recognize printed characters in these languages. In particular, our model detects code-mixed text, numbers, and special characters from the printed document. It is shown that this approach can reduce the character-level error rate of Tesseract from 6.03 to 2.61 for Tamil (-3.42% relative change) and 7.61 to 4.74 for Sinhala (-2.87% relative change), as well as the word-level error rate from 39.68 to 20.61 for Tamil (-19.07% relative change) and 35.04 to 26.58 for Sinhala (-8.46% relative change) on the test set. Also, our newly created parallel corpus consists of 185.4k, 168.9k, and 181.04k sentences and 2.11M, 2.22M, and 2.33M words in Tamil, Sinhala, and English respectively. This study shows that fine-tuning Tesseract models on multiple new fonts helps to understand the texts and enhances the performance of the OCR. We have made the newly trained models and the source code for fine-tuning Tesseract freely available.
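As background on how character- and word-level error rates like those reported above are typically computed, here is a minimal sketch: edit distance between the OCR output and a ground-truth transcription, normalised by the reference length. The strings are placeholders, not samples from the paper's Tamil or Sinhala test set.

```python
# Minimal sketch: character error rate (CER) and word error rate (WER).
from nltk import edit_distance

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance over reference length."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

reference = "extracting text from legacy font documents is challenging"
hypothesis = "extracting test from legacy font documnets is challenging"
print(f"CER = {cer(reference, hypothesis):.3f}, WER = {wer(reference, hypothesis):.3f}")
```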
Finding an expert plays a crucial role in driving successful collaborations and in speeding up high-quality research development and innovation. However, the rapid growth of scientific publications and digital expertise data makes identifying the right experts a challenging problem. Existing approaches for finding experts on a given topic can be categorised into information retrieval techniques based on vector space models, document language models, and graph-based models. In this paper, we propose ExpFinder, a new ensemble model for expert finding, which integrates a novel N-gram vector space model, denoted as N-VSM, and a graph-based model, denoted as μCO-HITS, a proposed variant of the CO-HITS algorithm. The key of N-VSM is to exploit a recent inverse document frequency weighting method for N-gram words, and ExpFinder incorporates N-VSM into μCO-HITS to achieve expert finding. We comprehensively evaluate ExpFinder on four different datasets against six different expert finding models. The evaluation results show that ExpFinder is a highly effective model for expert finding, significantly outperforming all compared models by 19% to 160.2%.
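As a heavily simplified illustration of the vector-space half of such a model, here is a sketch that scores each expert for a topic by TF-IDF-weighted overlap between the topic's n-grams and the n-grams of the expert's documents. The graph-based reinforcement (μCO-HITS) and the paper's specific IDF weighting are not shown, and the corpus below is invented.

```python
# Minimal sketch: n-gram vector-space scoring of experts for a topic phrase.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

expert_docs = {
    "alice": "keyphrase extraction from scholarly documents using graph ranking",
    "bob": "image segmentation with convolutional networks",
    "carol": "expert finding with document language models and citation graphs",
}
topic = "expert finding in scholarly document collections"

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
doc_matrix = vectorizer.fit_transform(expert_docs.values())      # one row per expert's documents
topic_vec = vectorizer.transform([topic])

scores = cosine_similarity(topic_vec, doc_matrix)[0]
ranking = sorted(zip(expert_docs, scores), key=lambda x: x[1], reverse=True)
print(ranking)  # experts ordered by n-gram vector-space relevance to the topic
```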
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
Query by example is a well-known information retrieval task in which a document is chosen by the user as the search query and the goal is to retrieve relevant documents from a large collection. However, a document often covers multiple aspects of a topic. To address this scenario, we introduce the task of faceted query by example, in which users can also specify a finer-grained aspect in addition to the input query document. We focus on the application of this task in scientific literature search. We envision models able to retrieve scientific papers analogous to a query scientific paper along specifically chosen rhetorical structure elements as one solution to this problem. In this work, these rhetorical structure elements, which we refer to as facets, indicate the objectives, methods, or results of a scientific paper. We introduce and describe an expert-annotated test collection for evaluating models trained to perform this task. Our test collection consists of a diverse set of 50 query documents in English, drawn from computational linguistics and machine learning venues. We carefully followed the annotation guidelines used by TREC for depth-k pooling (k = 100 or 250), and the resulting data collection includes graded relevance scores with high annotation agreement. State-of-the-art models evaluated on our dataset show a significant gap to be addressed in further work. Our dataset can be accessed here: https://github.com/iesl/csfcube
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embeddings models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short in intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language at its core can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), which disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results and outperforms several more complex state-of-the-art systems.
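To illustrate the generic step that MSSA builds on -- annotating a word with a specific sense given its context -- here is a minimal sketch using NLTK's simplified Lesk algorithm and WordNet as a stand-in. MSSA itself uses a different disambiguation strategy and feeds the sense-annotated corpus into a word-embeddings model, which is not shown here.

```python
# Minimal sketch: disambiguate an ambiguous word in context and annotate it with a WordNet sense.
from nltk import word_tokenize
from nltk.wsd import lesk
# nltk.download("punkt"); nltk.download("wordnet")  # one-time setup

sentence = "He deposited the check at the bank before it closed."
tokens = word_tokenize(sentence)

sense = lesk(tokens, "bank")   # returns a WordNet Synset chosen from the context, or None
if sense is not None:
    print("bank ->", sense.name(), "|", sense.definition())  # e.g. a financial-institution sense
```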
The number of scientific publications continues to rise exponentially, especially in Computer Science (CS). However, current solutions to analyze those publications restrict access behind a paywall, offer no features for visual analysis, limit access to their data, only focus on niches or sub-fields, and/or are not flexible and modular enough to be transferred to other datasets. In this thesis, we conduct a scientometric analysis to uncover the implicit patterns hidden in CS metadata and to determine the state of CS research. Specifically, we investigate trends of the quantity, impact, and topics for authors, venues, document types (conferences vs. journals), and fields of study (compared to, e.g., medicine). To achieve this we introduce the CS-Insights system, an interactive web application to analyze CS publications with various dashboards, filters, and visualizations. The data underlying this system is the DBLP Discovery Dataset (D3), which contains metadata from 5 million CS publications. Both D3 and CS-Insights are open-access, and CS-Insights can be easily adapted to other datasets in the future. The most interesting findings of our scientometric analysis include that i) there has been a stark increase in publications, authors, and venues in the last two decades, ii) many authors only recently joined the field, iii) the most cited authors and venues focus on computer vision and pattern recognition, while the most productive prefer engineering-related topics, iv) the preference of researchers to publish in conferences over journals dwindles, v) on average, journal articles receive twice as many citations compared to conference papers, but the contrast is much smaller for the most cited conferences and journals, and vi) journals also get more citations in all other investigated fields of study, while only CS and engineering publish more in conferences than journals.