In the last decades, the amount of academic data has been increasing rapidly. Newcomers to a particular scientific field (e.g., IR, physics, NLP) often find it difficult to grasp the larger trends and to position the latest research in the context of prior scientific achievements and breakthroughs. Similarly, researchers in the history of science are interested in tools that allow them to analyze and visualize changes in a particular scientific domain. Temporal summarization and related methods should be useful for making sense of large amounts of scientific discourse data aggregated over time. We demonstrate a novel approach to analyzing collections of research papers published over long periods of time, providing a high-level overview of the important semantic changes that occurred over the temporal progression. Our approach is based on comparing word semantic representations over time and aims to support users in better understanding large domain-specific archives of scholarly publications. As an example dataset, we use the ACL Anthology Reference Corpus, which spans from 1979 to 2015 and contains 22,878 scholarly articles.
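The core step, comparing a word's semantic representation across two periods, can be sketched without explicitly aligning the two embedding spaces by using second-order similarity profiles. This is a minimal illustration on random vectors, not the paper's exact procedure; the anchor words and the simulated drift are assumptions of the sketch.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def profile(emb, i, anchor_ids):
    """Second-order representation: word i is described by its cosine
    similarities to anchor words. These profiles are invariant to rotations
    of the space, so embeddings trained on different periods need no
    explicit alignment before comparison."""
    return np.array([cosine(emb[i], emb[a]) for a in anchor_ids])

rng = np.random.default_rng(0)
vocab = ["parsing", "neural", "grammar", "corpus"]
emb_1979 = rng.normal(size=(4, 8))

rotation = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # random orthogonal map
emb_2015 = emb_1979 @ rotation        # same semantics, rotated space
emb_2015[1] += 1.0                    # "neural" drifts between the periods

anchor_ids = [0, 2, 3]                # words assumed semantically stable
drift = {
    w: 1.0 - cosine(profile(emb_1979, i, anchor_ids),
                    profile(emb_2015, i, anchor_ids))
    for i, w in enumerate(vocab)
}
```

Words whose usage is unchanged have near-zero drift even though the two spaces differ by a rotation, while the drifted word stands out.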
The scientific world is changing at a rapid pace, with new technologies being developed and new trends emerging with increasing frequency. This paper presents a framework for the scientometric analysis of scholarly publications, which is essential for monitoring research trends and identifying potential innovations. The framework adopts and combines various natural language processing techniques, such as word embeddings and topic modeling. Word embeddings are used to capture the semantic meanings of domain-specific words. We propose two novel scientific publication embeddings, PUB-G and PUB-W, which are capable of learning the general as well as domain-specific semantic meanings of words across various research fields. Topic modeling is then used to identify clusters of research topics within these larger research areas. We curated a publication dataset consisting of two conferences and two journals from 1995 to 2020, drawn from two research domains. Experimental results show that our PUB-G and PUB-W embeddings are superior to other baseline embeddings in terms of topic coherence.
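Topic coherence, the evaluation criterion mentioned above, can be computed in several ways; the abstract does not say which variant was used. A minimal sketch of the UMass coherence measure on a toy corpus:

```python
import math

def umass_coherence(topic_words, docs):
    """UMass coherence: sum, over ordered pairs of a topic's top words, of
    log((co-document frequency + 1) / document frequency of the earlier word).
    Higher values mean the topic's words co-occur more often."""
    doc_sets = [set(d.split()) for d in docs]

    def df(*words):
        return sum(1 for s in doc_sets if all(w in s for w in words))

    score = 0.0
    for i in range(1, len(topic_words)):
        for j in range(i):
            score += math.log((df(topic_words[i], topic_words[j]) + 1)
                              / df(topic_words[j]))
    return score

docs = [
    "graph embedding learns node representations",
    "graph embedding for citation networks",
    "protein folding prediction with deep learning",
]
coherent = umass_coherence(["graph", "embedding"], docs)
incoherent = umass_coherence(["graph", "protein"], docs)
```

A topic whose top words frequently co-occur in documents scores higher than one mixing unrelated terms.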
This study presents a novel approach to trend detection and visualization, more specifically, to modeling the change of a topic over time. Whereas current models for identifying and visualizing trends only convey the popularity of a single word based on stochastic counts of usage, the approach in this study illustrates both the popularity of a topic and the direction in which it is developing. Direction, in this case, is a distinct subtopic within the selected corpus. Such trends are modeled by representing the movement of topics using k-means clustering and cosine similarity to group the distances between clusters. In a converging scenario, it can be inferred that the topics as a whole are meshing (tokens across topics are being used interchangeably). Conversely, a diverging scenario suggests that each topic's respective tokens are no longer found in the same context (they are growing increasingly dissimilar). The methodology was tested on a set of articles from various media houses present in the 20 Newsgroups dataset.
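The clustering step can be sketched as follows. This is a toy illustration, assuming 2-D stand-ins for token vectors and two topics per snapshot; rising cosine similarity between the cluster centroids across snapshots signals a converging scenario:

```python
import numpy as np
from sklearn.cluster import KMeans

def intertopic_similarity(X, seed=0):
    """Cluster one time slice into two topics and return the cosine
    similarity between the two cluster centroids."""
    c = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(X).cluster_centers_
    u, v = c[0], c[1]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
# toy token vectors: two topics far apart at t1, drawn together at t2
t1 = np.vstack([rng.normal([5, 0], 0.3, (30, 2)),
                rng.normal([0, 5], 0.3, (30, 2))])
t2 = np.vstack([rng.normal([4, 1], 0.3, (30, 2)),
                rng.normal([1, 4], 0.3, (30, 2))])

sim_t1 = intertopic_similarity(t1)
sim_t2 = intertopic_similarity(t2)
converging = sim_t2 > sim_t1   # rising centroid similarity: topics meshing
```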
With the advent and popularity of big data mining and modern large-scale text analytics, automated text summarization has become prominent for extracting and retrieving important information from documents. This research investigates aspects of automatic text summarization from both single-document and multi-document perspectives. Summarization is the task of condensing voluminous text articles into short, summarized versions: the size of the text is reduced while key information and the meaning of the original document are preserved. This study presents the Latent Dirichlet Allocation (LDA) approach to topic modeling, applied for summarization purposes to medical science journal articles with topics related to genes and diseases. The pyLDAvis web-based interactive visualization tool was used to visualize the selected topics. The visualization provides an overall view of the main topics while allowing deeper meaning to be attributed to the prevalence of individual topics. This study presents a novel approach to summarizing single and multiple documents. The results show that ranking sentences in the processed documents by the probability of topic prevalence, using extractive summarization techniques, is superior to ranking them purely by their terms. The pyLDAvis visualization illustrates the flexibility of exploring the terms of the topics in the fitted LDA model. The topic modeling results show a prevalence within topics 1 and 2; this association suggests a similarity between the terms in topics 1 and 2 in this study. The efficacy of the LDA and extractive summarization methods was measured using Latent Semantic Analysis (LSA) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics to evaluate the reliability and validity of the models.
Extracting knowledge from unlabeled text with machine learning algorithms can be complex. Document categorization and information retrieval are two applications that may benefit from unsupervised learning, such as text clustering and topic modeling, including for exploratory data analysis. However, the unsupervised learning paradigm poses reproducibility issues: initialization can lead to variability, depending on the machine learning algorithm, and distortions can be misleading regarding cluster geometry. Among the causes, the presence of outliers and anomalies can be a determining factor. Although the initialization and outlier issues are relevant to text clustering and topic modeling, the authors did not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology, since similar procedures go by different names. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, factorization, and clustering algorithms that are directly or indirectly related to the reviewed works.
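The outlier problem described above is easy to reproduce: a single extreme point can capture one of the k-means centroids and distort the cluster geometry. A toy sketch (invented 2-D data standing in for vectorized documents):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# two well-separated groups of "documents" in a 2-D projected space
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(5.0, 0.5, (50, 2))])
X_out = np.vstack([X, [[50.0, 50.0]]])   # one extreme outlier document

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
km_out = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_out)

# without the outlier, the centroids sit on the two genuine groups;
# with it, one centroid is captured by the outlier and the two real
# groups collapse under the remaining centroid
spread = max(float(np.linalg.norm(c)) for c in km.cluster_centers_)
spread_out = max(float(np.linalg.norm(c)) for c in km_out.cluster_centers_)
```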
In the mining industry, many reports are generated in the project management process. These past documents are a knowledge resource for future success. However, it can be a tedious and challenging task to retrieve the necessary information if the documents are unorganized and unstructured. Document clustering is a powerful approach to this problem, and many methods have been introduced in past studies. Nevertheless, there is no silver bullet that performs best for any kind of document. Thus, exploratory studies are required to apply clustering methods to new datasets. In this study, we investigate multiple topic modeling (TM) methods. The objectives are to find an appropriate method for clustering mining project reports, using a dataset from the Geological Survey of Queensland, Department of Resources, Queensland Government, and to understand their contents in order to learn how they may be organized. Three TM methods, Latent Dirichlet Allocation (LDA), Non-negative Matrix Factorization (NMF), and Non-negative Tensor Factorization (NTF), are compared statistically and qualitatively. After the evaluation, we conclude that LDA performs best for the dataset; however, the possibility remains that the other methods could be adopted with some improvements.
Research and development in hypersonics has progressed significantly in recent years, with a growing number of military and commercial applications. Public and private organizations in several countries have been investing in hypersonics with the aim of outpacing their competitors and securing or improving strategic advantage and deterrence. For these organizations, it is essential to be able to identify emerging technologies in a timely and reliable manner. Recent advances in information technology have made it possible to analyze large volumes of data, extract hidden patterns, and provide decision-makers with new insights. In this study, we focus on scientific publications about hypersonics in the period 2000-2020 and employ natural language processing and machine learning to characterize the research landscape by identifying 12 main latent research topics and analyzing their temporal evolution. Our publication similarity analysis reveals patterns suggestive of cycles across the two decades of research. The study provides a comprehensive analysis of the research field, and the fact that the research topics are extracted algorithmically removes subjectivity from the exercise and allows consistent comparisons across topics and time intervals.
Chronic pain is recognized as a major health problem, with impacts not only at the economic but also at the social and individual levels. Being a private and subjective experience, it is impossible to experience, describe, and interpret chronic pain externally and impartially as a purely noxious stimulus that would directly point to a causal agent and facilitate its relief, in contrast to acute pain, the assessment of which is usually straightforward. Verbal communication is therefore key to conveying to health professionals relevant information that is otherwise inaccessible to external entities, namely the intrinsic qualities of the pain experience and of the patient. We propose and discuss a topic modeling approach to identify patterns in the verbal descriptions of chronic pain and to use these patterns to quantify and qualify the experience of pain. Our approach allows novel insights into the experience of chronic pain to be extracted from the obtained topic models and latent spaces. We believe our results are clinically relevant to the assessment and management of chronic pain.
Future work sentences (FWS) are the particular sentences in academic papers that contain the author's description of their proposed follow-up research direction. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Existing work on the automatic identification of future work sentences is relatively scarce; it cannot accurately identify FWS in academic papers and thus cannot support data mining on a large scale. Furthermore, there are many aspects to the content of future work, and subdividing that content is conducive to the analysis of specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study, and FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli Bayesian model has the best performance in the automatic recognition task, with the Macro F1 reaching 90.73%, and the SCIBERT model has the best performance in the automatic classification task, with the weighted average F1 reaching 72.63%. Finally, we extract keywords from FWS to gain a deep understanding of the key content they describe, and we also demonstrate that the content described in FWS is reflected in subsequent research work by measuring the similarity between future work sentences and the abstracts.
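A minimal sketch of the recognition step with a Bernoulli naive Bayes classifier, on invented sentences (the actual corpus, features, and preprocessing in the paper differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

sentences = [
    "In future work we plan to extend the model to other languages",
    "We will explore larger datasets in our future research",
    "Future work includes improving the annotation scheme",
    "We leave multilingual evaluation to future work",
    "The model achieves an F1 of 85 on the test set",
    "Table 2 reports the evaluation results",
    "Our corpus contains annotated sentences from conference papers",
    "The baseline uses a bag of words representation",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]    # 1 = future work sentence

vec = CountVectorizer(binary=True)   # Bernoulli NB models word presence/absence
X = vec.fit_transform(sentences)
clf = BernoulliNB().fit(X, labels)

is_fws = clf.predict(vec.transform(["We plan to extend this in future work"]))[0]
```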
In the past few decades, there has been an explosion in the amount of available data produced from various sources with different topics. The availability of this enormous data necessitates us to adopt effective computational tools to explore the data. This leads to an intense growing interest in the research community to develop computational methods focused on processing this text data. A line of study focused on condensing the text so that we are able to get a higher level of understanding in a shorter time. The two important tasks to do this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words from a text. This makes us familiar with the general topic of a text. In text summarization, we are interested in producing a short-length text which includes important information about the document. The TextRank algorithm, an unsupervised learning method that extends PageRank (the algorithm underlying the Google search engine's page ranking), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and declare them as the result. However, this algorithm neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework which can be used individually or as a part of generating the summary to overcome coverage problems.
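The ranking idea can be sketched as follows; cosine similarity over bag-of-words vectors stands in here for the semantic (embedding-based) similarity the work actually incorporates:

```python
import numpy as np
from collections import Counter

def textrank_scores(sentences, damping=0.85, iters=100):
    """Rank sentences by PageRank over a sentence-similarity graph.
    Edge weights are cosine similarities of bag-of-words vectors."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    vecs = np.array([[Counter(s.lower().split())[w] for w in vocab]
                     for s in sentences], dtype=float)
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)
    trans = sim / sim.sum(axis=1, keepdims=True)   # row-stochastic transitions
    n = len(sentences)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                         # PageRank power iteration
        scores = (1 - damping) / n + damping * (trans.T @ scores)
    return scores

sents = [
    "the cat sat on the mat",
    "the cat lay on the mat",
    "stocks fell sharply on tuesday",
]
scores = textrank_scores(sents)
summary_sentence = sents[int(np.argmax(scores))]
```

The two mutually similar sentences reinforce each other's scores, so the off-topic sentence is ranked lowest, which is the behavior an extractive summarizer relies on.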
Natural Language Understanding has seen an increasing number of publications in the last few years, especially after robust word embedding models became prominent, when they proved themselves able to capture and represent semantic relationships from massive amounts of data. Nevertheless, traditional models often fall short on intrinsic issues of linguistics, such as polysemy and homonymy. Any expert system that makes use of natural language at its core can be affected by a weak semantic representation of text, resulting in inaccurate outcomes based on poor decisions. To mitigate such issues, we propose a novel approach called Most Suitable Sense Annotation (MSSA), that disambiguates and annotates each word by its specific sense, considering the semantic effects of its context. Our approach brings three main contributions to the semantic representation scenario: (i) an unsupervised technique that disambiguates and annotates words by their senses, (ii) a multi-sense embeddings model that can be extended to any traditional word embeddings algorithm, and (iii) a recurrent methodology that allows our models to be re-used and their representations refined. We test our approach on six different benchmarks for the word similarity task, showing that our approach can produce state-of-the-art results and outperforms several more complex state-of-the-art systems.
Slowly emerging topic detection is a task situated between event detection, where we aggregate the behaviors of different words over short periods of time, and language evolution, where we monitor their long-term evolution. In this work, we tackle the problem of early detection of slowly emerging new topics. To this end, we gather evidence of weak signals at the word level. We propose to monitor the behavior of word representations in an embedding space and use one of its geometric properties to characterize the emergence of topics. As such a task is usually difficult to evaluate, we propose a framework for quantitative evaluation. We show positive results that outperform the state of the art on two public datasets of press articles and scientific articles.
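The abstract does not specify which geometric property is monitored. As one plausible stand-in, this sketch tracks the slope of a word vector's norm across successive time slices (the choice of norm, and the toy trajectories, are assumptions of the sketch, not the paper's definition):

```python
import numpy as np

def emergence_score(trajectory):
    """Slope of a word vector's norm across successive time slices; a
    steadily growing norm is treated here as a weak signal that the word
    is gaining prominence in the embedding space."""
    norms = np.array([np.linalg.norm(v) for v in trajectory])
    t = np.arange(len(norms))
    return float(np.polyfit(t, norms, 1)[0])   # slope of a linear fit

base = np.ones(8)
stable = [base.copy() for t in range(6)]             # norm stays constant
emerging = [base * (1 + 0.5 * t) for t in range(6)]  # norm grows over time

score_stable = emergence_score(stable)
score_emerging = emergence_score(emerging)
```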
In scientific research, the method is an indispensable means to solve scientific problems and a critical research object. With the development of science, many scientific methods are being proposed, modified, and used. Authors describe the details of a method in the abstract and body text, and the key entities in academic literature that reflect the name of a method are called method entities. Exploring the various method entities in a large volume of academic literature helps scholars understand existing methods, select appropriate methods for research tasks, and propose new methods. Moreover, the evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery. Therefore, this article offers a systematic review of methodological and empirical works, focusing on the extraction of method entities from full-text academic literature and on efforts to build knowledge services using these extracted method entities. Definitions of the key concepts involved in this review are proposed first. Based on these definitions, we systematically review the approaches and indicators used to extract and evaluate method entities, with a strong focus on the pros and cons of each approach. We also survey how the extracted method entities are used to build new applications. Finally, the limitations of existing works and potential next steps are discussed.
Automated event detection from news corpora is a crucial task towards mining fast-evolving structured knowledge. As real-world events have different granularities, from the top-level themes to key events and then to event mentions corresponding to concrete actions, there are generally two lines of research: (1) theme detection, which identifies from a news corpus major themes (e.g., "2019 Hong Kong Protests" versus "2020 U.S. Presidential Election") that have very distinct semantics; and (2) action extraction, which extracts action-level information from a single document (e.g., "the police hit the left arm of the protester") that cannot give a complete view of the event. In this paper, we propose a new task, key event detection at the intermediate level, which aims to detect from a news corpus key events (e.g., "HK Airport Protest on Aug. 12-14"), each happening at a particular time/location and focusing on the same topic. This task can bridge event understanding and structuring, and it is inherently challenging because of the thematic and temporal closeness of key events and the scarcity of labeled data due to the fast-evolving nature of news articles. To address these challenges, we develop an unsupervised key event detection framework, EvMine, that (1) extracts temporally frequent peak phrases using a novel ttf-itf score, (2) merges peak phrases into event-indicative feature sets by detecting communities from our designed peak-phrase graph, which captures document co-occurrence, semantic similarity, and temporal closeness signals, and (3) iteratively retrieves documents related to each key event by training a classifier with pseudo labels automatically generated from the event-indicative feature sets, and refines the detected key events using the retrieved documents. Extensive experiments and case studies show that EvMine outperforms all the baseline methods and its ablations on two real-world news corpora.
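EvMine's ttf-itf score is defined in the paper itself; the sketch below is only a TF-IDF-style analogy over time windows, with a hypothetical formula, to convey the intuition of phrases that peak in one window:

```python
import math
from collections import Counter

def ttf_itf_sketch(docs_by_window):
    """Hypothetical time-aware TF-IDF analogue: a term's frequency within a
    time window, discounted by the number of windows it appears in, so that
    terms peaking in a single window score highest. EvMine's actual ttf-itf
    definition differs; see the paper."""
    n_windows = len(docs_by_window)
    window_counts = [Counter(w for d in docs for w in d.split())
                     for docs in docs_by_window]
    windows_containing = Counter()
    for wc in window_counts:
        for term in wc:
            windows_containing[term] += 1
    scores = []
    for wc in window_counts:
        total = sum(wc.values())
        scores.append({t: (c / total) * math.log(n_windows / windows_containing[t])
                       for t, c in wc.items()})
    return scores

windows = [
    ["protest at the airport", "the airport closed"],   # window 1
    ["the council meeting", "the budget meeting"],      # window 2
]
scores = ttf_itf_sketch(windows)
peak_phrase = max(scores[0], key=scores[0].get)
```

Terms appearing in every window (here, "the") score zero, while a term frequent in only one window surfaces as that window's peak phrase.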
Most research studying social determinants of health (SDoH) has focused on physician notes or structured elements of the electronic medical record (EMR). We hypothesize that clinical notes from social workers, whose role is to ameliorate social and economic factors, might provide a richer source of data on SDoH. We sought to perform topic modeling to identify robust topics of discussion within a large cohort of social work notes. We retrieved a diverse, deidentified corpus of 0.95 million clinical social work notes from 181,644 patients at the University of California, San Francisco. We used word frequency analysis and Latent Dirichlet Allocation (LDA) topic modeling analysis to characterize this corpus and identify potential topics of discussion. Word frequency analysis identified both medical and non-medical terms associated with specific ICD10 chapters. The LDA topic modeling analysis extracted 11 topics related to social determinants of health risk factors including financial status, abuse history, social support, risk of death, and mental health. In addition, the topic modeling approach captured the variation between different types of social work notes and across patients with different types of diseases or conditions. We demonstrated that social work notes contain rich, unique, and otherwise unobtainable information on an individual's SDoH.
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embeddings models that can be extended to any natural language task. We intend to assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embeddings algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
Selecting the number of topics in LDA models is considered to be a difficult task, for which alternative approaches have been proposed. The performance of the recently developed singular Bayesian information criterion (sBIC) is evaluated and compared to the performance of alternative model selection criteria. The sBIC is a generalization of the standard BIC that can be applied to singular statistical models. The comparison is based on Monte Carlo simulations and carried out for several alternative settings, varying with respect to the number of topics, the number of documents and the size of documents in the corpora. Performance is measured using different criteria which take into account the correct number of topics, but also whether the relevant topics from the DGPs are identified. Practical recommendations for LDA model selection in applications are derived.
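The sBIC computation itself is involved; as a reference point, a standard-BIC-style sketch on top of scikit-learn's variational likelihood bound looks like this (the parameter count is a rough convention and an assumption of this sketch, not the criterion evaluated in the study):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "topic model selection criterion",
    "bayesian information criterion for topic models",
    "monte carlo simulation of document corpora",
    "simulation studies vary corpus size",
    "word counts per document and topic",
    "documents drawn from a generating process",
]
X = CountVectorizer().fit_transform(docs)
n_docs, n_terms = X.shape

def approx_bic(k):
    """Standard-BIC sketch: -2 * (variational bound on the log-likelihood)
    plus a penalty counting only the k topic-word distributions."""
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    loglik = lda.score(X)             # lower bound on log p(X)
    n_params = k * (n_terms - 1)      # free parameters per topic distribution
    return float(-2 * loglik + n_params * np.log(n_docs))

bics = {k: approx_bic(k) for k in (2, 3, 4)}
best_k = min(bics, key=bics.get)      # smaller BIC preferred
```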
It is well known that the translation of songs and poems not only breaks the rhythm and rhyming patterns but also results in the loss of semantic information. The Bhagavad Gita is an ancient Hindu philosophical text, originally written in Sanskrit, that features a conversation between Lord Krishna and Arjuna prior to the Mahabharata war. The Bhagavad Gita is also one of the key sacred texts in Hinduism and is known as the forefront of the Vedic corpus of Hinduism. In the last two centuries, Western scholars have taken much interest in Hindu philosophy, and hence the Bhagavad Gita has been translated into a number of languages. However, not much work has validated the quality of the English translations. Recent advances in language models powered by deep learning have enabled not only translation but also a better understanding of language, along with semantic and sentiment analysis. Our work is motivated by these recent advances in language models powered by deep learning methods. In this paper, we compare selected translations of the Bhagavad Gita (mostly from Sanskrit to English) using semantic and sentiment analysis. We use a hand-labeled sentiment dataset for tuning a state-of-the-art deep-learning-based language model known as Bidirectional Encoder Representations from Transformers (BERT). We use novel sentence embedding models to provide semantic analysis of selected chapters and verses across the translations. Finally, we use the aforementioned models for sentiment and semantic analysis and provide visualizations of the results. Our results show that although the style and vocabulary vary widely across the respective Bhagavad Gita translations, the sentiment analysis and semantic similarity show that the messages conveyed are mostly similar across the translations.
Beyond bibliometrics, there is interest in characterizing the evolution of the number of ideas in scientific papers. A common approach to investigating this question is to analyze the titles of publications to detect vocabulary changes over time. Using the notion that phrases, or more specifically keyphrases, represent concepts, lexical diversity metrics are applied to phrasified versions of the titles. Changes in lexical diversity are thereby treated as indicators of new research, and possibly even of expanding research. Optimizing keyphrase detection is therefore an important aspect of this process. We propose to use the output of multiple phrase detection models, rather than just one, to produce a more comprehensive set of keyphrases from the source corpora. Another potential advantage of this approach is that the unions and differences of these keyphrase sets may provide automated techniques for identifying and omitting non-specific phrases. We compare the performance of several phrase detection models, analyze the phrase sets each produces, and compute the lexical diversity of corpora variants incorporating the keyphrases of each model, using four common lexical diversity metrics.
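Two of the ingredients, set operations over keyphrase sets from multiple detectors and lexical diversity metrics, can be sketched directly; the phrase sets below are invented, and the two metrics shown are common choices rather than necessarily the four used in the study:

```python
import math

def type_token_ratio(tokens):
    """Share of distinct items: the simplest lexical diversity metric."""
    return len(set(tokens)) / len(tokens)

def guiraud_index(tokens):
    """Root TTR (Guiraud), which is less sensitive to sample length."""
    return len(set(tokens)) / math.sqrt(len(tokens))

# hypothetical outputs of two phrase detection models
model_a = {"neural network", "topic model", "word embedding"}
model_b = {"topic model", "word embedding", "language model"}
combined = model_a | model_b     # union: a more comprehensive keyphrase set
consensus = model_a & model_b    # agreement between detectors
disputed = model_a ^ model_b     # candidates to inspect as non-specific

# keyphrase streams drawn from titles of two periods
phrases_1995 = ["topic model", "topic model", "neural network"]
phrases_2020 = ["topic model", "word embedding", "language model"]
ttr_1995 = type_token_ratio(phrases_1995)
ttr_2020 = type_token_ratio(phrases_2020)
```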
Multi-document summarization (MDS) is an effective tool for information aggregation that generates an informative and concise summary from a cluster of topic-related documents. Our survey is the first to provide a systematic overview of the recent deep-learning-based MDS models. We propose a novel taxonomy summarizing the design strategies of neural networks and conduct a comprehensive overview of the state of the art. We highlight the differences between various objective functions that are rarely discussed in the existing literature. Finally, we propose several directions pertaining to this new and exciting field.