We study the problem of weakly supervised text classification, which aims to classify text documents into a set of predefined classes given only the surface names of the classes, without any annotated training documents. Most existing methods leverage only the textual information in each document. However, in many domains, documents are accompanied by various types of metadata (e.g., the authors, venue, and year of a research paper). Besides the textual content, such metadata and their combinations may serve as strong category indicators. In this paper, we explore the potential of using metadata to help weakly supervised text classification. Specifically, we model the relationships between documents and metadata via a heterogeneous information network. To effectively capture higher-order structures in the network, we use motifs to describe metadata combinations. We propose a novel framework named MotifClass, which (1) selects category-indicative motif instances, (2) retrieves and generates pseudo-labeled training samples based on category names and indicative motif instances, and (3) trains a text classifier using the pseudo training data. Extensive experiments on real-world datasets demonstrate the superior performance of MotifClass over existing weakly supervised text classification approaches. Further analysis shows the benefit of considering higher-order metadata information in our framework.
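As a rough illustration of step (1), the sketch below ranks motif instances by cosine similarity between their embeddings and class-name embeddings; it assumes the embeddings are already available (e.g., from a network or text encoder), and all names are illustrative rather than the authors' code.

```python
import numpy as np

def select_indicative_motif_instances(motif_emb, class_emb, top_k=5):
    """Rank motif instances (e.g., author-venue combinations) by how
    strongly they indicate each class, via cosine similarity between
    motif-instance embeddings and class-name embeddings.

    motif_emb: dict motif-instance id -> d-dim vector
    class_emb: dict class name -> d-dim vector
    Returns: dict class name -> top_k (instance, score) pairs.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    indicative = {}
    for c, cv in class_emb.items():
        scored = [(m, cos(mv, cv)) for m, mv in motif_emb.items()]
        # instances most similar to the class name act as a proxy for
        # "category-indicative" under class-name-only weak supervision
        indicative[c] = sorted(scored, key=lambda x: -x[1])[:top_k]
    return indicative
```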
Document classification, which aims to assign a topic label to each document, plays a fundamental role in various applications. Despite the success of existing studies on conventional supervised document classification, they are less concerned with two real problems: (1) the presence of metadata: in many domains, text is accompanied by various additional information such as authors and tags. Such metadata serve as compelling topic indicators and should be leveraged in the classification framework; (2) label scarcity: labeled training samples are expensive to obtain in some cases, where classification must be performed using only a small set of annotated data. In recognition of these two challenges, we propose MetaCat, a minimally supervised framework for classifying text with metadata. Specifically, we develop a generative process describing the relationships among words, documents, labels, and metadata. Guided by the generative model, we embed text and metadata into the same semantic space to encode heterogeneous signals. Then, based on the same generative process, we synthesize training samples to address the bottleneck of label scarcity. We conduct a thorough evaluation on a wide range of datasets. Experimental results prove the effectiveness of MetaCat over many competitive baselines.
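The synthesis step can be pictured with a minimal numpy sketch: assuming labels and words already live in the same embedding space, pseudo documents are drawn from each label's nearest vocabulary neighbors. MetaCat's actual generative model is richer than this, and every name below is hypothetical.

```python
import numpy as np

def synthesize_pseudo_docs(label_vec, word_vecs, vocab,
                           n_docs=10, doc_len=50, top_n=100, seed=0):
    """Draw bag-of-words pseudo documents for one label: sample words
    from the label's top_n nearest vocabulary neighbors, with weights
    increasing in cosine similarity.

    label_vec: (d,) label embedding; word_vecs: (V, d); vocab: V words.
    """
    rng = np.random.default_rng(seed)
    w = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    sims = w @ (label_vec / np.linalg.norm(label_vec))
    top = np.argsort(-sims)[:top_n]          # label's nearest words
    probs = np.exp(sims[top])                # similarity-based weights
    probs /= probs.sum()
    return [[vocab[i] for i in rng.choice(top, size=doc_len, p=probs)]
            for _ in range(n_docs)]
```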
Multi-label text classification refers to the problem of assigning each given document its most relevant labels from a label set. Commonly, the metadata of a given document and the hierarchy of the labels are available in real-world applications. However, most existing studies focus on modeling the text information only, with a few attempts to utilize either the metadata or the hierarchy signals, but not both of them. In this paper, we bridge the gap by formalizing the problem of metadata-aware text classification in a large label hierarchy (e.g., with tens of thousands of labels). To address this problem, we present the MATCH solution, an end-to-end framework that leverages both metadata and hierarchy. To incorporate metadata, we pre-train the embeddings of text and metadata in the same space and also leverage fully-connected attention to capture the interrelations between them. To leverage the label hierarchy, we propose different ways to regularize the parameters and output probability of each child label by its parents. Extensive experiments on two massive text datasets with large-scale label hierarchies demonstrate the effectiveness of MATCH over state-of-the-art deep learning baselines.
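One plausible reading of the parameter-regularization idea, sketched below in numpy (the exact form used in MATCH may differ): penalize the squared distance between each child label's output-layer weights and its parent's, so that nearby labels in the hierarchy learn similar decision boundaries.

```python
import numpy as np

def hierarchy_regularizer(label_weights, parent_of, lam=1e-3):
    """Sum of squared parameter distances between child and parent
    labels, scaled by lam; added to the classification loss, it pulls
    each child's output unit toward its parent's.

    label_weights: dict label -> weight vector of its output unit
    parent_of: dict child label -> parent label
    """
    reg = 0.0
    for child, parent in parent_of.items():
        diff = label_weights[child] - label_weights[parent]
        reg += float(diff @ diff)
    return lam * reg
```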
GitHub has become an important platform for code sharing and scientific exchange. With the massive number of repositories available, there is a pressing need for topic-based search. Even though the topic label functionality has been introduced, the majority of GitHub repositories do not have any labels, impeding search and topic-based analysis. This work frames the automatic repository classification problem as keyword-driven hierarchical classification. Specifically, users only need to provide a label hierarchy with keywords to serve as the supervision. This setting is flexible, adaptive to the users' needs, accounts for the different granularity of topic labels, and requires minimal human effort. We identify three key challenges of this problem, namely (1) the presence of multi-modal signals; (2) supervision scarcity and bias; and (3) supervision format mismatch. In recognition of these challenges, we propose the HiGitClass framework, comprising three modules: heterogeneous information network embedding; keyword enrichment; and topic modeling with pseudo document generation. Experimental results on two GitHub repository collections confirm that HiGitClass is superior to existing weakly-supervised and dataless hierarchical classification methods, especially in its ability to integrate both structured and unstructured data for repository classification.
Instead of mining coherent topics from a given text corpus in a completely unsupervised manner, seed-guided topic discovery methods leverage user-provided seed words to extract distinctive and coherent topics so that the mined topics can better cater to the user's interest. To model the semantic correlation between words and seeds for discovering topic-indicative terms, existing seed-guided approaches utilize different types of context signals, such as document-level word co-occurrences, sliding window-based local contexts, and generic linguistic knowledge brought by pre-trained language models. In this work, we analyze and show empirically that each type of context information has its value and limitation in modeling word semantics under seed guidance, but combining three types of contexts (i.e., word embeddings learned from local contexts, pre-trained language model representations obtained from general-domain training, and topic-indicative sentences retrieved based on seed information) allows them to complement each other for discovering quality topics. We propose an iterative framework, SeedTopicMine, which jointly learns from the three types of contexts and gradually fuses their context signals via an ensemble ranking process. Under various sets of seeds and on multiple datasets, SeedTopicMine consistently yields more coherent and accurate topics than existing seed-guided topic discovery approaches.
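A simple way to picture the ensemble ranking step is reciprocal rank fusion over the per-context rankings of candidate terms; this is an illustrative stand-in, not necessarily SeedTopicMine's exact fusion scheme.

```python
def fuse_rankings(rankings, k=60):
    """Reciprocal rank fusion: each context contributes 1/(k + rank)
    per term, so terms ranked high by several contexts rise to the top.

    rankings: list of term orderings, one per context signal (e.g.,
      local-context embeddings, PLM representations, and terms from
      retrieved topic-indicative sentences).
    """
    scores = {}
    for ranking in rankings:
        for rank, term in enumerate(ranking):
            scores[term] = scores.get(term, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```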
Identifying outlier documents, whose content differs from the majority of the documents in a corpus, plays an important role in managing large text collections. However, without explicit information about the inlier (or target) distribution, existing unsupervised outlier detectors are likely to produce unreliable results depending on the density or diversity of the outliers in the corpus. To address this challenge, we introduce a new task, referred to as out-of-category detection, which aims to distinguish documents according to their semantic relevance to the inlier (or target) categories by using the category names as weak supervision. In practice, this task is broadly applicable in that it can flexibly designate the scope of target categories according to users' interests while requiring only the target category names as minimal guidance. In this paper, we present an out-of-category detection framework that effectively measures how confidently each document belongs to one of the target categories based on its category-specific relevance score. Our framework adopts a two-step approach: (i) it first generates pseudo-category labels for all unlabeled documents by exploiting the word-document similarity encoded in a text embedding space, and then (ii) it leverages the pseudo labels to compute a confidence score for each document from its target category prediction. Experiments on real-world datasets demonstrate that our framework achieves the best detection performance among all baseline methods across various scenarios specifying different target categories.
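The two-step approach compresses into a few lines of numpy as below; note that the actual framework trains a classifier on the pseudo labels in step (ii), whereas here, for brevity, confidence is read directly off softmaxed document-category similarities. Embeddings are assumed given.

```python
import numpy as np

def out_of_category_scores(doc_emb, cat_emb, tau=0.1):
    """(i) Pseudo-label each document with its most similar target
    category; (ii) treat the softmaxed similarity of that prediction
    as a confidence score. Low confidence flags a likely
    out-of-category document.

    doc_emb: (n_docs, d); cat_emb: (n_cats, d); tau: softmax temperature.
    """
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    c = cat_emb / np.linalg.norm(cat_emb, axis=1, keepdims=True)
    logits = (d @ c.T) / tau                 # scaled cosine similarities
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs.max(axis=1)
```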
Weakly-supervised text classification aims to train a classifier using only class descriptions and unlabeled data. Recent research shows that keyword-driven methods can achieve state-of-the-art performance on various tasks. However, these methods not only rely on carefully-crafted class descriptions to obtain class-specific keywords but also require a substantial amount of unlabeled data and take a long time to train. This paper proposes FastClass, an efficient weakly-supervised classification approach. It uses dense text representations to retrieve class-relevant documents from an external unlabeled corpus and selects an optimal subset to train a classifier. Compared to keyword-driven methods, our approach is less reliant on initial class descriptions as it no longer needs to expand each class description into a set of class-specific keywords. Experiments on a wide range of classification tasks show that the proposed approach frequently outperforms keyword-driven models in terms of classification accuracy and often enjoys orders-of-magnitude faster training speed.
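The retrieval step reduces to nearest-neighbor search in a shared embedding space, as in the sketch below; the optimal-subset selection that FastClass then performs is omitted, and the encoder producing the embeddings is assumed.

```python
import numpy as np

def retrieve_class_documents(class_desc_emb, corpus_emb, top_k=100):
    """For each class description, return the indices of the top_k most
    cosine-similar documents in the external unlabeled corpus; these
    serve as candidate pseudo-training data.

    class_desc_emb: (n_classes, d); corpus_emb: (n_docs, d).
    """
    q = class_desc_emb / np.linalg.norm(class_desc_emb, axis=1, keepdims=True)
    d = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)
    return np.argsort(-(q @ d.T), axis=1)[:, :top_k]
```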
Academic research is an exploratory activity that solves problems never resolved before. By this nature, each academic research work is required to perform a literature review to distinguish its novelties from what has already been addressed by prior works. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to generate the "Related Work" section automatically. Although this task was proposed over ten years ago, only recently has it been treated as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation have not been standardized. In this survey, we conduct a meta-study comparing the existing literature on related work generation from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects, so as to provide readers insight into the progress of state-of-the-art studies as well as how future research can be conducted. We also survey relevant research fields that we suggest future work consider integrating.
Automated event detection from news corpora is a crucial task towards mining fast-evolving structured knowledge. As real-world events have different granularities, from the top-level themes to key events and then to event mentions corresponding to concrete actions, there are generally two lines of research: (1) theme detection, which identifies from a news corpus the major themes (e.g., "2019 Hong Kong Protests" vs. "2020 U.S. Presidential Election") that have very distinct semantics; and (2) action extraction, which extracts from a single document mention-level actions (e.g., "the police hit the left arm of the protester") that are too fine-grained for a full understanding of the event. In this paper, we propose a new task, key event detection at the intermediate level, which aims to detect from a news corpus key events (e.g., "HK Airport Protest on Aug. 12-14"), each happening at a particular time/location and focusing on the same topic. This task can bridge event understanding and structuring, and it is inherently challenging because of the thematic and temporal closeness of key events and the scarcity of labeled data caused by the fast-evolving nature of news articles. To address these challenges, we develop an unsupervised key event detection framework, EvMine, that (1) extracts temporally frequent peak phrases using a novel ttf-itf score, (2) merges peak phrases into event-indicative feature sets by detecting communities from our designed peak phrase graph, which captures document co-occurrence, semantic similarity, and temporal closeness signals, and (3) iteratively retrieves documents related to each key event by training a classifier with pseudo labels automatically generated from the event-indicative feature sets, refining the detected key events using the retrieved documents. Extensive experiments and case studies show that EvMine outperforms all the baseline methods and its ablations on two real-world news corpora.
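The abstract does not spell out the ttf-itf score, but its name suggests a tf-idf analogue over time: a phrase peaks on a day when it is frequent that day yet rare across days. The sketch below encodes that reading and is illustrative only, not the paper's exact formula.

```python
import math
from collections import Counter

def ttf_itf(phrase_counts_by_day):
    """tf-idf-style reading of 'temporally frequent peak phrases':
    score(phrase, day) = freq(phrase, day) * log(#days / #days-with-phrase).

    phrase_counts_by_day: dict day -> Counter of phrase frequencies.
    Returns dict (phrase, day) -> score.
    """
    n_days = len(phrase_counts_by_day)
    days_with = Counter()
    for counts in phrase_counts_by_day.values():
        days_with.update(counts.keys())      # +1 per day the phrase occurs
    return {(phrase, day): tf * math.log(n_days / days_with[phrase])
            for day, counts in phrase_counts_by_day.items()
            for phrase, tf in counts.items()}
```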
Healthcare providers usually record detailed notes of the clinical care delivered to each patient for clinical, research, and billing purposes. Due to the unstructured nature of these narratives, providers employ dedicated staff to assign diagnostic codes to patients' diagnoses using the International Classification of Diseases (ICD) coding system. This manual process is not only time-consuming but also costly and error-prone. Prior work has demonstrated the potential utility of machine learning (ML) methodology in automating this process, but it relies on large quantities of manually labeled data to train the models. Additionally, diagnostic coding systems evolve over time, which makes traditional supervised learning strategies unable to generalize beyond local applications. In this work, we introduce a general weakly-supervised text classification framework that learns from class-label descriptions only, without the need for any human-labeled documents. It leverages the linguistic domain knowledge stored within pre-trained language models and the data programming framework to assign code labels to individual texts. We demonstrate the efficacy and flexibility of our method by comparing it to state-of-the-art weak text classifiers on four real-world text classification datasets, in addition to assigning ICD codes to medical notes in the publicly available MIMIC-III database.
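In the spirit of data programming, one can picture one labeling function per code derived from its label description; the toy sketch below uses keyword matching as a stand-in for the pre-trained-language-model signal the framework actually leverages, and a naive vote in place of a proper label model.

```python
def make_labeling_function(code, description_keywords):
    """Return a labeling function that votes for `code` when any
    keyword from the code's label description appears in the note,
    and abstains (None) otherwise."""
    keywords = [k.lower() for k in description_keywords]

    def lf(note):
        text = note.lower()
        return code if any(k in text for k in keywords) else None
    return lf

def label_by_vote(note, labeling_functions):
    """Aggregate non-abstaining votes by simple majority (a learned
    label model would normally replace this naive vote)."""
    votes = [v for v in (lf(note) for lf in labeling_functions)
             if v is not None]
    return max(set(votes), key=votes.count) if votes else None
```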
Deep Learning and Machine Learning based models have become extremely popular in text processing and information retrieval. However, the non-linear structures present inside the networks make these models largely inscrutable. A significant body of research has focused on increasing the transparency of these models. This article provides a broad overview of research on the explainability and interpretability of natural language processing and information retrieval methods. More specifically, we survey approaches that have been applied to explain word embeddings, sequence modeling, attention modules, transformers, BERT, and document ranking. The concluding section suggests some possible directions for future research on this topic.
The Linked Open Data practice has led to a significant growth of structured data on the Web in the past decade. Such structured data describe real-world entities in a machine-readable way and have created an unprecedented opportunity for research in the field of Natural Language Processing. However, there is a lack of studies on how such data can be used, for what kinds of tasks, and to what extent they can be useful for these tasks. This work focuses on the e-commerce domain to explore methods of utilizing such structured data to create language resources that may be used for product classification and linking. We process billions of structured data points in the form of RDF n-quads to create multi-million-word product-related corpora that are later used in three different ways to create language resources: training word embedding models, continued pre-training of BERT-like language models, and training machine translation models that are used as a proxy to generate product-related keywords. Our evaluation on an extensive set of benchmarks shows word embeddings to be the most reliable and consistent method for improving the accuracy on both tasks (with up to 6.9 percentage points in macro-average F1 on some datasets). The other two methods, however, are not as useful. Our analysis shows that this could be due to a number of reasons, including the biased domain representation in the structured data and a lack of vocabulary coverage. We share our datasets and discuss how our lessons learned could be taken forward to inform future research in this direction.
In scientific research, the method is an indispensable means to solve scientific problems and a critical research object in its own right. With the development of science, many scientific methods are being proposed, modified, and used in academic research. Authors describe details of the methods they use in the abstract and body text of their papers, and the key entities in academic literature that reflect method names are called method entities. Exploring the diverse method entities in a massive volume of academic literature helps scholars understand existing methods, select appropriate methods for their research tasks, and propose new methods. Furthermore, the evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery. Therefore, this paper offers a systematic review of methodological and empirical works, focusing on the extraction of method entities from full-text academic literature and the efforts to build knowledge services using these extracted method entities. Definitions of the key concepts involved in this review are proposed first. Based on these definitions, we systematically review the approaches and indicators for extracting and evaluating method entities, with a strong focus on the pros and cons of each approach. We also survey how the extracted method entities are used to build new applications. Finally, the limitations of existing works and potential next steps are discussed.
Extracting knowledge from unlabeled texts using machine learning algorithms can be complex. Document categorization and information retrieval are two applications that may benefit from unsupervised learning (e.g., text clustering and topic modeling), including exploratory data analysis. However, the unsupervised learning paradigm poses reproducibility issues: the initialization can lead to variability depending on the machine learning algorithm. Furthermore, distortions can be misleading when regarding cluster geometry. Among the causes, the presence of outliers and anomalies can be a determining factor. Despite the relevance of initialization and outlier issues for text clustering and topic modeling, the authors did not find an in-depth analysis of them. This survey provides a systematic literature review (2011-2022) of these subareas and proposes a common terminology, since similar procedures go by different terms. The authors describe research opportunities, trends, and open issues. The appendices summarize the theoretical background of the text vectorization, factorization, and clustering algorithms that are directly or indirectly related to the reviewed works.
This survey draws a broad panoramic picture of the state of the art (SOTA) of research on generative methods for the analysis of social media data. It fills a gap, as existing survey articles are either much narrower in their scope or dated. We include two important aspects that are currently gaining importance in mining and modeling social media: dynamics and networks. Social dynamics are important for understanding the spreading of influence or diseases, the formation of friendships, and so on. Networks, on the other hand, can capture various complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.
Text classification of unseen classes is a challenging Natural Language Processing task and is mainly attempted using two different types of approaches. Similarity-based approaches attempt to classify instances based on similarities between text document representations and class description representations. Zero-shot text classification approaches aim to generalize knowledge gained from a training task by assigning appropriate labels of unknown classes to text documents. Although existing studies have investigated individual approaches within these categories, the experiments in the literature do not provide a consistent comparison. This paper addresses this gap by conducting a systematic evaluation of different similarity-based and zero-shot approaches for text classification of unseen classes. Different state-of-the-art approaches are benchmarked on four text classification datasets, including a new dataset from the medical domain. Additionally, novel SimCSE and SBERT-based baselines are proposed, as other baselines used in existing work yield weak classification results and are easily outperformed. Finally, the novel similarity-based Lbl2TransformerVec approach is presented, which outperforms previous state-of-the-art approaches in unsupervised text classification. Our experiments show that similarity-based approaches significantly outperform zero-shot approaches in most cases. Additionally, using SimCSE or SBERT embeddings instead of simpler text representations increases similarity-based classification results even further.
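A minimal sketch of the similarity-based setup benchmarked here, using the sentence-transformers API; the model name is an assumed placeholder, not necessarily one used in the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def similarity_classify(docs, label_descriptions,
                        model_name="all-MiniLM-L6-v2"):
    """Embed documents and label descriptions with the same sentence
    encoder and assign each document the label whose description
    embedding is most cosine-similar."""
    model = SentenceTransformer(model_name)
    d = model.encode(docs, normalize_embeddings=True)
    l = model.encode(label_descriptions, normalize_embeddings=True)
    return np.asarray(d @ l.T).argmax(axis=1)   # one label index per doc
```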
The relationship between words in a sentence often tells us more about the underlying semantic content of a document than its actual words, individually. In this work, we propose two novel algorithms, called Flexible Lexical Chain II and Fixed Lexical Chain II. These algorithms combine the semantic relations derived from lexical chains, prior knowledge from lexical databases, and the robustness of the distributional hypothesis in word embeddings as building blocks forming a single system. In short, our approach has three main contributions: (i) a set of techniques that fully integrate word embeddings and lexical chains; (ii) a more robust semantic representation that considers the latent relation between words in a document; and (iii) lightweight word embedding models that can be extended to any natural language task. We assess the knowledge of pre-trained models to evaluate their robustness in the document classification task. The proposed techniques are tested against seven word embedding algorithms using five different machine learning classifiers over six scenarios in the document classification task. Our results show that the integration of lexical chains and word embedding representations sustains state-of-the-art results, even against more complex systems.
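A greedy, embedding-only caricature of lexical chaining is sketched below: each word joins the first chain whose running centroid is similar enough, otherwise it starts a new chain. FLLC II and FXLC II additionally exploit lexical databases, so this conveys only the distributional half of the idea, and all names are hypothetical.

```python
import numpy as np

def build_lexical_chains(words, word_emb, threshold=0.5):
    """Walk the document's words in order, attaching each to the first
    chain whose normalized centroid has cosine similarity >= threshold,
    else opening a new chain.

    words: token list; word_emb: dict word -> d-dim vector.
    """
    chains = []                              # {"words": [...], "centroid": vec}
    for w in words:
        v = word_emb[w]
        v = v / (np.linalg.norm(v) + 1e-12)
        for chain in chains:
            if float(chain["centroid"] @ v) >= threshold:
                chain["words"].append(w)
                c = chain["centroid"] + v    # update running centroid
                chain["centroid"] = c / np.linalg.norm(c)
                break
        else:
            chains.append({"words": [w], "centroid": v})
    return chains
```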
Modeling law search and retrieval as prediction problems has recently emerged as a predominant approach in legal intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes and specifically trained on the Italian Civil Code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, which has recently attracted increased attention among deep learning approaches, showing outstanding effectiveness in several natural language processing and learning tasks. We define LamBERTa models by fine-tuning an Italian pre-trained BERT on the Italian Civil Code or its portions, for law article retrieval cast as a classification task. One key aspect of our LamBERTa framework is that we conceived it to address an extreme classification scenario, characterized by a high number of classes, the few-shot learning problem, and the lack of test query benchmarks for Italian legal prediction tasks. To solve these issues, we define different methods for the unsupervised labeling of the law articles, which can in principle be applied to any legal system. We provide insights into the explainability and interpretability of our LamBERTa models, and we present an extensive experimental analysis over query templates for single-label as well as multi-label evaluation tasks. Empirical evidence shows the effectiveness of LamBERTa, and also its superiority over widely used deep learning text classifiers and a few-shot learner conceived for an attribute-aware prediction task.
Future work sentences (FWS) are the particular sentences in academic papers that contain the author's description of their proposed follow-up research direction. This paper presents methods to automatically extract FWS from academic papers and classify them according to the different future directions embodied in the paper's content. FWS recognition methods will enable subsequent researchers to locate future work sentences more accurately and quickly and reduce the time and cost of acquiring the corpus. Work to date on automatic identification of future work sentences is relatively scarce, and existing research cannot accurately identify FWS in academic papers, which precludes large-scale data mining. Furthermore, the content of future work has many facets, and subdividing it facilitates the analysis of specific development directions. In this paper, Natural Language Processing (NLP) is used as a case study, and FWS are extracted from academic papers and classified into different types. We manually build an annotated corpus with six different types of FWS. Then, automatic recognition and classification of FWS are implemented using machine learning models, and the performance of these models is compared based on the evaluation metrics. The results show that the Bernoulli Bayesian model has the best performance in the automatic recognition task, with the Macro F1 reaching 90.73%, and the SCIBERT model has the best performance in the automatic classification task, with the weighted average F1 reaching 72.63%. Finally, we extract keywords from FWS to gain a deep understanding of the key content they describe, and we also demonstrate that the content identified in FWS is reflected in subsequent research by measuring the similarity between future work sentences and the abstracts.
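The best-performing recognition setup reported above, Bernoulli naive Bayes over binary bag-of-words features, is easy to reproduce in outline with scikit-learn; the two training sentences below are placeholders, not the paper's corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

sentences = ["We leave multilingual transfer to future work.",
             "Our model improves accuracy by 3 points."]
labels = [1, 0]                      # 1 = future work sentence, 0 = not

# binary=True yields the presence/absence features BernoulliNB expects
clf = make_pipeline(CountVectorizer(binary=True), BernoulliNB())
clf.fit(sentences, labels)
print(clf.predict(["In future work we will explore larger corpora."]))
```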
Personal knowledge bases (PKBs) are crucial for a broad range of applications such as personalized recommendation and Web-based chatbots. A critical challenge in building PKBs is extracting personal attribute knowledge from users' conversation data. Given some users of a conversational system, a personal attribute, and these users' utterances, our goal is to predict the ranking of the given personal attribute values for each user. Previous studies often rely on a considerable amount of resources such as labeled utterances and external data, yet the attribute knowledge embedded in unlabeled utterances is underutilized, and their performance in predicting some difficult personal attributes is still unsatisfactory. In addition, some text classification methods could be employed to resolve this task directly, but they too perform poorly on those difficult personal attributes. In this paper, we propose a novel framework, PEARL, to predict personal attributes from conversations by leveraging the abundant personal attribute knowledge in utterances under a low-resource setting in which no labeled utterances or external data are utilized. PEARL seamlessly combines biterm semantic information with word co-occurrence information by employing updated prior attribute knowledge to refine the biterm topic model's Gibbs sampling process in an iterative manner. Extensive experimental results show that PEARL outperforms all the baseline methods, not only on the task of personal attribute prediction from conversations over two data sets, but also on the more general weakly supervised text classification task over one data set.
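A loose sketch of the underlying intuition, combining biterm (word-pair) co-occurrence with prior seed knowledge to rank attribute values, is given below; PEARL itself iteratively refines a biterm topic model's Gibbs sampling with updated priors, which this sketch does not attempt to reproduce, and all names are hypothetical.

```python
from collections import Counter
from itertools import combinations

def rank_attribute_values(utterances, value_seed_words):
    """Score each candidate attribute value for one user by how often
    its seed words participate in biterms (unordered word pairs) drawn
    from the user's utterances.

    utterances: list of token lists for one user.
    value_seed_words: dict attribute value -> set of seed words.
    """
    biterms = Counter()
    for tokens in utterances:
        biterms.update(frozenset(p) for p in combinations(set(tokens), 2))
    scores = {value: sum(cnt for bt, cnt in biterms.items() if bt & seeds)
              for value, seeds in value_seed_words.items()}
    return sorted(scores, key=scores.get, reverse=True)
```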