Crowdsourcing has been regarded as a potential solution for effective supervised learning, aiming to build large-scale annotated training data via crowd workers. Previous studies focus on reducing the influence of noise in crowdsourced annotations. In this work we take a different point of view, regarding all crowdsourced annotations as gold-standard with respect to the individual annotators. In this way, we find that crowdsourcing can be highly similar to domain adaptation, and recent advances in cross-domain methods can be applied to crowdsourcing almost directly. Taking named entity recognition (NER) as a case study, we propose an annotator-aware representation learning model inspired by domain adaptation methods that attempt to capture effective domain-aware features. We investigate both unsupervised and supervised crowdsourcing learning, assuming that no or only a small number of expert annotations are available. Experimental results on benchmark crowdsourced NER datasets show that our method is highly effective, leading to new state-of-the-art performance. In addition, under the supervised setting, we achieve impressive performance with only a very small number of expert annotations.
translated by Google Translate
Crowd sequential annotation can be an effective and cost-efficient way to build large datasets for sequence labeling. Unlike annotating independent instances, for crowd sequential annotation the quality of a label sequence depends on the annotator's level of expertise in capturing the internal dependencies among the tokens in the sequence. In this paper, we propose sequence labeling with crowd annotations (SA-SLC). First, a conditional probabilistic model is developed to jointly model sequential data and annotator expertise, in which a categorical distribution is introduced to estimate each annotator's reliability in capturing local and non-local label dependencies for sequential annotation. To accelerate marginalization in the proposed model, a valid label sequence inference (VLSE) method is proposed to derive the valid ground-truth label sequence from crowd sequential annotations. VLSE derives possible ground-truth labels at the token level and further prunes labels in the forward inference of label sequence decoding. VLSE reduces the number of candidate label sequences and improves the quality of possible ground-truth label sequences. Experimental results on several sequence labeling tasks in natural language processing show the effectiveness of the proposed model.
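The core idea behind annotator-aware aggregation, weighting each annotator's votes by an estimate of their reliability, can be sketched in a few lines. This is only a toy illustration with made-up data and a crude majority-vote reliability estimate, not the SA-SLC model or its categorical reliability distribution:

```python
import numpy as np

def estimate_reliability(annotations, n_labels):
    """Estimate each annotator's reliability as their agreement rate with
    the token-level majority vote (a crude stand-in for a learned
    per-annotator reliability distribution)."""
    annotations = np.asarray(annotations)          # (n_annotators, n_tokens)
    n_tokens = annotations.shape[1]
    majority = np.array([
        np.bincount(annotations[:, t], minlength=n_labels).argmax()
        for t in range(n_tokens)
    ])
    return (annotations == majority).mean(axis=1)  # agreement per annotator

def weighted_vote(annotations, n_labels):
    """Aggregate crowd labels per token, weighting each vote by the
    annotator's estimated reliability."""
    annotations = np.asarray(annotations)
    w = estimate_reliability(annotations, n_labels)
    out = np.empty(annotations.shape[1], dtype=int)
    for t in range(annotations.shape[1]):
        scores = np.zeros(n_labels)
        for a, label in enumerate(annotations[:, t]):
            scores[label] += w[a]
        out[t] = scores.argmax()
    return out

# Toy example: 3 annotators, 5 tokens, labels {0: O, 1: B-ENT, 2: I-ENT}.
crowd = [[1, 2, 0, 0, 1],
         [1, 2, 0, 0, 1],
         [0, 0, 0, 1, 0]]   # the third annotator is noisy
print(weighted_vote(crowd, n_labels=3))  # -> [1 2 0 0 1]
```

Note that this ignores the label-dependency modeling that motivates SA-SLC; it only shows the reliability-weighting building block.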
Recent work has demonstrated that pre-training in-domain language models can boost performance when adapting to a new domain. However, the costs associated with pre-training raise an important question: given a fixed budget, what steps should an NLP practitioner take to maximize performance? In this paper, we study domain adaptation under budget constraints and approach it as a choice problem between data annotation and pre-training. Specifically, we measure the annotation cost of three procedural text datasets and the pre-training cost of three in-domain language models. Then we evaluate the utility of different combinations of pre-training and data annotation under varying budget constraints to assess which combination strategy works best. We find that, for small budgets, spending all funds on annotation leads to the best performance; once the budget becomes large enough, a combination of data annotation and in-domain pre-training works better. We therefore suggest that task-specific data annotation should be part of an economical strategy when adapting an NLP model to a new domain.
As an important fine-grained sentiment analysis problem, aspect-based sentiment analysis (ABSA), aiming to analyze and understand people's opinions at the aspect level, has been attracting considerable interest in the last decade. To handle ABSA in different scenarios, various tasks are introduced for analyzing different sentiment elements and their relations, including the aspect term, aspect category, opinion term, and sentiment polarity. Unlike early ABSA works focusing on a single sentiment element, many compound ABSA tasks involving multiple elements have been studied in recent years for capturing more complete aspect-level sentiment information. However, a systematic review of various ABSA tasks and their corresponding solutions is still lacking, which we aim to fill in this survey. More specifically, we provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements, with an emphasis on recent advances of compound ABSA tasks. From the perspective of solutions, we summarize the utilization of pre-trained language models for ABSA, which improved the performance of ABSA to a new stage. Besides, techniques for building more practical ABSA systems in cross-domain/lingual scenarios are discussed. Finally, we review some emerging topics and discuss some open challenges to outlook potential future directions of ABSA.
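The four sentiment elements this survey organizes ABSA tasks around (aspect term, aspect category, opinion term, sentiment polarity) map naturally onto a small data structure; compound tasks then amount to predicting different subsets of its fields. A minimal sketch, with a made-up example sentence:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SentimentQuad:
    """The four sentiment elements that compound ABSA tasks combine;
    single-element tasks predict just one field, quad-prediction tasks
    predict all four."""
    aspect_term: Optional[str]
    aspect_category: Optional[str]
    opinion_term: Optional[str]
    polarity: Optional[str]

# "The pizza was great" -> one aspect-level opinion
quad = SentimentQuad("pizza", "food quality", "great", "positive")
print(quad.polarity)  # -> positive
```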
Most NER methods rely on extensive labeled data for model training and struggle in low-resource scenarios with limited training data. Compared with resource-rich source domains, existing dominant approaches usually encounter the challenge that the target domain has a different label set, which can be summarized as class transfer and domain transfer. In this paper, we propose a lightweight tuning paradigm for low-resource NER via pluggable prompting (LightNER). Specifically, we construct a unified learnable verbalizer of entity categories to generate the entity span sequence and entity categories without any label-specific classifiers, thus addressing the class transfer issue. We further propose a pluggable guidance module that incorporates learnable parameters into the self-attention layers as guidance, which can re-modulate the attention and adapt pre-trained weights. Note that we only tune those inserted modules while keeping all the parameters of the pre-trained language model fixed, making our approach lightweight and flexible for low-resource scenarios, and better at transferring knowledge across domains. Experimental results show that LightNER can obtain comparable performance in the standard supervised setting and outperform strong baselines in low-resource settings. The code is available at https://github.com/zjunlp/deepke/tree/main/main/example/ner/few-shot.
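The "pluggable guidance" idea, prepending a few trainable vectors to the frozen attention's keys and values so that attention is re-modulated without touching pre-trained weights, can be sketched with plain numpy. This is a generic single-head illustration of the prefix-style mechanism, with arbitrary random weights standing in for a pre-trained model, not LightNER's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def guided_attention(x, W_q, W_k, W_v, guide_k, guide_v):
    """Single-head self-attention with learnable guidance vectors
    prepended to the keys/values. Only guide_k / guide_v would be
    trained; W_q, W_k, W_v (the pre-trained weights) stay frozen."""
    q = x @ W_q                              # (seq, d)
    k = np.vstack([guide_k, x @ W_k])        # (n_guide + seq, d)
    v = np.vstack([guide_v, x @ W_v])
    scores = q @ k.T / np.sqrt(q.shape[-1])  # attention re-modulated by guidance
    return softmax(scores) @ v               # (seq, d)

rng = np.random.default_rng(0)
seq, d, n_guide = 4, 8, 2
x = rng.normal(size=(seq, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
guide_k = rng.normal(size=(n_guide, d))
guide_v = rng.normal(size=(n_guide, d))
out = guided_attention(x, W_q, W_k, W_v, guide_k, guide_v)
print(out.shape)  # -> (4, 8)
```

Because the sequence length of the output is unchanged, such a module can be dropped into an existing transformer layer without altering downstream shapes.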
Recent advances in NLP have been driven by a range of large-scale pretrained language models (PLMs). These PLMs have brought significant performance gains for a range of NLP tasks, circumventing the need to customize complex designs for specific tasks. However, most current work focuses on finetuning PLMs on domain-specific datasets, ignoring the fact that the domain gap can lead to overfitting and even performance drops. Therefore, it is practically important to find an appropriate method to effectively adapt PLMs to a target domain of interest. Recently, a range of methods have been proposed to achieve this purpose. Early surveys on domain adaptation are not suitable for PLMs, because PLMs exhibit more sophisticated behavior than traditional models trained from scratch, and domain adaptation of PLMs needs to be redesigned to take effect. This paper aims to provide a survey on these newly proposed methods and shed light on how to apply traditional machine learning methods to newly evolved and future technologies. By examining the issues of deploying PLMs for downstream tasks, we propose a taxonomy of domain adaptation approaches from a machine learning system view, covering methods for input augmentation, model optimization and personalization. We discuss and compare those methods and suggest promising future research directions.
Input distribution shift is one of the important issues in unsupervised domain adaptation (UDA). The most popular UDA approaches focus on domain-invariant representation learning (DIRL), trying to align the features from different domains into similar feature distributions. However, these approaches ignore the direct alignment of input word distributions between domains, which is an important factor for word-level classification tasks such as cross-domain NER. In this work, we shed new light on cross-domain NER by addressing the input word-level distribution shift with a subword-level solution, X-Piece. Specifically, we re-tokenize the input words of the source domain to approach the target subword distribution, which is formulated and solved as an optimal transport problem. Since this approach focuses on the input level, it can also be combined with previous DIRL methods for further improvement. Experimental results show the effectiveness of the proposed method based on BERT-Tagger on four benchmark NER datasets. In addition, the proposed method is shown to benefit DIRL methods such as DANN.
Supervised approaches generally rely on majority-based labels. However, it is hard to achieve high agreement among annotators in subjective tasks such as hate speech detection. Existing neural network models principally regard labels as categorical variables, while ignoring the semantic information in diverse label texts. In this paper, we propose AnnoBERT, a first-of-its-kind architecture integrating annotator characteristics and label text with a transformer-based model to detect hate speech. It builds unique representations based on each annotator's characteristics via Collaborative Topic Regression (CTR) and integrates label text to enrich textual representations. During training, the model associates annotators with their label choices given a piece of text; during evaluation, when label information is not available, the model predicts the aggregated label given by the participating annotators by utilising the learnt association. The proposed approach displayed an advantage in detecting hate speech, especially in the minority class and edge cases with annotator disagreement. The improvement in overall performance is largest when the dataset is more label-imbalanced, suggesting its practical value in identifying real-world hate speech, as the volume of hate speech in-the-wild is extremely small on social media compared with normal (non-hate) speech. Through ablation studies, we show the relative contributions of annotator embeddings and label text to the model performance, and test a range of alternative annotator embeddings and label text combinations.
Annotated data is an essential ingredient in natural language processing for training and evaluating machine learning models. It is therefore very desirable for the annotations to be of high quality. Recent work, however, has shown that several popular datasets contain a surprising number of annotation errors or inconsistencies. To alleviate this issue, many methods for annotation error detection have been devised over the years. While researchers show that their methods work well on their newly introduced datasets, they rarely compare their methods to previous work or on the same datasets. This raises strong concerns about the general performance of these methods and makes it difficult to assess their strengths and weaknesses. We therefore reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets for text classification as well as token and span labeling. In addition, we define a uniform evaluation setup, including a new formalization of the annotation error detection task, evaluation protocol, and general best practices. To facilitate future research and reproducibility, we release our datasets and implementations in an easy-to-use and open-source software package.
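One simple family of annotation error detectors flags instances whose label disagrees with the labels of their nearest neighbours in some feature space. The sketch below is a generic k-NN disagreement baseline with made-up one-dimensional features, not any specific method from the survey:

```python
import numpy as np

def knn_disagreement(X, y, k=3):
    """Flag instances whose label disagrees with the majority label of
    their k nearest neighbours -- a simple nearest-neighbour baseline
    for annotation error detection."""
    X, y = np.asarray(X, float), np.asarray(y)
    flags = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn])
        flags.append(votes.argmax() != y[i])
    return np.array(flags)

# Toy data: two clear clusters, one deliberately flipped label.
X = [[0.0], [0.1], [0.2], [0.3], [5.0], [5.1], [5.2], [5.3]]
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])         # index 3 is mislabeled
print(np.where(knn_disagreement(X, y))[0])     # -> [3]
```

Real detectors in this line of work typically use learned sentence or token representations instead of raw features, but the flagging logic is the same.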
Crowdsourcing platforms are often used to collect datasets for training machine learning models, despite higher levels of inaccurate labeling compared to expert annotation. There are two common strategies to manage the impact of such noise. The first involves aggregating redundant annotations, but at the cost of labeling substantially fewer examples. Second, prior works have also considered using the entire annotation budget to label as many examples as possible and subsequently applying denoising algorithms to implicitly clean the dataset. We find a middle ground and propose an approach that reserves a fraction of annotations to explicitly relabel highly probable error samples, thereby optimizing the annotation process. In particular, we allocate a large portion of the labeling budget to form an initial dataset used to train a model. This model is then used to identify the specific examples that appear most likely to be incorrect, on which we spend the remaining budget for relabeling. Experiments over three model variations and four natural language processing tasks show our approach outperforms both label aggregation and advanced denoising methods designed to handle noisy labels when allocated the same finite annotation budget.
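The budget-splitting and relabel-selection steps described above can be sketched in a few lines. The split fraction, the model confidences, and the toy examples below are all made up; in the actual approach the confidences would come from the model trained on the initial noisy dataset:

```python
import numpy as np

def split_budget(total_budget, relabel_fraction=0.2):
    """Split a fixed annotation budget: most labels go to the initial
    (noisy) training set, the rest is reserved for targeted relabeling."""
    relabel = int(total_budget * relabel_fraction)
    return total_budget - relabel, relabel

def select_for_relabel(probs, labels, k):
    """Rank examples by the model's probability of the *current* label;
    the k lowest-scoring examples are the most likely annotation errors."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    score = probs[np.arange(len(labels)), labels]  # p(current label | x)
    return np.argsort(score)[:k]

initial, relabel = split_budget(1000, relabel_fraction=0.2)
print(initial, relabel)                        # -> 800 200

# Hypothetical model confidences for 4 examples over 2 classes.
probs = [[0.9, 0.1], [0.2, 0.8], [0.05, 0.95], [0.6, 0.4]]
labels = [0, 1, 0, 0]                          # example 2 looks mislabeled
print(select_for_relabel(probs, labels, k=1))  # -> [2]
```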
Legal documents are unstructured, use legal jargon, and have considerable length, making it difficult to process them automatically via conventional text processing techniques. A legal document processing system would substantially benefit if documents could be semantically segmented into coherent units of information. This paper proposes a Rhetorical Roles (RR) system for segmenting a legal document into semantically coherent units: facts, arguments, statutes, issues, precedents, ruling, and ratio. With the help of legal experts, we propose a set of 13 fine-grained rhetorical role labels and create a new corpus of legal documents annotated with the proposed RR scheme. We develop a system for segmenting a document into rhetorical role units. In particular, we develop a multitask learning-based deep learning model with document rhetorical role label shift as an auxiliary task for segmenting a legal document. We experiment extensively with various deep learning models for predicting rhetorical roles in a document, and the proposed model shows superior performance over existing models. Furthermore, we apply RR to predict the judgment of legal cases and show that using RR enhances the prediction compared to transformer-based models.
We raise and define a new crowdsourcing scenario, open-set crowdsourcing, where we only know the general theme of an unfamiliar crowdsourcing project and do not know its label space, that is, the set of possible labels. This is still a task annotation problem, but unfamiliarity with the tasks and the label space hampers the modeling of tasks and workers, as well as truth inference. We propose an intuitive solution, OSCrowd. First, OSCrowd integrates crowd theme-related datasets into one large source domain to facilitate partial transfer learning and thereby approximate the label space inference of these tasks. Next, it assigns weights to each source domain based on category correlation. After that, it uses multi-source open-set transfer learning to model crowd tasks and assign possible annotations. The label space and annotations given by transfer learning are used to guide and standardize the annotations of crowd workers. We validate OSCrowd in an online scenario and demonstrate that it solves the open-set crowdsourcing problem better than related crowdsourcing solutions.
The field of natural language processing (NLP) has recently seen a large change towards using pre-trained language models to solve almost any task. Despite showing great improvements on benchmark datasets for various tasks, these models often perform sub-optimally in non-standard domains such as the clinical domain, where a large gap between the pre-training documents and the target documents is observed. In this paper, we aim to close this gap with domain-specific training of language models, and we investigate its effect on a diverse set of downstream tasks and settings. We introduce the pre-trained CLIN-X (Clinical XLM-R) language models and show how CLIN-X outperforms other pre-trained transformer models by a large margin on ten clinical concept extraction tasks in two languages. In addition, we demonstrate how the transformer model can be further improved with our proposed task- and language-agnostic model architecture based on ensembles over random splits and cross-sentence context. Our studies in low-resource and transfer settings reveal stable model performance despite a lack of annotated data, even with only 250 labeled sentences. Our results highlight the importance of specialized language models such as CLIN-X for concept extraction in non-standard domains, but also show that our task-agnostic model architecture is robust across the tested tasks and languages, so that domain- or task-specific adaptation is not required. The CLIN-X language models and source code for fine-tuning and transferring the models are publicly available at https://github.com/boschresearch/clin_x/ and on the Hugging Face model hub.
Named entity recognition (NER) models generally perform poorly when large training datasets are unavailable for low-resource domains. Recently, pre-training a large-scale language model has become a promising direction for coping with the data scarcity issue. However, the underlying discrepancies between the language modeling and NER tasks could limit the models' performance, and pre-training for the NER task has rarely been studied since the collected NER datasets are generally small or large but of low quality. In this paper, we construct a massive NER corpus with relatively high quality, and we pre-train a NER-BERT model based on the created dataset. Experimental results show that our pre-trained model can significantly outperform BERT as well as other strong baselines in low-resource scenarios across eight domains. Moreover, a visualization of entity representations further indicates the effectiveness of NER-BERT for classifying a variety of entities.
Selecting an effective training signal for tasks in natural language processing is difficult: collecting expert annotations is expensive, and crowd-sourced annotations may not be reliable. At the same time, recent work in machine learning has demonstrated that learning from soft-labels acquired from crowd annotations can be effective, especially when there is distribution shift in the test set. However, the best method for acquiring these soft labels is inconsistent across tasks. This paper proposes new methods for acquiring soft-labels from crowd-annotations by aggregating the distributions produced by existing methods. In particular, we propose to find a distribution over classes by learning from multiple-views of crowd annotations via temperature scaling and finding the Jensen-Shannon centroid of their distributions. We demonstrate that using these aggregation methods leads to best or near-best performance across four NLP tasks on out-of-domain test sets, mitigating fluctuations in performance when using the constituent methods on their own. Additionally, these methods result in best or near-best uncertainty estimation across tasks. We argue that aggregating different views of crowd-annotations as soft-labels is an effective way to ensure performance which is as good or better than the best individual view, which is useful given the inconsistency in performance of the individual methods.
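The aggregation pipeline described above (temperature-scale each view's soft labels, then take the Jensen-Shannon centroid) can be sketched with plain numpy. This is a rough illustration on made-up distributions, using numerical gradient descent for the centroid rather than the authors' implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def temperature_scale(p, T):
    """Sharpen (T < 1) or flatten (T > 1) a distribution by scaling its
    log-probabilities."""
    return softmax(np.log(np.clip(p, 1e-12, None)) / T)

def jsd(p, q):
    """Jensen-Shannon divergence between two distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def js_centroid(dists, lr=0.3, n_steps=200, eps=1e-5):
    """Find the distribution minimizing the total JS divergence to the
    inputs, via numerical gradient descent on logits."""
    dists = [np.asarray(d, float) for d in dists]
    z = np.log(np.mean(dists, axis=0))        # start from the mean
    def objective(z):
        c = softmax(z)
        return sum(jsd(c, d) for d in dists)
    for _ in range(n_steps):
        grad = np.array([
            (objective(z + eps * np.eye(len(z))[i]) - objective(z)) / eps
            for i in range(len(z))
        ])
        z -= lr * grad
    return softmax(z)

# Hypothetical soft labels from three different crowd-aggregation methods.
views = [np.array([0.7, 0.2, 0.1]),
         np.array([0.5, 0.4, 0.1]),
         np.array([0.6, 0.3, 0.1])]
views = [temperature_scale(p, T=1.5) for p in views]
c = js_centroid(views)
print(c.round(3))
```

The centroid is itself a valid distribution and sits "between" the constituent views, which is what protects against any single aggregation method performing poorly on its own.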
Natural language understanding (NLU) has made massive progress driven by large benchmarks, paired with research on transfer learning to broaden its impact. Benchmarks are dominated by a small set of frequent phenomena, leaving a long tail of infrequent phenomena underrepresented. In this work, we reflect on the question: have transfer learning methods sufficiently addressed the performance of benchmark-trained models on the long tail? Since benchmarks do not list the phenomena they include or exclude, we conceptualize the long tail using macro-level dimensions (such as underrepresented genres, topics, etc.). We assess trends in transfer learning research through a qualitative meta-analysis of 100 representative papers on transfer learning for NLU. Our analysis asks three questions: (i) Which long tail dimensions do transfer learning studies target? (ii) Which properties help adaptation methods improve performance on the long tail? (iii) Which methodological gaps have the largest negative impact on long tail performance? Our answers to these questions highlight major avenues for future research in transfer learning for the long tail. Lastly, we present a case study comparing the performance of various adaptation methods on clinical narratives to show how systematically conducted meta-experiments can provide insights that enable progress along these future avenues.
Practices in the built environment have become more digitalized with the rapid development of modern design and construction technologies. However, the requirement of practitioners or scholars to gather complicated professional knowledge in the built environment has not been satisfied yet. In this paper, more than 80,000 paper abstracts in the built environment field were obtained to build a knowledge graph, a knowledge base storing entities and their connective relations in a graph-structured data model. To ensure the retrieval accuracy of the entities and relations in the knowledge graph, two well-annotated datasets have been created, containing 2,000 instances and 1,450 instances each in 29 relations for the named entity recognition task and relation extraction task respectively. These two tasks were solved by two BERT-based models trained on the proposed dataset. Both models attained an accuracy above 85% on these two tasks. More than 200,000 high-quality relations and entities were obtained using these models to extract all abstract data. Finally, this knowledge graph is presented as a self-developed visualization system to reveal relations between various entities in the domain. Both the source code and the annotated dataset can be found here: https://github.com/HKUST-KnowComp/BEKG.
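The entities and relations extracted by the two BERT-based models above end up in a graph-structured store. A minimal sketch of such a store, with hypothetical built-environment triples for illustration (the actual BEKG system and its schema are more elaborate):

```python
from collections import defaultdict

class KnowledgeGraph:
    """Tiny adjacency-list store for (head, relation, tail) triples."""

    def __init__(self):
        self.edges = defaultdict(list)   # head -> [(relation, tail), ...]

    def add(self, head, relation, tail):
        self.edges[head].append((relation, tail))

    def neighbors(self, head):
        """All (relation, tail) pairs reachable from a head entity."""
        return self.edges.get(head, [])

kg = KnowledgeGraph()
kg.add("BIM", "used-for", "construction management")
kg.add("BIM", "related-to", "digital twin")
print(kg.neighbors("BIM"))
```

A visualization front end like the one described in the abstract would traverse exactly this kind of adjacency structure to render entity neighbourhoods.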
Academic research is an exploratory activity to solve problems that have never been resolved before. By this nature, each academic research work is required to perform a literature review to distinguish its novelties that have not been addressed by prior works. In natural language processing, this literature review is usually conducted under the "Related Work" section. Given the rest of a research paper and a list of cited papers, the task of automatic related work generation aims to automatically generate the "Related Work" section. Although this task was proposed over 10 years ago, it received little attention until very recently, when it was cast as a variant of the scientific multi-document summarization problem. However, even today, the problems of automatic related work generation and citation text generation are not yet standardized. In this survey, we conduct a meta-study of the existing literature on related work generation, comparing it from the perspectives of problem formulation, dataset collection, methodological approach, performance evaluation, and future prospects, in order to give readers insight into the progress of state-of-the-art studies as well as how future research can be conducted. We also survey relevant research fields that we suggest future work should consider integrating.
Aspect-based sentiment analysis (ABSA) aims at extracting opinionated aspect terms in review texts and determining their sentiment polarities, which is widely studied in both academia and industry. As a fine-grained classification task, the annotation cost is extremely high. Domain adaptation is a popular solution to alleviate the data deficiency issue in new domains by transferring common knowledge across domains. Most cross-domain ABSA studies are based on structure correspondence learning (SCL), and use pivot features to construct auxiliary tasks for narrowing down the gap between domains. However, their pivot-based auxiliary tasks can only transfer knowledge of aspect terms but not sentiment, limiting the performance of existing models. In this work, we propose a novel Syntax-guided Domain Adaptation Model, named SDAM, for more effective cross-domain ABSA. SDAM exploits syntactic structure similarities for building pseudo training instances, during which aspect terms of target domain are explicitly related to sentiment polarities. Besides, we propose a syntax-based BERT mask language model for further capturing domain-invariant features. Finally, to alleviate the sentiment inconsistency issue in multi-gram aspect terms, we introduce a span-based joint aspect term and sentiment analysis module into the cross-domain End2End ABSA. Experiments on five benchmark datasets show that our model consistently outperforms the state-of-the-art baselines with respect to Micro-F1 metric for the cross-domain End2End ABSA task.
Foodborne illness is a serious but preventable public health problem: delays in detecting the associated outbreaks result in productivity loss, expensive recalls, public safety hazards, and even loss of life. While social media is a promising source for identifying unreported foodborne illnesses, there is a dearth of labeled datasets for developing effective outbreak detection models. To accelerate the development of machine learning-based models for foodborne outbreak detection, we present TWEET-FID (TWEET-Foodborne Illness Detection), the first publicly available annotated dataset for multiple foodborne illness incident detection tasks. TWEET-FID, collected from Twitter, is annotated with three facets: tweet class, entity type, and slot type, with labels produced by experts as well as by crowdsourced workers. We introduce several domain tasks leveraging these three facets: text relevance classification (TRC), entity mention detection (EMD), and slot filling (SF). We describe the end-to-end methodology for dataset design, creation, and labeling used to support model development for these tasks. A comprehensive set of results for these tasks, leveraging state-of-the-art single-task and multi-task deep learning methods on the TWEET-FID dataset, is provided. This dataset opens opportunities for future research on foodborne outbreak detection.