Slot filling and intent detection are two fundamental tasks in natural language understanding. Since the two tasks are strongly correlated, previous work has modeled them jointly, through multi-task learning or dedicated feature-interaction modules, to improve the performance of each task. However, none of the existing methods consider the correlation between the structural information of a sentence and the label semantics of the two tasks, even though the intent and semantic constituents of an utterance depend on the syntactic elements of the sentence. In this paper, we investigate a multi-grained label refinement network that exploits dependency structures and label semantic embeddings. To strengthen the syntactic representation, we introduce the dependency structure of the sentence into our model. To capture the semantic dependencies between syntactic information and task labels, we combine the task-specific features with the corresponding label embeddings via an attention mechanism. Experimental results show that our model achieves competitive performance on two public datasets.
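To make the described label-attention step concrete, here is a minimal PyTorch sketch; the embedding table, residual fusion, and all shapes are illustrative assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class LabelAttentionFusion(nn.Module):
    """Combine task-specific token features with label embeddings via attention.

    Hypothetical sketch: module name and residual fusion are assumptions.
    """
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, hidden_dim)

    def forward(self, token_feats: torch.Tensor) -> torch.Tensor:
        # token_feats: (batch, seq_len, hidden_dim)
        labels = self.label_emb.weight               # (num_labels, hidden_dim)
        scores = token_feats @ labels.T              # (batch, seq_len, num_labels)
        attn = torch.softmax(scores, dim=-1)
        label_ctx = attn @ labels                    # (batch, seq_len, hidden_dim)
        return token_feats + label_ctx               # label-refined token features
```

Each token attends over the full label set, so its representation absorbs the semantics of the labels it is most compatible with.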
Intent classification and slot filling are two core tasks in natural language understanding (NLU). The interactive nature of the two tasks makes joint models often outperform single-task designs. One of the promising solutions, called BERT (Bidirectional Encoder Representations from Transformers), achieves the joint optimization of the two tasks. BERT adopts wordpiece tokenization, splitting each input token into multiple sub-tokens, which causes a mismatch between the lengths of the token and label sequences. Previous methods utilize the hidden states corresponding to the first sub-token as input to the classifier, which limits performance improvement since some hidden semantic information is discarded during fine-tuning. To address this issue, we propose a novel joint model based on BERT, which explicitly models the multiple sub-token features after wordpiece tokenization, thereby generating the context features that contribute to slot filling. Specifically, we encode the hidden states corresponding to multiple sub-tokens into a context vector via the attention mechanism. Then, we feed each context vector into the slot filling encoder, which preserves the integrity of the sentence. Experimental results demonstrate that our proposed model achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on two public benchmark datasets. In particular, the F1 score of slot filling has been improved from 96.1 to 98.2 (2.1% absolute) on the ATIS dataset.
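A minimal sketch of the sub-token pooling idea described here: attention-pool all wordpiece hidden states of a word into one context vector instead of keeping only the first sub-token (the single-query scorer is an assumption, not the paper's exact module):

```python
import torch
import torch.nn as nn

class SubTokenPooler(nn.Module):
    """Attention-pool a word's sub-token hidden states into one context vector."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, sub_states: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # sub_states: (num_words, max_subtokens, hidden_dim)
        # mask:       (num_words, max_subtokens); 1 for real sub-tokens, 0 for padding
        scores = self.scorer(sub_states).squeeze(-1)          # (num_words, max_subtokens)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (attn * sub_states).sum(dim=1)                 # (num_words, hidden_dim)
```

The pooled vectors, one per original token, can then feed the slot filling encoder so the token and label sequences align again.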
Multi-intent detection and slot filling joint models are gaining increasing traction since they are closer to complicated real-world scenarios. However, existing approaches (1) focus on identifying implicit correlations between utterances and one-hot encoded labels in both tasks while ignoring explicit label characteristics; (2) directly incorporate multi-intent information for each token, which could lead to incorrect slot prediction due to the introduction of irrelevant intents. In this paper, we propose a framework termed DGIF, which first leverages the semantic information of labels to give the model additional signals and enriched priors. Then, a multi-grain interactive graph is constructed to model correlations between intents and slots. Specifically, we propose a novel approach to construct the interactive graph based on the injection of label semantics, which can automatically update the graph to better alleviate error propagation. Experimental results show that our framework significantly outperforms existing approaches, obtaining a relative improvement of 13.7% over the previous best model on the MixATIS dataset in overall accuracy.
Recent graph-based models for joint multiple intent detection and slot filling have obtained promising results through modeling the guidance from the prediction of intents to the decoding of slot filling. However, existing methods (1) only model the \textit{unidirectional guidance} from intent to slot; (2) adopt \textit{homogeneous graphs} to model the interactions between the slot semantics nodes and intent label nodes, which limit the performance. In this paper, we propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving the \textit{mutual guidances} between the two tasks. In the first stage, the initial estimated labels of both tasks are produced, and then they are leveraged in the second stage to model the mutual guidances. Specifically, we propose two \textit{heterogeneous graph attention networks} working on the proposed two \textit{heterogeneous semantics-label graphs}, which effectively represent the relations among the semantics nodes and label nodes. Experimental results show that our model outperforms existing models by a large margin, obtaining a relative improvement of 19.3\% over the previous best model on MixATIS dataset in overall accuracy.
Spoken language understanding (SLU) has been addressed as a supervised learning problem, where a set of training data is available for each domain. However, annotating data for every domain is economically costly and non-scalable, so we should fully utilize information across all domains. One existing approach tackles the problem by performing multi-domain learning, using shared parameters trained jointly across domains. We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters to improve knowledge learning and transfer. Experiments on 5 domains show that our model is more effective for multi-domain SLU and obtains the best results. In addition, when adapting to a new domain with little data, it shows better transferability, outperforming the prior best model by 12.4%.
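A rough sketch of the parameterization idea, assuming a shared BiLSTM encoder with per-domain, per-task heads; the paper's actual decomposition of domain-specific and task-specific parameters may differ:

```python
import torch
import torch.nn as nn

class MultiDomainSLU(nn.Module):
    """Shared encoder with domain-specific, task-specific heads (a sketch;
    the BiLSTM encoder and head layout are illustrative assumptions)."""
    def __init__(self, vocab_size, hidden, num_domains, num_intents, num_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.shared_enc = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.intent_heads = nn.ModuleList(
            [nn.Linear(2 * hidden, num_intents) for _ in range(num_domains)])
        self.slot_heads = nn.ModuleList(
            [nn.Linear(2 * hidden, num_slots) for _ in range(num_domains)])

    def forward(self, tokens, domain_id: int):
        h, _ = self.shared_enc(self.embed(tokens))   # (batch, seq_len, 2*hidden)
        intent_logits = self.intent_heads[domain_id](h.mean(dim=1))
        slot_logits = self.slot_heads[domain_id](h)  # per-token slot scores
        return intent_logits, slot_logits
```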
Recent joint multiple intent detection and slot filling models employ label embeddings to achieve the semantics-label interactions. However, they treat all labels and label embeddings as uncorrelated individuals, ignoring the dependencies among them. Besides, they conduct the decoding for the two tasks independently, without leveraging the correlations between them. Therefore, in this paper, we first construct a Heterogeneous Label Graph (HLG) containing two kinds of topologies: (1) statistical dependencies based on labels' co-occurrence patterns and hierarchies in slot labels; (2) rich relations among the label nodes. Then we propose a novel model termed ReLa-Net. It can capture beneficial correlations among the labels from HLG. The label correlations are leveraged to enhance semantics-label interactions. Moreover, we also propose the label-aware inter-dependent decoding mechanism to further exploit the label correlations for decoding. Experimental results show that our ReLa-Net significantly outperforms previous models. Remarkably, ReLa-Net surpasses the previous best model by over 20\% in terms of overall accuracy on MixATIS dataset.
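The statistical-dependency topology can be derived from label co-occurrence counts in the training set. A hedged sketch (the conditional-probability normalization and the threshold are our assumptions, not the paper's exact construction):

```python
import itertools
import numpy as np

def cooccurrence_adjacency(samples, num_labels, threshold=0.1):
    """Statistical-dependency edges from label co-occurrence in training data.
    `samples` is an iterable of label-id sets, one per utterance."""
    counts = np.zeros((num_labels, num_labels))
    occur = np.zeros(num_labels)
    for labels in samples:
        for lab in labels:
            occur[lab] += 1
        for a, b in itertools.permutations(labels, 2):
            counts[a, b] += 1
    probs = counts / np.maximum(occur[:, None], 1)   # P(label b | label a)
    return (probs >= threshold).astype(np.float32)   # thresholded into edges
```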
Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain-text-based SLU, which takes a user utterance as input and generates its corresponding semantic frame (e.g., intent and slots). Unfortunately, this simple setting may fail in complex real-world scenarios when an utterance is semantically ambiguous, which cannot be handled by text-based SLU models. In this paper, we first introduce a new and important task, profile-based spoken language understanding (ProSLU), which requires a model to rely not only on the plain text but also on supporting profile information to predict the correct intents and slots. To this end, we further introduce a large-scale Chinese dataset with over 5K utterances and their corresponding supporting profile information (Knowledge Graph (KG), User Profile (UP), and Context Awareness (CA)). In addition, we evaluate several state-of-the-art baseline models and explore a multi-level knowledge adapter to effectively incorporate profile information. Experimental results show that all existing text-based SLU models fail to work when utterances are semantically ambiguous, while our proposed framework can effectively fuse the supporting information for sentence-level intent detection and token-level slot filling. Finally, we summarize key challenges and provide new perspectives for future directions, hoping to facilitate the research.
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task. To better comprehend long and complicated sentences and obtain accurate aspect-specific information, linguistic and commonsense knowledge are generally required for this task. However, most methods employ complicated and inefficient approaches to incorporate external knowledge, e.g., directly searching graph nodes. In addition, the complementarity between external knowledge and linguistic information has not been thoroughly studied. To this end, we propose a knowledge graph augmented network (KGAN), which aims to effectively incorporate external knowledge with explicit syntactic and contextual information. In particular, KGAN captures sentiment feature representations from multiple different perspectives, i.e., context-, syntax-, and knowledge-based. First, KGAN learns the contextual and syntactic representations in parallel to fully extract the semantic features. Then, KGAN integrates knowledge graphs into the embedding space, based on which the aspect-specific knowledge representations are further obtained via an attention mechanism. Finally, we propose a hierarchical fusion module to complement these multi-view representations in a local-to-global manner. Extensive experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN. Notably, with the help of the pre-trained RoBERTa model, KGAN achieves a new record of state-of-the-art performance.
Spoken language understanding (SLU), a core component of task-oriented dialogue systems, is expected to keep inference latency low in the face of impatient human users. Existing work increases inference speed by designing non-autoregressive models for single-turn SLU tasks, but fails to apply to multi-turn SLU when facing dialogue history. An intuitive idea is to concatenate all the historical utterances and directly utilize a non-autoregressive model. However, this approach seriously misses salient historical information and suffers from an incoordination problem. To overcome those shortcomings, we propose a novel model for multi-turn SLU named Salient History Attention with Layer-Refined Transformer (SHA-LRT), which consists of an SHA module, a Layer-Refined Mechanism (LRM), and a Slot Label Generation (SLG) task. SHA captures salient historical information for the current dialogue from both historical utterances and results via a well-designed history-attention mechanism. LRM predicts preliminary SLU results from the Transformer's intermediate states and utilizes them to guide the final prediction, while SLG obtains sequential dependency information for the non-autoregressive encoder. Experiments on public datasets indicate that our model significantly improves multi-turn SLU performance (by 17.5% on overall accuracy) while speeding up (by nearly 15x) the inference process over the state-of-the-art baseline, and that it is also effective on single-turn SLU tasks.
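A minimal sketch of the history-attention idea: the current turn's token states attend over per-utterance history vectors (the head count and residual connection are illustrative assumptions):

```python
import torch
import torch.nn as nn

class SalientHistoryAttention(nn.Module):
    """Current-turn tokens attend over encoded historical utterances/results."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, cur: torch.Tensor, hist: torch.Tensor) -> torch.Tensor:
        # cur:  (batch, seq_len, dim) current-turn token states
        # hist: (batch, num_turns, dim) one vector per historical utterance/result
        salient, _ = self.attn(query=cur, key=hist, value=hist)
        return cur + salient   # history-enriched token states
```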
We present the first empirical study investigating the influence of disfluency detection on the downstream tasks of intent detection and slot filling. We perform this study for Vietnamese, a low-resource language that has no previous study as well as no public dataset available for exploration. First, we extend the fluent Vietnamese intent detection and slot filling dataset PhoATIS by manually adding contextual disfluencies and annotating them. Then, we conduct experiments using strong baselines for disfluency detection and joint intent detection and slot filling, which are based on pre-trained language models. We find that: (i) disfluencies negatively impact the performance of the downstream intent detection and slot filling tasks, and (ii) in the disfluency setting, the pre-trained multilingual language model XLM-R helps produce better intent detection and slot filling performance than the pre-trained monolingual language model PhoBERT, which is opposite to what is usually found in the fluency setting.
The data sparsity problem is a key challenge in natural language understanding (NLU), especially for a new target domain. By training an NLU model in source domains and applying the model to an arbitrary target domain directly (even without fine-tuning), few-shot NLU is vital to mitigating the data scarcity problem. In this paper, we propose to improve prototypical networks with vector projection distance and abstract triangular Conditional Random Fields (CRFs) for few-shot NLU. The vector projection distance exploits the projection of contextual word embeddings onto label vectors as the word-label similarity, which is equivalent to a normalized linear model. The abstract triangular CRF learns domain-agnostic label transitions for the joint intent classification and slot filling tasks. Extensive experiments demonstrate that our proposed methods can significantly surpass strong baselines. Specifically, our approach can achieve a new state of the art on two few-shot NLU benchmarks (FewJoint and SNIPS) in Chinese and English without fine-tuning on the target domains.
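The vector projection distance itself is easy to state in code. A sketch of the similarity computation (the paper additionally describes the equivalence to a normalized linear model; any bias handling is omitted here):

```python
import torch

def vector_projection_similarity(word_emb, label_vecs):
    """Word-label similarity as the projection of each word embedding onto
    each label vector: sim(x, c) = x . c / ||c||.
    word_emb:   (batch, seq_len, dim)
    label_vecs: (num_labels, dim)
    returns:    (batch, seq_len, num_labels)
    """
    unit = label_vecs / label_vecs.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return word_emb @ unit.T   # equivalent to a linear layer with unit-norm rows
```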
Understanding the intention of the user and recognizing semantic entities from a sentence, i.e., natural language understanding (NLU), is an upstream task of many natural language processing tasks. One of the main challenges is to collect a sufficient amount of annotated data to train a model. Existing research on text augmentation does not take entities into full consideration and therefore performs poorly for NLU tasks. To solve this problem, we propose a novel NLP data augmentation technique, Entity Aware Data Augmentation (EADA), which applies a tree structure, the Entity Aware Syntax Tree (EAST), to represent a sentence combined with attention on entities. Our EADA technique automatically constructs an EAST from a small amount of annotated data and then generates a large number of training instances for intent detection and slot filling. Experimental results on four datasets show that the proposed technique significantly outperforms existing data augmentation methods in terms of both accuracy and generalization ability.
Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. However, the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results have demonstrated that ERNIE achieves significant improvements on various knowledge-driven tasks, and meanwhile is comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE.
Pre-training methods with contrastive learning objectives have shown remarkable success in dialogue understanding tasks. However, current contrastive learning solely considers the self-augmented dialogue samples as positive samples and treats all other dialogue samples as negative ones, which enforces dissimilar representations even for dialogues that are semantically related. In this paper, we propose SPACE-2, a tree-structured pre-trained conversation model, which learns dialogue representations from limited labeled dialogues and large-scale unlabeled dialogue corpora via semi-supervised contrastive pre-training. Concretely, we first define a general semantic tree structure (STS) to unify the inconsistent annotation schemes across different dialogue datasets, so that the rich structural information stored in all labeled data can be exploited. Then we propose a novel multi-view score function to increase the relevance of all possible dialogues that share similar STSs and only push away other completely different dialogues during supervised contrastive pre-training. To fully exploit unlabeled dialogues, a basic self-supervised contrastive loss is also added to refine the learned representations. Experiments show that our method can achieve new state-of-the-art results on the DialoGLUE benchmark, which consists of seven datasets and four popular dialogue understanding tasks. For reproducibility, we release the code and data at https://github.com/alibabaresearch/damo-convai/tree/main/space-2.
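In the spirit of the multi-view score function, here is a hedged sketch of a supervised contrastive loss whose positives are softly weighted by tree similarity; SPACE-2's actual objective is more elaborate than this:

```python
import torch

def soft_supervised_contrastive_loss(z, sts_sim, tau=0.1):
    """Contrastive loss where positives are weighted by semantic-tree similarity.
    z:       (batch, dim) L2-normalized dialogue representations
    sts_sim: (batch, batch) similarity in [0, 1] between the dialogues' STSs
    """
    logits = (z @ z.T) / tau
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    log_prob = logits - torch.logsumexp(
        logits.masked_fill(~off_diag, float("-inf")), dim=1, keepdim=True)
    weights = sts_sim * off_diag                       # soft positive weights
    weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
    return -(weights * log_prob * off_diag).sum(dim=1).mean()
```

Dialogues with similar trees are pulled together in proportion to their similarity, while completely unrelated ones are pushed apart.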
Named Entity Recognition and Intent Classification are among the most important subfields of Natural Language Processing. Recent research has led to the development of faster, more sophisticated, and more efficient models to tackle the problems posed by those two tasks. In this work we explore the effectiveness of two separate families of deep learning networks for those tasks: Bidirectional Long Short-Term Memory networks and Transformer-based networks. The models were trained and tested on the ATIS benchmark dataset for both the English and Greek languages. The purpose of this paper is to present a comparative study of the two groups of networks for both languages and showcase the results of our experiments. The models, representing the current state of the art, yielded impressive results and achieved high performance.
In-game toxic language has become a hot potato in the gaming industry and community. Several online game toxicity analysis frameworks and models have been proposed. However, detecting toxicity remains challenging due to the nature of in-game chat, which is extremely short. In this paper, we describe how the in-game toxic language shared task has been established using real-world in-game chat data. In addition, we propose and introduce a model/framework for toxic language token tagging (slot filling) from in-game chat. The data and code will be released.
The medical relation extraction (MRE) task aims to extract relations between entities in medical texts. Traditional relation extraction methods achieve promising results by exploring syntactic information, e.g., dependency trees. However, the quality of the 1-best dependency trees for medical texts produced by out-of-domain parsers is relatively limited, so the performance of medical relation extraction methods may degenerate. To this end, we propose a method to jointly model semantic and syntactic information in medical texts based on causal explanation theory. We generate dependency forests, which consist of 1-best dependency trees. Then, a task-specific causal explainer is adopted to prune the dependency forests, which are further fed into a designed graph convolutional network to learn the corresponding representation for downstream tasks. Empirically, various comparisons on benchmark medical datasets demonstrate the effectiveness of our model.
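A sketch of one graph-convolution layer over a pruned dependency forest, represented here as a soft adjacency matrix of retained edge weights (the causal explainer that produces the pruning is not shown):

```python
import torch
import torch.nn as nn

class ForestGCNLayer(nn.Module):
    """Graph convolution over a dependency forest given as edge weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, seq_len, dim) token representations
        # adj: (batch, seq_len, seq_len) edge weights kept after pruning
        deg = adj.sum(dim=-1, keepdim=True).clamp_min(1.0)   # row normalization
        return torch.relu(self.linear((adj / deg) @ h)) + h  # residual update
```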
Span extraction, which aims to extract text spans (such as words or phrases) from plain text, is a fundamental process in information extraction. Recent works introduce label knowledge to enhance the text representation by formalizing the span extraction task into a question answering problem (QA formalization), achieving state-of-the-art performance. However, QA formalization does not fully exploit the label knowledge and suffers from low training/inference efficiency. To address those problems, we introduce a new paradigm to integrate label knowledge and further propose a novel model to explicitly and efficiently integrate label knowledge into text representations. Specifically, it encodes texts and label annotations independently, and then integrates label knowledge into the text representation with an elaborately designed semantics fusion module. We conduct extensive experiments on three typical span extraction tasks: flat NER, nested NER, and event detection. The empirical results show that our method achieves state-of-the-art performance on four benchmarks, and meanwhile reduces training time and inference time by 76% and 77%, respectively, compared with the QA formalization paradigm. Our code and data are available at https://github.com/Akeepers/LEAR.
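A minimal sketch of the encode-independently-then-fuse idea, using cross-attention from text tokens to label representations; the head count and residual are assumptions, and LEAR's actual fusion module differs in its details:

```python
import torch
import torch.nn as nn

class LabelFusion(nn.Module):
    """Fuse independently encoded label representations into text representations."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_h: torch.Tensor, label_h: torch.Tensor) -> torch.Tensor:
        # text_h:  (batch, seq_len, dim)    from the text encoder
        # label_h: (batch, num_labels, dim) from the label encoder
        fused, _ = self.attn(query=text_h, key=label_h, value=label_h)
        return text_h + fused   # label-enhanced token representations
```

Because the labels are encoded once rather than concatenated to every query, this avoids the repeated encoding that makes QA formalization slow.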
Lexicon information and pre-trained models, such as BERT, have been combined to explore Chinese sequence labeling tasks due to their respective strengths. However, existing methods fuse lexicon features via a shallow and randomly initialized sequence layer and do not integrate them into the bottom layers of BERT. In this paper, we propose Lexicon Enhanced BERT (LEBERT) for Chinese sequence labeling, which integrates external lexicon knowledge into the BERT layers directly via a Lexicon Adapter layer. Compared with existing methods, our model facilitates deep lexicon knowledge fusion at the lower layers of BERT. Experiments on ten Chinese datasets covering three tasks, including named entity recognition, word segmentation, and part-of-speech tagging, show that LEBERT achieves state-of-the-art results.
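A hedged sketch of a lexicon adapter: each character attends over its matched lexicon words and adds the result back into the BERT hidden states (the projection and the gate-free residual are our assumptions, not LEBERT's exact design):

```python
import torch
import torch.nn as nn

class LexiconAdapter(nn.Module):
    """Inject matched lexicon-word features into BERT hidden states."""
    def __init__(self, bert_dim: int, word_dim: int):
        super().__init__()
        self.proj = nn.Linear(word_dim, bert_dim)

    def forward(self, h, word_feats, word_mask):
        # h:          (batch, seq_len, bert_dim) hidden states of one BERT layer
        # word_feats: (batch, seq_len, num_matches, word_dim) words matched per char
        # word_mask:  (batch, seq_len, num_matches); 1 for real matches
        v = self.proj(word_feats)                        # (B, T, W, bert_dim)
        scores = (v @ h.unsqueeze(-1)).squeeze(-1)       # (B, T, W) char-word scores
        scores = scores.masked_fill(word_mask == 0, float("-inf"))
        # nan_to_num keeps chars with no matched word at zero lexicon context
        attn = torch.nan_to_num(torch.softmax(scores, dim=-1)).unsqueeze(-1)
        return h + (attn * v).sum(dim=2)                 # lexicon-enhanced states
```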
Incorporating prior knowledge into pre-trained language models has proven effective for knowledge-driven NLP tasks, such as entity typing and relation extraction. Current pre-training procedures usually inject external knowledge into models by using knowledge masking, knowledge fusion, and knowledge replacement. However, the factual information contained in the input sentences has not been fully mined, and the injected external knowledge has not been strictly verified. As a result, the contextual information cannot be fully exploited and extra noise will be introduced, or the amount of injected knowledge is limited. To address these issues, we propose MLRIP, which modifies the knowledge masking strategies proposed by ERNIE-Baidu and introduces a two-stage entity replacement strategy. Extensive experiments with comprehensive analyses illustrate the superiority of MLRIP over BERT-based models in military knowledge-driven NLP tasks.