We present the first empirical study investigating the influence of disfluency detection on the downstream tasks of intent detection and slot filling. We perform this study for Vietnamese, a low-resource language with no prior work and no public dataset available for exploration. First, we extend the fluent Vietnamese intent detection and slot filling dataset PhoATIS by manually adding contextual disfluencies and annotating them. Then, we conduct experiments using strong baselines for disfluency detection and joint intent detection and slot filling, which are based on pre-trained language models. We find that: (i) disfluencies negatively affect the performance of the downstream intent detection and slot filling tasks, and (ii) in the disfluency setting, the pre-trained multilingual language model XLM-R helps produce better intent detection and slot filling performance than the pre-trained monolingual language model PhoBERT, which is the opposite of what is generally found in the fluent setting.
Intent classification and slot filling are two core tasks in natural language understanding (NLU). The interrelated nature of the two tasks means that joint models often outperform single-task designs. One promising solution, based on BERT (Bidirectional Encoder Representations from Transformers), achieves joint optimization of the two tasks. BERT adopts wordpiece tokenization, splitting each input token into multiple sub-tokens, which causes a mismatch between token and label lengths. Previous methods use the hidden state corresponding to the first sub-token as input to the classifier, which limits performance improvement since some hidden semantic information is discarded during fine-tuning. To address this issue, we propose a novel joint model based on BERT, which explicitly models the features of the multiple sub-tokens produced by wordpiece tokenization, thereby generating context features that contribute to slot filling. Specifically, we encode the hidden states corresponding to multiple sub-tokens into a context vector via an attention mechanism. Then, we feed each context vector into the slot filling encoder, which preserves the integrity of the sentence. Experimental results demonstrate that our proposed model achieves significant improvements in intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on two public benchmark datasets. In particular, the F1 score of slot filling on the ATIS dataset improves from 96.1 to 98.2 (2.1 points absolute).
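As a rough illustration of the sub-token attention described above (a minimal sketch, not the authors' released code; the additive scoring layer and dimensions are assumptions), the hidden states of all wordpieces belonging to one word can be pooled into a single context vector before slot classification:

```python
# Sketch: pool the hidden states of a word's wordpiece sub-tokens into one
# context vector via attention, so no sub-token information is discarded.
import torch
import torch.nn as nn


class SubTokenAttentionPool(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)  # attention scorer over sub-tokens

    def forward(self, sub_token_states: torch.Tensor) -> torch.Tensor:
        # sub_token_states: (num_sub_tokens, hidden_size) for a single word
        weights = torch.softmax(self.score(sub_token_states), dim=0)  # (num_sub_tokens, 1)
        return (weights * sub_token_states).sum(dim=0)                # (hidden_size,)


# Toy usage: a word split into 3 wordpieces by the tokenizer.
hidden = torch.randn(3, 768)      # BERT-base hidden states for the 3 sub-tokens
pool = SubTokenAttentionPool(768)
context_vector = pool(hidden)     # would be fed to the slot-filling classifier
print(context_vector.shape)       # torch.Size([768])
```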
In this paper, we present BARTpho in two versions, BARTpho-syllable and BARTpho-word, the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. BARTpho uses the "large" architecture and the sequence-to-sequence denoising pre-training scheme, and is therefore especially suited to generative NLP tasks. We conduct experiments comparing BARTpho with its competitor mBART on the downstream task of Vietnamese text summarization, showing that in both automatic and human evaluations BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks. Our BARTpho models are publicly available at: https://github.com/vinairesearch/bartpho
The practical need to develop task-oriented dialogue assistants requires the ability to understand many languages. Novel benchmarks for multilingual natural language understanding (NLU) contain monolingual sentences in several languages, annotated with intents and slots. In this setting, models for cross-lingual transfer show remarkable performance on joint intent recognition and slot filling. However, existing benchmarks lack code-switched utterances, which are difficult to collect and label due to the complexity of their grammatical structure. The evaluation of NLU models therefore appears biased and limited, since code-switching is left out of scope. Our work adopts recognized methods for generating plausible and natural-sounding code-switched utterances and uses them to create a synthetic code-switched test set. Based on our experiments, we report that state-of-the-art NLU models are unable to handle code-switching: at worst, performance, evaluated by semantic accuracy, drops from around 80% to as low as 15%. Furthermore, we show that pre-training on synthetic code-mixed data helps maintain performance on the proposed test set at a level comparable to that on monolingual data. Finally, we analyze different language pairs and show that the closer the languages are, the better the NLU model handles their alternation. This is in line with the common understanding of how multilingual models conduct transfer between languages.
We present LINGUIST, a method for generating annotated data for intent classification and slot tagging (IC+ST) by fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a flexible instruction prompt. In a 10-shot novel intent setting on the SNIPS dataset, LINGUIST surpasses state-of-the-art approaches (back-translation and example extrapolation) by a wide margin, showing absolute improvements on the target intents of +1.9 points in IC recall and +2.5 points in ST F1 score. In the zero-shot cross-lingual setting on the mATIS++ dataset, LINGUIST outperforms a strong machine-translation-with-slot-alignment baseline by +4.14 points absolute in ST F1 score across 6 languages, while matching its performance on IC. Finally, we validate our results on an internal large-scale multilingual dataset for conversational agent IC+ST and show significant improvements over a baseline that uses back-translation, paraphrasing, and slot catalog resampling. To our knowledge, we are the first to demonstrate instruction fine-tuning of a large-scale seq2seq model to control the output of multilingual intent and slot labeled data generation.
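The "flexible instruction prompt" is not specified verbatim in this summary, so the following is a purely hypothetical sketch of what an instruction for generating IC+ST data could look like; the format, intent name, slot notation, and wildcard convention are illustrative assumptions only:

```python
# Hypothetical instruction-prompt construction for labeled-data generation.
intent = "PlayMusic"
slots = {"artist": "*", "playlist": "road trip"}  # "*" asks the model to invent a value

prompt = (
    f"Generate 3 annotated utterances in Spanish for intent [{intent}] with slots "
    + ", ".join(f"[{k}={v}]" for k, v in slots.items())
)
print(prompt)
# Generate 3 annotated utterances in Spanish for intent [PlayMusic] with slots [artist=*], [playlist=road trip]
```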
Slot filling and intent detection are the backbone of conversational agents such as voice assistants, and are active areas of research. Even though state-of-the-art techniques achieve impressive performance on publicly available benchmarks, their ability to generalize to realistic scenarios has yet to be demonstrated. In this work, we present NATURE, a set of simple spoken-language-oriented transformations applied to the evaluation sets of datasets to introduce human spoken-language variation while preserving the semantics of an utterance. We apply NATURE to common slot filling and intent detection benchmarks and demonstrate that simple perturbations of the standard evaluation sets can significantly degrade model performance. Through our experiments, we show that when NATURE operators are applied to the evaluation sets of popular benchmarks, model accuracy can drop by up to 40%.
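As a hedged illustration of this kind of spoken-language-oriented transformation (the specific operator, filler list, and label scheme are assumptions, not the NATURE operator set), a perturbation can be applied to an evaluation utterance while keeping its slot annotation aligned:

```python
# Sketch: insert a filler word (labelled O) while keeping BIO slot labels aligned.
import random

FILLERS = ["uh", "um", "like"]

def insert_filler(tokens, labels, rng=random):
    """Insert a filler token at a random position, preserving label alignment."""
    i = rng.randrange(len(tokens) + 1)
    return tokens[:i] + [rng.choice(FILLERS)] + tokens[i:], labels[:i] + ["O"] + labels[i:]

tokens = ["play", "jazz", "music"]
labels = ["O", "B-genre", "O"]
print(insert_filler(tokens, labels))
```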
Multi-intent detection and slot filling joint models are gaining increasing traction since they are closer to complicated real-world scenarios. However, existing approaches (1) focus on identifying implicit correlations between utterances and one-hot encoded labels in both tasks while ignoring explicit label characteristics; (2) directly incorporate multi-intent information for each token, which could lead to incorrect slot prediction due to the introduction of irrelevant intent. In this paper, we propose a framework termed DGIF, which first leverages the semantic information of labels to give the model additional signals and enriched priors. Then, a multi-grain interactive graph is constructed to model correlations between intents and slots. Specifically, we propose a novel approach to construct the interactive graph based on the injection of label semantics, which can automatically update the graph to better alleviate error propagation. Experimental results show that our framework significantly outperforms existing approaches, obtaining a relative improvement of 13.7% over the previous best model on the MixATIS dataset in overall accuracy.
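A minimal sketch of the label-semantics injection idea, under assumed shapes and without the interactive-graph machinery (an illustration, not the DGIF implementation): label names are embedded and each token attends over them to obtain label-aware features that can then feed the intent-slot graph:

```python
# Sketch: token representations attend over label-name embeddings.
import torch
import torch.nn as nn

hidden_size, num_labels, seq_len = 256, 10, 6
token_states = torch.randn(seq_len, hidden_size)          # encoder output for one utterance
label_embeddings = torch.randn(num_labels, hidden_size)   # embeddings of label *names*

attn = nn.MultiheadAttention(hidden_size, num_heads=4, batch_first=True)
label_aware, _ = attn(
    token_states.unsqueeze(0),       # queries: tokens
    label_embeddings.unsqueeze(0),   # keys:    label semantics
    label_embeddings.unsqueeze(0),   # values:  label semantics
)
print(label_aware.shape)  # torch.Size([1, 6, 256]); label-aware token features
```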
The data sparsity problem is a key challenge in natural language understanding (NLU), especially for new target domains. Few-shot NLU, which trains an NLU model on source domains and applies it directly to arbitrary target domains (even without fine-tuning), is crucial for mitigating the data scarcity problem. In this paper, we propose to improve prototypical networks with a vector projection distance and an abstract triangular Conditional Random Field (CRF) for few-shot NLU. The vector projection distance uses the projection of contextual word embeddings onto label vectors as the word-label similarity, which is equivalent to a normalized linear model. The abstract triangular CRF learns domain-agnostic label transitions for the joint intent classification and slot filling tasks. Extensive experiments demonstrate that our proposed methods significantly outperform strong baselines. Specifically, our approach achieves a new state-of-the-art on two few-shot NLU benchmarks (FewJoint and SNIPS) in Chinese and English, without fine-tuning on the target domains.
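The vector projection distance can be illustrated in a few lines (a sketch following the description above; the exact bias term used by the authors is not reproduced here):

```python
# Sketch: word-label similarity as the projection of a contextual word embedding
# onto length-normalised label (prototype) vectors, i.e. a normalized linear model.
import torch

def projection_similarity(word_emb: torch.Tensor, label_vecs: torch.Tensor) -> torch.Tensor:
    # word_emb: (hidden,), label_vecs: (num_labels, hidden)
    unit_labels = label_vecs / label_vecs.norm(dim=-1, keepdim=True)
    return unit_labels @ word_emb   # (num_labels,) projection lengths

word = torch.randn(64)
labels = torch.randn(7, 64)         # prototype vectors built from the support set
print(projection_similarity(word, labels).shape)  # torch.Size([7])
```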
In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate\footnote{for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is done.} our datasets. To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.
Slot filling and intent detection are two fundamental tasks in the field of natural language understanding. Because of the strong correlation between the two tasks, previous work has sought to model them jointly, via multi-task learning or by designing feature-interaction modules, to improve the performance of each task. However, none of the existing approaches consider the relevance between the structural information of a sentence and the label semantics of the two tasks. The intent and semantic components of an utterance depend on the syntactic elements of the sentence. In this paper, we investigate a multi-grained label refinement network that utilizes the dependency structure and label semantic embeddings. To enhance the syntactic representation, we introduce the dependency structure of the sentence into our model. To capture the semantic dependency between syntactic information and task labels, we combine task-specific features with the corresponding label embeddings via an attention mechanism. Experimental results show that our model achieves competitive performance on two public datasets.
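As an illustrative sketch of injecting the dependency structure (toy parse and layer sizes are assumptions; this is not the authors' released model), one graph-convolution step over the dependency adjacency matrix yields syntax-aware token features:

```python
# Sketch: a single graph-convolution step over a dependency adjacency matrix.
import torch
import torch.nn as nn

seq_len, hidden_size = 5, 128
token_states = torch.randn(seq_len, hidden_size)

# Undirected dependency edges for a toy parse, plus self-loops.
edges = [(1, 0), (1, 2), (1, 4), (4, 3)]
adj = torch.eye(seq_len)
for h, d in edges:
    adj[h, d] = adj[d, h] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)   # simple row normalisation

gcn = nn.Linear(hidden_size, hidden_size)
syntax_aware = torch.relu(gcn(adj @ token_states))
print(syntax_aware.shape)  # torch.Size([5, 128])
```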
Task-oriented dialogue (TOD) systems have been applied in a range of domains to support human users to achieve specific goals. Systems are typically constructed for a single domain or language and do not generalise well beyond this. Their extension to other languages in particular is restricted by the lack of available training data for many of the world's languages. To support work on Natural Language Understanding (NLU) in TOD across multiple languages and domains simultaneously, we constructed MULTI3NLU++, a multilingual, multi-intent, multi-domain dataset. MULTI3NLU++ extends the English-only NLU++ dataset to include manual translations into a range of high, medium and low resource languages (Spanish, Marathi, Turkish and Amharic), in two domains (banking and hotels). MULTI3NLU++ inherits the multi-intent property of NLU++, where an utterance may be labelled with multiple intents, providing a more realistic representation of a user's goals and aligning with the more complex tasks that commercial systems aim to model. We use MULTI3NLU++ to benchmark state-of-the-art multilingual language models as well as Machine Translation and Question Answering systems for the NLU task of intent detection for TOD systems in the multilingual setting. The results demonstrate the challenging nature of the dataset, particularly in the low-resource language setting.
Spoken language understanding has been addressed as a supervised learning problem, in which a set of training data is available for each domain. However, annotating data for every domain is economically costly and does not scale, so we should make full use of the information across all domains. One existing approach solves the problem through multi-domain learning, using shared parameters trained jointly across domains. We propose to improve the parameterization of this method by using domain-specific and task-specific model parameters to improve knowledge learning and transfer. Experiments on 5 domains show that our model is more effective for multi-domain SLU and obtains the best results. In addition, when adapting to a new domain with little data, it demonstrates better transferability, outperforming the previous best model by 12.4%.
Progress in speech processing has been facilitated by shared datasets and benchmarks. Historically these have focused on automatic speech recognition (ASR), speaker identification, or other lower-level tasks. Interest has been growing in higher-level spoken language understanding tasks, including using end-to-end models, but there are fewer annotated datasets for such tasks. At the same time, recent work has shown the possibility of pre-training generic representations and then fine-tuning them for several tasks using relatively little labeled data. We propose to create a suite of benchmark tasks for Spoken Language Understanding Evaluation (SLUE), consisting of limited-size labeled training sets and corresponding evaluation sets. This resource will allow the research community to track progress, evaluate pre-trained representations on higher-level tasks, and study open questions such as the utility of pipeline versus end-to-end approaches. We present the first phase of the SLUE benchmark suite, consisting of named entity recognition, sentiment analysis, and ASR on the corresponding datasets. We focus on naturally produced (not read or synthesized) speech and freely available datasets. We provide new transcriptions and annotations of subsets of the VoxCeleb and VoxPopuli datasets, evaluation metrics and results for baseline models, and an open-source toolkit for reproducing the baselines and evaluating new models.
Token-free approaches have been successfully applied to a series of word- and span-level tasks. In this work, we compare a byte-level (ByT5) and a wordpiece-based (mT5) sequence-to-sequence model on the 51 languages of the MASSIVE multilingual semantic parsing dataset. We examine multiple experimental settings: (i) zero-shot, (ii) full gold data and (iii) zero-shot with synthetic data. By leveraging a state-of-the-art label projection method for machine-translated examples, we are able to reduce the gap in exact match accuracy to only 5 points with respect to a model trained on gold data from all the languages. We additionally provide insights on the cross-lingual transfer of ByT5 and show how the model compares with respect to mT5 across all parameter sizes.
We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on the publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.
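A hedged sketch of this QA-style decomposition (question wordings and the call_llm helper are hypothetical placeholders, not the ZEROTOP prompts): one abstractive question recovers the top-level intent, one extractive question per slot fills the frame, and "unanswerable" answers are mapped to missing slots:

```python
# Sketch: decompose zero-shot parsing into abstractive/extractive QA prompts.
def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real LLM client; returns 'unanswerable' here."""
    return "unanswerable"

def zero_shot_parse(utterance: str, slot_questions: dict) -> dict:
    intent = call_llm(f"{utterance}\nWhat does the user want to do?")
    frame = {"intent": intent, "slots": {}}
    for slot, question in slot_questions.items():
        answer = call_llm(f"{utterance}\n{question} Answer 'unanswerable' if not mentioned.")
        if answer.strip().lower() != "unanswerable":
            frame["slots"][slot] = answer   # keep only answerable slots
    return frame

print(zero_shot_parse("wake me up at 7 am tomorrow",
                      {"time": "At what time?", "date": "On which date?"}))
```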
Intent classification (IC) and slot labeling (SL) models, which form the basis of dialogue systems, often encounter noisy data in real-world environments. In this work, we investigate how robust IC/SL models are to noisy data. We identify seven common noise types in production human-to-bot conversations (abbreviations, casing, misspellings, morphological variants, paraphrases, punctuation, and synonyms), for which we collect and publicly release a test suite. On this test suite, we show that common noise types substantially degrade the IC accuracy and SL F1 of state-of-the-art BERT-based IC/SL models. By leveraging cross-noise robustness transfer, where training on one noise type improves robustness to another, we design aggregate data-augmentation approaches that increase model performance across all seven noise types by +10.8% IC accuracy and +15 SL F1 points on average. To our knowledge, this is the first work to demonstrate a single IC/SL model that is robust to a wide range of noise phenomena.
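As a rough illustration of synthetic noising in this spirit (an assumption-laden sketch, not the authors' augmentation code), two of the listed noise types, casing and punctuation, can be applied to a training utterance:

```python
# Sketch: casing and punctuation noise applied to an utterance string.
import random
import string

def add_casing_noise(text: str, rng=random) -> str:
    """Randomly lowercase or uppercase the whole utterance."""
    return text.lower() if rng.random() < 0.5 else text.upper()

def drop_punctuation(text: str) -> str:
    """Remove all ASCII punctuation marks."""
    return text.translate(str.maketrans("", "", string.punctuation))

utterance = "Book a table at Luigi's, please."
print(drop_punctuation(add_casing_noise(utterance)))
```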
We present MASSIVE, a large-scale dataset for slot filling, intent classification, and virtual assistant evaluation, built on the Amazon SLU Resource Package (SLURP). MASSIVE contains 1M realistic, parallel, labeled virtual assistant utterances spanning 51 languages, 18 domains, 60 intents, and 55 slots. MASSIVE was created by having professional translators localize the English-only SLURP dataset into 50 typologically diverse languages from 29 genera. We also present modeling results on XLM-R and mT5, including exact match accuracy, intent classification accuracy, and slot-filling F1 score. We have publicly released our dataset, modeling code, and models.
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain-text-based SLU, which takes a user utterance as input and generates its corresponding semantic frame (e.g., intent and slots). Unfortunately, such a simple setting may fail in complex real-world scenarios when an utterance is semantically ambiguous, which text-based SLU models cannot resolve. In this paper, we first introduce a new and important task, profile-based spoken language understanding (ProSLU), which requires a model to rely not only on the plain text but also on supporting profile information to predict the correct intents and slots. To this end, we further introduce a large-scale Chinese dataset with over 5K utterances and their corresponding supporting profile information (Knowledge Graph (KG), User Profile (UP), and Context Awareness (CA)). In addition, we evaluate several state-of-the-art baseline models and explore a multi-level knowledge adapter to effectively incorporate profile information. Experimental results show that all existing text-based SLU models fail to work when utterances are semantically ambiguous, and that our proposed framework can effectively fuse the supporting information for sentence-level intent detection and token-level slot filling. Finally, we summarize key challenges and offer new perspectives on future directions, in the hope of facilitating further research.
Despite the recent increase in the performance of automatic speech recognition (ASR) methods, such approaches do not ensure proper casing and punctuation in their outputs. This problem has a significant impact on the comprehension of both natural language processing (NLP) algorithms and human readers. Capitalization and punctuation restoration are therefore necessary as a pre-processing pipeline for raw text input. For low-resource languages such as Vietnamese, public datasets for this task are scarce. In this paper, we contribute a public dataset for Vietnamese capitalization and punctuation recovery, and propose a joint model for the two tasks named JointCapPunc. Experimental results on the Vietnamese dataset show the effectiveness of our joint model compared with single-task models and a previous joint learning model. We publicly release the dataset and the implementation of our model at https://github.com/anhtunguyen98/jointcappund
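A minimal sketch of such a joint model, under assumed layer sizes and label sets (illustrative only, not the released JointCapPunc implementation): a shared encoder feeds two token-level heads, one for capitalization and one for punctuation:

```python
# Sketch: shared encoder with two token-level classification heads.
import torch
import torch.nn as nn

class JointCapPuncSketch(nn.Module):
    def __init__(self, vocab_size=10000, hidden=256, cap_labels=2, punc_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.cap_head = nn.Linear(2 * hidden, cap_labels)    # e.g. keep / capitalize
        self.punc_head = nn.Linear(2 * hidden, punc_labels)  # e.g. none , . ?

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.cap_head(states), self.punc_head(states)

model = JointCapPuncSketch()
cap_logits, punc_logits = model(torch.randint(0, 10000, (1, 12)))
print(cap_logits.shape, punc_logits.shape)  # torch.Size([1, 12, 2]) torch.Size([1, 12, 4])
```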