Recently, a class of task-oriented dialog (TOD) datasets has been collected through Wizard-of-Oz simulated games. However, the Wizard-of-Oz data are in fact simulated data and thus fundamentally different from real-life conversations, which are more noisy and casual. Recently, the SereTOD challenge was organized and released the MobileCS dataset, which consists of real-world dialog transcripts between real users and customer-service staff from China Mobile. Based on the MobileCS dataset, the SereTOD challenge has two tasks, not only evaluating the construction of the dialog system itself, but also examining information extraction from dialog transcripts, which is crucial for building the knowledge base for TOD. This paper mainly presents a baseline study of the two tasks with the MobileCS dataset. We introduce how the two baselines are constructed, the problems encountered, and the results. We anticipate that the baselines can facilitate exciting future research to build human-robot dialog systems for real-life tasks.
translated by Google Translate
Traditional intent classification models are based on a pre-defined intent set and only recognize a limited number of in-domain (IND) intent classes. But users may input out-of-domain (OOD) queries in a practical dialog system. Such OOD queries can provide directions for future improvement. In this paper, we define a new task, Generalized Intent Discovery (GID), which aims to extend an IND intent classifier to an open-world intent set including both IND and OOD intents. We hope to simultaneously classify a set of labeled IND intent classes while discovering and recognizing new unlabeled OOD types. We construct three public datasets for different application scenarios and propose two kinds of frameworks, pipeline-based and end-to-end, for future work. Besides, we conduct exhaustive experiments and qualitative analysis to comprehend the key challenges and provide new guidance for future GID research.
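The pipeline-based framework described above can be illustrated with a minimal routing sketch. This is not the paper's implementation; the threshold rule and the `"OOD"` marker are illustrative assumptions standing in for a real OOD detector feeding a downstream clustering step.

```python
def generalized_intent_discovery(probs, threshold=0.5):
    """Pipeline-style GID sketch: route a query to a known IND intent
    when the classifier is confident, otherwise flag it as OOD so it
    can later be clustered into new intent classes.

    probs: softmax output of the IND intent classifier (one query).
    Returns the predicted IND class index, or "OOD".
    """
    best = max(range(len(probs)), key=probs.__getitem__)
    # Confidence thresholding is one simple OOD-detection choice;
    # the actual detector and threshold are design decisions.
    return best if probs[best] >= threshold else "OOD"
```

In a full pipeline, the queries flagged `"OOD"` would be accumulated and clustered to induce the new intent classes that extend the classifier's label set.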
Most existing slot filling models tend to memorize inherent patterns of entities and their corresponding contexts from the training data. However, these models can lead to system failures or undesirable outputs when exposed to spoken language perturbations or variations in practice. We propose a perturbed semantic structure awareness transferring method for training perturbation-robust slot filling models. Specifically, we introduce two MLM-based training strategies to respectively learn contextual semantic structure and word distribution from an unsupervised language perturbation corpus. We then transfer the semantic knowledge learned in the upstream training procedure into the original samples and filter the generated data through consistency processing. These procedures aim to enhance the robustness of slot filling models. Experimental results show that our method consistently outperforms previous basic methods and gains strong generalization, while preventing the model from memorizing inherent patterns of entities and contexts.
Developing semi-supervised task-oriented dialog (TOD) systems by leveraging unlabeled dialog data has attracted increasing interest. For semi-supervised learning of latent state TOD models, variational learning is often used, but it suffers from the annoying high variance of the gradients propagated through discrete latent variables and the drawback of indirectly optimizing the target log-likelihood. Recently, an alternative algorithm called joint stochastic approximation (JSA) has emerged for learning discrete latent variable models with impressive performance. In this paper, we propose to apply JSA to semi-supervised learning of latent state TOD models, which is referred to as JSA-TOD. To our knowledge, JSA-TOD represents the first work in developing JSA-based semi-supervised learning of discrete latent variable conditional models for such long sequential generation problems as in TOD systems. Extensive experiments show that JSA-TOD significantly outperforms its variational learning counterpart. Remarkably, semi-supervised JSA-TOD using 20% labels performs close to the fully supervised baseline on MultiWOZ2.1.
A challenge on semi-supervised and reinforced task-oriented dialog systems, co-located with the SereTOD workshop at EMNLP 2022.
Spoken language understanding (SLU) treats automatic speech recognition (ASR) and natural language understanding (NLU) as a unified task and usually suffers from data scarcity. We exploit an ASR and NLU joint training method based on meta auxiliary learning to improve the performance of low-resource SLU tasks by leveraging only abundant speech data. One obvious advantage of this method is that it provides a flexible framework to implement low-resource SLU training tasks without requiring access to any further semantic annotations. In particular, the NLU model is taken as a label generation network to predict intent and slot tags from text; a multi-task network trains the ASR task and the SLU task synchronously from speech; and the predictions of the label generation network are delivered to the multi-task network as semantic targets. The efficiency of the proposed algorithm is demonstrated by experiments on the public CATSLU dataset, which yields more suitable ASR hypotheses for the downstream NLU task.
Mobile network traffic forecasting is one of the key functions in daily network operation. A commercial mobile network is large, heterogeneous, complex, and dynamic. These intrinsic features make mobile network traffic forecasting far from being solved, even with recent advanced algorithms such as graph convolutional network-based prediction approaches and various attention mechanisms, which have proved successful in vehicle traffic forecasting. In this paper, we cast the problem as a spatial-temporal sequence prediction task. We propose a novel deep learning network architecture, Adaptive Multi-receptive Field Spatial-Temporal Graph Convolutional Networks (AMF-STGCN), to model the traffic dynamics of mobile base stations. AMF-STGCN extends GCN by (1) jointly modeling the complex spatial-temporal dependencies in mobile networks, (2) applying attention mechanisms to capture the various receptive fields of heterogeneous base stations, and (3) introducing an extra decoder based on a fully connected deep network to conquer the error propagation challenge in multi-step forecasting. Experiments on four real-world datasets from two different domains consistently show that AMF-STGCN outperforms the state-of-the-art methods.
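The GCN operation that AMF-STGCN extends can be sketched in a few lines. This shows only the generic graph convolution H' = Â H W over a pre-normalized adjacency matrix; the adaptive multi-receptive-field and attention components of the paper are not modeled here, and all names are illustrative.

```python
def graph_convolution(adj, features, weights):
    """One generic graph-convolution step, H' = A_hat @ H @ W.

    adj:      [N][N] (pre-)normalized adjacency matrix A_hat
    features: [N][F_in] node feature matrix H (e.g., per-base-station traffic)
    weights:  [F_in][F_out] learnable linear transform W
    """
    n, f_in, f_out = len(adj), len(weights), len(weights[0])
    # Aggregate neighbor features: A_hat @ H
    agg = [[sum(adj[i][k] * features[k][j] for k in range(n))
            for j in range(f_in)] for i in range(n)]
    # Linear transform: (A_hat @ H) @ W
    return [[sum(agg[i][k] * weights[k][j] for k in range(f_in))
             for j in range(f_out)] for i in range(n)]
```

Stacking such layers (with nonlinearities) widens each node's receptive field by one hop per layer, which is what the "multi-receptive field" mechanism adapts per base station.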
Increasing research interests focus on sequential recommender systems, aiming to model dynamic sequence representation precisely. However, the most commonly used loss functions in state-of-the-art sequential recommendation models have essential limitations. To name a few, Bayesian Personalized Ranking (BPR) loss suffers the vanishing gradient problem from numerous negative sampling and prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, and is thereby likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representation. To avoid these limitations, in this paper, we propose to calculate Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, which enjoys the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models GRU4Rec, SASRec, and S3-Rec can reach 125.63%, 69.90%, and 33.24% average improvement of full ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data increases rapidly with the wall clock time, and is superior to that of other loss functions in almost the whole process of model training.
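The CCE idea described above can be written down directly: instead of taking the softmax cross-entropy at only the last timestamp (as plain CE does), sum it over every timestamp of the training sequence. A minimal sketch, assuming raw per-step logits over the full item set; the function name and shapes are illustrative, not the paper's code.

```python
import math

def cumulative_cross_entropy(logits, targets):
    """Cumulative Cross-Entropy (CCE): sum per-step softmax cross-entropy
    over every timestamp, with no negative sampling.

    logits:  [T][num_items] raw scores, one row per timestamp
    targets: [T] ground-truth next-item ids
    """
    total = 0.0
    for step_logits, target in zip(logits, targets):
        # Numerically stable log-sum-exp for the softmax normalizer
        m = max(step_logits)
        log_z = m + math.log(sum(math.exp(s - m) for s in step_logits))
        # Cross-entropy at this timestamp: log Z - score of the true item
        total += log_z - step_logits[target]
    return total
```

Plain CE would correspond to evaluating only the last `(step_logits, target)` pair; summing over all timestamps is what lets CCE use the whole sequence's supervision signal.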
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize the feedback information across historical attacks, dubbed example-level adversarial transferability. Specifically, by treating the attack on each benign example as one task, we develop a meta-learning framework by training a meta-generator to produce perturbations conditioned on benign examples. When attacking a new benign example, the meta-generator can be quickly fine-tuned based on the feedback information of the new task as well as a few historical attacks to produce effective perturbations. Moreover, since the meta-train procedure consumes many queries to learn a generalizable generator, we utilize model-level adversarial transferability to train the meta-generator on a white-box surrogate model, then transfer it to help the attack against the target model. The proposed framework with the two types of adversarial transferability can be naturally combined with any off-the-shelf query-based attack methods to boost their performance, which is verified by extensive experiments.
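The query-budget setting above can be made concrete with a toy query-based attack loop. This is not the paper's meta-learning method: it is a plain random-search attack, where `init_perturb` merely stands in for the warm start that a fine-tuned meta-generator would supply from historical attacks. All names and the step rule are illustrative assumptions.

```python
import random

def query_attack(loss_fn, x, init_perturb, budget=50, step=0.2, seed=0):
    """Toy query-based black-box attack: random search under a query budget.

    loss_fn:      black-box attack objective (each call costs one query)
    x:            benign example (list of floats)
    init_perturb: starting perturbation, e.g. from a meta-generator warm start
    """
    rng = random.Random(seed)
    best = list(init_perturb)
    best_loss = loss_fn([a + b for a, b in zip(x, best)])   # query 1
    for _ in range(budget - 1):                              # remaining queries
        cand = [p + rng.uniform(-step, step) for p in best]
        cand_loss = loss_fn([a + b for a, b in zip(x, cand)])
        if cand_loss < best_loss:                            # keep improvements
            best, best_loss = cand, cand_loss
    return best, best_loss
```

A better starting perturbation lets the loop reach a successful perturbation in fewer queries, which is exactly the saving the example-level transferability aims to provide.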
Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly-undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, the requirement of excessive high-quality ground-truth data hinders their applications due to the generalization problem. Recently, Implicit Neural Representation (INR) has appeared as a powerful DL-based tool for solving the inverse problem by characterizing the attributes of a signal as a continuous function of corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which only takes spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned from the sparsely-acquired (k, t)-space data itself only, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR, together with explicit regularization for low-rankness and sparsity, our proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5 ~ 7.1 dB in PSNR for extremely high accelerations (up to 41.6-fold). The high quality and inner continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
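The core INR idea, a network that maps spatiotemporal coordinates to signal values, can be sketched with a tiny coordinate MLP. The sine activations follow SIREN-style networks commonly used for INR; the layer sizes, random initialization, and function names here are illustrative assumptions, not the paper's architecture, and no fitting to (k, t)-space data is shown.

```python
import math
import random

def make_coordinate_mlp(hidden=16, seed=0):
    """Build a tiny coordinate MLP that maps one spatiotemporal
    coordinate (x, y, t) to a single intensity value.

    The image is thus represented implicitly: querying the function on a
    dense coordinate grid reconstructs the frame at any resolution.
    """
    rng = random.Random(seed)
    # One hidden layer with sine activations (SIREN-style), random init
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]

    def mlp(x, y, t):
        h = [math.sin(sum(w * c for w, c in zip(row, (x, y, t)))) for row in w1]
        return sum(w * a for w, a in zip(w2, h))

    return mlp
```

In the scan-specific setting, the weights of such a network would be optimized so that the Fourier transform of its outputs matches the acquired (k, t)-space samples, with the network's continuity acting as the implicit regularizer.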