Recent advances in neural approaches have greatly improved task-oriented dialogue (TOD) systems, which assist users in accomplishing their goals. However, such systems rely on costly, manually labeled dialogs, which are often unavailable in practical scenarios. In this paper, we present our models for Track 2 of the SereTOD 2022 challenge, the first challenge on building semi-supervised and reinforced TOD systems on MobileCS, a large-scale real-world Chinese TOD dataset. We build a knowledge-grounded dialog model that takes the dialog history and the local KB as input and predicts the system response, and we perform semi-supervised pre-training on both the labeled and unlabeled data. Our system achieves first place in both the automatic evaluation and the human interaction evaluation, with notably higher BLEU (+7.64) and Success (+13.6%) than the second place.
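To make the input formulation concrete, here is a minimal sketch of how a knowledge-grounded dialog model might serialize the dialog history and the local KB into a single conditioning sequence; the special tokens, field names, and example values are illustrative assumptions, not the SereTOD/MobileCS format.

```python
# Hypothetical serialization of (local KB + dialog history) for a seq2seq
# response generator; tokens like [KB] and [HISTORY] are assumed markers.
def build_model_input(history, local_kb):
    """history: list of (speaker, utterance); local_kb: dict of entity fields."""
    kb_part = " ".join(f"[{field}] {value}" for field, value in local_kb.items())
    dialog_part = " ".join(f"[{speaker}] {utt}" for speaker, utt in history)
    return f"[KB] {kb_part} [HISTORY] {dialog_part} [RESPONSE]"

example = build_model_input(
    history=[("user", "How much data is left on my plan?"),
             ("system", "Let me check your account."),
             ("user", "Thanks.")],
    local_kb={"plan_name": "20GB monthly", "data_remaining": "3.2GB"},
)
print(example)  # fed to the model, which generates the system response
```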
Out-of-domain (OOD) detection is a key component of task-oriented dialogue systems, aiming to identify whether a query falls outside the predefined set of supported intents. Previous softmax-based detection algorithms have been shown to be overconfident on OOD samples. In this paper, we analyze how overconfident OOD predictions arise from distributional uncertainty caused by the mismatch between the training and test distributions, which prevents the model from making confident predictions and can therefore produce abnormal softmax scores. We propose a Bayesian OOD detection framework that calibrates distributional uncertainty using Monte-Carlo Dropout. Our method is flexible and can easily be plugged into existing softmax-based baselines, gaining a 33.33% OOD F1 improvement while adding only 0.41% inference time compared to MSP. Further analyses demonstrate the effectiveness of Bayesian learning for OOD detection.
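As a concrete illustration of the calibration step, the following is a minimal Monte-Carlo Dropout sketch, assuming a standard softmax intent classifier; the number of passes, the toy model, and the decision threshold are assumptions, not values from the paper.

```python
import torch

@torch.no_grad()
def mc_dropout_confidence(model, x, n_passes=20):
    model.train()  # keep dropout active at inference time (MC Dropout)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)         # averaged (calibrated) distribution
    return mean_probs.max(dim=-1)          # (confidence, predicted intent)

# Toy classifier with a dropout layer: 128-dim features, 10 IND intents.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                            torch.nn.Dropout(0.3), torch.nn.Linear(64, 10))
conf, pred = mc_dropout_confidence(model, torch.randn(4, 128))
is_ood = conf < 0.5  # hypothetical threshold tuned on a development set
```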
Traditional intent classification models are based on a predefined intent set and can only recognize a limited number of in-domain (IND) intent classes. However, users may enter out-of-domain (OOD) queries in a practical dialogue system, and such OOD queries can point to directions for future improvement. In this paper, we define a new task, Generalized Intent Discovery (GID), which aims to extend an IND intent classifier to an open-world intent set including both IND and OOD intents. We aim to simultaneously classify a set of labeled IND intent classes while discovering and recognizing new, unlabeled OOD types. We construct three public datasets for different application scenarios and propose two kinds of frameworks, a pipeline-based framework and an end-to-end one, to facilitate future work. Furthermore, we conduct exhaustive experiments and qualitative analysis to identify the key challenges and provide new guidance for future GID research.
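A minimal sketch of the pipeline-based framing, under stated assumptions: an IND classifier routes low-confidence queries to a clustering step that assigns pseudo-labels to discovered OOD intents. The thresholds, cluster count, and synthetic data below are illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_ind, y_ind = rng.normal(0, 1, (200, 16)), rng.integers(0, 4, 200)  # 4 IND intents
X_unlabeled = rng.normal(0, 1, (100, 16))   # unlabeled queries (toy; incl. OOD)

ind_clf = LogisticRegression(max_iter=1000).fit(X_ind, y_ind)
conf = ind_clf.predict_proba(X_unlabeled).max(axis=1)
ood_queries = X_unlabeled[conf < 0.6]       # hypothetical OOD confidence threshold

clusters = KMeans(n_clusters=3, n_init=10).fit_predict(ood_queries)
pseudo_labels = clusters + 4   # open-world label space: IND 0..3, discovered 4..6
```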
Most existing slot filling models tend to memorize inherent patterns of entities and their corresponding contexts from the training data. However, these models can lead to system failures or undesirable outputs when exposed to spoken language perturbations or variations in practice. We propose a perturbed semantic structure awareness transfer method for training perturbation-robust slot filling models. Specifically, we introduce two MLM-based training strategies to learn contextual semantic structure and word distribution, respectively, from an unsupervised language perturbation corpus. We then transfer the semantic knowledge learned in the upstream training procedure into the original samples and filter the generated data through consistency processing. These procedures are designed to enhance the robustness of slot filling models. Experimental results show that our method consistently outperforms previous basic methods and achieves strong generalization while preventing the model from memorizing inherent patterns of entities and contexts.
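Since the two training strategies are MLM-based, a minimal sketch of the masking step over a perturbed utterance may help; the 15% rate follows common MLM practice and the whole-token masking is a simplifying assumption.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask="[MASK]"):
    """Return (masked_tokens, targets); targets are None at unmasked positions."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            masked.append(mask)
            targets.append(tok)     # the model must reconstruct this token
        else:
            masked.append(tok)
            targets.append(None)    # ignored by the reconstruction loss
    return masked, targets

# A spoken-language perturbation ("wanna" for "want to") from an unlabeled corpus:
print(mask_tokens("i wanna fly to new york tomorrow".split()))
```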
Text-only and semi-supervised training based on audio-only data has recently gained popularity due to the wide availability of unlabeled text and speech data. In this work, we propose incorporating text-only and semi-supervised training into an attention-based deliberation model. By incorporating text-only data through training a Bidirectional Encoder Representations from Transformers (BERT) model for the deliberation text encoder, and by using large-scale text-to-speech and audio-only utterances with a joint acoustic and text decoder (JATD) and semi-supervised training, we achieve 4%-12% WER reduction on various tasks compared to the baseline deliberation model. Compared to a state-of-the-art language model (LM) rescoring method, the deliberation model reduces the Google Voice Search WER by 11%. We show that the deliberation model also achieves a positive human side-by-side evaluation compared to the state-of-the-art LM rescorer, with reasonable endpointer latencies.
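The deliberation idea of attending jointly to acoustics and first-pass text can be sketched as below; this toy module only illustrates the two attention streams, and every dimension, head count, and vocabulary size is an assumption rather than the production architecture.

```python
import torch, torch.nn as nn

class TinyDeliberationDecoder(nn.Module):
    def __init__(self, d=64, vocab=1000):
        super().__init__()
        self.attn_acoustic = nn.MultiheadAttention(d, 4, batch_first=True)
        self.attn_text = nn.MultiheadAttention(d, 4, batch_first=True)
        self.out = nn.Linear(2 * d, vocab)

    def forward(self, queries, acoustic_enc, firstpass_text_enc):
        a, _ = self.attn_acoustic(queries, acoustic_enc, acoustic_enc)
        t, _ = self.attn_text(queries, firstpass_text_enc, firstpass_text_enc)
        return self.out(torch.cat([a, t], dim=-1))   # second-pass logits

dec = TinyDeliberationDecoder()
logits = dec(torch.randn(2, 10, 64),    # decoder states for 10 output steps
             torch.randn(2, 200, 64),   # acoustic encoder frames
             torch.randn(2, 12, 64))    # encoded first-pass hypothesis tokens
```

In the paper's setting, the first-pass text encoding is what the BERT-initialized text encoder provides.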
Language models (LMs) significantly improve the recognition accuracy of end-to-end (E2E) models on words rarely seen during training when used in either shallow fusion or rescoring setups. In this work, we introduce LM-aware minimum word error rate (MWER) training for hybrid autoregressive transducer (HAT) models in a discriminative training framework, to mitigate the training-versus-inference gap regarding the use of LMs. For the shallow fusion setup, we use LMs during both hypothesis generation and loss computation, and the LM-aware MWER-trained model achieves a 10% relative improvement over a model trained with standard MWER on a voice search test set containing rare words. For the rescoring setup, we learn a small neural module to generate per-token fusion weights in a data-dependent manner. This model achieves the same WER as a conventionally MWER-trained model, but without the need to sweep the fusion weights.
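For the shallow fusion setup, scoring works as in the sketch below: each hypothesis receives the E2E model's log-probability plus a weighted LM log-probability. The weight and the toy n-best list are hypothetical.

```python
def shallow_fusion_score(asr_logprob, lm_logprob, lm_weight=0.3):
    return asr_logprob + lm_weight * lm_logprob

# Rescoring a toy n-best list: a rare-word hypothesis that the E2E model
# under-scores can win once the LM assigns it higher probability.
nbest = [("call ann tomorrow", -4.1, -2.0),
         ("call anne tomorrow", -4.3, -1.2)]   # (text, asr_lp, lm_lp)
best = max(nbest, key=lambda h: shallow_fusion_score(h[1], h[2]))
print(best[0])   # "call anne tomorrow" wins after fusion
```

The LM-aware recipe uses such fused scores during both hypothesis generation and loss computation, rather than only at inference.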
In this paper, we propose a dynamic cascaded encoder automatic speech recognition (ASR) model that unifies models for different deployment scenarios. Moreover, the model can significantly reduce model size and power consumption without losing quality. Specifically, with the dynamic cascaded encoder model, we explore three techniques to maximize the performance of each model size: 1) using a separate decoder for each sub-model while sharing the encoders; 2) using funnel pooling to improve encoder efficiency; and 3) balancing the sizes of the causal and non-causal encoders to improve quality and fit deployment constraints. Overall, the proposed large-medium model is 30% smaller and reduces power consumption by 33% compared to the baseline cascaded encoder model. The triple-size model that unifies the large, medium, and small models achieves a 37% total size reduction with minimal quality loss, while substantially reducing the engineering effort of maintaining separate models.
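Funnel pooling (technique 2) can be sketched in a few lines: strided pooling between encoder blocks shortens the frame sequence, so later blocks process fewer frames. The stride of 2 and the shapes are illustrative assumptions.

```python
import torch, torch.nn as nn

class FunnelPool(nn.Module):
    def __init__(self, stride=2):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=stride, stride=stride)

    def forward(self, x):                  # x: (batch, time, feature)
        return self.pool(x.transpose(1, 2)).transpose(1, 2)

frames = torch.randn(8, 400, 256)          # 400 encoder frames
print(FunnelPool()(frames).shape)          # torch.Size([8, 200, 256]);
                                           # half as many frames reach the next block
```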
Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.
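As a sketch of what "directly predicts words or text lines" means operationally: the network emits a per-pixel score map and geometry channels, and decoding reduces to thresholding the scores and reading out quadrilateral corners at the surviving pixels (followed in practice by non-maximum suppression). The shapes, channel layout, and threshold below are assumptions, not the paper's exact parameterization.

```python
import numpy as np

def decode_detections(score_map, geometry, score_thresh=0.8):
    """score_map: (H, W); geometry: (H, W, 8) corner offsets per pixel."""
    ys, xs = np.nonzero(score_map > score_thresh)
    quads = []
    for y, x in zip(ys, xs):
        offsets = geometry[y, x].reshape(4, 2)     # 4 corners, (dx, dy) each
        quads.append(offsets + np.array([x, y]))   # absolute corner coordinates
    return quads   # candidate quadrilaterals, to be merged by NMS

quads = decode_detections(np.random.rand(128, 128),
                          np.random.randn(128, 128, 8) * 10.0)
```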
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the optimality and computational-efficiency goals, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g., statistical power) but substantially better in earning (e.g., direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: improving participant benefits, increasing efficiency, and reducing experimental costs in adaptive multi-armed experiments with exponential rewards.
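The allocate-observe-update loop such a design runs can be sketched as follows. Computing the exact Gittins Index for exponential rewards is the paper's contribution; the simple optimistic index below is only a stand-in so the loop is runnable, and the arm means and horizon are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = [1.0, 0.5]                    # mean exponential reward per arm
counts, sums = np.zeros(2), np.zeros(2)
for arm in range(2):                       # pull each arm once to initialize
    counts[arm] += 1
    sums[arm] += rng.exponential(true_means[arm])

for t in range(500):
    index = sums / counts + np.sqrt(2 * np.log(t + 2) / counts)  # stand-in index
    arm = int(np.argmax(index))            # the modified GI rule would go here
    counts[arm] += 1
    sums[arm] += rng.exponential(true_means[arm])

print(counts)   # allocation concentrates on the better arm, i.e. more "earning"
```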
Transformers have achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement provided by ImageNet-pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. BOLT consists of two networks, an online branch and a target branch, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer given limited medical data, we propose an auxiliary difficulty ranking task: the Transformer must identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification compared to ImageNet-pretrained weights and state-of-the-art self-supervised learning approaches.
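A minimal sketch of the online/target scheme BOLT builds on, with toy MLPs standing in for the Transformer branches: the online network predicts the target network's representation of a differently perturbed view, and the target weights track the online weights by exponential moving average. Dimensions, perturbations, and the EMA rate are assumptions; the auxiliary difficulty ranking task is omitted.

```python
import copy, torch, torch.nn as nn, torch.nn.functional as F

online = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 64))
predictor = nn.Linear(64, 64)              # prediction head, online branch only
target = copy.deepcopy(online)
for p in target.parameters():              # no gradients flow to the target
    p.requires_grad_(False)

tokens = torch.randn(32, 128)              # toy patch-embedding tokens
view1 = tokens + 0.1 * torch.randn_like(tokens)   # perturbation 1
view2 = tokens + 0.1 * torch.randn_like(tokens)   # perturbation 2

pred = F.normalize(predictor(online(view1)), dim=-1)
with torch.no_grad():
    targ = F.normalize(target(view2), dim=-1)
loss = 2 - 2 * (pred * targ).sum(dim=-1).mean()   # cosine-similarity loss
loss.backward()

with torch.no_grad():                      # EMA update of the target branch
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(0.99).add_(0.01 * p_o)
```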