Conversation scenarios are among the most important and most challenging for speech processing technologies, because people in a conversation respond to each other in casual ways. Detecting the speech activity of each person in a conversation is vital for downstream tasks such as natural language processing and machine translation. The technique of detecting "who spoke when" is known as speaker diarization (SD). Traditionally, the diarization error rate (DER) has long served as the standard evaluation metric for SD systems. However, DER gives insufficient weight to short conversational phrases, which matter at the semantic level. In addition, a carefully and accurately manually annotated test dataset suitable for evaluating conversational SD technologies is still unavailable in the speech community. In this paper, we design and describe the Conversational Short-phrase Speaker Diarization (CSSD) task, which consists of training and test datasets, an evaluation metric, and baselines. On the dataset side, in addition to the previously open-sourced 180-hour conversational MagicData-RAMC dataset, we also prepare a 20-hour conversational speech test dataset with carefully verified timestamp annotations for the CSSD task. On the metric side, we design a new Conversational DER (CDER) evaluation metric that computes SD accuracy at the utterance level. On the baseline side, we adopt a commonly used method, the variational Bayesian HMM x-vector system, as the baseline for the CSSD task. Our evaluation metric is publicly available at https://github.com/speechclub/cder_metric.
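The abstract's key distinction is that CDER scores diarization per utterance rather than per second, so a short back-channel phrase counts as much as a long turn. A minimal sketch of that idea, assuming reference and hypothesis speaker labels that are already aligned per utterance (the real metric must additionally map hypothesis speakers onto reference speakers; the function name and inputs here are illustrative, not the repository's API):

```python
def cder(ref_utts, hyp_utts):
    """Utterance-level diarization error: the fraction of utterances
    whose hypothesized speaker disagrees with the reference.

    ref_utts / hyp_utts: lists of speaker labels, one per utterance,
    assumed pre-aligned (a simplifying assumption for this sketch).
    """
    assert len(ref_utts) == len(hyp_utts) and ref_utts
    errors = sum(1 for r, h in zip(ref_utts, hyp_utts) if r != h)
    return errors / len(ref_utts)

# One short utterance mis-attributed out of four: equal weight per
# utterance, unlike time-weighted DER.
print(cder(["A", "B", "A", "B"], ["A", "B", "B", "B"]))  # 0.25
```

Under time-weighted DER, a mislabeled one-word reply would barely move the score; here it costs a full 1/N.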
Cross-domain performance of automatic speech recognition (ASR) can be severely hampered by the mismatch between training and testing distributions. Since the target domain usually lacks labeled data, and domain shifts exist at both the acoustic and linguistic levels, unsupervised domain adaptation (UDA) for ASR is challenging. Previous work has shown that self-supervised learning (SSL) and pseudo-labeling (PL) are effective for UDA by exploiting self-supervision from unlabeled data. However, these self-supervisions also suffer performance degradation under mismatched domain distributions, which previous work failed to address. This work presents a systematic UDA framework that fully exploits unlabeled data with self-supervision in both the pre-training and fine-tuning paradigms. On the one hand, we apply continued pre-training and data replay techniques to mitigate the domain mismatch of the SSL pre-trained model. On the other hand, we propose a domain-adaptive fine-tuning approach based on the PL technique with three unique modifications: first, we design a dual-branch PL method to decrease the sensitivity to erroneous pseudo-labels; second, we devise an uncertainty-aware confidence filtering strategy to improve the correctness of pseudo-labels; third, we introduce a two-step PL approach to incorporate target-domain linguistic knowledge, yielding more accurate target-domain pseudo-labels. Experimental results on various cross-domain scenarios demonstrate that the proposed approach effectively boosts cross-domain performance and significantly outperforms previous approaches.
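Of the three modifications, confidence filtering is the simplest to illustrate. A hedged sketch, assuming per-token posterior probabilities are available from the model; the threshold, data shapes, and function name are illustrative and not the paper's actual formulation, which additionally models uncertainty:

```python
def filter_pseudo_labels(posteriors, threshold=0.9):
    """Keep only pseudo-labeled utterances whose average token
    confidence clears a threshold; a simplified stand-in for the
    paper's uncertainty-aware confidence filtering.

    posteriors: list of (utterance_id, [per-token max posterior]) pairs.
    Returns the utterance ids accepted for fine-tuning.
    """
    accepted = []
    for utt_id, probs in posteriors:
        confidence = sum(probs) / len(probs)  # average token confidence
        if confidence >= threshold:
            accepted.append(utt_id)
    return accepted

# "u2" is dropped: its low-confidence pseudo-label would likely be wrong
# under a domain shift.
print(filter_pseudo_labels([("u1", [0.99, 0.95]), ("u2", [0.6, 0.7])]))
```

The trade-off is standard for PL: a higher threshold yields cleaner but fewer pseudo-labels for the fine-tuning stage.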
Automatic speech recognition (ASR) with federated learning (FL) makes it possible to leverage data from multiple clients without compromising privacy. The quality of FL-based ASR can be measured by recognition performance and by communication and computation costs. When the data among different clients are not independent and identically distributed (non-IID), performance can degrade significantly. In this work, we tackle the non-IID issue in FL-based ASR with personalized FL, which learns a personalized model for each client. Specifically, we propose two types of personalized FL approaches for ASR. First, we adapt personalization-layer-based FL to ASR, which keeps some layers local in order to learn a personalized model. Second, to reduce communication and computation costs, we propose decoupled federated learning (DecoupleFL). On the one hand, DecoupleFL moves the computation burden to the server, thus decreasing computation on the clients. On the other hand, DecoupleFL communicates secure high-level features instead of model parameters, reducing communication cost when models are large. Experiments demonstrate that both proposed FL-based ASR approaches can reduce WER by 2.3%-3.4% compared with FedAvg. Among them, DecoupleFL incurs only 11.4% of the communication and 75% of the computation cost of FedAvg, which is also significantly less than personalization-layer-based FL.
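The personalization-layer idea can be sketched in a few lines: the server averages only the shared layers, while each client's designated personal layers never leave the device. A minimal sketch with scalar "weights" standing in for tensors; the layer names and function are illustrative, not the paper's implementation:

```python
def fedavg_with_personal_layers(client_models, shared_keys):
    """One round of FedAvg that aggregates only the shared layers.
    Layers not listed in `shared_keys` stay client-local, giving each
    client a personalized model (a sketch of layer-retention FL).

    client_models: list of dicts mapping layer name -> weight
    (floats here for simplicity; tensors in practice).
    Returns the list of updated client models.
    """
    n = len(client_models)
    # Server step: average the shared layers across clients.
    averaged = {k: sum(m[k] for m in client_models) / n for k in shared_keys}
    # Client step: keep personal layers, adopt the shared average.
    return [{**m, **averaged} for m in client_models]

clients = [{"encoder": 1.0, "head": 10.0}, {"encoder": 3.0, "head": 20.0}]
updated = fedavg_with_personal_layers(clients, shared_keys=["encoder"])
print(updated)  # encoders averaged to 2.0, each head left untouched
```

Keeping the head (or other late layers) local is what lets each client fit its own non-IID data distribution while still benefiting from a shared encoder.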
Self-supervised pre-training can effectively improve the performance of low-resource automatic speech recognition (ASR). However, existing self-supervised pre-training is task-agnostic, i.e., it can be applied to various downstream tasks. Although this enlarges the scope of its application, the capacity of the pre-trained model is not fully utilized for the ASR task, and the learned representations may not be optimal for ASR. In this work, to build a better pre-trained model for low-resource ASR, we propose a pre-training approach called wav2vec-S, in which task-specific semi-supervised pre-training is used to refine the self-supervised pre-trained model for the ASR task, so that the pre-trained model more effectively utilizes its capacity to generate task-specific representations for ASR. Experiments show that, compared with wav2vec 2.0, wav2vec-S requires only a marginal increase in pre-training time but significantly improves ASR performance on in-domain, cross-domain, and cross-lingual datasets, with average relative WER reductions of 24.5% and 6.6% for 1h and 10h fine-tuning, respectively. Furthermore, we show via canonical correlation analysis that semi-supervised pre-training can bridge the representation gap between the self-supervised pre-trained model and the corresponding fine-tuned model.
Neural network pruning offers a promising prospect to facilitate deploying deep neural networks on resource-limited devices. However, existing methods are still challenged by the training inefficiency and labor cost in pruning designs, due to missing theoretical guidance on non-salient network components. In this paper, we propose a novel filter pruning method by exploring the High Rank of feature maps (HRank). Our HRank is inspired by the discovery that the average rank of multiple feature maps generated by a single filter is always the same, regardless of the number of image batches CNNs receive. Based on HRank, we develop a method that is mathematically formulated to prune filters with low-rank feature maps. The principle behind our pruning is that low-rank feature maps contain less information, and thus pruned results can be easily reproduced. Besides, we experimentally show that weights with high-rank feature maps contain more important information, such that even when a portion is not updated, very little damage is done to the model performance. Without introducing any additional constraints, HRank leads to significant improvements over the state-of-the-art methods in terms of FLOPs and parameter reduction, with similar accuracies. For example, with ResNet-110, we achieve a 58.2%-FLOPs reduction by removing 59.2% of the parameters, with only a small loss of 0.14% in top-1 accuracy on CIFAR-10. With ResNet-50, we achieve a 43.8%-FLOPs reduction by removing 36.7% of the parameters, with only a loss of 1.17% in top-1 accuracy on ImageNet. The code is available at https://github.com/lmbxmu/HRank.
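The scoring step HRank relies on is easy to sketch: feed a small batch through the network, compute the matrix rank of each filter's feature maps, average over the batch, and keep the highest-ranked filters. A minimal NumPy sketch under the assumption that feature maps are already extracted as an array (the helper names are illustrative, not the repository's API):

```python
import numpy as np

def filter_ranks(feature_maps):
    """Average matrix rank of each filter's feature maps over a batch.

    feature_maps: array of shape (batch, filters, h, w).
    Per the paper's observation, this average is nearly
    batch-independent, so a small batch suffices to score filters.
    """
    batch, filters, _, _ = feature_maps.shape
    ranks = np.zeros(filters)
    for f in range(filters):
        for b in range(batch):
            ranks[f] += np.linalg.matrix_rank(feature_maps[b, f])
    return ranks / batch

def prune_low_rank(ranks, keep_ratio=0.5):
    """Indices of filters to keep: those with the highest average rank.
    Low-rank feature maps carry less information, so their filters go."""
    k = max(1, int(len(ranks) * keep_ratio))
    return np.argsort(ranks)[-k:]

maps = np.random.rand(4, 8, 16, 16)          # 4 images, 8 filters
keep = prune_low_rank(filter_ranks(maps), keep_ratio=0.5)
print(len(keep))  # 4 filters retained
```

In a real pipeline the kept indices would then be used to slice the convolution weights of that layer (and the input channels of the next) before fine-tuning.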
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and point out potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
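The feature-ranking criterion the abstract mentions, information gain, is the entropy of the labels minus the label entropy conditioned on a feature. A minimal sketch for discrete features and binary labels, assuming a simple list representation (illustrative, not MGTAB's actual pipeline):

```python
import math

def information_gain(labels, feature_values):
    """Information gain of a discrete feature for binary labels:
    H(labels) - sum_v p(v) * H(labels | feature == v).
    This is the criterion used to rank candidate user features."""
    def entropy(ys):
        if not ys:
            return 0.0
        p = sum(ys) / len(ys)
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    gain = entropy(labels)
    for v in set(feature_values):
        subset = [y for y, f in zip(labels, feature_values) if f == v]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# A feature that perfectly separates bots (1) from humans (0)
# recovers the full 1 bit of label entropy.
print(information_gain([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```

Ranking all candidate user properties by this score and keeping the top 20 gives the feature set the abstract describes.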
Learning feature interactions is the key to success for large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. In order to reduce the high cost of human effort in feature engineering, researchers have proposed several deep neural network (DNN)-based approaches to learn feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise interactions and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called the polynomial interaction network (PIN), which learns higher-order vector-wise interactions recursively. By integrating a subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on this network architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate its performance on three real-world datasets. Our experimental results demonstrate the efficacy and effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.
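The recursion behind "higher-order vector-wise interactions" can be illustrated with the Hadamard product: each layer multiplies the running interaction element-wise with the input embeddings, raising the polynomial order by one. This sketch deliberately omits the learnable weights and the subspace-crossing mechanism of the real PIN layer; the function is a hypothetical illustration of the recursion only:

```python
import numpy as np

def polynomial_interactions(x0, depth=3):
    """Recursive vector-wise (Hadamard) interaction: layer l holds
    order-(l+1) interactions of the embedding matrix x0.

    x0: (fields, dim) embedding matrix. Learnable weights of the
    actual PIN layer are omitted for brevity.
    Returns the interaction tensors of orders 1..depth.
    """
    layers = [x0]
    for _ in range(depth - 1):
        # Element-wise product with x0 raises the interaction order by 1.
        layers.append(layers[-1] * x0)
    return layers

x0 = np.array([[2.0, 3.0]])          # one field, 2-dim embedding
orders = polynomial_interactions(x0, depth=3)
print(orders[2])  # third-order interaction: each element cubed
```

Because interactions are taken element-wise per embedding dimension, these are vector-wise interactions; a bit-wise model would instead mix dimensions through dense layers.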
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables. We formalize this scenario by building a new Chinese benchmark KnowSQL consisting of domain-specific questions covering various domains. We then address this problem by presenting formulaic knowledge, rather than by annotating additional data examples. More concretely, we construct a formulaic knowledge bank as a domain knowledge base and propose a framework (ReGrouP) to leverage this formulaic knowledge during parsing. Experiments using ReGrouP demonstrate a significant 28.2% improvement overall on KnowSQL.
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit to mitigate the sample inefficiency problem of visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim to learn policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representations by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
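The training signal in both stages is the photometric error: the source frame is warped into the target view using the predicted depth and ego-motion, and the two images are compared pixel by pixel. A minimal stand-in for that loss, assuming the warped frame is already computed and using plain L1 instead of the SSIM+L1 blend typically used in self-supervised depth work:

```python
import numpy as np

def photometric_error(target, warped):
    """Mean absolute photometric difference between the target frame
    and the source frame warped via predicted depth and ego-motion.
    A simplified stand-in: real systems usually blend SSIM with L1
    and mask out occluded or static pixels."""
    return np.abs(target - warped).mean()

# Toy 4x4 grayscale frames: a constant brightness offset of 0.25.
t = np.full((4, 4), 0.5)
w = np.full((4, 4), 0.25)
print(photometric_error(t, w))  # 0.25
```

Minimizing this error forces the pose/depth predictions (stage one) and, later, the encoder's ego-motion prediction (stage two) to be geometrically consistent with the video, with no labels required.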