State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex. They are prone to overfitting and poor generalization when given limited training data. To this end, this paper proposes an overall full Bayesian learning framework with three methods to account for the underlying uncertainty in LSTM-RNN and Transformer LMs. The uncertainty over their model parameters, choice of neural activations and hidden output representations is modeled using Bayesian, Gaussian process and variational LSTM-RNN or Transformer LMs respectively. Efficient inference approaches are used to automatically select the optimal network internal components to be Bayesian learned using neural architecture search. A minimal number of Monte Carlo parameter samples is also used. These allow the computational cost incurred in Bayesian NNLM training and evaluation to be minimized. Experiments were conducted on two tasks: AMI meeting transcription, and Oxford-BBC LipReading Sentences 2 (LRS2) overlapped speech recognition using state-of-the-art LF-MMI trained TDNN systems with data augmentation, speaker adaptation, and audio-visual multi-channel beamforming for overlapped speech. Consistent improvements over the baseline LSTM-RNN and Transformer LMs with point estimated model parameters and drop-out regularization were obtained on both tasks in terms of perplexity and word error rate (WER). In particular, on the LRS2 data, absolute WER reductions of up to 1.3% and 1.2% (12.1% and 11.3% relative) were obtained over the baseline LSTM-RNN and Transformer LMs respectively, after model combination between the Bayesian NNLMs and their respective baselines.
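The Monte Carlo sampling step above can be illustrated with a minimal sketch: the Bayesian predictive distribution is approximated by averaging softmax outputs over a few parameter samples drawn from an (approximate) posterior. A toy Gaussian perturbation of the output logits stands in for the paper's actual variational LSTM-RNN/Transformer posteriors; all names here are illustrative, not the authors' implementation.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mc_predictive(mean_logits, std, n_samples, rng):
    """Monte Carlo approximation of the Bayesian predictive distribution:
    p(w|h) ~= 1/N * sum_k softmax(logits_k), where each noisy logit vector
    plays the role of one parameter sample theta_k from the posterior.
    n_samples=1 degrades toward a point estimate; more samples tighten
    the approximation at extra evaluation cost."""
    acc = [0.0] * len(mean_logits)
    for _ in range(n_samples):
        noisy = [l + rng.gauss(0.0, std) for l in mean_logits]
        acc = [a + p for a, p in zip(acc, softmax(noisy))]
    return [a / n_samples for a in acc]

rng = random.Random(0)
p = mc_predictive([2.0, 1.0, 0.1], std=0.5, n_samples=8, rng=rng)
assert abs(sum(p) - 1.0) < 1e-9   # still a valid distribution
assert p[0] == max(p)             # averaging preserves the dominant word here
```

Keeping `n_samples` small is exactly the cost-control lever the abstract mentions: each extra sample is one more forward evaluation of the LM.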
Early diagnosis of Alzheimer's disease (AD) is crucial in facilitating preventive care to delay further progression. This paper presents the development of state-of-the-art Conformer based speech recognition systems built on the DementiaBank Pitt corpus for automatic AD detection. The baseline Conformer system trained with speed perturbation and SpecAugment based data augmentation is significantly improved by incorporating a set of purposefully designed modeling features, including search based automatic configuration of domain-specific Conformer hyper-parameters; fine-grained elderly speaker adaptation using learning hidden unit contributions (LHUC); and two-pass cross-system rescoring based combination with hybrid TDNN systems. An overall word error rate reduction of 34.8% relative was obtained on the evaluation data of 48 elderly speakers. Using the final system's recognition outputs to extract textual features, the best speech recognition based AD detection accuracy of 91.7% was obtained.
The fundamental modeling differences between hybrid and end-to-end (E2E) automatic speech recognition (ASR) systems create large diversity and complementarity among them. This paper investigates multi-pass rescoring and cross adaptation based system combination approaches for hybrid TDNN and Conformer E2E ASR systems. In multi-pass rescoring, state-of-the-art hybrid LF-MMI trained CNN-TDNN systems featuring speed perturbation, SpecAugment and Bayesian learning hidden unit contributions (LHUC) speaker adaptation were used to produce initial N-best outputs, before being rescored by the speaker adapted Conformer system using a 2-way cross-system score interpolation. In cross adaptation, the hybrid CNN-TDNN system was adapted to the 1-best output of the Conformer system, or vice versa. Experiments on the 300-hour Switchboard corpus suggest that the combined systems derived using either of the two system combination approaches outperformed the individual systems on the NIST Hub5'00, RT03 and RT02 evaluation data.
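The 2-way cross-system score interpolation in the rescoring pass reduces to a simple weighted combination of the two systems' hypothesis scores. A minimal sketch, assuming log-domain scores and a hand-picked interpolation weight (the hypotheses, scores, and weight below are all made up for illustration):

```python
def rescore_nbest(nbest, lam=0.5):
    """2-way cross-system score interpolation over an N-best list.
    Each entry is (hypothesis, hybrid_log_score, e2e_log_score); the
    returned hypothesis maximizes lam * s_hybrid + (1 - lam) * s_e2e."""
    def combined(entry):
        _, s_hyb, s_e2e = entry
        return lam * s_hyb + (1.0 - lam) * s_e2e
    return max(nbest, key=combined)[0]

nbest = [
    ("i think so", -12.0, -15.0),   # the hybrid system's 1-best
    ("i think sew", -12.5, -11.0),  # the E2E system prefers this one
]
assert rescore_nbest(nbest, lam=0.2) == "i think sew"  # weight favors E2E
assert rescore_nbest(nbest, lam=0.9) == "i think so"   # weight favors hybrid
```

In practice `lam` would be tuned on held-out data; the point of the sketch is only that the two systems' complementary errors are traded off through a single scalar.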
Articulatory features are inherently invariant to acoustic signal distortion and have been successfully incorporated into automatic speech recognition (ASR) systems designed for normal speech. Their practical application to atypical task domains such as elderly and disordered speech across languages is often limited by the difficulty in collecting such specialist data from target speakers. This paper presents a cross-domain and cross-lingual A2A inversion approach that utilizes the parallel audio, visual and ultrasound tongue imaging (UTI) data of the 24-hour TaL corpus in A2A model pre-training, before being cross-domain and cross-lingually adapted to three datasets across two languages: the English DementiaBank Pitt and Cantonese JCCOCC MoCA elderly speech corpora, and the English TORGO dysarthric speech data, to produce UTI based articulatory features. Experiments conducted on the three tasks suggest that incorporating the generated articulatory features consistently outperformed the baseline hybrid TDNN and Conformer based end-to-end systems constructed using acoustic features only, with statistically significant word error rate or character error rate reductions of up to 2.64%, 1.92% and 1.21% absolute (4.17%, 7.89% and 13.28% relative) after data augmentation and speaker adaptation.
Despite the rapid progress of automatic speech recognition (ASR) technologies targeting normal speech, accurate recognition of dysarthric and elderly speech remains a highly challenging task to date. Due to the mobility issues often found among these users, it is difficult to collect large quantities of such data for ASR system development. To this end, data augmentation techniques play a vital role. In contrast to existing data augmentation techniques that only modify the speaking rate or the overall shape of the spectral contour, this paper proposes a novel set of speaker-dependent (SD) generative adversarial network (GAN) based data augmentation approaches. These flexibly allow: a) temporal or speed perturbed normal speech spectra to be modified to be closer to those of an impaired speaker when parallel speech data is available; and b) for non-parallel data, the SVD decomposed normal speech spectral basis features to be transformed into those of a target elderly speaker, before being recombined with the temporal bases to produce augmented data for state-of-the-art TDNN and Conformer ASR system training. Experiments were conducted on four tasks: the English UASpeech and TORGO dysarthric speech corpora, and the English DementiaBank Pitt and Cantonese JCCOCC MoCA elderly speech datasets. The proposed GAN based data augmentation approaches consistently outperformed the baseline speed perturbation method, with absolute WER reductions of up to 4.91% and 3.0% (9.61% and 6.4% relative) on the TORGO and DementiaBank data. Consistent performance improvements were retained after applying LHUC based speaker adaptation.
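The SVD decompose-transform-recombine step for non-parallel data can be sketched as follows: factor a (frequency x time) spectrogram, map only the speaker-dependent spectral bases, and recombine with the untouched temporal bases. The `transform_bases` callable below is a placeholder for the learned normal-to-elderly GAN mapping, which the sketch does not attempt to model; the identity mapping is used only to show that the decomposition round-trips.

```python
import numpy as np

def svd_basis_augment(spec, transform_bases):
    """Decompose a (freq x time) spectrogram into spectral bases U and
    temporal bases (s, Vt) via SVD, map only the spectral bases with
    `transform_bases`, then recombine with the original temporal bases."""
    U, s, Vt = np.linalg.svd(spec, full_matrices=False)
    U_target = transform_bases(U)
    return U_target @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
spec = np.abs(rng.normal(size=(8, 20)))   # toy magnitude spectrogram
identity = lambda U: U                     # placeholder for the GAN mapping
recon = svd_basis_augment(spec, identity)
assert recon.shape == spec.shape
assert np.allclose(recon, spec, atol=1e-8)  # identity mapping round-trips
```

Because only `U` is transformed, the timing structure of the utterance (carried by `s` and `Vt`) is preserved, which is what lets the method work without parallel recordings.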
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose a Chinese cOrpus foR Gender bIas Probing and Mitigation CORGI-PM, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require the models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
Advances in computer vision and machine learning techniques have led to significant development in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radars. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, need specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns. To address these limitations, recent research has explored the use of WiFi antennas (1D sensors) for body segmentation and key-point body detection. This paper further expands on the use of the WiFi signal in combination with deep learning architectures, commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
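The core mapping described above, WiFi phase/amplitude features in, per-region UV coordinates out, can be caricatured with a tiny forward pass. This is a toy two-layer stand-in for the paper's much deeper network: the feature layout, layer sizes, and weights below are all invented for illustration; the only faithful detail is that sigmoid outputs confine every predicted (u, v) to [0, 1].

```python
import math
import random

def mlp_wifi_to_uv(features, w1, b1, w2, b2):
    """Toy forward pass: WiFi phase/amplitude features -> ReLU hidden
    layer -> sigmoid UV outputs, so predicted coordinates lie in [0, 1]."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    return [1.0 / (1.0 + math.exp(-(sum(w * h for w, h in zip(row, hidden)) + b)))
            for row, b in zip(w2, b2)]

rng = random.Random(0)
feats = [rng.uniform(-1, 1) for _ in range(6)]  # e.g. 3 antennas x (phase, amp)
w1 = [[rng.uniform(-1, 1) for _ in range(6)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0, 0.0]
u, v = mlp_wifi_to_uv(feats, w1, b1, w2, b2)
assert 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0  # valid UV coordinates by construction
```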
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) with multiple modalities like images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to potential noise involved in modalities (e.g., unidentifiable images and relations). In this paper we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, to dynamically predict the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed for addressing vague entity details. Extensive experimental results show that our model not only achieves SOTA performance on multiple training scenarios including supervised, unsupervised, iterative, and low resource, but also has a limited parameter count, competitive speed, and good interpretability. Our code will be available soon.
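The instance-level fusion idea can be sketched in a few lines: each entity gets its own softmax-normalized weights over its modality embeddings, so an uninformative modality is down-weighted for that entity alone. Scalar confidence scores stand in here for the cross-modal correlation coefficients that MEAformer actually predicts with a transformer; all numbers are illustrative.

```python
import math

def fuse_modalities(embs, confs):
    """Per-entity softmax over scalar modality confidences gives dynamic
    weights; the fused embedding is the weighted sum of modality embeddings."""
    m = max(confs)
    exps = [math.exp(c - m) for c in confs]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(embs[0])
    fused = [sum(w * e[i] for w, e in zip(weights, embs)) for i in range(dim)]
    return fused, weights

# graph, image, and relation embeddings for one entity; the image is noisy,
# so it gets a low confidence and hence a small fusion weight
embs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
fused, weights = fuse_modalities(embs, confs=[2.0, -3.0, 2.0])
assert abs(sum(weights) - 1.0) < 1e-9
assert weights[1] < 0.05  # the unreliable image modality is suppressed
```

This is precisely the contrast with KG-level fusion: there the weights would be shared across all entities, so a single noisy image could not be suppressed per entity.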
Long document retrieval aims to fetch query-relevant documents from a large-scale collection, where knowledge distillation has become the de facto approach to improving a retriever by mimicking a heterogeneous yet powerful cross-encoder. However, in contrast to passages or sentences, retrieval on long documents suffers from the scope hypothesis that a long document may cover multiple topics. This maximizes their structure heterogeneity and poses a granular-mismatch issue, leading to inferior distillation efficacy. In this work, we propose a new learning framework, fine-grained distillation (FGD), for long-document retrievers. While preserving the conventional dense retrieval paradigm, it first produces globally consistent representations across different fine granularities and then applies multi-granular aligned distillation merely during training. In experiments, we evaluate our framework on two long-document retrieval benchmarks, which show state-of-the-art performance.
To improve the performance of the dual-encoder retriever, one effective approach is knowledge distillation from the cross-encoder ranker. Existing works construct the candidate passages following the supervised learning setting where a query is paired with a positive passage and a batch of negatives. However, through empirical observation, we find that even the hard negatives from advanced methods are still too trivial for the teacher to distinguish, preventing the teacher from transferring abundant dark knowledge to the student through its soft label. To alleviate this issue, we propose ADAM, a knowledge distillation framework that can better transfer the dark knowledge held in the teacher with Adaptive Dark exAMples. Different from previous works that only rely on one positive and hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query through mixing-up and masking in discrete space. Furthermore, as the quality of knowledge held in different training instances varies as measured by the teacher's confidence score, we propose a self-paced distillation strategy that adaptively concentrates on a subset of high-quality instances to conduct our dark-example-based knowledge distillation to help the student learn better. We conduct experiments on two widely-used benchmarks and verify the effectiveness of our method.
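The two dark-example constructions, mixing-up and masking in discrete space, can be sketched directly on token lists. A minimal illustration under stated assumptions: the replacement policy (uniform random positions, cycling through the negative's tokens) and the `ratio` values are invented here, not taken from ADAM's actual recipe.

```python
import random

def mixup_dark(pos_tokens, neg_tokens, ratio, rng):
    """Discrete mix-up: replace a `ratio` fraction of positions in the
    positive passage with tokens drawn from a negative, yielding a
    passage of moderate relevance to the query."""
    out = list(pos_tokens)
    k = int(len(out) * ratio)
    for i in rng.sample(range(len(out)), k):
        out[i] = neg_tokens[i % len(neg_tokens)]
    return out

def mask_dark(pos_tokens, ratio, rng, mask="[MASK]"):
    """Discrete masking: hide a `ratio` fraction of the positive's tokens,
    weakening its relevance signal without importing foreign content."""
    out = list(pos_tokens)
    k = int(len(out) * ratio)
    for i in rng.sample(range(len(out)), k):
        out[i] = mask
    return out

rng = random.Random(0)
pos = ["deep", "models", "retrieve", "long", "documents", "well"]
neg = ["cats", "nap", "all", "day"]
mixed = mixup_dark(pos, neg, 0.5, rng)
masked = mask_dark(pos, 0.5, rng)
assert len(mixed) == len(pos) and len(masked) == len(pos)
assert sum(t == "[MASK]" for t in masked) == 3  # int(6 * 0.5) positions hidden
assert mixed != pos                              # some tokens were swapped
```

The point of such examples is that the teacher can no longer score them as trivially irrelevant, so its soft labels carry more usable dark knowledge for the student.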