Sound source localization aims to seek the direction of arrival (DOA) of all sound sources from the observed multi-channel audio. For the practical problem of an unknown number of sources, existing localization algorithms attempt to predict a likelihood-based coding (i.e., spatial spectrum) and employ a pre-determined threshold to detect the number of sources and the corresponding DOA values. However, these threshold-based algorithms are not stable since they are limited by the careful choice of threshold. To address this problem, we propose an iterative sound source localization approach called ISSL, which iteratively extracts each source's DOA without thresholding until a termination criterion is met. Unlike threshold-based algorithms, ISSL designs an active source detector network based on a binary classifier to accept the residual spatial spectrum and decide whether to stop the iteration. By doing so, our ISSL can deal with an arbitrary number of sources, even more than the number of sources seen during the training stage. Experimental results show that our ISSL achieves significant performance improvements in both DOA estimation and source number detection compared with existing threshold-based algorithms.
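The iterative, threshold-free extraction idea can be sketched as follows. This is a minimal toy, not the authors' implementation: `source_active` stands in for the learned binary active-source detector network, and the fixed-width peak suppression is an illustrative assumption.

```python
import numpy as np

def iterative_doa_extraction(spectrum, source_active, beam_width=15):
    """Sketch of ISSL-style iterative DOA extraction.

    `source_active` stands in for the paper's binary active-source
    detector: it receives the residual spatial spectrum and returns
    True while another source is believed to be present.
    """
    residual = spectrum.astype(float).copy()
    doas = []
    while source_active(residual):
        peak = int(np.argmax(residual))          # DOA of the strongest remaining source
        doas.append(peak)
        for angle in range(peak - beam_width, peak + beam_width + 1):
            residual[angle % len(residual)] = 0.0  # suppress this source in the residual
    return doas

# Toy example: a 360-point spatial spectrum with peaks at 45 and 200 degrees.
angles = np.arange(360)
spec = (np.exp(-0.5 * ((angles - 45) / 3.0) ** 2)
        + 0.8 * np.exp(-0.5 * ((angles - 200) / 3.0) ** 2))
# Mock detector: "active" while residual energy remains above a tiny floor
# (the real system learns this decision instead of hand-setting a threshold).
doas = iterative_doa_extraction(spec, lambda r: r.max() > 1e-3)
```

Because the stopping decision is delegated to the detector, the loop naturally handles any number of sources, including counts unseen during training.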
Recently, neural network based direction-of-arrival (DOA) estimation algorithms have performed well on unknown-number sound source scenarios. These algorithms are usually achieved by mapping the multi-channel audio input to a single output (i.e., the overall spatial pseudo-spectrum (SPS) of all sources), which is called MISO. However, such MISO algorithms strongly depend on an empirical threshold setting and the angle assumption that the angles between the sound sources are greater than a fixed angle. To address these limitations, we propose a novel multi-channel input and multiple outputs DOA network called MIMO-DOAnet. Unlike the general MISO algorithms, MIMO-DOAnet predicts the SPS coding of each sound source with the help of the informative spatial covariance matrix. By doing so, the threshold task of detecting the number of sound sources becomes an easier task of detecting whether a sound source exists in each output, and the serious interaction between sound sources disappears during the inference stage. Experimental results show that MIMO-DOAnet achieves relative 18.6% and absolute 13.3%, relative 34.4% and absolute 20.2% F1 score improvements compared with the MISO baseline in the 3-source and 4-source scenarios, respectively. The results also demonstrate that MIMO-DOAnet alleviates the threshold setting problem and effectively solves the angle assumption problem.
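The spatial covariance matrix that informs the per-source SPS prediction can be computed as below; the array shapes and the frame-averaging convention are assumptions for illustration, not the paper's exact front-end:

```python
import numpy as np

def spatial_covariance(stft):
    """Per-frequency spatial covariance matrix (SCM) of a multi-channel STFT.

    stft: complex array of shape (channels, frames, freq_bins).
    Returns: array of shape (freq_bins, channels, channels),
    i.e. R_f = (1/T) * sum_t x_f(t) x_f(t)^H for every frequency bin f.
    """
    _, frames, _ = stft.shape
    return np.einsum('ctf,dtf->fcd', stft, np.conj(stft)) / frames

# Toy input: 4 channels, 10 frames, 3 frequency bins.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10, 3)) + 1j * rng.standard_normal((4, 10, 3))
scm = spatial_covariance(x)
```

Each per-frequency SCM is Hermitian by construction, which is the property spatial features typically exploit.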
Recently, many deep learning based beamformers have been proposed for multi-channel speech separation. Nevertheless, most of them rely on extra cues known in advance, such as speaker features, face images or directional information. In this paper, we propose an end-to-end beamforming network for direction-guided speech separation given merely the mixture signal, namely MIMO-DBnet. Specifically, we design a multi-channel input and multiple outputs architecture to predict the direction-of-arrival based embeddings and beamforming weights for each source. The precisely estimated directional embedding provides quite effective spatial discrimination guidance for the neural beamformer to offset the effect of phase wrapping, thus allowing more accurate reconstruction of the two sources' speech signals. Experiments show that our proposed MIMO-DBnet not only achieves a comprehensive improvement compared to baseline systems, but also maintains the performance on high frequency bands when phase wrapping occurs.
In this paper, we present a Blind Speech Separation and Dereverberation (BSSD) network, which performs simultaneous speaker separation, dereverberation and speaker identification in a single neural network. Speaker separation is guided by a set of predefined spatial cues. Dereverberation is performed using neural beamforming, and speaker identification is aided by embedding vectors and triplet mining. We introduce a frequency-domain model which uses complex-valued neural networks, and a time-domain variant which performs beamforming in latent space. Further, we propose a block-online mode to process longer recordings, as they occur in meeting scenarios. We evaluate our system in terms of scale-independent signal-to-distortion ratio (SI-SDR), word error rate (WER) and equal error rate (EER).
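Triplet mining optimizes a loss of the following form on the embedding vectors; the margin value and the toy 2-D embeddings are illustrative, not taken from the paper:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embeddings: pull anchor/positive together and push
    anchor/negative apart by at least `margin` (squared L2 distances)."""
    d_ap = float(np.sum((anchor - positive) ** 2))
    d_an = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 1.0])
p = np.array([0.0, 1.1])   # same speaker: close to the anchor
n = np.array([1.0, 0.0])   # different speaker: far from the anchor
loss = triplet_loss(a, p, n)
```

When the negative is already farther than the positive by more than the margin, the loss is zero; mining seeks triplets where this is violated.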
Deep learning techniques for separating audio into different sound sources face several challenges. Standard architectures require training separate models for different types of audio sources. Although some universal separators employ a single model to target multiple sources, they have difficulty generalizing to unseen sources. In this paper, we propose a three-component pipeline to train a universal audio source separator from a large, but weakly-labeled dataset: AudioSet. First, we propose a transformer-based sound event detection system for processing the weakly-labeled training data. Second, we devise a query-based audio separation model that leverages this data for model training. Third, we design a latent embedding processor to encode queries that specify audio targets for separation, allowing zero-shot generalization. Our approach uses a single model for source separation of multiple sound types, and relies solely on weakly-labeled data for training. In addition, the proposed audio separator can be used in a zero-shot setting, learning to separate types of audio sources that were never seen in training. To evaluate the separation performance, we test our model on MUSDB18 while training on the disjoint AudioSet. We further verify the zero-shot performance by conducting another experiment on audio source types held out from training. The model achieves comparable Source-to-Distortion Ratio (SDR) performance to current supervised models in both cases.
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications.
This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
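The encode, mask, decode pipeline described above can be sketched in a few lines of numpy; the random matrices stand in for the learned encoder/decoder, and the masks stand in for the TCN's output:

```python
import numpy as np

# Toy dimensions: N basis signals, frames of L samples, T frames
# (all learned/derived from data in the real system).
N, L, T = 16, 8, 5
rng = np.random.default_rng(1)
encoder = rng.standard_normal((N, L))   # linear encoder
decoder = np.linalg.pinv(encoder)       # linear decoder, shape (L, N)

frames = rng.standard_normal((T, L))    # framed mixture waveform
w = frames @ encoder.T                  # encoder representation, shape (T, N)

# Masks for two speakers (here arbitrary values in [0, 1] summing to one;
# Conv-TasNet predicts them with stacked dilated 1-D conv blocks).
m1 = rng.uniform(size=(T, N))
m2 = 1.0 - m1

s1 = (w * m1) @ decoder.T               # decode the masked representations
s2 = (w * m2) @ decoder.T
```

Because the two masks sum to one, the decoded sources add back up to the mixture frames, mirroring the mask-based formulation.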
We present a single-stage causal waveform-to-waveform multi-channel model that can separate moving sound sources based on their broad spatial locations in a dynamic acoustic scene. We divide the scene into two spatial regions containing, respectively, the target and the interfering sound sources. The model is trained end-to-end and performs spatial processing implicitly, without any components based on traditional processing or use of hand-crafted spatial features. We evaluate the proposed model on a real-world dataset and show that the model matches the performance of an oracle beamformer followed by a state-of-the-art single-channel enhancement network.
Active speaker detection and speech enhancement have become two increasingly attractive topics in audio-visual scenario understanding. Independently designed architecture schemes, tailored to the respective characteristics of each task, have been widely adopted in correspondence to each single task. This may lead to the representations learned by the models being task-specific, and inevitably results in a lack of generalization ability of the features based on multi-modal modeling. Recent studies have shown that establishing cross-modal relationships between the auditory and visual streams is a promising solution to the challenge of audio-visual multi-task learning. Therefore, motivated by bridging the multi-modal associations in audio-visual tasks, a unified framework is proposed in this study to achieve target speaker detection and speech enhancement through joint learning of an audio-visual model.
Audio segmentation and sound event detection are key topics in machine listening that aim to detect acoustic classes and their respective boundaries. They are useful for audio analysis, speech recognition, audio indexing, and music information retrieval. In recent years, most research articles have adopted segmentation-by-classification. This technique divides audio into small frames and individually performs classification on these frames. In this paper, we present a novel approach called You Only Hear Once (YOHO), which is inspired by the YOLO algorithm popularly adopted in Computer Vision. We convert the detection of acoustic boundaries into a regression problem instead of frame-based classification. This is done by having separate output neurons to detect the presence of an audio class and predict its start and end points. The relative improvement of YOHO's F-measure, compared to the state-of-the-art convolutional recurrent neural network, ranged from 1% to 6% across multiple datasets for audio segmentation and sound event detection. As the output of YOHO is more end-to-end and has fewer neurons to predict, inference is at least 6 times faster than segmentation-by-classification. In addition, as this approach predicts acoustic boundaries directly, post-processing and smoothing are about 7 times faster.
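The regression-style output can be decoded roughly as follows; the exact per-bin output layout (presence plus normalized start/end offsets) is an illustrative assumption, not YOHO's published head:

```python
def decode_events(grid, bin_sec):
    """Turn YOHO-style per-bin outputs into (start, end) events in seconds.

    grid: list of (presence, rel_start, rel_end) triples, one per time bin,
    where rel_start/rel_end are offsets within the bin, normalized to [0, 1].
    (This layout is an illustrative assumption about the output head.)
    """
    events = []
    for i, (presence, rel_start, rel_end) in enumerate(grid):
        if presence >= 0.5:   # the acoustic class is present in this bin
            events.append(((i + rel_start) * bin_sec, (i + rel_end) * bin_sec))
    return events

# Three 1-second bins; the class is active only in the middle bin, 1.25 s to 1.75 s.
events = decode_events([(0.1, 0.0, 0.0), (0.9, 0.25, 0.75), (0.2, 0.0, 0.0)], bin_sec=1.0)
```

Predicting boundaries directly like this is what removes the need for dense frame-level smoothing.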
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Due to their success, these data-driven techniques have been applied in the audio domain. More specifically, DNN models have been applied in the speech enhancement domain to achieve denoising, dereverberation and multi-speaker separation in monaural speech enhancement. In this paper, we review some dominant DNN techniques employed to achieve speech separation. The review covers the whole pipeline of speech enhancement, from feature extraction, to how DNN-based tools model both global and local features of speech, to model training (supervised and unsupervised). We also review the use of pre-trained speech-enhancement models to boost the speech enhancement process. The review is geared towards covering the dominant trends with regard to DNN application in speech enhancement for speech obtained via a single speaker.
Automatic speech recognition (ASR) of multi-channel multi-speaker overlapped speech remains one of the most challenging tasks for the speech community. In this paper, we look into this challenge by utilizing the location information of target speakers in the 3D space for the first time. To explore the strength of the proposed 3D spatial features, two paradigms are investigated: 1) a pipelined system with a multi-channel speech separation module followed by a state-of-the-art single-channel ASR module; 2) an "all-in-one" model where the 3D spatial features are directly used as the input of the ASR system without an explicit separation module. Both of them are fully differentiable and can be back-propagated end-to-end. We test them on simulated overlapped speech and real recordings. Experimental results show that 1) the proposed all-in-one model achieves an error rate comparable to the pipelined system while reducing the inference time by half; 2) the proposed 3D spatial features significantly outperform (31% CERR) all previous location-based features used in both paradigms.
On-device directional hearing requires audio source separation from a given direction while meeting stringent, humanly imperceptible latency requirements. While neural networks can achieve significantly better performance than traditional beamformers, all existing models fall short of supporting low-latency causal inference on computationally constrained wearables. We present a hybrid model that combines a traditional beamformer with a custom lightweight neural network. The former reduces the computational burden of the latter and also improves its generalizability, while the latter is designed to further reduce the memory and computational overhead to enable real-time and low-latency operation. Our evaluation shows performance comparable to state-of-the-art causal inference models on synthetic data while achieving a 5x reduction in model size, a 4x reduction in computation per second, a 5x reduction in processing time, and better generalization to real hardware data. Further, our real-time hybrid model runs in 8 ms on mobile CPUs designed for low-power wearable devices and achieves an end-to-end latency of 17.5 ms.
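The traditional front-end in such a hybrid can be as simple as a classical delay-and-sum beamformer; the integer-sample delays below keep this sketch minimal (real systems use fractional delays derived from the array geometry):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Classical delay-and-sum beamformer with integer-sample delays.

    channels: array of shape (num_mics, num_samples); delays[m] is the number
    of samples by which channel m must be advanced so the target aligns.
    """
    num_mics, _ = channels.shape
    out = np.zeros(channels.shape[1])
    for m in range(num_mics):
        out += np.roll(channels[m], -delays[m])  # advance each channel, then sum
    return out / num_mics

# A unit impulse arriving at sample 10, 12, 14 on three mics (2-sample spacing).
sig = np.zeros((3, 32))
for m, t in enumerate([10, 12, 14]):
    sig[m, t] = 1.0
y = delay_and_sum(sig, delays=[0, 2, 4])
```

Aligning and averaging reinforces the target direction while attenuating uncorrelated interference, leaving less work for the neural stage.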
Direction finding and positioning systems based on RF signals are significantly impacted by multipath propagation, particularly in indoor environments. Existing algorithms (e.g., MUSIC) perform poorly in resolving the angle of arrival (AoA) in the presence of multipath or when operating in a weak signal regime. We note that digitally sampled RF frontends allow the signals and their delayed components to be easily analyzed. Low-cost software-defined radio (SDR) modules enable channel state information (CSI) extraction across a wide spectrum, motivating the design of an enhanced angle-of-arrival (AoA) solution. We propose a deep learning approach to derive the AoA from a single snapshot of the SDR multi-channel data. We compare and contrast deep-learning based angle classification and regression models to accurately estimate up to two AoAs. We have implemented the inference engines on different platforms to extract AoAs in real time, demonstrating the computational tractability of our approach. To demonstrate the utility of our approach, we have collected IQ (in-phase and quadrature components) samples from a four-element uniform linear array (ULA) in various Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) environments, and published the dataset. Our proposed method demonstrates excellent reliability in determining the number of impinging signals and achieves a mean AoA error of $2^{\circ}$.
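For reference, the classical narrowband relation that links the inter-element phase step of a ULA to the AoA (this is textbook array geometry, not the paper's deep learning method):

```python
import numpy as np

def aoa_from_phase(dphi, d, wavelength):
    """AoA from the inter-element phase difference of a narrowband ULA:
    dphi = 2*pi*d*sin(theta)/wavelength  =>  theta = arcsin(...)."""
    return np.degrees(np.arcsin(dphi * wavelength / (2 * np.pi * d)))

# Simulate one snapshot on a 4-element half-wavelength ULA for a 30-degree source.
wavelength, d, theta = 1.0, 0.5, np.radians(30.0)
m = np.arange(4)
snapshot = np.exp(1j * 2 * np.pi * d * m * np.sin(theta) / wavelength)

dphi = np.angle(snapshot[1] * np.conj(snapshot[0]))  # phase step between elements
theta_hat = aoa_from_phase(dphi, d, wavelength)
```

Multipath corrupts exactly this phase relation, which is why learned models are attractive in the regimes the abstract describes.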
In a scenario with multiple persons talking simultaneously, the spatial characteristics of the signals are the most distinct feature for extracting the target signal. In this work, we develop a deep joint spatial-spectral non-linear filter that can be steered in an arbitrary target direction. For this we propose a simple and effective conditioning mechanism, which sets the initial state of the filter's recurrent layers based on the target direction. We show that this scheme is more effective than the baseline approach and increases the flexibility of the filter at no performance cost. The resulting spatially selective non-linear filters can also be used for speech separation of an arbitrary number of speakers and enable very accurate multi-speaker localization as we demonstrate in this paper.
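One way to realize such conditioning, sketched under the assumption of a (cos, sin) direction encoding and a single linear projection (the paper's exact mechanism may differ):

```python
import numpy as np

def init_state_from_direction(theta_deg, W, b):
    """Map the target direction to the recurrent layer's initial hidden state.

    The (cos, sin) encoding avoids the wrap-around discontinuity at 0/360
    degrees; the single linear layer is an illustrative assumption about
    the conditioning mechanism.
    """
    theta = np.radians(theta_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])
    return np.tanh(W @ direction + b)

rng = np.random.default_rng(0)
hidden = 8
W, b = rng.standard_normal((hidden, 2)), np.zeros(hidden)
h0 = init_state_from_direction(45.0, W, b)
```

Steering the filter through its initial state leaves the per-frame computation unchanged, which is why the flexibility comes at no performance cost.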
Augmented reality devices have the potential to enhance human perception and enable other assistive functionalities in complex conversational environments. Effectively capturing the audio-visual context necessary for understanding these social interactions first requires detecting and localizing the voice activities of the device wearer and the surrounding people. These tasks are challenging due to their egocentric nature: the wearer's head motion may cause motion blur, the surrounding people may appear in difficult viewing angles, and there may be occlusions, visual clutter, audio noise, and bad lighting. Under these conditions, previous state-of-the-art active speaker detection methods do not give satisfactory results. Instead, we tackle the problem from a new setting using both video and multi-channel microphone array audio. We propose a novel end-to-end deep learning approach that is able to give robust voice activity detection and localization results. In contrast to previous methods, our method localizes active speakers from all possible directions on the sphere, even outside the camera's field of view, while simultaneously detecting the device wearer's own voice activity. Our experiments show that the proposed method gives superior results, can run in real time, and is robust against noise and clutter.
Music discovery services let users identify songs from short mobile recordings. These solutions are often based on Audio Fingerprinting (AFP), and rely more specifically on the extraction of spectral peaks in order to be robust to a number of distortions. Little work has been done to study the robustness of these algorithms to background noise captured in real environments. In particular, AFP systems still struggle when the signal-to-noise ratio is low, i.e. when the background noise is strong. In this project, we tackle this problem with Deep Learning. We test a new hybrid strategy which consists of inserting a denoising DL model in front of a peak-based AFP algorithm. We simulate noisy music recordings using a realistic data augmentation pipeline, and train a DL model to denoise them. The denoising model limits the impact of background noise on the AFP system's extracted peaks, improving its robustness to noise. We further propose a novel loss function to adapt the DL model to the considered AFP system, increasing its precision in terms of retrieved spectral peaks. To the best of our knowledge, this hybrid strategy has not been tested before.
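A minimal version of the spectral peak picking that peak-based AFP systems rely on; the strict local-maximum criterion and window size below are illustrative choices, not the project's exact algorithm:

```python
import numpy as np

def spectral_peaks(spec, neighborhood=1):
    """Pick local maxima of a magnitude spectrogram: a (freq, time) bin is a
    peak if it is strictly greater than every neighbor in a small window."""
    F, T = spec.shape
    n = neighborhood
    peaks = []
    for f in range(F):
        for t in range(T):
            patch = spec[max(0, f - n):f + n + 1, max(0, t - n):t + n + 1]
            # exactly one entry of the patch (the bin itself) may be >= spec[f, t]
            if spec[f, t] > 0 and np.sum(patch >= spec[f, t]) == 1:
                peaks.append((f, t))
    return peaks

# Toy spectrogram with two isolated peaks.
s = np.zeros((6, 6))
s[1, 2] = 3.0
s[4, 4] = 2.0
peaks = spectral_peaks(s)
```

Noise adds spurious local maxima to this constellation, which is precisely what the denoising front-end is meant to suppress.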
Audio-based pornographic content detection can enable effective adult content filtering by leveraging different spectral features. To improve it, we explore pornographic sound modeling based on different neural architectures and acoustic features. We find that a CNN trained on log-spectrograms can achieve the best performance on the Pornography-800 dataset. Our experimental results also demonstrate that the log-mel spectrogram can provide a better representation for the models to identify pornographic sounds. Finally, to classify the whole audio waveform instead of segments, we adopt a segment-to-audio voting technique, yielding the best audio-level detection results.
In this paper, we present a new model for Direction of Arrival (DOA) estimation of sound sources based on an Icosahedral Convolutional Neural Network (CNN) applied over SRP-PHAT power maps computed from the signals received by a microphone array. This icosahedral CNN is equivariant to the 60 rotational symmetries of the icosahedron, which represent a good approximation of the continuous space of spherical rotations, and can be implemented using standard 2D convolutional layers, having a lower computational cost than most of the spherical CNNs. In addition, instead of using fully connected layers after the icosahedral convolutions, we propose a new soft-argmax function that can be seen as a differentiable version of the argmax function and allows us to solve the DOA estimation as a regression problem interpreting the output of the convolutional layers as a probability distribution. We prove that using models that fit the equivariances of the problem allows us to outperform other state-of-the-art models with a lower computational cost and more robustness, obtaining root mean square localization errors lower than 10{\deg} even in scenarios with a reverberation time $T_{60}$ of 1.5 s.
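The soft-argmax idea in one dimension; the paper applies it over the icosahedral grid outputs, so the 1-D angle grid and temperature below are only a toy illustration:

```python
import numpy as np

def soft_argmax(scores, coords, beta=25.0):
    """Differentiable stand-in for argmax: a softmax over the scores gives
    a probability distribution, and the result is the expected coordinate."""
    w = np.exp(beta * (scores - scores.max()))  # shift for numerical stability
    w /= w.sum()
    return float(w @ coords)

# Scores over a 1-D grid of angles; the peak is at 90 degrees.
angles = np.linspace(0.0, 180.0, 181)
scores = -np.abs(angles - 90.0) / 180.0   # highest score at 90 degrees
theta = soft_argmax(scores, angles)
```

Unlike a hard argmax, this expectation is differentiable in the scores, so the DOA can be trained directly as a regression target.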
We introduce a variant of the speaker localization problem, which we call device arbitration. In the device arbitration problem, a user utters a keyword that is detected by multiple distributed microphone arrays (smart home devices), and we want to determine which device heard the user most closely. Rather than solving the full localization problem, we propose an end-to-end machine learning system. This system learns a feature embedding that is computed independently on each device. The embeddings from each device are then aggregated together to produce the final arbitration decision. We use large-scale room simulations to generate training and evaluation data, and compare our system against a signal processing baseline.
Speaker diarization is the task of labeling audio or video recordings with classes that correspond to speaker identity, or in short, the task of identifying "who spoke when". In the early years, speaker diarization algorithms were developed for speech recognition on multi-speaker audio recordings to enable speaker-adaptive processing. These algorithms also gained their own value as a standalone application over time, providing speaker-specific metadata for downstream tasks such as audio retrieval. More recently, with the emergence of deep learning technology, which has driven revolutionary changes in research and practices across speech application domains, rapid advancements have been made in speaker diarization. In this paper, we review not only the historical development of speaker diarization technology but also the recent advancements in neural speaker diarization approaches. In addition, we discuss how speaker diarization systems have been integrated with speech recognition applications, and how the recent surge of deep learning is leading the way in jointly modeling these two components so that they complement each other. By considering such exciting technical trends, we believe that this paper is a valuable contribution to the community, consolidating the recent developments with neural methods and thereby facilitating further progress toward more efficient speaker diarization.