Onomatopoeic words are character sequences that phonetically imitate sounds and are effective at expressing characteristics of a sound such as its duration, pitch, and timbre. We propose an environmental-sound-extraction method that uses an onomatopoeic word to specify the target sound to be extracted. With this method, we estimate a time-frequency mask from the input mixture spectrogram and the onomatopoeic word using a U-Net architecture, and then extract the corresponding target sound by masking the spectrogram. Experimental results show that the proposed method can extract only the target sound corresponding to the onomatopoeic word and performs better than conventional methods that specify the target sound by a sound-event class.
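To make the masking step concrete, here is a minimal PyTorch sketch of spectrogram masking conditioned on an onomatopoeia embedding. The `MaskUNet` stand-in, the embedding dimension, and the STFT settings are illustrative assumptions rather than the authors' implementation, which uses a full U-Net.

```python
import torch
import torch.nn as nn

class MaskUNet(nn.Module):
    """Toy stand-in for the U-Net mask estimator; the condition is broadcast over time."""
    def __init__(self, freq_bins=257, cond_dim=128):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, freq_bins)
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, mag, cond):                       # mag: [B, F, T], cond: [B, D]
        c = self.cond_proj(cond)[:, :, None].expand_as(mag)
        x = torch.stack([mag, c], dim=1)                # [B, 2, F, T]
        return self.net(x).squeeze(1)                   # mask in [0, 1]

def extract(mixture, word_emb, model, n_fft=512, hop=128):
    """Estimate a time-frequency mask and resynthesize the target sound."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(mixture, n_fft, hop, window=window, return_complex=True)
    mask = model(spec.abs(), word_emb)
    return torch.istft(spec * mask, n_fft, hop, window=window)

model = MaskUNet()
mix = torch.randn(1, 16000)          # 1 s mixture at 16 kHz
emb = torch.randn(1, 128)            # embedding of the onomatopoeic word (hypothetical encoder)
target = extract(mix, emb, model)
```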
We present the first neural network model to achieve real-time and streaming target sound extraction. To accomplish this, we propose Waveformer, an encoder-decoder architecture with a stack of dilated causal convolution layers as the encoder, and a transformer decoder layer as the decoder. This hybrid architecture uses dilated causal convolutions for processing large receptive fields in a computationally efficient manner, while also benefiting from the performance transformer-based architectures provide. Our evaluations show as much as 2.2-3.3 dB improvement in SI-SNRi compared to the prior models for this task while having a 1.2-4x smaller model size and a 1.5-2x lower runtime. Open-source code and datasets: https://github.com/vb000/Waveformer
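The following is a rough, simplified PyTorch sketch of the hybrid structure described above: a stack of dilated causal convolutions as the encoder and a transformer decoder layer that attends to a label embedding of the target sound. All sizes and the conditioning scheme are assumptions for illustration; the released Waveformer code at the linked repository is the authoritative implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalEncoder(nn.Module):
    """Stack of dilated causal 1-D convolutions; the receptive field grows as 2**i."""
    def __init__(self, channels=64, layers=6):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, 3, dilation=2 ** i) for i in range(layers))

    def forward(self, x):                                 # x: [B, C, T]
        for conv in self.convs:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            x = torch.relu(conv(F.pad(x, (pad, 0)))) + x  # left-pad only => causal, residual
        return x

class TinyWaveformer(nn.Module):
    def __init__(self, channels=64, n_classes=41):
        super().__init__()
        self.embed = nn.Conv1d(1, channels, 16, stride=8)
        self.encoder = DilatedCausalEncoder(channels)
        self.label_emb = nn.Embedding(n_classes, channels)    # query for the target class
        self.decoder = nn.TransformerDecoderLayer(channels, nhead=4, batch_first=True)
        self.mask = nn.Conv1d(channels, channels, 1)
        self.out = nn.ConvTranspose1d(channels, 1, 16, stride=8)

    def forward(self, wav, label):                        # wav: [B, 1, T], label: [B]
        feats = self.embed(wav)                           # [B, C, T']
        enc = self.encoder(feats)
        query = self.label_emb(label)[:, None, :]         # [B, 1, C]
        T = enc.shape[-1]
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        dec = self.decoder(enc.transpose(1, 2), memory=query, tgt_mask=causal)
        m = torch.sigmoid(self.mask(dec.transpose(1, 2)))
        return self.out(feats * m)                        # masked features back to waveform

model = TinyWaveformer()
estimate = model(torch.randn(2, 1, 16000), torch.tensor([3, 7]))
```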
We propose a novel framework for target speech extraction based on semantic information, called ConceptBeam. Target speech extraction means extracting the speech of a target speaker from a mixture. Typical approaches have exploited properties of the audio signal, such as harmonic structure and direction of arrival. In contrast, ConceptBeam tackles the problem with semantic clues. Specifically, we extract the speech of a speaker who is talking about a concept, i.e., a topic of interest, using a concept specifier such as an image or speech. Solving this novel problem would open the door to innovative applications such as listening to the specific topic discussed in a conversation. Unlike keywords, concepts are abstract notions, making it challenging to directly represent a target concept. In our scheme, a concept is encoded as a semantic embedding by mapping the concept specifier into a shared embedding space. Such a modality-independent space can be built by deep metric learning using paired data consisting of images and their spoken captions. We use it to bridge modality-dependent information, i.e., the speech segments in the mixture, and the specified, modality-independent concept. As a proof of our scheme, we performed experiments using a set of images associated with spoken captions. That is, we generated speech mixtures from these spoken captions and used the images or speech signals as concept specifiers. We then extracted the target speech using the acoustic features of the identified segments. We compare ConceptBeam with two methods: one based on keywords obtained from a recognition system and another based on sound source separation. We show that ConceptBeam clearly outperforms the baseline methods and effectively extracts speech based on the semantic representation.
The marine ecosystem is changing at an alarming rate, exhibiting biodiversity loss and the migration of tropical species to temperate basins. Monitoring the underwater environments and their inhabitants is of fundamental importance to understand the evolution of these systems and implement safeguard policies. However, assessing and tracking biodiversity is often a complex task, especially in large and uncontrolled environments, such as the oceans. One of the most popular and effective methods for monitoring marine biodiversity is passive acoustic monitoring (PAM), which employs hydrophones to capture underwater sound. Many aquatic animals produce sounds characteristic of their own species; these signals travel efficiently underwater and can be detected even at great distances. Furthermore, modern technologies are becoming more and more convenient and precise, allowing for very accurate and careful data acquisition. To date, audio captured with PAM devices is frequently processed manually by marine biologists and interpreted with traditional signal processing techniques for the detection of animal vocalizations. This is a challenging task, as PAM recordings often span long periods of time. Moreover, one of the causes of biodiversity loss is sound pollution; in data obtained from regions with loud anthropic noise, it is hard to manually separate artificial noise from fish sounds. Nowadays, machine learning and, in particular, deep learning represent the state of the art for processing audio signals. Specifically, sound separation networks are able to identify and separate human voices and musical instruments. In this work, we show that the same techniques can be successfully used to automatically extract fish vocalizations in PAM recordings, opening up the possibility of biodiversity monitoring at a large scale.
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
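As a sketch of the encoder-mask-decoder pipeline described above, the toy model below uses a learned linear encoder, a heavily reduced dilated-convolution "TCN" that predicts one mask per speaker, and a linear transposed-convolution decoder; the real Conv-TasNet configuration (normalization, bottleneck layers, repeated stacks) is not reproduced here.

```python
import torch
import torch.nn as nn

class TinyConvTasNet(nn.Module):
    def __init__(self, n_src=2, n_filters=128, kernel=16, stride=8):
        super().__init__()
        self.n_src = n_src
        # Linear encoder: learned basis over overlapping waveform frames.
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        # Reduced "TCN": a few dilated 1-D conv blocks estimating one mask per source.
        blocks = []
        for d in (1, 2, 4, 8):
            blocks += [nn.Conv1d(n_filters, n_filters, 3, padding=d, dilation=d), nn.PReLU()]
        blocks += [nn.Conv1d(n_filters, n_src * n_filters, 1)]
        self.separator = nn.Sequential(*blocks)
        # Linear decoder: inverts the masked representations back to waveforms.
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)

    def forward(self, mix):                              # mix: [B, 1, T]
        w = self.encoder(mix)                            # [B, N, T']
        masks = torch.sigmoid(self.separator(w))         # [B, n_src * N, T']
        masks = masks.view(mix.size(0), self.n_src, -1, w.size(-1))
        est = [self.decoder(w * masks[:, i]) for i in range(self.n_src)]
        return torch.stack(est, dim=1)                   # [B, n_src, 1, T]

model = TinyConvTasNet()
separated = model(torch.randn(4, 1, 32000))              # 2 s mixtures -> two estimated speakers
```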
In this paper, we present CAESynth, a novel audio synthesizer based on a conditional autoencoder. CAESynth synthesizes timbre in real time by interpolating reference sounds in its shared latent feature space, while independently controlling pitch. We show that training a conditional autoencoder with accuracy-based timbre classification, together with adversarial regularization of pitch content, makes the timbre distribution in the latent space more effective and stable for timbre interpolation and pitch conditioning. The proposed method is applicable not only to the creation of musical cues but also to audio affordances in mixed reality based on novel timbres mixed with environmental sounds. We demonstrate experimentally that CAESynth achieves smooth, high-fidelity audio synthesis in real time through timbre interpolation, with independent and accurate pitch control, for musical cues as well as for audio affordances with environmental sounds. A Python implementation and some generated samples are shared online.
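A toy illustration of the interpolation-and-pitch-control idea: encode two reference sounds, blend their latent timbre codes, and decode at an independently chosen pitch. The encoder and decoder below are placeholders; CAESynth's actual networks and its classification and adversarial training objectives are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyConditionalAE(nn.Module):
    """Placeholder conditional autoencoder: a latent timbre code plus a one-hot pitch condition."""
    def __init__(self, n_mels=80, latent=32, n_pitches=88):
        super().__init__()
        self.n_pitches = n_pitches
        self.enc = nn.Sequential(nn.Linear(n_mels, 256), nn.ReLU(), nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent + n_pitches, 256), nn.ReLU(),
                                 nn.Linear(256, n_mels))

    def encode(self, mel):                               # mel: [B, T, n_mels]
        return self.enc(mel).mean(dim=1)                 # time-pooled timbre code [B, latent]

    def decode(self, z, pitch, n_frames=100):            # pitch: [B] integer pitch index
        cond = F.one_hot(pitch, self.n_pitches).float()
        frame = self.dec(torch.cat([z, cond], dim=-1))   # one decoded mel frame
        return frame[:, None, :].expand(-1, n_frames, -1)

model = ToyConditionalAE()
mel_a, mel_b = torch.rand(1, 100, 80), torch.rand(1, 100, 80)    # two reference sounds
z = 0.3 * model.encode(mel_a) + 0.7 * model.encode(mel_b)        # timbre interpolation
mel_out = model.decode(z, pitch=torch.tensor([60]))              # render at a chosen pitch
```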
The amount of audio data available on public websites is growing rapidly, and efficient mechanisms for accessing the desired data are needed. We propose a content-based audio retrieval method that can retrieve a target audio sample that is similar to, yet slightly different from, the query audio by introducing auxiliary text information that describes the difference between the query and target audio. While the scope of conventional content-based audio retrieval is limited to audio that is similar to the query audio, the proposed method can adjust the retrieval range by adding the embedding of the auxiliary text query to the embedding of the query example audio in a shared latent space. To evaluate our method, we built a dataset consisting of pairs of different audio clips together with text describing their difference. Experimental results show that the proposed method retrieves the paired audio more accurately than the baselines. We also confirmed, based on visualization, that the proposed method obtains a shared latent space in which an audio difference and the corresponding text are represented as similar embedding vectors.
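The retrieval rule itself is easy to state in code: shift the query-audio embedding by the embedding of the difference-describing text and rank candidates by similarity in the shared latent space. The sketch below assumes the two encoders are already trained; their outputs are stubbed with random tensors.

```python
import torch

def retrieve(query_audio_emb, diff_text_emb, candidate_embs, top_k=5):
    """Rank candidates by cosine similarity to (query audio embedding + difference text embedding)."""
    target = query_audio_emb + diff_text_emb                   # shift the query in the shared space
    target = target / target.norm(dim=-1, keepdim=True)
    cands = candidate_embs / candidate_embs.norm(dim=-1, keepdim=True)
    scores = cands @ target.squeeze(0)                         # cosine similarity per candidate
    return scores.topk(top_k).indices

query_emb = torch.randn(1, 256)     # embedding of the query audio clip
text_emb = torch.randn(1, 256)      # embedding of text describing the desired difference
database = torch.randn(1000, 256)   # embeddings of candidate audio clips
print(retrieve(query_emb, text_emb, database))
```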
Recent research has shown remarkable performance in leveraging multiple extraneous conditional and non-mutually exclusive semantic concepts for sound source separation, allowing the flexibility to extract a given target source based on multiple different queries. In this work, we propose a new optimal condition training (OCT) method for single-channel target source separation, based on greedy parameter updates using the highest performing condition among equivalent conditions associated with a given target source. Our experiments show that the complementary information carried by the diverse semantic concepts significantly helps to disentangle and isolate sources of interest much more efficiently compared to single-conditioned models. Moreover, we propose a variation of OCT with condition refinement, in which an initial conditional vector is adapted to the given mixture and transformed to a more amenable representation for target source extraction. We showcase the effectiveness of OCT on diverse source separation experiments where it improves upon permutation invariant models with oracle assignment and obtains state-of-the-art performance in the more challenging task of text-based source separation, outperforming even dedicated text-only conditioned models.
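The greedy update at the core of OCT can be sketched as follows, under the assumption of a generic conditioned separator and loss: evaluate every equivalent condition for the target, then backpropagate only through the best-performing one. The toy separator and the use of MSE are placeholders, not the paper's model or objective.

```python
import torch
import torch.nn.functional as F

class ToySeparator(torch.nn.Module):
    """Placeholder conditioned separator: a gain on the mixture predicted from the condition vector."""
    def __init__(self, cond_dim=16):
        super().__init__()
        self.gain = torch.nn.Linear(cond_dim, 1)

    def forward(self, mix, cond):
        return mix * torch.sigmoid(self.gain(cond))

def oct_step(model, optimizer, mixture, target, conditions, loss_fn):
    """One optimal-condition-training step: pick the lowest-loss condition, update on it alone."""
    with torch.no_grad():                                # cheap passes just to rank the conditions
        losses = [loss_fn(model(mixture, c), target) for c in conditions]
    best = min(range(len(losses)), key=lambda i: losses[i].item())
    optimizer.zero_grad()
    loss = loss_fn(model(mixture, conditions[best]), target)   # full pass on the best condition
    loss.backward()
    optimizer.step()
    return best, loss.item()

model = ToySeparator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mix, tgt = torch.randn(8, 16000), torch.randn(8, 16000)
conditions = [torch.randn(8, 16) for _ in range(3)]     # e.g. label, text, and audio-query embeddings
best_idx, loss_val = oct_step(model, opt, mix, tgt, conditions, F.mse_loss)
```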
We introduce AudioScopeV2, a state-of-the-art universal audio-visual on-screen sound separation system that is capable of learning to associate sounds with on-screen objects by looking at in-the-wild videos. We identify several limitations of previous work on audio-visual on-screen sound separation, including the coarse resolution of spatio-temporal attention, the poor convergence of the audio separation model, the limited variety of training and evaluation data, and the failure to account for the trade-off between preserving on-screen sounds and suppressing off-screen sounds. We provide solutions to all of these issues. Our proposed cross-modal and self-attention network architecture captures audio-visual dependencies at a fine resolution over time, and we also propose efficient separable variants that can scale to longer videos without sacrificing much performance. We also find that pre-training the model on audio alone greatly improves results. For training and evaluation, we collected new human annotations of on-screen sounds from a large database of in-the-wild videos (YFCC100M). This new dataset is more diverse and challenging. Finally, we propose a calibration procedure that allows exact tuning of on-screen reconstruction versus off-screen suppression, which greatly simplifies comparing performance between models with different operating points. Overall, our experimental results demonstrate clear improvements in on-screen separation performance under much more general conditions than previous methods, with minimal additional computational complexity.
Reliable, remote, continuous, real-time respiratory sound monitors with automated respiratory sound analysis are urgently needed in many clinical scenarios, such as monitoring the disease progression of coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, robust computerized respiratory sound analysis algorithms have not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchi labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests of long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also performed performance comparisons between LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results showed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in terms of F1 scores and areas under the receiver operating characteristic curves on most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, adding a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
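A minimal CNN-BiGRU of the kind benchmarked above, assuming log-mel spectrogram inputs and per-frame multi-label targets (one output per event type); the layer sizes and the four-event output are illustrative, not the study's exact configuration.

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    """CNN front-end over the mel axis followed by a bidirectional GRU over time."""
    def __init__(self, n_mels=64, hidden=64, n_events=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)))
        self.gru = nn.GRU(32 * (n_mels // 4), hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_events)       # e.g. inhalation / exhalation / CAS / DAS

    def forward(self, mel):                               # mel: [B, 1, n_mels, T]
        x = self.cnn(mel)                                 # [B, 32, n_mels // 4, T]
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)    # one feature vector per time frame
        x, _ = self.gru(x)
        return torch.sigmoid(self.head(x))                # per-frame event probabilities [B, T, n_events]

model = CNNBiGRU()
probs = model(torch.randn(2, 1, 64, 938))                 # two ~15 s clips
```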
Current audio-visual separation methods share a standard architecture design where an audio encoder-decoder network is fused with visual encoding features at the encoder bottleneck. This design confounds the learning of multi-modal feature encoding with robust sound decoding for audio separation. To generalize to a new instrument, one must fine-tune the entire visual and audio network for all musical instruments. We re-formulate the visual-sound separation task and propose Instrument as Query (iQuery) with a flexible query expansion mechanism. Our approach ensures cross-modal consistency and cross-instrument disentanglement. We utilize "visually named" queries to initiate the learning of audio queries and use cross-modal attention to remove potential sound source interference at the estimated waveforms. To generalize to a new instrument or event class, drawing inspiration from the text-prompt design, we insert an additional query as an audio prompt while freezing the attention mechanism. Experimental results on three benchmarks demonstrate that our iQuery improves audio-visual sound source separation performance.
Recent years have seen progress beyond domain-specific sound separation for speech or music towards universal sound separation for arbitrary sounds. Prior work on universal sound separation has investigated separating a target sound out of an audio mixture given a text query. Such text-queried sound separation systems provide a natural and scalable interface for specifying arbitrary target sounds. However, supervised text-queried sound separation systems require costly labeled audio-text pairs for training. Moreover, the audio provided in existing datasets is often recorded in a controlled environment, causing a considerable generalization gap to noisy audio in the wild. In this work, we aim to approach text-queried universal sound separation by using only unlabeled data. We propose to leverage the visual modality as a bridge to learn the desired audio-textual correspondence. The proposed CLIPSep model first encodes the input query into a query vector using the contrastive language-image pretraining (CLIP) model, and the query vector is then used to condition an audio separation model to separate out the target sound. While the model is trained on image-audio pairs extracted from unlabeled videos, at test time we can instead query the model with text inputs in a zero-shot setting, thanks to the joint language-image embedding learned by the CLIP model. Further, videos in the wild often contain off-screen sounds and background noise that may hinder the model from learning the desired audio-textual correspondence. To address this problem, we further propose an approach called noise invariant training for training a query-based sound separation model on noisy data. Experimental results show that the proposed models successfully learn text-queried universal sound separation using only noisy unlabeled videos, even achieving competitive performance against a supervised model in some settings.
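The conditioning mechanism can be sketched as a query vector modulating a mask-based separator, with the query coming from CLIP's image tower during training and its text tower at zero-shot test time. The FiLM-style separator below is a placeholder and the CLIP embedding is assumed to be precomputed; this is not the released CLIPSep model.

```python
import torch
import torch.nn as nn

class QueryConditionedSeparator(nn.Module):
    """Mask-based separator modulated by a (CLIP) query vector via FiLM-style conditioning."""
    def __init__(self, freq_bins=513, clip_dim=512, hidden=256):
        super().__init__()
        self.film = nn.Linear(clip_dim, 2 * hidden)          # per-channel scale and shift
        self.in_proj = nn.Conv1d(freq_bins, hidden, 1)
        self.body = nn.Sequential(nn.Conv1d(hidden, hidden, 3, padding=1), nn.ReLU(),
                                  nn.Conv1d(hidden, hidden, 3, padding=1), nn.ReLU())
        self.out_proj = nn.Conv1d(hidden, freq_bins, 1)

    def forward(self, mag, query_emb):        # mag: [B, F, T], query_emb: [B, clip_dim]
        gamma, beta = self.film(query_emb).chunk(2, dim=-1)
        h = self.in_proj(mag)
        h = self.body(h * gamma[..., None] + beta[..., None])   # condition on the query
        mask = torch.sigmoid(self.out_proj(h))
        return mag * mask                                        # estimated target magnitude

# query_emb would come from CLIP: an image embedding during training,
# a text embedding (e.g. "the sound of a violin") at zero-shot test time.
model = QueryConditionedSeparator()
est = model(torch.rand(2, 513, 256), torch.randn(2, 512))
```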
Deep learning techniques for separating audio into different sound sources face several challenges. Standard architectures require training separate models for different types of audio sources. Although some universal separators employ a single model to target multiple sources, they have difficulty generalizing to unseen sources. In this paper, we propose a three-component pipeline for training a universal audio source separator from a large, but weakly labeled, dataset: AudioSet. First, we propose a transformer-based sound event detection system for processing weakly labeled training data. Second, we design a query-based audio separation model that leverages this data for model training. Third, we design a latent embedding processor to encode queries that specify the audio target for separation, allowing zero-shot generalization. Our approach uses a single model for source separation of multiple sound types and relies solely on weakly labeled training data. In addition, the proposed audio separator can be used in a zero-shot setting, learning to separate types of audio sources that were never seen in training. To evaluate separation performance, we test our model on a separate benchmark while training on the disjoint AudioSet. We further verify the zero-shot performance with another experiment on audio source types held out from training. The model achieves source-to-distortion ratio (SDR) performance comparable to current supervised models in both cases.
Audio-visual target speech extraction, which aims to extract a particular speaker's speech from a noisy mixture by looking at lip movements, has made significant progress by combining time-domain speech separation models with visual feature extractors (CNNs). One problem in fusing audio and video information is that they have different temporal resolutions. Most current studies upsample the visual features along the time dimension so that the audio and video features can be aligned in time. However, we argue that lip movements mainly contain long-term, or phone-level, information. Based on this assumption, we propose a new way of fusing audio-visual features. We observe that for DPRNN \cite{dprnn}, the temporal resolution of the inter-chunk dimension can be very close to that of the video frames. As in \cite{sepformer}, the LSTMs in DPRNN are replaced by intra-chunk and inter-chunk self-attention, but in the proposed algorithm the inter-chunk attention takes the visual features as an additional feature stream. This avoids upsampling of the visual cues, leading to more effective audio-visual fusion. Results show that we obtain superior results compared with other time-domain-based audio-visual fusion models.
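A rough sketch of the fusion step described above: audio features are chunked DPRNN-style, and the inter-chunk attention attends over a sequence that also contains the per-video-frame visual features, whose rate is assumed to match the chunk rate, so no upsampling of the visual stream is needed. Shapes and the audio/video alignment are simplifying assumptions.

```python
import torch
import torch.nn as nn

class InterChunkAVAttention(nn.Module):
    """Inter-chunk attention whose keys/values include an extra visual feature stream."""
    def __init__(self, d_audio=64, d_visual=512, heads=4):
        super().__init__()
        self.v_proj = nn.Linear(d_visual, d_audio)
        self.attn = nn.MultiheadAttention(d_audio, heads, batch_first=True)

    def forward(self, audio, visual):
        # audio: [B, S, K, C] (S chunks, K frames per chunk), visual: [B, S, Dv]
        b, s, k, c = audio.shape
        q = audio.permute(0, 2, 1, 3).reshape(b * k, s, c)       # inter-chunk sequences
        v = self.v_proj(visual)                                   # [B, S, C]
        v = v[:, None].expand(-1, k, -1, -1).reshape(b * k, s, c)
        kv = torch.cat([q, v], dim=1)                             # audio tokens + visual tokens
        out, _ = self.attn(q, kv, kv)
        return out.reshape(b, k, s, c).permute(0, 2, 1, 3)        # back to [B, S, K, C]

layer = InterChunkAVAttention()
audio_chunks = torch.randn(2, 50, 128, 64)   # ~50 chunks, roughly the video frame rate
video_feats = torch.randn(2, 50, 512)        # one lip embedding per video frame
fused = layer(audio_chunks, video_feats)
```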
Filler words such as "uh" or "um" are sounds or words people use to signal that they are pausing to think. Finding and removing filler words from recordings is a common and tedious task in media editing. Automatically detecting and classifying filler words could greatly aid this task, but to date there has been little research on the problem. A key reason is the absence of a dataset with annotated filler words for model training and evaluation. In this work, we present a novel speech dataset, PodcastFillers, with 35K annotated filler words and 50K annotations of other sounds that commonly occur in podcasts, such as breaths, laughter, and word repetitions. We propose a pipeline that leverages VAD and ASR to detect filler candidates and a classifier to distinguish between filler word types. We evaluate the proposed pipeline on PodcastFillers, compare it against several baselines, and present a detailed ablation study. In particular, we evaluate the importance of using ASR and how the pipeline compares with a transcription-free approach resembling keyword spotting. We show that our pipeline achieves state-of-the-art results and that leveraging ASR strongly outperforms the keyword-spotting approach. We release PodcastFillers publicly, in the hope that our work will serve as a benchmark for future research.
We present a single-stage, causal, waveform-to-waveform multi-channel model that can separate moving sound sources based on their broad spatial locations in a dynamic acoustic scene. We divide the scene into two spatial regions containing, respectively, the target and the interfering sound sources. The model is trained end-to-end and performs spatial processing implicitly, without any components based on traditional processing or the use of hand-crafted spatial features. We evaluate the proposed model on a real-world dataset and show that the model matches the performance of an oracle beamformer followed by a state-of-the-art single-channel enhancement network.
Given recent advances in music source separation and automatic mixing, removing audio effects from music tracks is a meaningful step toward developing an automated remixing system. This paper focuses on removing distortion audio effects applied to guitar tracks in music production. We explore whether effect removal can be addressed by neural networks designed for source separation and audio effect modeling. Our approach proves particularly effective for effects applied to a mix of processed and clean signals. The models achieve better quality and faster inference than a state-of-the-art solution based on sparse optimization. We demonstrate that these models are applicable not only to the distortion effect considered in training but also to other types of distortion effects. In discussing the results, we highlight the usefulness of multiple evaluation metrics for assessing different aspects of reconstruction in distortion-effect removal.
On-device directional hearing requires audio source separation from a given direction while achieving stringent, human-imperceptible latency requirements. While neural networks can achieve significantly better performance than traditional beamformers, all existing models fall short of supporting low-latency causal inference on computationally constrained wearables. We present a hybrid model that combines a traditional beamformer with a custom lightweight neural network. The former reduces the computational burden of the latter and also improves its generalizability, while the latter is designed to further reduce the memory and computational overhead to enable real-time and low-latency operation. Our evaluation shows performance comparable to state-of-the-art causal inference models on synthetic data, while achieving a 5x reduction in model size, a 4x reduction in computation per second, a 5x reduction in processing time, and better generalization to real hardware data. Furthermore, our real-time hybrid model runs in 8 ms on mobile CPUs designed for low-power wearable devices and achieves an end-to-end latency of 17.5 ms.
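The hybrid idea can be illustrated with a crude delay-and-sum beamformer toward the target direction followed by a small causal network that refines its single-channel output; the integer delays, circular shifting, and refiner below are illustrative stand-ins, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def delay_and_sum(mics, delays):
    """Crude delay-and-sum beamformer: shift each channel by an integer delay and average.
    mics: [M, T]; delays: M non-negative integer sample delays for the look direction."""
    shifted = [torch.roll(mics[m], shifts=int(d)) for m, d in enumerate(delays)]
    return torch.stack(shifted).mean(dim=0)              # [T]

class TinyRefiner(nn.Module):
    """Small causal convolutional network that cleans up the beamformer output."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, 5), nn.ReLU(),
            nn.Conv1d(channels, channels, 5), nn.ReLU(),
            nn.Conv1d(channels, 1, 1))

    def forward(self, x):                                 # x: [B, 1, T]
        return self.net(F.pad(x, (8, 0)))                 # left padding only keeps it causal

mics = torch.randn(4, 16000)                              # 4-channel, 1 s recording
delays = [0, 2, 4, 6]                                     # hypothetical per-mic delays
coarse = delay_and_sum(mics, delays)                      # traditional spatial filtering
refined = TinyRefiner()(coarse[None, None, :])            # lightweight neural refinement
```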
This paper introduces a unified source-filter network with a harmonic-plus-noise source excitation generation mechanism. In our previous work, we proposed unified source-filter GAN (uSFGAN) for developing a high-fidelity neural vocoder with flexible voice controllability using a unified source-filter neural network architecture. However, the capability of uSFGAN to model aperiodic source excitation signals is insufficient, and there remains a gap in sound quality between natural and generated speech. To improve source excitation modeling and the quality of the generated sound, a new source excitation generation network that separately generates periodic and aperiodic components is proposed. The advanced adversarial training procedure of HiFi-GAN is also adopted in place of the Parallel WaveGAN-based training used in the original uSFGAN. Both objective and subjective evaluation results show that the modified uSFGAN significantly improves the sound quality over the basic uSFGAN while maintaining voice controllability.
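A toy version of a harmonic-plus-noise excitation of the kind described above: a periodic component synthesized from the F0 contour plus a noise component for the aperiodic part, which a downstream filter network would then shape into speech. The plain sine, fixed noise scale, and hop size are simplifications.

```python
import math
import torch

def harmonic_plus_noise_excitation(f0, sr=24000, hop=120, noise_scale=0.1):
    """Build a simple excitation signal from a frame-level F0 contour.
    f0: [n_frames] fundamental frequency in Hz (0 for unvoiced frames)."""
    f0_up = f0.repeat_interleave(hop)                     # upsample F0 to the sample rate
    voiced = (f0_up > 0).float()
    phase = 2 * math.pi * torch.cumsum(f0_up / sr, dim=0)
    periodic = torch.sin(phase) * voiced                  # harmonic (periodic) component
    aperiodic = noise_scale * torch.randn_like(f0_up)     # noise (aperiodic) component
    return periodic + aperiodic                           # excitation fed to the filter network

f0 = torch.zeros(200)
f0[50:150] = 220.0                                        # a voiced segment at 220 Hz
excitation = harmonic_plus_noise_excitation(f0)
```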
Audio segmentation and sound event detection are key topics in machine listening that aim to detect acoustic classes and their respective boundaries. They are useful for audio analysis, speech recognition, audio indexing, and music information retrieval. In recent years, most research articles have adopted segmentation-by-classification. This technique divides audio into small frames and performs classification on these frames individually. In this paper, we present a novel approach called You Only Hear Once (YOHO), which is inspired by the YOLO algorithm popularly adopted in computer vision. We convert the detection of acoustic boundaries into a regression problem instead of frame-based classification. This is done by having separate output neurons to detect the presence of an audio class and predict its start and end points. Compared with state-of-the-art convolutional recurrent neural networks, YOHO achieves relative improvements in F-measure ranging from 1% to 6% across multiple datasets for audio segmentation and sound event detection. Because YOHO's output is more end-to-end and has fewer neurons to predict, inference is at least 6 times faster than segmentation-by-classification. In addition, since this approach directly predicts acoustic boundaries, post-processing and smoothing are about 7 times faster.
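The regression formulation can be sketched as an output head that, for each coarse time bin and each acoustic class, predicts a presence score together with normalized start and end offsets; the backbone, bin length, and decoding rule below are placeholder choices rather than the published YOHO network.

```python
import torch
import torch.nn as nn

class YOHOHead(nn.Module):
    """Per-bin, per-class outputs: [presence, start, end] (start/end relative to the bin)."""
    def __init__(self, in_ch=128, n_classes=3):
        super().__init__()
        self.n_classes = n_classes
        self.proj = nn.Conv1d(in_ch, 3 * n_classes, kernel_size=1)

    def forward(self, feats):                         # feats: [B, C, n_bins] from any backbone
        out = self.proj(feats)                        # [B, 3 * n_classes, n_bins]
        out = out.view(feats.size(0), self.n_classes, 3, -1)
        presence = torch.sigmoid(out[:, :, 0])        # probability the class occurs in the bin
        bounds = torch.sigmoid(out[:, :, 1:])         # start/end as fractions of the bin
        return presence, bounds

def decode(presence, bounds, bin_sec=0.3, threshold=0.5):
    """Turn regressed outputs into (class, onset, offset) events in seconds."""
    events = []
    for b, c, t in presence.gt(threshold).nonzero(as_tuple=False).tolist():
        start = (t + bounds[b, c, 0, t].item()) * bin_sec
        end = (t + bounds[b, c, 1, t].item()) * bin_sec
        events.append((c, round(start, 3), round(end, 3)))
    return events

head = YOHOHead()
presence, bounds = head(torch.randn(1, 128, 26))      # ~8 s of audio in 0.3 s bins
print(decode(presence, bounds))
```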