Binaural rendering of ambisonic signals is of broad interest to virtual reality and immersive media. Conventional methods often require manually measured Head-Related Transfer Functions (HRTFs). To address this issue, we collect a paired ambisonic-binaural dataset and propose an end-to-end deep learning framework. Experimental results show that neural networks outperform the conventional method on objective metrics and achieve comparable results on subjective metrics. To validate the proposed framework, we experimentally explore different settings of input features, model structures, output features, and loss functions. Our proposed system achieves an SDR of 7.32 and MOS scores of 3.83, 3.58, 3.87, and 3.58 on the quality, timbre, localization, and immersion dimensions, respectively.
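For reference, the abstract does not state which SDR variant is used; a minimal sketch of the plain (non-scale-invariant) definition in Python might look like this, where `sdr` is an illustrative helper, not the authors' evaluation code:

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Signal-to-distortion ratio in dB (plain, non-scale-invariant form)."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(noise ** 2) + 1e-12))

# Example: a slightly perturbed copy of a reference signal
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)
est = ref + 0.1 * rng.standard_normal(16000)
print(f"SDR: {sdr(ref, est):.2f} dB")
```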
For immersive applications, the generation of binaural audio that matches its visual counterpart is crucial to bringing meaningful experiences to people in a virtual environment. Recent work has shown the possibility of using neural networks to synthesize binaural audio from mono audio using 2D visual information as guidance. Extending this approach by guiding the audio with 3D visual information and operating in the waveform domain may allow a more accurate auralization of a virtual audio scene. In this paper, we present Points2Sound, a multi-modal deep learning model that generates a binaural version from mono audio using 3D point cloud scenes. Specifically, Points2Sound consists of a vision network with 3D sparse convolutions, which extracts visual features from the point cloud scene to condition an audio network operating in the waveform domain, which synthesizes the binaural version. Experimental results indicate that 3D visual information can successfully guide multi-modal deep learning models in the task of binaural synthesis. In addition, we investigate different loss functions and 3D point cloud attributes, showing that directly predicting the full binaural signal and using RGB-depth features increase the performance of our proposed model.
Convolution-augmented transformers (Conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a Conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes magnitude and complex spectrogram information using two-stage Conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude mask decoder branch to filter out unwanted distortions and a complex refinement branch to further improve the magnitude estimation and implicitly enhance the phase information. In addition, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations show that CMGAN achieves superior performance over state-of-the-art methods on three speech enhancement tasks (denoising, dereverberation and super-resolution). For example, quantitative denoising analysis on the Voice Bank+DEMAND dataset shows that CMGAN outperforms previous models by a margin, with a PESQ of 3.41 and an SSNR of 11.10 dB.
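The metric discriminator follows the MetricGAN idea of learning a differentiable surrogate for an evaluation score; a hedged sketch of such losses is below, where `disc` is a hypothetical network predicting a [0, 1]-normalized quality score and the exact CMGAN formulation may differ:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(disc, clean_mag, enhanced_mag, true_score):
    # D should reproduce the real (normalized) metric on enhanced speech ...
    pred_enhanced = disc(clean_mag, enhanced_mag.detach())
    # ... and assign the maximum score to clean speech.
    pred_clean = disc(clean_mag, clean_mag)
    return F.mse_loss(pred_enhanced, true_score) + F.mse_loss(
        pred_clean, torch.ones_like(pred_clean))

def generator_metric_loss(disc, clean_mag, enhanced_mag):
    # The generator is rewarded when D rates its output as perfect.
    pred = disc(clean_mag, enhanced_mag)
    return F.mse_loss(pred, torch.ones_like(pred))
```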
Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio from the real world, synthesizing it from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear related filtrations, which are difficult to accurately simulate in traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that is shared by the left and right channels and a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experimental results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both objective and subjective evaluation metrics (Wave L2: 0.128 vs. 0.157, MOS: 3.80 vs. 3.61). The generated audio samples (https://speechresearch.github.io/binauralgrad) and code (https://github.com/microsoft/NeuralSpeech/tree/master/BinauralGrad) are available online.
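As a rough illustration of the common/specific decomposition (assuming, for the sketch, that the common part is simply the channel average; the paper's exact construction may differ):

```python
import numpy as np

def decompose(binaural: np.ndarray):
    """Split a (2, T) binaural signal into a shared part and per-channel residuals."""
    common = binaural.mean(axis=0, keepdims=True)   # (1, T) channel average
    specific = binaural - common                    # (2, T) per-channel residuals
    return common, specific

def recompose(common: np.ndarray, specific: np.ndarray) -> np.ndarray:
    return common + specific                        # exact inverse of decompose

binaural = np.random.randn(2, 48000)
common, specific = decompose(binaural)
assert np.allclose(recompose(common, specific), binaural)
```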
Binaural audio gives the listener an immersive experience and can enhance augmented and virtual reality. However, recording binaural audio requires a specialized setup with a dummy human head having microphones in the left and right ears. Such a recording setup is difficult to build, so mono audio has become the preferred choice in common devices. To obtain the same impact as binaural audio, recent efforts have been directed towards lifting mono audio to binaural audio conditioned on the visual input from the scene. Such approaches have not used an important cue for the task: the distance of different sound-producing objects from the microphone. In this work, we argue that a depth map of the scene can act as a proxy for inducing distance information of different objects in the scene, for the task of audio binauralization. We propose a novel encoder-decoder architecture with a hierarchical attention mechanism to encode image, depth and audio features jointly. We design the network on top of state-of-the-art transformer networks for image and depth representation. We show empirically that the proposed method comfortably outperforms state-of-the-art methods on two challenging public datasets, FAIR-Play and MUSIC-Stereo. We also demonstrate with qualitative results that the method is able to focus on the right information required for the task. Project details are available at \url{https://krantiparida.github.io/projects/bomobinaural.html}
On-device directional hearing requires audio source separation from a given direction while meeting stringent human-imperceptible latency requirements. While neural networks can achieve significantly better performance than traditional beamformers, all existing models fall short of supporting low-latency causal inference on computationally constrained wearables. We present a hybrid model that combines a traditional beamformer with a custom lightweight neural network. The former reduces the computational burden of the latter and also improves its generalizability, while the latter is designed to further reduce the memory and computational overhead to enable real-time and low-latency operation. Our evaluation shows performance comparable to state-of-the-art causal inference models on synthetic data, while achieving a 5x reduction in model size, a 4x reduction in computation per second, a 5x reduction in processing time, and better generalization to real hardware data. Furthermore, our real-time hybrid model runs in 8 ms on a mobile CPU designed for low-power wearable devices and achieves an end-to-end latency of 17.5 ms.
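A traditional beamformer of the kind such hybrids build on can be as simple as delay-and-sum; the following sketch (integer-sample delays only, hypothetical array geometry) illustrates the idea rather than the paper's actual front end:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mics: np.ndarray, mic_positions: np.ndarray,
                  azimuth_rad: float, fs: int) -> np.ndarray:
    """Steer a planar array toward `azimuth_rad` and average the aligned channels.

    mics: (n_mics, T) channel signals; mic_positions: (n_mics, 2) in meters.
    """
    direction = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    delays = mic_positions @ direction / SPEED_OF_SOUND  # relative delays, s
    delays -= delays.min()                               # make all delays >= 0
    shifts = np.round(delays * fs).astype(int)           # integer samples only
    out = np.zeros(mics.shape[1])
    for ch, shift in zip(mics, shifts):
        out += np.roll(ch, shift)  # np.roll wraps at the edges; fine for a sketch
    return out / mics.shape[0]
```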
Sound event detection (SED) and acoustic scene classification (ASC) are two widely researched audio tasks that constitute an important part of research on acoustic scene analysis. Considering the shared information between sound events and acoustic scenes, performing both tasks jointly is a natural part of a complex machine listening system. In this paper, we investigate the usefulness of several spatial audio features in training a joint deep neural network (DNN) model performing both SED and ASC. Experiments are performed on two different datasets containing binaural recordings and synchronous sound event and acoustic scene labels, to analyse the differences between performing SED and ASC separately or jointly. The presented results show that using specific binaural features, mainly the generalized cross-correlation with phase transform (GCC-PHAT) and the sines and cosines of phase differences, results in better-performing models in both the separate and joint tasks, compared with baseline methods based on log-mel energies only.
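For context, GCC-PHAT for one analysis frame can be computed as below; this is the standard textbook formulation, not necessarily the authors' exact feature pipeline:

```python
import numpy as np

def gcc_phat(x: np.ndarray, y: np.ndarray, n_fft: int = 1024) -> np.ndarray:
    """Generalized cross-correlation with phase transform for one frame.

    The peak position gives the inter-channel time delay, and the sequence
    itself can serve as a DNN input feature.
    """
    X = np.fft.rfft(x, n=n_fft)
    Y = np.fft.rfft(y, n=n_fft)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12   # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n_fft)
    return np.fft.fftshift(cc)       # zero lag in the middle
```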
With recent research advancements, deep learning models have become an attractive choice for acoustic echo cancellation (AEC) in real-time telecommunication applications. Since acoustic echo is one of the major sources of poor audio quality, a wide variety of deep models has been proposed. However, an important but often overlooked requirement for good echo cancellation quality is the synchronization of the microphone and far-end signals. Typically implemented with classical cross-correlation-based algorithms, the alignment module is a separate functional block with known design limitations. In our work, we propose a deep learning architecture with built-in self-attention-based alignment, which is able to handle unaligned inputs, improving echo cancellation performance while simplifying the communication pipeline. Moreover, we show that our approach achieves significant improvements on difficult delay estimation cases from real recordings in the AEC Challenge dataset.
In this paper, we present the blind speech separation and dereverberation (BSSD) network, which performs simultaneous speaker separation, dereverberation and speaker identification in a single neural network. Speaker separation is guided by a set of predefined spatial cues. Dereverberation is performed using neural beamforming, and speaker identification is aided by embedding vectors and triplet mining. We introduce a frequency-domain model that uses complex-valued neural networks, as well as a time-domain variant that performs beamforming in latent space. Further, we propose a block-online mode to process longer recordings, as they occur in meeting scenarios. We evaluate our system in terms of scale-independent signal-to-distortion ratio (SI-SDR), word error rate (WER) and equal error rate (EER).
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
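A minimal PyTorch skeleton of the encoder-mask-decoder structure described above might look like the following; the real masker is a deep TCN of stacked dilated convolutional blocks, which this sketch abbreviates to a stand-in:

```python
import torch
import torch.nn as nn

class TasNetSkeleton(nn.Module):
    """Minimal encoder-masker-decoder sketch of the Conv-TasNet idea."""

    def __init__(self, n_filters=512, kernel=16, n_speakers=2):
        super().__init__()
        self.n_speakers = n_speakers
        stride = kernel // 2
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        # Stand-in for the TCN of stacked dilated 1-D conv blocks:
        self.masker = nn.Sequential(
            nn.Conv1d(n_filters, n_filters, 3, padding=2, dilation=2),
            nn.PReLU(),
            nn.Conv1d(n_filters, n_filters * n_speakers, 1),
            nn.Sigmoid(),
        )
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel,
                                          stride=stride, bias=False)

    def forward(self, mix):                        # mix: (batch, 1, time)
        w = self.encoder(mix)                      # (batch, F, frames)
        masks = self.masker(w)                     # (batch, F * spk, frames)
        masks = masks.view(mix.size(0), self.n_speakers, w.size(1), -1)
        sources = [self.decoder(masks[:, s] * w)   # mask, then invert to waveform
                   for s in range(self.n_speakers)]
        return torch.cat(sources, dim=1)           # (batch, spk, time)
```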
Spatial audio methods are gaining growing interest due to the spread of immersive audio experiences and applications, such as virtual and augmented reality. For these purposes, 3D audio signals are often acquired through arrays of Ambisonics microphones, each comprising four capsules that decompose the sound field into spherical harmonics. In this paper, we propose a dual quaternion representation of the spatial sound field acquired through an array of two First Order Ambisonics (FOA) microphones. The audio signals are encapsulated in a dual quaternion that leverages quaternion algebra properties to exploit correlations among them. This augmented representation with 6 degrees of freedom (6DOF) involves a more accurate coverage of the sound field, resulting in more precise sound localization and a more immersive audio experience. We evaluate our approach on a sound event localization and detection (SELD) benchmark. We show that our dual quaternion SELD model with temporal convolution blocks (DualQSELD-TCN) achieves better results with respect to real- and quaternion-valued baselines thanks to our augmented representation of the sound field. Full code is available at: https://github.com/ispamm/DualQSELD-TCN.
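Quaternion and dual quaternion algebra over multichannel signals can be sketched with plain NumPy as below; the mapping of FOA channels (W, X, Y, Z) onto quaternion components is an assumption for illustration, not necessarily the paper's exact encapsulation:

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of two quaternion signals, each shaped (4, T)
    with components ordered (w, x, y, z)."""
    qw, qx, qy, qz = q
    pw, px, py, pz = p
    return np.stack([
        qw * pw - qx * px - qy * py - qz * pz,
        qw * px + qx * pw + qy * pz - qz * py,
        qw * py - qx * pz + qy * pw + qz * px,
        qw * pz + qx * py - qy * px + qz * pw,
    ])

def dual_hamilton(q1, q2, p1, p2):
    """(q1 + eps*q2)(p1 + eps*p2) = q1*p1 + eps*(q1*p2 + q2*p1), with eps^2 = 0."""
    return hamilton(q1, p1), hamilton(q1, p2) + hamilton(q2, p1)

# Each FOA microphone yields 4 channels (W, X, Y, Z) mapped onto one quaternion;
# two microphones form the primal and dual parts of a dual quaternion.
foa_a = np.random.randn(4, 48000)   # first FOA microphone
foa_b = np.random.randn(4, 48000)   # second FOA microphone
primal, dual = foa_a, foa_b
```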
In this work, we present CleanUNet, a causal speech denoising model operating on the raw waveform. The proposed model is based on an encoder-decoder architecture combined with several self-attention blocks that refine its bottleneck representations, which is crucial to obtaining good results. The model is optimized through a set of losses defined over both the waveform and multi-resolution spectrograms. The proposed method outperforms state-of-the-art models in terms of denoised speech quality across various objective and subjective evaluation metrics.
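A hedged sketch of a combined waveform plus multi-resolution spectrogram loss follows; the resolutions and weighting are illustrative assumptions, not CleanUNet's published hyperparameters:

```python
import torch

def multi_resolution_stft_loss(estimate, target,
                               resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Sum of L1 log-magnitude distances over several STFT resolutions,
    plus the waveform-domain L1 term mentioned in the abstract."""
    loss = 0.0
    for n_fft, hop in resolutions:
        window = torch.hann_window(n_fft, device=estimate.device)
        spec_est = torch.stft(estimate, n_fft, hop_length=hop,
                              window=window, return_complex=True).abs()
        spec_tgt = torch.stft(target, n_fft, hop_length=hop,
                              window=window, return_complex=True).abs()
        loss = loss + torch.mean(torch.abs(torch.log(spec_est + 1e-5)
                                           - torch.log(spec_tgt + 1e-5)))
    loss = loss + torch.mean(torch.abs(estimate - target))  # waveform term
    return loss
```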
We present a method for denoising historical music recordings. Our model converts its input into a time-frequency representation via the short-time Fourier transform (STFT) and processes the resulting complex spectrogram with a convolutional neural network. The network is trained with both reconstruction and adversarial objectives on a synthetic noisy music dataset, created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples from the synthetic dataset, and qualitatively via human ratings on samples of actual historical recordings. Our results show that the proposed method is effective at removing noise while preserving the quality and details of the original music.
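Mixing clean music with extracted noise at a target SNR, as in the dataset creation described above, can be sketched as follows (the paper's actual SNR range and procedure are not specified in the abstract):

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix clean music with a (looped) noise sample at a target SNR in dB."""
    if len(noise) < len(clean):                    # tile short noise excerpts
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[:len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Solve 10*log10(clean_power / (gain^2 * noise_power)) = snr_db for gain:
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise
```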
Recently, the convolution-augmented transformer (Conformer) has achieved promising performance in automatic speech recognition (ASR) and time-domain speech enhancement (SE), as it can capture both local and global dependencies in the speech signal. In this paper, we propose a Conformer-based metric generative adversarial network (CMGAN) for SE in the time-frequency (TF) domain. In the generator, we utilize two-stage Conformer blocks to aggregate all magnitude and complex spectrogram information by modeling both time and frequency dependencies. The estimation of the magnitude and the complex spectrogram is decoupled in the decoder stage and then jointly combined to reconstruct the enhanced speech. In addition, a metric discriminator is employed to further improve the quality of the enhanced speech by optimizing the corresponding evaluation score. Quantitative analysis on the Voice Bank+DEMAND dataset indicates that CMGAN outperforms various previous models, with a PESQ of 3.41 and an SSNR of 11.10 dB.
The marine ecosystem is changing at an alarming rate, exhibiting biodiversity loss and the migration of tropical species to temperate basins. Monitoring underwater environments and their inhabitants is of fundamental importance to understand the evolution of these systems and implement safeguard policies. However, assessing and tracking biodiversity is often a complex task, especially in large and uncontrolled environments, such as the oceans. One of the most popular and effective methods for monitoring marine biodiversity is passive acoustic monitoring (PAM), which employs hydrophones to capture underwater sound. Many aquatic animals produce sounds characteristic of their own species; these signals travel efficiently underwater and can be detected even at great distances. Furthermore, modern technologies are becoming more and more convenient and precise, allowing for very accurate and careful data acquisition. To date, audio captured with PAM devices is frequently processed manually by marine biologists and interpreted with traditional signal processing techniques for the detection of animal vocalizations. This is a challenging task, as PAM recordings often span long periods of time. Moreover, one of the causes of biodiversity loss is sound pollution; in data obtained from regions with loud anthropic noise, it is hard to separate the artificial sounds from the fish sounds manually. Nowadays, machine learning and, in particular, deep learning represent the state of the art for processing audio signals. Specifically, sound separation networks are able to identify and separate human voices and musical instruments. In this work, we show that the same techniques can be successfully used to automatically extract fish vocalizations in PAM recordings, opening up the possibility of biodiversity monitoring at a large scale.
Given the recent advances in music source separation and automatic mixing, removing audio effects from music tracks is a meaningful step toward developing an automated mixing system. This paper focuses on removing distortion audio effects applied to guitar tracks in music production. We explore whether effect removal can be solved by neural networks designed for source separation and audio effect modeling. Our approach proves particularly effective for effects that mix the processed and clean signals together. The models achieve better quality and significantly faster inference compared with a state-of-the-art solution based on sparse optimization. We demonstrate that the models are suitable not only for clipping but also for other types of distortion effects. In discussing the results, we stress the usefulness of multiple evaluation metrics for assessing different aspects of reconstruction in distortion effect removal.
Music mixing traditionally involves recording instruments in the form of clean, individual tracks and blending them into a final mixture using audio effects and expert knowledge (e.g., a mixing engineer). In recent years, the automation of music production tasks has become an emerging field, in which rule-based methods and machine learning approaches have been explored. Nevertheless, the lack of dry or clean instrument recordings limits the performance of such models, which remain far from professional human-made mixes. We explore whether out-of-domain data, such as wet or processed multitrack music recordings, can be repurposed to train supervised deep learning models that bridge the current gap in automatic mixing quality. To achieve this, we propose a novel data preprocessing method that allows the models to perform automatic music mixing. We also redesign a listening test method for evaluating music mixing systems. We validate our results through such subjective tests, using highly experienced mixing engineers as participants.
Curriculum learning has begun to thrive in the speech enhancement area, decoupling the original spectrum estimation task into multiple easier sub-tasks to achieve better performance. Motivated by this, we propose a dual-branch attention-in-attention transformer, dubbed DB-AIAT, to handle both coarse- and fine-grained regions of the spectrum in parallel. From a complementary perspective, a magnitude masking branch is proposed to coarsely estimate the overall magnitude spectrum, while a complex refining branch is elaborately designed to compensate for the missing spectral details and implicitly derive phase information. Within each branch, we propose a novel attention-in-attention transformer module to replace the conventional RNNs and temporal convolutional networks used for temporal sequence modeling. Specifically, the proposed attention-in-attention transformer consists of adaptive temporal-frequency attention transformer blocks and an adaptive hierarchical attention module, aiming to capture long-term temporal-frequency dependencies and further aggregate global hierarchical contextual information. Experimental results on Voice Bank+DEMAND demonstrate that DB-AIAT yields state-of-the-art performance over previous advanced systems (e.g., 3.31 PESQ, 95.6% STOI and 10.79 dB SSNR) with a relatively small model size (2.81 M).
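The recombination of the two branches can be sketched as below; this shows the general masking-plus-complex-residual idea rather than DB-AIAT's exact formulation:

```python
import torch

def combine_branches(noisy_complex, mag_mask, complex_refinement):
    """Recombine two dual-branch outputs (a sketch of the idea).

    noisy_complex:      complex STFT of the noisy input, (batch, F, T)
    mag_mask:           real-valued mask from the magnitude branch
    complex_refinement: complex residual from the refining branch
    """
    coarse = mag_mask * noisy_complex.abs()        # coarsely masked magnitude
    phase = torch.angle(noisy_complex)             # reuse the noisy phase
    coarse_complex = torch.polar(coarse, phase)    # back to a complex spectrum
    return coarse_complex + complex_refinement     # add the fine spectral details
```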
Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision. They have achieved great success in these domains in tasks such as machine translation and image generation. Due to their success, these data-driven techniques have been applied in the audio domain. More specifically, DNN models have been applied in the speech enhancement domain to achieve denoising, dereverberation and multi-speaker separation in monaural speech enhancement. In this paper, we review some dominant DNN techniques employed to achieve speech separation. The review covers the whole pipeline of speech enhancement, from feature extraction, to how DNN-based tools model both global and local features of speech, to model training (supervised and unsupervised). We also review the use of speech-enhancement pre-trained models to boost the speech enhancement process. The review focuses on the dominant trends in the application of DNNs to the enhancement of single-speaker speech.
In this paper, we present a new model for Direction of Arrival (DOA) estimation of sound sources based on an Icosahedral Convolutional Neural Network (CNN) applied over SRP-PHAT power maps computed from the signals received by a microphone array. This icosahedral CNN is equivariant to the 60 rotational symmetries of the icosahedron, which represent a good approximation of the continuous space of spherical rotations, and can be implemented using standard 2D convolutional layers, having a lower computational cost than most spherical CNNs. In addition, instead of using fully connected layers after the icosahedral convolutions, we propose a new soft-argmax function that can be seen as a differentiable version of the argmax function and allows us to solve DOA estimation as a regression problem, interpreting the output of the convolutional layers as a probability distribution. We prove that using models that fit the equivariances of the problem allows us to outperform other state-of-the-art models with a lower computational cost and more robustness, obtaining root mean square localization errors lower than 10{\deg} even in scenarios with a reverberation time $T_{60}$ of 1.5 s.
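A soft-argmax of the kind described, i.e., a softmax-weighted average of cell coordinates, can be sketched as follows (the temperature `beta` and the unit-sphere renormalization are illustrative choices, not necessarily the paper's):

```python
import torch

def soft_argmax(scores: torch.Tensor, coords: torch.Tensor,
                beta: float = 10.0) -> torch.Tensor:
    """Differentiable argmax: softmax-weighted average of cell coordinates.

    scores: (batch, n_cells) network outputs over (e.g.) icosahedral cells
    coords: (n_cells, 3) unit direction vector of each cell
    Returns a (batch, 3) DOA estimate; beta sharpens the distribution.
    """
    weights = torch.softmax(beta * scores, dim=-1)   # (batch, n_cells)
    doa = weights @ coords                           # expected direction vector
    return doa / doa.norm(dim=-1, keepdim=True)      # project back to unit sphere
```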