Spatial audio methods are attracting growing interest due to the spread of immersive audio experiences and applications, such as virtual and augmented reality. For these purposes, 3D audio signals are often acquired through arrays of Ambisonics microphones, each comprising four capsules that decompose the sound field into spherical harmonics. In this paper, we propose a dual quaternion representation of the spatial sound field acquired through an array of two First Order Ambisonics (FOA) microphones. The audio signals are encapsulated in a dual quaternion that leverages quaternion algebra properties to exploit correlations among them. This augmented representation with six degrees of freedom (6DOF) provides more accurate coverage of the sound field, resulting in more precise sound localization and a more immersive audio experience. We evaluate our approach on a sound event localization and detection (SELD) benchmark. We show that our dual quaternion SELD model with temporal convolution blocks (DualQSELD-TCN) achieves better results than real- and quaternion-valued baselines thanks to our augmented representation of the sound field. Full code is available at: https://github.com/ispamm/DualQSELD-TCN.
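As a rough illustration of the representation described in this abstract, the sketch below (not the authors' implementation) packs the four channels of each FOA microphone into a quaternion signal, pairs the two microphones as the real and dual parts of a dual quaternion, and applies the dual-quaternion product; the channel ordering (W, X, Y, Z), array shapes, and helper names are assumptions made for illustration.

```python
# Hedged sketch: packing two FOA recordings into a dual-quaternion signal.
import numpy as np

def hamilton(q1, q2):
    """Element-wise Hamilton product of two quaternion signals of shape (4, T)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.stack([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def dual_quaternion(foa_mic1, foa_mic2):
    """Pack two FOA recordings (each (4, T), channels W, X, Y, Z) as
    q = q_r + eps * q_d, stored as a (2, 4, T) array (real part, dual part)."""
    return np.stack([foa_mic1, foa_mic2])

def dq_product(a, b):
    """Dual-quaternion product: (A + eps*B)(C + eps*D) = AC + eps*(AD + BC)."""
    real = hamilton(a[0], b[0])
    dual = hamilton(a[0], b[1]) + hamilton(a[1], b[0])
    return np.stack([real, dual])

# Example: two 4-channel FOA recordings, 1 second at 16 kHz (random placeholders).
rng = np.random.default_rng(0)
mic1, mic2 = rng.standard_normal((2, 4, 16000))
q = dual_quaternion(mic1, mic2)     # the eight channels of the two-microphone array
print(dq_product(q, q).shape)       # (2, 4, 16000)
```

The product rule $(A + \varepsilon B)(C + \varepsilon D) = AC + \varepsilon(AD + BC)$, with $\varepsilon^2 = 0$, is what lets a dual-quaternion layer mix the two microphones' signals while preserving the quaternion structure of each.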
Hypercomplex neural networks have been shown to reduce the overall number of parameters while ensuring valuable performance by leveraging the properties of Clifford algebras. Recently, hypercomplex linear layers have been further improved by involving efficient parameterized Kronecker products. In this paper, we define the parameterization of hypercomplex convolutional layers and introduce the family of parameterized hypercomplex neural networks (PHNNs), which are lightweight and efficient large-scale models. Our method grasps the convolution rules and the filter organization directly from data, without requiring a rigidly predefined domain structure to follow. PHNNs are flexible to operate in any user-defined or tuned domain, from 1D to $n$D, regardless of whether the algebra rules are preset. Such malleability allows processing multidimensional inputs in their natural domain without annexing further dimensions, as is instead done in quaternion neural networks for 3D inputs such as color images. As a result, the proposed family of PHNNs operates with $1/n$ of the free parameters of its analog in the real domain. We demonstrate the versatility of this approach across multiple application domains by performing experiments on various image and audio datasets, in which our method outperforms real- and quaternion-valued counterparts. Full code is available at: https://github.com/elegan23/hypernets.
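To make the parameterization concrete, here is a hedged sketch of a hypercomplex convolution whose weight tensor is assembled as a sum of Kronecker products between small learned "algebra" matrices and filter blocks, so that the filter bank uses roughly $1/n$ of the parameters of its real-valued counterpart. The class name, initialization, and hyperparameters are illustrative assumptions, not the released code.

```python
# Hedged sketch of a parameterized hypercomplex convolution (PHC-style) layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PHConv2d(nn.Module):
    """Convolution whose weight is a sum of Kronecker products A_i (x) F_i."""

    def __init__(self, n, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        assert in_ch % n == 0 and out_ch % n == 0
        self.n, self.padding = n, padding
        # Learned algebra matrices, one n x n matrix per hypercomplex component.
        self.A = nn.Parameter(0.1 * torch.randn(n, n, n))
        # Learned filter blocks, each 1/n of the full filter bank.
        self.filters = nn.Parameter(
            0.02 * torch.randn(n, out_ch // n, in_ch // n, kernel_size, kernel_size)
        )

    def forward(self, x):
        n = self.n
        o, c, kh, kw = self.filters.shape[1:]
        # Block-Kronecker structure along the channel dimensions, summed over i.
        weight = (self.A.view(n, n, 1, n, 1, 1, 1) *
                  self.filters.view(n, 1, o, 1, c, kh, kw)).sum(0)
        weight = weight.reshape(n * o, n * c, kh, kw)
        return F.conv2d(x, weight, padding=self.padding)

layer = PHConv2d(n=4, in_ch=4, out_ch=16, kernel_size=3, padding=1)
x = torch.randn(1, 4, 32, 32)                      # e.g. a color image padded to 4 channels
print(layer(x).shape)                              # torch.Size([1, 16, 32, 32])
print(sum(p.numel() for p in layer.parameters()))  # 208, vs. 576 for a real conv of the same size
```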
Sound event detection (SED) and acoustic scene classification (ASC) are two widely researched audio tasks that constitute an important part of research on acoustic scene analysis. Considering the shared information between sound events and acoustic scenes, performing both tasks jointly is a natural part of a complex machine listening system. In this paper, we investigate the usefulness of several spatial audio features in training a joint deep neural network (DNN) model performing both SED and ASC. Experiments are performed on two different datasets containing binaural recordings and synchronized sound event and acoustic scene labels to analyse the differences between performing SED and ASC separately or jointly. The presented results show that the use of specific binaural features, mainly the generalized cross-correlation with phase transform (GCC-PHAT) and the sines and cosines of phase differences, leads to better-performing models in both the separate and the joint tasks, compared with baseline methods based only on log-mel energies.
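As an example of the main binaural feature named above, a minimal GCC-PHAT computation for one frame of a left/right channel pair might look as follows; the frame length, FFT size, and maximum lag are illustrative choices rather than the paper's settings.

```python
# Hedged sketch: GCC-PHAT for a single frame of a binaural channel pair.
import numpy as np

def gcc_phat(frame_l, frame_r, n_fft=1024, max_lag=32):
    """Generalized cross-correlation with phase transform for one frame.

    Returns 2*max_lag+1 correlation values centred on zero lag; stacking these
    over frames yields a time-lag feature map that can be fed to a DNN.
    """
    X = np.fft.rfft(frame_l, n=n_fft)
    Y = np.fft.rfft(frame_r, n=n_fft)
    cross = X * np.conj(Y)
    cross /= np.abs(cross) + 1e-12                            # phase transform: keep phase only
    cc = np.fft.irfft(cross, n=n_fft)
    return np.concatenate([cc[-max_lag:], cc[:max_lag + 1]])  # centre the zero lag

# Example with a 3-sample inter-channel delay.
sig = np.random.randn(1024)
left, right = sig, np.roll(sig, 3)
print(np.argmax(gcc_phat(left, right)) - 32)   # prints -3: the right channel lags by 3 samples
```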
Traditionally, deep learning methods for breast cancer classification perform a single-view analysis. However, radiologists simultaneously analyze all four views that compose a mammography exam, owing to the correlations contained in mammography views, which present crucial information for identifying tumors. In light of this, some studies have started to propose multi-view methods. Nevertheless, in such existing architectures, mammogram views are processed as independent images by separate convolutional branches, thus losing correlations among them. To overcome such limitations, in this paper we propose a novel approach for multi-view breast cancer classification based on parameterized hypercomplex neural networks. Thanks to hypercomplex algebra properties, our networks are able to model, and thus leverage, existing correlations between the different views that comprise a mammogram, thus mimicking the reading process performed by clinicians. The proposed methods are able to handle the information of a patient altogether without breaking the multi-view nature of the exam. We define architectures designed to process two-view exams, namely PHResNets, and four-view exams, i.e., PHYSEnet and PHYBOnet. Through an extensive experimental evaluation conducted with publicly available datasets, we demonstrate that our proposed models clearly outperform real-valued counterparts and also state-of-the-art methods, proving that breast cancer classification benefits from the proposed multi-view architectures. We also assess the method's robustness beyond mammogram analysis by considering different benchmarks, as well as a finer-scaled task such as segmentation. Full code and pretrained models for complete reproducibility of our experiments are freely available at: https://github.com/ispamm/PHBreast.
In this paper, we present a new model for Direction of Arrival (DOA) estimation of sound sources based on an Icosahedral Convolutional Neural Network (CNN) applied over SRP-PHAT power maps computed from the signals received by a microphone array. This icosahedral CNN is equivariant to the 60 rotational symmetries of the icosahedron, which represent a good approximation of the continuous space of spherical rotations, and can be implemented using standard 2D convolutional layers, having a lower computational cost than most of the spherical CNNs. In addition, instead of using fully connected layers after the icosahedral convolutions, we propose a new soft-argmax function that can be seen as a differentiable version of the argmax function and allows us to solve the DOA estimation as a regression problem interpreting the output of the convolutional layers as a probability distribution. We prove that using models that fit the equivariances of the problem allows us to outperform other state-of-the-art models with a lower computational cost and more robustness, obtaining root mean square localization errors lower than 10{\deg} even in scenarios with a reverberation time $T_{60}$ of 1.5 s.
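A minimal sketch of the soft-argmax idea, under the assumption that the convolutional output is a vector of per-direction scores over a spherical grid with known direction vectors: the scores are softmax-normalized into a probability distribution, and the DOA estimate is the probability-weighted average of the grid directions, which keeps the whole readout differentiable. Grid size and temperature are illustrative.

```python
# Hedged sketch of a differentiable soft-argmax readout for DOA regression.
import torch

def soft_argmax_doa(scores, grid_dirs, temperature=1.0):
    """scores: (batch, n_points) per-direction scores from the conv layers.
    grid_dirs: (n_points, 3) unit vectors of the spherical grid points.
    Returns (batch, 3) unit vectors: the expected direction of arrival."""
    probs = torch.softmax(scores / temperature, dim=-1)        # probability distribution
    doa = probs @ grid_dirs                                    # expectation over grid directions
    return torch.nn.functional.normalize(doa, dim=-1)          # project back to the unit sphere

# Example with a random 12-point grid (e.g., icosahedron vertices).
grid = torch.nn.functional.normalize(torch.randn(12, 3), dim=-1)
scores = torch.randn(4, 12)
print(soft_argmax_doa(scores, grid).shape)   # torch.Size([4, 3])
```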
In this paper, we present the Blind Speech Separation and Dereverberation (BSSD) network, which performs simultaneous speaker separation, dereverberation, and speaker identification in a single neural network. Speaker separation is guided by a set of predefined spatial cues. Dereverberation is performed using neural beamforming, and speaker identification is aided by embedding vectors and triplet mining. We introduce a frequency-domain model that uses complex-valued neural networks, as well as a time-domain variant that performs beamforming in latent space. Further, we propose a block-online mode to process longer recordings, as they occur in meeting scenarios. We evaluate our system in terms of scale-independent signal-to-distortion ratio (SI-SDR), word error rate (WER), and equal error rate (EER).
Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully-convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network (TCN) consisting of stacked 1-D dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time-frequency masking methods in separating two- and three-speaker mixtures. Additionally, Conv-TasNet surpasses several ideal time-frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study therefore represents a major step toward the realization of speech separation systems for real-world speech processing technologies.
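The encoder-mask-decoder structure described above can be summarized in a heavily reduced sketch (illustrative layer sizes, not the reference Conv-TasNet implementation): a learned 1-D convolutional encoder, a small stack of dilated 1-D convolutional blocks standing in for the full TCN, per-speaker masks applied to the encoder output, and a transposed-convolution decoder.

```python
# Hedged, much-reduced sketch of the Conv-TasNet-style encoder/mask/decoder pipeline.
import torch
import torch.nn as nn

class TinyConvTasNet(nn.Module):
    def __init__(self, n_src=2, n_filters=128, kernel=16, stride=8, n_blocks=4):
        super().__init__()
        self.n_src, self.n_filters = n_src, n_filters
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride, bias=False)
        # Dilated 1-D conv blocks (a small stand-in for the full stacked TCN).
        self.tcn = nn.Sequential(*[
            nn.Sequential(
                nn.Conv1d(n_filters, n_filters, 3, padding=2 ** b, dilation=2 ** b),
                nn.PReLU(),
            )
            for b in range(n_blocks)
        ])
        self.mask_head = nn.Conv1d(n_filters, n_src * n_filters, 1)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride, bias=False)

    def forward(self, mix):                                     # mix: (batch, samples)
        w = torch.relu(self.encoder(mix.unsqueeze(1)))          # (B, F, T')
        masks = torch.sigmoid(self.mask_head(self.tcn(w)))      # (B, n_src*F, T')
        masks = masks.view(-1, self.n_src, self.n_filters, w.shape[-1])
        est = masks * w.unsqueeze(1)                            # masked encoder outputs
        est = est.reshape(-1, self.n_filters, w.shape[-1])
        return self.decoder(est).view(mix.shape[0], self.n_src, -1)

print(TinyConvTasNet()(torch.randn(2, 16000)).shape)   # torch.Size([2, 2, 16000])
```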
On-device directional hearing requires audio source separation from a given direction while meeting stringent human-imperceptible latency requirements. While neural networks can achieve significantly better performance than traditional beamformers, all existing models fall short of supporting low-latency causal inference on computationally constrained wearables. We present a hybrid model that combines a traditional beamformer with a custom lightweight neural network. The former reduces the computational burden of the latter and also improves its generalizability, while the latter is designed to further reduce the memory and computational overhead to enable real-time and low-latency operation. Our evaluation shows performance comparable to state-of-the-art causal inference models on synthetic data, while achieving a 5x reduction in model size, 4x fewer computations per second, a 5x reduction in processing time, and better generalization to real hardware data. Moreover, our real-time hybrid model runs in 8 ms on mobile CPUs designed for low-power wearable devices and achieves an end-to-end latency of 17.5 ms.
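A toy sketch of the hybrid structure suggested by this abstract, assuming a simple delay-and-sum beamformer whose single-channel output is then cleaned up by a small causal 1-D convolutional network; the array geometry, delays, sample rate, and network size are placeholders, not the paper's design.

```python
# Hedged sketch: traditional beamformer followed by a lightweight causal neural network.
import numpy as np
import torch
import torch.nn as nn

def delay_and_sum(mics, delays_samples):
    """mics: (n_mics, T) signals; delays_samples: per-microphone integer delays."""
    aligned = [np.roll(m, -d) for m, d in zip(mics, delays_samples)]
    return np.mean(aligned, axis=0)

class TinyCausalEnhancer(nn.Module):
    def __init__(self, ch=16, kernel=9, layers=3):
        super().__init__()
        blocks, in_ch = [], 1
        for _ in range(layers):
            # Left padding only, so the model stays causal (never sees future samples).
            blocks += [nn.ConstantPad1d((kernel - 1, 0), 0.0),
                       nn.Conv1d(in_ch, ch, kernel), nn.ReLU()]
            in_ch = ch
        blocks += [nn.Conv1d(ch, 1, 1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, x):                       # x: (batch, samples)
        return self.net(x.unsqueeze(1)).squeeze(1)

mics = np.random.randn(4, 16000)                                   # 4-mic array, 1 s at 16 kHz
beamformed = delay_and_sum(mics, delays_samples=[0, 1, 2, 3])      # steer toward the target
enhanced = TinyCausalEnhancer()(torch.from_numpy(beamformed).float().unsqueeze(0))
print(enhanced.shape)                                              # torch.Size([1, 16000])
```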
We present a single-stage causal waveform-to-waveform multichannel model that can separate moving sound sources based on their broad spatial locations in a dynamic acoustic scene. We divide the scene into two spatial regions containing, respectively, the target and the interfering sound sources. The model is trained end-to-end and performs spatial processing implicitly, without any components based on traditional processing or hand-crafted spatial features. We evaluate the proposed model on a real-world dataset and show that it matches the performance of an oracle beamformer followed by a state-of-the-art single-channel enhancement network.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Partial differential equations (PDEs) see widespread use in sciences and engineering to describe simulations of physical processes as scalar and vector fields interacting and coevolving over time. Due to the computationally expensive nature of their standard solution methods, neural PDE surrogates have become an active research topic to accelerate these simulations. However, current methods do not explicitly take into account the relationships between different fields and their internal components, which are often correlated. Viewing the time evolution of such correlated fields through the lens of multivector fields allows us to overcome these limitations. Multivector fields consist of scalar and vector components, as well as higher-order components such as bivectors and trivectors. Their algebraic properties, such as multiplication, addition, and other arithmetic operations, can be described by Clifford algebras. To our knowledge, this paper presents the first use of such multivector representations, together with Clifford convolutions and Clifford Fourier transforms, in the context of deep learning. The resulting Clifford neural layers are universally applicable and will find direct use in the areas of fluid dynamics, weather forecasting, and the modeling of physical systems in general. We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates with their Clifford counterparts on two-dimensional Navier-Stokes and weather modeling tasks, as well as three-dimensional Maxwell equations. Clifford neural layers consistently improve the generalization capabilities of the tested neural PDE surrogates.
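As a worked example of the multivector arithmetic such layers build on, the following sketch implements the geometric product in the 2D Clifford algebra Cl(2,0), where a multivector has a scalar part, two vector components (e1, e2), and a bivector component (e12), with e1^2 = e2^2 = 1 and e1 e2 = -e2 e1 = e12. It is an illustrative helper, not the paper's layer implementation.

```python
# Hedged worked example: geometric product of multivectors in Cl(2,0).
import numpy as np

def geometric_product(a, b):
    """a, b: arrays of shape (..., 4) with components (scalar, e1, e2, e12)."""
    a0, a1, a2, a3 = np.moveaxis(a, -1, 0)
    b0, b1, b2, b3 = np.moveaxis(b, -1, 0)
    return np.stack([
        a0 * b0 + a1 * b1 + a2 * b2 - a3 * b3,   # scalar
        a0 * b1 + a1 * b0 - a2 * b3 + a3 * b2,   # e1
        a0 * b2 + a2 * b0 + a1 * b3 - a3 * b1,   # e2
        a0 * b3 + a3 * b0 + a1 * b2 - a2 * b1,   # e12 (bivector)
    ], axis=-1)

# Sanity checks of the algebra rules: e1 * e2 = e12 and e12 * e12 = -1.
e1  = np.array([0.0, 1.0, 0.0, 0.0])
e2  = np.array([0.0, 0.0, 1.0, 0.0])
e12 = geometric_product(e1, e2)
print(e12)                           # [0. 0. 0. 1.]
print(geometric_product(e12, e12))   # [-1. 0. 0. 0.]
```

A Clifford convolution can then be thought of as a standard convolution in which the per-position multiplications between filter and field values are replaced by this geometric product over multivector components.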
For immersive applications, the generation of binaural sound that matches its visual counterpart is crucial for bringing meaningful experiences to people in a virtual environment. Recent works have shown the possibility of using neural networks to synthesize binaural audio from mono audio using 2D visual information as guidance. Extending this approach by guiding the audio with 3D visual information and operating in the waveform domain may allow a more accurate auralization of a virtual audio scene. In this paper, we present Points2Sound, a multi-modal deep learning model that generates a binaural version from mono audio using 3D point cloud scenes. Specifically, Points2Sound consists of a vision network with 3D sparse convolutions, which extracts visual features from the point cloud scene to condition an audio network operating in the waveform domain that synthesizes the binaural version. Experimental results indicate that 3D visual information can successfully guide multi-modal deep learning models for the task of binaural synthesis. In addition, we investigate different loss functions and 3D point cloud attributes, showing that directly predicting the full binaural signal and using rgb-depth features increases the performance of our proposed model.
Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of imitating this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multi-modal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them according to channel attention, spatial attention, temporal attention, and branch attention. A related repository, https://github.com/MenghaoGuo/Awesome-Vision-Attentions, is dedicated to collecting related work. We also suggest future directions for attention mechanism research.
Convolution-augmented transformers (conformers) have recently been proposed for various speech-domain applications, such as automatic speech recognition (ASR) and speech separation, as they can capture both local and global dependencies. In this paper, we propose a conformer-based metric generative adversarial network (CMGAN) for speech enhancement (SE) in the time-frequency (TF) domain. The generator encodes the magnitude and complex spectrogram information using two-stage conformer blocks to model both time and frequency dependencies. The decoder then decouples the estimation into a magnitude mask decoder branch to filter out unwanted distortions and a complex refinement branch to further improve the magnitude estimation and implicitly enhance the phase information. Additionally, we include a metric discriminator to alleviate metric mismatch by optimizing the corresponding evaluation score. Objective and subjective evaluations illustrate that CMGAN shows superior performance compared with state-of-the-art methods in three speech enhancement tasks (denoising, dereverberation, and super-resolution). For example, quantitative denoising analysis on the Voice Bank+DEMAND dataset indicates that CMGAN outperforms various previous models by a margin, i.e., with a PESQ of 3.41 and an SSNR of 11.10 dB.
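The decoder decoupling described above can be sketched as two small heads on the shared conformer features: one predicting a magnitude mask applied to the noisy magnitude (reusing the noisy phase), the other predicting a complex residual added on top. Module sizes and names are placeholders rather than the CMGAN reference code.

```python
# Hedged sketch of a two-branch decoder: magnitude masking plus complex refinement.
import torch
import torch.nn as nn

class TwoBranchDecoder(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        self.mask_branch = nn.Sequential(nn.Conv2d(feat_ch, 1, 1), nn.Sigmoid())
        self.complex_branch = nn.Conv2d(feat_ch, 2, 1)      # residual (real, imag)

    def forward(self, feats, noisy_spec):
        """feats: (B, C, T, F) shared features; noisy_spec: (B, 2, T, F) real/imag."""
        mag = torch.sqrt((noisy_spec ** 2).sum(dim=1, keepdim=True) + 1e-9)
        phase = noisy_spec / mag                             # unit real/imag pair (noisy phase)
        masked = self.mask_branch(feats) * mag * phase       # magnitude masking
        return masked + self.complex_branch(feats)           # complex refinement

dec = TwoBranchDecoder()
feats = torch.randn(1, 64, 100, 257)      # shared conformer features (placeholder)
noisy = torch.randn(1, 2, 100, 257)       # noisy complex spectrogram (real, imag)
print(dec(feats, noisy).shape)            # torch.Size([1, 2, 100, 257])
```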
The marine ecosystem is changing at an alarming rate, exhibiting biodiversity loss and the migration of tropical species to temperate basins. Monitoring the underwater environments and their inhabitants is of fundamental importance to understand the evolution of these systems and implement safeguard policies. However, assessing and tracking biodiversity is often a complex task, especially in large and uncontrolled environments, such as the oceans. One of the most popular and effective methods for monitoring marine biodiversity is passive acoustics monitoring (PAM), which employs hydrophones to capture underwater sound. Many aquatic animals produce sounds characteristic of their own species; these signals travel efficiently underwater and can be detected even at great distances. Furthermore, modern technologies are becoming more and more convenient and precise, allowing for very accurate and careful data acquisition. To date, audio captured with PAM devices is frequently manually processed by marine biologists and interpreted with traditional signal processing techniques for the detection of animal vocalizations. This is a challenging task, as PAM recordings are often over long periods of time. Moreover, one of the causes of biodiversity loss is sound pollution; in data obtained from regions with loud anthropic noise, it is hard to separate the artificial from the fish sound manually. Nowadays, machine learning and, in particular, deep learning represents the state of the art for processing audio signals. Specifically, sound separation networks are able to identify and separate human voices and musical instruments. In this work, we show that the same techniques can be successfully used to automatically extract fish vocalizations in PAM recordings, opening up the possibility for biodiversity monitoring at a large scale.
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
We study complex-valued scaling as a natural symmetry unique to complex-valued measurements and representations. Deep Complex Networks (DCN) extend real-valued algebra to the complex domain without addressing complex-valued scaling. SurReal takes a restrictive manifold view of complex numbers, adopting a distance metric to achieve complex-scaling invariance while losing rich complex-valued information. We analyze complex-valued scaling as a co-domain transformation and design novel neural network layers that are invariant to this special transformation. We also propose novel complex-valued representations of RGB images, where complex-valued scaling indicates hue shifts or correlated changes across color channels. Benchmarked on MSTAR, CIFAR10, CIFAR100, and SVHN, our co-domain symmetric (CDS) classifiers deliver higher accuracy, better generalization, robustness to co-domain transformations, and lower model bias and variance than DCN and SurReal, with fewer parameters.
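A toy check of the co-domain transformation in question: complex scaling multiplies every measurement by one complex number $s$, jointly changing magnitude and phase, and a representation can be made invariant to it by cancelling the global magnitude and phase against a reference statistic. The normalization below only illustrates the symmetry; it is not the CDS layer design.

```python
# Hedged toy illustration of invariance to global complex-valued scaling.
import numpy as np

def complex_scale_invariant(z, eps=1e-9):
    z = np.asarray(z, dtype=complex)
    ref = z.mean()                                       # picks up the global scale and phase
    return z * np.conj(ref) / (np.abs(ref) ** 2 + eps)   # cancels |s| and angle(s)

rng = np.random.default_rng(0)
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)
s = 2.0 * np.exp(1j * 0.7)                               # arbitrary complex scaling factor
print(np.allclose(complex_scale_invariant(z), complex_scale_invariant(s * z)))   # True
```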
Augmented reality devices have the potential to enhance human perception and enable other assistive functionalities in complex conversational environments. Effectively capturing the audio-visual context necessary for understanding these social interactions first requires detecting and localizing the voice activity of the device wearer and the surrounding people. These tasks are challenging due to their egocentric nature: the wearer's head motion may cause motion blur, surrounding people may appear at difficult viewing angles, and there may be occlusions, visual clutter, audio noise, and distortion. Under these conditions, previous state-of-the-art active speaker detection methods do not give satisfactory results. Instead, we tackle the problem in a new setting using both video and multi-channel microphone array audio. We propose a novel end-to-end deep learning approach that provides robust voice activity detection and localization results. In contrast to previous approaches, our method localizes active speakers from all possible directions on the sphere, even outside the camera's field of view, while simultaneously detecting the device wearer's own voice activity. Our experiments show that the proposed method gives superior results, can run in real time, and is robust against noise and clutter.
Object-centric representations form the basis of human perception and enable us to reason about the world and to systematically generalize to new settings. Currently, most works on unsupervised object discovery focus on slot-based approaches, which explicitly separate the latent representations of individual objects. While the results are easily interpretable, this usually requires the design of involved architectures. In contrast, we propose a comparatively simple approach - the Complex AutoEncoder (CAE) - that creates distributed object-centric representations. Following a coding scheme theorized to underlie object representations in biological neurons, its complex-valued activations represent two messages: their magnitudes express the presence of a feature, while the relative phase differences between neurons express which features should be bound together to create joint object representations. In contrast to previous approaches that use complex-valued activations for object discovery, we present a fully unsupervised approach that is trained end-to-end - resulting in significant improvements in performance and efficiency. Further, we show that the CAE achieves competitive or better unsupervised object discovery performance on simple multi-object datasets compared with a state-of-the-art slot-based approach, while being up to 100 times faster to train.
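A hedged sketch of how the two "messages" could be read out of such complex-valued activations: magnitudes above a threshold mark feature presence, and active locations whose phases cluster together are assigned to the same object. The k-means step and thresholds are illustrative choices, not the CAE's own evaluation procedure.

```python
# Hedged sketch: turning complex-valued outputs into an object assignment map.
import numpy as np
from sklearn.cluster import KMeans

def phase_grouping(z, n_objects=3, presence_thresh=0.1):
    """z: complex array of shape (H, W) from the model's output layer.
    Returns an integer object map of shape (H, W); -1 marks background."""
    magnitude, phase = np.abs(z), np.angle(z)
    active = magnitude > presence_thresh                       # magnitude encodes feature presence
    # Embed phases on the unit circle so clustering respects wrap-around.
    feats = np.stack([np.cos(phase[active]), np.sin(phase[active])], axis=-1)
    labels = KMeans(n_clusters=n_objects, n_init=10).fit_predict(feats)
    obj_map = np.full(z.shape, -1)
    obj_map[active] = labels                                   # phase clusters define objects
    return obj_map

# Toy example: two regions with distinct phases plus zero-magnitude background.
z = np.zeros((8, 8), dtype=complex)
z[:4, :4] = np.exp(1j * 0.3)
z[4:, 4:] = np.exp(1j * 2.5)
print(phase_grouping(z, n_objects=2))
```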