Audio-visual approaches involving visual inputs have laid the foundation for recent progress in speech separation. However, the optimal concurrent use of auditory and visual inputs is still an active research area. Inspired by the cortico-thalamo-cortical circuit, in which the sensory processing mechanisms of different modalities modulate one another via the non-lemniscal sensory thalamus, we propose a novel cortico-thalamo-cortical neural network (CTCNet) for audio-visual speech separation (AVSS). First, the CTCNet learns hierarchical auditory and visual representations in a bottom-up manner in separate auditory and visual subnetworks, mimicking the functions of the auditory and visual cortical areas. Then, inspired by the large number of connections between cortical regions and the thalamus, the model fuses the auditory and visual information in a thalamic subnetwork through top-down connections. Finally, the model transmits this fused information back to the auditory and visual subnetworks, and the above process is repeated several times. The results of experiments on three speech separation benchmark datasets show that CTCNet remarkably outperforms existing AVSS methods with considerably fewer parameters. These results suggest that mimicking the anatomical connectome of the mammalian brain has great potential for advancing the development of deep neural networks. The project repository is available at https://github.com/JusperLee/CTCNet.
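To make the cyclic cortico-thalamo-cortical scheme concrete, the sketch below shows one possible way to implement repeated bottom-up/top-down fusion in PyTorch. The module names, layer choices, and dimensions are illustrative assumptions, not the released CTCNet architecture.

```python
# A minimal PyTorch sketch of the cyclic fusion idea described above (not the
# authors' implementation): separate auditory/visual subnetworks exchange
# information through a shared "thalamic" fusion block for several cycles.
# All module names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CTCFusionSketch(nn.Module):
    def __init__(self, audio_dim=512, video_dim=256, fused_dim=512, cycles=3):
        super().__init__()
        self.cycles = cycles
        # stand-ins for the auditory and visual "cortical" subnetworks
        self.audio_block = nn.GRU(audio_dim, audio_dim, batch_first=True)
        self.video_block = nn.GRU(video_dim, video_dim, batch_first=True)
        # "thalamic" subnetwork: fuses both modalities into one representation
        self.fuse = nn.Linear(audio_dim + video_dim, fused_dim)
        # top-down projections sending fused information back to each modality
        self.to_audio = nn.Linear(fused_dim, audio_dim)
        self.to_video = nn.Linear(fused_dim, video_dim)

    def forward(self, a, v):
        # a: (batch, time, audio_dim), v: (batch, time, video_dim),
        # assumed to be pre-aligned to the same frame rate
        for _ in range(self.cycles):
            a, _ = self.audio_block(a)            # bottom-up auditory pass
            v, _ = self.video_block(v)            # bottom-up visual pass
            fused = torch.relu(self.fuse(torch.cat([a, v], dim=-1)))
            a = a + self.to_audio(fused)          # feedback to the auditory stream
            v = v + self.to_video(fused)          # feedback to the visual stream
        return a                                  # used to estimate a separation mask

model = CTCFusionSketch()
mask_features = model(torch.randn(2, 100, 512), torch.randn(2, 100, 256))
print(mask_features.shape)  # torch.Size([2, 100, 512])
```

The key point the sketch tries to capture is that the two modality streams never fuse permanently: the shared representation is redistributed to both streams on every cycle.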
Artificial intelligence has profoundly revolutionized the field of medicinal chemistry with many impressive applications, but the success of these applications requires a large number of training samples with high-quality annotations, which severely limits the wide usage of data-driven methods. In this paper, we focus on the reaction yield prediction problem, which assists chemists in selecting high-yield reactions in a new chemical space with only a few experimental trials. To attack this challenge, we first propose MetaRF, an attention-based random forest model specially designed for few-shot yield prediction, in which the attention weights of the random forest are automatically optimized by a meta-learning framework and can be quickly adapted to predict the performance of new reagents when given a few additional samples. To improve the few-shot learning performance, we further introduce a dimension-reduction-based sampling method to determine valuable samples to be experimentally tested and then learned. Our methodology is evaluated on three different datasets and achieves satisfactory performance on few-shot prediction. On the high-throughput experimentation (HTE) dataset, the average yield of our methodology's top 10 high-yield reactions is relatively close to the result of ideal yield selection.
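As a rough illustration of the few-shot idea, the snippet below adapts softmax attention weights over the trees of a standard random forest on a handful of support samples. The data, features, and plain gradient-based adaptation are stand-ins, not the MetaRF meta-learning framework itself.

```python
# A small illustrative sketch (not the authors' MetaRF code) of the core idea:
# a random forest whose per-tree "attention" weights are quickly adapted on a
# handful of support reactions before predicting yields for a new reagent.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# base forest trained on reactions from previously explored chemical space
rng = np.random.default_rng(0)
X_base, y_base = rng.normal(size=(500, 16)), rng.uniform(0, 100, size=500)
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_base, y_base)

def adapt_tree_weights(X_support, y_support, steps=100, lr=0.05):
    """Fit softmax attention weights over trees on a few labelled samples."""
    preds = np.stack([t.predict(X_support) for t in forest.estimators_])  # (trees, n)
    logits = np.zeros(len(forest.estimators_))
    for _ in range(steps):
        w = softmax(logits)
        err = w @ preds - y_support                   # weighted-ensemble error
        grad_w = preds @ err / len(y_support)         # d(MSE/2)/dw
        grad_logits = w * (grad_w - w @ grad_w)       # softmax chain rule
        logits -= lr * grad_logits
    return softmax(logits)

def predict_weighted(X, w):
    preds = np.stack([t.predict(X) for t in forest.estimators_])
    return w @ preds

# few-shot adaptation: e.g. 5 measured yields with the new reagent
X_sup, y_sup = rng.normal(size=(5, 16)), rng.uniform(0, 100, size=5)
w = adapt_tree_weights(X_sup, y_sup)
print(predict_weighted(rng.normal(size=(3, 16)), w))
```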
Electroencephalogram (EEG) recordings are often contaminated by artifacts. Various methods have been developed to eliminate or weaken the influence of artifacts; however, most of them rely on prior analysis experience. Here, we propose a deep learning framework, named DeepSeparator, to separate neural signals and artifacts in an embedding space and reconstruct the denoised signal. DeepSeparator employs an encoder to extract and amplify the features in the raw EEG, a module called the decomposer to extract the trend and to detect and suppress the artifacts, and a decoder to reconstruct the denoised signal. In addition, DeepSeparator can extract the artifact itself, which greatly increases model interpretability. The proposed method is tested on a semi-synthetic EEG dataset and a real task-related EEG dataset, and the results suggest that DeepSeparator outperforms conventional models in both EOG and EMG artifact removal. DeepSeparator can be extended to multi-channel EEG and data of any length. It may inspire future developments and applications of deep-learning-based EEG denoising. The code of DeepSeparator is available at https://github.com/ncclabsustech/deepseparator.
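The following is a minimal encoder-decomposer-decoder sketch in the spirit of the description above; it is not the released DeepSeparator code, and the layer choices, kernel sizes, and masking scheme are assumptions.

```python
# An illustrative encoder-decomposer-decoder sketch for single-channel EEG
# denoising: the decomposer predicts a mask that splits the embedded features
# into a neural part and an artifact part, both of which are decoded back to
# the signal domain.
import torch
import torch.nn as nn

class SeparatorSketch(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # encoder: lift the raw 1-D EEG into a feature (embedding) space
        self.encoder = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, padding=4), nn.ReLU(),
        )
        # decomposer: per-feature mask separating neural vs. artifact content
        self.decomposer = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=9, padding=4), nn.Sigmoid(),
        )
        # decoder: map masked features back to the signal domain
        self.decoder = nn.Conv1d(channels, 1, kernel_size=9, padding=4)

    def forward(self, x):                           # x: (batch, 1, time)
        feats = self.encoder(x)
        mask = self.decomposer(feats)
        clean = self.decoder(feats * mask)          # denoised EEG
        artifact = self.decoder(feats * (1 - mask)) # explicit artifact estimate
        return clean, artifact

clean, artifact = SeparatorSketch()(torch.randn(4, 1, 1024))
print(clean.shape, artifact.shape)  # both (4, 1, 1024)
```

Because the sketch is fully convolutional, it accepts inputs of any length, mirroring the claim above, and the explicit artifact output mirrors the interpretability argument.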
Nowadays, more and more datasets are stored in a distributed manner for the sake of memory storage or data privacy. The generalized eigenvalue problem (GEP) plays a vital role in large-scale, high-dimensional statistical models. However, the existing distributed methods for eigenvalue decomposition cannot be applied to GEP because of the divergence of the empirical covariance matrices involved. Here we propose a general distributed GEP framework with one-shot communication for GEP. If the symmetric data covariance has repeated eigenvalues, e.g., in canonical component analysis, we further modify the method for better convergence. A theoretical analysis of the approximation error is provided in terms of the divergence of the data covariance, the eigenvalues of the empirical data covariance, and the number of local servers. Numerical experiments also demonstrate the effectiveness of the proposed algorithms.
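The toy example below shows one simple way a single round of communication can be arranged for a GEP of the form A v = lambda B v: each server transmits its empirical covariance pair once, and the center averages them before a single generalized eigendecomposition. This illustrates only the communication pattern; the paper's estimator and its treatment of divergent covariances may differ.

```python
# One-shot communication for a toy distributed GEP: local servers send their
# empirical covariance pairs once; the center averages and solves A v = lambda B v.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d, n_local, n_servers = 10, 200, 8

def local_covariances(X, Y):
    """One local server: return its empirical covariance pair (A_k, B_k)."""
    return np.cov(X, rowvar=False), np.cov(Y, rowvar=False)

# simulate data held separately on each server
pairs = []
for _ in range(n_servers):
    X = rng.normal(size=(n_local, d)) @ rng.normal(size=(d, d)) * 0.1
    Y = rng.normal(size=(n_local, d))
    pairs.append(local_covariances(X, Y))

# one-shot aggregation at the central server
A = np.mean([p[0] for p in pairs], axis=0)
B = np.mean([p[1] for p in pairs], axis=0) + 1e-6 * np.eye(d)  # keep B positive definite

eigvals, eigvecs = eigh(A, B)   # generalized eigendecomposition A v = lambda B v
print(eigvals[-3:])             # top generalized eigenvalues
```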
Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge through growth. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention of the learning on growing snapshots of a KG without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, with an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
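The snippet below condenses two of the ingredients mentioned above into a few lines, under simplified assumptions (a TransE-style transfer rule and illustrative tensors): initializing an unseen entity from its known neighbors, and an L2 retention penalty that discourages old embeddings from drifting.

```python
# A condensed sketch of (i) embedding transfer: initializing a new entity from
# the learned embeddings of its known neighbors, and (ii) a retention penalty
# that keeps old embeddings close to their previous snapshot values to limit
# catastrophic forgetting. Shapes and the TransE-style rule are assumptions.
import torch

dim, n_old = 64, 1000
old_entity_emb = torch.nn.Parameter(torch.randn(n_old, dim) * 0.1)
frozen_snapshot = old_entity_emb.detach().clone()   # embeddings from the last snapshot

def transfer_init(neighbor_ids, relation_emb):
    """Initialize an unseen entity from its neighbors shifted by the linking relations."""
    return (old_entity_emb[neighbor_ids].detach() + relation_emb).mean(dim=0)

def retention_penalty(weight=1.0):
    """Regularize old embeddings towards their previous snapshot values."""
    return weight * ((old_entity_emb - frozen_snapshot) ** 2).sum(dim=1).mean()

# example: a new entity attached to known entities 3, 17, 42 via some relations
rel = torch.randn(3, dim) * 0.1
new_entity_emb = torch.nn.Parameter(transfer_init(torch.tensor([3, 17, 42]), rel))

loss = retention_penalty(weight=0.1)    # added to the usual KG embedding loss
loss.backward()
print(new_entity_emb.shape, loss.item())
```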
Transcranial temporal interference stimulation (tTIS) has been reported to be effective in stimulating deep brain structures in experimental studies. However, a computational framework for optimizing the tTIS strategy and simulating the impact of tTIS on the brain is still lacking, as previous methods rely on predefined parameters and hardly adapt to additional constraints. Here, we propose a general framework, namely multi-objective optimization via evolutionary algorithm (MOVEA), to solve the nonconvex optimization problem for various stimulation techniques, including tTIS and transcranial alternating current stimulation (tACS). By optimizing the electrode montage in a two-stage structure, MOVEA can accommodate additional constraints (e.g., the number of electrodes, additional avoidance regions) and accelerates the computation of the Pareto fronts. These Pareto fronts consist of a set of optimal solutions under different requirements, revealing a trade-off between conflicting objectives, such as intensity and focality. Based on MOVEA, we make comprehensive comparisons between tACS and tTIS in terms of intensity, focality, and maneuverability for targets of different depths. Our results show that although tTIS can only achieve a relatively low maximum electric field strength (for example, the maximum intensity in the motor area is 0.42 V/m under tTIS versus 0.51 V/m under tACS), it improves focality by reducing the activated volume outside the target by 60%. We further perform ANOVA on the stimulation results of eight subjects with tACS and tTIS. Despite the individual differences in head models, our results suggest that tACS achieves a greater intensity, whereas tTIS achieves a higher focality. These findings provide guidance on the choice between tACS and tTIS and indicate the great potential of tTIS-based personalized neuromodulation. Code will be released soon.
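The toy loop below illustrates the intensity-versus-focality trade-off with a random stand-in leadfield: candidate electrode current patterns are mutated, and only the non-dominated set is kept, yielding a small Pareto front. It is a didactic sketch, not MOVEA's two-stage algorithm or a real head model.

```python
# A toy illustration (not MOVEA itself) of evolving electrode montages and
# keeping a Pareto front over two conflicting objectives: field intensity at a
# target voxel versus focality (approximated here as low off-target field).
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_voxels, target = 32, 500, 0
leadfield = rng.normal(size=(n_voxels, n_electrodes))   # stand-in voxel-by-electrode gains

def objectives(currents):
    field = leadfield @ currents
    intensity = abs(field[target])                # maximize field at the target voxel
    off_target = np.sqrt(np.mean(field[1:] ** 2)) # minimize off-target field (focality)
    return intensity, off_target

def pareto_front(points):
    """Keep montages not dominated in (maximize intensity, minimize off-target)."""
    keep = []
    for i, (inten_i, off_i) in enumerate(points):
        dominated = any(
            inten_j >= inten_i and off_j <= off_i and (inten_j, off_j) != (inten_i, off_i)
            for j, (inten_j, off_j) in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# simple evolutionary loop: mutate current patterns, keep the non-dominated set
population = [rng.normal(size=n_electrodes) * 0.5 for _ in range(40)]
for _ in range(50):
    children = [np.clip(p + rng.normal(size=n_electrodes) * 0.05, -2.0, 2.0)
                for p in population]                       # current limits as a constraint
    candidates = population + children
    scores = [objectives(c) for c in candidates]
    front = pareto_front(scores)
    population = [candidates[i] for i in front][:40]

print(len(population), "non-dominated montages on the toy Pareto front")
```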
Deriving the governing equations of complex physical systems from first principles can be extremely challenging when the system contains unknown terms and hidden physical mechanisms. In this work, we employ a deep learning architecture to learn the fluid partial differential equations (PDEs) of a plasma system from data acquired from a fully kinetic model. It is demonstrated that the learned multi-moment fluid PDEs can incorporate kinetic effects such as Landau damping. Based on the learned fluid closure, the data-driven multi-moment fluid modeling can well reproduce all the physical quantities derived from the fully kinetic model, and the calculated damping rate of Landau damping is consistent with both the fully kinetic simulation and the linear theory. Data-driven fluid modeling of PDEs for complex physical systems can be applied to improve fluid closures and to reduce the computational cost of multi-scale modeling of global systems.
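A learned closure of this kind can be pictured as a small regression network that supplies the unclosed higher-order moment from lower-order moments and their gradients. The sketch below shows that pattern with random stand-in data in place of quantities extracted from a kinetic simulation; the architecture is an illustrative assumption.

```python
# A minimal sketch of a learned fluid closure: a small network maps lower-order
# moments (and their gradients) to the unclosed higher-order moment (e.g. the
# heat flux), so the truncated fluid equations can mimic kinetic effects.
import torch
import torch.nn as nn

closure = nn.Sequential(                 # (n, u, p, dn/dx, du/dx, dp/dx) -> heat flux q
    nn.Linear(6, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(closure.parameters(), lr=1e-3)

# stand-in training pairs: local moments/gradients and the "true" kinetic heat flux
moments = torch.randn(4096, 6)
q_kinetic = torch.randn(4096, 1)

for step in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(closure(moments), q_kinetic)
    loss.backward()
    optimizer.step()

# inside a fluid solver, the learned closure would supply q at every grid point,
# closing the pressure/energy equation of the moment hierarchy
q_pred = closure(torch.randn(128, 6))
print(q_pred.shape)
```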
The training of deep neural networks and other modern machine learning models usually consists in solving non-convex optimization problems that are high-dimensional and subject to large-scale data. Here, momentum-based stochastic optimization algorithms have become especially popular in recent years. The stochasticity arises from data subsampling, which reduces the computational cost, and both momentum and stochasticity are supposed to help the algorithm overcome local minimizers and, hopefully, converge globally. Theoretically, this combination of stochasticity and momentum is poorly understood. In this work, we propose and analyze a continuous-time model for stochastic gradient descent with momentum. The model is a piecewise-deterministic Markov process that represents the particle motion through an underdamped dynamical system and the data subsampling through random switching of the dynamical system. In our analysis, we investigate the longtime limit, the subsampling-to-no-subsampling limit, and the momentum-to-no-momentum limit. We are particularly interested in the case where the momentum is reduced over time: intuitively, momentum helps to overcome local minimizers in the initial phase of the algorithm, but it prohibits fast convergence to a global minimizer later on. Under convexity assumptions, we show convergence of the dynamical system to the global minimizer when reducing the momentum over time and letting the subsampling rate go to infinity. We then propose a stable, symplectic discretization scheme to construct algorithms from our continuous-time dynamical system. In numerical experiments, we study our discretization scheme on convex and non-convex test problems. In addition, we train a convolutional neural network to solve the CIFAR-10 image classification problem. Here, our algorithm attains results that are competitive with stochastic gradient descent with momentum.
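Schematically, and with notation that is our own reconstruction rather than the paper's, such a piecewise-deterministic model can be written as an underdamped system whose gradient term switches randomly between data subsets:

```latex
% Schematic form of the continuous-time model described above (notation is an
% illustrative reconstruction, not copied from the paper).
\begin{aligned}
  \dot{X}_t &= V_t, \\
  \dot{V}_t &= -\,\gamma(t)\, V_t \;-\; \nabla f_{I_t}(X_t).
\end{aligned}
```

Here f_i denotes the loss on data subset i, (I_t) is a Markov jump process whose switching rate plays the role of the subsampling rate, and the friction gamma(t) controls the momentum: a large switching rate approaches the full-gradient dynamics, while a gamma(t) that grows over time corresponds to reducing the momentum.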
Over the years, reasoning over knowledge graphs (KGs), which aims to infer new conclusions from known facts, has mostly focused on static KGs. The constant growth of knowledge in real life raises the necessity of enabling inductive reasoning on expanding KGs. Existing inductive works assume that new entities all emerge at once in a batch, which oversimplifies the real scenario in which new entities continually appear. This study dives into a more realistic and challenging setting where new entities emerge in multiple batches. We propose a walk-based inductive reasoning model to tackle the new setting. Specifically, a graph convolutional network with adaptive relation aggregation is designed to encode and update entities using their neighboring relations. To capture the varying importance of different neighbors, we employ a query-feedback attention mechanism during the aggregation process. Moreover, to alleviate the sparse link problem of new entities, we propose a link augmentation strategy to add trustworthy facts into KGs. We construct three new datasets for simulating this multi-batch emergence scenario. Experimental results show that our proposed model outperforms state-of-the-art embedding-based, walk-based, and rule-based models.
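The sketch below shows one way such query-conditioned neighbor aggregation can be written in PyTorch; the tensor names, dimensions, and scoring function are invented for illustration and do not reproduce the proposed model.

```python
# A compact sketch of the aggregation idea above: an unseen entity is encoded
# from its neighboring relation embeddings, with attention scores conditioned
# on the query relation ("query feedback"), so neighbors relevant to the
# current query contribute more.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryFeedbackAggregator(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        self.att = nn.Linear(2 * dim, 1)     # scores a (neighbor relation, query relation) pair
        self.update = nn.Linear(dim, dim)

    def forward(self, neighbor_rel_emb, query_rel_emb):
        # neighbor_rel_emb: (num_neighbors, dim); query_rel_emb: (dim,)
        query = query_rel_emb.expand_as(neighbor_rel_emb)
        scores = self.att(torch.cat([neighbor_rel_emb, query], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=0)                      # importance of each neighbor
        entity_emb = (alpha.unsqueeze(-1) * neighbor_rel_emb).sum(dim=0)
        return torch.tanh(self.update(entity_emb))            # embedding of the unseen entity

agg = QueryFeedbackAggregator()
new_entity = agg(torch.randn(5, 100), torch.randn(100))
print(new_entity.shape)  # torch.Size([100])
```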
Question answering (QA) systems are increasingly deployed in applications supporting real-world decisions. However, state-of-the-art models rely on deep neural networks, which are difficult for humans to interpret. Inherently interpretable models or post-hoc explainability methods can help users understand how a model arrives at its prediction and, if successful, increase their trust in the system. Furthermore, researchers can leverage these insights to develop new methods that are more accurate and less biased. In this paper, we introduce SQuARE v2, the new version of SQuARE, to provide an explainability infrastructure for comparing models based on methods such as saliency maps and graph-based explanations. While saliency maps are useful for inspecting the importance of each input token for the model's prediction, graph-based explanations from external knowledge graphs enable users to verify the reasoning behind the model's prediction. In addition, we provide multiple adversarial attacks to compare the robustness of QA models. With these explainability methods and adversarial attacks, we aim to ease research on trustworthy QA models. SQuARE is available at https://square.ukp-lab.de.
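As a generic illustration of the saliency-map component mentioned above, the snippet below computes gradient-times-input importance scores for the tokens of a toy scorer; SQuARE integrates existing explanation methods for real QA models rather than this toy.

```python
# Gradient-times-input token saliency on a toy scoring model (a stand-in for a
# QA model's answer score), showing how per-token importance can be obtained.
import torch
import torch.nn as nn

vocab, dim = 1000, 64
embed = nn.Embedding(vocab, dim)
scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))  # stand-in answer scorer

token_ids = torch.randint(0, vocab, (12,))          # a toy question+context token sequence
emb = embed(token_ids).detach().requires_grad_(True)
score = scorer(emb.mean(dim=0)).squeeze()           # toy "answer confidence"
score.backward()

saliency = (emb.grad * emb).sum(dim=-1).abs()       # gradient x input per token
ranking = torch.argsort(saliency, descending=True)
print(ranking[:5])                                  # indices of the most influential tokens
```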