A wide range of signals can be acquired from the human body; these are called biomedical signals or biosignals, and they can arise at the cellular level, the organ level, or the sub-atomic level. The electroencephalogram (EEG) arises from the electrical activity of the brain, the electrocardiogram (ECG) from the electrical activity of the heart, the electromyogram (EMG) from the electrical activity of the muscles, the electrooculogram (EOG) from the eyes, and so on. Studying these signals is very helpful to physicians, as it can help them examine, predict, and treat many diseases.
A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of nongaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
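As a hedged illustration of the linear-mixing setting described above, the following sketch mixes two independent non-Gaussian sources and recovers them with FastICA from scikit-learn; the signal shapes and the mixing matrix are arbitrary choices made for the example, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
# Two independent, non-Gaussian sources: a sawtooth-like signal and uniform noise
s1 = np.mod(t, 1.0) - 0.5
s2 = rng.uniform(-1, 1, size=t.shape)
S = np.c_[s1, s2]                      # (n_samples, n_sources)

A = np.array([[1.0, 0.5],              # arbitrary mixing matrix
              [0.4, 1.0]])
X = S @ A.T                            # observed mixtures, x = A s

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)           # estimated independent components
print(ica.mixing_.shape)               # estimated mixing matrix, (2, 2)
```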
Single-channel blind source separation (SCBS) refers to separating multiple sources from a mixed signal collected by a single sensor. Existing SCBS methods focus mainly on separating two sources and generalize poorly. To address these issues, this paper proposes an algorithm that separates multiple sources from a mixture by designing a parallel dual generative adversarial network (PDUPGAN), which builds the relationship between the mixture and the corresponding multiple sources to achieve a one-to-many cross-domain mapping. The algorithm can be applied to any mixing model, such as the linear instantaneous mixing model and the convolutive mixing model. In addition, a one-to-many dataset containing mixtures and their corresponding sources was created for this study. Experiments were conducted on four different datasets and tested on signals mixed at different ratios. The results show that the algorithm achieves high peak signal-to-noise ratio (PSNR) and correlation, outperforming state-of-the-art algorithms.
Independent component analysis (ICA) is a blind source separation method to recover source signals of interest from their mixtures. Most existing ICA procedures assume independent sampling. Second-order-statistics-based source separation methods have been developed based on parametric time series models for the mixtures from the autocorrelated sources. However, the second-order-statistics-based methods cannot separate the sources accurately when the sources have temporal autocorrelations with mixed spectra. To address this issue, we propose a new ICA method by estimating spectral density functions and line spectra of the source signals using cubic splines and indicator functions, respectively. The mixed spectra and the mixing matrix are estimated by maximizing the Whittle likelihood function. We illustrate the performance of the proposed method through simulation experiments and an EEG data application. The numerical results indicate that our approach outperforms existing ICA methods, including SOBI algorithms. In addition, we investigate the asymptotic behavior of the proposed method.
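As background for the estimation step mentioned above, the standard Whittle approximation to the log-likelihood of a single stationary series with spectral density $f(\omega;\theta)$ and periodogram $I(\omega_k)$ at the Fourier frequencies is (this is the textbook single-series form, not the paper's exact multivariate objective):

$$\ell_W(\theta) \approx -\sum_{k} \left[ \log f(\omega_k;\theta) + \frac{I(\omega_k)}{f(\omega_k;\theta)} \right].$$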
The fetal electrocardiogram (FECG) was first recorded from the maternal abdominal surface in the early twentieth century. Over the past fifty years, state-of-the-art electronics and signal processing algorithms have been used to turn noninvasive fetal electrocardiography into a reliable technology for fetal cardiac monitoring. In this chapter, the major signal processing techniques for modeling, extracting, and analyzing the FECG from noninvasive maternal abdominal recordings are reviewed in detail. The main topics of the chapter include: 1) the electrophysiology of the FECG from a signal processing viewpoint, 2) mathematical models of the maternal volume conduction medium and waveform models of the FECG acquired from the body surface, 3) signal acquisition requirements, 4) model-based techniques for FECG noise and interference cancellation, including adaptive filters and semi-blind source separation techniques, and 5) recent algorithmic advances in fetal motion tracking and online FECG extraction.
Electroencephalography (EEG) signals are often contaminated with artifacts. Developing a practical and reliable artifact removal method is essential to prevent the misinterpretation of neural signals and the degradation of brain-computer interface performance. This study developed a new artifact removal method, IC-U-Net, based on the U-Net architecture, for removing pervasive EEG artifacts and reconstructing brain sources. IC-U-Net was trained on mixtures of brain and non-brain sources decomposed by independent component analysis, and it uses an ensemble of loss functions to model the complex signal fluctuations in EEG recordings. The effectiveness of the proposed method in recovering brain sources and removing various artifacts (e.g., eye blinks/movements, muscle activity, and line/channel noise) was demonstrated in a simulation study and on three real-world EEG datasets collected at rest and while walking. IC-U-Net is user-friendly and publicly available, requires no parameter tuning or artifact type designation, and places no limits on the number of channels. Given the growing need to image natural brain dynamics in mobile settings, IC-U-Net offers a promising end-to-end solution for automatically removing artifacts from EEG recordings.
The brain effortlessly solves blind source separation (BSS) problems, but the algorithm it uses remains elusive. In signal processing, linear BSS problems are often solved by independent component analysis (ICA). To serve as a model of a biological circuit, an ICA neural network (NN) must satisfy at least the following requirements: 1. The algorithm must operate in the online setting, where data samples are streamed one at a time and the NN computes the sources on the fly, without storing any significant fraction of the data in memory. 2. The synaptic weight update is local, i.e., it depends only on biophysical variables present in the vicinity of the synapse. Here, we propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules. Interestingly, our algorithm relies on modulating synaptic plasticity by the total activity of the output neurons. In the brain, this could be accomplished by neuromodulators, extracellular calcium, the local field potential, or nitric oxide.
Electroencephalogram (EEG) recordings are often contaminated with artifacts. Various methods have been developed to eliminate or weaken the influence of artifacts, but most of them rely on prior analytical experience. Here, we propose a deep learning framework, named DeepSeparator, that separates neural signals and artifacts in an embedding space and reconstructs the denoised signal. DeepSeparator employs an encoder to extract and amplify the features in the raw EEG, a module called the decomposer to extract the trend and to detect and suppress artifacts, and a decoder to reconstruct the denoised signal. In addition, DeepSeparator can extract the artifact itself, which greatly increases model interpretability. The proposed method was tested on a semi-synthetic EEG dataset and a real task-related EEG dataset, suggesting that DeepSeparator outperforms conventional models in EOG and EMG artifact removal. DeepSeparator can be extended to multi-channel EEG and to data of any length. It may motivate future developments and applications of deep-learning-based EEG denoising. The code for DeepSeparator is available at https://github.com/ncclabsustech/deepseparator.
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models. Examples include M/EEG inverse problems, neural encoding models for task-based fMRI analyses, and temperature monitoring of the climate or of CPUs and GPUs. In these domains, both the model parameters to be inferred and the measurement noise may exhibit complex spatio-temporal structure. Existing work either neglects the temporal structure or leads to computationally demanding inference schemes. Overcoming these limitations, we devise a novel flexible hierarchical Bayesian framework in which the spatio-temporal dynamics of the model parameters and the noise are modeled with a Kronecker product covariance structure. Inference in our framework is based on majorization-minimization optimization and has guaranteed convergence properties. Our highly efficient algorithms exploit the intrinsic Riemannian geometry of temporal autocovariance matrices. For stationary dynamics described by Toeplitz matrices, the theory of circulant embeddings is employed. We prove convex bounding properties and derive the update rules of the resulting algorithms. On both synthetic and real neural data from M/EEG, we demonstrate that our methods lead to improved performance.
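For readers unfamiliar with the covariance assumption named above, a generic Kronecker-structured model (a sketch of the common form, not necessarily the paper's exact parameterization) factorizes the covariance of the vectorized space-time error matrix $E \in \mathbb{R}^{M\times T}$ as

$$\operatorname{Cov}\big(\operatorname{vec}(E)\big) = \Sigma_t \otimes \Sigma_s,$$

where $\Sigma_s \in \mathbb{R}^{M\times M}$ captures spatial (sensor) covariance and $\Sigma_t \in \mathbb{R}^{T\times T}$ captures temporal covariance.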
We consider independent component analysis of binary data. While a fundamental case in practice, it has been much less developed than ICA for continuous-valued data. We start by assuming a linear mixing model in a continuous-valued latent space, followed by a binary observation model. Importantly, we assume that the sources are non-stationary; this is necessary since any non-Gaussianity would essentially be destroyed by the binarization. Interestingly, the model allows for a closed-form likelihood by employing the cumulative distribution function of the multivariate Gaussian distribution. In stark contrast to the continuous-valued case, we prove non-identifiability of the model when the number of observed variables is small; when the number of observed variables is higher, our empirical results imply identifiability. We present a practical method for binary ICA that uses only pairwise marginals, which are faster to compute than the full multivariate likelihood.
Digital sensors can lead to noisy results under many circumstances. To be able to remove the undesired noise from images, proper noise modeling and accurate noise parameter estimation are crucial. In this project, we use a Poisson-Gaussian noise model for the raw images captured by the sensor, as it fits the physical characteristics of the sensor closely. Moreover, we limit ourselves to the case where observed (noisy) and ground-truth (noise-free) image pairs are available. Using such pairs is beneficial for noise estimation and is not widely studied in the literature. Based on this model, we derive the theoretical maximum likelihood solution and discuss its practical implementation and optimization. Further, we propose two algorithms based on variance and cumulant statistics. Finally, we compare the results of our methods with two different approaches: a CNN we trained ourselves, and another taken from the literature. The comparison between all these methods shows that our algorithms outperform the others in terms of MSE and have good additional properties.
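To make the noise model concrete: under a Poisson-Gaussian model the variance of a noisy pixel grows affinely with the clean intensity, $\operatorname{var}(y \mid x) \approx a\,x + b$. The sketch below is only an illustrative moment-based fit, not the maximum-likelihood or cumulant estimators discussed in the abstract; it estimates $(a, b)$ from clean/noisy image pairs by binning intensities and regressing the per-bin residual variance on the per-bin mean.

```python
import numpy as np

def estimate_poisson_gaussian_params(clean, noisy, n_bins=100):
    """Illustrative sketch: fit var(noisy | clean) ~ a * clean + b by binning
    the ground-truth intensities and regressing the per-bin residual variance
    on the per-bin mean intensity."""
    clean = clean.ravel().astype(np.float64)
    resid = noisy.ravel().astype(np.float64) - clean
    edges = np.quantile(clean, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(clean, edges) - 1, 0, n_bins - 1)
    means, variances = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() > 50:                 # skip sparsely populated bins
            means.append(clean[mask].mean())
            variances.append(resid[mask].var())
    a, b_ = np.polyfit(means, variances, 1)  # slope ~ Poisson gain, intercept ~ Gaussian variance
    return a, b_
```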
Recent studies have used sparse classification to predict categorical variables from high-dimensional brain activity signals in order to expose human intentions and mental states, selecting the relevant features automatically during model training. However, existing sparse classification models are prone to performance degradation caused by the noise inherent in brain recordings. To address this issue, we propose a new robust and sparse classification algorithm in this study. To this end, we introduce the correntropy learning framework into the automatic-relevance-determination-based sparse classification model and propose a new correntropy-based robust sparse logistic regression algorithm. To demonstrate the superior brain activity decoding performance of the proposed algorithm, we evaluate it on a synthetic dataset, an electroencephalogram (EEG) dataset, and a functional magnetic resonance imaging (fMRI) dataset. Extensive experimental results confirm that the proposed method not only achieves higher classification accuracy in noisy, high-dimensional classification tasks, but also selects the more informative features for the decoding scheme. Combining the correntropy learning approach with the automatic relevance determination technique significantly improves robustness to noise, leading to a more adequate robust sparse brain decoding algorithm. It provides a more powerful approach for real-world brain activity decoding and brain-computer interfaces.
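For reference, the correntropy between two random variables, as usually defined in the correntropy-learning literature (a general definition, not the paper's specific objective), uses a Gaussian kernel of bandwidth $\sigma$:

$$V_\sigma(X, Y) = \mathbb{E}\left[\kappa_\sigma(X - Y)\right], \qquad \kappa_\sigma(e) = \exp\!\left(-\frac{e^2}{2\sigma^2}\right),$$

and is estimated in practice by the sample average $\hat{V}_\sigma = \frac{1}{N}\sum_{i=1}^{N} \kappa_\sigma(x_i - y_i)$.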
Erroneous correspondences between samples and their respective channel or target commonly arise in several real-world applications. For instance, whole-brain calcium imaging of freely moving organisms, multiple target tracking or multi-person contactless vital sign monitoring may be severely affected by mismatched sample-channel assignments. To systematically address this fundamental problem, we pose it as a signal reconstruction problem where we have lost correspondences between the samples and their respective channels. We show that under the assumption that the signals of interest admit a sparse representation over an overcomplete dictionary, unique signal recovery is possible. Our derivations reveal that the problem is equivalent to a structured unlabeled sensing problem without precise knowledge of the sensing matrix. Unfortunately, existing methods are neither robust to errors in the regressors nor do they exploit the structure of the problem. Therefore, we propose a novel robust two-step approach for the reconstruction of shuffled sparse signals. The performance and robustness of the proposed approach is illustrated in an application of whole-brain calcium imaging in computational neuroscience. The proposed framework can be generalized to sparse signal representations other than the ones considered in this work to be applied in a variety of real-world problems with imprecise measurement or channel assignment.
Independent component analysis aims to recover unknown components, as independent as possible, from their linear mixtures. This technique has been widely applied in many fields, such as data analysis, signal processing, and machine learning. In this paper, we present a novel boosting-based algorithm for independent component analysis. Our algorithm fills a gap in nonparametric independent component analysis by introducing maximum likelihood estimation. A variety of experiments validate its performance in comparison with many currently known algorithms.
One often wants to estimate statistical models where the probability density function is known only up to a multiplicative normalization constant. Typically, one then has to resort to Markov Chain Monte Carlo methods, or approximations of the normalization constant. Here, we propose that such models can be estimated by minimizing the expected squared distance between the gradient of the log-density given by the model and the gradient of the log-density of the observed data. While the estimation of the gradient of log-density function is, in principle, a very difficult non-parametric problem, we prove a surprising result that gives a simple formula for this objective function. The density function of the observed data does not appear in this formula, which simplifies to a sample average of a sum of some derivatives of the log-density given by the model. The validity of the method is demonstrated on multivariate Gaussian and independent component analysis models, and by estimating an overcomplete filter set for natural image data.
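The resulting objective is the standard score-matching criterion as it appears in the literature; writing the model score as $\psi(x;\theta) = \nabla_x \log p(x;\theta)$, the objective reduces, up to an additive constant, to a sample average that involves only the model:

$$J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\sum_{i} \left( \partial_i \psi_i(x;\theta) + \tfrac{1}{2}\, \psi_i(x;\theta)^2 \right)\right] + \text{const},$$

where $\partial_i \psi_i$ denotes the $i$-th diagonal element of the Hessian of the model log-density.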
We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field.
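As a hedged, toy illustration of the principle (a one-dimensional unnormalized Gaussian model with the log-normalizer treated as a free parameter, not one of the models studied in the paper), the following sketch discriminates data from artificially generated Gaussian noise with logistic regression, using the log-density ratio as the regression nonlinearity:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from scipy.special import log_expit   # log(sigmoid(x)), numerically stable

# Toy unnormalized model: log p_m(x; theta) = -0.5 * precision * x^2 + c,
# where the log-normalizer c is treated as just another parameter.
rng = np.random.default_rng(0)
x_data = rng.normal(0.0, 1.0, size=5000)     # "observed" data
x_noise = rng.normal(0.0, 2.0, size=5000)    # artificial noise, same sample size

def log_model(x, theta):
    precision, c = theta
    return -0.5 * precision * x**2 + c

log_noise = lambda x: norm.logpdf(x, 0.0, 2.0)

def nce_loss(theta):
    # Logistic regression with data labelled 1 and noise labelled 0;
    # the logit is the log-ratio of model density to noise density.
    g_data = log_model(x_data, theta) - log_noise(x_data)
    g_noise = log_model(x_noise, theta) - log_noise(x_noise)
    return -(log_expit(g_data).mean() + log_expit(-g_noise).mean())

theta_hat = minimize(nce_loss, x0=np.array([0.5, 0.0])).x
print(theta_hat)   # precision near 1, c near -0.5*log(2*pi) ~ -0.92
```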
Nonlinear independent component analysis (nICA) aims at recovering statistically independent latent components that have been mixed by an unknown nonlinear function. Central to nICA is the identifiability of the latent components, which had been elusive until very recently. Specifically, Hyvärinen et al. have shown that, under a generalized contrastive learning (GCL) formulation, the nonlinearly mixed latent components are identifiable (up to often inconsequential ambiguities), provided that the latent components are independent conditioned on a certain auxiliary variable. The GCL-based identifiability of nICA is elegant and establishes interesting connections between nICA and popular unsupervised/self-supervised learning paradigms in representation learning, causal learning, and factor disentanglement. However, existing identifiability analyses rest on the availability of unlimited samples and the use of ideal universal function learners, which creates a non-negligible gap between theory and practice. Closing this gap is a nontrivial challenge, as there is no established "textbook" routine for finite-sample analysis of such unsupervised problems. This work puts forth a finite-sample identifiability analysis of GCL-based nICA. Our analytical framework judiciously combines the properties of the GCL loss function, statistical generalization analysis, and numerical differentiation. The framework also takes the approximation error of the learned function into consideration and reveals an intuitive trade-off between the complexity and the expressiveness of the employed function learner. Numerical experiments are used to validate the theorems.
Blind source separation (BSS) algorithms are unsupervised methods that, by allowing physically meaningful decompositions of the data, are a cornerstone of hyperspectral data analysis. The BSS problem is ill-posed, and its solution requires efficient regularization schemes to better distinguish between the sources and to yield interpretable solutions. To this end, we investigate a semi-supervised source separation approach in which a projected alternating least-squares algorithm is combined with a learning-based regularization scheme. In this article, we focus on constraining the mixing matrix to belong to a learned manifold by using generative models. Altogether, we show that this leads to an innovative BSS algorithm with improved accuracy that provides physically interpretable solutions. The method is tested on realistic hyperspectral astrophysical data in challenging scenarios involving strong noise, highly correlated spectra, and unbalanced sources. The results highlight the significant benefit of the learned prior in reducing the leakage between the sources, which enables an overall better decomposition.
Accurate diagnosis of sleep disorders is critical for clinical assessment and treatment. Polysomnography (PSG) has long been used to detect various sleep disorders. In this study, electrocardiography (ECG) and electromyography (EMG) were used to recognize breathing- and movement-related sleep disorders. Bio-signal processing was performed by extracting EMG features and by developing an iterative pulse peak detection algorithm based on the synchrosqueezed wavelet transform (SSWT) for reliable extraction of heart-rate- and breathing-related features from the ECG. A deep learning framework was designed to fuse the EMG and ECG features. The framework was used to classify four groups: healthy subjects, patients with obstructive sleep apnea (OSA), patients with restless legs syndrome (RLS), and patients with both OSA and RLS. The proposed deep learning framework produced a mean accuracy of 72% and a weighted F1 score of 0.57 across subjects for our formulated four-class problem.
As with many machine learning models, both the accuracy and the speed of cluster weighted models (CWMs) can be hampered by high-dimensional data, which has led previous works to adopt parsimonious techniques to reduce the effect of the "curse of dimensionality" on mixture models. In this work, we review the background of cluster weighted models (CWMs). We further show that parsimonious techniques alone are not sufficient for mixture models to thrive in the presence of huge high-dimensional data. We discuss a heuristic for detecting hidden components by choosing the initial values of the location parameters using the defaults in the "FlexCWM" R package. We introduce a dimensionality reduction technique, t-distributed stochastic neighbor embedding (t-SNE), to enhance parsimonious CWMs in high-dimensional spaces. CWMs were originally designed for regression, but for classification purposes all multi-class variables are log-transformed with some added noise. The parameters of the model are obtained via the expectation-maximization algorithm. The effectiveness of the discussed techniques is demonstrated using real datasets from different fields.
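As a hedged sketch of the general idea of embedding high-dimensional data with t-SNE before fitting a mixture model: CWMs themselves are not available in scikit-learn, so a Gaussian mixture fit by EM stands in for the mixture step, and the digits data is an arbitrary stand-in for a real high-dimensional dataset.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture

# Embed 64-dimensional inputs in 2-D with t-SNE, then fit a mixture model by EM.
X, y = load_digits(return_X_y=True)
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

gmm = GaussianMixture(n_components=10, random_state=0).fit(X_2d)
labels = gmm.predict(X_2d)      # cluster assignments in the embedded space
print(labels[:20])
```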