Just as differences in the fitness of the cardiovascular and musculoskeletal systems predict an individual's athletic ability, differences in how the same brain region encodes information across individuals can explain differences in their behavior. However, when studying how the brain encodes information, researchers choose different neuroimaging tasks (e.g., language or motor tasks), which may rely on processing different types of information and may engage different brain regions. We hypothesize that individual differences in how information is encoded in the brain are task-specific and predict different behavioral measures. We propose a framework that uses encoding models to identify individual differences in brain encoding and to test whether these differences can predict behavior. We evaluate our framework using task fMRI data. Our results show that individual differences revealed by encoding models are a powerful tool for predicting behavior, and that researchers should optimize their choice of task and encoding model for the behavior they are interested in.
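As a hedged illustration of the kind of encoding-model analysis described above (the feature matrix, voxel counts, and regression settings below are placeholder assumptions, not the authors' pipeline), one can fit a per-subject voxelwise regression and treat the cross-validated prediction profile as that subject's encoding fingerprint, which can then be related to behavioral measures:

```python
# Minimal sketch of a voxelwise encoding model fit per subject (hypothetical data shapes).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_fingerprint(X, Y, alpha=1.0, n_splits=5):
    """Return per-voxel cross-validated prediction correlations for one subject.

    X: (n_timepoints, n_features) stimulus/task features
    Y: (n_timepoints, n_voxels) fMRI responses
    """
    scores = np.zeros(Y.shape[1])
    for train, test in KFold(n_splits=n_splits).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # accumulate the correlation between predicted and observed responses per voxel
        for v in range(Y.shape[1]):
            scores[v] += np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] / n_splits
    return scores  # one "encoding profile" per subject

# Individual differences: stack each subject's profile and relate the profiles to a
# behavioral score, e.g. with another regression across subjects.
rng = np.random.default_rng(0)
profiles = np.stack([encoding_fingerprint(rng.normal(size=(200, 10)),
                                          rng.normal(size=(200, 50)))
                     for _ in range(8)])
print(profiles.shape)  # (n_subjects, n_voxels)
```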
Research in many fields has shown that transfer learning (TL) is well suited to improving the performance of deep learning (DL) models on datasets with small numbers of samples. This empirical success has sparked interest in applying TL to cognitive decoding analyses with functional neuroimaging data. Here, we systematically evaluate TL for the decoding of cognitive states (e.g., viewing images of faces or houses) from whole-brain functional magnetic resonance imaging (fMRI) data. We first pretrain two DL architectures on a large public fMRI dataset and subsequently evaluate their performance in an independent experimental task and in a fully independent dataset. The pretrained models consistently achieve higher decoding accuracy, and generally require less training time and data, than model variants that were not pretrained, clearly underlining the benefits of pretraining. We show that these benefits arise because the pretrained models reuse many of their learned features when trained on new data, providing insight into the mechanisms behind the benefits of pretraining. However, when interpreting the decoding decisions of the pretrained models, we also identify distinct challenges for whole-brain cognitive decoding with DL models, as these models have learned to exploit the fMRI data in unforeseen and counterintuitive ways to identify individual cognitive states.
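A minimal sketch of the pretrain-then-fine-tune idea, assuming hypothetical input sizes, state counts, and a toy target batch rather than the architectures or datasets used in the study:

```python
# Hedged sketch of pretraining and fine-tuning a small PyTorch decoder on fMRI features.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, n_voxels, n_states):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(n_voxels, 256), nn.ReLU(),
                                      nn.Linear(256, 64), nn.ReLU())
        self.head = nn.Linear(64, n_states)

    def forward(self, x):
        return self.head(self.features(x))

# 1) "Pretraining": train on a large source dataset (training loop not shown), keep weights.
pretrained = Decoder(n_voxels=10000, n_states=20)

# 2) Fine-tuning on a small target dataset: reuse the learned feature layers and replace
#    only the classification head for the new set of cognitive states.
target = Decoder(n_voxels=10000, n_states=4)
target.features.load_state_dict(pretrained.features.state_dict())
for p in target.features.parameters():
    p.requires_grad = False            # optionally freeze the reused features

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, target.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(32, 10000), torch.randint(0, 4, (32,))   # toy target batch
loss = criterion(target(x), y)
loss.backward()
optimizer.step()
```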
Neural language models (NLMs) have made tremendous progress in recent years, achieving impressive performance on a variety of language tasks. Building on this, research in neuroscience has begun to use NLMs to study neural activity in the human brain during language processing. However, many questions remain open about which factors determine a neural language model's ability to capture brain activity (its "brain score"). Here, we take a first step in this direction and examine the impact of test loss, training corpus, and model architecture (comparing GloVe, LSTM, GPT-2, and BERT) on the prediction of participants' fMRI timecourses. We find that (1) the untrained version of each model already explains a significant amount of signal in the brain by capturing the similarity of brain responses to identical words, with untrained LSTMs outperforming transformer-based models because they are less affected by contextual effects; (2) training NLP models improves brain scores in the same brain regions regardless of the model's architecture; (3) perplexity (test loss) is not a good predictor of brain score; and (4) the training data have a strong influence on the outcome; in particular, off-the-shelf models may lack the statistical power to detect brain activations. Overall, we outline the impact of model-training choices and suggest good practices for future studies aiming to explain the human language system using neural language models.
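A brain score of this kind is typically computed by regressing model activations onto fMRI responses and correlating held-out predictions with the measured signal; the sketch below uses synthetic activations and voxels and reflects the general recipe rather than this paper's exact pipeline:

```python
# Hedged sketch of a "brain score" computation on synthetic data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
activations = rng.normal(size=(500, 768))       # (n_samples, model hidden size)
fmri = rng.normal(size=(500, 100))              # (n_samples, n_voxels)

X_tr, X_te, Y_tr, Y_te = train_test_split(activations, fmri, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
pred = model.predict(X_te)

def columnwise_corr(a, b):
    # Pearson correlation between predicted and observed timecourses, per voxel.
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

print("brain score:", columnwise_corr(pred, Y_te).mean())
```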
Neuroimaging-based prediction methods for intelligence and cognitive abilities have seen a rapid development in literature. Among different neuroimaging modalities, prediction based on functional connectivity (FC) has shown great promise. Most literature has focused on prediction using static FC, but there are limited investigations on the merits of such analysis compared to prediction based on dynamic FC or region level functional magnetic resonance imaging (fMRI) time series that encode temporal variability. To account for the temporal dynamics in fMRI data, we propose a deep neural network involving bi-directional long short-term memory (bi-LSTM) approach that also incorporates feature selection mechanism. The proposed pipeline is implemented via an efficient GPU computation framework and applied to predict intelligence scores based on region level fMRI time series as well as dynamic FC. We compare the prediction performance for different intelligence measures based on static FC, dynamic FC, and region level time series acquired from the Adolescent Brain Cognitive Development (ABCD) study involving close to 7000 individuals. Our detailed analysis illustrates that static FC consistently has inferior prediction performance compared to region level time series or dynamic FC for unimodal rest and task fMRI experiments, and in almost all cases using a combination of task and rest features. In addition, the proposed bi-LSTM pipeline based on region level time series identifies several shared and differential important brain regions across task and rest fMRI experiments that drive intelligence prediction. A test-retest analysis of the selected features shows strong reliability across cross-validation folds. Given the large sample size from ABCD study, our results provide strong evidence that superior prediction of intelligence can be achieved by accounting for temporal variations in fMRI.
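A hedged sketch of the core architectural idea, a bidirectional LSTM over region-level time series with a crude learnable gate standing in for the feature selection mechanism; the shapes, layer sizes, and gating scheme are illustrative assumptions, not the study's implementation:

```python
# Hedged bi-LSTM regressor over region-level fMRI time series (toy shapes).
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    def __init__(self, n_regions, hidden=64):
        super().__init__()
        # learnable gate over input regions as a crude stand-in for feature selection
        self.region_gate = nn.Parameter(torch.ones(n_regions))
        self.lstm = nn.LSTM(input_size=n_regions, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (batch, time, n_regions)
        x = x * torch.sigmoid(self.region_gate)
        h, _ = self.lstm(x)
        return self.out(h[:, -1]).squeeze(-1)   # summary at the final time step

model = BiLSTMRegressor(n_regions=360)
x = torch.randn(8, 380, 360)                    # toy batch: 8 subjects, 380 TRs, 360 regions
y = torch.randn(8)                              # toy intelligence scores
loss = nn.MSELoss()(model(x), y)
loss.backward()
```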
How related are the representations learned by neural language models, translation models, and language tagging tasks? We answer this question by adapting an encoder-decoder transfer learning method from computer vision to investigate the structure among 100 different feature spaces extracted from the hidden representations of various networks trained on language tasks. This method reveals a low-dimensional structure in which language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings. We call this low-dimensional structure a language representation embedding because it encodes the relationships among the representations needed to process language for a variety of NLP tasks. We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded with fMRI. Furthermore, we find that the principal dimensions of this structure can be used to create a metric that highlights the brain's natural language processing hierarchy. This suggests that the embedding captures some part of the brain's natural language representational structure.
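As a rough, heavily simplified stand-in for the encoder-decoder transfer analysis (not the paper's method), one can measure how well each feature space linearly predicts every other space on shared stimuli and embed the resulting dissimilarities in a low-dimensional space; the toy feature spaces below are random placeholders:

```python
# Hedged sketch: pairwise linear predictivity between feature spaces, embedded with MDS.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
spaces = [rng.normal(size=(400, d)) for d in (50, 100, 200, 300)]   # toy feature spaces

n = len(spaces)
transfer = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        fit = Ridge(alpha=1.0).fit(spaces[i][:300], spaces[j][:300])
        pred = fit.predict(spaces[i][300:])
        transfer[i, j] = ((pred - spaces[j][300:]) ** 2).mean()     # lower = more related

np.fill_diagonal(transfer, 0.0)
dissim = (transfer + transfer.T) / 2
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
print(embedding.shape)                   # (n_spaces, 2) low-dimensional coordinates
```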
Building systems that achieve a deeper understanding of language is one of the central goals of natural language processing (NLP). Towards this goal, recent works have begun to train language models on narrative datasets which require extracting the most critical information by integrating across long contexts. However, it is still an open question whether these models are learning a deeper understanding of the text, or if the models are simply learning a heuristic to complete the task. This work investigates this further by turning to the one language processing system that truly understands complex language: the human brain. We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity. We further find that the improvements in brain alignment are larger for character names than for other discourse features, which indicates that these models are learning important narrative elements. Taken together, these results suggest that this type of training can indeed lead to deeper language understanding. These findings have consequences both for cognitive neuroscience by revealing some of the significant factors behind brain-NLP alignment, and for NLP by highlighting that understanding of long-range context can be improved beyond language modeling.
Pretrained language models that have been trained to predict the next word over billions of text documents have been shown to also significantly predict brain recordings of people comprehending language. Understanding the reasons behind the observed similarities between language in machines and language in the brain can lead to more insight into both systems. Recent works suggest that the prediction of the next word is a key mechanism that contributes to the alignment between the two. What is not yet understood is whether prediction of the next word is necessary for this observed alignment or simply sufficient, and whether there are other shared mechanisms or information that is similarly important. In this work, we take a first step towards a better understanding via two simple perturbations in a popular pretrained language model. The first perturbation is to improve the model's ability to predict the next word in the specific naturalistic stimulus text that the brain recordings correspond to. We show that this indeed improves the alignment with the brain recordings. However, this improved alignment may also be due to any improved word-level or multi-word level semantics for the specific world that is described by the stimulus narrative. We aim to disentangle the contribution of next word prediction and semantic knowledge via our second perturbation: scrambling the word order at inference time, which reduces the ability to predict the next word, but maintains any newly learned word-level semantics. By comparing the alignment with brain recordings of these differently perturbed models, we show that improvements in alignment with brain recordings are due to more than improvements in next word prediction and word-level semantics.
Inferring the meaning of a visual scene requires not only recognizing its constituent objects, but also forming a rich semantic characterization of how those objects interrelate. Here, we study the neural mechanisms underlying visuo-semantic transformation by applying modern computational techniques to a large-scale 7T fMRI dataset of human brain responses elicited by complex natural scenes. Using semantic embeddings obtained by applying a deep language model to human-generated scene descriptions, we identify a widely distributed network of brain regions that encode semantic scene descriptions. Importantly, these semantic embeddings explain activity in these regions better than traditional object category labels. Moreover, they are effective predictors of activity even though participants were not actively engaged in a semantic task, suggesting that visuo-semantic transformation is a default mode of vision. In support of this view, we show that highly accurate reconstructions of scene captions can be decoded directly from brain activity patterns. Finally, a recurrent convolutional neural network trained on the semantic embeddings further outperforms the semantic embeddings themselves in predicting brain activity, providing a mechanistic model of the brain's visuo-semantic transformation. Together, these experimental and computational results suggest that transforming visual input into rich semantic scene descriptions may be a central objective of the visual system, and that focusing on this new objective may lead to improved models of visual information processing in the human brain.
Language models have been shown to be very effective in predicting brain recordings of subjects experiencing complex language stimuli. For a deeper understanding of this alignment, it is important to understand the alignment between the detailed processing of linguistic information by the human brain versus language models. In NLP, linguistic probing tasks have revealed a hierarchy of information processing in neural language models that progresses from simple to complex with an increase in depth. On the other hand, in neuroscience, the strongest alignment with high-level language brain regions has consistently been observed in the middle layers. These findings leave an open question as to what linguistic information actually underlies the observed alignment between brains and language models. We investigate this question via a direct approach, in which we eliminate information related to specific linguistic properties in the language model representations and observe how this intervention affects the alignment with fMRI brain recordings obtained while participants listened to a story. We investigate a range of linguistic properties (surface, syntactic and semantic) and find that the elimination of each one results in a significant decrease in brain alignment across all layers of a language model. These findings provide direct evidence for the role of specific linguistic information in the alignment between brain and language models, and open new avenues for mapping the joint information processing in both systems.
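One simple way to implement such an elimination step, offered here as a generic linear-removal sketch rather than the paper's exact procedure, is to regress the representations on the property labels and keep only the residuals; the synthetic representations and the five-class property below are placeholders:

```python
# Hedged sketch: remove the linearly decodable part of a linguistic property from
# model representations by residualizing against the property labels.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 768))          # word/layer representations (toy)
prop = rng.integers(0, 5, size=1000)         # e.g. a 5-class syntactic property (toy)

labels = np.eye(prop.max() + 1)[prop]        # one-hot encode the property
removed = reps - LinearRegression().fit(labels, reps).predict(labels)

# 'removed' can then be fed back into the brain-alignment pipeline (e.g. a ridge
# regression onto fMRI responses) to test how much alignment the property accounted for.
print(removed.shape)
```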
Multi-view data refers to settings where the features are divided into feature sets, for example because they correspond to different sources. Stacked penalized logistic regression (StaPLR) is a recently introduced method that can be used for classification and for automatically selecting the views that are most important for prediction. We introduce an extension of this method to settings where the data have a hierarchical multi-view structure. We also introduce a new view importance measure for StaPLR, which allows us to compare the importance of views at any level of the hierarchy. We apply our extended StaPLR algorithm to Alzheimer's disease classification, where different MRI measures are derived from three scan types: structural MRI, diffusion-weighted MRI, and resting-state fMRI. StaPLR can identify which scan types and which MRI measures are most important for classification, and it outperforms elastic net regression in terms of classification performance.
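A simplified illustration of view stacking in the spirit of StaPLR (not the authors' implementation, which uses penalized and typically constrained meta-learning): one penalized logistic regression per view, a meta-level model over cross-validated view predictions, and meta coefficients read off as view importances. The three toy views stand in for the MRI modalities:

```python
# Hedged two-level stacking sketch with per-view base learners and a meta-learner.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200
views = {"structural": rng.normal(size=(n, 30)),
         "diffusion":  rng.normal(size=(n, 20)),
         "rest_fmri":  rng.normal(size=(n, 40))}
y = rng.integers(0, 2, size=n)

# Level 1: cross-validated class probabilities from each view-specific model.
Z = np.column_stack([
    cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                      cv=5, method="predict_proba")[:, 1]
    for X in views.values()])

# Level 2: meta-learner over the stacked view predictions.
meta = LogisticRegression(max_iter=1000).fit(Z, y)
for name, w in zip(views, meta.coef_[0]):
    print(f"view importance ({name}): {w:.3f}")
```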
Deep learning algorithms for predicting neuroimaging data have shown great promise in a variety of applications. Prior work has shown that deep learning models that exploit the 3D structure of the data can outperform standard machine learning on several learning tasks. However, most prior studies in this area have focused on neuroimaging data from adults. Within the Adolescent Brain Cognitive Development (ABCD) dataset, a large longitudinal development study, we examine structural MRI data to predict sex and to identify sex-related changes in brain structure. Results show that sex prediction accuracy is exceptionally high (>97%) with training epochs >200, and that this accuracy increases with age. The brain regions identified as the most discriminative for the task include primarily frontal areas and the temporal lobe. When evaluating changes in sex prediction over a two-year increase in age, a broader set of visual, cingulate, and insular regions is revealed. Our findings show a pattern of sex-related structural change that is evident even over a small age range. This suggests that it may be possible to study how the adolescent brain changes by examining how these changes relate to different behavioral and environmental factors.
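A minimal sketch of a 3D CNN classifier of the general kind used for such sex-prediction analyses; the input resolution, channel counts, and two-class head are illustrative assumptions, not the study's architecture:

```python
# Hedged sketch of a small 3D CNN for classifying structural MRI volumes.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, 1, D, H, W)
        return self.head(self.body(x).flatten(1))

model = Small3DCNN()
volumes = torch.randn(2, 1, 64, 64, 64)          # toy downsampled T1 volumes
logits = model(volumes)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1]))
loss.backward()
```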
Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms remain far from the language abilities of the human brain. Predictive coding theory offers a potential explanation for this discrepancy: while deep language algorithms are optimized to predict adjacent words, the human brain would be tuned to make long-range and hierarchical predictions. To test this hypothesis, we analyzed the fMRI brain signals of 304 subjects, each listening to 70 min of short stories. After confirming that the activations of deep language algorithms map onto brain signals, we show that enhancing these models with long-range forecast representations improves their brain mapping. The results further reveal a hierarchy of predictions in the brain, whereby frontal cortices forecast representations that are more abstract and more distant than those of temporal cortices. Overall, this study strengthens predictive coding theory and suggests a critical role of long-range and hierarchical predictions in natural language processing.
Pooling publicly-available MRI data from multiple sites makes it possible to assemble extensive groups of subjects, increase statistical power, and promote data reuse with machine learning techniques. The harmonization of multicenter data is necessary to reduce the confounding effect associated with non-biological sources of variability in the data. However, when applied to the entire dataset before machine learning, harmonization leads to data leakage, because information outside the training set may affect model building and may lead to falsely overestimated performance. We propose 1) a measurement of the efficacy of data harmonization; 2) a harmonizer transformer, i.e., an implementation of the ComBat harmonization that allows its encapsulation among the preprocessing steps of a machine learning pipeline, avoiding data leakage. We tested these tools using brain T1-weighted MRI data from 1740 healthy subjects acquired at 36 sites. After harmonization, the site effect was removed or reduced, and we measured the data leakage effect in predicting individual age from MRI data, highlighting that introducing the harmonizer transformer into a machine learning pipeline allows data leakage to be avoided.
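A skeleton of how a harmonization step can be wrapped as a scikit-learn transformer so that it is fit on training folds only inside a Pipeline, which is what prevents the leakage described above. Per-site mean-centering is used here as a simple stand-in for the actual ComBat model, and carrying the site label as the last column of X is an implementation assumption, not the authors' interface:

```python
# Hedged skeleton of a "harmonizer transformer" inside a leakage-free pipeline.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

class SiteCenterer(BaseEstimator, TransformerMixin):
    """Remove per-site feature means estimated on the training data only."""

    def fit(self, X, y=None):
        sites, feats = X[:, -1].astype(int), X[:, :-1]
        self.global_mean_ = feats.mean(axis=0)
        self.site_means_ = {s: feats[sites == s].mean(axis=0) for s in np.unique(sites)}
        return self

    def transform(self, X):
        sites, feats = X[:, -1].astype(int), X[:, :-1]
        shift = np.array([self.site_means_.get(s, self.global_mean_) for s in sites])
        return feats - shift + self.global_mean_

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 50))
site = rng.integers(0, 4, size=300)
age = rng.uniform(20, 80, size=300)
X = np.column_stack([features + site[:, None] * 0.5, site])   # inject a toy site effect

pipe = Pipeline([("harmonize", SiteCenterer()), ("ridge", Ridge(alpha=10.0))])
print(cross_val_score(pipe, X, age, cv=5).mean())              # leakage-free evaluation
```

Because the harmonizer is a pipeline step, cross_val_score refits it on each training fold, so no statistics from test subjects ever influence model building.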
Scientists often use observational time series data to study complex natural processes, from climate change to civil conflict to brain activity. But regression analyses of these data typically assume simplistic dynamics. Recent advances in deep learning have produced striking improvements in how well models of complex processes perform, from speech comprehension to nuclear physics to competitive gaming, yet deep learning is generally not used for scientific analysis. Here, we bridge this gap by showing that deep learning can be used not merely to imitate, but to analyze complex processes, providing flexible function approximation while preserving interpretability. Our approach, continuous-time deconvolutional regressive neural networks (CDRNNs), relaxes standard simplifying assumptions (e.g., linearity, stationarity, and homoscedasticity) that are implausible for many natural systems and can critically affect the interpretation of data. We evaluate CDRNNs on human language processing, a domain with complex continuous dynamics. We demonstrate substantial improvements in predictive likelihood on behavioral and neuroimaging data, and we show that CDRNNs can flexibly discover novel patterns in exploratory analyses, provide robust control of possible confounds in confirmatory analyses, and open up research questions that would otherwise be difficult to study with observational data.
Multimodal learning, especially large-scale multimodal pretraining, has developed rapidly over the past few years and has brought some of the greatest advances in artificial intelligence (AI). Despite its effectiveness, understanding the underlying mechanisms of multimodal pretrained models remains a grand challenge. Revealing the explainability of such models could enable breakthroughs in novel learning paradigms in the AI field. To this end, given the multimodal nature of the human brain, we propose to explore the explainability of multimodal learning models with the aid of non-invasive brain imaging technologies such as functional magnetic resonance imaging (fMRI). Concretely, we first present a newly designed multimodal foundation model pretrained on 15 million image-text pairs, which shows strong multimodal understanding and generalization abilities across a variety of cognitive downstream tasks. Furthermore, from the perspective of neural encoding (based on our foundation model), we find that both the vision and the language encoders trained multimodally are more brain-like than unimodal ones. In particular, we identify a number of brain regions in which multimodally trained encoders show better neural encoding performance. This is consistent with findings from existing studies on multisensory integration in the brain. Therefore, we argue that multimodal foundation models are more suitable tools for neuroscientists to study the multimodal signal processing mechanisms of the human brain. Our findings also demonstrate the potential of multimodal foundation models as ideal computational simulators to promote research at the intersection of the brain and AI.
Mapping the connectome of the human brain using structural or functional connectivity has become one of the most pervasive paradigms for neuroimaging analysis. Recently, Graph Neural Networks (GNNs) motivated from geometric deep learning have attracted broad interest due to their established power for modeling complex networked data. Despite their superior performance in many fields, there has not yet been a systematic study of how to design effective GNNs for brain network analysis. To bridge this gap, we present BrainGB, a benchmark for brain network analysis with GNNs. BrainGB standardizes the process by (1) summarizing brain network construction pipelines for both functional and structural neuroimaging modalities and (2) modularizing the implementation of GNN designs. We conduct extensive experiments on datasets across cohorts and modalities and recommend a set of general recipes for effective GNN designs on brain networks. To support open and reproducible research on GNN-based brain network analysis, we host the BrainGB website at https://braingb.us with models, tutorials, examples, as well as an out-of-the-box Python package. We hope that this work will provide useful empirical evidence and offer insights for future research in this novel and promising direction.
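A minimal graph-convolution sketch for brain-network classification, written in plain PyTorch rather than with the BrainGB package itself; the adjacency normalization, identity node features, and mean readout are common default choices assumed for illustration:

```python
# Hedged sketch of a tiny GCN over a connectivity-derived adjacency matrix.
import torch
import torch.nn as nn

class TinyBrainGCN(nn.Module):
    def __init__(self, in_dim, hidden=32, n_classes=2):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, adj, x):                  # adj: (nodes, nodes), x: (nodes, in_dim)
        # symmetric normalization of the adjacency matrix (with self-loops)
        a = adj + torch.eye(adj.shape[0])
        d = a.sum(1).rsqrt().diag()
        a = d @ a @ d
        h = torch.relu(self.w1(a @ x))
        h = torch.relu(self.w2(a @ h))
        return self.head(h.mean(0))             # mean readout over nodes gives graph logits

n_nodes = 100
adj = torch.rand(n_nodes, n_nodes); adj = (adj + adj.T) / 2   # toy functional connectivity
x = torch.eye(n_nodes)                                         # identity node features
logits = TinyBrainGCN(in_dim=n_nodes)(adj, x)
print(logits.shape)
```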
Advances in neural recording present increasing opportunities to study neural activity in unprecedented detail. Latent variable models (LVMs) are promising tools for analyzing this rich activity across diverse neural systems and behaviors, as LVMs do not depend on known relationships between the activity and external experimental variables. However, progress with LVMs for neuronal population activity is currently impeded by a lack of standardization, resulting in methods being developed and compared in an ad hoc manner. To coordinate these modeling efforts, we introduce a benchmark suite for latent variable modeling of neural population activity. We curate four datasets of neural spiking activity from cognitive, sensory, and motor areas to promote models that apply to the wide variety of activity seen across these areas. We identify unsupervised evaluation as a common framework for evaluating models across the datasets, and apply several baselines that demonstrate the diversity of the benchmark. We release this benchmark for evaluation at http://neurallatents.github.io.
Diffusion tensor imaging (DTI) has been used to study the effects of neurodegenerative diseases on neural pathways, which may lead to more reliable and earlier diagnosis of these diseases as well as a better understanding of how they affect the brain. We introduce an intelligent visual analytics system for studying patient groups based on labeled DTI fiber tract data and corresponding statistics. The system's AI-augmented interface guides the user through an organized and holistic analysis space, including the statistical feature space, the physical space, and the space of patients across different groups. We use a custom machine learning pipeline to help narrow down this large analysis space, which is then explored pragmatically through a series of linked visualizations. We present several case studies using real data from the research database of the Parkinson's Progression Markers Initiative.
Background: Although convolutional neural networks (CNNs) achieve high diagnostic accuracy for detecting Alzheimer's disease (AD) dementia based on magnetic resonance imaging (MRI) scans, they are not yet applied in clinical routine. One important reason for this is a lack of model comprehensibility. Recently developed visualization methods for deriving CNN relevance maps may help to fill this gap. We investigated whether models with higher accuracy also rely more on discriminative brain regions predefined by prior knowledge. Methods: We trained a CNN for the detection of AD in N = 663 T1-weighted MRI scans of patients with dementia and amnestic mild cognitive impairment (MCI), and verified the accuracy of the models via cross-validation and in three independent samples comprising in total N = 1655 cases. We evaluated the association between relevance scores and hippocampus volume to validate the clinical utility of this approach. To improve model comprehensibility, we implemented an interactive visualization of 3D CNN relevance maps. Results: Across the three independent datasets, group separation showed high accuracy for AD dementia versus controls (AUC ≥ 0.92) and moderate accuracy for MCI versus controls (AUC ≈ 0.75). Relevance maps indicated that hippocampal atrophy was considered the most informative factor for AD detection, with additional contributions from atrophy in other cortical and subcortical regions. Relevance scores within the hippocampus were highly correlated with hippocampal volume (Pearson's r ≈ -0.86, p < 0.001). Conclusion: The relevance maps highlighted atrophy in the regions that we had hypothesized a priori. This strengthens the comprehensibility of CNN models, which were trained in a purely data-driven manner based on the scans and diagnosis labels.
Recent advances in multimodal training use textual descriptions to significantly enhance machine understanding of images and videos. However, it remains unclear to what extent language can fully capture sensory experiences across different modalities. A well-established approach to characterizing sensory experiences relies on similarity judgments, namely, the degree to which people perceive two distinct stimuli as similar. We explore the relation between human similarity judgments and language in a series of large-scale behavioral studies (n = 1,823 participants) across three modalities (images, audio, and video) and two types of text descriptors: simple word tags and free-text captions. In doing so, we introduce a novel adaptive pipeline for tag mining that is both efficient and domain-general. We show that prediction pipelines based on text descriptors exhibit excellent performance, and we compare them against 611 baseline models based on vision, audio, and video processing architectures. We further show that the degree to which text descriptors and models predict human similarity varies across and within modalities. Taken together, these studies illustrate the value of integrating machine learning and cognitive science approaches to better understand the similarities and differences between human and machine representations. We present an interactive visualization at https://words-are-are-all-you-need.s3.amazonaws.com/index.html for exploring the similarity between stimuli as experienced by humans and the different methods reported in this paper.
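A hedged sketch of the general descriptor-based prediction idea: embed captions (here with TF-IDF as a weak stand-in for a stronger text model), score stimulus pairs by cosine similarity, and compare against human judgments with Spearman's rho; the captions and judgment values below are toy placeholders:

```python
# Hedged sketch: predict pairwise similarity from text descriptors and evaluate it.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

captions = ["a dog running on a beach",
            "a puppy playing in the sand",
            "a red car parked on a street",
            "an orchestra performing in a hall"]
human = np.array([0.9, 0.2, 0.1, 0.15, 0.1, 0.3])   # toy judgments for the 6 pairs

emb = TfidfVectorizer().fit_transform(captions)
sim = cosine_similarity(emb)
pairs = [sim[i, j] for i in range(len(captions)) for j in range(i + 1, len(captions))]

print("Spearman rho:", spearmanr(pairs, human).correlation)
```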