A tractogram is a virtual representation of the brain white matter. It is composed of millions of virtual fibers, encoded as 3D polylines, which approximate the white matter axonal pathways. To date, tractograms are the most accurate white matter representation and thus are used for tasks like presurgical planning and investigations of neuroplasticity, brain disorders, or brain networks. However, it is a well-known issue that a large portion of tractogram fibers is not anatomically plausible and can be considered artifacts of the tracking procedure. With Verifyber, we tackle the problem of filtering out such non-plausible fibers using a novel fully-supervised learning approach. Unlike other approaches based on signal reconstruction and/or brain topology regularization, we guide our method with the existing anatomical knowledge of the white matter. Using tractograms annotated according to anatomical principles, we train our model, Verifyber, to classify fibers as either anatomically plausible or non-plausible. The proposed Verifyber model is an original Geometric Deep Learning method that can deal with variable-size fibers while being invariant to fiber orientation. Our model considers each fiber as a graph of points, and by learning features of the edges between consecutive points via the proposed sequence Edge Convolution, it can capture the underlying anatomical properties. The resulting filtering is highly accurate and robust across an extensive set of experiments, and fast: with a 12GB GPU, filtering a tractogram of 1M fibers requires less than a minute. Verifyber implementation and trained models are available at https://github.com/FBK-NILab/verifyber.
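To make the idea of learning on edges between consecutive fiber points concrete, the following is a minimal sketch of a sequence-restricted edge convolution; the class name, layer sizes, and pooling choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SequenceEdgeConv(nn.Module):
    """Minimal sketch of an edge convolution restricted to consecutive
    points of a polyline (fiber). Layer sizes are illustrative."""
    def __init__(self, in_dim=3, out_dim=64):
        super().__init__()
        # MLP applied to each edge feature [x_i, x_{i+1} - x_i]
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, fiber):
        # fiber: (n_points, 3) ordered 3D coordinates of one streamline
        src, dst = fiber[:-1], fiber[1:]            # consecutive point pairs
        edge_feat = torch.cat([src, dst - src], 1)  # point + local direction
        h = self.mlp(edge_feat)                     # (n_points - 1, out_dim)
        # Symmetric (max) pooling over edges gives a fixed-size descriptor
        # for a fiber of arbitrary length
        return h.max(dim=0).values

# Hypothetical usage on a single fiber of arbitrary length
fiber = torch.randn(57, 3)
descriptor = SequenceEdgeConv()(fiber)  # (64,) fiber embedding
```

A classifier head over such descriptors would then separate plausible from non-plausible fibers; how orientation invariance is actually enforced is described in the paper, not in this sketch.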
White matter bundle segmentation is a cornerstone of modern tractography to study the brain's structural connectivity in domains such as neurological disorders, neurosurgery, and aging. In this study, we present FIESTA (FIber gEneration and bundle Segmentation in Tractography using Autoencoders), a reliable and robust, fully automated, and easily semi-automatically calibrated pipeline based on deep autoencoders that can dissect and fully populate white matter (WM) bundles. Our framework allows the transition from one anatomical bundle definition to another with marginal calibration time. This pipeline is built upon the FINTA, CINTA, and GESTA methods, which demonstrated how autoencoders can be used successfully for streamline filtering, bundling, and streamline generation in tractography. Our proposed method improves bundling coverage by recovering hard-to-track bundles with generative sampling through latent space seeding of the subject bundle and the atlas bundle. A latent space of streamlines is learned using autoencoder-based modeling combined with contrastive learning. Using an atlas of bundles in standard (MNI) space, our proposed method segments new tractograms using the autoencoder latent distance between each tractogram streamline and its closest neighbor bundle in the atlas of bundles. Intra-subject bundle reliability is improved by recovering hard-to-track streamlines, using the autoencoder to generate new streamlines that increase each bundle's spatial coverage while remaining anatomically meaningful. Results show that our method is more reliable than state-of-the-art automated virtual dissection methods such as RecoBundles, RecoBundlesX, TractSeg, White Matter Analysis, and XTRACT. Overall, these results show that our framework improves the practicality and usability of current state-of-the-art bundling frameworks.
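As a rough sketch of the latent-distance assignment step described above, assuming streamlines have already been embedded by a trained autoencoder; the function name and per-bundle thresholds are hypothetical, not the FIESTA implementation.

```python
import numpy as np

def segment_by_latent_distance(subject_z, atlas_z_by_bundle, thresholds):
    """Hypothetical sketch of latent-space bundle assignment: each subject
    streamline embedding is compared to its nearest neighbor in every atlas
    bundle and labeled with the closest bundle if within that bundle's
    distance threshold."""
    labels = []
    for z in subject_z:                       # z: latent vector of one streamline
        best_bundle, best_dist = None, np.inf
        for bundle, atlas_z in atlas_z_by_bundle.items():
            d = np.linalg.norm(atlas_z - z, axis=1).min()  # nearest atlas neighbor
            if d < best_dist:
                best_bundle, best_dist = bundle, d
        # keep only streamlines close enough to some atlas bundle
        labels.append(best_bundle if best_dist <= thresholds[best_bundle] else None)
    return labels
```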
White matter fiber clustering (WMFC) is an important strategy for white matter parcellation, enabling quantitative analysis of white matter connections in health and disease. WMFC is usually performed in an unsupervised manner, without labeled ground-truth data. Although widely used WMFC methods based on classical machine learning techniques have shown good performance, recent advances in deep learning point toward fast and effective WMFC. In this work, we propose a novel deep learning framework for WMFC, Deep Fiber Clustering (DFC), which solves the unsupervised clustering problem as a self-supervised learning task with a domain-specific pretext task of predicting pairwise fiber distances. This enables the learning of fiber representations that address a known challenge in WMFC, namely the sensitivity of clustering to the ordering of points along fibers. We design a novel network architecture that represents input fibers as point clouds and allows the incorporation of additional sources of input information from gray matter parcellation. Thus, DFC makes use of combined information about white matter fiber geometry and gray matter anatomy to improve the anatomical coherence of fiber clusters. In addition, DFC performs outlier removal in a natural way by rejecting fibers with low cluster assignment probability. We evaluate DFC on three independently acquired cohorts, including data from 220 individuals across genders, ages (young and elderly adults), and health conditions (healthy controls and multiple neuropsychiatric disorders). We compare DFC to several state-of-the-art WMFC algorithms. Experimental results demonstrate the superior performance of DFC in terms of cluster compactness, generalization ability, anatomical coherence, and computational efficiency.
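For illustration only, below is a sketch of the kind of point-order-insensitive fiber distance that such a pretext task could regress; the choice of the minimum average direct-flip distance and the function name are assumptions, not necessarily the exact metric used by DFC.

```python
import numpy as np

def pairwise_fiber_distance(fa, fb):
    """Sketch of a point-order-insensitive fiber distance (minimum average
    direct-flip distance). fa, fb: (n_points, 3) arrays resampled to the
    same number of points."""
    direct = np.linalg.norm(fa - fb, axis=1).mean()
    flipped = np.linalg.norm(fa - fb[::-1], axis=1).mean()
    return min(direct, flipped)

# During self-supervised training, embeddings z_a and z_b of two fibers would
# be learned so that ||z_a - z_b|| approximates pairwise_fiber_distance(fa, fb).
```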
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning. Overcoming the disruption of previously acquired knowledge, a drawback affecting deep learning models that goes by the name of catastrophic forgetting, is a hard challenge. Currently, deep learning methods can attain impressive results when the modeled data do not undergo a considerable distributional shift in subsequent learning sessions, but whenever we expose such systems to this incremental setting, performance drops very quickly. Overcoming this limitation is fundamental: first, it would allow us to build truly intelligent systems exhibiting both stability and plasticity; secondly, it would allow us to avoid the onerous cost of retraining these architectures from scratch on the newly updated data. In this thesis, we tackle the problem from multiple directions. In a first study, we show that in rehearsal-based techniques (systems that use a memory buffer), the quantity of data stored in the rehearsal buffer is a more important factor than the quality of the data. Secondly, we present one of the early works on incremental learning with ViT architectures, comparing functional, weight, and attention regularization approaches, and propose a novel, effective asymmetric loss. Finally, we present a study on pretraining and how it affects performance in Continual Learning, raising some questions about the actual progress of the field. We close with some future directions and remarks.
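For readers unfamiliar with rehearsal-based methods, the following is a minimal sketch of a reservoir-sampling memory buffer of the kind referred to above; the class name and capacity are illustrative and this is not the thesis' implementation.

```python
import random

class RehearsalBuffer:
    """Toy rehearsal memory: reservoir sampling keeps a bounded, approximately
    uniform sample of all examples seen across tasks."""
    def __init__(self, capacity=500, seed=0):
        self.capacity = capacity
        self.buffer = []          # list of (example, label) pairs
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example, label):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((example, label))
        else:
            # replace a stored item with probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = (example, label)

    def sample(self, batch_size):
        # replayed alongside the current task's mini-batch during training
        k = min(batch_size, len(self.buffer))
        return self.rng.sample(self.buffer, k)
```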
Various structures in human physiology follow a specific morphology, often expressing complexity at very fine scales. Examples of such structures are the intrathoracic airways, retinal blood vessels, and hepatic blood vessels. Large collections of 2D and 3D images acquired with medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and optical coherence tomography (OCT) make it possible to observe the spatial arrangement of these structures. Segmentation of these structures in medical imaging is of great importance, since the analysis of the structures provides insights into disease diagnosis, treatment planning, and prognosis. Manually labeling extensive data is time-consuming and error-prone for radiologists. As a result, over the past two decades, automated or semi-automated computational models have become a popular research area in medical imaging, and many computational models have been developed to date. In this survey, we aim to provide a comprehensive review of the currently publicly available datasets, segmentation algorithms, and evaluation metrics. In addition, current challenges and future research directions are discussed.
Diffusion MRI tractography is an advanced imaging technique that enables in vivo mapping of the brain's white matter connections. White matter parcellation classifies tractography streamlines into clusters or anatomically meaningful regions, enabling quantification and visualization of whole-brain tractography. Currently, most parcellation methods focus on the deep white matter (DWM), while fewer methods address the superficial white matter (SWM) due to its complexity. We propose a novel two-stage deep-learning-based framework, Superficial White Matter Analysis (SupWMA), which performs efficient and consistent parcellation of 198 SWM clusters from whole-brain tractography. A point-cloud-based network is adapted to our SWM parcellation task, and supervised contrastive learning enables more discriminative representations between plausible streamlines and outliers of SWM. We train our model on a large-scale tractography dataset including streamline samples from labeled SWM clusters and anatomically implausible streamlines, and we test it on six independently acquired datasets of different ages and health conditions (including neonates and patients with space-occupying brain tumors). Compared with several state-of-the-art methods, SupWMA obtains highly consistent and accurate SWM parcellation results on all datasets, generalizing well across the lifespan in health and disease. In addition, the computational speed of SupWMA is much faster than that of other methods.
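As a hedged illustration of the supervised contrastive learning component mentioned above, the sketch below implements a standard supervised contrastive loss over streamline embeddings; the temperature value and normalization details are assumptions rather than SupWMA's exact formulation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.07):
    """Embeddings sharing a cluster label are pulled together, all others
    pushed apart. z: (N, d) embeddings, labels: (N,) integer cluster labels."""
    z = F.normalize(z, dim=1)                      # unit-norm embeddings
    sim = z @ z.t() / tau                          # cosine similarity / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # log-probability of each non-self pair under a softmax over the anchor's row
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, -1e9), dim=1, keepdim=True)
    # average over each anchor's positives; anchors without positives are skipped
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(1) / pos_counts
    return loss[pos_mask.any(1)].mean()
```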
Like fingerprints, cortical folding patterns are unique to each brain, even though they follow a general species-specific organization. Some folding patterns have been linked with neurodevelopmental disorders. However, due to the high inter-individual variability, the identification of rare folding patterns that could become biomarkers remains a very complex task. This paper proposes a novel unsupervised deep learning approach to identify rare folding patterns and assess the degree of deviation that can be detected. To this end, we preprocess the brain MR images to focus the learning on the folding morphology and train a beta-VAE to model the inter-individual variability of the folding. We compare the detection power of the latent space and of the reconstruction errors, using synthetic benchmarks and one actual rare configuration related to the central sulcus. Finally, we assess the generalization of our method on a developmental anomaly located in another region. Our results suggest that this method enables the encoding of relevant folding characteristics, which can be highlighted and better interpreted thanks to the generative power of the beta-VAE. The latent space and the reconstruction errors bring complementary information and enable the identification of rare patterns of different natures. The method generalizes well to a different region on another dataset. Code is available at https://github.com/neurospin-projects/2022_lguillon_rare_folding_detection.
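A minimal sketch of the two complementary detection signals compared above (latent-space distance and reconstruction error) is given below, assuming a trained beta-VAE; the function names and the use of a latent centroid are placeholders, not the paper's exact scoring.

```python
import numpy as np

def rarity_scores(x, encoder_mu, decoder, train_mu):
    """Two anomaly signals for one preprocessed folding image x, given a
    trained encoder/decoder (hypothetical callables) and the latent posterior
    means of the training set train_mu with shape (n_train, d)."""
    z = encoder_mu(x)                         # posterior mean for this subject
    recon = decoder(z)
    recon_error = np.mean((x - recon) ** 2)   # reconstruction-based score
    center = train_mu.mean(axis=0)            # latent-space-based score:
    latent_dist = np.linalg.norm(z - center)  # distance to the training centroid
    return latent_dist, recon_error
```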
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
We introduce a novel neural network architecture, the Spectral Encoder for Sensor Independence (SEnSeI), by which multiple multispectral instruments, each with different combinations of spectral bands, can be used to train generalized deep learning models. We focus on the problem of cloud masking, using several pre-existing datasets and a new, freely available dataset for Sentinel-2. Our model achieves state-of-the-art performance on the satellites it was trained on (Sentinel-2 and Landsat 8) and is able to extrapolate to sensors it has not seen during training, such as Landsat 7, PerúSAT-1, and Sentinel-3 SLSTR. Model performance is shown to improve when multiple satellites are used in training, approaching or exceeding the performance of specialized, single-sensor models. This work is motivated by the fact that the remote sensing community has access to data taken with a huge variety of sensors. This has inevitably led to labeling efforts being undertaken separately for different sensors, which limits the performance of deep learning models, given that they need huge training sets to perform optimally. Sensor independence can enable deep learning models to use multiple datasets for training simultaneously, improving performance and making them more widely applicable. This may lead to deep learning approaches being used more frequently in on-board applications and in ground-segment data processing, which typically require models to be ready at or shortly after launch.
Graph classification is an important area in both modern research and industry. Multiple applications, especially in chemistry and novel drug discovery, encourage the rapid development of machine learning models in this area. To keep up with the pace of new research, proper experimental design, fair evaluation, and independent benchmarks are essential. The design of strong baselines is an indispensable element of such works. In this thesis, we explore multiple approaches to graph classification. We focus on Graph Neural Networks (GNNs), which have emerged as the de facto standard deep learning technique for graph representation learning. Classical approaches, such as graph descriptors and molecular fingerprints, are also addressed. We design a fair experimental evaluation protocol and choose a proper collection of datasets. This allows us to perform numerous experiments and rigorously analyze modern approaches. We arrive at many conclusions that shed new light on the performance and quality of novel algorithms. We investigate the application of the Jumping Knowledge GNN architecture to graph classification, which proves to be an efficient tool for improving base graph neural network architectures. Multiple improvements to baseline models are also proposed and experimentally verified, which constitutes an important contribution to the field of fair model comparison.
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining the decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis, which provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem it was conceived for. Finally, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
The fields of social VR, performance capture, and virtual try-on often face the challenge of faithfully reproducing real garments in the virtual world. One key task is recovering the intrinsic garment shape, free of the deformations caused by fabric properties, physical forces, and contact with the body. We propose to use a realistic yet compact garment description to facilitate intrinsic garment shape estimation. Another major challenge in this domain is the diversity of shapes and designs. The most common approach in 3D garment deep learning is to build specialized models for individual garments or garment types. We argue that building a unified model for various garment designs has the benefit of generalization to novel garment types, hence covering a larger design domain than individual models. We introduce NeuralTailor, a novel architecture based on point-level attention for set regression with variable cardinality, and apply it to the task of reconstructing 2D garment sewing patterns from 3D point clouds of garment models. Our experiments show that NeuralTailor successfully reconstructs sewing patterns and generalizes to garment types with pattern topologies unseen during training.
Marine ecosystems and their fish habitats are becoming increasingly important due to their integral role in providing a valuable food source and conservation outcomes. Because of their remote and hard-to-access nature, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data that cannot be efficiently analyzed by current manual processing methods, which involve human observers. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring remains largely unexplored. In this paper, we provide a tutorial that covers the key concepts of DL and helps readers gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who want to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving to facilitate their research efforts. At the same time, it is suitable for computer scientists who want to survey state-of-the-art DL-based methods for fish habitat monitoring.
Even though machine learning algorithms already play a significant role in data science, many current methods pose unrealistic assumptions on the input data. Such methods are hard to apply because of incompatible data formats, or because of heterogeneous, hierarchical, or entirely missing data fragments in the dataset. As a solution, we propose a versatile, unified framework for sample representation, model definition, and training, called HMill. We review in depth the multiple instance learning paradigm of machine learning that the framework builds on and extends. To theoretically justify the design of key components of HMill, we show an extension of the universal approximation theorem to the set of all functions realized by models implemented in the framework. The text also contains a detailed discussion of technical and performance improvements in our implementation, which will be released for download under the MIT license. The main asset of the framework is its flexibility, which makes it possible to model different real-world data sources with the same tools. In addition to the standard setting in which a set of attributes is observed for each object individually, we explain how message-passing inference in graphs representing whole systems of objects can be implemented in the framework. To support our claims, we solve three different problems from the cybersecurity domain using the framework. The first use case concerns IoT device identification from raw network observations. In the second problem, we study how snapshots of an operating system represented as directed graphs can be used to classify malicious binary files. The last example is the task of domain blacklist extension through modeling interactions between entities in the network. In all three problems, the solution based on the proposed framework achieves performance comparable to specialized approaches.
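As a toy illustration of the hierarchical multiple-instance representation underlying HMill, the sketch below embeds a bag of items with a permutation-invariant aggregation and nests the operation for a bag of bags; the weights are random placeholders, not a trained HMill model, and the packet/flow example is hypothetical.

```python
import numpy as np

def embed_bag(bag, hidden=8):
    """Map each item of a bag through a small feed-forward layer, then apply a
    permutation-invariant aggregation (mean and max) to obtain a fixed-size
    embedding of the whole bag. bag: (n_items, item_dim)."""
    rng = np.random.default_rng(bag.shape[1])     # deterministic placeholder weights
    W = rng.normal(size=(bag.shape[1], hidden))
    h = np.maximum(bag @ W, 0.0)                  # per-item features (ReLU)
    return np.concatenate([h.mean(axis=0), h.max(axis=0)])

# Nested structure: a sample is a bag of bags, e.g. a device described by
# several network flows, each flow described by a set of packets.
packets_per_flow = [np.random.rand(5, 4), np.random.rand(3, 4)]
flow_embeddings = np.stack([embed_bag(p) for p in packets_per_flow])
sample_embedding = embed_bag(flow_embeddings)     # embedding of the whole device
```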
Machine learning and computer vision techniques have grown rapidly in recent years due to their automation, suitability, and ability to generate astounding results. Hence, in this paper we survey the key studies published between 2014 and 2022, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature. We divide the surveyed studies according to the tissue of interest (hepatic parenchyma, hepatic tumors, or hepatic vessels), highlighting studies that tackled more than one task simultaneously. In addition, the machine learning algorithms are classified as either supervised or unsupervised, and further partitioned when the amount of work that falls under a given scheme is significant. Moreover, the different datasets and challenges found in the literature and on websites containing masks of the aforementioned tissues are discussed thoroughly, highlighting the organizers' original contributions and those of other researchers. Also, the metrics used extensively in the literature are mentioned in our review, stressing their relevance to the task at hand. Finally, we highlight the key challenges and future directions that innovative researchers should address, such as the scarcity of studies on vessel segmentation and why this absence needs to be dealt with sooner rather than later.
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Similar to the basic structure of the brain, deep learning algorithms consist of artificial neural networks that resemble the biological brain structure. Mimicking the human learning process through the senses, deep learning networks are fed with (sensory) data such as text, images, video, or sound. These networks outperform state-of-the-art methods on different tasks and, as a result, the whole field has seen exponential growth during the last years, exceeding 10,000 publications per year. For example, a search engine covering only publications in the medical field could return only a subset of all publications for the search term "deep learning" in Q3 2020, of which about 90% were from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain, and overviews of its subfields will likely become difficult to obtain in the near future as well. However, there are several review articles on deep learning that focus on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks such as object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of deep learning across different scientific disciplines. The categories (computer vision, language processing, medical informatics, and additional engineering fields) were chosen according to the underlying data sources (images, language, medical, mixed). In addition, we review the common architectures, methods, pros and cons, evaluations, challenges, and future directions of every subcategory.
Clinical diagnosis of the pediatric musculoskeletal system relies on the analysis of medical imaging examinations. In the medical image processing pipeline, semantic segmentation using deep learning algorithms enables the automatic generation of patient-specific three-dimensional anatomical models, which are crucial for morphological evaluation. However, the scarcity of pediatric imaging resources may result in reduced accuracy and generalization performance of individual deep segmentation models. In this study, we propose a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized on multiple datasets arising from distinct parts of the anatomy. Unlike previous approaches, we simultaneously consider multiple intensity domains and segmentation tasks to overcome the inherent scarcity of pediatric data while leveraging shared features between imaging datasets. To further improve generalization capabilities, we employ a transfer learning scheme from natural image classification, along with a multi-scale contrastive regularization aimed at promoting domain-specific clusters in the shared representation, and multi-joint anatomical priors to enforce anatomically consistent predictions. We evaluate our contributions on bone segmentation using three scarce pediatric imaging datasets of the ankle, knee, and shoulder joints. Our results demonstrate that the proposed approach outperforms individual, transfer, and shared segmentation schemes in Dice metric with statistically significant margins. The proposed model brings new perspectives on the intelligent use of imaging resources and better management of pediatric musculoskeletal disorders.