Accurately identifying and localizing abnormalities in radiology images is an indispensable part of clinical diagnosis and treatment planning. Building highly accurate predictive models for these tasks usually requires a large number of images manually annotated with labels and abnormality locations. In practice, however, such annotated data are expensive to acquire, especially data with location annotations. We therefore need methods that can work with only a small number of location annotations. To address this challenge, we propose a unified approach that performs disease identification and localization simultaneously on all images with the same underlying model. We demonstrate that our approach can effectively leverage both class information and limited location annotations, and that it significantly outperforms the comparative reference baselines on both the classification and localization tasks.
Highlights
• A data-driven lung nodule segmentation method without involving shape hypotheses.
• Two-branch convolutional neural networks extract both 3D and multi-scale 2D features.
• A novel central pooling layer is proposed for feature selection.
• We propose a weighted sampling method to solve the imbalanced training label problem.
• The method shows strong performance for segmenting juxta-pleural nodules.
Abstract Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon with a novel central pooling layer that retains much of the information at the voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling scheme to facilitate model training, where training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset, including 893 nodules, and on an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance, with average dice scores of 82.15% and 80.02% on the two datasets respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average dice score of only 1.98%.
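The average dice scores reported above compare a predicted mask with a reference mask as twice their overlap divided by their combined size. A minimal pure-Python sketch (the masks here are hypothetical toy sets of voxel coordinates, not the paper's data):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks given as sets of voxel coordinates."""
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    overlap = len(pred & truth)
    return 2.0 * overlap / (len(pred) + len(truth))

# Hypothetical 2-D masks as sets of (row, col) voxels
pred = {(0, 0), (0, 1), (1, 1)}
truth = {(0, 1), (1, 1), (1, 2)}
print(round(dice_score(pred, truth), 4))  # 2*2/(3+3) → 0.6667
```

The same formula applies voxel-wise in 3-D; only the coordinate tuples change.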
In this work, we propose a bag of adversarial features (BAF) for identifying mild traumatic brain injury (MTBI) patients from their diffusion magnetic resonance images (MRI), acquired within one month of injury, by incorporating unsupervised feature learning techniques. MTBI is a growing public health problem, with an estimated incidence of more than 1.7 million people annually in the United States. Diagnosis is based on clinical history and symptoms, and accurate, concrete measures of injury are lacking. Unlike most previous works, which used hand-crafted features extracted from different parts of the brain for MTBI classification, we employ feature learning algorithms to learn more discriminative representations for this task. A major challenge in this field is the relatively small number of subjects available for training, which makes it difficult to use an end-to-end convolutional neural network to classify subjects directly from MR images. To overcome this challenge, we first apply an adversarial autoencoder (with a convolutional structure) to learn patch-level features from overlapping image patches extracted from different brain regions. We then aggregate these features through a bag-of-words approach. We perform an extensive experimental study on a dataset of 227 subjects (including 109 MTBI patients and 118 age- and sex-matched healthy controls) and compare the bag of adversarial features with several previous approaches. Experimental results show that BAF significantly outperforms earlier works that rely on the mean values of MR metrics in selected brain regions.
We investigate the problem of lung nodule malignancy suspiciousness (the likelihood of nodule malignancy) classification using thoracic Computed Tomography (CT) images. Unlike traditional studies primarily relying on cautious nodule segmentation and time-consuming feature extraction, we tackle the more challenging task of directly modeling raw nodule patches and building an end-to-end machine-learning architecture for classifying lung nodule malignancy suspiciousness. We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy that crops different regions from convolutional feature maps and then applies max-pooling different times. Extensive experimental results show that the proposed method not only achieves state-of-the-art nodule suspiciousness classification performance, but also effectively characterizes nodule semantic attributes (subtlety and margin) and nodule diameter, which are potentially helpful in modeling nodule malignancy.
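The multi-crop idea, cropping progressively smaller center regions of a convolutional feature map and pooling each, can be sketched in pure Python. This is a simplification: the crop sizes are illustrative, and each crop is reduced with a single global max rather than the paper's repeated max-pooling.

```python
def center_crop(fmap, size):
    """Crop a size x size center region from a square feature map (list of lists)."""
    n = len(fmap)
    off = (n - size) // 2
    return [row[off:off + size] for row in fmap[off:off + size]]

def max_pool_all(fmap):
    """Global max over a 2-D feature map."""
    return max(max(row) for row in fmap)

def multi_crop_features(fmap, crop_sizes):
    """One pooled value per center crop; concatenated as the multi-crop descriptor."""
    return [max_pool_all(center_crop(fmap, s)) for s in crop_sizes]

# Hypothetical 4x4 feature map; crops of size 4 (full map) and 2 (center)
fmap = [[1, 2, 3, 4],
        [5, 9, 6, 7],
        [8, 0, 2, 1],
        [3, 4, 5, 6]]
print(multi_crop_features(fmap, [4, 2]))  # → [9, 9]
```

Because the center crops emphasize the nodule region, the concatenated descriptor mixes global context with center-focused responses.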
We propose a new image classification architecture, termed the Self-Attention Capsule Network (SACN). SACN is the first model to incorporate a self-attention mechanism as an integral layer within a capsule network (CapsNet). While the self-attention mechanism selects the more important image regions, the CapsNet analyzes only the relevant features within those regions and their spatial correlations. The features are extracted in the convolutional layers. The self-attention layer then learns, based on feature analysis, to suppress irrelevant regions and to highlight salient features that are useful for the specific task. The attention map is then fed into the CapsNet primary layer, followed by the classification layer. The proposed SACN model is designed to use a relatively shallow CapsNet architecture to reduce computational load, and it compensates for the absence of a deeper network by using the self-attention module to significantly improve results. The proposed Self-Attention CapsNet architecture was extensively evaluated on five different datasets, mainly three different medical sets, in addition to MNIST and SVHN. The model was able to classify images, and their patches, with diverse and complex backgrounds better than the baseline CapsNet. As a result, the proposed Self-Attention CapsNet significantly improves classification performance within and across different datasets, and it outperforms the baseline CapsNet not only in classification accuracy but also in robustness.
The Annual Review of Biomedical Engineering is online at bioeng.annualreviews.org Abstract This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Different types of convolutional neural networks (CNNs) have been applied to detect cancerous lung nodules in computed tomography (CT) scans. However, nodule sizes are highly diverse and can range between 3 and 30 mm, and this high variability in nodule size makes nodules difficult and challenging to classify. In this study, we propose a new CNN architecture, called the Gated-Dilated (GD) network, to classify nodules as malignant or benign. Unlike previous studies, the GD network uses dilated convolutions instead of max-pooling to capture scale variation. Moreover, the GD network has a context-aware sub-network that analyzes the input features and routes them to the appropriate dilated convolution. We evaluated the proposed network on more than 1,000 CT scans from the LIDC-IDRI dataset. Our proposed network outperforms baseline models, including a conventional CNN, ResNet, and DenseNet, with an AUC > 0.95. Compared with the baseline models, the GD network improves classification accuracy for mid-range sized nodules. Furthermore, we observed a relationship between nodule size and the attention signals generated by the context-aware sub-network, which validates our new network architecture.
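A dilated (atrous) convolution enlarges the receptive field without pooling by spacing the kernel taps `dilation` steps apart. A minimal 1-D pure-Python sketch with no padding (the signal and kernel are illustrative toy values):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D dilated convolution (valid padding): kernel taps spaced `dilation` apart."""
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]  # simple difference kernel
print(dilated_conv1d(x, k, dilation=1))  # → [-2, -2, -2, -2]
print(dilated_conv1d(x, k, dilation=2))  # wider receptive field → [-4, -4]
```

In the GD network, several branches with different dilation rates coexist, and the context-aware sub-network learns which branch each input should be routed to.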
We investigate the problem of diagnostic lung nodule classification using thoracic Computed Tomography (CT) screening. Unlike traditional studies that primarily relied on nodule segmentation for regional analysis, we tackle the more challenging problem of directly modelling raw nodule patches without any prior definition of nodule morphology. We propose a hierarchical learning framework, Multi-scale Convolutional Neural Networks (MCNN), to capture nodule heterogeneity by extracting discriminative features from alternatingly stacked layers. In particular, to sufficiently quantify nodule characteristics, our framework utilizes multi-scale nodule patches to learn a set of class-specific features simultaneously by concatenating the response neuron activations obtained at the last layer from each input scale. We evaluate the proposed method on CT images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), where both lung nodule screening and nodule annotations are provided. Experimental results demonstrate the effectiveness of our method in classifying malignant and benign nodules without nodule segmentation.
Alterations in cardiac geometry and function define well-established causes of heart disease. However, current approaches to diagnosing cardiovascular disease often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity with which complex structural and functional phenotypes can be quantified. Deep learning approaches have recently achieved success in tasks such as the classification of medical images, but their lack of interpretability in the feature extraction and decision-making processes limits their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for the automatic classification of images from patients with cardiac disease associated with structural remodeling. The model leverages interpretable, task-specific anatomical patterns learned from 3D segmentations. It also allows the learned pathology-specific remodeling patterns to be visualized and quantified in the original input space of the images. The approach yields high accuracy in the classification of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-center dataset (100%) as well as on the ACDC MICCAI 2017 dataset (90%). We believe that the proposed deep learning approach is a promising step towards the development of interpretable classifiers for the medical imaging domain, which may help clinicians improve diagnostic accuracy and enhance patient risk stratification.
A method is presented for automatically quantifying emphysematous regions in high-resolution computed tomography (HRCT) scans of patients with chronic obstructive pulmonary disease (COPD) that does not require manually annotated scans for training. HRCT scans of controls and of COPD patients with varying disease severity were acquired at two different centers. Texture features from co-occurrence matrices and Gaussian filter banks were used to describe the lung parenchyma in the scans. Two robust versions of multiple instance learning (MIL) classifiers, miSVM and MILES, were investigated. The classifiers were trained with weak labels extracted from the forced expiratory volume in one second (FEV$_1$) and the diffusing capacity of the lungs for carbon monoxide (DLCO). At test time, the classifiers output a patient label indicating the overall COPD diagnosis and local labels indicating the presence of emphysema. Classifier performance was compared with the annotations of two radiologists, a classical density-based method, and pulmonary function tests (PFTs). The miSVM classifier outperformed MILES in both patient and emphysema classification. Compared with the density-based method, the classifier correlated more strongly with the PFTs, with the percentage of emphysema in the intersection of the two radiologists' annotations, and with the percentage of emphysema annotated by one of the radiologists. The correlation between the classifier and the PFTs was outperformed only by the second radiologist. The method is therefore promising for facilitating emphysema assessment and for reducing inter-observer variability.
As a popular deep learning model, the convolutional neural network (CNN) has produced promising results in analyzing lung nodules and tumors in low-dose CT images. However, this approach still suffers from a shortage of labeled data, which is a major challenge for further improving the screening and diagnostic performance of CNNs. Accurate localization and characterization of nodules provide crucial pathological clues, particularly the size, attenuation, shape, margins, and growth or stability of lesions, which can improve the sensitivity and specificity of detection and classification. To address this challenge, in this paper we develop a soft activation mapping (SAM) that enables fine-grained lesion analysis with a CNN so that rich radiological features can be obtained. By combining high-level convolutional features with SAM, we further propose a high-level feature enhancement scheme to localize lesions precisely across multiple CT slices, which helps alleviate overfitting without any additional data augmentation. Experiments on the LIDC-IDRI benchmark dataset show that our proposed approach achieves state-of-the-art predictive performance with a reduced false positive rate. Moreover, the SAM method focuses on irregular margins, which are often associated with malignancy.
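SAM belongs to the family of activation-mapping methods. The sketch below shows the classic class activation mapping (CAM) computation this family builds on, a weighted sum of the last convolutional layer's feature maps using the target class's classifier weights; it is a simplification for illustration, not the paper's SAM, and the feature maps and weights are hypothetical toy values.

```python
def class_activation_map(feature_maps, class_weights):
    """Standard CAM: sum each feature map scaled by the target class's weight."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * fmap[i][j]
    return cam

# Two hypothetical 2x2 feature maps and the class weights for one target class
fmaps = [[[1.0, 0.0], [0.0, 2.0]],
         [[0.0, 3.0], [1.0, 0.0]]]
weights = [0.5, 2.0]
print(class_activation_map(fmaps, weights))  # → [[0.5, 6.0], [2.0, 1.0]]
```

High values in the resulting map indicate image regions that drove the class score, which is what makes such maps usable for lesion localization.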
The chest X-ray is one of the most common and affordable radiological examinations in clinical practice. Detecting thoracic diseases on chest X-rays nevertheless remains a challenging task for machine intelligence, owing to 1) the high variability of lesion areas on the X-rays of patients with different thoracic diseases, and 2) the shortage of accurate pixel-level annotations by radiologists for model training. Existing machine learning methods are unable to cope with the fact that thoracic diseases usually occur in localized, disease-specific areas. In this paper, we propose a weakly supervised deep learning framework, equipped with squeeze-and-excitation blocks, multi-map transfer, and max-min pooling, for classifying thoracic diseases as well as localizing suspicious lesion regions. Comprehensive experiments and discussions are conducted on the ChestX-ray14 dataset. Both numerical and visual results demonstrate the effectiveness of the proposed model and its better performance against state-of-the-art pipelines.
Multiple instance learning (MIL) is a variant of supervised learning in which a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label, where the bag label probability is fully parameterized by a neural network. Furthermore, we propose a neural-network-based, permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, applying the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves performance comparable to the best MIL methods on benchmark MIL datasets, and that it outperforms other methods on an MNIST-based MIL dataset and on two real-life histopathology datasets without sacrificing interpretability.
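The attention-based aggregation can be sketched as a softmax-weighted sum of instance embeddings, with attention scores of the form w · tanh(V h_k); this pure-Python sketch omits the gated variant, and the bag and the parameters `V` and `w` are illustrative toy values, not learned ones.

```python
import math

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: a_k = softmax_k(w . tanh(V h_k)),
    bag embedding z = sum_k a_k h_k."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]
    scores = []
    for h in instances:
        hidden = [math.tanh(v) for v in matvec(V, h)]
        scores.append(sum(wi * hi for wi, hi in zip(w, hidden)))
    m = max(scores)                       # stabilized softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    dim = len(instances[0])
    z = [sum(a * h[d] for a, h in zip(attn, instances)) for d in range(dim)]
    return z, attn

# Toy bag of three 2-D instance embeddings
bag = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
V = [[1.0, 0.5], [0.5, 1.0]]
w = [1.0, -1.0]
z, attn = attention_mil_pool(bag, V, w)
print([round(a, 3) for a in attn])  # weights sum to 1
```

The per-instance weights `attn` are exactly the interpretability signal the abstract refers to: they quantify each instance's contribution to the bag label.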
Early detection of lung cancer is essential for reducing mortality. Recent studies have demonstrated the clinical utility of low-dose computed tomography (CT) for detecting lung cancer in individuals selected on the basis of very limited clinical information. However, this strategy yields a high false positive rate, which can lead to unnecessary and potentially harmful procedures. To address these challenges, we built a pipeline that learns jointly from detailed clinical demographics and 3D CT images. To this end, we leveraged data from the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions (MCL), which is dedicated to the early detection of lung cancer. A 3D attention-based deep convolutional neural network (DCNN) is proposed to identify lung cancer from chest CT scans without prior anatomical localization of suspicious nodules. To improve the non-invasive discrimination between benign and malignant cases, we applied a random forest classifier to data combining clinical information with the imaging data. The results show that clinical demographics alone achieved an AUC of 0.635, while the attention network alone reached 0.687. In contrast, when applying our proposed pipeline integrating the clinical and imaging variables, we reached an AUC of 0.787 on the test dataset. The proposed network both effectively captures anatomical information for classification and generates attention maps that explain the features driving performance.
Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous 'manual' clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to one of the most widely used outcomes in epidemiological and clinical research (mortality), and that deep learning with convolutional neural networks can be usefully applied to radiomics research. Computer image analysis applied to routinely collected medical images offers substantial potential to enhance precision medicine initiatives.
Measuring phenotypic variation in precision medicine
Precision medicine has become a key focus of modern bioscience and medicine, and involves "prevention and treatment strategies that take individual variability into account", through the use of "large-scale biologic databases … powerful methods for characterizing patients … and computational tools for analysing large sets of data" 1. The variation within individuals that enables the identification of patient subgroups for precision medicine strategies is termed the "phenotype".
The observable phenotype reflects both genomic variation and the accumulated lifestyle and environmental exposures that impact biological function, the exposome 2. Precision medicine relies upon the availability of useful biomarkers, defined as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention" 3. A 'good' biomarker has the following characteristics: it is sensitive, specific, predictive, robust, bridges clinical and preclinical health states, and is non-invasive 4. Genomics can produce good biomarkers useful for precision medicine 5. There has been significant success in exploring human genetic variation in the field of genomics, where data-driven methods have highlighted the role of human genetic variation in disease diagnosis, prognosis, and treatment response 6. However, for the chronic and age-related diseases which account for the majority of morbidity and mortality in developed nations
Saliency detection models aiming to quantitatively predict human eye-attended locations in the visual field have been receiving increasing research interest in recent years. Unlike traditional methods that rely on hand-designed features and contrast inference mechanisms, this paper proposes a novel framework to learn saliency detection models from raw image data using deep networks. The proposed framework mainly consists of two learning stages. At the first learning stage, we develop a stacked denoising autoencoder (SDAE) model to learn robust, representative features from raw image data in an unsupervised manner. The second learning stage aims to jointly learn optimal mechanisms to capture the intrinsic mutual patterns as the feature contrast and to integrate them for final saliency prediction. Given the input of pairs of a center patch and its surrounding patches represented by the features learned at the first stage, a SDAE network is trained under the supervision of eye fixation labels, which achieves both contrast inference and contrast integration simultaneously. Experiments on three publicly available eye tracking benchmarks and the comparisons with 16 state-of-the-art approaches demonstrate the effectiveness of the proposed framework.
Traditional Cox proportional hazards models for survival analysis are based on structured features such as a patient's sex, smoking years, BMI, etc. With the development of medical imaging technology, more and more unstructured medical images are available for diagnosis, treatment, and survival analysis. Traditional survival models utilize these unstructured images by extracting human-designed features from them. However, we argue that such hand-crafted features have limited ability to represent highly abstract information. In this paper, we for the first time develop a deep convolutional neural network for survival analysis (DeepConvSurv) with pathological images. The deep layers in our model can represent more abstract information than hand-crafted features from the images, and hence improve survival prediction performance. Through extensive experiments on the National Lung Screening Trial (NLST) lung cancer data, we show that the proposed DeepConvSurv model improves significantly over four state-of-the-art methods.
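Deep Cox-style survival models such as DeepConvSurv are typically trained by minimizing the negative Cox partial log-likelihood: each observed event's risk score is compared against everyone still at risk at that time. A pure-Python sketch over a hypothetical three-patient cohort (the scores, times, and event indicators are illustrative, and ties are handled naively):

```python
import math

def neg_partial_log_likelihood(risks, times, events):
    """Negative Cox partial log-likelihood (averaged over observed events).
    risks: model risk scores; times: follow-up times; events: 1 = death observed."""
    loss = 0.0
    n_events = 0
    for i in range(len(risks)):
        if events[i] != 1:
            continue  # censored subjects contribute only through risk sets
        # everyone still under observation at time times[i]
        risk_set = [math.exp(risks[j]) for j in range(len(risks)) if times[j] >= times[i]]
        loss -= risks[i] - math.log(sum(risk_set))
        n_events += 1
    return loss / max(n_events, 1)

# Hypothetical cohort: higher risk paired with earlier events should give low loss
risks = [2.0, 1.0, 0.0]
times = [1.0, 2.0, 3.0]
events = [1, 1, 0]
print(neg_partial_log_likelihood(risks, times, events))
```

In a deep survival model, the risk scores are the network's outputs, so this loss can be backpropagated end-to-end.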
Medical image analysis is the science of analyzing or solving medical problems using different image analysis techniques for the effective and efficient extraction of information. It has emerged as one of the top research areas in the fields of engineering and medicine. Recent years have witnessed the rapid adoption of machine learning algorithms in medical image analysis. These machine learning techniques are used to extract compact information for improved performance of medical image analysis systems, compared to traditional methods that rely on handcrafted features. Deep learning is a breakthrough in machine learning techniques that has transformed pattern recognition and computer vision research by providing state-of-the-art results. Deep learning provides machine learning algorithms that model high-level data abstractions and do not rely on handcrafted features. Recently, deep learning methods utilizing deep convolutional neural networks have been applied to medical image analysis with promising results. The application area covers the whole spectrum of medical image analysis, including detection, segmentation, classification, and computer-aided diagnosis. This paper presents a review of the state-of-the-art convolutional neural network based techniques used for medical image analysis.
Automated segmentation of digital histopathology images is an important task for helping pathologists diagnose tumors and cancer subtypes. For the pathological diagnosis of cancer subtypes, pathologists usually change the magnification of whole-slide image (WSI) viewers. A key assumption is that the importance of each magnification depends on the characteristics of the input image, such as the cancer subtype. In this paper, we propose a novel semantic segmentation method, called the Adaptive-Weighting Multi-Field-of-View CNN (AWMF-CNN), that can adaptively use image features from images at different magnifications to classify multiple cancer subtype regions in the input image. The proposed method aggregates several expert CNNs for images at different magnifications by adaptively changing the weight of each expert according to the input image. It leverages information in images at different magnifications that may be useful for identifying subtypes. It outperformed other state-of-the-art methods in experiments.
Objective: Lung cancer is the leading cause of cancer-related death worldwide. Computer-aided diagnosis (CAD) systems have in recent years shown significant promise in facilitating the effective detection and classification of abnormal lung nodules in computed tomography (CT) scans. While hand-engineered radiomic features have traditionally been used for lung cancer prediction, there have been recent successes achieving state-of-the-art results in the field of discovery radiomics, where radiomic sequencers comprising highly discriminative radiomic features are discovered directly from archival medical data. However, interpreting the predictions made using such radiomic sequencers remains a challenge. Methods: A novel end-to-end interpretable discovery-radiomics-driven lung cancer prediction pipeline is designed, built, and tested. The discovered radiomic sequencer has a deep architecture composed of stacked interpretable sequencing cells (SISC). Results: The SISC architecture is shown to outperform previous approaches while providing more insight into its decision-making process. Conclusion: The SISC radiomic sequencer is able to achieve state-of-the-art results in lung cancer prediction, and it also offers prediction interpretability in the form of critical response maps. Significance: The critical response maps are useful not only for validating the predictions of the proposed SISC radiomic sequencer, but also for improving radiologist-machine collaboration for effective diagnosis.