Recent advances in signal processing and machine learning, coupled with the development of electronic medical records in hospitals and the availability of large volumes of medical images through internal/external communication systems, have recently generated significant interest in "radiomics". Radiomics is an emerging and relatively new research field that refers to extracting semi-quantitative and/or quantitative features from medical images with the goal of developing predictive and/or prognostic models, and it is expected to become a key component for integrating image-derived information into personalized treatment in the near future. The conventional radiomics workflow is typically based on extracting pre-designed features (also referred to as hand-crafted or engineered features) from a segmented region of interest. Nevertheless, recent advances in deep learning have prompted a trend toward deep-learning-based radiomics (also known as discovery radiomics). Hybrid solutions that exploit the strengths of both approaches have also been developed to leverage the potential of multiple data sources. Considering the variety of radiomics approaches, further improvement requires a comprehensive and integrative sketch, which is the goal of this article. This manuscript provides a unique interdisciplinary perspective on radiomics by discussing state-of-the-art signal-processing solutions in the context of cancer radiomics.
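To make the hand-crafted feature idea above concrete, the following is a minimal sketch of first-order radiomic features computed over a segmented region of interest; the function name, feature set, and toy data are illustrative assumptions, not the workflow of any particular paper:

```python
import numpy as np

def first_order_features(image, mask, n_bins=32):
    """Compute a few classic first-order radiomic features from a
    segmented region of interest (ROI). `image` and `mask` are
    equally shaped arrays; `mask` is non-zero inside the ROI."""
    roi = image[mask > 0].astype(float)
    hist, _ = np.histogram(roi, bins=n_bins)
    p = hist / hist.sum()                      # normalized intensity histogram
    p = p[p > 0]                               # drop empty bins for the entropy
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),    # histogram (Shannon) entropy
    }

# Toy 2-D "scan" with a bright square lesion and a matching mask.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0 + 0.1 * np.random.RandomState(0).randn(20, 20)
mask = np.zeros_like(img)
mask[20:40, 20:40] = 1
feats = first_order_features(img, mask)
```

In a real pipeline such scalar features would be fed to a downstream predictive model, which is exactly the step that deep-learning-based radiomics replaces with learned representations.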
Accurately identifying and localizing abnormalities in radiological images is an integral part of clinical diagnosis and treatment planning. Building highly accurate predictive models for these tasks usually requires a large number of images manually annotated with labels and with the locations of abnormalities. In practice, however, such annotated data are expensive to acquire, especially data with location annotations. We therefore need methods that can work with only a small number of location annotations. To address this challenge, we propose a unified approach that performs disease identification and localization simultaneously for all images with the same underlying model. We demonstrate that our approach can effectively leverage both class information and limited location annotations, and significantly outperforms comparable reference baselines on both classification and localization tasks.
In this work, we propose a bag of adversarial features (BAF) for identifying mild traumatic brain injury (MTBI) patients from their diffusion magnetic resonance images (MRI), acquired within one month of injury, by incorporating unsupervised feature-learning techniques. MTBI is a growing public-health problem with an estimated incidence of more than 1.7 million people annually in the United States. Diagnosis is based on clinical history and symptoms, and accurate, concrete measures of injury are lacking. Unlike most previous works that use hand-crafted features extracted from different parts of the brain for MTBI classification, we employ feature-learning algorithms to learn more discriminative representations for this task. A major challenge in this field is the relatively small number of subjects available for training, which makes it difficult to classify subjects directly from MR images with an end-to-end convolutional neural network. To overcome this challenge, we first apply an adversarial autoencoder (with a convolutional structure) to learn patch-level features from overlapping image patches extracted from different brain regions. We then aggregate these features through a bag-of-words approach. We perform an extensive experimental study on a dataset of 227 subjects (including 109 MTBI patients and 118 age- and sex-matched healthy controls) and compare the bag of adversarial features with several previous approaches. Experimental results show that BAF significantly outperforms earlier works that rely on the mean values of MR metrics in selected brain regions.
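The bag-of-words aggregation step described above can be sketched as follows: each patch feature is assigned to its nearest codebook word and the subject is represented by the word histogram. The codebook here is random stand-in data; in the paper it would come from the learned patch-level features:

```python
import numpy as np

def bag_of_words_histogram(patch_features, codebook):
    """Aggregate per-patch feature vectors into one fixed-length
    subject-level descriptor: assign each patch to its nearest
    codebook word and build a normalized word histogram."""
    # Pairwise squared distances: (n_patches, n_words)
    d = ((patch_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)                    # hard assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                    # normalize to a distribution

rng = np.random.RandomState(0)
features = rng.randn(100, 8)        # 100 patch features of dimension 8
codebook = rng.randn(16, 8)         # 16 visual words (e.g. k-means centers)
h = bag_of_words_histogram(features, codebook)
```

The histogram has a fixed length regardless of how many patches a subject contributes, which is what makes the representation usable with a small number of training subjects.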
Multiple instance learning (MIL) is a variant of supervised learning in which a single class label is assigned to a bag of instances. In this paper, we state the MIL problem as learning the Bernoulli distribution of the bag label, where the bag-label probability is fully parameterized by a neural network. Furthermore, we propose a neural-network-based permutation-invariant aggregation operator that corresponds to the attention mechanism. Notably, an application of the proposed attention-based operator provides insight into the contribution of each instance to the bag label. We show empirically that our approach achieves performance comparable to the best MIL methods on a benchmark MIL dataset, and outperforms other methods on an MNIST-based MIL dataset and two real-life histopathology datasets without sacrificing interpretability.
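The attention-based pooling operator can be sketched in a few lines: each instance gets a softmax weight computed from a small two-layer network, and the bag embedding is the weighted sum, which is invariant to the order of instances. Weight shapes and toy inputs below are illustrative:

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Permutation-invariant attention pooling over a bag of instance
    embeddings H (n_instances, d): a_k is proportional to
    exp(w . tanh(V h_k)); the bag representation is the
    attention-weighted sum of the instances."""
    scores = np.tanh(H @ V.T) @ w              # (n_instances,)
    scores = scores - scores.max()             # numerical stability
    a = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    z = a @ H                                  # bag embedding, shape (d,)
    return z, a

rng = np.random.RandomState(0)
H = rng.randn(5, 16)     # bag of 5 instances, 16-dim embeddings
V = rng.randn(8, 16)     # attention hidden-layer weights
w = rng.randn(8)
z, a = attention_mil_pool(H, V, w)
```

The weights `a` are exactly the per-instance contributions the abstract refers to: inspecting them shows which instances drove the bag label.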
Traditional Cox proportional hazards models for survival analysis are based on structured features such as a patient's sex, smoking history, BMI, etc. With the development of medical imaging technology, more and more unstructured medical images are available for diagnosis, treatment, and survival analysis. Traditional survival models utilize these unstructured images by extracting human-designed features from them. However, we argue that such hand-crafted features have a limited ability to represent highly abstract information. In this paper, we develop, for the first time, a deep convolutional neural network for survival analysis (DeepConvSurv) with pathological images. The deep layers in our model can represent more abstract information than hand-crafted features extracted from the images, and hence improve survival prediction performance. Extensive experiments on the National Lung Screening Trial (NLST) lung cancer data show that the proposed DeepConvSurv model improves significantly over four state-of-the-art methods.
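The Cox model underlying this line of work is trained by maximizing the partial likelihood; a minimal numpy sketch of the negative partial log-likelihood (the quantity a network like DeepConvSurv would minimize, here with illustrative toy data) is:

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood. `risk` holds log-risk
    scores (e.g. a model's output), `time` the follow-up times, and
    `event` is 1 for an observed death, 0 for censoring. Only
    uncensored subjects contribute terms; each is compared against the
    risk set of subjects still under observation at that time."""
    nll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]              # risk set at time t_i
        nll -= risk[i] - np.log(np.exp(risk[at_risk]).sum())
    return nll

time = np.array([5.0, 8.0, 12.0, 3.0])
event = np.array([1, 0, 1, 1])
risk = np.array([0.2, -0.1, 0.4, 1.0])
loss = cox_neg_log_partial_likelihood(risk, time, event)
```

Because censored subjects still appear in risk sets, the loss uses all patients without requiring every follow-up to end in an observed event.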
Highlights
• A data-driven lung nodule segmentation method without involving a shape hypothesis.
• Two-branch convolutional neural networks extract both 3D and multi-scale 2D features.
• A novel central pooling layer is proposed for feature selection.
• We propose a weighted sampling method to solve the imbalanced training label problem.
• The method shows strong performance for segmenting juxta-pleural nodules.

Abstract: Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make robust nodule segmentation difficult. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Network (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon by proposing a novel central pooling layer retaining much information on the voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling scheme to facilitate model training, in which training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset, including 893 nodules, and on an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average Dice scores of 82.15% and 80.02% on the two datasets, respectively. Moreover, we compared our results with the inter-radiologist consistency on the LIDC dataset, showing a difference in average Dice score of only 1.98%.
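The Dice score used to report the results above is a standard overlap measure between a predicted and a reference segmentation mask; a minimal implementation on toy masks:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|). 1.0 means perfect overlap."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0                              # both masks empty
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.zeros((10, 10), int); a[2:8, 2:8] = 1    # reference: 36 voxels
b = np.zeros((10, 10), int); b[4:8, 2:8] = 1    # prediction: 24 voxels, all inside a
d = dice_score(a, b)   # 2*24 / (36 + 24) = 0.8
```

The reported inter-radiologist gap of 1.98% is a difference between two such averaged scores.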
A method is presented for automatically quantifying emphysema regions in high-resolution computed tomography (HRCT) scans of patients with chronic obstructive pulmonary disease (COPD) that does not require manually annotated scans for training. HRCT scans of controls and of COPD patients with diverse disease severity were acquired at two different centers. Texture features based on co-occurrence matrices and Gaussian filter banks were used to characterize the lung parenchyma in the scans. Two robust versions of multiple instance learning (MIL) classifiers, miSVM and MILES, were investigated. The classifiers were trained with weak labels extracted from the forced expiratory volume in one second (FEV$_1$) and the diffusing capacity of the lung for carbon monoxide (DLCO). At test time, the classifiers output a patient label indicating an overall COPD diagnosis and local labels indicating the presence of emphysema. Classifier performance was compared with manual annotations by two radiologists, a classical density-based method, and pulmonary function tests (PFTs). The miSVM classifier outperformed MILES on both patient and emphysema classification. Compared with the density-based method, the classifier showed stronger correlations with the PFTs, with the percentage of emphysema in the intersection of the annotations from both radiologists, and with the percentage of emphysema annotated by one of the radiologists. The correlation between the classifier and the PFTs was only outperformed by the second radiologist. The proposed method is therefore promising for facilitating the assessment of emphysema and reducing inter-observer variability.
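The patient-level and local labels above follow the standard MIL assumption: a bag (patient) is positive if at least one instance (lung patch) is positive. A minimal sketch of that aggregation rule, with an illustrative score threshold:

```python
import numpy as np

def patient_label_from_patches(patch_scores, threshold=0.5):
    """Standard MIL aggregation: a patient (bag) is labeled positive
    if at least one lung patch (instance) scores as emphysematous;
    the per-patch decisions double as a local emphysema map."""
    local = patch_scores >= threshold           # instance-level labels
    return int(local.any()), local

healthy = np.array([0.1, 0.2, 0.05, 0.3])      # toy patch scores
copd = np.array([0.1, 0.8, 0.2, 0.9])
y_h, _ = patient_label_from_patches(healthy)
y_c, local_c = patient_label_from_patches(copd)
```

This is what lets the weak patient-level labels (derived from FEV$_1$ and DLCO) supervise a model that still produces localized emphysema maps.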
The Annual Review of Biomedical Engineering is online at bioeng.annualreviews.org

Abstract: This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Alterations in cardiac geometry and function define well-established causes of heart disease. However, current approaches to diagnosing cardiovascular disease often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity with which complex structural and functional phenotypes can be quantified. Deep learning approaches have recently achieved success in tasks such as the classification of medical images, but lack interpretability in the feature-extraction and decision processes, limiting their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for the automatic classification of patient images according to the structural remodelling associated with cardiac disease. The model leverages interpretable, task-specific anatomical patterns learned from 3D segmentations. It also allows the visualization and quantification of the learned pathology-specific remodelling patterns in the original input space of the images. The approach yields high accuracy in the classification of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-center dataset (100%) as well as on the ACDC MICCAI 2017 dataset (90%). We believe the proposed deep learning approach is a promising step towards developing interpretable classifiers for medical imaging, which may help clinicians improve diagnostic accuracy and enhance patient risk stratification.
We investigate the problem of lung nodule malignancy suspiciousness (the likelihood of nodule malignancy) classification using thoracic Computed Tomography (CT) images. Unlike traditional studies primarily relying on careful nodule segmentation and time-consuming feature extraction, we tackle a more challenging task of directly modeling raw nodule patches and building an end-to-end machine-learning architecture for classifying lung nodule malignancy suspiciousness. We present a Multi-crop Convolutional Neural Network (MC-CNN) to automatically extract nodule salient information by employing a novel multi-crop pooling strategy which crops different regions from convolutional feature maps and then applies max-pooling different numbers of times. Extensive experimental results show that the proposed method not only achieves state-of-the-art nodule suspiciousness classification performance, but also effectively characterizes nodule semantic attributes (subtlety and margin) and nodule diameter, which are potentially helpful in modeling nodule malignancy.
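One plausible reading of the multi-crop pooling strategy can be sketched as follows: the whole feature map is max-pooled twice, a half-size center crop once, and a quarter-size center crop not at all, so all three share a spatial size and can be concatenated while central (nodule) detail is kept at higher resolution. The exact crop sizes and pooling counts are an assumption of this sketch, not taken from the paper:

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max-pooling with stride 2 over a (C, H, W) feature map."""
    c, h, w = x.shape
    return x[:, :h - h % 2, :w - w % 2].reshape(
        c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def multi_crop_pool(fmap):
    """Multi-crop pooling sketch (H, W divisible by 4): pool the full
    map twice, a half-size center crop once, and keep a quarter-size
    center crop as-is; concatenate along the channel axis."""
    c, h, w = fmap.shape
    r1 = fmap[:, h // 4: 3 * h // 4, w // 4: 3 * w // 4]   # half-size crop
    r2 = r1[:, h // 8: 3 * h // 8, w // 8: 3 * w // 8]     # quarter-size crop
    f0 = max_pool2x2(max_pool2x2(fmap))
    f1 = max_pool2x2(r1)
    return np.concatenate([f0, f1, r2], axis=0)            # (3C, H/4, W/4)

x = np.arange(2 * 16 * 16, dtype=float).reshape(2, 16, 16)
out = multi_crop_pool(x)
```

The concatenated output mixes coarse context with fine central detail, which is the scale-robustness the abstract attributes to the strategy.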
Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous 'manual' clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to one of the most widely used outcomes in epidemiological and clinical research (mortality), and that deep learning with convolutional neural networks can be usefully applied to radiomics research. Computer image analysis applied to routinely collected medical images offers substantial potential to enhance precision medicine initiatives.

Measuring phenotypic variation in precision medicine

Precision medicine has become a key focus of modern bioscience and medicine, and involves "prevention and treatment strategies that take individual variability into account", through the use of "large-scale biologic databases … powerful methods for characterizing patients … and computational tools for analysing large sets of data" 1. The variation within individuals that enables the identification of patient subgroups for precision medicine strategies is termed the "phenotype".
The observable phenotype reflects both genomic variation and the accumulated lifestyle and environmental exposures that impact biological function, the exposome 2. Precision medicine relies upon the availability of useful biomarkers, defined as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention" 3. A 'good' biomarker has the following characteristics: it is sensitive, specific, predictive, robust, bridges clinical and preclinical health states, and is non-invasive 4. Genomics can produce good biomarkers useful for precision medicine 5. There has been significant success in exploring human genetic variation in the field of genomics, where data-driven methods have highlighted the role of human genetic variation in disease diagnosis, prognosis, and treatment response 6. However, for the chronic and age-related diseases which account for the majority of morbidity and mortality in developed nations
What is a good vector representation of an object? We believe it should be generative in 3D, meaning it can produce new 3D objects, and predictable from 2D, meaning it can be inferred from 2D images. We propose a novel architecture, called the TL-embedding network, that enables an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks, including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.
Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging. Models are typically based on large amounts of data with annotated examples of known markers, aiming at automated detection. High annotation effort and the limitation to a vocabulary of known markers limit the power of such approaches. Here, we perform unsupervised learning to identify anomalies in imaging data as candidates for markers. We propose AnoGAN, a deep convolutional generative adversarial network, to learn a manifold of normal anatomical variability, accompanied by a novel anomaly scoring scheme based on the mapping from image space to latent space. Applied to new data, the model labels anomalies and scores image patches, indicating their fit into the learned distribution. Results on optical coherence tomography images of the retina demonstrate that the approach correctly identifies anomalous images, such as images containing retinal fluid or hyperreflective foci.
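The anomaly scoring scheme combines a residual term (how well the generator can reproduce the query image from a latent code) with a feature-matching discrimination term. A minimal sketch, with toy stand-ins for the trained networks and with the latent code assumed already recovered by the gradient search in latent space:

```python
import numpy as np

def anomaly_score(x, G, D_features, z, lam=0.1):
    """AnoGAN-style anomaly score for a query image x given a latent
    code z mapped back from image space. Residual loss: |x - G(z)|_1;
    discrimination loss: distance between intermediate discriminator
    features of x and of G(z). A(x) = (1-lam)*R(x) + lam*D(x)."""
    residual = np.abs(x - G(z)).sum()
    discrimination = np.abs(D_features(x) - D_features(G(z))).sum()
    return (1 - lam) * residual + lam * discrimination

# Toy stand-ins for the trained generator and discriminator features.
G = lambda z: np.outer(z, z)                  # "generated" image from z
D_features = lambda img: np.array([img.mean(), img.std()])
z = np.ones(4)
x_normal = np.outer(z, z)                     # lies on the learned manifold
x_anomalous = x_normal + 5.0                  # far from the manifold
s_norm = anomaly_score(x_normal, G, D_features, z)
s_anom = anomaly_score(x_anomalous, G, D_features, z)
```

Images the generator can reconstruct well receive low scores; images off the learned manifold of normal anatomy score high and are flagged as marker candidates.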
Saliency detection models aiming to quantitatively predict human eye-attended locations in the visual field have been receiving increasing research interest in recent years. Unlike traditional methods that rely on hand-designed features and contrast inference mechanisms, this paper proposes a novel framework to learn saliency detection models from raw image data using deep networks. The proposed framework mainly consists of two learning stages. In the first learning stage, we develop a stacked denoising autoencoder (SDAE) model to learn robust, representative features from raw image data in an unsupervised manner. The second learning stage aims to jointly learn optimal mechanisms to capture the intrinsic mutual patterns as the feature contrast and to integrate them for final saliency prediction. Given the input of pairs of a center patch and its surrounding patches, represented by the features learned at the first stage, an SDAE network is trained under the supervision of eye fixation labels, which achieves both contrast inference and contrast integration simultaneously. Experiments on three publicly available eye-tracking benchmarks and comparisons with 16 state-of-the-art approaches demonstrate the effectiveness of the proposed framework.
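The denoising-autoencoder building block of the first stage can be sketched in a few lines: corrupt the input with masking noise, encode and decode with tied weights, and score the reconstruction against the clean input. Layer sizes and toy data are illustrative:

```python
import numpy as np

def dae_forward(x, W, b, b_prime, noise_rate=0.3, rng=None):
    """One forward pass of a single-layer denoising autoencoder:
    corrupt the input with masking noise, encode with a sigmoid layer
    (tied decoder weights), decode, and measure reconstruction error
    against the *clean* input."""
    rng = rng if rng is not None else np.random.RandomState(0)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    x_tilde = x * (rng.rand(*x.shape) > noise_rate)  # masking corruption
    h = sigmoid(x_tilde @ W + b)                     # robust hidden code
    x_hat = sigmoid(h @ W.T + b_prime)               # reconstruction
    loss = ((x_hat - x) ** 2).mean()                 # vs. uncorrupted x
    return h, x_hat, loss

rng = np.random.RandomState(1)
x = rng.rand(5, 64)                # 5 raw image patches, 64 pixels each
W = 0.1 * rng.randn(64, 16)        # 64 -> 16 hidden units, tied decoder
b, b_prime = np.zeros(16), np.zeros(64)
h, x_hat, loss = dae_forward(x, W, b, b_prime)
```

Stacking several such layers, each trained on the codes of the previous one, gives the SDAE feature extractor used in the first stage.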
Different types of convolutional neural networks (CNNs) have been applied to detect cancerous lung nodules in computed tomography (CT) scans. However, nodule sizes are very diverse and can range between 3 and 30 millimeters. This high variation in nodule size makes classification difficult and challenging. In this study, we propose a novel CNN architecture, called the Gated Dilated (GD) network, to classify nodules as malignant or benign. Unlike previous studies, the GD network uses multiple dilated convolutions instead of max-poolings to capture scale variations. Moreover, the GD network has a Context-Aware sub-network that analyzes the input features and guides them to suitable dilated convolutions. We evaluated the proposed network on more than 1,000 CT scans from the LIDC-IDRI dataset. Our proposed network outperforms baseline models, including conventional CNNs, ResNet, and DenseNet, with an AUC above 0.95. Compared with the baseline models, the GD network improves the classification accuracy of mid-range-sized nodules. Furthermore, we observed a relationship between the size of a nodule and the attention signal generated by the Context-Aware sub-network, which validates our new network architecture.
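The key operation here, dilated convolution, spaces the kernel taps apart so the receptive field grows without pooling away resolution. A minimal 1-D numpy sketch (the paper's networks use 2-D convolutions; this reduced version just shows the mechanism):

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    reducing resolution by pooling."""
    k = len(kernel)
    span = (k - 1) * dilation + 1              # receptive-field size
    n_out = len(signal) - span + 1
    out = np.empty(n_out)
    for i in range(n_out):
        out[i] = sum(kernel[j] * signal[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, k, dilation=1)   # receptive field 3
y2 = dilated_conv1d(x, k, dilation=2)   # receptive field 5, same kernel
```

A bank of such convolutions with different dilation rates covers small and large nodules with the same number of parameters, which is what the Context-Aware sub-network selects among.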
Medical image analysis is the science of analyzing or solving medical problems using different image analysis techniques for effective and efficient extraction of information. It has emerged as one of the top research areas in the fields of engineering and medicine. Recent years have witnessed the rapid adoption of machine learning algorithms in medical image analysis. These machine learning techniques are used to extract compact information for improved performance of medical image analysis systems, when compared to traditional methods that rely on extraction of handcrafted features. Deep learning is a breakthrough in machine learning techniques that has overwhelmed the fields of pattern recognition and computer vision research by providing state-of-the-art results. Deep learning provides machine learning algorithms that model high-level data abstractions and do not rely on handcrafted features. Recently, deep learning methods utilizing deep convolutional neural networks have been applied to medical image analysis with promising results. The application area covers the whole spectrum of medical image analysis, including detection, segmentation, classification, and computer-aided diagnosis. This paper presents a review of the state-of-the-art convolutional neural network based techniques used for medical image analysis.
The chest X-ray is one of the most commonly accessible and economical radiological examinations in clinical practice. Nevertheless, detecting thoracic diseases on chest X-rays remains a challenging task for machine intelligence, due to 1) the highly varied appearance of lesion areas on X-rays from patients with different thoracic diseases, and 2) the shortage of accurate pixel-level annotations by radiologists for model training. Existing machine learning methods are unable to cope with the fact that thoracic diseases usually occur in localized, disease-specific areas. In this paper, we propose a weakly supervised deep learning framework, equipped with squeeze-and-excitation blocks, multi-map transfer, and max-min pooling, for classifying thoracic diseases as well as localizing suspicious lesion regions. Comprehensive experiments and discussions are conducted on the ChestX-ray14 dataset. Both numerical and visual results demonstrate the effectiveness of the proposed model and its better performance against state-of-the-art pipelines.
As a popular deep learning model, the convolutional neural network (CNN) has produced promising results in analyzing lung nodules and tumors in low-dose CT images. However, this approach still suffers from a lack of labeled data, which is a major challenge for further improving the screening and diagnostic performance of CNNs. Accurate localization and characterization of nodules provide crucial pathological clues, particularly the relevant size, attenuation, shape, margins, and growth or stability of lesions, which can improve the sensitivity and specificity of detection and classification. To address this challenge, in this paper we develop a soft activation mapping (SAM) that enables fine-grained lesion analysis with a CNN so that rich radiological features can be obtained. By combining high-level convolutional features with SAM, we further propose a high-level feature enhancement scheme to localize lesions precisely from multiple CT slices, which helps alleviate overfitting without any additional data augmentation. Experiments on the LIDC-IDRI benchmark dataset show that our proposed approach achieves state-of-the-art predictive performance with a reduced false-positive rate. Furthermore, the SAM method focuses on irregular margins, which are often associated with malignancy.
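SAM refines class activation mapping; as background, the plain CAM baseline it builds on can be sketched in a few lines: the localization map is the classifier-weighted sum of the last convolutional feature maps. This sketch is the standard CAM, not the paper's SAM variant, and the shapes are illustrative:

```python
import numpy as np

def class_activation_map(fmaps, weights):
    """CAM-style lesion localization: the activation map is the
    weighted sum of the last convolutional feature maps, using one
    class's weights from the global-average-pooling classifier head,
    rescaled to [0, 1] for visualization."""
    cam = np.tensordot(weights, fmaps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

rng = np.random.RandomState(0)
fmaps = rng.rand(8, 7, 7)       # 8 feature maps of a 7x7 conv layer
weights = rng.rand(8)           # classifier weights for one class
cam = class_activation_map(fmaps, weights)
```

High-valued regions of the map indicate where the evidence for the class (e.g. malignancy) is concentrated, which is the lesion-localization signal SAM sharpens.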
We propose Neural Image Compression (NIC), a method to reduce the size of gigapixel images by mapping them to a compact latent space using neural networks. We show that this compression allows us to train convolutional neural networks on histopathology whole-slide images end-to-end using weak image-level labels.
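The patch-to-grid compression idea can be sketched as follows: tile the slide into non-overlapping patches, encode each patch to a short vector, and reassemble the codes into a spatial grid a downstream CNN can consume. The encoder here is a toy stand-in for the trained neural encoder, and the small array stands in for a gigapixel slide:

```python
import numpy as np

def compress_wsi(slide, patch, encoder):
    """Neural-image-compression sketch: tile a whole-slide image into
    non-overlapping patches, map each patch through an encoder to a
    short latent vector, and reassemble the codes into a spatial grid
    preserving the patch layout."""
    h, w = slide.shape[0] // patch, slide.shape[1] // patch
    grid = np.stack([
        np.stack([encoder(slide[i * patch:(i + 1) * patch,
                                j * patch:(j + 1) * patch])
                  for j in range(w)])
        for i in range(h)])
    return grid                                # (h, w, code_dim)

# Toy 4-number summary standing in for the trained neural encoder.
encoder = lambda p: np.array([p.mean(), p.std(), p.max(), p.min()])
slide = np.random.RandomState(0).rand(512, 512)   # stand-in "gigapixel" slide
grid = compress_wsi(slide, patch=128, encoder=encoder)
```

Here 512x512 = 262,144 pixels shrink to a 4x4 grid of 4-dimensional codes, small enough for a classifier trained end-to-end with only slide-level labels.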
Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimizing the interpretation of dynamic biological systems requires accurate and precise motion tracking as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations with a fully convolutional network trained on anatomical shape priors. This dense motion model forms the input to a denoising autoencoder network (4Dsurvival), a hybrid network consisting of an autoencoder that learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimized for survival prediction. To handle right-censored survival outcomes, our network uses a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < .0001) for our model, C = 0.73 (95% CI: 0.68-0.78), than for the human benchmark of C = 0.59 (95% CI: 0.53-0.65). This work demonstrates how a complex computer-vision task using high-dimensional medical image data can efficiently predict human survival.
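Harrell's C-index, the accuracy measure reported above, counts how often the model ranks the patient who died earlier as higher-risk, restricted to pairs that are comparable under right-censoring. A minimal implementation with illustrative toy data:

```python
import numpy as np

def harrells_c_index(risk, time, event):
    """Harrell's C-index with right-censoring: among comparable pairs
    (the earlier time must be an observed event, not a censoring),
    count how often the subject who died earlier was assigned the
    higher risk; ties in risk count 1/2."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:   # i died before j
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1])          # third subject is censored
perfect = np.array([4.0, 3.0, 2.0, 1.0])   # risk ordering matches death order
c = harrells_c_index(perfect, time, event)
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect concordance, so the reported C = 0.73 versus 0.59 is a substantial gain over the human benchmark.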