Segmentation of lung tissue in computed tomography (CT) images is a precursor to most pulmonary image analysis applications. Semantic segmentation methods using deep learning have exhibited top-tier performance in recent years. This paper presents a fully automatic method for identifying the lungs in three-dimensional (3D) pulmonary CT images, which we call Lung-Net. We conjecture that a significantly deeper network with InceptionV3 units can achieve a better feature representation of lung CT images without increasing the model complexity in terms of the number of trainable parameters. The method has three main advantages. First, a U-Net architecture with InceptionV3 blocks is developed to resolve the problems of performance degradation and parameter overload. Second, using information from consecutive slices, a new data structure is created to increase generalization potential, allowing more discriminative features to be extracted by making the data representation as efficient as possible. Finally, the robustness of the proposed segmentation framework was quantitatively assessed using one public database for training and testing (LUNA16) and two public databases (the ISBI VESSEL12 challenge and the CRPF dataset) for testing only; the three databases consist of 700, 23, and 40 CT images, respectively, acquired with different scanners and protocols. Based on the experimental results, the proposed method achieved competitive results over existing techniques, with Dice coefficients of 99.7, 99.1, and 98.8 for the LUNA16, VESSEL12, and CRPF datasets, respectively. For segmenting lung tissue in CT images, the proposed model is efficient in terms of time and parameters and outperforms other state-of-the-art methods. Additionally, the model is publicly accessible via a graphical user interface.
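As a minimal sketch of the core architectural idea, the block below shows how an Inception-style unit with parallel branches could replace a plain double convolution inside a U-Net encoder; the channel splits and kernel sizes are illustrative assumptions, not the authors' exact Lung-Net configuration.

```python
# Sketch (PyTorch): an Inception-style block usable in place of a plain
# double-convolution in a U-Net encoder. Channel splits and kernel sizes
# are illustrative assumptions, not the exact Lung-Net configuration.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4  # split capacity across four parallel branches
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(  # 5x5 receptive field via two stacked 3x3 convs
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1))
        self.pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1))
        self.bn = nn.BatchNorm2d(branch_ch * 4)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenating parallel branches mixes receptive-field sizes
        # without the parameter cost of one large kernel.
        out = torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)
        return self.act(self.bn(out))
```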
Recent research on COVID-19 has shown that CT imaging provides useful information for assessing disease progression and assisting diagnosis, as well as for understanding the disease. A growing number of studies propose using deep learning to provide fast and accurate quantification of COVID-19 from chest CT scans. The main tasks of interest are the automatic segmentation of the lungs and lung lesions in chest CT scans of confirmed or suspected COVID-19 patients. In this study, we compare twelve deep learning algorithms, including open-source and in-house developed algorithms, using a multi-center dataset. The results show that ensembling different methods improves overall test-set performance for lung segmentation, binary lesion segmentation, and multiclass lesion segmentation, yielding mean Dice scores of 0.982, 0.724, and 0.469, respectively. The resulting binary lesion segmentations have a mean absolute volume error of 91.3 ml. In general, the task of distinguishing between different lesion types is more difficult, with a mean absolute volume difference of 152 ml and mean Dice scores of 0.369 and 0.523 for consolidation and ground-glass opacity, respectively. All methods perform binary lesion segmentation with a mean volume error that is better than visual assessment by human raters, suggesting these methods are mature enough for large-scale evaluation for use in clinical practice.
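The reported gains come from merging the individual methods' outputs; a minimal sketch of two common ways to do that (probability averaging and majority voting) follows — the 0.5 threshold is an illustrative assumption, not the study's protocol.

```python
# Sketch: fusing per-voxel segmentation outputs from several models.
# The threshold value is an illustrative assumption.
import numpy as np

def ensemble_probs(prob_maps, threshold=0.5):
    """prob_maps: list of float arrays in [0, 1], all the same shape."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

def majority_vote(masks):
    """masks: list of binary arrays; keep voxels most models agree on."""
    votes = np.sum(np.stack(masks, axis=0), axis=0)
    return (votes > len(masks) / 2).astype(np.uint8)
```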
This paper proposes an automatic segmentation method for infected and normal lung regions in COVID-19 patients. Since December 2019, the novel coronavirus disease 2019 (COVID-19) has spread across the world, significantly affecting economic activity and daily life. To diagnose the large number of infected patients, computer-aided diagnosis is needed. Chest CT is effective for diagnosing viral pneumonia, including COVID-19, and diagnostic assistance for COVID-19 requires quantitative, computerized methods for analyzing lung condition from CT volumes. This paper proposes an automatic segmentation method for lung infection and normal regions in CT volumes of COVID-19 cases using a fully convolutional network (FCN). In diagnosing lung diseases including COVID-19, analyzing the condition of normal and infected regions of the lung is important, and our method identifies both in CT volumes. To segment infected regions of various shapes and sizes, we introduce dense pooling connections and dilated convolutions into our FCN. We applied the method to CT volumes of COVID-19 cases ranging from mild to severe, and the proposed method correctly segmented normal and infected regions in the lungs. The Dice scores for normal and infected regions were 0.911 and 0.753, respectively.
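To illustrate the dilated-convolution idea mentioned above, here is a minimal sketch of a stack of dilated convolutions that widens the receptive field without downsampling; the dilation rates and channel count are illustrative assumptions, not the paper's FCN design.

```python
# Sketch (PyTorch): stacked dilated convolutions enlarge the receptive field
# without pooling away resolution — useful for lesions of varying size.
# Rates and channel counts are illustrative assumptions.
import torch.nn as nn

dilated_stack = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2),  # ~7x7 context
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4),  # ~15x15 context
    nn.ReLU(inplace=True),
)
```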
Lung cancer is one of the deadliest cancers, and part of its diagnosis and treatment depends on accurate delineation of the tumor. Human-centered segmentation, which is currently the most common approach, is subject to inter-observer variability and is also time-consuming, considering that only experts can provide annotations. Automatic and semi-automatic tumor segmentation methods have recently shown promising results. However, because different researchers have validated their algorithms on various datasets and with various performance metrics, reliably evaluating these methods remains an open challenge. The goal of the Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS) benchmark, created through the 2018 IEEE Video and Image Processing (VIP) Cup competition, is to provide a unique dataset and pre-defined metrics so that different researchers can develop and evaluate their methods in a unified fashion. The 2018 VIP Cup began with global participation from 42 countries accessing the competition data. At the registration stage, 129 members formed 28 teams from 10 countries, of which 9 teams reached the final stage and 6 teams successfully completed all required tasks. In a nutshell, all algorithms proposed during the competition were based on deep learning models combined with false-positive reduction techniques. The methods developed by the three finalists show promising tumor segmentation results and lead to the conclusion that increased effort should be devoted to reducing the false-positive rate. This competition manuscript presents an overview of the VIP Cup challenge, along with the proposed algorithms and results.
With advances in deep learning methods such as deep convolutional neural networks, residual neural networks, and adversarial networks, the U-Net architecture has become the most widely exploited approach for biomedical image segmentation, automating the identification and detection of target regions and sub-regions. In recent studies, U-Net-based approaches have shown state-of-the-art performance in different applications for developing computer-aided diagnosis systems for the early diagnosis and treatment of diseases such as brain tumors, lung cancer, Alzheimer's, and breast cancer, using various imaging modalities. This article presents the success of these approaches by describing the U-Net framework, followed by a comprehensive analysis of U-Net variants through 1) inter-model categorization and 2) intra-model categorization, to establish better insights into the associated challenges and solutions. Moreover, this article highlights the contribution of U-Net-based frameworks to the ongoing pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19. Finally, the strengths and similarities of these U-Net variants, as well as the challenges involved in biomedical image segmentation, are analyzed to uncover promising future research directions in this area.
We present a novel deep learning approach for classifying the lung CTs of COVID-19 patients. Specifically, we partition each scan into healthy lung tissue, non-lung regions, and two distinct yet visually similar pathological lung tissues, namely ground-glass opacity and consolidation. This is achieved through a unique end-to-end hierarchical network architecture and ensemble learning, which contribute to the segmentation and provide a measure of segmentation uncertainty. The proposed framework achieves competitive results and outstanding generalization capability on three COVID-19 datasets. Our method ranked second in the public Kaggle competition on COVID-19 CT image segmentation. Moreover, the segmentation uncertainty regions are shown to correspond to disagreements between the manual annotations of two different radiologists. Finally, preliminary promising correspondence results are shown on our private dataset when comparing patients' COVID-19 severity scores (based on clinical measures) with the segmented lung pathologies. Code and data are available at our repository: https://github.com/talbenha/covid-seg
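One generic way to derive a segmentation uncertainty measure from an ensemble, as described above, is the per-pixel predictive entropy of the averaged member probabilities; the sketch below shows this recipe and is an assumption on our part, not the authors' exact formulation.

```python
# Sketch: ensemble prediction plus a per-pixel uncertainty map via predictive
# entropy. A generic recipe, not the authors' exact formulation.
import numpy as np

def ensemble_uncertainty(prob_maps):
    """prob_maps: list of (H, W, n_classes) softmax outputs, one per member."""
    stacked = np.stack(prob_maps, axis=0)   # (n_members, H, W, C)
    mean_prob = stacked.mean(axis=0)        # ensemble prediction
    # Entropy is high where members disagree or are individually unsure.
    entropy = -np.sum(mean_prob * np.log(mean_prob + 1e-8), axis=-1)
    return mean_prob.argmax(axis=-1), entropy
```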
Machine learning and computer vision techniques have grown rapidly in recent years owing to their automation, suitability, and ability to generate astounding results. Hence, in this paper we survey the key studies published between 2014 and 2022, showcasing the different machine learning algorithms researchers have used to segment the liver, hepatic tumors, and hepatic vasculature structures. We divide the surveyed studies according to the tissue of interest (liver parenchyma, hepatic tumors, or hepatic vessels), highlighting studies that tackled multiple tasks simultaneously. Additionally, the machine learning algorithms are classified as supervised or unsupervised, and further partitioned when the amount of work belonging to a certain scheme is significant. Moreover, the different datasets and challenges found in the literature and on websites containing masks of the aforementioned tissues are thoroughly discussed, highlighting the original contributions of the organizers and of other researchers. Likewise, metrics overused in the literature are mentioned in our review, stressing their relevance to the task at hand. Finally, key challenges and future directions are emphasized for innovative researchers to tackle, exposing gaps that need addressing, such as the scarcity of studies on vessel segmentation challenges and why this absence needs to be dealt with sooner rather than later.
Various structures in human physiology follow a specific morphology, often expressing complexity at very fine scales. Examples of such structures are the thoracic airways, the retinal blood vessels, and the hepatic blood vessels. Large collections of 2D and 3D images, in which the spatial arrangement of these structures can be observed, have been made available by medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and optical coherence tomography (OCT). The segmentation of these structures in medical imaging is of great importance, since the analysis of the structures provides insights into disease diagnosis, treatment planning, and prognosis. Manually labeling extensive data by radiologists is often time-consuming and error-prone. As a result, automated or semi-automated computational models have become a popular research field in medical imaging over the past two decades, and many computational models have been developed to date. In this survey, we aim to provide a comprehensive review of the currently publicly available datasets, segmentation algorithms, and evaluation metrics. In addition, current challenges and future research directions are discussed.
Live-cell segmentation from bright-field light microscopy images is challenging due to the complexity of the images and the temporal variation of living cells. Recently developed deep learning (DL)-based methods have become popular in medical and microscopy image segmentation tasks owing to their success and promising results. The main objective of this paper is to develop a deep learning, U-Net-based method to segment live cells of the HeLa line in bright-field transmitted light microscopy. To find the most suitable architecture for our dataset, a residual attention U-Net is proposed and compared with attention and simple U-Net architectures. The attention mechanism highlights salient features and suppresses activations in irrelevant image regions; the residual mechanism overcomes the vanishing gradient problem. The mean scores on our dataset reached 0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention U-Net, respectively. The most accurate semantic segmentation results, in both the mean and Dice metrics, were achieved by applying the residual and attention mechanisms together. A watershed method was then applied to this best (residual attention) semantic segmentation result, yielding an instance segmentation with specific information for each cell.
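A minimal sketch of the watershed post-processing step follows: a distance transform of the binary semantic mask provides per-cell markers, and the watershed splits touching cells into instances. The marker strategy and parameter values are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: splitting a binary semantic mask into individual cells with a
# marker-based watershed. Parameter values are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_cells(binary_mask, min_distance=20):
    """binary_mask: 2D bool/int array; returns an integer instance label map."""
    distance = ndi.distance_transform_edt(binary_mask)
    # Local maxima of the distance map act as one marker per cell.
    coords = peak_local_max(distance, min_distance=min_distance,
                            labels=binary_mask)
    markers = np.zeros(distance.shape, dtype=np.int32)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary_mask)
```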
Accurate airway extraction from computed tomography (CT) images is a critical step for planning navigation bronchoscopy and for quantitative assessment of airway-related chronic obstructive pulmonary disease (COPD). Existing methods struggle to segment the airway sufficiently, especially the high-generation airways, under the constraint of limited labels, and cannot meet the demands of clinical use in COPD. We propose a novel two-stage 3D contextual transformer-based U-Net for airway segmentation using CT images. The method consists of two stages, performing initial and refined airway segmentation. The two stages share the same subnetwork, with different airway masks as input. A contextual transformer block is applied in both the encoder and decoder paths of the subnetwork to effectively produce high-quality airway segmentation. In the first stage, the total airway mask and CT images are provided to the subnetwork; in the second stage, the intrapulmonary airway mask and the corresponding CT scans are provided. The predictions of the two stages are then merged as the final prediction. Extensive experiments were performed on an in-house dataset and multiple public datasets. Quantitative and qualitative analyses demonstrate that our proposed method extracts many more branches and greater tree length while achieving state-of-the-art airway segmentation performance. The code is available at https://github.com/zhaozsq/airway_segmentation.
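The abstract states only that the two stages' predictions are "merged"; the simplest reading is a per-voxel union of the two masks, sketched below as an assumption on our part rather than the authors' confirmed fusion rule.

```python
# Sketch: fusing the two stages' binary outputs by per-voxel union.
# This merge rule is an assumption; the paper only says "merged".
import numpy as np

def merge_stage_predictions(stage1_mask, stage2_mask):
    """Both masks: binary volumes of identical shape."""
    return np.logical_or(stage1_mask, stage2_mask).astype(np.uint8)
```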
Lung cancer is a leading cause of death in most countries of the world. Since prompt diagnosis of tumors allows oncologists to discern their nature, type, and mode of treatment, tumor detection and segmentation from CT scan images is a critical research area worldwide. This paper approaches lung tumor segmentation by applying a two-dimensional discrete wavelet transform (DWT) on the LOTUS dataset for more meticulous texture analysis, while integrating information from neighboring CT slices before feeding the result into a deeply supervised MultiResUNet model. While training the network, variations in learning rate, decay, and optimization algorithms led to different Dice coefficients, whose detailed statistics are included in this paper. We also discuss the challenges of this dataset and how we chose to overcome them. In essence, this study aims to maximize the success rate of predicting tumor regions from two-dimensional CT scan slices by experimenting with multiple suitable networks, resulting in a Dice coefficient of 0.8472.
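A minimal sketch of the two preprocessing ideas follows: a 2D DWT of a CT slice for texture sub-bands, and stacking neighboring slices as channels. The wavelet family ('haar') and the 3-slice window are illustrative assumptions.

```python
# Sketch: 2D discrete wavelet transform of a CT slice plus neighbor-slice
# stacking. Wavelet family and window size are illustrative assumptions.
import numpy as np
import pywt

def dwt_subbands(ct_slice):
    """Return the approximation and detail sub-bands of one 2D slice."""
    cA, (cH, cV, cD) = pywt.dwt2(ct_slice, 'haar')
    return cA, cH, cV, cD

def stack_neighbors(volume, i):
    """Stack slice i with its neighbors as a 3-channel input, (C, H, W)."""
    lo, hi = max(i - 1, 0), min(i + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[i], volume[hi]], axis=0)
```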
Lung cancer is a severe menace to human health: millions of people die because of late diagnosis of cancer, so it is vital to detect the disease as early as possible. Computerized tomography (CT) chest scan analysis is considered one of the efficient solutions for detecting and classifying lung nodules. The need for highly accurate analysis of lung CT scan images is one of the crucial challenges in detecting and classifying lung cancer. A new long short-term memory (LSTM)-based deep fusion structure is introduced, in which texture features computed from lung nodules through new volumetric grey-level co-occurrence matrix (GLCM) computations are applied to classify the nodules as benign, malignant, or ambiguous. An improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules; Otsu-WSA thresholding overcomes restrictions present in previous thresholding methods. Extended experiments assess this fusion structure by considering 2D-slice fusion based on 2D-GLCM computations and an approximation of the 3D-GLCM with a volumetric 2.5D-GLCM computation-based LSTM fusion structure. The proposed methods are trained and assessed on the LIDC-IDRI dataset, obtaining accuracy, sensitivity, and specificity of 94.4%, 91.6%, and 95.8% for 2D-GLCM fusion; 97.33%, 96%, and 98% for 2.5D-GLCM fusion; and 98.7%, 98%, and 99% for 3D-GLCM fusion. The obtained results and analysis indicate that the WSA-Otsu method requires less execution time and yields a more accurate thresholding process, and that 3D-GLCM-based LSTM fusion outperforms its counterparts.
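For orientation, here is a minimal sketch of standard 2D GLCM texture features with scikit-image, plus a simple "2.5D" approximation that averages per-slice features across a nodule volume; the quantization level, distances, angles, and the averaging scheme are illustrative assumptions, not the paper's exact volumetric GLCM computation.

```python
# Sketch: 2D GLCM texture features per slice, and a simple 2.5D approximation
# that averages them over adjacent slices. Parameters are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features_2d(slice_uint8):
    """slice_uint8: 2D uint8 array (intensities quantized to 0..255)."""
    glcm = graycomatrix(slice_uint8, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.array([graycoprops(glcm, p).mean() for p in props])

def glcm_features_25d(nodule_volume_uint8):
    # Average per-slice 2D features across the volume.
    return np.mean([glcm_features_2d(s) for s in nodule_volume_uint8], axis=0)
```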
Lung cancer is the leading cause of cancer-related mortality. Although new technologies such as image segmentation are crucial for improved detection and earlier diagnosis, there are still significant challenges in treating the disease. In particular, despite an increased rate of curative resection, many postoperative patients still develop recurrent lesions. For this reason, there is a great need for prognostic tools that can more accurately predict a patient's risk of recurrence. In this paper, we explore the use of convolutional neural networks (CNNs) for segmentation and recurrence risk prediction in preoperative computed tomography (CT) images. First, expanding on recent progress in medical image segmentation, a residual U-Net is used to localize and characterize each nodule. The identified tumors are then passed to a second CNN for recurrence risk prediction. The final results of the system are produced by a random forest classifier that synthesizes the predictions of the second network with clinical attributes. The segmentation stage uses the LIDC-IDRI dataset and achieves a Dice score of 70.3%. The recurrence risk stage uses the NLST dataset from the National Cancer Institute and achieves an AUC of 73.0%. Our proposed framework demonstrates, first, that automated nodule segmentation methods can generalize to provide pipelines for a variety of multitask systems and, second, that deep learning and image processing have the potential to improve current prognostic tools. To the best of our knowledge, this is the first fully automated segmentation and recurrence risk prediction system.
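A minimal sketch of the final fusion step follows: a random forest that combines the second CNN's risk score with tabular clinical attributes. All feature names, shapes, and the placeholder data are illustrative assumptions.

```python
# Sketch: random forest fusing a CNN risk score with clinical attributes.
# Placeholder data stands in for real CNN outputs and clinical variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
cnn_risk = rng.random((200, 1))             # placeholder CNN risk scores
clinical = rng.random((200, 5))             # placeholder: age, stage, etc.
labels = rng.integers(0, 2, 200)            # placeholder recurrence labels

X = np.hstack([cnn_risk, clinical])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
recurrence_prob = clf.predict_proba(X)[:, 1]  # per-patient recurrence risk
```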
COVID-19 has become a global pandemic and still poses severe health risks to the public. Accurate and efficient segmentation of pneumonia lesions in CT scans is vital for treatment decision-making. We propose a novel unsupervised approach using a cycle-consistent generative adversarial network (CycleGAN) that automates and accelerates the lesion delineation process. The workflow comprises lung volume segmentation, "synthetic" healthy lung generation, subtraction of infected and healthy images, and binary lesion mask creation. The lung volume is first delineated using a pre-trained U-Net and serves as input to the subsequent networks. A CycleGAN is developed to generate synthetic "healthy" lung CT images from infected lung images. Afterward, the pneumonia lesions are extracted by subtracting the synthetic "healthy" lung CT images from the "infected" lung CT images. A median filter and K-means clustering are then applied to contour the lesions. The automated segmentation method was validated on two public datasets (Coronacases and Radiopedia). The Dice coefficients reached 0.748 and 0.730 for the Coronacases and Radiopedia datasets, respectively. Meanwhile, the precision and sensitivity of lesion segmentation were 0.813 and 0.735 for the Coronacases dataset, and 0.773 and 0.726 for the Radiopedia dataset. The performance is comparable to existing supervised segmentation networks and better than that of previous unsupervised ones. The proposed unsupervised segmentation method achieves high accuracy and efficiency in automatic COVID-19 lesion delineation. The segmentation results can serve as a baseline for further manual modification and as a quality-assurance tool for lesion diagnosis. Furthermore, owing to its unsupervised nature, the result is not influenced by physicians' experience, which is otherwise crucial for supervised methods.
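A minimal sketch of the post-GAN steps named in the workflow follows: subtract the synthetic "healthy" image from the infected one, smooth with a median filter, and binarize the residual with 2-cluster K-means. The filter size and cluster-selection rule are illustrative assumptions.

```python
# Sketch: residual = infected - synthetic healthy; median filter; K-means
# binarization. Filter size and cluster rule are assumptions.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

def lesion_mask(infected, synthetic_healthy, filter_size=5):
    residual = median_filter(infected - synthetic_healthy, size=filter_size)
    km = KMeans(n_clusters=2, n_init=10).fit(residual.reshape(-1, 1))
    # Take the cluster with the higher mean residual as "lesion".
    lesion_cluster = int(np.argmax(km.cluster_centers_))
    return (km.labels_.reshape(residual.shape) == lesion_cluster).astype(np.uint8)
```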
In this study, we propose a lung nodule detection scheme which fully incorporates the clinical workflow of radiologists. In particular, we exploit bi-directional maximum intensity projection (MIP) images of various thicknesses (i.e., 3, 5, and 10 mm) along with a 3D patch of the CT scan, consisting of 10 adjacent slices, fed into a self-distillation-based Multi-Encoders Network (MEDS-Net). The proposed architecture first condenses the 3D patch input to three channels using a dense block whose dense units effectively examine nodule presence in the 2D axial slices. This condensed information, along with the forward and backward MIP images, is fed to three different encoders to learn the most meaningful representation, which is forwarded to the decoder block at various levels. At the decoder block, we employ a self-distillation mechanism by attaching a distillation block that contains five lung nodule detectors; this expedites convergence and improves the learning ability of the proposed architecture. Finally, the proposed scheme reduces false positives by complementing the main detector with the auxiliary detectors. The proposed scheme has been rigorously evaluated on 888 scans of the LUNA16 dataset and obtained a CPM score of 93.6%. The results demonstrate that incorporating bi-directional MIP images enables MEDS-Net to effectively distinguish nodules from their surroundings, helping to achieve sensitivities of 91.5% and 92.8% at false positive rates of 0.25 and 0.5 per scan, respectively.
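As a minimal sketch of the MIP input described above, the function below computes a slab maximum intensity projection of configurable thickness in the forward or backward direction around a slice of interest; converting millimeter thickness to a slice count from the scan's slice spacing is left as an assumption.

```python
# Sketch: forward/backward slab maximum intensity projections (MIPs).
# Converting mm thickness to n_slices from slice spacing is an assumption.
import numpy as np

def slab_mip(volume, center, n_slices, forward=True):
    """volume: (Z, H, W) array; project n_slices from `center` in one direction."""
    if forward:
        slab = volume[center:center + n_slices]
    else:
        slab = volume[max(center - n_slices + 1, 0):center + 1]
    return slab.max(axis=0)  # per-pixel maximum across the slab
```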
The novel SARS-CoV-2 pandemic, also known as COVID-19, has been spreading worldwide, causing rampant loss of life. Medical imaging such as CT and X-ray plays an important role in diagnosing patients by presenting a visual representation of organ functioning. However, analyzing such scans is a tedious and time-consuming task for any radiologist. Emerging deep learning technologies have displayed their strength in enabling faster diagnosis and analysis of diseases and viruses such as COVID-19. In this paper, an automated deep-learning-based model, the COVID-19 Hierarchical Segmentation Network (CHS-Net), is proposed, functioning as a semantic hierarchical segmenter that identifies the COVID-19 infected regions from the lung contours in CT medical imaging using two cascaded Residual Attention Inception U-Net (RAIU-Net) models. RAIU-Net comprises a residual inception U-Net model with a spectral spatial and depth attention (SSD) network, developed with depthwise separable convolutions and hybrid pooling (max and spectral pooling) in the contracting and expanding phases to efficiently encode and decode semantic and multi-resolution information. CHS-Net is trained with a segmentation loss function that is the average of the binary cross-entropy loss and the Dice loss, to penalize false negative and false positive predictions. The approach is compared with recently proposed approaches and evaluated using standard metrics such as accuracy, precision, specificity, recall, Dice coefficient, and Jaccard similarity, along with visualized interpretations of the model predictions using GradCam++ and uncertainty maps. With extensive trials, it is observed that the proposed approach outperforms the recently proposed approaches and effectively segments the COVID-19 infected regions in the lungs.
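The stated loss, the average of binary cross-entropy and Dice loss, is sketched below in PyTorch; the smoothing constant is an assumption, since the abstract does not specify one.

```python
# Sketch (PyTorch): average of binary cross-entropy and Dice loss, as stated.
# The smoothing constant is an assumption.
import torch
import torch.nn.functional as F

def bce_dice_loss(logits, target, smooth=1.0):
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = (2.0 * inter + smooth) / (prob.sum() + target.sum() + smooth)
    return 0.5 * (bce + (1.0 - dice))  # average of the two terms
```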
This paper presents our solution for the 2nd COVID-19 Severity Detection Competition. The task is to distinguish the Mild, Moderate, Severe, and Critical grades in COVID-19 chest CT images. In our approach, we devise a novel infection-aware 3D Contrastive Mixup Classification network for severity grading. Specifically, we train two segmentation networks to extract first the lung region and then the inner lesion region. The lesion segmentation mask serves as complementary information for the original CT slices. To relieve the issue of imbalanced data distribution, we further improve the advanced Contrastive Mixup Classification network with a weighted cross-entropy loss. On the COVID-19 severity detection leaderboard, our approach won first place with a Macro F1 Score of 51.76%, significantly outperforming the baseline method by over 11.46%.
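The weighted cross-entropy mentioned above is sketched below with inverse-frequency class weights; the weighting scheme and the placeholder counts are assumptions, as the paper only states that a weighted cross-entropy loss is used.

```python
# Sketch (PyTorch): class-weighted cross-entropy for the four severity grades.
# Inverse-frequency weights and the counts are illustrative assumptions.
import torch
import torch.nn as nn

class_counts = torch.tensor([85., 62., 33., 20.])  # placeholder per-grade counts
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)    # rarer grades weigh more
```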
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19. However, several challenges remain in developing such AI systems. 1) Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints. 2) Existing 3D CT segmentation methods focus on single-scale representations, which do not achieve multiple receptive field sizes on the 3D volume. 3) The sudden outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training deep models. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multi-scale information along different dimensions of the input feature maps and impose supervision on multiple predictions from different CNN layers. Second, we use this MDA-CNN as the basic network in a novel dual multi-scale mean teacher network (DM$^2$T-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multi-scale information. Our DM$^2$T-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, computing a multi-scale consistency loss on unlabeled data, which is then added to the supervised loss on the labeled data from the multiple predictions of the MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
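The two standard mean-teacher ingredients referenced above are sketched below: an exponential moving average (EMA) update of the teacher weights and an MSE consistency loss between student and teacher predictions on unlabeled data. The EMA decay and the use of MSE are generic assumptions, not the exact DM²T-Net formulation.

```python
# Sketch (PyTorch): mean-teacher EMA weight update and a consistency loss.
# Decay value and MSE choice are generic assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def consistency_loss(student_logits, teacher_logits):
    # Penalize disagreement on unlabeled data; teacher gives no gradient.
    return F.mse_loss(torch.sigmoid(student_logits),
                      torch.sigmoid(teacher_logits).detach())
```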
Medical image segmentation often requires segmenting multiple elliptical objects in a single image. This includes, among other tasks, segmenting vessels such as the aorta in axial CTA slices. In this paper, we present a general approach for improving the semantic segmentation performance of neural networks in these tasks and validate our approach on an aortic segmentation task. We use a cascade of two neural networks, where one performs a coarse segmentation based on the U-Net architecture and the other performs the final segmentation on polar image transformations of the input. Connected-component analysis of the coarse segmentation is used to construct the polar transforms, and predictions on multiple transforms of the same image are fused using hysteresis thresholding. We show that this approach improves aortic segmentation performance without requiring complex neural network architectures. In addition, we show that our approach improves robustness and pixel-level recall while achieving segmentation performance in line with the state of the art.
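A minimal sketch of the polar-transform and fusion steps follows, using scikit-image: connected components of the coarse mask supply the centroids for the polar warps, and averaged predictions are binarized with hysteresis thresholding. The radius and threshold values are illustrative assumptions.

```python
# Sketch: polar patches centered on coarse-mask components, plus hysteresis
# fusion of multiple predictions. Radius and thresholds are assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.transform import warp_polar
from skimage.filters import apply_hysteresis_threshold

def polar_patches(image, coarse_mask, radius=64):
    labels, n = ndi.label(coarse_mask)
    centers = ndi.center_of_mass(coarse_mask, labels, range(1, n + 1))
    # One polar image per detected component, centered on its centroid.
    return [warp_polar(image, center=c, radius=radius) for c in centers]

def fuse_predictions(prob_maps, low=0.3, high=0.6):
    # Hysteresis keeps weak responses only when connected to strong ones.
    return apply_hysteresis_threshold(np.mean(prob_maps, axis=0), low, high)
```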
Achieving accurate and automated tumor segmentation plays an important role in both clinical practice and radiomics research. Segmentation in medicine is now often performed manually by experts, which is a laborious, expensive, and error-prone task. Manual annotation relies heavily on the experience and knowledge of these experts, and there is much intra- and interobserver variation. Therefore, it is of great significance to develop a method that can automatically segment tumor target regions. In this paper, we propose a deep learning segmentation method based on multimodal positron emission tomography-computed tomography (PET-CT), which combines the high sensitivity of PET with the precise anatomical information of CT. We design an improved spatial attention network (ISA-Net) to increase the accuracy of PET or CT in detecting tumors; it uses multi-scale convolution operations to extract feature information and can highlight tumor-region location information while suppressing non-tumor location information. In addition, our network uses dual-channel inputs in the encoding stage and fuses them in the decoding stage, which can exploit the differences and complementarities between PET and CT. We validated the proposed ISA-Net method on two clinical datasets, a soft tissue sarcoma (STS) dataset and a head and neck tumor (HECKTOR) dataset, and compared it with other attention methods for tumor segmentation. DSC scores of 0.8378 on the STS dataset and 0.8076 on the HECKTOR dataset show that ISA-Net achieves better segmentation performance and better generalization. Conclusions: The method proposed in this paper performs multimodal medical image tumor segmentation and can effectively utilize the differences and complementarities of different modalities. With proper adjustment, the method can also be applied to other multimodal or single-modal data.
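As a minimal sketch in the spirit of the description above, the module below uses parallel multi-scale convolutions to produce a per-pixel attention map that amplifies tumor locations and suppresses the rest; channel counts and kernel sizes are illustrative assumptions, not the exact ISA-Net design.

```python
# Sketch (PyTorch): a multi-scale spatial attention gate. Channel counts and
# kernel sizes are illustrative assumptions, not the exact ISA-Net design.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.scale3 = nn.Conv2d(in_ch, in_ch // 2, kernel_size=3, padding=1)
        self.scale5 = nn.Conv2d(in_ch, in_ch // 2, kernel_size=5, padding=2)
        self.to_map = nn.Conv2d(in_ch, 1, kernel_size=1)

    def forward(self, x):
        feats = torch.cat([self.scale3(x), self.scale5(x)], dim=1)
        attn = torch.sigmoid(self.to_map(feats))  # (N, 1, H, W) weight map
        return x * attn                           # reweight input features
```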