Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence imaging technology that, with its subcellular-scale resolution, has the potential to improve intraoperative precision, extend the extent of resection, and tailor surgery for malignant invasive brain tumors. Despite its promising diagnostic potential, interpreting the gray-tone fluorescence images can be difficult for untrained users. In this review, we provide a detailed description of bioinformatics methods for CLE image analysis that have begun to help neurosurgeons and pathologists rapidly connect on-the-fly imaging, pathology, and surgical observations into a cohesive theranostic framework. We present an overview of, and discuss, deep learning models for the automated detection of diagnostic CLE images, along with the influence of various training regimens and ensemble modeling on deep learning prediction models. The two major approaches reviewed include models that can automatically classify CLE images into diagnostic/nondiagnostic, glioma/nonglioma, and tumor/injury/normal categories, and models that can localize histological features on CLE images using weakly supervised methods. We also briefly review progress in deep learning approaches for CLE image analysis in other organs. The significant advances examined and the improved precision of automated diagnostic frameworks will enhance the diagnostic potential of CLE, improve surgical workflow, and support its integration into brain tumor surgery. Such technologies and bioinformatics analyses help increase the precision, personalization, and theranostic capability of brain tumor treatment.
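As a concrete illustration of the ensemble modeling mentioned above, here is a minimal soft-voting sketch in Python; the class names and per-model probabilities are hypothetical, not taken from the reviewed models.

```python
# Sketch: soft-voting ensemble for CLE image classification.
# Class names and probability values below are illustrative assumptions.

def ensemble_predict(prob_lists):
    """Average per-class probabilities across models (soft voting)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    label = max(range(n_classes), key=lambda c: avg[c])
    return label, avg

CLASSES = ["diagnostic", "nondiagnostic"]  # assumed two-class setup

# Three hypothetical models scoring the same CLE frame:
model_outputs = [[0.80, 0.20], [0.55, 0.45], [0.70, 0.30]]
label, avg = ensemble_predict(model_outputs)
```

Averaging probabilities rather than hard votes lets a confident model outweigh two lukewarm ones, which is one reason ensembling tends to stabilize per-frame predictions.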
Medical imaging is expanding globally at an unprecedented rate1,2, leading to an ever-expanding quantity of data that requires human expertise and judgement to interpret and triage. In many clinical specialities there is a relative shortage of this expertise to provide timely diagnosis and referral. For example, in ophthalmology, the widespread availability of optical coherence tomography (OCT) has not been matched by the availability of expert humans to interpret scans and refer patients to the appropriate clinical care3. This problem is exacerbated by the marked increase in prevalence of sight-threatening diseases for which OCT is the gold standard of initial assessment4-7. Artificial intelligence (AI) provides a promising solution for such medical image interpretation and triage, but despite recent breakthrough studies in which expert-level performance on two-dimensional photographs in preclinical settings has been demonstrated8,9, prospective clinical application of this technology remains stymied by three key challenges. First, AI (typically trained on hundreds of thousands of examples from one canonical dataset) must generalize to new populations and devices without a substantial loss of performance, and without prohibitive data requirements for retraining. Second, AI tools must be applicable to real-world scans, problems and pathways, and designed for clinical evaluation and deployment. Finally, AI tools must match or exceed the performance of human experts in such real-world situations. Recent work applying AI to OCT has shown promise in resolving some of these criteria in isolation, but has not yet shown clinical applicability by resolving all three.
The morphological interpretation of histologic sections forms the basis of diagnosis and prognostication for cancer. In the diagnosis of carcinomas, pathologists perform a semiquantitative analysis of a small set of morphological features to determine the cancer's histologic grade. Physicians use histologic grade to inform their assessment of a carcinoma's aggressiveness and a patient's prognosis. Nevertheless, the determination of grade in breast cancer examines only a small set of morphological features of breast cancer epithelial cells, which has been largely unchanged since the 1920s. A comprehensive analysis of automatically quantitated morphological features could identify characteristics of prognostic relevance and provide an accurate and reproducible means for assessing prognosis from microscopic image data. We developed the C-Path (Computational Pathologist) system to measure a rich quantitative feature set from the breast cancer epithelium and stroma (6642 features), including both standard morphometric descriptors of image objects and higher-level contextual, relational, and global image features. These measurements were used to construct a prognostic model. We applied the C-Path system to microscopic images from two independent cohorts of breast cancer patients [from the Netherlands Cancer Institute (NKI) cohort, n = 248, and the Vancouver General Hospital (VGH) cohort, n = 328]. The prognostic model score generated by our system was strongly associated with overall survival in both the NKI and the VGH cohorts (both log-rank P ≤ 0.001). This association was independent of clinical, pathological, and molecular factors. Three stromal features were significantly associated with survival, and this association was stronger than the association of survival with epithelial characteristics in the model. These findings implicate stromal morphologic structure as a previously unrecognized prognostic determinant for breast cancer.
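The survival association reported above (log-rank P ≤ 0.001) rests on the two-sample log-rank test, which can be sketched as follows; the toy survival data are invented purely for illustration.

```python
def logrank_statistic(times, events, groups):
    """Two-sample log-rank chi-square statistic.

    times:  event/censoring times
    events: 1 = event observed, 0 = censored
    groups: 0 or 1 group membership (e.g., low vs high model score)
    """
    data = list(zip(times, events, groups))
    event_times = sorted({t for t, e, _ in data if e == 1})
    o1 = e1 = var = 0.0
    for t in event_times:
        at_risk = [(ti, ei, gi) for ti, ei, gi in data if ti >= t]
        n = len(at_risk)
        n1 = sum(1 for _, _, gi in at_risk if gi == 1)
        d = sum(1 for ti, ei, _ in at_risk if ti == t and ei == 1)
        d1 = sum(1 for ti, ei, gi in at_risk if ti == t and ei == 1 and gi == 1)
        o1 += d1                      # observed events in group 1
        e1 += d * n1 / n              # expected under the null hypothesis
        if n > 1:                     # hypergeometric variance term
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (o1 - e1) ** 2 / var if var > 0 else 0.0

# Toy data: group 1 fails much earlier than group 0 (all events observed).
times  = [1, 2, 3, 10, 11, 12]
events = [1, 1, 1, 1, 1, 1]
groups = [1, 1, 1, 0, 0, 0]
stat = logrank_statistic(times, events, groups)
```

The statistic is compared against a chi-square distribution with one degree of freedom; values above 3.84 correspond to P < 0.05.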
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
Diabetic retinopathy (DR) and diabetic macular edema are common complications of diabetes that can lead to vision loss. Grading DR is a fairly complex process that requires the detection of fine features such as microaneurysms, intraretinal hemorrhages, and intraretinal microvascular abnormalities. As a result, there can be a considerable amount of grading variability. There are different methods of obtaining a reference standard and resolving disagreements among graders, and although adjudication until full consensus is generally believed to yield the best reference standard, the differences between the various approaches to resolving disagreement have not been widely appreciated. In this study, we examined the variability of different grading methods, the definition of the reference standard, and their effects on building deep learning models for the detection of diabetic eye disease. We found that a small set of adjudicated DR grades can substantially improve algorithm performance. The performance of the resulting algorithm was on par with that of US board-certified ophthalmologists and retina specialists.
Deep learning algorithms have been used to detect diabetic retinopathy (DR) with expert-level accuracy. This study aimed to validate one such algorithm in a large-scale clinical population and to compare the algorithm's performance with that of human graders. Gradable retinal images from 25,326 patients with diabetes, obtained through a community-based, nationwide screening program for DR in Thailand, were analyzed for DR severity and referable diabetic macular edema (DME). Grades adjudicated by an international panel of retina specialists served as the reference standard. For identifying referable disease across different severity levels of DR, deep learning significantly reduced the false negative rate (by 23%) at the cost of a slightly higher false positive rate (2%). Deep learning algorithms may serve as a valuable tool for DR screening.
Glaucoma is the leading cause of preventable, irreversible blindness worldwide. The disease can remain asymptomatic until it is severe, and an estimated 50%-90% of people with glaucoma remain undiagnosed. Thus, glaucoma screening is recommended for early detection and treatment. A cost-effective tool to detect glaucoma could expand healthcare access to a much larger patient population, but no such tool currently exists. We trained a deep learning (DL) algorithm using a retrospective dataset of 5,833 images, assessed for gradability, glaucomatous optic nerve head (ONH) features, and referable glaucoma risk. The resulting algorithm was validated using 2 separate datasets. For referable glaucoma risk, the algorithm had an AUC of 0.940 (95% CI, 0.922-0.955) in validation dataset "A" (1,205 images, 1 image/patient; 19% referable), in which images were adjudicated by a panel of fellowship-trained glaucoma specialists, and an AUC of 0.858 (95% CI, 0.836-0.878) in validation dataset "B" (17,593 images from 9,643 patients; 9.2% referable), which consisted of images from the Atlanta Veterans Affairs eye clinic's diabetic teleretinal screening program and used clinical referral decisions as the reference standard. Additionally, we found that a vertical cup-to-disc ratio of 0.7 or greater, neuroretinal rim notching, retinal nerve fiber layer defects, and bared circumlinear vessels contributed most to the assessment of referable glaucoma risk by both the glaucoma specialists and the algorithm. For individual glaucomatous ONH features, the algorithm's AUCs ranged between 0.608 and 0.977. The DL algorithm had significantly higher sensitivity than 6 of 10 graders (including 2 of 3 glaucoma specialists), with comparable or higher specificity relative to all graders. A DL algorithm trained only on fundus images can detect referable glaucoma risk with higher sensitivity than, and comparable specificity to, eye care providers.
Graphical Abstract Highlights: an artificial intelligence system using transfer learning techniques was developed; it effectively classified images for macular degeneration and diabetic retinopathy; it also accurately distinguished bacterial and viral pneumonia on chest X-rays; this has potential for generalized high-impact application in biomedical imaging. In Brief: Image-based deep learning classifies macular degeneration and diabetic retinopathy using retinal optical coherence tomography images and has potential for generalized applications in biomedical image interpretation and medical decision making. SUMMARY The implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. Here, we establish a diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases. Our framework utilizes transfer learning, which trains a neural network with a fraction of the data of conventional approaches. Applying this approach to a dataset of optical coherence tomography images, we demonstrate performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema. We also provide a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. We further demonstrate the general applicability of our AI system for diagnosis of pediatric pneumonia using chest X-ray images. This tool may ultimately aid in expediting the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment, resulting in improved clinical outcomes.
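Transfer learning as used above keeps a pretrained feature extractor fixed and fits only a small classifier head on its outputs. A minimal sketch of that second stage, with invented two-dimensional "features" standing in for OCT embeddings:

```python
import math

def train_linear_head(feats, labels, lr=0.5, epochs=200):
    """Fit a logistic-regression 'head' on frozen backbone features via SGD."""
    dim = len(feats[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                              # gradient of log-loss wrt z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical frozen-backbone embeddings and disease labels:
feats = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
labels = [1, 1, 0, 0]
w, b = train_linear_head(feats, labels)
```

Because only the small head is trained, far fewer labeled examples are needed than for training a full network from scratch, which is the core economy of the transfer-learning approach.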
Tumor proliferation is an important biomarker indicative of the prognosis of breast cancer patients. Assessment of tumor proliferation in a clinical setting is a highly subjective and labor-intensive task. Previous efforts to automate tumor proliferation assessment through image analysis have focused only on mitosis detection in predefined tumor regions. However, in a real-world scenario, automatic mitosis detection should be performed in whole-slide images (WSIs), and an automated method should be able to produce a tumor proliferation score given a WSI as input. To address this, we organized the 2016 TUmor Proliferation Assessment Challenge (TUPAC16) on the prediction of tumor proliferation scores from WSIs. The challenge dataset consisted of 500 training and 321 testing breast cancer histopathology WSIs. To ensure fair and independent evaluation, the ground truth was provided to challenge participants only for the training dataset. The first task of the challenge was to predict mitotic scores, i.e., to reproduce the manual method by which pathologists assess tumor proliferation. The second task was to predict, from the WSIs, the gene-expression-based PAM50 proliferation score. The best-performing automatic method for the first task achieved a quadratically weighted Cohen's kappa between the predicted scores and the ground truth of κ = 0.567, 95% CI [0.464, 0.671]. For the second task, the predictions of the top method had a Spearman correlation coefficient of r = 0.617, 95% CI [0.581, 0.651], with the ground truth. This was the first study to assess tumor proliferation from WSIs. Given the difficulty of the tasks and the weakly labeled nature of the ground truth, the achieved results are promising. Nevertheless, further research is needed to improve the practical utility of image analysis methods for this task.
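The two challenge metrics above, quadratically weighted Cohen's kappa and the Spearman correlation, can be computed as follows. This is a simplified sketch: the Spearman version omits tie correction, and the example scores are invented.

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Cohen's kappa with quadratic disagreement weights between two raters."""
    n = len(a)
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for i, j in zip(a, b):
        obs[i][j] += 1
    row = [sum(obs[i]) for i in range(n_classes)]
    col = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic weight
            num += w * obs[i][j]                       # observed disagreement
            den += w * row[i] * col[j] / n             # chance disagreement
    return 1.0 - num / den

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction, for simplicity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

kappa = quadratic_weighted_kappa([0, 1, 2], [0, 2, 2], n_classes=3)
rho = spearman_rho([1, 2, 3, 4], [2, 4, 6, 8])
```

The quadratic weights penalize large grade disagreements more than near-misses, which is why this statistic suits ordinal scores such as mitotic grades.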
the CAMELYON16 Consortium IMPORTANCE Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. OBJECTIVE Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. DESIGN, SETTING, AND PARTICIPANTS Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). EXPOSURES Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. MAIN OUTCOMES AND MEASURES The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. RESULTS The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. 
The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). CONCLUSIONS AND RELEVANCE In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
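The AUC values reported above come from receiver operating characteristic analysis. The Mann-Whitney formulation gives a compact way to compute AUC directly from scores, as a sketch with invented scores and labels:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U formulation), with ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical per-slide metastasis scores and ground-truth labels:
auc = roc_auc([0.9, 0.3, 0.6, 0.4], [1, 1, 0, 0])
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation of metastatic from normal slides, which calibrates the 0.556-0.994 range quoted for the CAMELYON16 submissions.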
Prostate cancer (PCa) is graded by pathologists by examining the architectural pattern of cancerous epithelial tissue on hematoxylin and eosin (H&E)-stained slides. Given the importance of gland morphology, automatically differentiating between glandular epithelial tissue and other tissues is an important prerequisite for developing automated methods to detect PCa. We propose a new method, using deep learning, to automatically segment epithelial tissue in digitized prostatectomy slides. We employed immunohistochemistry (IHC) to render the ground truth less subjective and more precise than manual outlining on the H&E slides, especially in regions with high-grade and poorly differentiated PCa. Our dataset consisted of 102 tissue blocks, including both low- and high-grade PCa. From each block, a fresh section was cut, stained with H&E, scanned, restained with P63 and CK8/18 to highlight the epithelial structure, and scanned again. The H&E slides were registered to the IHC slides. On a subset of the IHC slides, we applied color deconvolution, manually corrected staining errors, and trained a U-Net to perform segmentation of the epithelial structures. The whole-slide segmentation masks generated by the IHC U-Net were then used to train a second U-Net on H&E. Our system segments the epithelium at a fine, cell-level resolution, delineating intact glands as well as individual (tumor) epithelial cells. We achieved an F1 score of 0.895 on a hold-out test set and 0.827 on an external reference set from a different center. We envision this segmentation as the first part of a fully automated prostate cancer detection and grading pipeline.
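The reported F1 scores compare predicted segmentation masks against the reference. A minimal pixel-wise sketch (for binary masks this is equivalent to the Dice coefficient), using toy 2x2 masks:

```python
def mask_f1(pred, ref):
    """Pixel-wise F1 between a predicted and a reference binary mask."""
    tp = fp = fn = 0
    for prow, rrow in zip(pred, ref):
        for p, r in zip(prow, rrow):
            tp += 1 if (p and r) else 0          # predicted epithelium, correct
            fp += 1 if (p and not r) else 0      # predicted, but background
            fn += 1 if (r and not p) else 0      # missed epithelium
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Toy masks: one correct pixel, one false positive.
f1 = mask_f1([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

True negatives do not enter the score, so F1 stays informative even when epithelium covers only a small fraction of the slide.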
The brightfield microscope is instrumental in the visual examination of both biological and physical samples at sub-millimeter scales. One key clinical application has been in cancer histopathology, where the microscopic assessment of tissue samples is used for the diagnosis and staging of cancer and thus guides clinical therapy 1. However, the interpretation of these samples is inherently subjective, resulting in significant diagnostic variability 2,3. Moreover, in many regions of the world, access to pathologists is severely limited due to lack of trained personnel 4. In this regard, Artificial Intelligence (AI) based tools promise to improve the access and quality of healthcare 5-7. However, despite significant advances in AI research, integration of these tools into real-world cancer diagnosis workflows remains challenging because of the costs of image digitization and difficulties in deploying AI solutions 8,9. Here we propose a cost-effective solution to the integration of AI: the Augmented Reality Microscope (ARM). The ARM overlays AI-based information onto the current view of the sample through the optical pathway in real time, enabling seamless integration of AI into the regular microscopy workflow. We demonstrate the utility of ARM in the detection of lymph node metastases in breast cancer and the identification of prostate cancer with a latency that supports real-time workflows. We anticipate that ARM will remove barriers towards the use of AI in microscopic analysis and thus improve the accuracy and efficiency of cancer diagnosis. This approach is applicable to other microscopy tasks and AI algorithms in the life sciences 10 and beyond 11,12. Microscopic examination of samples is the gold standard for the diagnosis of cancer, autoimmune diseases, infectious diseases, and more. In cancer, the microscopic examination of stained tissue sections is critical for diagnosing and staging the patient's tumor, which informs treatment decisions and prognosis. In cancer, microscopy analysis faces three major challenges. As a form of image interpretation, these examinations are inherently subjective, exhibiting considerable inter-observer and intra-observer variability 2,3. Moreover, clinical guidelines 1 and studies 13 have begun to require quantitative assessments as part of the effort towards better patient risk stratification 1. For example, breast cancer staging requires counting mitotic cells and quantification of the tumor burden in lymph nodes by measuring the largest tumor focus. However, despite being helpful in treatment planning, quantification is laborious and error-prone. Lastly, access to disease experts can be limited in both developed and developing countries 4, exacerbating the problem. As a potential solution, recent advances in AI, specifically deep learning 14, have demonstrated automated medical image analysis with performance comparable to that of clinical experts.
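The tumor-burden quantification mentioned above (measuring the largest tumor focus) can be sketched as a connected-component search over a binary tumor mask. The mask, 4-connectivity, and micron scale below are illustrative assumptions, not the ARM system's actual implementation.

```python
from collections import deque

def largest_focus_extent(mask, micron_per_pixel=1.0):
    """Find the largest 4-connected tumor region in a binary mask and return
    its maximum pairwise extent, a crude stand-in for 'largest tumor focus'."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:                      # BFS flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    if not best:
        return 0.0
    diam = max(((y1 - y2) ** 2 + (x1 - x2) ** 2) ** 0.5
               for y1, x1 in best for y2, x2 in best)
    return diam * micron_per_pixel

mask = [[1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 0, 0]]
extent = largest_focus_extent(mask, micron_per_pixel=0.25)
```

Automating exactly this kind of measurement is what makes quantitative staging criteria less laborious and error-prone than manual assessment.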
The spatial distribution of different cell types can reveal the growth pattern of cancer cells, their relationship with the tumor microenvironment, and the body's immune response, all of which represent key hallmarks of cancer. However, manually identifying and localizing all the cells in pathology slides is practically impossible. In this study, we developed an automated cell-type classification pipeline, ConvPath, which includes nucleus segmentation, convolutional neural network-based classification of tumor cells, stromal cells, and lymphocytes, and extraction of tumor microenvironment-related features for cancer pathology images. The overall classification accuracy was 92.9% and 90.1% in the training and independent testing datasets, respectively. By identifying cells and classifying cell types, this pipeline can convert a pathology image into a spatial map of tumor cells, stromal cells, and lymphocytes. From this spatial map, we can extract features that characterize the tumor microenvironment. Based on these features, we developed an image feature-based prognostic model and validated the model in two independent cohorts. The predicted risk group served as an independent prognostic factor after adjusting for clinical variables, including age, gender, smoking status, and stage.
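One way to turn the spatial map described above into a microenvironment feature is to look at each tumor cell's local neighborhood. The specific feature, labels, and radius below are illustrative assumptions, not necessarily ConvPath's actual feature set.

```python
def lymphocyte_fraction_near_tumor(cells, radius):
    """Average, over all tumor cells, the fraction of lymphocytes among
    neighboring cells within `radius`.

    cells: list of (x, y, cell_type) with cell_type in
    {"tumor", "stroma", "lymphocyte"} (labels assumed for illustration).
    """
    r2 = radius ** 2
    fractions = []
    for x, y, t in cells:
        if t != "tumor":
            continue
        neighbors = [(nx, ny, nt) for nx, ny, nt in cells
                     if (nx, ny, nt) != (x, y, t)
                     and (nx - x) ** 2 + (ny - y) ** 2 <= r2]
        if neighbors:
            lym = sum(1 for _, _, nt in neighbors if nt == "lymphocyte")
            fractions.append(lym / len(neighbors))
    return sum(fractions) / len(fractions) if fractions else 0.0

# Two tumor cells, each with classified neighbors (coordinates invented):
cells = [(0, 0, "tumor"), (1, 0, "lymphocyte"), (0, 1, "stroma"),
         (10, 10, "tumor"), (11, 10, "lymphocyte")]
feature = lymphocyte_fraction_near_tumor(cells, radius=2.0)
```

Features of this kind summarize immune-cell infiltration around the tumor and can then feed a prognostic model such as the one validated in the two cohorts.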
Manual counting of mitotic tumor cells in tissue sections constitutes one of the strongest prognostic markers for breast cancer. However, this procedure is time-consuming and error-prone. We developed a method to automatically detect mitotic figures in breast cancer tissue sections based on convolutional neural networks (CNNs). The application of CNNs to hematoxylin and eosin (H&E)-stained tissue sections is hampered by (1) noisy and expensive reference standards established by pathologists, (2) a lack of generalization due to inter-laboratory staining variability, and (3) the high computational requirements needed to process gigapixel whole-slide images (WSIs). In this paper, we present a method to train and evaluate CNNs that specifically addresses these issues in the context of mitosis detection in breast cancer WSIs. First, by combining image analysis of mitotic activity in phosphohistone-H3 (PHH3)-restained slides with registration, we built a reference standard for mitosis detection in entire H&E WSIs that requires minimal manual annotation effort. Second, we designed a data augmentation strategy that creates diverse and realistic H&E stain variations by directly modifying the hematoxylin and eosin color channels. Using it during training, combined with network ensembling, resulted in a stain-invariant mitosis detector. Third, we applied knowledge distillation to reduce the computational requirements of the mitosis detection ensemble with a negligible loss of performance. The system was trained on a single-center cohort and evaluated on an independent multicenter cohort from The Cancer Genome Atlas on the three tasks of the Tumor Proliferation Assessment Challenge (TUPAC). For most of the challenge tasks, our performance was within the top three best methods.
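The stain augmentation described above can be sketched as random jitter applied to the separated hematoxylin and eosin channels. Per-channel scale-and-shift jitter is a simplification of the paper's augmentation, and the parameter range is an assumption.

```python
import random

def augment_stain_channels(h_channel, e_channel, sigma=0.15, rng=None):
    """Randomly scale and shift the hematoxylin and eosin stain channels to
    simulate inter-laboratory stain variation (simplified sketch)."""
    rng = rng or random.Random()
    out = []
    for channel in (h_channel, e_channel):
        alpha = 1.0 + rng.uniform(-sigma, sigma)   # multiplicative jitter
        beta = rng.uniform(-sigma, sigma)          # additive jitter
        # Clamp back into the valid [0, 1] intensity range:
        out.append([min(max(alpha * v + beta, 0.0), 1.0) for v in channel])
    return out

aug = augment_stain_channels([0.2, 0.5, 0.8], [0.1, 0.9, 0.4],
                             rng=random.Random(42))
```

Training on many such randomized stain variants is what pushes the detector toward stain invariance, so it generalizes across laboratories with different staining protocols.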
The proliferative activity of breast tumors, which is routinely estimated by counting of mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand annotated mitotic figures by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
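Evaluating mitosis detectors like those above typically means matching detected locations to annotated mitotic figures within a fixed distance. A greedy-matching sketch follows; the matching rule and tolerance are illustrative, not the exact AMIDA13 protocol.

```python
def detection_f1(detections, annotations, max_dist):
    """Greedy matching of detections to annotations: a detection counts as a
    true positive if it lies within `max_dist` of a still-unmatched annotation."""
    unmatched = list(annotations)
    tp = 0
    for dx, dy in detections:
        best, best_d = None, max_dist
        for a in unmatched:
            d = ((a[0] - dx) ** 2 + (a[1] - dy) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = a, d
        if best is not None:
            unmatched.remove(best)
            tp += 1
    fp = len(detections) - tp      # detections with no nearby annotation
    fn = len(annotations) - tp     # annotated mitoses that were missed
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Hypothetical pixel coordinates: two hits and one spurious detection.
f1 = detection_f1([(0, 0), (5, 5), (20, 20)], [(0, 1), (6, 5)], max_dist=2.0)
```

Because each annotation can be claimed by at most one detection, duplicate detections of the same mitotic figure are penalized as false positives rather than double-counted.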
This work aims to determine how modern machine learning techniques can be applied to the previously unexplored topic of melanoma diagnosis using digital pathology. We curated a new digital pathology dataset of 50 patient cases of cutaneous melanoma. We provide gold standard annotations for the tissue types (tumor, epidermis, and dermis) that are important for the prognostic measurements known as Breslow thickness and Clark level. We then designed a novel multi-step fully convolutional network (FCN) architecture that outperformed other networks trained and evaluated on the same data according to standard metrics. Finally, we trained a model to detect and localize the target tissue types. When processing previously unseen cases, our model's outputs were qualitatively very similar to the gold standard. In addition to the standard metrics computed as a baseline for our method, we asked three additional pathologists to measure Breslow thickness from the network's output. Their responses were diagnostically equivalent to the ground truth measurements, and after removing cases unsuitable for measurement, the inter-rater reliability (IRR) among the four pathologists was 75.0%. Given the qualitative and quantitative results, modern machine learning techniques can overcome the segmentation challenges posed by skin and tumor anatomy, although more work is needed to improve the network's performance on dermis segmentation. Furthermore, we show that the level of accuracy required for manual Breslow thickness measurement can be achieved.
High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Nucleus segmentation and tissue image classification are two common tasks in tissue image analysis. Developing accurate and efficient algorithms for these tasks is a challenging problem because of the complexity of tissue morphology and tumor heterogeneity. In this paper, we present two computer algorithms: one for the segmentation of nuclei and the other for the classification of whole-slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separates clumped nuclei into individual nuclei. The classification algorithm first performs patch-level classification via a deep learning method, and then patch-level statistical and morphological features are used as input to a random forest regression model for whole-slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology Challenge. The segmentation algorithm achieved an accuracy score of 0.78; the classification algorithm achieved an accuracy score of 0.81.
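The patch-to-slide step above can be sketched as aggregating patch-level probabilities into slide-level statistics that a downstream regression model consumes. The particular statistics chosen here are assumptions, not the paper's exact feature list.

```python
def slide_features(patch_probs):
    """Aggregate patch-level tumor probabilities into slide-level statistics
    of the kind that can feed a downstream (e.g., random forest) regressor."""
    probs = sorted(patch_probs)
    n = len(probs)
    mean = sum(probs) / n
    var = sum((p - mean) ** 2 for p in probs) / n
    return {
        "mean": mean,
        "std": var ** 0.5,
        "median": probs[n // 2],
        "p90": probs[min(n - 1, int(0.9 * n))],          # 90th percentile
        "frac_above_0.5": sum(1 for p in probs if p > 0.5) / n,
    }

# Hypothetical patch probabilities from one whole-slide image:
feats = slide_features([0.1, 0.2, 0.9, 0.8])
```

Summarizing thousands of patch predictions into a fixed-length feature vector is what lets a conventional model such as a random forest operate at the whole-slide level.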