Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence imaging technology whose subcellular resolution has the potential to improve intraoperative precision, extend the limits of resection, and tailor surgery for malignant invasive brain tumors. Despite its promising diagnostic potential, interpreting the gray-tone fluorescence images can be difficult for untrained users. In this review, we provide a detailed description of bioinformatics analysis methods for CLE images that have begun to help neurosurgeons and pathologists connect on-the-fly imaging, pathological, and surgical observations into a symbiotic system under a theranostic concept. We present an overview of, and discuss, deep learning models for automated detection of diagnostic CLE images, as well as the influence of various training regimens and ensemble modeling on deep learning predictive models. The two main approaches reviewed here are models that automatically classify CLE images into diagnostic/nondiagnostic, glioma/nontumor, and tumor/injury/normal categories, and models that localize histological features on CLE images using weakly supervised methods. We also briefly review advances in deep learning methods for CLE image analysis in other organs. The significant advances examined, and the precision of selected automated diagnostic frameworks, should enhance the diagnostic potential of CLE, improve surgical workflow, and support its integration into brain tumor surgery. Such technologies and bioinformatics analyses can help increase the precision, personalization, and theranostic capability of brain tumor treatment.
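Ensemble modeling of the kind this review discusses is, at its simplest, an averaging of class probabilities from several independently trained networks. Below is a minimal sketch for diagnostic/nondiagnostic CLE frame triage; the checkpoint paths, ResNet-18 backbone, and two-class label order are illustrative assumptions, not details of the reviewed models.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

CHECKPOINTS = ["cle_cnn_a.pt", "cle_cnn_b.pt", "cle_cnn_c.pt"]  # hypothetical files

def load_model(path: str) -> torch.nn.Module:
    # Each ensemble member is the same architecture trained independently.
    net = models.resnet18(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, 2)  # [nondiagnostic, diagnostic]
    net.load_state_dict(torch.load(path, map_location="cpu"))
    return net.eval()

preprocess = T.Compose([
    T.Grayscale(num_output_channels=3),  # CLE frames are single-channel
    T.Resize((224, 224)),
    T.ToTensor(),
])

@torch.no_grad()
def ensemble_diagnostic_prob(image: Image.Image) -> float:
    x = preprocess(image).unsqueeze(0)
    probs = [torch.softmax(load_model(p)(x), dim=1) for p in CHECKPOINTS]
    return torch.stack(probs).mean(dim=0)[0, 1].item()  # mean "diagnostic" probability
```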
Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper, we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macrometastases of breast cancer could be identified automatically, while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
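The exclusion result described here, removing a large fraction of benign slides while missing no cancers, reduces to aggregating patch-level predictions into a slide score and choosing a threshold that preserves full sensitivity on a tuning set. A minimal sketch under those assumptions (the aggregation rule and toy data are illustrative, not the paper's exact procedure):

```python
import numpy as np

def slide_score(patch_probs: np.ndarray) -> float:
    # Aggregate patch-level cancer probabilities into one slide score;
    # the maximum over patches is a common, conservative choice.
    return float(patch_probs.max())

def exclusion_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    # Largest threshold that keeps every cancer slide above it, so no
    # positive slide is excluded on the tuning set.
    return float(scores[labels == 1].min()) - 1e-6

# Toy tuning set: slide scores and ground-truth labels (1 = contains cancer).
scores = np.array([0.97, 0.88, 0.12, 0.31, 0.05])
labels = np.array([1, 1, 0, 0, 0])
t = exclusion_threshold(scores, labels)
excluded = (scores < t) & (labels == 0)
print(f"benign slides excluded: {excluded.sum()} of {(labels == 0).sum()}")
```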
Medical imaging is expanding globally at an unprecedented rate [1,2], leading to an ever-expanding quantity of data that requires human expertise and judgement to interpret and triage. In many clinical specialities there is a relative shortage of this expertise to provide timely diagnosis and referral. For example, in ophthalmology, the widespread availability of optical coherence tomography (OCT) has not been matched by the availability of expert humans to interpret scans and refer patients to the appropriate clinical care [3]. This problem is exacerbated by the marked increase in prevalence of sight-threatening diseases for which OCT is the gold standard of initial assessment [4-7]. Artificial intelligence (AI) provides a promising solution for such medical image interpretation and triage, but despite recent breakthrough studies in which expert-level performance on two-dimensional photographs in preclinical settings has been demonstrated [8,9], prospective clinical application of this technology remains stymied by three key challenges. First, AI (typically trained on hundreds of thousands of examples from one canonical dataset) must generalize to new populations and devices without a substantial loss of performance, and without prohibitive data requirements for retraining. Second, AI tools must be applicable to real-world scans, problems and pathways, and designed for clinical evaluation and deployment. Finally, AI tools must match or exceed the performance of human experts in such real-world situations. Recent work applying AI to OCT has shown promise in resolving some of these criteria in isolation, but has not yet shown clinical applicability by resolving all three.
Highlights
- An artificial intelligence system using transfer learning techniques was developed
- It effectively classified images for macular degeneration and diabetic retinopathy
- It also accurately distinguished bacterial and viral pneumonia on chest X-rays
- This has potential for generalized high-impact application in biomedical imaging

In Brief Image-based deep learning classifies macular degeneration and diabetic retinopathy using retinal optical coherence tomography images and has potential for generalized applications in biomedical image interpretation and medical decision making.

SUMMARY The implementation of clinical-decision support algorithms for medical imaging faces challenges with reliability and interpretability. Here, we establish a diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases. Our framework utilizes transfer learning, which trains a neural network with a fraction of the data of conventional approaches. Applying this approach to a dataset of optical coherence tomography images, we demonstrate performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema. We also provide a more transparent and interpretable diagnosis by highlighting the regions recognized by the neural network. We further demonstrate the general applicability of our AI system for diagnosis of pediatric pneumonia using chest X-ray images. This tool may ultimately aid in expediting the diagnosis and referral of these treatable conditions, thereby facilitating earlier treatment, resulting in improved clinical outcomes.
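The transfer-learning approach summarized above, training only a small classification head on top of pretrained convolutional features, can be sketched as follows. The backbone choice (ResNet-50), label set, and hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g., CNV, DME, drusen, normal (assumed label set)

# Reuse ImageNet-pretrained features; train only the new classification head.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)  # trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(backbone(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the final layer is optimized, such a model can reach useful accuracy with a fraction of the labeled images a network trained from scratch would need, which is the core claim of the summary.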
IMPORTANCE Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. OBJECTIVE Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. DESIGN, SETTING, AND PARTICIPANTS Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining was provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain the likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). EXPOSURES Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. MAIN OUTCOMES AND MEASURES The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image, using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. RESULTS The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image. For the whole-slide image classification task, the best algorithm (AUC, 0.994 [95% CI, 0.983-0.999]) performed significantly better than the pathologists WTC in a diagnostic simulation (mean AUC, 0.810 [range, 0.738-0.884]; P < .001). The top 5 algorithms had a mean AUC that was comparable with the pathologist interpreting the slides in the absence of time constraints (mean AUC, 0.960 [range, 0.923-0.994] for the top 5 algorithms vs 0.966 [95% CI, 0.927-0.998] for the pathologist WOTC). CONCLUSIONS AND RELEVANCE In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
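The slide-level comparisons in this challenge rest on the area under the ROC curve with confidence intervals. Below is a small sketch of how such an estimate might be computed with a percentile bootstrap; the resampling scheme is a generic choice, not necessarily the one CAMELYON16 used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    """AUC with a 95% percentile-bootstrap confidence interval."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_score)
    boots = []
    while len(boots) < n_boot:
        idx = rng.integers(0, len(y_true), len(y_true))
        if np.unique(y_true[idx]).size < 2:  # resample must contain both classes
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, lo, hi
```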
Deep learning algorithms have been used to detect diabetic retinopathy (DR) with specialist-level accuracy. This study sought to validate one such algorithm in a large-scale clinical population and compare its performance with that of human graders. Gradable retinal images from 25,326 patients with diabetes, obtained through a community-based, nationwide DR screening program in Thailand, were analyzed for DR severity and referable diabetic macular edema (DME). Grades adjudicated by a panel of international retinal specialists served as the reference standard. In detecting referable disease across different severity levels of DR, deep learning significantly reduced the false-negative rate (by 23%) at the cost of a slightly higher false-positive rate (2%). Deep learning algorithms may serve as a valuable tool for DR screening.
The morphological interpretation of histologic sections forms the basis of diagnosis and prognostication for cancer. In the diagnosis of carcinomas, pathologists perform a semiquantitative analysis of a small set of morphological features to determine the cancer's histologic grade. Physicians use histologic grade to inform their assessment of a carcinoma's aggressiveness and a patient's prognosis. Nevertheless, the determination of grade in breast cancer examines only a small set of morphological features of breast cancer epithelial cells, which has been largely unchanged since the 1920s. A comprehensive analysis of automatically quantitated morphological features could identify characteristics of prognostic relevance and provide an accurate and reproducible means for assessing prognosis from microscopic image data. We developed the C-Path (Computational Pathologist) system to measure a rich quantitative feature set from the breast cancer epithelium and stroma (6642 features), including both standard morphometric descriptors of image objects and higher-level contextual, relational, and global image features. These measurements were used to construct a prognostic model. We applied the C-Path system to microscopic images from two independent cohorts of breast cancer patients [from the Netherlands Cancer Institute (NKI) cohort, n = 248, and the Vancouver General Hospital (VGH) cohort, n = 328]. The prognostic model score generated by our system was strongly associated with overall survival in both the NKI and the VGH cohorts (both log-rank P ≤ 0.001). This association was independent of clinical, pathological, and molecular factors. Three stromal features were significantly associated with survival, and this association was stronger than the association of survival with epithelial characteristics in the model. These findings implicate stromal morphologic structure as a previously unrecognized prognostic determinant for breast cancer.
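The statistics behind these prognosis claims are a log-rank comparison between model-defined risk groups and a Cox model testing whether the image-derived score remains associated with survival after adjusting for other factors. A sketch using the lifelines library; the file name and column names are assumptions for illustration.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Assumed columns: time (follow-up), event (1 = death), cpath_score, age, grade.
df = pd.read_csv("cohort.csv")

# Log-rank test between high- and low-risk groups split at the median score.
high = df[df.cpath_score >= df.cpath_score.median()]
low = df[df.cpath_score < df.cpath_score.median()]
lr = logrank_test(high.time, low.time, high.event, low.event)
print("log-rank p =", lr.p_value)

# Cox model: is the score prognostic independent of clinical covariates?
cph = CoxPHFitter()
cph.fit(df[["time", "event", "cpath_score", "age", "grade"]],
        duration_col="time", event_col="event")
cph.print_summary()
```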
The spatial distribution of different cell types can reveal a cancer's growth pattern, its relationship with the tumor microenvironment, and the body's immune response, all of which represent key hallmarks of cancer. However, manually identifying and localizing every cell on a pathology slide is practically impossible. In this study, we developed an automated cell-classification pipeline, ConvPath, which includes nucleus segmentation, convolutional neural network-based classification of tumor cells, stromal cells, and lymphocytes, and extraction of tumor microenvironment-related features from cancer pathology images. The overall classification accuracy was 92.9% and 90.1% in the training and independent testing datasets, respectively. By identifying cells and classifying cell types, this pipeline can convert a pathology image into a spatial map of tumor cells, stromal cells, and lymphocytes, from which features characterizing the tumor microenvironment can be extracted. Based on these features, we developed an image feature-based prognostic model and validated it in two independent cohorts. The predicted risk group served as an independent prognostic factor after adjusting for clinical variables, including age, gender, smoking status, and stage.
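The conversion of a classified slide into a spatial map, the step from which ConvPath-style microenvironment features are extracted, can be sketched as a simple rasterization of nucleus centroids by predicted cell type. The grid size and the single feature shown are illustrative assumptions.

```python
import numpy as np

CELL_TYPES = {"tumor": 0, "stroma": 1, "lymphocyte": 2}

def spatial_map(centroids, types, slide_shape, grid=100):
    """centroids: (N,2) array of (y, x) pixel coords; types: CELL_TYPES keys.
    Returns per-grid-cell counts of each cell type."""
    h, w = slide_shape
    counts = np.zeros((h // grid + 1, w // grid + 1, len(CELL_TYPES)))
    for (y, x), t in zip(centroids, types):
        counts[int(y) // grid, int(x) // grid, CELL_TYPES[t]] += 1
    return counts

def lymphocyte_fraction(counts):
    # One microenvironment feature among many: mean regional lymphocyte fraction.
    total = counts.sum(axis=-1)
    with np.errstate(invalid="ignore"):
        frac = counts[..., CELL_TYPES["lymphocyte"]] / total
    return np.nanmean(frac)
```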
Diabetic retinopathy (DR) and diabetic macular edema are common complications of diabetes that can lead to vision loss. Grading DR is a fairly complex process that requires detection of fine features such as microaneurysms, intraretinal hemorrhages, and intraretinal microvascular abnormalities; considerable grading variability can therefore exist. There are different approaches to obtaining a reference standard and resolving disagreements among graders, and although adjudication until full consensus is generally believed to yield the best reference standard, the differences among the various approaches to resolving disagreement have not been widely appreciated. In this study, we examined the variability of different grading methods and definitions of the reference standard, and their effects on building deep learning models for detecting diabetic eye disease. We found that a small set of adjudicated DR grades can substantially improve algorithm performance. The performance of the resulting algorithm was on par with that of US board-certified ophthalmologists and retinal specialists.
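The reference-standard strategies compared here differ mainly in how grader disagreement is resolved. A toy sketch of the bookkeeping, under an assumed integer grade encoding: a plain majority vote versus flagging any non-unanimous case for specialist adjudication.

```python
from collections import Counter

def majority_vote(grades):
    """grades: list of integer DR severity grades from independent graders."""
    return Counter(grades).most_common(1)[0][0]

def needs_adjudication(grades):
    # Send any case without unanimity to the specialist panel.
    return len(set(grades)) > 1

cases = {"img_001": [2, 2, 3], "img_002": [0, 0, 0]}  # hypothetical grade sets
for img, g in cases.items():
    label = majority_vote(g)
    print(img, "->", label, "(adjudicate)" if needs_adjudication(g) else "")
```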
Glaucoma is the leading cause of preventable, irreversible blindness worldwide. The disease can remain asymptomatic until it is severe, and an estimated 50%-90% of people with glaucoma remain undiagnosed. Glaucoma screening is therefore recommended for early detection and treatment. A cost-effective tool to detect glaucoma could expand healthcare access to a much larger patient population, but no such tool currently exists. We trained a deep learning (DL) algorithm using a retrospective dataset of 5,833 images, assessed for gradability, glaucomatous optic nerve head (ONH) features, and referable glaucoma risk. The resulting algorithm was validated using 2 separate datasets. For referable glaucoma risk, the algorithm had an AUC of 0.940 (95% CI, 0.922-0.955) in validation dataset "A" (1,205 images, 1 image/patient; 19% referable), in which images were adjudicated by a panel of fellowship-trained glaucoma specialists, and an AUC of 0.858 (95% CI, 0.836-0.878) in validation dataset "B" (17,593 images from 9,643 patients; 9.2% referable), in which images were obtained from the Atlanta Veterans Affairs Eye Clinic's diabetic teleretinal screening program and clinical referral decisions were used as the reference standard. In addition, we found that a vertical cup-to-disc ratio >= 0.7, neuroretinal rim notching, retinal nerve fiber layer defects, and the presence of bared circumlinear vessels contributed most to the glaucoma risk assessments of both the glaucoma specialists and the algorithm. For glaucomatous ONH features, algorithm AUCs ranged from 0.608 to 0.977. The DL algorithm had significantly higher sensitivity than 6 of 10 graders (including 2 of 3 glaucoma specialists), with comparable or higher specificity relative to all graders. A DL algorithm trained only on fundus images can detect referable glaucoma risk with higher sensitivity than, and comparable specificity to, eye care providers.
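Of the features the specialists and the algorithm relied on, the vertical cup-to-disc ratio is the most mechanically defined. Below is a sketch of computing it from binary cup and disc segmentation masks; the masks themselves, and deriving the ratio this way, are assumptions beyond what the abstract states.

```python
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    # Height in pixels of the mask's bounding box along the vertical axis.
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else float("nan")

# Per the abstract, a ratio >= 0.7 was among the strongest referable-risk cues.
```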
This work aimed to determine how modern machine learning techniques can be applied to the previously unexplored topic of melanoma diagnosis using digital pathology. We curated a new dataset of 50 patient cases of cutaneous melanoma using digital pathology. We provide gold standard annotations for tissue types (tumor, epidermis, and dermis) that are important for the prognostic measurements known as Breslow thickness and Clark level. We then designed a novel multi-step fully convolutional network (FCN) architecture that outperformed other networks trained and evaluated on the same data according to standard metrics. Finally, we trained a model to detect and localize the target tissue types. When processing previously unseen cases, our model's output was qualitatively very similar to the gold standard. In addition to the standard metrics computed as a baseline for our method, we asked three additional pathologists to measure Breslow thickness from the network's output. Their responses were diagnostically equivalent to the ground-truth measurements, and, after removing cases unsuitable for measurement, the inter-rater reliability (IRR) across the four pathologists was 75.0%. Given the qualitative and quantitative results, modern machine learning techniques can overcome the challenges of segmenting skin and tumor anatomy, although more work is needed to improve the network's performance on dermis segmentation. Furthermore, we show that the level of precision required to perform Breslow thickness measurement manually can be achieved.
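Measuring Breslow thickness from a tissue segmentation can be approximated as the vertical distance from the shallowest epidermis row to the deepest tumor row, scaled by pixel size. A sketch under assumed label values and scan resolution; clinically the measurement starts at the granular layer of the epidermis, which this simplification ignores.

```python
import numpy as np

EPIDERMIS, DERMIS, TUMOR = 1, 2, 3   # assumed label encoding
MM_PER_PIXEL = 0.00025               # assumed scan resolution

def breslow_thickness_mm(seg: np.ndarray) -> float:
    """seg: 2D label map with depth increasing along axis 0."""
    epidermis_rows = np.where((seg == EPIDERMIS).any(axis=1))[0]
    tumor_rows = np.where((seg == TUMOR).any(axis=1))[0]
    if epidermis_rows.size == 0 or tumor_rows.size == 0:
        return float("nan")
    depth_px = int(tumor_rows.max()) - int(epidermis_rows.min())
    return max(depth_px, 0) * MM_PER_PIXEL
```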
The brightfield microscope is instrumental in the visual examination of both biological and physical samples at sub-millimeter scales. One key clinical application has been in cancer histopathology, where the microscopic assessment of tissue samples is used for the diagnosis and staging of cancer and thus guides clinical therapy [1]. However, the interpretation of these samples is inherently subjective, resulting in significant diagnostic variability [2,3]. Moreover, in many regions of the world, access to pathologists is severely limited due to lack of trained personnel [4]. In this regard, Artificial Intelligence (AI) based tools promise to improve the access and quality of healthcare [5-7]. However, despite significant advances in AI research, integration of these tools into real-world cancer diagnosis workflows remains challenging because of the costs of image digitization and difficulties in deploying AI solutions [8,9]. Here we propose a cost-effective solution to the integration of AI: the Augmented Reality Microscope (ARM). The ARM overlays AI-based information onto the current view of the sample through the optical pathway in real time, enabling seamless integration of AI into the regular microscopy workflow. We demonstrate the utility of the ARM in the detection of lymph node metastases in breast cancer and the identification of prostate cancer, with a latency that supports real-time workflows. We anticipate that the ARM will remove barriers towards the use of AI in microscopic analysis and thus improve the accuracy and efficiency of cancer diagnosis. This approach is applicable to other microscopy tasks and AI algorithms in the life sciences [10] and beyond [11,12]. Microscopic examination of samples is the gold standard for the diagnosis of cancer, autoimmune diseases, infectious diseases, and more. In cancer, the microscopic examination of stained tissue sections is critical for diagnosing and staging the patient's tumor, which informs treatment decisions and prognosis. Microscopy analysis in cancer faces three major challenges. As a form of image interpretation, these examinations are inherently subjective, exhibiting considerable inter-observer and intra-observer variability [2,3]. Moreover, clinical guidelines [1] and studies [13] have begun to require quantitative assessments as part of the effort towards better patient risk stratification [1]. For example, breast cancer staging requires counting mitotic cells and quantifying the tumor burden in lymph nodes by measuring the largest tumor focus. However, despite being helpful in treatment planning, quantification is laborious and error-prone. Lastly, access to disease experts can be limited in both developed and developing countries [4], exacerbating the problem. As a potential solution, recent advances in AI, specifically deep learning [14], have demonstrated automated medical image analysis with performance comparable to that of medical experts.
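The ARM's defining step, blending model output into the field of view in real time, can be mimicked in software by contouring a thresholded probability map onto a camera frame. Below, OpenCV stands in for the microscope's optical overlay; the threshold and plumbing are assumptions, not the ARM's actual implementation.

```python
import cv2
import numpy as np

def overlay_heatmap(frame: np.ndarray, prob: np.ndarray, thresh: float = 0.5):
    """frame: BGR view of the slide; prob: HxW tumor probability in [0, 1]."""
    prob = cv2.resize(prob.astype(np.float32), (frame.shape[1], frame.shape[0]))
    mask = (prob >= thresh).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    out = frame.copy()
    cv2.drawContours(out, contours, -1, (0, 255, 0), 2)  # green tumor outline
    return out
```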
The proliferative activity of breast tumors, which is routinely estimated by counting of mitotic figures in hematoxylin and eosin stained histology sections, is considered to be one of the most important prognostic markers. However, mitosis counting is laborious, subjective and may suffer from low inter-observer agreement. With the wider acceptance of whole slide images in pathology labs, automatic image analysis has been proposed as a potential solution for these issues. In this paper, the results from the Assessment of Mitosis Detection Algorithms 2013 (AMIDA13) challenge are described. The challenge was based on a data set consisting of 12 training and 11 testing subjects, with more than one thousand mitotic figures annotated by multiple observers. Short descriptions and results from the evaluation of eleven methods are presented. The top performing method has an error rate that is comparable to the inter-observer agreement among pathologists.
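Evaluating mitosis detectors of the kind AMIDA13 compares requires matching predicted locations to annotated ones within a tolerance radius before computing precision, recall, and F1. A sketch with greedy nearest-neighbor matching; the 30-pixel radius is an assumed tolerance, not necessarily the challenge's exact value.

```python
import numpy as np

def match_detections(pred: np.ndarray, gt: np.ndarray, radius: float = 30.0):
    """pred: (N,2) and gt: (M,2) arrays of (x, y) mitosis centers."""
    gt_used = np.zeros(len(gt), dtype=bool)
    tp = 0
    for p in pred:
        if len(gt) == 0:
            break
        d = np.linalg.norm(gt - p, axis=1)
        d[gt_used] = np.inf               # each annotation matches at most once
        j = int(d.argmin())
        if d[j] <= radius:
            gt_used[j] = True
            tp += 1
    fp, fn = len(pred) - tp, len(gt) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```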
Manual counting of mitotic tumor cells in tissue sections is one of the strongest prognostic markers for breast cancer. However, the process is time-consuming and error-prone. We developed a method based on convolutional neural networks (CNNs) to automatically detect mitotic figures in breast cancer tissue sections. Application of CNNs to hematoxylin and eosin (H&E) stained tissue sections is hampered by (1) noisy and expensive reference standards established by pathologists, (2) a lack of generalization due to inter-laboratory staining variation, and (3) the high computational requirements of processing gigapixel whole-slide images (WSIs). In this paper, we present a method for training and evaluating CNNs that specifically addresses these issues in the context of mitosis detection in breast cancer WSIs. First, by combining image analysis of mitotic activity in phosphohistone-H3 (PHH3) restained slides with registration to the original H&E slides, we established a reference standard for mitosis detection in entire H&E WSIs that requires minimal manual annotation effort. Second, we designed a data augmentation strategy that creates diverse and realistic H&E stain variations by directly modifying the hematoxylin and eosin color channels. Using it during training, combined with network ensembling, yielded a stain-invariant mitosis detector. Third, we applied knowledge distillation to reduce the computational requirements of the mitosis-detection ensemble with negligible performance loss. The system was trained on a single-center cohort and evaluated on an independent multicenter cohort from The Cancer Genome Atlas on the three tasks of the Tumor Proliferation Assessment Challenge (TUPAC). For most of the challenge's tasks, our performance was within the top three best methods.
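The stain-augmentation strategy described, directly perturbing the hematoxylin and eosin color channels, can be sketched with color deconvolution from scikit-image. The perturbation ranges below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def augment_he(patch: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
    """patch: float RGB image in [0, 1]. Returns a stain-jittered copy."""
    hed = rgb2hed(patch)                  # decompose into H, E, DAB channels
    for c in range(3):
        alpha = rng.uniform(0.95, 1.05)   # multiplicative stain strength
        beta = rng.uniform(-0.05, 0.05)   # additive shift
        hed[..., c] = alpha * hed[..., c] + beta
    return np.clip(hed2rgb(hed), 0, 1)    # recompose to RGB
```

Applying such jitter on every training batch exposes the network to a wider range of staining appearances than any single laboratory produces, which is the mechanism behind the stain invariance the abstract reports.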
IMPORTANCE Deep learning is a family of computational methods that allow an algorithm to program itself by learning from a large set of examples that demonstrate the desired behavior, removing the need to specify rules explicitly. Application of these methods to medical imaging requires further assessment and validation. OBJECTIVE To apply deep learning to create an algorithm for automated detection of diabetic retinopathy and diabetic macular edema in retinal fundus photographs. DESIGN AND SETTING A specific type of neural network optimized for image classification called a deep convolutional neural network was trained using a retrospective development data set of 128,175 retinal images, which were graded 3 to 7 times for diabetic retinopathy, diabetic macular edema, and image gradability by a panel of 54 US licensed ophthalmologists and ophthalmology senior residents between May and December 2015. The resultant algorithm was validated in January and February 2016 using 2 separate data sets, both graded by at least 7 US board-certified ophthalmologists with high intragrader consistency. EXPOSURE Deep learning-trained algorithm. MAIN OUTCOMES AND MEASURES The sensitivity and specificity of the algorithm for detecting referable diabetic retinopathy (RDR), defined as moderate and worse diabetic retinopathy, referable diabetic macular edema, or both, were generated based on the reference standard of the majority decision of the ophthalmologist panel. The algorithm was evaluated at 2 operating points selected from the development set, one selected for high specificity and another for high sensitivity. RESULTS The EyePACS-1 data set consisted of 9963 images from 4997 patients (mean age, 54.4 years; 62.2% women; prevalence of RDR, 683/8878 fully gradable images [7.8%]); the Messidor-2 data set had 1748 images from 874 patients (mean age, 57.6 years; 42.6% women; prevalence of RDR, 254/1745 fully gradable images [14.6%]). For detecting RDR, the algorithm had an area under the receiver operating curve of 0.991 (95% CI, 0.988-0.993) for EyePACS-1 and 0.990 (95% CI, 0.986-0.995) for Messidor-2. Using the first operating cut point with high specificity, for EyePACS-1, the sensitivity was 90.3% (95% CI, 87.5%-92.7%) and the specificity was 98.1% (95% CI, 97.8%-98.5%). For Messidor-2, the sensitivity was 87.0% (95% CI, 81.1%-91.0%) and the specificity was 98.5% (95% CI, 97.7%-99.1%). Using a second operating point with high sensitivity in the development set, for EyePACS-1 the sensitivity was 97.5% and specificity was 93.4%, and for Messidor-2 the sensitivity was 96.1% and specificity was 93.9%. CONCLUSIONS AND RELEVANCE In this evaluation of retinal fundus photographs from adults with diabetes, an algorithm based on deep machine learning had high sensitivity and specificity for detecting referable diabetic retinopathy. Further research is necessary to determine the feasibility of applying this algorithm in the clinical setting and to determine whether use of the algorithm could lead to improved care and outcomes compared with current ophthalmologic assessment.
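Selecting the two operating points described, one for high specificity and one for high sensitivity, amounts to scanning the development-set ROC curve for the best threshold subject to a floor on the other metric. A sketch with assumed floors; the actual selection criteria are not given in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve

def operating_points(y_true, y_score, min_spec=0.98, min_sens=0.975):
    fpr, tpr, thr = roc_curve(y_true, y_score)
    spec = 1 - fpr
    # Highest-sensitivity threshold that still meets the specificity floor.
    hi_spec_thr = thr[np.argmax(np.where(spec >= min_spec, tpr, -1))]
    # Highest-specificity threshold that still meets the sensitivity floor.
    hi_sens_thr = thr[np.argmax(np.where(tpr >= min_sens, spec, -1))]
    return hi_spec_thr, hi_sens_thr
```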
years), gender (0.97 AUC), smoking status (0.71 AUC), HbA1c (within 1.39%), systolic blood pressure (within 11.23 mmHg), as well as major adverse cardiac events (0.70 AUC). We further show that our models used distinct aspects of the anatomy to generate each prediction, such as the optic disc or blood vessels, opening avenues of further research.

Figure 1. (A) Comparing predicted and actual age in the two validation sets, with the y=x line in black. In the UK Biobank validation dataset, age was calculated using the birth year because birth months and days were not available. In the EyePACS-2K dataset, age is available only in units of whole years. (B) Predicted vs actual systolic blood pressure on the UK Biobank validation dataset, with the y=x line in black.

Table 3. Predicting 5-year MACE on the UK Biobank validation set. Of the 12,026 patients in the UK Biobank

Model                                                    AUC (95% CI)
Age                                                      0.66 (0.61-0.71)
Systolic blood pressure (SBP)                            0.66 (0.61-0.71)
Body mass index (BMI)                                    0.62 (0.56-0.67)
Gender                                                   0.57 (0.53-0.62)
Current smoker                                           0.55 (0.52-0.59)
Algorithm                                                0.70 (0.65-0.74)
Age + SBP + BMI + gender + current smoker                0.72 (0.68-0.76)
Algorithm + age + SBP + BMI + gender + current smoker    0.73 (0.69-0.77)
Systematic COronary Risk Evaluation (SCORE) [6,7]        0.72 (0.67-0.76)
Algorithm + SCORE                                        0.72 (0.67-0.76)