Significant stigma and inequality exist in mental healthcare, especially in under-served populations, and these disparities propagate into the data that is collected. When not properly accounted for, machine learning (ML) models trained on such data can reinforce the structural biases already present in society. Here, we present a systematic study of bias in ML models designed to predict depression in four different case studies covering different countries and populations. We find that standard ML approaches regularly show biased behavior. However, we show that standard mitigation techniques, as well as our own post-hoc method, can be effective in reducing the level of unfair bias. We provide practical recommendations for developing ML models for depression risk prediction with increased fairness and trust in the real world. No single best ML model for depression prediction provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and of transparent reporting about the impact of debiasing interventions.
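For illustration only (not the authors' protocol), the following sketch shows the kind of group-wise fairness audit described above, comparing recall across a protected attribute for a fitted classifier; the data, model, and attribute encoding are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Illustrative only: X, y, and the protected attribute `group` are placeholders,
# not the cohorts or features used in the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)  # e.g. a binary-coded demographic attribute

clf = LogisticRegression(max_iter=1000).fit(X, y)
pred = clf.predict(X)

# Equal-opportunity style check: recall (true-positive rate) per group and the gap.
recalls = {g: recall_score(y[group == g], pred[group == g]) for g in np.unique(group)}
print("per-group recall:", recalls,
      "gap:", max(recalls.values()) - min(recalls.values()))
```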
Synthetic data generated by generative models can enhance the performance and capabilities of data-hungry deep learning models in medical imaging. However, (1) the limited availability of (synthetic) datasets and (2) the complexity of training generative models hinder their adoption in research and clinical applications. To reduce this entry barrier, we propose medigan, a one-stop shop for pretrained generative models implemented as an open-source, framework-agnostic Python library. medigan allows researchers and developers to create, increase, and domain-adapt their training data in just a few lines of code. Guided by design decisions based on gathered end-user requirements, we implement medigan on modular components for generative model (i) execution, (ii) visualisation, (iii) search and ranking, and (iv) contribution. The library's scalability and design are demonstrated by its growing number of integrated, readily usable pretrained generative models, comprising 21 models that use 9 different generative adversarial network architectures trained on 11 datasets from 4 domains, namely mammography, endoscopy, x-ray, and MRI. Furthermore, 3 applications of medigan are analysed in this work, including (a) enabling community-wide sharing of restricted data, (b) investigating generative model evaluation metrics, and (c) improving clinical downstream tasks. In (b), extending common medical image synthesis assessment and reporting standards, we show the variability of the Fréchet Inception Distance depending on image normalisation and radiology-specific feature extraction.
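As a hedged sketch of the workflow the library advertises, the snippet below generates a handful of synthetic samples from one of the pretrained models; the model ID and keyword arguments are recalled from the project's documentation and should be treated as assumptions.

```python
# Hedged sketch of generating synthetic samples with the medigan library
# (pip install medigan). The model ID and keyword arguments are assumptions
# based on the project's documentation and may need adjusting.
from medigan import Generators

generators = Generators()

# Generate a few synthetic mammography patches from one of the pretrained models.
generators.generate(
    model_id="00001_DCGAN_MMG_CALC_ROI",  # assumed example model ID
    num_samples=8,
    output_path="synthetic_samples",
)
```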
Computer-aided detection systems based on deep learning have shown good performance in breast cancer detection. However, high-density breasts show poorer detection performance, since dense tissue can mask or even mimic masses. As a result, the sensitivity of breast cancer detection can drop by more than 20% in dense breasts. In addition, extremely dense cases have been reported to carry an increased risk of developing cancer compared with low-density breasts. This study aims to improve mass detection performance in high-density breasts by using synthetic high-density full-field digital mammograms (FFDM) as data augmentation during the training of breast mass detection models. To this end, five cycle-consistent GAN (CycleGAN) models were trained for low-to-high-density image translation in high-resolution mammograms using three FFDM datasets. Training images were separated by breast-density BI-RADS category, ranging from almost entirely fatty to extremely dense breasts. Our results show that the proposed data augmentation technique improves the sensitivity and precision of mass detection in high-density breasts on two different test sets, and that it is also useful as a domain-adaptation technique. In addition, the clinical realism of the synthetic images was evaluated in a reader study involving two expert radiologists and one surgical oncologist.
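The augmentation idea can be sketched as follows: a trained low-to-high-density generator is applied to low-density mammograms before they are added to the detection training set. The generator architecture, checkpoint path, and image size below are placeholders, not the study's configuration.

```python
import torch
import torch.nn as nn

# Placeholder generator; in practice this would be the trained CycleGAN
# generator for low-to-high-density translation.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

g_low_to_high = Generator()
# g_low_to_high.load_state_dict(torch.load("cyclegan_low_to_high.pt"))  # assumed path
g_low_to_high.eval()

def augment_batch(low_density_batch: torch.Tensor) -> torch.Tensor:
    """Translate low-density mammograms into synthetic high-density ones."""
    with torch.no_grad():
        return g_low_to_high(low_density_batch)

synthetic = augment_batch(torch.rand(4, 1, 256, 256))  # placeholder batch
```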
Most artificial intelligence (AI) research has focused on high-income countries, where imaging data, IT infrastructure, and clinical expertise are abundant. However, progress has been slower in limited-resource settings where medical imaging is needed. For example, perinatal mortality rates are high in sub-Saharan Africa, partly due to limited access to antenatal screening. In these countries, AI models could be deployed to help clinicians acquire fetal ultrasound planes for the diagnosis of fetal abnormalities. Deep learning models have so far been proposed to identify standard fetal planes, but there is no evidence that they can generalise beyond centres with access to high-end ultrasound devices and data. This work investigates different strategies to reduce the domain-shift effect for a fetal plane classification model trained at a high-resource clinical centre and transferred to a new low-resource centre. To this end, the model is first evaluated on a new centre in Denmark with 1,008 patients, and later on five African centres (Egypt, Algeria, Uganda, Ghana, and Malawi) with 25 patients each. The results show that a transfer-learning approach can be a solution for combining a small African sample with an existing large-scale database from a developed country. In particular, the model's recall can be improved to $0.92 \pm 0.04$ while maintaining high precision. This framework shows promise for building generalisable new AI models in clinical centres with limited data acquired under challenging and heterogeneous conditions, and calls for further research to develop new solutions for the usability of AI in lower-resource countries.
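A minimal sketch of such a transfer-learning strategy, assuming a ResNet-18 backbone with a frozen feature extractor and a fine-tuned classification head; the backbone choice, number of planes, and hyperparameters are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

num_planes = 5  # assumed number of standard fetal planes

# Start from a classifier trained at the high-resource centre (checkpoint path assumed).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, num_planes)
# model.load_state_dict(torch.load("source_centre_weights.pt"))

# Freeze the backbone; fine-tune only the classification head on the small target sample.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a placeholder batch.
images, labels = torch.rand(8, 3, 224, 224), torch.randint(0, num_planes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```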
The problem of domain generalisation in deep learning has been widely studied in recent years, but contrast-enhanced imaging has received only limited attention. However, contrast-enhanced imaging protocols differ markedly between clinical centres, especially in the timing between contrast injection and image acquisition, while access to multi-centre contrast-enhanced image data is limited compared with the datasets available for non-contrast imaging. This calls for new tools for generalising single-centre deep learning models across new, unseen domains and clinical centres in contrast-enhanced imaging. In this paper, we present an exhaustive evaluation of deep learning techniques for achieving generalisability to unseen clinical centres in contrast-enhanced image segmentation. To this end, several techniques are investigated, optimised, and systematically evaluated, including data augmentation, domain mixing, transfer learning, and domain adaptation. To demonstrate the potential of domain generalisation for contrast-enhanced imaging, the methods are evaluated for ventricular segmentation in contrast-enhanced cardiac magnetic resonance imaging (MRI). The results are obtained on a multi-centre cardiac contrast-enhanced MRI dataset acquired at four hospitals located in three countries (France, Spain, and China). They show that the combination of data augmentation and transfer learning can lead to single-centre models that generalise well to new clinical centres not included during training. In contrast-enhanced imaging, single-domain neural networks with a suitable generalisation procedure can reach or even surpass the performance of multi-centre, multi-vendor models, eliminating the need for comprehensive multi-centre datasets to train generalisable models.
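One ingredient of such a generalisation procedure, intensity-style data augmentation, can be sketched as follows; the transform family and parameter ranges are assumptions rather than the paper's exact settings.

```python
import torch

def intensity_augment(image: torch.Tensor) -> torch.Tensor:
    """Random gamma and brightness/contrast jitter applied to a normalised image."""
    gamma = torch.empty(1).uniform_(0.7, 1.5)
    scale = torch.empty(1).uniform_(0.8, 1.2)
    shift = torch.empty(1).uniform_(-0.1, 0.1)
    image = image.clamp(min=0) ** gamma           # random gamma correction
    return (image * scale + shift).clamp(0, 1)    # random contrast/brightness jitter

augmented = intensity_augment(torch.rand(1, 1, 192, 192))  # placeholder MRI slice
```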
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in Generative Adversarial Networks (GANs), data synthesis, and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Coronary Computed Tomography Angiography (CCTA) provides information on the presence, extent, and severity of obstructive coronary artery disease. Large-scale clinical studies analyzing CCTA-derived metrics typically require ground-truth validation in the form of high-fidelity 3D intravascular imaging. However, manual rigid alignment of intravascular images to corresponding CCTA images is both time consuming and user-dependent. Moreover, intravascular modalities suffer from several non-rigid, motion-induced distortions arising from the path of the imaging catheter. To address these issues, we here present a semi-automatic segmentation-based framework for both rigid and non-rigid matching of intravascular images to CCTA images. We formulate the problem in terms of finding the optimal \emph{virtual catheter path} that samples the CCTA data to recapitulate the coronary artery morphology found in the intravascular image. We validate our co-registration framework on a cohort of $n=40$ patients using bifurcation landmarks as ground truth for longitudinal and rotational registration. Our results indicate that our non-rigid registration significantly outperforms other co-registration approaches for luminal bifurcation alignment in both the longitudinal (mean mismatch: 3.3 frames) and rotational (mean mismatch: 28.6 degrees) directions. By providing a differentiable framework for automatic multi-modal intravascular data fusion, our co-registration modules significantly reduce the manual effort required to conduct large-scale multi-modal clinical studies while also providing a solid foundation for the development of machine learning-based co-registration approaches.
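A schematic sketch (not the authors' implementation) of how a differentiable virtual-catheter-path optimisation could look: path parameters are adjusted so that lumen areas sampled from the CCTA segmentation match those measured in the intravascular pullback. The sampling function, loss, and regulariser below are placeholders.

```python
import torch

intravascular_areas = torch.rand(100)                       # placeholder per-frame lumen areas
control_points = torch.rand(100, 3, requires_grad=True)     # placeholder path parameters

def sample_ccta_areas(points: torch.Tensor) -> torch.Tensor:
    """Placeholder for differentiable sampling of lumen area along the virtual path."""
    return points.norm(dim=1)  # stand-in; real code would sample the CCTA segmentation

optimizer = torch.optim.Adam([control_points], lr=1e-2)
for _ in range(200):
    optimizer.zero_grad()
    # Match sampled CCTA lumen areas to the intravascular measurements,
    # with a smoothness prior on the path parameters.
    loss = torch.mean((sample_ccta_areas(control_points) - intravascular_areas) ** 2)
    loss = loss + 1e-2 * control_points.diff(dim=0).pow(2).sum()
    loss.backward()
    optimizer.step()
```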
Artificial intelligence (AI) systems based on deep neural networks (DNNs) and machine learning (ML) algorithms are increasingly used to solve critical problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNN or ML models, which are unavoidably opaque and perceived as black-box methods, may not be able to explain why and how they make certain decisions. Such black-box models are difficult to comprehend not only for targeted users and decision-makers but also for AI developers. Besides, in sensitive areas like healthcare, explainability and accountability are not only desirable properties of AI but also legal requirements -- especially when AI may have significant impacts on human lives. Explainable artificial intelligence (XAI) is an emerging field that aims to mitigate the opaqueness of black-box models and make it possible to interpret how AI systems make their decisions with transparency. An interpretable ML model can explain how it makes predictions and which factors affect the model's outcomes. The majority of state-of-the-art interpretable ML methods have been developed in a domain-agnostic way and originate from computer vision, automated reasoning, or even statistics. Many of these methods cannot be directly applied to bioinformatics problems without prior customization, extension, and domain adaptation. In this paper, we discuss the importance of explainability with a focus on bioinformatics. We analyse and provide a comprehensive overview of model-specific and model-agnostic interpretable ML methods and tools. Via several case studies covering bioimaging, cancer genomics, and biomedical text mining, we show how bioinformatics research could benefit from XAI methods and how they could help improve decision fairness.
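As a hedged example of a model-agnostic method of the kind surveyed, permutation importance can be applied to any fitted classifier; the synthetic data below stands in for, say, gene-expression features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative data only; in a bioinformatics setting X would be, e.g., expression profiles.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic explanation: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```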
Physics-Informed Neural Networks (PINNs) are gaining popularity as a method for solving differential equations. While more feasible in some contexts than classical numerical techniques, PINNs still lack credibility. A remedy for that can be found in Uncertainty Quantification (UQ), which is just beginning to emerge in the context of PINNs. Assessing how well the trained PINN complies with the imposed differential equation is the key to tackling uncertainty, yet a comprehensive methodology for this task is still lacking. We propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the discrepancy between the B-PINN solution and the unknown true solution. We exploit recent results on error bounds for PINNs on linear dynamical systems and demonstrate the predictive uncertainty on a class of linear ODEs.
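Schematically (the paper's exact statements may differ), for a linear ODE system the PINN induces a computable residual, and error bounds of the following form let that residual inform the predictive uncertainty:

```latex
% Schematic only; the constant and norms are illustrative, not the paper's exact bound.
\dot{u}(t) = A\,u(t), \quad u(0) = u_0, \quad t \in [0, T],
\qquad
r_\theta(t) := \dot{u}_\theta(t) - A\,u_\theta(t),
\qquad
\| u_\theta(t) - u(t) \| \;\le\; C(A, T)\Big( \| u_\theta(0) - u_0 \| + \sup_{s \in [0, T]} \| r_\theta(s) \| \Big).
```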
We show that for a plane imaged by an endoscope the specular isophotes are concentric circles on the scene plane, which appear as nested ellipses in the image. We show that these ellipses can be detected and used to estimate the plane's normal direction, forming a normal reconstruction method, which we validate on simulated data. In practice, the anatomical surfaces visible in endoscopic images are locally planar. We use our method to show that the surface normal can thus be reconstructed for each of the numerous specularities typically visible on moist tissues. We show results on laparoscopic and colonoscopic images.
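A hedged sketch of the detection step only: threshold a bright specular highlight at several intensity levels, extract the resulting isophote contours, and fit ellipses to them. The synthetic test image and intensity levels are assumptions, and the normal estimation from the fitted ellipses (the paper's geometric contribution) is omitted.

```python
import cv2
import numpy as np

# Synthetic stand-in for an endoscopic frame with one bright specular highlight,
# whose isophotes are elliptical by construction.
yy, xx = np.mgrid[0:256, 0:256]
image = (255 * np.exp(-(((xx - 128) / 40.0) ** 2 + ((yy - 128) / 25.0) ** 2))).astype(np.uint8)

ellipses = []
for level in (200, 220, 240):                     # assumed intensity levels (isophotes)
    _, mask = cv2.threshold(image, level, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for c in contours:
        if len(c) >= 5:                           # cv2.fitEllipse needs at least 5 points
            ellipses.append(cv2.fitEllipse(c))

print(f"fitted {len(ellipses)} candidate isophote ellipses")
```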