Image annotation is an essential prerequisite for data-driven algorithms. In medical imaging, large and reliably annotated data sets are crucial for recognizing various diseases robustly. However, annotator performance varies immensely and thus impacts model training. Therefore, multiple annotators often have to be employed, which is expensive and resource-intensive. Hence, it is desirable that users annotate unseen data while an automated system unobtrusively rates their performance during this process. We examine such a system based on whole slide images (WSIs) showing lung fluid cells. We evaluate two methods for the generation of synthetic individual cell images: conditional Generative Adversarial Networks (GANs) and Diffusion Models (DMs). For qualitative and quantitative evaluation, we conduct a user study to assess the suitability of the generated cells. Users could not detect 52.12% of the images generated by the DM, proving that it is feasible to replace original cells with synthetic cells without being noticed.
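For orientation, the kind of detection rate reported above can be computed directly from user-study responses; the sketch below uses made-up records and field names, not the study's data or code.

```python
# Hypothetical evaluation of a "real vs. synthetic" user study.
# Each record states which model produced the image and whether the
# participant flagged it as synthetic (all names are assumptions).
from collections import defaultdict

responses = [
    {"model": "DM", "flagged_as_synthetic": False},
    {"model": "DM", "flagged_as_synthetic": True},
    {"model": "cGAN", "flagged_as_synthetic": True},
    # ... one entry per shown synthetic image and participant
]

counts = defaultdict(lambda: [0, 0])  # model -> [missed, total]
for r in responses:
    counts[r["model"]][1] += 1
    if not r["flagged_as_synthetic"]:
        counts[r["model"]][0] += 1

for model, (missed, total) in counts.items():
    print(f"{model}: {100.0 * missed / total:.2f}% of synthetic images went undetected")
```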
Annotating data, especially in the medical domain, requires expert knowledge and considerable effort. This limits the amount and/or usefulness of available medical data sets for experimentation. Therefore, developing strategies to increase the number of annotations while lowering the required domain knowledge is of interest. A possible strategy is the use of gamification, i.e. transforming the annotation task into a game. We propose an approach to gamify the task of annotating lung fluid cells from pathological whole slide images. As this domain is unknown to non-expert annotators, we transfer images of cells detected with a RetinaNet architecture into the domain of flower images. This domain transfer is performed with a CycleGAN architecture for different cell types. In this more accessible domain, non-expert annotators can then be (t)asked to annotate different kinds of flowers in a playful environment. To provide proof of concept, this work shows that the domain transfer is feasible by evaluating an image classification network trained on real cell images and tested on cell images generated by the CycleGAN network. The classification network reaches accuracies of 97.48% and 95.16% on the original lung fluid cells and the transformed lung fluid cells, respectively. With this study, we lay the foundation for future gamification research using CycleGANs.
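A minimal sketch of this proof-of-concept evaluation could look as follows; `classifier` and `generator` are dummy stand-ins rather than the trained cell classifier and CycleGAN generator.

```python
# Hedged sketch (not the authors' code): compare a cell classifier's
# accuracy on original cell images and on their CycleGAN-transferred
# counterparts. Models and data below are dummy stand-ins.
import torch
import torch.nn as nn

@torch.no_grad()
def accuracy(classifier, images, labels):
    return (classifier(images).argmax(dim=1) == labels).float().mean().item()

# Dummy stand-ins; in practice load the trained classifier and the
# trained CycleGAN generator.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4))   # 4 cell classes (assumed)
generator = nn.Identity()                                              # placeholder for the CycleGAN G

images = torch.randn(8, 3, 64, 64)          # a batch of cell crops
labels = torch.randint(0, 4, (8,))

acc_original = accuracy(classifier, images, labels)
acc_transferred = accuracy(classifier, generator(images), labels)
print(f"original: {acc_original:.4f}  transferred: {acc_transferred:.4f}")
```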
Light microscopic examination of stained tissue cut onto glass slides prepared from FFPE tissue blocks is the gold standard of tissue diagnostics. Moreover, the diagnostic abilities and expertise of any pathologist depend on their direct experience with both common and rare variant morphologies. Recently, deep learning methods have been used to demonstrate high accuracy on such tasks. However, obtaining expert-level annotated images is an expensive and time-consuming task, and artificially synthesized histological images could prove greatly beneficial. Here, we present an approach that not only generates histological images reproducing the diagnostic morphologic features of common diseases, but also offers the user the ability to generate new and rare morphologies. Our approach involves developing a generative adversarial network model that synthesizes pathology images constrained by class labels. We investigated the ability of this framework to synthesize realistic prostate and colon tissue images, and assessed the utility of these images in augmenting the diagnostic ability of machine learning methods as well as their usability by a panel of experienced anatomic pathologists. Synthetic data generated by our framework performed similarly to real data when training a deep learning model for diagnosis. Pathologists were unable to distinguish real from synthetic images and showed similar inter-observer agreement for prostate cancer grading. We extended the approach to the significantly more complex images from colon biopsies and showed that the complex microenvironment of such tissues can also be reproduced. Finally, we demonstrate the ability for a user to generate deep histological images via simple markup of semantic labels.
The visual microscopic study of diseased tissue by pathologists has been the cornerstone of cancer diagnosis and prognostication for over a century. Recently, deep learning methods have made significant advances in the analysis and classification of tissue images. However, there has been limited work on the utility of such models for generating histopathology images. These synthetic images have several applications in pathology, including education, proficiency testing, privacy, and data sharing. Recently, diffusion probabilistic models were introduced to generate high-quality images. Here, we investigate for the first time the potential use of such models, together with prioritized morphology weighting and color normalization, to synthesize high-quality histopathology images of brain cancer. Our detailed results show that diffusion probabilistic models are capable of synthesizing a wide range of histopathology images and have superior performance compared to generative adversarial networks.
Tumor segmentation in histopathology images is often complicated by its composition of different histological subtypes and class imbalance. Oversampling subtypes with low prevalence features is not a satisfactory solution since it eventually leads to overfitting. We propose to create synthetic images with semantically-conditioned deep generative networks and to combine subtype-balanced synthetic images with the original dataset to achieve better segmentation performance. We show the suitability of Generative Adversarial Networks (GANs) and especially diffusion models to create realistic images based on subtype-conditioning for the use case of HER2-stained histopathology. Additionally, we show the capability of diffusion models to conditionally inpaint HER2 tumor areas with modified subtypes. Combining the original dataset with the same amount of diffusion-generated images increased the tumor Dice score from 0.833 to 0.854 and almost halved the variance between the HER2 subtype recalls. These results create the basis for more reliable automatic HER2 analysis with lower performance variance between individual HER2 subtypes.
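To make the reported reduction in recall variance concrete, a hedged sketch of the per-subtype evaluation might look like this (toy labels and assumed HER2 score names, not the study's data):

```python
# Hedged sketch: per-subtype recall and the variance across subtypes,
# before and after adding diffusion-generated images to the training set.
import numpy as np
from sklearn.metrics import recall_score

subtypes = ["0", "1+", "2+", "3+"]            # assumed HER2 subtype names
y_true = np.array([0, 1, 2, 3, 1, 2, 3, 0, 2, 3])
y_pred_baseline = np.array([0, 1, 2, 2, 1, 1, 3, 0, 2, 3])
y_pred_augmented = np.array([0, 1, 2, 3, 1, 2, 3, 0, 2, 2])

for name, y_pred in [("baseline", y_pred_baseline), ("augmented", y_pred_augmented)]:
    recalls = recall_score(y_true, y_pred, labels=[0, 1, 2, 3], average=None)
    print(name, dict(zip(subtypes, recalls.round(2))), "variance:", recalls.var().round(4))
```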
Amidst the rapid growth of fashion e-commerce, remote fitting of fashion articles remains a complex and challenging problem and a main driver of customer frustration. Despite recent advances in 3D virtual try-on solutions, such approaches are still limited to a very narrow range of articles (if not only a handful) and often only a single type of fashion item. Other state-of-the-art approaches that aim to support customers in finding what fits them online mostly require a high level of customer engagement and privacy-sensitive data (such as height, weight, age, gender, belly shape, etc.), or alternatively require images of the customer's body in tight clothing. They also often lack the ability to produce fit and shape visual guidance at scale, limiting themselves to merely advising which size to order that would best match the customer's body attributes, without providing any information on how the garment would fit and look. To take a leap forward and overcome the limitations of current approaches, we propose FitGAN, a generative adversarial model that explicitly accounts for the entangled size and fit characteristics of garments in online fashion. Conditioned on the fit and shape of the articles, our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles. Through experiments on large-scale real-world data, we demonstrate how our approach is capable of synthesizing visually realistic and diverse fits of fashion items, and explore its ability to control the fit and shape of thousands of online garment images.
In this work, we study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets. By examining and controlling the racial distributions in various training datasets, we are able to observe the impact of different training distributions on the quality of the generated images and on their racial distribution. Our results show that the racial composition of the generated images successfully preserves that of the training data. However, we observe that truncation, a technique used during inference to generate higher-quality images, exacerbates the racial imbalance in the data. Finally, when examining the relationship between image quality and race, we find that the images of a given race with the highest perceived visual quality come from a distribution in which that race is well represented, and that annotators consistently prefer generated images of White people over those of Black people.
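For context, the truncation technique mentioned above restricts latent samples to a high-density region of the prior; the sketch below assumes a BigGAN-style resampling variant and is illustrative only.

```python
# Hedged sketch of the truncation trick: latent components that exceed a
# threshold psi are resampled, trading sample diversity for perceived quality.
import numpy as np

def truncated_latents(n, dim, psi=0.7, rng=None):
    """Sample n latent vectors from a standard normal truncated at |z| <= psi."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n, dim))
    while True:
        mask = np.abs(z) > psi
        if not mask.any():
            return z
        z[mask] = rng.standard_normal(mask.sum())   # resample out-of-range entries

z = truncated_latents(16, 512, psi=0.5)
print(z.min(), z.max())   # all components now lie in [-0.5, 0.5]
```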
Data has become the most valuable resource in today's world. With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of data is of great interest. In this context, high-quality training, validation and test datasets are particularly needed. Volumetric data is a very important resource in medicine, as its use ranges from disease diagnosis to therapy monitoring. When data sets are sufficient, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios and applications in which large amounts of data are unavailable. For example, in the medical field, rare diseases and privacy concerns can lead to restricted data availability. In non-medical fields, the high cost of obtaining a sufficient amount of high-quality data can also be a concern. A solution to these problems can be the generation of synthetic data for data augmentation, in combination with other, more traditional augmentation methods. Consequently, most publications on 3D Generative Adversarial Networks (GANs) lie within the medical domain. The existence of mechanisms that generate realistic synthetic data is a good asset for overcoming this challenge, especially in healthcare, since the data must be of good quality, close to reality (i.e. realistic), and free of privacy issues. In this review, we provide a summary of works that generate realistic 3D synthetic data with GANs. We outline GAN-based methods in these areas along with their common architectures, advantages and disadvantages. We present a novel taxonomy, evaluations, challenges and research opportunities to provide a holistic overview of the current state of GANs in medicine and other fields.
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.
Classification using supervised learning requires annotating a large amount of class-balanced data for model training and testing. This has practically limited the scope of applications with supervised learning, in particular deep learning. To address the issues associated with limited and imbalanced data, this paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN), in which a conditional generative adversarial network (CGAN) is trained alongside the classifier and supplements semantics-conditioned, confidence-aware synthesized examples to the annotated data during the training process. In this setting, the CGAN not only serves as a co-supervisor but also provides complementary quality examples to aid the classifier training in an end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier. For the comparison, all classifiers in the above methods adopt the ResNet-18 architecture as the backbone. In particular, for the Street View House Numbers dataset, using 5% of the training data, a test accuracy of 90.26% is achieved by SEC-CGAN as opposed to 88.59% by EC-GAN and 87.17% by the baseline classifier; for the highway image dataset, using 10% of the training data, a test accuracy of 98.27% is achieved by SEC-CGAN, compared to 97.84% by EC-GAN and 95.52% by the baseline classifier.
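A simplified, hedged sketch of the co-supervision step, in which confidence-filtered CGAN samples supplement the annotated batch, could look as follows; `generator` and `classifier` are placeholders, not the authors' implementation.

```python
# Hedged, simplified sketch of confidence-aware co-supervision: in each step,
# class-conditional CGAN samples are kept only if the classifier's softmax
# confidence for the conditioning label exceeds a threshold, and the
# surviving samples are appended to the real annotated batch.
import torch
import torch.nn.functional as F

def supplement_batch(real_x, real_y, generator, classifier, n_synth=32,
                     n_classes=10, z_dim=100, threshold=0.9):
    y_synth = torch.randint(0, n_classes, (n_synth,))
    z = torch.randn(n_synth, z_dim)
    with torch.no_grad():
        x_synth = generator(z, y_synth)                     # class-conditional samples
        conf = F.softmax(classifier(x_synth), dim=1).gather(1, y_synth[:, None]).squeeze(1)
    keep = conf >= threshold
    x = torch.cat([real_x, x_synth[keep]], dim=0)
    y = torch.cat([real_y, y_synth[keep]], dim=0)
    return x, y   # train the classifier on this enlarged batch
```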
Clinical decision support using deep neural networks has become a topic of steadily growing interest. While recent work has repeatedly demonstrated that deep learning offers major advantages over traditional methods, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend. In recent years, a variety of approaches have successfully addressed this issue by providing deeper insights. Most notably, additive feature attribution methods are able to propagate decisions back into the input space by creating a saliency map, allowing practitioners to "see what the network sees". However, the quality of the generated maps can deteriorate considerably when only limited data is available, a typical situation in clinical settings. We propose a new decision explanation scheme based on CycleGAN activation maximization that generates high-quality visualizations of classifier decisions even on smaller datasets. We conducted a user study in which we evaluated the method on the LIDC dataset for lung lesion malignancy classification, a breast ultrasound dataset for breast cancer detection, and two subsets of the CIFAR-10 dataset for RGB image object recognition. In this user study, our method clearly outperformed existing approaches on the medical imaging datasets and ranked second in the natural image setting. With our approach, we make a significant contribution towards a better understanding of deep neural network-based clinical decision support systems and thereby aim to foster overall clinical acceptance.
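For orientation, plain activation maximization performs gradient ascent on the input to maximize a class score; the sketch below shows only this generic form (the method above additionally routes the optimization through a CycleGAN, which is not reproduced here).

```python
# Generic activation maximization (not the paper's CycleGAN variant):
# gradient ascent on the input to maximize one class logit of a classifier.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(1 * 28 * 28, 2))   # dummy stand-in
target_class = 1

x = torch.zeros(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logit = classifier(x)[0, target_class]
    loss = -logit + 1e-3 * x.norm()       # maximize the logit, keep the input bounded
    loss.backward()
    optimizer.step()

print("final logit:", classifier(x)[0, target_class].item())
```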
The availability of large-scale chest X-ray datasets is a requirement for developing well-performing deep learning-based algorithms in thoracic abnormality detection and classification. However, biometric identifiers in chest radiographs hinder the public sharing of such data for research purposes due to the risk of patient re-identification. To counteract this issue, synthetic data generation offers a solution for anonymizing medical images. This work employs a latent diffusion model to synthesize an anonymous chest X-ray dataset of high-quality class-conditional images. We propose a privacy-enhancing sampling strategy to ensure the non-transference of biometric information during the image generation process. The quality of the generated images and the feasibility of serving as exclusive training data are evaluated on a thoracic abnormality classification task. Compared to a classifier trained on real data, we achieve competitive results with a performance gap of only 3.5% in the area under the receiver operating characteristic curve.
Automated synthesis of histology images has several potential applications in computational pathology. However, no existing method can generate realistic tissue images with a bespoke cellular layout or user-defined histology parameters. In this work, we propose a novel framework called SynCLay (Synthesis from Cellular Layouts) that can construct realistic and high-quality histology images from user-defined cellular layouts along with annotated cellular boundaries. Tissue image generation based on bespoke cellular layouts through the proposed framework allows users to generate different histological patterns from arbitrary topological arrangements of different types of cells. SynCLay-generated synthetic images can be helpful in studying the role of different types of cells present in the tumor microenvironment. Additionally, they can assist in balancing the distribution of cellular counts in tissue images for designing accurate cellular composition predictors by minimizing the effects of data imbalance. We train SynCLay in an adversarial manner and integrate a nuclear segmentation and classification model in its training to refine nuclear structures and generate nuclear masks in conjunction with synthetic images. During inference, we combine the model with another parametric model for generating colon images and associated cellular counts as annotations, given the grade of differentiation and cell densities of different cells. We assess the generated images quantitatively and report on feedback from trained pathologists who assigned realism scores to a set of images generated by the framework. The average realism score across all pathologists for synthetic images was as high as that for the real images. We also show that augmenting limited real data with the synthetic data generated by our framework can significantly boost the prediction performance of the cellular composition prediction task.
Lung nodule detection in chest X-ray (CXR) images serves as early screening for lung cancer. Deep learning-based computer-aided diagnosis (CAD) systems can support radiologists in nodule screening on CXR. However, training such a robust and accurate CAD requires large-scale and diverse medical data with high-quality annotations. To alleviate the limited availability of such datasets, lung nodule synthesis methods have been proposed for data augmentation. Nevertheless, previous methods lack the ability to generate nodules in accordance with the size attribute required by the detector. To address this issue, we introduce in this paper a novel lung nodule synthesis framework that decomposes nodule attributes into three main aspects: shape, size and texture. A GAN-based shape generator first models nodule shapes by producing diverse shape masks. A following size modulation then enables quantitative control over the diameter of the generated nodule shape at pixel-level granularity. A coarse-to-fine gated convolutional texture generator finally synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images for data augmentation by controlling the disentangled nodule attributes, in order to better compensate for nodules that are easily missed in the detection task. Our experiments demonstrate the enhanced image quality, diversity and controllability of the proposed lung nodule synthesis framework. We also validate the effectiveness of the data augmentation in greatly improving nodule detection performance.
Generative adversarial networks (GANs) are a powerful type of deep learning model that has been used successfully in numerous fields. They belong to a broader family called generative methods, which generate new data by learning the sample distribution from real examples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear and potentially subtle disease effects compared to traditional generative methods. This review appraises the existing literature on the applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging and multiple sclerosis. We provide an intuitive explanation of the various GAN methods for each application and further discuss the main challenges, open questions and promising future directions for leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision making and to contribute to a better understanding of the structural and functional patterns of brain diseases.
The success of Deep Learning applications critically depends on the quality and scale of the underlying training data. Generative adversarial networks (GANs) can generate arbitrarily large datasets, but diversity and fidelity are limited, which has recently been addressed by denoising diffusion probabilistic models (DDPMs), whose superiority has been demonstrated on natural images. In this study, we propose Medfusion, a conditional latent DDPM for medical images. We compare our DDPM-based model against GAN-based models, which constitute the current state-of-the-art in the medical domain. Medfusion was trained and compared with (i) StyleGan-3 on n=101,442 images from the AIROGS challenge dataset to generate fundoscopies with and without glaucoma, (ii) ProGAN on n=191,027 images from the CheXpert dataset to generate radiographs with and without cardiomegaly and (iii) wGAN on n=19,557 images from the CRCMS dataset to generate histopathological images with and without microsatellite stability. In the AIROGS, CRCMS, and CheXpert datasets, Medfusion achieved lower (=better) FID than the GANs (11.63 versus 20.43, 30.03 versus 49.26, and 17.28 versus 84.31). Also, fidelity (precision) and diversity (recall) were higher (=better) for Medfusion in all three datasets. Our study shows that DDPMs are a superior alternative to GANs for image synthesis in the medical domain.
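The FID values quoted above follow the standard Frechet Inception Distance formula; a minimal sketch with toy features in place of Inception activations is shown below.

```python
# Hedged sketch of the Frechet Inception Distance: the Frechet distance
# between Gaussians fitted to real and generated feature vectors
# (random toy features here instead of Inception activations).
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 64))          # stand-in for Inception features
fake = rng.normal(loc=0.1, size=(1000, 64))
print(f"FID: {fid(real, fake):.2f}")
```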
Deep learning-based disease detection and segmentation algorithms promise to improve many clinical processes. However, such algorithms require large amounts of annotated training data, which are typically not available in the medical context due to data privacy, legal obstructions, and non-uniform data acquisition protocols. Synthetic databases of annotated pathologies could provide the required amount of training data. Using ischemic stroke as an example, we demonstrate that an improvement in lesion segmentation through deep learning-based augmentation is feasible. To this end, we train different image-to-image translation models to synthesize magnetic resonance images of brain volumes with and without stroke lesions from semantic segmentation maps. In addition, we train a generative adversarial network to generate synthetic lesion masks. Subsequently, we combine these two components to build a large database of synthetic stroke images. The performance of the various models is evaluated using a U-Net trained to segment stroke lesions and evaluated on a clinical test set. We report a Dice score of 72.8% [70.8±1.0%] for the best-performing setup, which outperforms the model trained solely on clinical images (67.3% [63.2±1.9%]) and approaches the human inter-reader Dice score of 76.9%. Moreover, we show that for small databases of only 10 or 50 clinical cases, synthetic data augmentation yields significant improvements compared to the setting without synthetic data. To the best of our knowledge, this presents the first comparative analysis of synthetic data augmentation based on image-to-image translation and its first application to ischemic stroke.
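A minimal sketch of the Dice score used to compare the segmentation setups (toy binary masks, not the study's data):

```python
# Dice overlap between a predicted binary lesion mask and the ground truth.
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((128, 128), dtype=np.uint8); pred[30:70, 30:70] = 1
gt   = np.zeros((128, 128), dtype=np.uint8); gt[40:80, 40:80] = 1
print(f"Dice: {dice(pred, gt):.3f}")
```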
This paper presents a crash frequency data augmentation method based on conditional generative adversarial networks (CGANs) to improve crash frequency models. The proposed method is evaluated by comparing the performance of base SPFs (safety performance functions developed with the original data) and augmented SPFs (developed with the original data plus the synthetic data) in terms of hotspot identification performance, model prediction accuracy, and dispersion parameter estimation accuracy. Experiments are conducted with both simulated and real-world crash datasets. The results show that the synthetic crash data generated by the CGAN have the same distribution as the original data, and that the augmented SPFs outperform the base SPFs in almost all respects when the dispersion parameter is low.
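SPFs are commonly fitted as negative binomial regressions of crash counts on exposure; the hedged sketch below fits a base SPF on simulated data, whose dispersion parameter would then be compared against an augmented fit (statsmodels; not the paper's implementation).

```python
# Hedged sketch: negative binomial SPF of crash counts on traffic volume.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
aadt = rng.uniform(1_000, 30_000, size=500)          # toy traffic volumes
mu = np.exp(-6.0 + 0.9 * np.log(aadt))               # assumed true SPF
alpha_true = 0.5                                      # true overdispersion
n_param = 1.0 / alpha_true
crashes = rng.negative_binomial(n_param, n_param / (n_param + mu))

X = sm.add_constant(np.log(aadt))
base = sm.NegativeBinomial(crashes, X).fit(disp=0)
print(base.params)   # [intercept, log(AADT) coefficient, dispersion alpha]

# Augmented SPF: append CGAN-synthesized (aadt, crash) records, refit, and
# compare hotspot ranking, prediction error, and the alpha estimate.
```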
Histopathological analysis is the present gold standard for precancerous lesion diagnosis. The goal of automated histopathological classification from digital images requires supervised training, which in turn requires a large number of expert annotations that can be expensive and time-consuming to collect. At the same time, accurate classification of image patches cropped from whole-slide images is essential for standard sliding-window based histopathology slide classification methods. To mitigate these issues, we propose a carefully designed conditional GAN model, namely HistoGAN, for synthesizing realistic histopathology image patches conditioned on class labels. We also investigate a novel synthetic augmentation framework that selectively adds new synthetic image patches generated by our proposed HistoGAN, rather than directly expanding the training set with synthetic images. By selecting synthetic images based on the confidence of their assigned labels and their feature similarity to real labeled images, our framework provides quality assurance for synthetic augmentation. Our models are evaluated on two datasets: a cervical histopathology image dataset with limited annotations, and another dataset of lymph node histopathology images with metastatic cancer. Here, we show that leveraging HistoGAN-generated images with selective augmentation results in significant and consistent improvements in classification performance (by 6.7% and 2.8%, respectively) on the cervical histopathology and metastatic cancer datasets.
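A simplified, hedged sketch of the selective augmentation criterion (confidence of the intended label plus feature similarity to real labeled images); all model handles are placeholders rather than the HistoGAN pipeline.

```python
# Hedged sketch: keep a synthetic patch only if the classifier is confident
# about its intended label and its embedding is close to real patches of
# that label. `classifier`, `feature_extractor` and the feature bank are
# placeholders, not the authors' released code.
import torch
import torch.nn.functional as F

def select_synthetic(x_synth, y_synth, classifier, feature_extractor,
                     real_feats_by_class, conf_thr=0.9, sim_thr=0.8):
    with torch.no_grad():
        conf = F.softmax(classifier(x_synth), dim=1).gather(1, y_synth[:, None]).squeeze(1)
        feats = F.normalize(feature_extractor(x_synth), dim=1)
    keep = torch.zeros(len(x_synth), dtype=torch.bool)
    for i, (f, y) in enumerate(zip(feats, y_synth)):
        ref = F.normalize(real_feats_by_class[int(y)], dim=1)   # [N_real, D] real features of class y
        sim = (ref @ f).max()                                    # best cosine similarity
        keep[i] = (conf[i] >= conf_thr) & (sim >= sim_thr)
    return x_synth[keep], y_synth[keep]
```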
Furnishing and rendering indoor scenes has been a long-standing task in interior design, in which artists create a conceptual design for the space, build a 3D model of the space, decorate it, and then perform rendering. Although the task is important, it is tedious and requires tremendous effort. In this paper, we introduce a new problem of domain-specific indoor scene image synthesis, namely neural scene decoration. Given a photograph of an empty indoor space and a list of layouts specified by the user, we aim to synthesize a new image of the same space with the desired furnishing and decorations. Neural scene decoration can be applied to create conceptual interior designs in a simple yet effective manner. Our attempt to address this research problem is a novel scene generation architecture that transforms an empty scene and an object layout into a realistic photograph of the scene. We demonstrate the performance of our proposed method by comparing it with conditional image synthesis baselines, both qualitatively and quantitatively. We conduct extensive experiments to further validate the plausibility and aesthetics of the generated scenes. Our implementation is available at https://github.com/hkust-vgd/neural_scene_decoration.