Deep learning has achieved great success in the challenging circuit annotation task by employing convolutional neural networks (CNNs) for the segmentation of circuit structures. Deep learning approaches require a large amount of manually annotated training data to achieve good performance, and the performance of a deep learning model trained on one dataset may degrade when applied to a different dataset. This is generally known as the domain shift problem in circuit annotation, which stems from large variations in the distributions of different image datasets. The different image datasets can be obtained from different devices, or from different layers within a single device. To address the domain shift problem, we propose Histogram-Gated Image Translation (HGIT), an unsupervised domain adaptation framework that translates images from a given source dataset into the domain of a target dataset and uses the translated images to train a segmentation network. Specifically, HGIT performs generative adversarial network (GAN)-based image translation and uses histogram statistics for data curation. Experiments were conducted on a single labeled source dataset adapted to three different target datasets (without labels), and the segmentation performance was evaluated on each target dataset. We demonstrate that our method achieves the best performance compared to reported domain adaptation techniques, and also comes reasonably close to the fully supervised benchmark.
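The abstract does not detail how the histogram statistics curate the translated images; purely as an illustration, a hypothetical gating rule that keeps only the translations whose gray-level histograms are closest to a target-domain reference might look like the sketch below (all names, the bin count, and the keep ratio are assumptions, not the authors' implementation):

```python
import numpy as np

def histogram_distance(img_a, img_b, bins=64):
    """L1 distance between normalized gray-level histograms
    (images assumed to be scaled to [0, 1])."""
    h_a, _ = np.histogram(img_a, bins=bins, range=(0.0, 1.0), density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.abs(h_a - h_b).mean())

def gate_translations(translated, target_ref, keep_ratio=0.5):
    """Keep the translated images whose histograms best match a
    target-domain reference image."""
    dists = [histogram_distance(img, target_ref) for img in translated]
    order = np.argsort(dists)
    n_keep = max(1, int(len(translated) * keep_ratio))
    return [translated[i] for i in order[:n_keep]]
```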
Domain adaptation is a technique for addressing the lack of large amounts of labeled data in unseen environments. Unsupervised domain adaptation was proposed to adapt models to new modalities using only labeled source data and unlabeled target-domain data. While many image-space domain adaptation methods have been proposed to capture pixel-level domain shift, such techniques may fail to maintain the high-level semantic information required for segmentation tasks. In the case of biomedical images, fine details such as blood vessels can be lost during the image translation between domains. In this work, we propose a model that adapts between domains using a cycle-consistency loss while maintaining the edge details of the original images by enforcing an edge-based loss during the adaptation process. We demonstrate the effectiveness of our algorithm by comparing it with other approaches on two fundus blood vessel segmentation datasets. Compared with the state of the art, we achieve Dice score increments of 1.1 to 9.2, and of approximately 5.2 on average.
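The edge-based loss is not specified in the abstract; one plausible instantiation is an L1 penalty between Sobel edge maps of the image before and after translation, sketched below in PyTorch (the Sobel formulation and function names are assumptions):

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Sobel gradient magnitude of a single-channel batch (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    kx = kx.view(1, 1, 3, 3).to(img)
    ky = ky.view(1, 1, 3, 3).to(img)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(original, translated):
    """L1 penalty between edge maps of the image before and after
    translation, discouraging loss of fine vessel detail."""
    return F.l1_loss(sobel_edges(original), sobel_edges(translated))
```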
Accurate segmentation of electron microscopy (EM) volumes of the brain is essential to characterize neuronal structures at the cell or organelle level. While supervised deep learning methods have led to major breakthroughs in this direction over the past years, they usually require large amounts of annotated data to be trained, and perform poorly on other data acquired under similar experimental and imaging conditions. This is a problem known as domain adaptation, since models learned from a sample distribution (or source domain) struggle to maintain their performance on samples extracted from a different distribution, or target domain. In this work, we address the complex case of deep-learning-based domain adaptation for mitochondria segmentation across EM datasets from different tissues and species. We present three unsupervised domain adaptation strategies to improve mitochondria segmentation in the target domain based on (1) state-of-the-art style transfer between images of both domains; (2) self-supervised learning to pre-train a model using unlabeled source and target images, followed by fine-tuning with the source labels only; and (3) multi-task neural network architectures trained end-to-end with both labeled and unlabeled images. Moreover, we propose a new training stopping criterion based on morphological priors obtained exclusively in the source domain. We performed all possible cross-dataset experiments using three publicly available EM datasets, and evaluated our proposed strategies on the semantic labels of the mitochondria predicted in each target dataset. The methods introduced here outperform the baseline methods and compare favorably to the state of the art. In the absence of validation labels, monitoring our proposed morphology-based metric is an intuitive and effective way to stop the training process and select, on average, the best models.
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other. We illustrate the method in semantic segmentation, where densely annotated images are aplenty in one domain (e.g., synthetic data), but difficult to obtain in another (e.g., real images). Current state-of-the-art methods are complex, some requiring adversarial optimization to render the backbone of a neural network invariant to the discrete domain selection variable. Our method does not require any training to perform the domain alignment, just a simple Fourier Transform and its inverse. Despite its simplicity, it achieves state-of-the-art performance in the current benchmarks, when integrated into a relatively standard semantic segmentation model. Our results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
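The spectrum swap itself is only a few lines of NumPy; a minimal sketch, assuming same-sized (H, W, C) float images and a `beta` parameter controlling the size of the low-frequency band (the function name and default value are illustrative):

```python
import numpy as np

def fda_source_to_target(src, trg, beta=0.01):
    """Swap the low-frequency amplitude spectrum of `src` with that of
    `trg`, keeping the phase of `src` (per channel)."""
    src_fft = np.fft.fft2(src, axes=(0, 1))
    trg_fft = np.fft.fft2(trg, axes=(0, 1))
    src_amp, src_pha = np.abs(src_fft), np.angle(src_fft)
    trg_amp = np.abs(trg_fft)

    # Center the zero frequency so "low frequency" is a small square.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    trg_amp = np.fft.fftshift(trg_amp, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    src_amp[ch - b:ch + b, cw - b:cw + b] = trg_amp[ch - b:ch + b, cw - b:cw + b]
    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))

    # Recombine the swapped amplitude with the original phase and invert.
    out = np.fft.ifft2(src_amp * np.exp(1j * src_pha), axes=(0, 1))
    return np.real(out)
```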
Optical coherence tomography (OCT) imaging from different camera devices causes challenging domain shifts and can lead to a severe drop in accuracy for machine learning models. In this work, we introduce a minimal noise adaptation method based on singular value decomposition (SVDNA) to overcome the domain gap between target domains from three different device manufacturers in retinal OCT imaging. Our method utilizes the difference in noise structure to successfully bridge the domain gap between different OCT devices, transferring the style from unlabeled target-domain images to source images for which manual annotations are available. We demonstrate how this method, despite its simplicity, compares to or even outperforms state-of-the-art unsupervised domain adaptation methods for semantic segmentation on a public OCT dataset. SVDNA can be integrated with just a few lines of code into the augmentation pipeline of any network, in stark contrast to many state-of-the-art domain adaptation methods, which often require changing the underlying model architecture or training a separate style-transfer model. The full code implementation of SVDNA is available at https://github.com/valentinkoch/SVDNA.
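For intuition only, the noise-adaptation idea of keeping the dominant singular components (content) of a source image while borrowing the residual components (noise/style) of a target image can be sketched as follows; the split index `k` and the exact recombination are assumptions here, so consult the linked repository for the actual SVDNA implementation:

```python
import numpy as np

def svd_noise_transfer(src, trg, k=50):
    """Sketch: combine the top-k singular components of a source image
    (content) with the remaining components of a target image (noise).
    Both inputs are assumed to be same-sized 2D grayscale arrays."""
    u_s, s_s, vt_s = np.linalg.svd(src, full_matrices=False)
    u_t, s_t, vt_t = np.linalg.svd(trg, full_matrices=False)
    content = (u_s[:, :k] * s_s[:k]) @ vt_s[:k, :]
    noise = (u_t[:, k:] * s_t[k:]) @ vt_t[k:, :]
    return content + noise
```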
We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state-of-the-art performance on each of these datasets.
It is valuable to achieve domain adaptation to transfer the knowledge learned from a labeled source CT dataset to a target unlabeled MR dataset for abdominal multi-organ segmentation. At the same time, it is highly desirable to avoid the high annotation cost of the target dataset and to protect the privacy of the source dataset. We therefore propose an effective source-free unsupervised domain adaptation method for cross-modality abdominal multi-organ segmentation that does not require access to the source dataset. The proposed framework consists of two stages. In the first stage, a feature map statistics loss is used to align the distributions of source and target features in the top segmentation network, and an entropy minimization loss is used to encourage high-confidence segmentations. The pseudo-labels output by the top segmentation network are used to guide a style compensation network to generate source-like images, and the pseudo-labels output by an intermediate segmentation network are used to supervise the learning of the desired model (the bottom segmentation network). In the second stage, circular learning and pixel-adaptive mask refinement are used to further improve the performance of the desired model. With this approach, we achieve satisfactory performance in the segmentation of the liver, right kidney, left kidney, and spleen, with Dice similarity coefficients of 0.884, 0.891, 0.864, and 0.911, respectively. In addition, the proposed method can easily be extended to the situation where annotated target data are available: with only one labeled MR volume, the performance increases from 0.888 to 0.922 in average Dice similarity coefficient, close to that of supervised learning (0.929).
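Among the losses mentioned, the entropy minimization term has a standard form for segmentation logits; a minimal PyTorch sketch (the function name is illustrative):

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Mean per-pixel entropy of the softmax prediction; minimizing it
    pushes the segmentation network toward confident outputs."""
    p = F.softmax(logits, dim=1)          # logits: (N, C, H, W)
    log_p = F.log_softmax(logits, dim=1)  # numerically stable log
    return -(p * log_p).sum(dim=1).mean()
```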
While deep learning methods hitherto have achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale well-labeled datasets, which are difficult to curate due to the expert-driven and time-consuming nature of pixel-level annotations in clinical practices, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shifts. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario, where the source domain not only exhibits domain shift w.r.t. the target domain but also suffers from label scarcity. In this regard, we propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between both domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature. Code is available at: https://github.com/jacobzhaoziyuan/LE-UDA.
Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes, demonstrating transfer from synthetic to real-world domains.
Convolutional neural networks (CNNs) have achieved state-of-the-art performance for medical image segmentation, but require large amounts of manual annotations for training. Semi-supervised learning (SSL) methods promise to reduce the annotation requirement, but their performance is still limited when the dataset size and the number of annotated images are small. Leveraging existing annotated datasets with similar anatomical structures to assist training has the potential to improve a model's performance. However, the transfer across anatomical domains is further challenged by the different appearance, and even imaging modality, of the target structures. To solve this problem, we propose Contrastive Semi-supervised learning for Cross Anatomy Domain Adaptation (CS-CADA), which adapts a model to segment similar structures in a target domain, requiring only limited annotations in the target domain by leveraging a set of existing annotated images of similar structures in a source domain. We use Domain-Specific Batch Normalization (DSBN) to individually normalize the feature maps of the two anatomical domains, and propose a cross-domain contrastive learning strategy to encourage the extraction of domain-invariant features. These are integrated into a Self-Ensembling Mean Teacher (SE-MT) framework to exploit unlabeled target-domain images with a prediction consistency constraint. Extensive experiments show that our CS-CADA is able to solve the challenging cross-anatomy domain shift problem, accurately segmenting coronary arteries in X-ray images with the help of retinal vessel images, and cardiac MR images with the help of fundus images, given only a small number of annotations in the target domain.
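Domain-Specific Batch Normalization as described, with one set of statistics per anatomical domain, can be sketched in a few lines of PyTorch (a minimal sketch; the module name and the two-domain default are assumptions):

```python
import torch
import torch.nn as nn

class DomainSpecificBatchNorm2d(nn.Module):
    """One BatchNorm2d per domain; the caller passes a domain index so
    each domain keeps its own statistics and affine parameters."""
    def __init__(self, num_features, num_domains=2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_domains)
        )

    def forward(self, x, domain):
        # x: (N, C, H, W) batch drawn entirely from one domain
        return self.bns[domain](x)
```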
Semantic segmentation plays a fundamental role in a broad variety of computer vision applications, providing key information for the global understanding of an image. However, state-of-the-art models rely on large amounts of annotated samples, which are more expensive to obtain than in tasks such as image classification. Since unlabeled data is instead significantly cheaper to obtain, it is not surprising that unsupervised domain adaptation has reached broad success within the semantic segmentation community. This survey is an effort to summarize five years of this incredibly rapidly growing field, which embraces the importance of semantic segmentation itself and the critical need of adapting segmentation models to new environments. We present the most important semantic segmentation methods; we provide a comprehensive survey on domain adaptation techniques for semantic segmentation; we unveil newer trends such as multi-domain learning, domain generalization, test-time adaptation, and source-free domain adaptation; and we conclude this survey by describing the datasets and benchmarks most widely used in semantic segmentation research. We hope this survey will provide researchers across academia and industry with a comprehensive reference guide, and will help them foster new research directions in the field.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
Domain adaptation has attracted great interest since labeling is an expensive and error-prone task, especially when pixel-level labels are required, as in semantic segmentation. One would therefore like to be able to train neural networks on synthetic domains, where data is abundant and labels are precise. However, such models often perform poorly on out-of-domain images. To mitigate the shift in the input, image-to-image approaches can be used. Nevertheless, standard image-to-image methods that bridge the deployment domain and the synthetic training domain do not focus on the downstream task, only on visual inspection quality. We therefore propose a "task-aware" version of a GAN in an image-to-image domain adaptation approach. With little labeled ground-truth data, we guide the image-to-image translation toward input images that are more suitable for training a semantic segmentation network on synthetic data (a synthetic-domain expert). The main contributions of this work are 1) a modular semi-supervised domain adaptation method that trains a downstream-task-aware CycleGAN while avoiding adapting the synthetic semantic segmentation expert, 2) evidence that the method is applicable to complex domain adaptation tasks, and 3) a less biased domain-gap analysis using networks trained from scratch. We evaluate our method on a classification task as well as on semantic segmentation. Our experiments show that our method outperforms standard image-to-image approaches on the classification task, improving accuracy by 7% while using only 70 (10%) ground-truth images. For semantic segmentation, we can improve the mean intersection over union by about 4% to 7% on the evaluation dataset while using only 14 ground-truth images during training.
Generative models have been widely proposed in image recognition to generate more images whose distribution is similar to that of the real images. They usually introduce a discriminator network to distinguish real data from generated data. Such models utilize a discriminator network tasked with distinguishing style-transferred data from data contained in the target dataset. However, a network doing so focuses on differences in intensity distribution and may ignore structural differences between the datasets. In this paper, we formulate a new image-to-image translation problem to ensure that the structure of the generated images is similar to that of the images in the target dataset. We propose a simple yet powerful structure-unbiased adversarial (SUA) network that accounts for both intensity and structural differences between the training and test sets when performing image segmentation. It consists of a spatial transformation block followed by an intensity distribution rendering module. The spatial transformation block is proposed to reduce the structural gap between the two images, and also produces an inverse deformation field to warp the final segmented image back. The intensity distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method has the capability to transfer both the intensity distribution and the structural content between multiple datasets.
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models' performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem.
Deep neural networks (DNNs) have greatly contributed to performance gains in semantic segmentation. Nevertheless, training DNNs generally requires large amounts of pixel-level labeled data, which is expensive and time-consuming to collect in practice. To mitigate the annotation burden, this paper proposes a self-ensembling generative adversarial network (SE-GAN) that exploits cross-domain data for semantic segmentation. In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, form a GAN. Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model, the latter being a common barrier shared by most adversarial-training-based methods. We theoretically analyze SE-GAN and provide an $\mathcal{O}(1/\sqrt{N})$ generalization bound ($N$ is the training sample size), which suggests controlling the discriminator's hypothesis complexity to enhance generalizability. Accordingly, we choose a simple network as the discriminator. Extensive and systematic experiments in two standard settings demonstrate that the proposed method significantly outperforms current state-of-the-art approaches. The source code of our model will be made available soon.
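Self-ensembling teacher-student pairs such as this one typically update the teacher as an exponential moving average of the student after each training step; a minimal PyTorch sketch of that update, with an illustrative decay value (the abstract does not state SE-GAN's exact rule):

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Update teacher weights as an exponential moving average of the
    student's weights: t <- alpha * t + (1 - alpha) * s."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)
```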
Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a "learning via translation" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-to-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. To this end, we propose to preserve two types of unsupervised similarities: 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN), which consists of a Siamese network and a CycleGAN. Through domain adaptation experiments, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.
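A hedged sketch of the two constraints, with self-similarity as a mean-squared penalty and domain-dissimilarity as a margin-based contrastive term (the exact losses used by SPGAN may differ; the function name and margin are assumptions):

```python
import torch
import torch.nn.functional as F

def similarity_preserving_losses(f_src, f_trans, f_trg, margin=2.0):
    """f_src: embeddings of source images before translation,
    f_trans: embeddings of the same images after translation,
    f_trg: embeddings of (unrelated) target-domain images."""
    # Self-similarity: a translated image stays close to its source.
    self_sim = F.mse_loss(f_trans, f_src)
    # Domain-dissimilarity: a translated image should stay at least
    # a margin away from any target-domain identity.
    d_neg = F.pairwise_distance(f_trans, f_trg)
    domain_dissim = F.relu(margin - d_neg).pow(2).mean()
    return self_sim + domain_dissim
```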
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. An appealing alternative is to render synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images often fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that attempt to map representations between the two domains or learn to extract features that are domain-invariant. In this work, we present a new approach that learns, in an unsupervised manner, a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based model adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
This work presents a novel framework, CISFA (Contrastive Image Synthesis and Self-supervised Feature Adaptation), which builds on image domain translation and unsupervised feature adaptation for cross-modality biomedical image segmentation. Unlike existing works, we use a one-sided generative model and add a weighted patch-wise contrastive loss between sampled patches of an input image and the corresponding synthetic image, which serves as a shape constraint. Moreover, we notice that the generated images and the input images share similar structural information but belong to different modalities. We therefore enforce contrastive losses on the generated and input images to train the encoder of the segmentation model to minimize the discrepancy between paired images in the learned embedding space. Compared with existing works that rely on adversarial learning for feature adaptation, this approach enables the encoder to learn domain-independent features in a more explicit way. We extensively evaluate our methods on segmentation tasks covering CT and MRI images of the abdominal cavity and the whole heart. Experimental results show that the proposed framework not only outputs synthetic images with less distortion of organ shapes, but also outperforms state-of-the-art domain adaptation methods by a large margin.