Unsupervised domain adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. Existing UDA-based semantic segmentation approaches always reduce the domain shifts at the pixel level, feature level, and output level. However, almost all of them largely neglect the contextual dependency, which is generally shared across different domains, leading to less-desired performance. In this paper, we propose a novel Context-Aware Mixup (CAMix) framework for domain adaptive semantic segmentation, which exploits this important clue of context-dependency as explicit prior knowledge in a fully end-to-end trainable manner to enhance the adaptability toward the target domain. Firstly, we present a contextual mask generation strategy by leveraging accumulated spatial distributions and prior contextual relationships. The generated contextual mask is critical in this work and guides the context-aware domain mixup on three different levels. Besides, provided with the context knowledge, we introduce a significance-reweighted consistency loss to penalize the inconsistency between the mixed student prediction and the mixed teacher prediction, which alleviates the negative transfer of the adaptation, such as early performance degradation. Extensive experiments and analysis demonstrate the effectiveness of our method against the state-of-the-art approaches on widely-used UDA benchmarks.
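To make the mixing mechanism concrete, the following PyTorch sketch shows mask-guided cross-domain mixup and a consistency loss between a student prediction and a mixed teacher prediction. All names are illustrative: the paper's contextual mask generation is not modeled here (the mask is taken as given), and the plain MSE form is an assumed stand-in for the significance-reweighted loss.

```python
import torch
import torch.nn.functional as F

def context_mask_mix(src_img, tgt_img, src_lbl, tgt_pseudo, mask):
    """Mix a source/target pair under a binary context mask.
    mask: (B, 1, H, W); 1 keeps the source pixel, 0 keeps the target pixel."""
    mixed_img = mask * src_img + (1 - mask) * tgt_img
    mixed_lbl = torch.where(mask.squeeze(1).bool(), src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl

def mixed_consistency_loss(student_logits, teacher_src_logits, teacher_tgt_logits, mask):
    # Mix the teacher's per-domain soft predictions with the same context mask,
    # then penalize disagreement with the student's prediction on the mixed image.
    teacher_mixed = mask * teacher_src_logits.softmax(1) \
        + (1 - mask) * teacher_tgt_logits.softmax(1)
    return F.mse_loss(student_logits.softmax(1), teacher_mixed)
```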
In unsupervised domain adaptation (UDA), directly adapting from the source to the target domain usually suffers from significant discrepancies and leads to insufficient alignment. Thus, many UDA works attempt to vanish the domain gap gradually and softly via various intermediate spaces, which are called domain bridging (DB). However, for dense prediction tasks such as domain adaptive semantic segmentation (DASS), existing solutions have mostly relied on rough style transfer, and how to elegantly bridge domains is still under-explored. In this work, we resort to data mixing to establish a deliberated domain bridging (DDB) for DASS, through which the joint distributions of source and target domains are aligned and interacted with each other in the intermediate space. At the heart of DDB lies a dual-path domain bridging step for generating two intermediate domains using coarse-wise and fine-wise data mixing techniques, alongside a cross-path knowledge distillation step that takes two complementary models trained on the generated intermediate samples as "teachers" to develop a superior "student" in a multi-teacher distillation manner. These two optimization steps work in an alternating way and reinforce each other to give rise to DDB with strong adaptation power. Extensive experiments on adaptive segmentation tasks with different settings demonstrate that our DDB significantly outperforms state-of-the-art methods. Code is available at https://github.com/xiaoachen98/ddb.git.
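The two mixing paths can be pictured with the sketch below: a coarse, region-level mask (CutMix-style) and a fine, class-level mask (ClassMix-style). This is our own illustration of the two data-mixing techniques the abstract names; the paper's exact mask generation may differ.

```python
import torch

def cutmix_mask(h, w, lam=0.5):
    """Coarse path: a single rectangular region copied from the source image."""
    cut_h, cut_w = int(h * lam ** 0.5), int(w * lam ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mask = torch.zeros(h, w)
    mask[y1:y2, x1:x2] = 1.0
    return mask

def classmix_mask(src_label):
    """Fine path: copy the pixels of a random half of the source classes."""
    classes = src_label.unique()
    chosen = classes[torch.randperm(len(classes))[: max(len(classes) // 2, 1)]]
    return torch.isin(src_label, chosen).float()
```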
Domain adaptation aims to bridge the domain shifts between the source and the target domain. These shifts may span different dimensions such as fog, rainfall, etc. However, recent methods typically do not consider explicit prior knowledge about the domain shifts on a specific dimension, thus leading to less desired adaptation performance. In this paper, we study a practical setting called Specific Domain Adaptation (SDA) that aligns the source and target domains in a demanded-specific dimension. Within this setting, we observe that the intra-domain gap induced by different domainness (i.e., numerical magnitudes of domain shifts in this dimension) is crucial when adapting to a specific domain. To address the problem, we propose a novel Self-Adversarial Disentangling (SAD) framework. In particular, given a specific dimension, we first enrich the source domain by introducing a domainness creator that provides additional supervisory signals. Guided by the created domainness, we design a self-adversarial regularizer and two loss functions to jointly disentangle the latent representations into domainness-specific and domainness-invariant features, thus mitigating the intra-domain gap. Our method can be easily taken as a plug-and-play framework and does not introduce any extra costs at inference time. We achieve consistent improvements over state-of-the-art methods in both object detection and semantic segmentation.
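A common building block for this kind of self-adversarial disentangling is a gradient reversal layer, which lets an auxiliary head adversarially strip domainness information from shared features. The sketch below is a standard GRL, offered as a plausible ingredient rather than the paper's exact regularizer.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; gradient negated (and scaled) in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb=1.0):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def grad_reverse(x, lamb=1.0):
    return GradReverse.apply(x, lamb)
```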
Deep neural networks (DNNs) have greatly contributed to the performance gains in semantic segmentation. Nevertheless, training DNNs generally requires large amounts of pixel-level labeled data, which is expensive and time-consuming to collect in practice. To mitigate the annotation burden, this paper proposes a self-ensembling generative adversarial network (SE-GAN) that exploits cross-domain data for semantic segmentation. In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN. Despite its simplicity, we find that SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model, the latter being a common barrier shared by most adversarial-training-based methods. We theoretically analyze SE-GAN and provide an $\mathcal{O}(1/\sqrt{N})$ generalization bound ($N$ is the training sample size), which suggests controlling the discriminator's hypothesis complexity to enhance generalizability. Accordingly, we choose a simple network as the discriminator. Extensive and systematic experiments in two standard settings demonstrate that the proposed method significantly outperforms current state-of-the-art approaches. The source code of our model will be released soon.
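The self-ensembling part is essentially a mean-teacher update, and the generalization bound motivates a deliberately small discriminator. A hedged PyTorch sketch follows; the discriminator architecture is our guess at "a simple network", and 19 is the usual Cityscapes class count.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Self-ensembling: teacher weights track an exponential moving average
    of the student's weights."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

# A deliberately small discriminator, in line with the argument that a
# low-complexity discriminator tightens the generalization bound.
discriminator = nn.Sequential(
    nn.Conv2d(19, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),  # 19 = number of segmentation classes
)
```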
Semantic segmentation is an important task for intelligent vehicles to understand the environment. Current deep learning methods require large amounts of labeled data for training. Manual annotation is expensive, while simulators can provide accurate annotations. However, the performance of a semantic segmentation model trained on simulator data will degrade significantly when applied to real-world scenarios. Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention, aiming to reduce the domain gap and improve performance on the target domain. In this paper, we propose a novel two-stage entropy-based UDA method for semantic segmentation. In the first stage, we design a threshold-adapted unsupervised focal loss to regularize the predictions in the target domain, which has a mild gradient neutralization mechanism and mitigates the problem that hard samples are barely optimized in entropy-based methods. In the second stage, we introduce a data augmentation method named cross-domain image mixing (CIM) to bridge the semantic knowledge of the two domains. Our method achieves state-of-the-art mIoUs of 58.4% and 59.6% on SYNTHIA-to-Cityscapes and GTA5-to-Cityscapes using DeepLabV2, and competitive performance using the lightweight BiSeNet.
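One plausible form of such a threshold-adapted unsupervised focal loss is sketched below: entropy minimization whose per-pixel weight grows with confidence and that zeros out pixels below a threshold, so hard (low-confidence) pixels receive neutralized gradients. This is an assumed formulation for illustration, not the paper's exact loss.

```python
import torch

def unsupervised_focal_entropy(logits, gamma=2.0, thresh=0.5):
    """Confidence-weighted entropy minimization (assumed form)."""
    p = logits.softmax(dim=1)                           # (B, C, H, W)
    conf, _ = p.max(dim=1)                              # (B, H, W) max class probability
    ent = -(p * (p + 1e-8).log()).sum(dim=1)            # pixel-wise entropy
    weight = conf.pow(gamma) * (conf > thresh).float()  # neutralize hard pixels
    return (weight * ent).mean()
```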
Domain adaptation is to transfer the shared knowledge learned from the source domain to a new environment, i.e., the target domain. One common practice is to train the model on both labeled source-domain data and unlabeled target-domain data. Yet the learned models are usually biased due to the strong supervision of the source domain. Most researchers adopt the early-stopping strategy to prevent over-fitting, but when to stop training remains a challenging problem since no target-domain validation set is available. In this paper, we propose an efficient bootstrapping method, called AdaBoost Student, which explicitly learns complementary models during training and liberates users from empirical early stopping. AdaBoost Student combines deep model learning with the conventional training strategy, i.e., adaptive boosting, and enables interactions between the learned models and the data sampler. We adopt an adaptive data sampler to progressively facilitate learning on hard samples and aggregate "weak" models to prevent over-fitting. Extensive experiments show that (1) without the need to worry about the stopping time, AdaBoost Student provides a robust solution through efficient complementary model learning during training; and (2) AdaBoost Student is orthogonal to most domain adaptation methods and can be combined with existing approaches to further improve the state-of-the-art performance. We have achieved competitive results on three widely-used scene segmentation domain adaptation benchmarks.
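The interaction between the learned model and the data sampler can be sketched as an AdaBoost-style reweighting fed back into a weighted sampler. The update rule below is our simplification; `eta` and the exponential form are assumptions.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def boost_weights(weights, per_sample_loss, eta=1.0):
    """Up-weight samples the current model handles poorly, then renormalize."""
    w = weights * torch.exp(eta * (per_sample_loss - per_sample_loss.mean()))
    return w / w.sum()

# Feed the updated weights back into the sampler for the next training round.
weights = torch.ones(1000) / 1000  # e.g., a dataset of 1000 samples
sampler = WeightedRandomSampler(weights.tolist(), num_samples=len(weights))
```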
Unsupervised domain adaptation (UDA) aims to adapt models trained on a source domain to a new target domain where no labeled data is available. In this work, we investigate the problem of UDA from a synthetic computer-generated domain to a similar but real-world domain for learning semantic segmentation. We propose a semantically consistent image-to-image translation method combined with a consistency regularization method for UDA. We overcome previous limitations on transferring synthetic images to real-looking images. We leverage pseudo-labels in order to learn a generative image-to-image translation model that receives additional feedback from semantic labels on both domains. Our method outperforms state-of-the-art methods that combine image-to-image translation and semi-supervised learning on the relevant domain adaptation benchmarks, i.e., Cityscapes and SYNTHIA.
Unsupervised domain adaptation (UDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. In this paper, we propose Prototypical Contrast Adaptation (ProCA), a simple and efficient contrastive learning method for unsupervised domain adaptive semantic segmentation. Previous domain adaptation methods merely consider the alignment of the intra-class representational distributions across various domains, while the inter-class structural relationship is insufficiently explored, so that the aligned representations on the target domain may not be as easily discriminated as they are on the source domain. Instead, ProCA incorporates inter-class information into class-wise prototypes and adopts class-centered distribution alignment for adaptation. By regarding the same-class prototype as the positive and other-class prototypes as negatives to achieve class-centered distribution alignment, ProCA achieves state-of-the-art performance on classical domain adaptation tasks, i.e., GTA5 $\to$ Cityscapes and SYNTHIA $\to$ Cityscapes. Code is available at https://github.com/jiangzhengkai/ProCA.
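The class-centered alignment can be written as an InfoNCE-style loss over class prototypes: a pixel embedding's own-class prototype is the positive and every other prototype is a negative. A minimal sketch, with names and temperature of our choosing:

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(feats, labels, prototypes, tau=0.1):
    """feats: (N, D) pixel embeddings; labels: (N,) class ids;
    prototypes: (C, D) class prototypes."""
    f = F.normalize(feats, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = f @ p.t() / tau  # (N, C) similarities to every class prototype
    # Cross-entropy treats the own-class prototype as positive, the rest as negatives.
    return F.cross_entropy(logits, labels)
```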
Understanding foggy image sequences in driving scenes is critical for autonomous driving, but it remains a challenging task due to the difficulty of collecting and annotating real-world images of adverse weather. Recently, the self-training strategy has been considered a powerful solution for unsupervised domain adaptation, which iteratively adapts the model from the source domain to the target domain by generating target pseudo-labels and re-training the model. However, selecting confident pseudo-labels inevitably suffers from the conflict between sparsity and accuracy, both of which lead to suboptimal models. To tackle this problem, we exploit the characteristics of foggy image sequences of driving scenes to densify the confident pseudo-labels. Specifically, based on two findings about sequential image data, local spatial similarity and adjacent temporal correspondence, we propose a novel Target-Domain driven pseudo-label Diffusion (TDo-Dif) scheme. It employs superpixels and optical flow to identify spatial similarity and temporal correspondence, respectively, and then diffuses the confident but sparse pseudo-labels within a superpixel or along temporally corresponding pairs linked by the flow. Moreover, to ensure the feature similarity of the diffused pixels, we introduce a local spatial similarity loss and a temporal contrastive loss in the model re-training stage. Experimental results show that our TDo-Dif scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets (Foggy Zurich and Foggy Driving), which exceeds state-of-the-art unsupervised domain adaptive semantic segmentation methods. Models and data can be found at https://github.com/velor2012/tdo-dif.
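The spatial-similarity half of the diffusion can be illustrated with a superpixel majority vote: confident but sparse labels are spread to every pixel of their superpixel. The sketch below omits the optical-flow (temporal) path, and the NumPy conventions are our own.

```python
import numpy as np

def diffuse_in_superpixels(pseudo, confident, superpixels, ignore=255):
    """pseudo: (H, W) int label map; confident: (H, W) bool mask of trusted
    pixels; superpixels: (H, W) int superpixel ids. Spreads confident labels
    to whole superpixels by majority vote; unlabeled regions stay `ignore`."""
    out = np.full_like(pseudo, ignore)
    for sp_id in np.unique(superpixels):
        region = superpixels == sp_id
        labels = pseudo[region & confident]
        if labels.size > 0:
            out[region] = np.bincount(labels).argmax()  # majority confident label
    return out
```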
Unsupervised source-free domain adaptation methods aim to train a model to be used in the target domain utilizing the pretrained source-domain model and unlabeled target-domain data, where the source data may not be accessible due to intellectual property or privacy issues. These methods frequently utilize self-training with pseudo-labeling thresholded by prediction confidence. In a source-free scenario, the only supervision comes from target data, and thresholding limits the contribution of self-training. In this study, we utilize self-training with a mean-teacher approach. The student network is trained with all predictions of the teacher network. Instead of thresholding the predictions, the gradients calculated from the pseudo-labels are weighted based on the reliability of the teacher's predictions. We propose a novel method that uses proxy-based metric learning to estimate reliability. We train a metric network on the encoder features of the teacher network. Since the teacher is updated with the moving average, the encoder feature space changes slowly. Therefore, the metric network can be updated during training, which enables end-to-end training. We also propose a metric-based online ClassMix method to augment the input of the student network, where the patches to be mixed are selected based on the metric reliability. We evaluated our method in synthetic-to-real and cross-city scenarios. The benchmarks show that our method significantly outperforms the existing state-of-the-art methods.
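The key departure from thresholded self-training fits in a few lines: every pixel contributes, but its cross-entropy gradient is scaled by the teacher's estimated reliability. A sketch with assumed shapes; the reliability map would come from the proxy-based metric network.

```python
import torch.nn.functional as F

def reliability_weighted_ce(student_logits, pseudo_labels, reliability):
    """student_logits: (B, C, H, W); pseudo_labels: (B, H, W) from the teacher;
    reliability: (B, H, W) in [0, 1], estimated by the metric network."""
    loss = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (reliability * loss).mean()  # soft weighting instead of hard thresholding
```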
Benefiting from considerable pixel-level annotations collected from a specific situation (source), a trained semantic segmentation model performs quite well but fails in a new situation (target) due to the large domain shift. To mitigate the domain gap, previous cross-domain semantic segmentation methods always assume the co-existence of source data and target data during domain alignment. However, accessing source data in real scenarios may raise privacy concerns and violate intellectual property. To tackle this problem, we focus on an interesting and challenging cross-domain semantic segmentation task where only the trained source model is provided to the target domain. Specifically, we propose a unified framework called ATP, which consists of three schemes, i.e., feature Alignment, bidirectional Teaching, and information Propagation. First, we devise a curriculum entropy minimization objective to implicitly align the target features with unseen source features via the provided source model. Second, besides the positive pseudo-labels of vanilla self-training, we are the first to introduce negative pseudo-labels to this field and develop a bidirectional self-training strategy to enhance representation learning in the target domain. Finally, the information propagation scheme is employed to further reduce the intra-domain discrepancy within the target domain via pseudo semi-supervised learning. Extensive results on synthesis-to-real and cross-city driving datasets validate that ATP yields state-of-the-art performance, even surpassing methods that require access to source data.
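The negative pseudo-label idea maps to a negative-learning loss: for classes a pixel is confidently not, push their probability down by maximizing log(1 - p). A hedged sketch; how the negative mask is constructed is left out, and the names are ours.

```python
import torch

def negative_pseudo_label_loss(logits, neg_mask, eps=1e-7):
    """logits: (B, C, H, W); neg_mask: (B, C, H, W) binary, 1 where class c
    is a negative pseudo-label (confidently wrong) for that pixel."""
    p = logits.softmax(dim=1).clamp(eps, 1.0 - eps)
    # Maximize log(1 - p) on negative classes, i.e., suppress their probability.
    return -(torch.log(1.0 - p) * neg_mask).sum() / neg_mask.sum().clamp(min=1.0)
```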
Self-training has greatly facilitated domain adaptive semantic segmentation, which iteratively generates pseudo-labels on the target domain and retrains the network. However, since realistic segmentation datasets are highly imbalanced, target pseudo-labels are typically biased toward the majority classes and basically noisy, leading to error-prone and suboptimal models. To address this issue, we propose a region-based active learning approach for semantic segmentation under a domain shift, aiming to automatically query a small partition of image regions to be labeled while maximizing segmentation performance. Our algorithm, Active Learning via Region Impurity and Prediction Uncertainty (AL-RIPU), introduces a novel acquisition strategy characterizing the spatial adjacency of image regions along with the prediction confidence. We show that the proposed region-based selection strategy makes more efficient use of a limited budget than image-based or point-based counterparts. Meanwhile, we enforce local prediction consistency between a pixel and its nearest neighbors on source images. Furthermore, we develop a negative learning loss to enhance the discriminative representations on the target domain. Extensive experiments demonstrate that our method requires only very few annotations to almost reach the supervised performance and substantially outperforms state-of-the-art methods.
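The acquisition strategy can be read as "region impurity x prediction uncertainty": a region is worth labeling if its predicted classes are mixed and the model is unsure. Below is one plausible scoring function; the window size and the entropy forms are our choices.

```python
import torch
import torch.nn.functional as F

def ripu_score(logits, k=1, eps=1e-8):
    """logits: (B, C, H, W). Returns a (B, H, W) acquisition map: the entropy
    of the class histogram in a (2k+1)^2 window (impurity) times the
    pixel-wise softmax entropy (uncertainty)."""
    p = logits.softmax(dim=1)
    uncertainty = -(p * (p + eps).log()).sum(dim=1)
    onehot = F.one_hot(p.argmax(1), p.size(1)).permute(0, 3, 1, 2).float()
    freq = F.avg_pool2d(onehot, 2 * k + 1, stride=1, padding=k)  # window class frequencies
    impurity = -(freq * (freq + eps).log()).sum(dim=1)
    return impurity * uncertainty  # query the highest-scoring regions
```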
Semantic segmentation plays a fundamental role in a broad variety of computer vision applications, providing key information for the global understanding of an image. Yet, state-of-the-art models rely on large amounts of annotated samples, which are more expensive to obtain than in tasks such as image classification. Since unlabelled data is significantly cheaper to obtain, it is not surprising that unsupervised domain adaptation has reached broad success within the semantic segmentation community. This survey is an effort to summarize five years of this incredibly rapidly growing field, which embraces the importance of semantic segmentation itself and the critical need to adapt segmentation models to new environments. We present the most important semantic segmentation methods; we provide a comprehensive survey of domain adaptation techniques for semantic segmentation; we unveil newer trends such as multi-domain learning, domain generalization, test-time adaptation, and source-free domain adaptation; and we conclude the survey by describing the datasets and benchmarks most widely used in semantic segmentation research. We hope this survey will provide researchers across academia and industry with a comprehensive reference guide and will help them foster new research directions in the field.
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data. It can save the cost of manually labeling data in real-world applications such as robot vision and autonomous driving. Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation. However, such an assumption does not always hold in practice owing to the collection difficulty and the scarcity of the data. Thus, we aim to relieve this need for a large amount of real data, and explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization (OSDG) problem, where only one real-world data sample is available. To remedy the limited real-data knowledge, we first construct the pseudo-target domain by stylizing the simulated data with the one-shot real data. To mitigate the sim-to-real domain gap at both the style and the spatial structure level and to facilitate the sim-to-real adaptation, we further propose to use class-aware cross-domain transformers with an intermediate domain randomization strategy to extract domain-invariant knowledge from both the simulated and pseudo-target data. We demonstrate the effectiveness of our approach for OSUDA and OSDG on different benchmarks, outperforming the state-of-the-art methods by large margins (10.87, 9.59, 13.05, and 15.91 mIoU on GTA$\rightarrow$Cityscapes, SYNTHIA$\rightarrow$Cityscapes, and Foggy Cityscapes, respectively).
Unsupervised domain adaptation (UDA) for semantic segmentation has been well studied in recent years. However, most existing works largely neglect the local regional consistency across different domains and are less robust to changes in outdoor environments. In this paper, we propose a novel and fully end-to-end trainable approach, called Regional Contrastive Consistency Regularization (RCCR), for domain adaptive semantic segmentation. Our core idea is to pull similar regional features extracted from the same location of different images, i.e., the original image and the augmented image, closer together, while pushing the features from different locations of the two images apart. We propose a region-wise contrastive loss with two sampling strategies to realize effective regional consistency. Besides, we present momentum projection heads, where the teacher projection head is the exponential moving average of the student's. Finally, a memory bank mechanism is designed to learn more robust and stable region-wise features under varying environments. Extensive experiments on two common UDA benchmarks, i.e., GTAV to Cityscapes and SYNTHIA to Cityscapes, demonstrate that our approach outperforms state-of-the-art methods.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
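As a rough picture of what global photometric alignment can look like, the sketch below matches the per-channel first and second moments of a source image to a target reference. This is a generic moment-matching illustration; the paper's actual module may differ in detail.

```python
def photometric_align(src, tgt, eps=1e-6):
    """src, tgt: (B, 3, H, W) image tensors. Shift/scale the source image so
    its per-channel mean and std match the target's (a simple global
    photometric alignment)."""
    s_mean, s_std = src.mean((2, 3), keepdim=True), src.std((2, 3), keepdim=True)
    t_mean, t_std = tgt.mean((2, 3), keepdim=True), tgt.std((2, 3), keepdim=True)
    return (src - s_mean) / (s_std + eps) * t_std + t_mean
```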
The network trained for domain adaptation is prone to bias toward the easy-to-transfer classes. Since the ground truth label on the target domain is unavailable during training, the bias problem leads to skewed predictions, and the model forgets to predict hard-to-transfer classes. To address this problem, we propose Cross-domain Moving Object Mixing (CMOM) that cuts several objects, including hard-to-transfer classes, in the source domain video clip and pastes them into the target domain video clip. Unlike image-level domain adaptation, the temporal context should be maintained to mix moving objects in two different videos. Therefore, we design CMOM to mix with consecutive video frames, so that unrealistic movements do not occur. We additionally propose Feature Alignment with Temporal Context (FATC) to enhance target domain feature discriminability. FATC exploits the robust source domain features, which are trained with ground truth labels, to learn discriminative target domain features in an unsupervised manner by filtering unreliable predictions with temporal consensus. We demonstrate the effectiveness of the proposed approaches through extensive experiments. In particular, our model reaches mIoU of 53.81% on the VIPER to Cityscapes-Seq benchmark and mIoU of 56.31% on the SYNTHIA-Seq to Cityscapes-Seq benchmark, surpassing the state-of-the-art methods by large margins.
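The frame-consistent mixing can be sketched as pasting source pixels of chosen (hard-to-transfer) classes into each target frame using per-frame source masks, so the pasted objects keep their original motion across the clip. The names below are illustrative, not the paper's API.

```python
import torch

def cmom_paste(src_frames, tgt_frames, src_labels, classes_to_mix):
    """src_frames/tgt_frames: lists of (3, H, W) consecutive frames;
    src_labels: list of (H, W) label maps; classes_to_mix: 1-D class-id tensor.
    Pasting frame-by-frame preserves the objects' temporal motion."""
    mixed = []
    for f_s, f_t, lbl in zip(src_frames, tgt_frames, src_labels):
        mask = torch.isin(lbl, classes_to_mix).unsqueeze(0)  # (1, H, W), broadcast over RGB
        mixed.append(torch.where(mask, f_s, f_t))
    return mixed
```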
Collecting real-world annotations for training semantic segmentation models is an expensive process. Unsupervised domain adaptation (UDA) tries to solve this problem by studying how more accessible data, such as synthetic data, can be used to train and adapt models to real-world images without requiring their annotations. Recent UDA methods apply self-learning by training on the pixel-wise classification loss using a student-teacher network. In this paper, we propose adding a consistency regularization term to semi-supervised UDA by modelling the pixel-wise inter-element relations of the network output. We demonstrate the effectiveness of the proposed consistency regularization term by applying it to the state-of-the-art DAFormer framework, improving the mIoU19 performance on the GTA5-to-Cityscapes benchmark and the mIoU16 performance on the SYNTHIA-to-Cityscapes benchmark.
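One way to realize consistency over inter-element relations is to align the pairwise similarity structure of student and teacher outputs, as sketched below. This is our assumed instantiation of such a term; in practice pixels would be subsampled, since the full relation matrix is HW x HW.

```python
import torch.nn.functional as F

def relation_consistency(student_logits, teacher_logits):
    """Align pixel-pair cosine relations of student and teacher predictions."""
    def relations(x):  # x: (B, C, H, W)
        f = F.normalize(x.softmax(1).flatten(2), dim=1)  # (B, C, HW)
        return f.transpose(1, 2) @ f                     # (B, HW, HW) pairwise similarities
    return F.mse_loss(relations(student_logits), relations(teacher_logits.detach()))
```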
While deep learning methods hitherto have achieved considerable success in medical image segmentation, they are still hampered by two limitations: (i) reliance on large-scale well-labeled datasets, which are difficult to curate due to the expert-driven and time-consuming nature of pixel-level annotations in clinical practices, and (ii) failure to generalize from one domain to another, especially when the target domain is a different modality with severe domain shifts. Recent unsupervised domain adaptation (UDA) techniques leverage abundant labeled source data together with unlabeled target data to reduce the domain gap, but these methods degrade significantly with limited source annotations. In this study, we address this underexplored UDA problem, investigating a challenging but valuable realistic scenario, where the source domain not only exhibits domain shift w.r.t. the target domain but also suffers from label scarcity. In this regard, we propose a novel and generic framework called "Label-Efficient Unsupervised Domain Adaptation" (LE-UDA). In LE-UDA, we construct self-ensembling consistency for knowledge transfer between both domains, as well as a self-ensembling adversarial learning module to achieve better feature alignment for UDA. To assess the effectiveness of our method, we conduct extensive experiments on two different tasks for cross-modality segmentation between MRI and CT images. Experimental results demonstrate that the proposed LE-UDA can efficiently leverage limited source labels to improve cross-domain segmentation performance, outperforming state-of-the-art UDA approaches in the literature. Code is available at: https://github.com/jacobzhaoziyuan/LE-UDA.
Semantic segmentation is a key problem for many computer vision tasks. While approaches based on convolutional neural networks constantly break new records on different benchmarks, generalizing well to diverse testing environments remains a major challenge. In numerous real-world applications, there is indeed a large gap between data distributions in train and test domains, which results in severe performance loss at run-time. In this work, we address the task of unsupervised domain adaptation in semantic segmentation with losses based on the entropy of the pixel-wise predictions. To this end, we propose two novel, complementary methods using (i) an entropy loss and (ii) an adversarial loss, respectively. We demonstrate state-of-the-art performance in semantic segmentation on two challenging "synthetic-2-real" set-ups and show that the approach can also be used for detection.
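The direct variant of the entropy objective is short enough to write out: minimize the normalized pixel-wise Shannon entropy of the softmax prediction (the adversarial variant instead trains a discriminator over the prediction-derived maps). A minimal sketch:

```python
import math
import torch
import torch.nn.functional as F

def entropy_loss(logits, eps=1e-8):
    """logits: (B, C, H, W). Mean pixel-wise entropy, normalized by log(C)
    so the loss lies in [0, 1] regardless of the class count."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + eps)).sum(dim=1) / math.log(p.size(1))
    return ent.mean()
```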