From the frequency perspective of computer vision, previous unsupervised domain adaptation methods fail to handle the cross-domain problem. Images or feature maps from different domains can be decomposed into low-frequency and high-frequency components. This paper proposes the hypothesis that low-frequency information is more domain-invariant, while high-frequency information carries the domain-related information. We therefore introduce an approach named the Low-Frequency Module (LFM) to extract domain-invariant feature representations. The LFM is constructed with a digital Gaussian low-pass filter. Our method is easy to implement and introduces no additional hyperparameters. We design two effective ways of utilizing the LFM for domain adaptation. Our method is complementary to other existing methods and serves as a plug-in unit that can be combined with them. Experimental results demonstrate that our LFM outperforms state-of-the-art methods on a variety of computer vision tasks, including image classification and object detection.
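As a rough illustration of the kind of filtering this abstract describes, the sketch below applies a Gaussian low-pass filter in the 2D Fourier domain to keep the (assumed domain-invariant) low-frequency component of an image; the cutoff `sigma` and the FFT-based formulation are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def gaussian_low_pass(image: np.ndarray, sigma: float = 20.0) -> np.ndarray:
    """Extract the low-frequency component of an (H, W) or (H, W, C) image
    with a Gaussian low-pass filter applied in the 2D Fourier domain."""
    h, w = image.shape[:2]
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    dist2 = (y - cy) ** 2 + (x - cx) ** 2
    mask = np.exp(-dist2 / (2.0 * sigma ** 2))       # 1 near DC, decays for high frequencies

    def filter_channel(ch):
        spectrum = np.fft.fftshift(np.fft.fft2(ch))  # center the zero frequency
        low = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
        return np.real(low)

    if image.ndim == 2:
        return filter_channel(image)
    return np.stack([filter_channel(image[..., c]) for c in range(image.shape[2])], axis=-1)

# Usage sketch: feed the filtered image to the network as the low-frequency input.
# low = gaussian_low_pass(img, sigma=20.0)
```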
In image classification, acquiring sufficient labels is often expensive and time-consuming. To address this problem, domain adaptation often provides an attractive option, given a large amount of labeled data from a similar but different domain. Existing methods mainly align the distributions of representations extracted by a single structure, and such a representation may contain only partial information, e.g., only the saturation, brightness, and hue information. Along this line, we propose multi-representation adaptation, which can greatly improve classification accuracy for cross-domain image classification and specifically aims to align the distributions of multiple representations extracted by an Inception Adaptation Module (IAM). Based on this, we present the Multi-Representation Adaptation Network (MRAN) to accomplish cross-domain image classification through multi-representation alignment, which can capture information from different aspects. In addition, we extend Maximum Mean Discrepancy (MMD) to compute the adaptation loss. Our approach can be easily implemented by extending most feed-forward models with the IAM, and the network can be trained efficiently via back-propagation. Experiments conducted on three benchmark image datasets demonstrate the effectiveness of MRAN. The code has been made available at https://github.com/easezyc/deep-transfer-learning.
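The abstract states that MMD is extended to compute the adaptation loss but does not detail the extension; the sketch below therefore only shows a standard multi-bandwidth Gaussian-kernel MMD estimate between source and target feature batches, with the bandwidth set and function names chosen for illustration.

```python
import torch

def mmd_loss(source: torch.Tensor, target: torch.Tensor,
             bandwidths=(1.0, 2.0, 4.0, 8.0)) -> torch.Tensor:
    """Biased estimate of squared MMD between two feature batches
    (shapes [n, d] and [m, d]) using a sum of Gaussian RBF kernels."""
    def rbf(a, b):
        dist2 = torch.cdist(a, b, p=2).pow(2)        # pairwise squared distances
        return sum(torch.exp(-dist2 / (2.0 * bw ** 2)) for bw in bandwidths)

    k_ss = rbf(source, source).mean()
    k_tt = rbf(target, target).mean()
    k_st = rbf(source, target).mean()
    return k_ss + k_tt - 2.0 * k_st

# Illustrative training step:
# loss = ce_loss(cls_out_src, labels_src) + lambda_adapt * mmd_loss(feat_src, feat_tgt)
```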
Domain adaptive object detection (DAOD) aims to improve the generalization ability of detectors when the training and test data come from different domains. Considering the significant domain gap, some typical methods, such as CycleGAN-based methods, adopt intermediate domains to progressively bridge the source and target domains. However, a CycleGAN-based intermediate domain lacks pixel- or instance-level supervision for object detection, which leads to semantic differences. To address this problem, in this paper we introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations. In this way, we can obtain a series of augmented data to serve as the intermediate domain. Specifically, we propose a two-stage optimization framework. In the first stage, we utilize all the original and augmented source data to train an object detector. In the second stage, augmented source and target data with pseudo labels are adopted to perform self-training for prediction consistency. A teacher model optimized with Mean Teacher is used to further refine the pseudo labels. In experiments, we evaluate our method on single- and compound-target DAOD, which demonstrates the effectiveness of our approach.
Domain adaptation aims to bridge the domain shifts between the source and the target domain. These shifts may span different dimensions such as fog, rainfall, etc. However, recent methods typically do not consider explicit prior knowledge about the domain shifts on a specific dimension, thus leading to less desired adaptation performance. In this paper, we study a practical setting called Specific Domain Adaptation (SDA) that aligns the source and target domains in a demanded-specific dimension. Within this setting, we observe that the intra-domain gap induced by different domainness (i.e., numerical magnitudes of domain shifts in this dimension) is crucial when adapting to a specific domain. To address the problem, we propose a novel Self-Adversarial Disentangling (SAD) framework. In particular, given a specific dimension, we first enrich the source domain by introducing a domainness creator that provides additional supervisory signals. Guided by the created domainness, we design a self-adversarial regularizer and two loss functions to jointly disentangle the latent representations into domainness-specific and domainness-invariant features, thus mitigating the intra-domain gap. Our method can be easily taken as a plug-and-play framework and does not introduce any extra costs at inference time. We achieve consistent improvements over state-of-the-art methods in both object detection and semantic segmentation.
Unsupervised Domain Adaptation (UDA) makes predictions for the target domain data while manual annotations are only available in the source domain. Previous methods minimize the domain discrepancy neglecting the class information, which may lead to misalignment and poor generalization performance. To address this issue, this paper proposes Contrastive Adaptation Network (CAN) optimizing a new metric which explicitly models the intra-class domain discrepancy and the inter-class domain discrepancy. We design an alternating update strategy for training CAN in an end-to-end manner. Experiments on two real-world benchmarks Office-31 and VisDA-2017 demonstrate that CAN performs favorably against the state-of-the-art methods and produces more discriminative features.
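As a simplified, hedged sketch of the intra-/inter-class idea (the paper's actual Contrastive Domain Discrepancy is a kernel-based estimate that obtains target class assignments via clustering), the snippet below measures class-conditional discrepancy through per-class feature means; the helper name and the mean-based simplification are assumptions.

```python
import torch

def class_aware_discrepancy(feat_s, labels_s, feat_t, pseudo_t, num_classes):
    """Simplified class-aware discrepancy: squared distances between per-class
    means of source and (pseudo-labeled) target features. Returns (intra, inter)."""
    means_s, means_t = [], []
    for c in range(num_classes):
        ms, mt = labels_s == c, pseudo_t == c
        if ms.any() and mt.any():                    # skip classes missing in this batch
            means_s.append(feat_s[ms].mean(dim=0))
            means_t.append(feat_t[mt].mean(dim=0))
    if not means_s:
        return feat_s.new_zeros(()), feat_s.new_zeros(())
    mu_s = torch.stack(means_s)                      # [C', d]
    mu_t = torch.stack(means_t)
    dists = torch.cdist(mu_s, mu_t).pow(2)           # [C', C'] cross-domain class distances
    intra = dists.diag().mean()                      # same class, different domains
    c_num = dists.size(0)
    inter = (dists.sum() - dists.diag().sum()) / max(c_num * (c_num - 1), 1)
    return intra, inter

# Objective sketch: pull same-class features together, push different classes apart.
# loss = ce_loss + lam * (intra - inter)
```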
Most existing studies on unsupervised domain adaptation (UDA) assume that each domain's training samples come with domain labels (e.g., painting, photo). The samples within each domain are assumed to follow the same distribution, and domain labels are exploited to learn domain-invariant features via feature alignment. However, such an assumption often does not hold true; there frequently exist many finer-grained domains (e.g., dozens of modern painting styles have been developed, each differing dramatically from the classic styles). Forcing feature distribution alignment across such artificially defined, coarse-grained domains can therefore be ineffective. In this paper, we address both single-source and multi-source UDA from a completely different perspective, namely by viewing each instance as a fine domain. Feature alignment across domains is thus redundant. Instead, we propose to perform dynamic instance domain adaptation (DIDA). Concretely, a dynamic neural network with adaptive convolutional kernels is developed to generate instance-adaptive residuals that adapt domain-agnostic deep features to each individual instance. This allows a shared classifier to be applied to both source-domain and target-domain data without relying on any domain annotation. Further, instead of imposing intricate feature alignment losses, we adopt a simple semi-supervised learning paradigm that uses only a cross-entropy loss on labeled source and pseudo-labeled target data. Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets, including Digits, Office-Home, DomainNet, Digit-Five, and PACS.
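The abstract's dynamic kernels generate instance-adaptive residuals; the sketch below shows a simplified channel-wise variant in which a small controller, rather than full adaptive convolution kernels, modulates each instance's features before adding them back as a residual. This is a simplifying assumption, not the paper's exact module.

```python
import torch
import torch.nn as nn

class InstanceAdaptiveResidual(nn.Module):
    """Simplified sketch of an instance-adaptive residual: a light controller
    predicts per-instance, per-channel modulation weights from globally pooled
    features, and the modulated feature is added back as a residual."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.controller = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(), nn.Linear(hidden, channels))

    def forward(self, feat):                               # feat: [n, C, H, W]
        pooled = feat.mean(dim=(2, 3))                     # [n, C] instance descriptor
        weights = torch.sigmoid(self.controller(pooled))   # per-instance channel weights
        residual = feat * weights.unsqueeze(-1).unsqueeze(-1)
        return feat + residual                             # shared feature + adaptive residual
```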
Batch Normalization (BN), widely used in modern neural networks, has been shown to represent domain-related knowledge and is therefore ineffective for cross-domain tasks such as unsupervised domain adaptation (UDA). Existing BN-variant methods aggregate source-domain and target-domain knowledge in the same channel of the normalization module. However, the misalignment between the features of corresponding channels across domains often leads to sub-optimal transferability. In this paper, we exploit the cross-domain relation and propose a novel normalization method, Reciprocal Normalization (RN). Specifically, RN first presents a Reciprocal Compensation (RC) module to acquire a compensation component for each channel in both domains based on the cross-domain channel-wise correlation. Then RN develops a Reciprocal Aggregation (RA) module to adaptively aggregate a feature with its cross-domain compensation component. As an alternative to BN, RN is better suited to UDA problems and can be easily integrated into popular domain adaptation methods. Experiments show that the proposed RN outperforms existing normalization counterparts by a large margin and helps state-of-the-art adaptation methods achieve better results. The source code is available at https://github.com/openning07/reciprocal-normalization-for-da.
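Purely as an assumption-level illustration of channel-wise cross-domain compensation and aggregation (the RC and RA modules are only named, not specified, in this abstract), the sketch below builds a cross-domain channel affinity from per-channel statistics and gates the resulting compensation back into each domain's features.

```python
import torch
import torch.nn as nn

class ReciprocalCompensationSketch(nn.Module):
    """Rough approximation of the described idea: given source and target feature
    maps [n, C, H, W] in the same forward pass, compute per-channel statistics
    in each domain, a cross-domain channel affinity, and a compensation term that
    is aggregated back into each domain's features with a learnable gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Parameter(torch.zeros(channels))    # per-channel aggregation weight

    def forward(self, f_src, f_tgt):
        ms = f_src.mean(dim=(0, 2, 3))                     # [C] source channel statistics
        mt = f_tgt.mean(dim=(0, 2, 3))                     # [C] target channel statistics
        affinity = torch.softmax(ms.unsqueeze(1) * mt.unsqueeze(0), dim=1)  # [C, C]
        comp_src = affinity @ mt                           # compensation for source channels
        comp_tgt = affinity.t() @ ms                       # compensation for target channels
        g = torch.sigmoid(self.gate).view(1, -1, 1, 1)
        return (f_src + g * comp_src.view(1, -1, 1, 1),
                f_tgt + g * comp_tgt.view(1, -1, 1, 1))
```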
Recent advances in unsupervised domain adaptation (UDA) techniques have achieved great success on cross-domain computer vision tasks, enhancing the generalization ability of data-driven deep learning architectures by bridging the gap between domain distributions. Most UDA-based cross-domain object detection methods alleviate domain bias by inducing domain-invariant feature generation through adversarial learning strategies. However, their domain discriminators have limited classification ability due to the unstable adversarial training process. Consequently, the features they extract cannot be fully domain-invariant and still contain domain-private factors, which hinders further mitigation of the cross-domain discrepancy. To tackle this problem, we design a domain disentanglement Faster R-CNN (DDF) to eliminate the domain-specific information in the features used for detection task learning. Our DDF method facilitates feature disentanglement at both the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module, respectively. By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method demonstrates its broad applicability.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
Adapting to out-of-distribution data is a meta-challenge for all statistical learning algorithms that strongly rely on the i.i.d. assumption, and it leads to unavoidable labor costs and confidence crises in real-world applications. To this end, domain generalization aims to mine domain-invariant knowledge from multiple source domains that can generalize to unseen target domains. In this paper, by exploiting the frequency domain of images, we make two key observations: (i) the high-frequency information of an image depicts object edge structures, which preserve the high-level semantic information of the object and are naturally consistent across different domains, and (ii) the low-frequency component retains the smooth structures of the object, and this information is susceptible to domain shifts. Motivated by these observations, we introduce (i) the use of both the high-frequency and low-frequency features of images, (ii) an information interaction mechanism to ensure that the useful knowledge from the two parts can cooperate effectively, and (iii) a novel data augmentation technique that operates in the frequency domain to encourage the robustness of the frequency features. The proposed method achieves state-of-the-art performance on three widely used domain generalization benchmarks (Digit-DG, Office-Home, and PACS).
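The abstract mentions a frequency-domain data augmentation without specifying it; one common realization of this idea, shown below purely as an assumption, mixes the low-frequency amplitude spectrum of one training image into another while preserving the original phase.

```python
import numpy as np

def amplitude_mix(img_a: np.ndarray, img_b: np.ndarray, ratio: float = 0.2,
                  alpha: float = 0.5) -> np.ndarray:
    """Frequency-domain augmentation sketch: blend the low-frequency amplitude
    spectrum of img_b into img_a while keeping img_a's phase.
    Both inputs are float arrays of identical (H, W) shape."""
    fa = np.fft.fftshift(np.fft.fft2(img_a))
    fb = np.fft.fftshift(np.fft.fft2(img_b))
    amp_a, pha_a = np.abs(fa), np.angle(fa)
    amp_b = np.abs(fb)

    h, w = img_a.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(h * ratio / 2), int(w * ratio / 2)   # low-frequency window half-sizes
    mixed = amp_a.copy()
    mixed[ch - rh:ch + rh, cw - rw:cw + rw] = (
        (1 - alpha) * amp_a[ch - rh:ch + rh, cw - rw:cw + rw]
        + alpha * amp_b[ch - rh:ch + rh, cw - rw:cw + rw]
    )
    out = np.fft.ifft2(np.fft.ifftshift(mixed * np.exp(1j * pha_a)))
    return np.real(out)
```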
In this paper, we propose a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for unsupervised domain adaptation, to improve its ability to extract domain-invariant features. We conduct experiments on the Domain-Adversarial Training of Neural Networks (DANN) model as a representative algorithm. In the training process, we feed the same input to both modules and then extract their feature distributions and prediction results separately. We propose a discrepancy loss to measure the difference between the prediction results and between the feature distributions of the two modules. By adversarially maximizing the discrepancy between their feature distributions while minimizing the discrepancy between their prediction results, the two modules are encouraged to learn more domain-discriminative and domain-invariant features, respectively. Extensive comparative evaluations show that the proposed method achieves state-of-the-art performance on most unsupervised domain adaptation tasks.
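A minimal sketch of the two discrepancy terms described above is given below; the abstract does not fix the concrete metrics, so the mean-feature distance and the L1 prediction distance used here are assumptions.

```python
import torch
import torch.nn.functional as F

def discrepancy_losses(feat_inv, feat_dis, pred_inv, pred_dis):
    """Dual-module discrepancy terms (illustrative metrics).
    feat_*: [n, d] features; pred_*: [n, K] logits from the two modules."""
    # Feature-distribution discrepancy (to be MAXIMIZED so the two modules diverge):
    feat_disc = (feat_inv.mean(dim=0) - feat_dis.mean(dim=0)).pow(2).sum()
    # Prediction discrepancy (to be MINIMIZED so both modules agree on labels):
    pred_disc = F.l1_loss(F.softmax(pred_inv, dim=1), F.softmax(pred_dis, dim=1))
    return feat_disc, pred_disc

# One possible combined objective (sign conventions are illustrative):
# loss = task_loss - lambda_f * feat_disc + lambda_p * pred_disc
```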
Domain adaptive object detection (DAOD) aims to alleviate transfer performance degradation caused by the cross-domain discrepancy. However, most existing DAOD methods are dominated by computationally intensive two-stage detectors, which are not the first choice for industrial applications. In this paper, we propose a novel semi-supervised domain adaptive YOLO (SSDA-YOLO) based method to improve cross-domain detection performance by integrating the compact one-stage detector YOLOv5 with domain adaptation. Specifically, we adapt the knowledge distillation framework with the Mean Teacher model to assist the student model in obtaining instance-level features of the unlabeled target domain. We also utilize the scene style transfer to cross-generate pseudo images in different domains for remedying image-level differences. In addition, an intuitive consistency loss is proposed to further align cross-domain predictions. We evaluate our proposed SSDA-YOLO on public benchmarks including PascalVOC, Clipart1k, Cityscapes, and Foggy Cityscapes. Moreover, to verify its generalization, we conduct experiments on yawning detection datasets collected from various classrooms. The results show considerable improvements of our method in these DAOD tasks. Our code is available on \url{https://github.com/hnuzhy/SSDA-YOLO}.
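For reference, the snippet below sketches the generic Mean Teacher machinery the abstract relies on: an EMA weight update for the teacher and a simple probability-consistency term. SSDA-YOLO's own consistency loss additionally aligns predictions across style-transferred images, so treat this as an assumption-level sketch rather than the paper's exact loss.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """Mean Teacher update: the teacher's weights track an exponential moving
    average of the student's weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def consistency_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Mean-squared consistency between student and teacher class probabilities
    on (unlabeled) target images."""
    p_s = torch.softmax(student_logits, dim=1)
    p_t = torch.softmax(teacher_logits, dim=1).detach()   # no gradient into the teacher
    return torch.mean((p_s - p_t) ** 2)
```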
Domain adaptive detection aims to improve the generalization of detectors on target domain. To reduce discrepancy in feature distributions between two domains, recent approaches achieve domain adaptation through feature alignment in different granularities via adversarial learning. However, they neglect the relationship between multiple granularities and different features in alignment, degrading detection. Addressing this, we introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning. The key is to encode the dependencies across different granularities including pixel-, instance-, and category-levels simultaneously to align two domains. Specifically, based on pixel-level features, we first develop an omni-scale gated fusion (OSGF) module to aggregate discriminative representations of instances with scale-aware convolutions, leading to robust multi-scale detection. Besides, we introduce multi-granularity discriminators to identify where, either source or target domains, different granularities of samples come from. Note that, MGA not only leverages instance discriminability in different categories but also exploits category consistency between two domains for detection. Furthermore, we present an adaptive exponential moving average (AEMA) strategy that explores model assessments for model update to improve pseudo labels and alleviate local misalignment problem, boosting detection robustness. Extensive experiments on multiple domain adaptation scenarios validate the superiority of MGA over other approaches on FCOS and Faster R-CNN detectors. Code will be released at https://github.com/tiankongzhang/MGA.
We propose to harness the potential of simulation for semantic segmentation of real-world self-driving scenes in a domain generalization fashion. The segmentation network is trained without any data from the target domain and tested on unseen target domains. To this end, we propose a new approach of domain randomization and pyramid consistency to learn a model with high generalizability. First, we propose to randomize the synthetic images with the styles of real images in terms of visual appearance using auxiliary datasets, in order to effectively learn domain-invariant representations. Second, we further enforce pyramid consistency across different 'stylized' images and within an image, in order to learn domain-invariant and scale-invariant features, respectively. Extensive experiments are conducted on the generalization from GTA and SYNTHIA to Cityscapes, BDD, and Mapillary, and our method achieves superior results over state-of-the-art techniques. Remarkably, our generalization results are on par with, or even better than, those obtained by state-of-the-art simulation-to-real domain adaptation methods, which have access to target domain data at training time.
Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch will lead to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach based on the recent state-of-the-art Faster R-CNN model, and design two domain adaptation components, on image level and instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory, and are implemented by learning a domain classifier in adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach using multiple datasets including Cityscapes, KITTI, SIM10K, etc. The results demonstrate the effectiveness of our proposed approach for robust object detection in various domain shift scenarios.
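Adversarial domain classifiers of this kind are commonly implemented with a gradient reversal layer; the sketch below shows one such image-level classifier in that spirit, with the layer sizes and the `lam` coefficient being illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward
    pass, so the feature extractor is trained to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainClassifier(nn.Module):
    """Small image-level domain classifier operating on pooled backbone features."""
    def __init__(self, in_dim: int, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.net(GradReverse.apply(feat, self.lam))  # logit: source vs. target

# Trained with binary cross-entropy against domain labels, e.g.:
# loss_d = nn.functional.binary_cross_entropy_with_logits(domain_logit, domain_label)
```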
We consider the problem of unsupervised domain adaptation in semantic segmentation. A key in this campaign consists in reducing the domain shift, i.e., enforcing the data distributions of the two domains to be similar. One of the common strategies is to align the marginal distribution in the feature space through adversarial learning. However, this global alignment strategy does not consider the category-level joint distribution. A possible consequence of such global movement is that some categories which are originally well aligned between the source and target may be incorrectly mapped, thus leading to worse segmentation results in target domain. To address this problem, we introduce a category-level adversarial network, aiming to enforce local semantic consistency during the trend of global alignment. Our idea is to take a close look at the category-level joint distribution and align each class with an adaptive adversarial loss. Specifically, we reduce the weight of the adversarial loss for category-level aligned features while increasing the adversarial force for those poorly aligned. In this process, we decide how well a feature is category-level aligned between source and target by a co-training approach. In two domain adaptation tasks, i.e., GTA5 → Cityscapes and SYNTHIA → Cityscapes, we validate that the proposed method matches the state of the art in segmentation accuracy.
Universal domain adaptive object detection (UniDAOD) is more challenging than domain adaptive object detection (DAOD), since the label space of the source domain may not be identical to that of the target domain, and the scale of objects may vary dramatically in the universal scenario (i.e., category shift and scale shift). To this end, we propose US-DAF, a universal scale-aware domain adaptive Faster R-CNN with multi-label learning, to reduce the negative transfer effect during training while maximizing transferability as well as discriminability in both domains under a variety of scales. Specifically, our method is implemented by two modules: 1) We facilitate the feature alignment of common classes and suppress the interference of private classes by designing a filter mechanism module to overcome the negative transfer caused by category shift. 2) We fill the blank of scale-aware adaptation in object detection by introducing a new multi-label scale-aware adapter to perform individual alignment between the corresponding scales of the two domains. Experiments show that US-DAF achieves state-of-the-art results in three scenarios (i.e., open-set, partial-set, and closed-set), and gains relative improvements of 7.1% and 5.9% on the benchmark datasets Clipart1k and Watercolor, respectively.
In this work, we attempt to address unsupervised domain adaptation by devising simple and compact conditional domain adversarial training methods. We first revisit the simple concatenation conditioning strategy, where features are concatenated with the output predictions as the input to the discriminator. We find that this concatenation strategy suffers from weak conditioning strength. We further demonstrate that enlarging the norm of the concatenated predictions can effectively energize conditional domain alignment. We therefore improve concatenation conditioning by normalizing the output predictions to have the same norm as the features, and term the derived method the Normalized Output Conditioner (NOUN). However, by conditioning domain alignment on the raw output predictions, NOUN suffers from inaccurate predictions on the target domain. To this end, we propose to condition the cross-domain feature alignment in the prototype space rather than in the output space. Combining the novel prototype-based conditioning with NOUN, we term the enhanced method the Prototype-based Normalized Output Conditioner (PRONOUN). Experiments on object recognition and semantic segmentation show that NOUN can effectively align the multi-modal structures across domains and even outperforms state-of-the-art domain adversarial training methods. Together with prototype-based conditioning, PRONOUN further improves the adaptation performance of NOUN on multiple object recognition benchmarks for UDA.
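A hedged sketch of the normalized output conditioning idea follows: each sample's softmax prediction is rescaled to match the norm of its feature before being concatenated as the discriminator input. The exact rescaling rule is an assumption inferred from the abstract.

```python
import torch

def noun_conditioning(features: torch.Tensor, logits: torch.Tensor) -> torch.Tensor:
    """Rescale the softmax prediction of each sample so its L2 norm matches that of
    the corresponding feature, then concatenate both as the discriminator input.
    features: [n, d], logits: [n, K]."""
    probs = torch.softmax(logits, dim=1)
    feat_norm = features.norm(p=2, dim=1, keepdim=True)            # [n, 1]
    prob_norm = probs.norm(p=2, dim=1, keepdim=True).clamp_min(1e-6)
    scaled_probs = probs * (feat_norm / prob_norm)                 # same norm as features
    return torch.cat([features, scaled_probs], dim=1)              # discriminator input

# Usage sketch:
# disc_logit = domain_discriminator(noun_conditioning(feat, cls_logits))
```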
While there have been many unsupervised domain adaptation (UDA) algorithms in recent years, i.e., algorithms using only labeled data from a source domain, most algorithms and theoretical results have focused on single-source unsupervised domain adaptation (SUDA). In practical scenarios, however, labeled data can typically be collected from multiple diverse sources, which may differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep learning based multi-source unsupervised domain adaptation (MUDA) algorithms focus on extracting a common domain-invariant representation for all domains by aligning the distributions of all source and target domains in a common feature space. However, it is often very hard to extract the same domain-invariant representation for all domains in MUDA. In addition, these methods match distributions without considering the domain-specific decision boundaries between classes. To address these problems, we propose a new framework with two alignment stages for MUDA, which not only aligns the distributions of each pair of source and target domains, but also aligns the decision boundaries by utilizing the outputs of the domain-specific classifiers. Extensive experiments show that our method achieves remarkable results on popular benchmark datasets for image classification.
While a massive amount of unlabeled data is generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on large amounts of labeled training data to achieve high performance. Unfortunately, this requirement cannot be met in real-world applications. The number of labels is limited, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we give an overview of state-of-the-art methods for different categories of UDA, covering both traditional and deep learning based methods. Finally, we collect frequently used benchmark datasets and report the results of state-of-the-art UDA methods on visual recognition problems.