Unsupervised Domain Adaptation (UDA) makes predictions for the target domain data while manual annotations are only available in the source domain. Previous methods minimize the domain discrepancy while neglecting class information, which may lead to misalignment and poor generalization performance. To address this issue, this paper proposes the Contrastive Adaptation Network (CAN), which optimizes a new metric that explicitly models both the intra-class and the inter-class domain discrepancy. We design an alternating update strategy to train CAN in an end-to-end manner. Experiments on two real-world benchmarks, Office-31 and VisDA-2017, demonstrate that CAN performs favorably against the state-of-the-art methods and produces more discriminative features.
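A minimal NumPy sketch of the class-aware discrepancy CAN optimizes (function names, the RBF bandwidth, and the averaging scheme are illustrative assumptions, not the paper's implementation): the MMD between same-class source/target samples is minimized, while the MMD between different-class pairs is maximized.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # pairwise squared distances -> RBF kernel matrix
    d = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * sigma ** 2))

def mmd(xs, xt, sigma=1.0):
    # standard (biased) MMD^2 estimator between two sample sets
    return (gaussian_kernel(xs, xs, sigma).mean()
            + gaussian_kernel(xt, xt, sigma).mean()
            - 2.0 * gaussian_kernel(xs, xt, sigma).mean())

def contrastive_domain_discrepancy(fs, ys, ft, yt, num_classes):
    """Intra-class MMD (to minimize) minus inter-class MMD (to maximize).
    In practice yt would be pseudo-labels, since the target is unlabeled."""
    intra, inter, n_intra, n_inter = 0.0, 0.0, 0, 0
    for cs in range(num_classes):
        for ct in range(num_classes):
            s, t = fs[ys == cs], ft[yt == ct]
            if len(s) == 0 or len(t) == 0:
                continue
            if cs == ct:
                intra += mmd(s, t); n_intra += 1
            else:
                inter += mmd(s, t); n_inter += 1
    return intra / max(n_intra, 1) - inter / max(n_inter, 1)
```

Lower values indicate that same-class features are pulled together across domains while different classes stay apart.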
In image classification, it is often expensive and time-consuming to acquire sufficient labels. To address this problem, domain adaptation often provides an attractive option, given a large amount of labeled data from a domain of similar nature but different distribution. Existing methods mainly align the distributions of representations extracted by a single structure, and such representations may contain only partial information, e.g., only the saturation, brightness, and hue. Along this line, we propose multi-representation adaptation, which can dramatically improve cross-domain image classification accuracy and specifically aims to align the distributions of multiple representations extracted by a hybrid structure named the Inception Adaptation Module (IAM). Based on this, we present the Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment, which can capture information from different aspects. In addition, we extend Maximum Mean Discrepancy (MMD) to compute the adaptation loss. Our approach can be easily implemented by extending most feed-forward models with IAM, and the network can be trained efficiently via back-propagation. Experiments on three benchmark image datasets demonstrate the effectiveness of MRAN. The code is available at https://github.com/easezyc/deep-transfer-learning.
Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. An adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent, with the gradients computed by back-propagation in linear time. Experiments testify that our model yields state-of-the-art results on standard datasets.
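A toy JMMD estimate might look as follows (an assumed, simplified form: the product of per-layer Gaussian kernels plugged into the usual MMD estimator; the paper additionally makes the criterion adversarially trainable, which is omitted here):

```python
import numpy as np

def gkernel(x, y, sigma=1.0):
    d = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * sigma ** 2))

def joint_mmd(src_layers, tgt_layers, sigma=1.0):
    """JMMD sketch: multiply kernel matrices across layers so samples must
    be similar in every layer's activations, then apply the MMD estimator."""
    n_s, n_t = len(src_layers[0]), len(tgt_layers[0])
    Kss, Ktt, Kst = np.ones((n_s, n_s)), np.ones((n_t, n_t)), np.ones((n_s, n_t))
    for s, t in zip(src_layers, tgt_layers):
        Kss *= gkernel(s, s, sigma)
        Ktt *= gkernel(t, t, sigma)
        Kst *= gkernel(s, t, sigma)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()
```

Each element of `src_layers`/`tgt_layers` is an `(n, d_l)` activation matrix for one of the domain-specific layers.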
Although many unsupervised domain adaptation (UDA) algorithms, i.e., algorithms with labeled data only from a source domain, have been proposed in recent years, most of the algorithms and theoretical results focus on single-source unsupervised domain adaptation (SUDA). In practical scenarios, however, labeled data can usually be collected from multiple diverse sources, and they may differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep-learning-based multi-source unsupervised domain adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all source and target domains in a common feature space. However, it is often hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering the domain-specific decision boundaries between classes. To solve these problems, we propose a new framework for MUDA with two alignment stages, which not only aligns the distributions of each pair of source and target domains but also aligns the decision boundaries by utilizing the outputs of the domain-specific classifiers. Extensive experiments show that our method can achieve remarkable results on popular benchmark datasets for image classification.
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different, well-labeled source domain to a new unlabeled target domain. Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns. This paper aims to tackle a realistic setting with only a trained classification model available, instead of access to the source data. To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns the feature extraction module for the target domain by fitting the target data features to the frozen source classification module (representing the classification hypothesis). Specifically, SHOT exploits both information maximization and self-supervised learning for the feature extraction module to ensure that the target features are implicitly aligned with the features of unseen source data via the same hypothesis. Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of the predictions (labeling information), and then employs semi-supervised learning to improve the accuracy of the less confident predictions in the target domain. We denote labeling transfer as SHOT++ if the predictions are obtained by SHOT. Extensive experiments on both digit classification and object recognition tasks show that SHOT and SHOT++ achieve results surpassing or comparable to the state of the art, demonstrating the effectiveness of our approach for various visual domain adaptation problems. The code is available at https://github.com/tim-learn/shot-plus.
The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax the shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We fuse the features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state-of-the-art methods on standard domain adaptation benchmarks.
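The residual-function assumption above can be illustrated with a tiny linear sketch (the class name, the linear form, and the zero initialization are illustrative assumptions; the paper learns the residual with extra deep layers):

```python
import numpy as np

class ResidualClassifier:
    """Sketch of a residual classifier: source predictions equal the target
    classifier plus a learned residual correction, fS(x) = fT(x) + df(x)."""
    def __init__(self, dim, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.Wt = rng.normal(0.0, 0.1, (dim, num_classes))  # target classifier
        self.Wr = np.zeros((dim, num_classes))              # residual, starts at zero

    def target_logits(self, x):
        return x @ self.Wt

    def source_logits(self, x):
        # the residual term only has to model the source-target gap
        return self.target_logits(x) + x @ self.Wr
```

With the residual at zero, source and target classifiers coincide; training on labeled source data then moves only the small correction, keeping the target classifier as the reference.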
Semi-supervised domain adaptation (SSDA) is a challenging problem that requires methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains. Unfortunately, a simple combination of domain adaptation (DA) and semi-supervised learning (SSL) methods often fails to address these two objectives, because the training data is biased towards the labeled samples. In this paper, we introduce an adaptive structure learning method to regularize the cooperation of SSL and DA. Inspired by multi-view learning, our proposed framework is composed of a shared feature encoder network and two classifier networks trained for contradictory purposes. Among them, one classifier is applied to group the target features to improve intra-class density, enlarging the gap between categorical clusters for robust representation learning. Meanwhile, the other classifier, serving as a regularizer, attempts to scatter the source features to enhance the smoothness of the decision boundary. The iterations of target clustering and source expansion make the target features well enclosed inside the dilated boundary of the corresponding source points. To jointly address cross-domain feature alignment and learning from partially labeled data, we apply maximum mean discrepancy (MMD) distance minimization and self-training (ST) to project the contradictory structures into a shared view for a reliable final decision. Experimental results on standard SSDA benchmarks, including DomainNet and Office-Home, demonstrate both the accuracy and the robustness of our method over state-of-the-art approaches.
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural networks to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and scales linearly via an unbiased estimate of the kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
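A minimal multi-kernel MMD along these lines (a sketch with fixed, uniformly weighted bandwidths; DAN's optimal multi-kernel selection learns the kernel weights instead):

```python
import numpy as np

def rbf(x, y, sigma):
    # RBF kernel matrix from pairwise squared distances
    d = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d / (2 * sigma ** 2))

def mk_mmd(xs, xt, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Multi-kernel MMD^2: average the MMD estimate over several RBF
    bandwidths, so the statistic is less sensitive to any single choice."""
    total = 0.0
    for s in sigmas:
        total += (rbf(xs, xs, s).mean()
                  + rbf(xt, xt, s).mean()
                  - 2.0 * rbf(xs, xt, s).mean())
    return total / len(sigmas)
```

During training, this quantity would be added to the classification loss for each task-specific layer and minimized jointly.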
While huge volumes of unlabeled data are generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, such a requirement cannot be met in real-world applications: the number of available labels is limited, and manually annotating data is expensive and time-consuming. It is therefore often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data, and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we overview the state-of-the-art methods for different categories of UDA, covering both traditional methods and deep-learning-based methods. Finally, we collect frequently used benchmark datasets and the reported results of state-of-the-art UDA methods on visual recognition problems.
Self-supervised learning (SSL) has recently become a favorite among feature learning methods, and it is therefore tempting for domain adaptation approaches to consider incorporating SSL. The intuition is to enforce instance-level consistency such that the predictor becomes invariant across domains. However, most existing SSL methods in the domain adaptation regime are usually treated as standalone auxiliary components, leaving the signatures of domain adaptation unattended. In fact, the optimal region where the domain gap vanishes and the instance-level constraint that SSL pursues may not coincide at all. From this point of view, we present a particular paradigm of self-supervised learning tailored for domain adaptation, namely Transferrable Contrastive Learning (TCL), which links SSL and the desired cross-domain transferability congruently. We find that contrastive learning is intrinsically a suitable candidate for domain adaptation, as its instance invariance assumption can be conveniently promoted to the cross-domain class-level invariance favored by domain adaptation tasks. Based on a particular memory bank construction and a pseudo-label strategy, TCL then penalizes the cross-domain intra-class discrepancy between the source and the target through a clean and novel contrastive loss. A free lunch is that, thanks to the incorporation of contrastive learning, TCL relies on a moving-averaged key encoder that naturally yields temporally ensembled pseudo-labels for the target data, avoiding pseudo-label noise at no extra cost. TCL therefore efficiently reduces the cross-domain gap. Through extensive experiments on benchmarks (Office-Home, VisDA-2017, Digit-Five, PACS, and DomainNet) for both single-source and multi-source domain adaptation tasks, TCL has demonstrated state-of-the-art performance.
Unsupervised domain adaptation (UDA) aims to align a labeled source distribution with an unlabeled target distribution to obtain a domain-invariant predictive model. However, well-known UDA approaches do not fully generalize to the semi-supervised domain adaptation (SSDA) scenario, where a few labeled samples from the target domain are available. In this paper, we propose a simple Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge both the intra-domain gap between the labeled and unlabeled target distributions and the inter-domain gap between the source and unlabeled target distributions in SSDA. We employ class-wise contrastive learning to reduce the inter-domain gap, and instance-level contrastive alignment between original (input) and strongly augmented unlabeled target images to minimize the intra-domain discrepancy. We show empirically that these two modules complement each other to achieve superior performance. Experiments on three well-known domain adaptation benchmark datasets, namely DomainNet, Office-Home, and Office-31, demonstrate the effectiveness of our method. CLDA achieves state-of-the-art results on all of these datasets.
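A rough sketch of class-wise contrastive alignment in this spirit (the function names, centroid construction, and temperature are assumptions, not CLDA's exact formulation): same-class source/target centroids form positive pairs, while all other class pairs act as negatives in an InfoNCE-style loss.

```python
import numpy as np

def centroids(f, y, num_classes):
    # mean feature per class
    return np.stack([f[y == c].mean(axis=0) for c in range(num_classes)])

def classwise_contrastive_loss(fs, ys, ft, yt, num_classes, tau=0.1):
    cs = centroids(fs, ys, num_classes)
    ct = centroids(ft, yt, num_classes)
    cs = cs / np.linalg.norm(cs, axis=1, keepdims=True)
    ct = ct / np.linalg.norm(ct, axis=1, keepdims=True)
    sim = cs @ ct.T / tau                           # (C, C) scaled cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # InfoNCE over classes: the diagonal entries are the positive pairs
    return -np.mean(np.diag(log_prob))
```

For the unlabeled target, `yt` would be pseudo-labels; the loss is small exactly when same-class centroids from the two domains are each other's nearest neighbors.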
Most existing multi-source domain adaptation (MSDA) methods minimize the distance between multiple source-target domain pairs via feature distribution alignment, an approach borrowed from the single-source setting. However, with diverse source domains, aligning pairwise feature distributions is challenging and can even be counterproductive for MSDA. In this paper, we introduce a novel approach: transferable attribute learning. The motivation is simple: although different domains can have drastically different visual appearances, they contain the same set of classes characterized by the same set of attributes; an MSDA model should therefore focus on learning the most transferable attributes for the target domain. Following this approach, we propose a domain attention consistency network, dubbed DAC-Net. The key design is a feature channel attention module that aims to identify the transferable features (attributes). Importantly, the attention module is supervised by a consistency loss imposed on the distributions of channel attention weights between the source and target domains. Moreover, to facilitate discriminative feature learning on the target data, we combine pseudo-labeling with a class compactness loss to minimize the distance between target features and the classifier's weight vectors. Extensive experiments on three MSDA benchmarks show that our DAC-Net achieves new state-of-the-art performance on all of them.
Domain adaptation methods reduce domain shift typically by learning domain-invariant features. Most existing methods are built on distribution matching, e.g., adversarial domain adaptation, which tends to corrupt feature discriminability. In this paper, we propose Discriminative Radial Domain Adaptation (DRDA), which bridges source and target domains via a shared radial structure. It is motivated by the observation that, as the model is trained to be progressively discriminative, features of different categories expand outwards in different directions, forming a radial structure. We show that transferring such an inherently discriminative structure enables enhancing feature transferability and discriminability simultaneously. Specifically, we represent each domain with a global anchor and each category with a local anchor to form a radial structure, and reduce domain shift via structure matching. This consists of two parts, namely an isometric transformation to align the structure globally and local refinement to match each category. To enhance the discriminability of the structure, we further encourage samples to cluster close to their corresponding local anchors based on an optimal-transport assignment. In extensive experiments on multiple benchmarks, our method is shown to consistently outperform state-of-the-art approaches on varied tasks, including typical unsupervised domain adaptation, multi-source domain adaptation, domain-agnostic learning, and domain generalization.
We propose a novel unsupervised domain adaptation framework based on domain-specific batch normalization in deep neural networks. We aim to adapt to both domains by specializing the batch normalization layers in convolutional neural networks while allowing them to share all other model parameters, which is realized by a two-stage algorithm. In the first stage, we estimate pseudo-labels for the examples in the target domain using an external unsupervised domain adaptation algorithm (for example, MSTN [27] or CPUA [14]) integrating the proposed domain-specific batch normalization. The second stage learns the final models using a multi-task classification loss for the source and target domains. Note that the two domains have separate batch normalization layers in both stages. Our framework can be easily incorporated into domain adaptation techniques based on deep neural networks with batch normalization layers. We also show that our approach can be extended to the problem with multiple source domains. The proposed algorithm is evaluated on multiple benchmark datasets and achieves state-of-the-art accuracy in both the standard setting and the multi-source domain adaptation scenario.
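The core idea, separate batch-normalization statistics and affine parameters per domain with everything else shared, can be sketched as follows (training-mode batch statistics only; running averages and the surrounding network are omitted):

```python
import numpy as np

class DomainSpecificBN:
    """One set of affine parameters per domain; the layer normalizes each
    batch with its own statistics, so each domain is whitened separately."""
    def __init__(self, num_features, num_domains=2, eps=1e-5):
        self.gamma = np.ones((num_domains, num_features))  # per-domain scale
        self.beta = np.zeros((num_domains, num_features))  # per-domain shift
        self.eps = eps

    def __call__(self, x, domain):
        mu = x.mean(axis=0)
        var = x.var(axis=0)
        xhat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma[domain] * xhat + self.beta[domain]
```

A source batch would be passed with `domain=0` and a target batch with `domain=1`, while convolutional and fully connected weights stay shared.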
Despite recent progress in improving the performance of misinformation detection systems, classifying misinformation in unseen domains remains an elusive challenge. To address this issue, a common approach is to introduce a domain critic and encourage domain-invariant input features. However, early misinformation often exhibits both conditional and label shifts with respect to existing misinformation data (e.g., class imbalance in COVID-19 datasets), rendering such methods less effective for detecting early misinformation. In this paper, we propose a Contrastive Adaptation Network for early Misinformation Detection (CANMD). Specifically, we leverage pseudo-labeling to generate high-confidence target examples for joint training with the source data. We additionally design a label correction component to estimate and correct the label shift (i.e., the class priors) between the source and target domains. Moreover, a contrastive adaptation loss is integrated into the objective function to reduce the intra-class discrepancy and enlarge the inter-class discrepancy. As a result, the adapted model learns corrected class priors and a conditional distribution that is invariant across the two domains, improving the estimation of the target data distribution. To demonstrate the effectiveness of the proposed CANMD, we study the case of COVID-19 early misinformation detection and perform extensive experiments with multiple real-world datasets. The results show that CANMD can effectively adapt misinformation detection systems to the unseen COVID-19 target domain, with significant improvements over state-of-the-art baselines.
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data while labeled data is only available for a different source domain. Unfortunately, commonly used classification methods cannot adequately fulfill this task due to the domain gap between the source and target data. In this paper, we propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space. We show that our proposed uncertainty measure correlates with other common uncertainty quantifications and relates to smoothing the classifier's decision boundary, thereby improving its generalization capability. We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results. Code for our method is available at https://gitlab.com/tringwald/cvp.
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain-invariant feature representations, while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain-invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted the domain critic, to estimate the empirical Wasserstein distance between the source and target samples, and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain-invariant representation learning approaches.
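A bare-bones version of the adversarial estimate (an assumed sketch with a linear critic and weight clipping for the Lipschitz constraint; WDGRL itself uses a neural critic with a gradient penalty):

```python
import numpy as np

def train_critic(xs, xt, steps=100, lr=0.05, clip=1.0):
    """Train a linear critic f(x) = x @ w to maximize E_s[f] - E_t[f],
    clipping weights to keep f (roughly) 1-Lipschitz, WGAN-style.
    For a linear critic the gradient is constant, so the loop only
    illustrates the iterative adversarial update."""
    w = np.zeros(xs.shape[1])
    for _ in range(steps):
        grad = xs.mean(axis=0) - xt.mean(axis=0)  # d/dw of the objective
        w = np.clip(w + lr * grad, -clip, clip)
    return w

def wasserstein_estimate(xs, xt, w):
    # empirical Wasserstein-1 surrogate the feature extractor would minimize
    return (xs @ w).mean() - (xt @ w).mean()
```

In the full method, critic updates alternate with feature extractor updates that minimize this estimate, shrinking the distance between source and target feature distributions.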
Previous unsupervised domain adaptation (UDA) methods aim to promote target learning via one-directional knowledge transfer from a label-rich source domain to an unlabeled target domain, while the reverse adaptation from target to source has so far not been jointly considered. In fact, in some real teaching practice, a teacher helps students learn while also gaining promotion from the students to some extent, which inspires us to explore bidirectional knowledge transfer between domains; we therefore propose a Dual-Correction Adaptation Network (DualCAN) in this paper. However, due to the asymmetric label knowledge across domains, transfer from the unlabeled target to the labeled source is more difficult than the common source-to-target counterpart. First, the target pseudo-labels predicted by the source usually involve noise caused by model bias; consequently, in the reverse adaptation they may harm the source performance and bring negative target-to-source transfer. Second, the source domain usually contains innate noise, which will inevitably aggravate the target noise, leading to noise amplification across the domains. To this end, we further introduce a Noise Identification and Correction (NIC) module to correct and recycle the noise in both domains. To the best of our knowledge, this is the first attempt at bidirectional adaptation for noisy UDA, and it is naturally applicable to noise-free UDA as well. A theoretical justification is given to state the rationality of our intuition. Empirical results confirm the effectiveness of DualCAN, with remarkable performance gains over the state of the art, especially for extremely noisy tasks (e.g., Pw→Pr and Pr→Rw on Office-Home).
Unsupervised domain adaptation (UDA) has been successfully applied to transfer knowledge from a labeled source domain to target domains without their labels. The recently introduced transferable prototypical network (TPN) further addresses class-wise conditional alignment. In TPN, while the closeness of class centers between the source and target domains is explicitly enforced in a latent space, the underlying fine-grained subtype structure and the cross-domain within-class compactness have not been fully investigated. To counter this, we propose a new approach that adaptively performs fine-grained subtype-aware alignment to improve performance in the target domain, without requiring subtype labels in either domain. The insight of our approach is that, due to different conditional and label shifts, the unlabeled subtypes within a class exhibit local proximity within each subtype while showing divergent characteristics across subtypes. Specifically, we propose to simultaneously enforce subtype-wise compactness and class-wise separation by utilizing intermediate pseudo-labels. In addition, we systematically investigate various scenarios with and without prior knowledge of the number of subtypes, and propose to exploit the underlying subtype structure. Furthermore, a dynamic queue framework is developed to evolve the subtype cluster centroids steadily using an alternating processing scheme. Experimental results on multi-view congenital heart disease data and on VisDA and DomainNet show the effectiveness and validity of our subtype-aware UDA compared with state-of-the-art UDA methods.
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require access to the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available, and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named Source HypOthesis Transfer (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results on multiple domain adaptation benchmarks.
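The information-maximization part can be sketched as two entropy terms (a simplified form under our own naming; constants and SHOT's pseudo-labeling loss are omitted): per-sample prediction entropy is minimized so each target prediction is confident, while the entropy of the mean prediction is maximized so the batch stays diverse rather than collapsing to one class.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def information_maximization_loss(logits, eps=1e-8):
    p = softmax(logits)
    # per-sample entropy: low when each prediction is confident (minimize)
    ent = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    # entropy of the mean prediction: high when the batch covers many classes
    pbar = p.mean(axis=0)
    div = -np.sum(pbar * np.log(pbar + eps))
    return ent - div
```

Minimizing this loss on unlabeled target batches pushes the frozen source hypothesis to produce confident yet diverse target predictions.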