Domain adaptation aims at generalizing a high-performance learner on a target domain by utilizing the knowledge distilled from a source domain that has a different but related data distribution. One solution to domain adaptation is to learn domain-invariant feature representations, while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain-invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain-invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted the domain critic, to estimate the empirical Wasserstein distance between the source and target samples, and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of the Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that the proposed WDGRL outperforms state-of-the-art domain-invariant representation learning approaches.
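For illustration, a minimal PyTorch sketch of the adversarial objective described above follows; the network shapes, the penalty coefficient, and all names are assumptions for exposition, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative networks; the paper's architectures vary by task.
feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
domain_critic = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def wasserstein_estimate(h_s, h_t):
    # Empirical Wasserstein distance: gap between mean critic scores
    # on source and target feature batches.
    return domain_critic(h_s).mean() - domain_critic(h_t).mean()

def gradient_penalty(h_s, h_t, lam=10.0):
    # WGAN-GP-style penalty keeping the critic approximately 1-Lipschitz.
    alpha = torch.rand(h_s.size(0), 1, device=h_s.device)
    interp = (alpha * h_s + (1 - alpha) * h_t).requires_grad_(True)
    grads = torch.autograd.grad(domain_critic(interp).sum(), interp,
                                create_graph=True)[0]
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Training alternates two steps: the critic is trained to maximize
# wasserstein_estimate(h_s, h_t) minus the penalty; the feature extractor
# is then trained to minimize the same estimate (the adversarial step).
```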
While huge volumes of unlabeled data are generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, such a requirement cannot be met in real-world applications: labels are limited, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain-shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we overview the state-of-the-art methods for different categories of UDA, from traditional methods to deep-learning-based methods. Finally, we collect frequently used benchmark datasets and the reported results of state-of-the-art UDA methods on visual recognition problems.
While unsupervised domain adaptation (UDA) algorithms, i.e., those using only labeled data from a source domain, have been actively studied in recent years, most algorithms and theoretical results focus on single-source unsupervised domain adaptation (SUDA). However, in practical scenarios, labeled data can typically be collected from multiple diverse sources, and they may differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep-learning-based multi-source unsupervised domain adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering the domain-specific decision boundaries between classes. To solve these problems, we propose a new framework for MUDA with two alignment stages, which not only aligns the distributions of each pair of source and target domains but also aligns the decision boundaries by utilizing the outputs of the domain-specific classifiers. Extensive experiments show that our method can achieve remarkable results on popular benchmark datasets for image classification.
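A minimal sketch of the second alignment stage, assuming two domain-specific classifiers and PyTorch (all names are illustrative): disagreement between the classifiers' softmax outputs on the same unlabeled target batch is penalized, which implicitly pulls their decision boundaries together.

```python
import torch
import torch.nn.functional as F

def classifier_discrepancy(logits_1, logits_2):
    # Mean absolute difference between two domain-specific classifiers'
    # softmax outputs on the same unlabeled target batch; minimizing it
    # aligns the classifiers' decision boundaries near the target data.
    return torch.mean(torch.abs(F.softmax(logits_1, dim=1) -
                                F.softmax(logits_2, dim=1)))
```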
Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys of shallow domain adaptation, but few timely reviews of the emerging deep-learning-based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of the data that define how the two domains diverge. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and briefly analyze and compare the state-of-the-art methods in these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation, and object detection. Fourth, we highlight some potential deficiencies of current methods and several future directions.
This paper addresses the problem of unsupervised domain adaptation from theoretical and algorithmic perspectives. Existing domain adaptation theories naturally imply minimax optimization algorithms, which connect well with the domain adaptation methods based on adversarial learning. However, several disconnections still exist and form the gap between theory and algorithm. We extend previous theories (Mansour et al., 2009c; Ben-David et al., 2010) to multiclass classification in domain adaptation, where classifiers based on scoring functions and margin loss are standard choices in algorithm design. We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to the comparison of distributions with an asymmetric margin loss, and to minimax optimization for easier training. Our theory can be seamlessly transformed into an adversarial learning algorithm for domain adaptation, successfully bridging the gap between theory and algorithm. A series of empirical studies shows that our algorithm achieves state-of-the-art accuracies on challenging domain adaptation tasks.
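A hedged PyTorch sketch of one common form of the resulting adversarial term: an auxiliary classifier is trained to agree with the main classifier on source features and to disagree on target features, with an asymmetric margin factor. The names, the margin value, and the exact target-side term are assumptions, not the paper's verbatim formulation.

```python
import torch
import torch.nn.functional as F

def mdd_loss(logits_s, logits_s_adv, logits_t, logits_t_adv, gamma=4.0):
    # Pseudo-labels from the main classifier on each domain.
    pred_s = logits_s.argmax(dim=1).detach()
    pred_t = logits_t.argmax(dim=1).detach()
    # Source term: the auxiliary classifier should match the main prediction.
    loss_s = F.cross_entropy(logits_s_adv, pred_s)
    # Target term: push the auxiliary classifier away from the main prediction.
    prob_t = F.softmax(logits_t_adv, dim=1)
    loss_t = -torch.log(1.0 - prob_t.gather(1, pred_t.unsqueeze(1)) + 1e-6).mean()
    # The feature extractor is trained against this loss (e.g., via gradient
    # reversal, not shown); gamma encodes the asymmetric margin.
    return gamma * loss_s + loss_t
```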
Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. An adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent, with the gradients computed by back-propagation in linear time. Experiments testify that our model yields state-of-the-art results on standard datasets.
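A minimal sketch of the JMMD computation under simplifying assumptions (fixed Gaussian kernels, a short list of domain-specific layers such as pooled features and softmax predictions): the joint kernel is the element-wise product of the per-layer kernels, and JMMD is an MMD computed with that joint kernel.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Gram matrix of a Gaussian kernel between two batches.
    return torch.exp(-torch.cdist(x, y) ** 2 / (2 * sigma ** 2))

def jmmd(layers_s, layers_t, sigmas):
    # layers_s / layers_t: lists of per-layer activations for the source and
    # target batches; the joint kernel multiplies the per-layer Gram matrices.
    k_ss = k_tt = k_st = 1.0
    for f_s, f_t, sigma in zip(layers_s, layers_t, sigmas):
        k_ss = k_ss * gaussian_kernel(f_s, f_s, sigma)
        k_tt = k_tt * gaussian_kernel(f_t, f_t, sigma)
        k_st = k_st * gaussian_kernel(f_s, f_t, sigma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()
```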
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. To address this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain, with the goal of performing well at test time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially costly target data labels. This survey compares these approaches by examining alternative methods, their unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
Due to the ability of deep neural nets to learn rich representations, recent advances in unsupervised domain adaptation have focused on learning domain-invariant features that achieve a small error on the source domain. The hope is that the learnt representation, together with the hypothesis learnt from the source domain, can generalize to the target domain. In this paper, we first construct a simple counterexample showing that, contrary to common belief, the above conditions are not sufficient to guarantee successful domain adaptation. In particular, the counterexample exhibits conditional shift: the class-conditional distributions of input features change between source and target domains. To give a sufficient condition for domain adaptation, we propose a natural and interpretable generalization upper bound that explicitly takes into account the aforementioned shift. Moreover, we shed new light on the problem by proving an information-theoretic lower bound on the joint error of any domain adaptation method that attempts to learn invariant representations. Our result characterizes a fundamental tradeoff between learning invariant representations and achieving small joint error on both domains when the marginal label distributions differ from source to target. Finally, we conduct experiments on real-world datasets that corroborate our theoretical findings. We believe these insights are helpful in guiding the future design of domain adaptation and representation learning algorithms.
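For context, the classical target-error bound (Ben-David et al., 2010) from which such sufficient conditions are usually read off is:

```latex
\varepsilon_T(h) \;\le\; \varepsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda^{*},
\qquad
\lambda^{*} \;=\; \min_{h' \in \mathcal{H}} \big[ \varepsilon_S(h') + \varepsilon_T(h') \big].
```

Driving the source error and the divergence term to zero, as invariant-representation learning does, says nothing about the joint-error term λ*; the counterexample described above shows that under conditional shift λ* can remain large.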
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance transferability in the task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural networks to the domain adaptation scenario. In DAN, the hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space, where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees and can scale linearly via an unbiased estimate of the kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
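A minimal sketch of a multi-kernel MMD estimate between hidden representations: the fixed bandwidth list below stands in for DAN's learned kernel weights, and this quadratic V-statistic is the simple (biased) form, whereas DAN also employs a linear-time unbiased estimator.

```python
import torch

def multi_kernel(x, y, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    # Mixture of Gaussian kernels at several bandwidths (illustrative values).
    d2 = torch.cdist(x, y) ** 2
    return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)

def mmd2(h_s, h_t):
    # Squared MMD between source and target layer activations: the distance
    # between mean embeddings in the kernel-induced RKHS.
    return (multi_kernel(h_s, h_s).mean() + multi_kernel(h_t, h_t).mean()
            - 2 * multi_kernel(h_s, h_t).mean())
```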
Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large amount of target-domain data can be reduced when constructing target learners. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in the field. Due to the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize existing transfer learning research, and to summarize and interpret the mechanisms and strategies of transfer learning in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. Unlike previous surveys, this survey reviews more than forty representative transfer learning approaches, especially homogeneous transfer learning approaches, from the perspectives of data and model. The applications of transfer learning are also briefly introduced. To show the performance of different transfer learning models, over twenty representative models are used for experiments, performed on three different datasets: Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
All famous machine learning algorithms that constitute both supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be reconstructed from newly collected data, which for some applications may be costly or impossible to obtain. It therefore became necessary to develop approaches that reduce the need and the effort of obtaining new labeled samples by exploiting data available in related areas and using them further in similar fields. This has given rise to a new machine learning framework known as transfer learning: a learning setting inspired by the capability of human beings to extrapolate knowledge across tasks to learn more efficiently. Despite the large variety of transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning, called domain adaptation. In this sub-field, the data distribution is assumed to change across the training and the test data, while the learning task remains the same. We provide a first up-to-date description of existing results related to domain adaptation problems, covering learning bounds based on different statistical learning frameworks.
In image classification, it is often expensive and time-consuming to acquire sufficient labels. To solve this problem, domain adaptation often provides an attractive option given a large amount of labeled data from a domain of a similar nature but different from the target. Existing methods mainly align the distributions of representations extracted by a single structure, and the representations may contain only partial information, e.g., only part of the saturation, brightness, and hue information. Along this line, we propose Multi-Representation Adaptation, which can dramatically improve classification accuracy for cross-domain image classification and specifically aims to align the distributions of multiple representations extracted by a hybrid structure named the Inception Adaptation Module (IAM). Based on this, we present a Multi-Representation Adaptation Network (MRAN) to accomplish the cross-domain image classification task via multi-representation alignment, which can capture information from different aspects. In addition, we extend Maximum Mean Discrepancy (MMD) to compute the adaptation loss. Our approach can be easily implemented by extending most feed-forward models with IAM, and the network can be trained efficiently via back-propagation. Experiments conducted on three benchmark image datasets demonstrate the effectiveness of MRAN. The code has been made available at https://github.com/easezyc/deep-transfer-learning.
Conventional unsupervised domain adaptation (UDA) assumes that training data are sampled from a single domain. This neglects the more practical scenario where training data are collected from multiple sources, requiring multi-source domain adaptation. We make three major contributions towards addressing this problem. First, we collect and annotate by far the largest UDA dataset, called DomainNet, which contains six domains and about 0.6 million images distributed among 345 categories, addressing the gap in data availability for multi-source UDA research. Second, we propose a new deep learning approach, Moment Matching for Multi-Source Domain Adaptation (M³SDA), which aims to transfer knowledge learned from multiple labeled source domains to an unlabeled target domain by dynamically aligning moments of their feature distributions. Third, we provide new theoretical insights specifically for moment matching approaches in both single and multiple source domain adaptation. Extensive experiments are conducted to demonstrate the power of our new dataset in benchmarking state-of-the-art multi-source domain adaptation methods, as well as the advantage of our proposed model. Dataset and Code are available at http://ai.bu.edu/M3SDA/
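A simplified sketch of the moment-matching idea: pairwise alignment of the first k element-wise feature moments across all domains (sources and target). The fixed k = 2 and the plain pairwise sum are illustrative; the full M³SDA objective weights these terms dynamically.

```python
import torch

def moment_distance(feats, k=2):
    # feats: list of [batch, dim] feature tensors, one per domain.
    # Sums the L2 gaps of the first k element-wise moments over all
    # pairs of domains (source-source and source-target alike).
    loss = 0.0
    for order in range(1, k + 1):
        moments = [torch.mean(f ** order, dim=0) for f in feats]
        for i in range(len(moments)):
            for j in range(i + 1, len(moments)):
                loss = loss + torch.norm(moments[i] - moments[j], p=2)
    return loss
```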
The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax the shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into the deep network to explicitly learn the residual function with reference to the target classifier. We fuse the features of multiple layers with a tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state-of-the-art methods on standard domain adaptation benchmarks.
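A sketch of the residual-classifier coupling (the module names and the shape of the residual block are illustrative assumptions): the source classifier equals the target classifier plus a learned residual function, so the shared-classifier assumption is relaxed without decoupling the two entirely.

```python
import torch.nn as nn

class ResidualClassifier(nn.Module):
    def __init__(self, feature_dim, n_classes):
        super().__init__()
        self.target_clf = nn.Linear(feature_dim, n_classes)
        # Small residual block learning the source-target classifier gap.
        self.residual = nn.Sequential(nn.Linear(n_classes, n_classes),
                                      nn.ReLU(),
                                      nn.Linear(n_classes, n_classes))

    def forward(self, h):
        f_t = self.target_clf(h)          # target classifier f_T
        f_s = f_t + self.residual(f_t)    # source classifier f_S = f_T + Δf(f_T)
        return f_s, f_t
```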
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation. Existing adversarial domain adaptation methods may not effectively align the different domains of the multimodal distributions that are native to classification problems. In this paper, we present conditional adversarial domain adaptation, a principled framework that conditions the adversarial adaptation models on the discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks (CDANs) are designed with two novel conditioning strategies: multilinear conditioning, which captures the cross-covariance between feature representations and classifier predictions to improve discriminability, and entropy conditioning, which controls the uncertainty of classifier predictions to guarantee transferability. With theoretical guarantees and a few lines of code, the approach has exceeded state-of-the-art results on five datasets.
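A minimal sketch of the multilinear conditioning map (dimension handling only; CDAN also includes a randomized variant for high-dimensional products and the entropy-based reweighting mentioned above): the domain discriminator receives the flattened outer product of features and softmax predictions rather than the features alone.

```python
import torch

def multilinear_map(features, predictions):
    # features: [batch, d_f] extractor outputs;
    # predictions: [batch, d_c] classifier softmax outputs.
    # Returns [batch, d_f * d_c]: the flattened outer product f ⊗ g, which
    # exposes the feature-prediction cross-covariance to the discriminator.
    outer = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))
    return outer.view(features.size(0), -1)
```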
Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for the case when the target data have labels, including some that perform very well despite being "frustratingly easy" to implement. However, in practice, the target domain is often unlabeled, requiring unsupervised adaptation. We propose a simple, effective, and efficient method for unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Even though it is extraordinarily simple (it can be implemented in four lines of Matlab code), CORAL performs remarkably well in extensive evaluations on standard benchmark datasets. "Everything should be made as simple as possible, but not simpler."
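The four Matlab lines translate almost directly to Python; below is a NumPy/SciPy sketch (the identity term mirrors the regularization in the original pseudocode):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(Ds, Dt):
    # Ds: [n_s, d] source features; Dt: [n_t, d] target features.
    Cs = np.cov(Ds, rowvar=False) + np.eye(Ds.shape[1])  # source covariance
    Ct = np.cov(Dt, rowvar=False) + np.eye(Dt.shape[1])  # target covariance
    Ds = Ds @ fractional_matrix_power(Cs, -0.5)  # whiten source features
    Ds = Ds @ fractional_matrix_power(Ct, +0.5)  # re-color with target stats
    return Ds
```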
Adversarial learning strategies have demonstrated remarkable performance in dealing with single-source domain adaptation (DA) problems and have recently been applied to multi-source DA (MDA) problems. Although most existing MDA strategies rely on a multiple-domain-discriminator setting, the effect of this setting on the latent-space representation has been poorly understood. Here we adopt an information-theoretic approach to identify and resolve the potential adverse effects of multiple domain discriminators on MDA: disintegration of domain-discriminative information, limited computational scalability, and a large variance in the gradient of the loss during training. We examine the above issues by situating adversarial DA in the context of information regularization. This also provides a theoretical justification for using a single and unified domain discriminator. Based on this idea, we implement a novel neural architecture called the Multi-source Information-regularized Adaptation Network (MIAN). Large-scale experiments demonstrate that MIAN, despite its structural simplicity, reliably and significantly outperforms other state-of-the-art methods.
Deep learning has become the method of choice to tackle real-world problems in different domains, partly because of its ability to learn from data and achieve impressive performance on a wide variety of applications. However, its success usually relies on two assumptions: (i) vast amounts of labeled data are required for accurate model fitting, and (ii) the training and test data are independent and identically distributed. Its performance on unseen target domains is therefore not guaranteed, especially when encountering out-of-distribution data at the adaptation stage. The performance drop on data in a target domain is a critical problem in deploying deep neural networks that have been successfully trained on data in a source domain. Unsupervised domain adaptation (UDA) has been proposed to counter this by leveraging labeled source-domain data and unlabeled target-domain data to carry out various tasks in the target domain. UDA has yielded promising results in natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, and so on. In this review of this rapidly evolving topic, we provide a systematic comparison of UDA methods and applications. In addition, the connections of UDA with its closely related tasks, e.g., domain generalization and out-of-distribution detection, are discussed. Furthermore, deficiencies of current methods and possible promising directions are highlighted.
Semi-supervised domain adaptation (SSDA) is a challenging problem that requires methods to overcome both 1) overfitting towards poorly labeled data and 2) distribution shift across domains. Unfortunately, a simple combination of domain adaptation (DA) and semi-supervised learning (SSL) methods often fails to address these two objectives, because the training data are biased towards the labeled samples. In this paper, we introduce an adaptive structure learning method to regularize the cooperation of SSL and DA. Inspired by multi-view learning, our proposed framework is composed of a shared feature-encoder network and two classifier networks trained with contradictory purposes. Among them, one classifier is applied to group target features to improve intra-class density, enlarging the gap between categorical clusters for robust representation learning. Meanwhile, the other classifier acts as a scatterer, attempting to spread the source features to enhance the smoothness of the decision boundary. The iterations of target clustering and source expansion make the target features well enclosed inside the dilated boundaries of the corresponding source points. To jointly address cross-domain feature alignment and partially labeled data learning, we apply Maximum Mean Discrepancy (MMD) distance minimization and self-training (ST) to project the contradictory structures into a shared view for reliable final decisions. Experimental results on standard SSDA benchmarks, including DomainNet and Office-Home, demonstrate the accuracy and robustness of our method over state-of-the-art approaches.
We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for a descriptor learning task in the context of person re-identification.
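A standard PyTorch sketch of the gradient reversal layer described above: the identity in the forward pass, with the gradient negated (and scaled by a coefficient) on the way back, so that the feature extractor ascends the domain classifier's loss.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Negate (and scale) the gradient flowing to the feature extractor.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```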