Domain generalization aims to learn a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains. In this work, we consider a scenario where domain shifts occur among the conditional distributions of different classes across domains. Existing approaches are not sufficiently robust when the labeled samples in the source domains are limited. To address this problem, we propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG), inspired by the concept of distributionally robust optimization. We encourage robustness over the conditional distributions within class-specific Wasserstein uncertainty sets, and optimize the worst-case performance of a classifier over these uncertainty sets. We further develop a test-time adaptation module leveraging optimal transport to quantify the relationship between the unseen target domain and the source domains, enabling adaptive inference for target data. Experiments on the Rotated MNIST, PACS, and VLCS datasets demonstrate that our method can effectively balance robustness and discriminability in challenging generalization scenarios.
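To make the worst-case objective concrete, here is a schematic rendering of a distributionally robust objective over class-specific Wasserstein balls; the notation (class priors $\pi_c$, radii $\epsilon_c$, loss $\ell$) is illustrative and not necessarily the paper's exact formulation:

```latex
% Schematic DRO objective over class-conditional Wasserstein balls
% (illustrative notation, not the paper's exact formulation).
\min_{f} \; \max_{Q_1, \dots, Q_C} \;
  \sum_{c=1}^{C} \pi_c \,
  \mathbb{E}_{x \sim Q_c}\big[\ell(f(x), c)\big]
\qquad \text{s.t.} \quad
  W\big(Q_c, \widehat{P}_c\big) \le \epsilon_c, \quad c = 1, \dots, C
```

Here each $\widehat{P}_c$ is an empirical class-conditional distribution aggregated from the source domains, and the classifier $f$ is optimized against the worst admissible distribution $Q_c$ within Wasserstein distance $\epsilon_c$ of it.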
Domain generalization aims to learn a classification model from multiple source domains and generalize it to unseen target domains. A critical problem in domain generalization involves learning domain-invariant representations. Let X and Y denote the features and the labels, respectively. Under the assumption that the conditional distribution P(Y|X) remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation T(X) by minimizing the discrepancy of the marginal distribution P(T(X)). However, such an assumption of stable P(Y|X) does not necessarily hold in practice. In addition, the representation learning function T(X) is usually constrained to a simple linear transformation or shallow networks. To address the above two drawbacks, we propose an end-to-end conditional invariant deep domain generalization approach by leveraging deep neural networks for domain-invariant representation learning. The domain-invariance property is guaranteed through a conditional invariant adversarial network that can learn domain-invariant representations w.r.t. the joint distribution P(T(X), Y) if the target domain data are not severely class unbalanced. We perform various experiments to demonstrate the effectiveness of the proposed method.
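In schematic form (our notation, not the paper's), the contrast between the earlier marginal-alignment objective and the conditional invariance pursued here is:

```latex
% Marginal alignment (earlier work)  vs.  conditional alignment (this work),
% for any two source domains i and j:
P^{(i)}\big(T(X)\big) = P^{(j)}\big(T(X)\big)
\qquad \text{vs.} \qquad
P^{(i)}\big(T(X) \mid Y = y\big) = P^{(j)}\big(T(X) \mid Y = y\big)
\quad \forall\, y
```

Combined with (approximately) matched label priors across domains, the conditional condition yields invariance of the joint distribution $P(T(X), Y)$, which is why the adversarial network is conditioned on the class.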
Learning domain-invariant representations has become one of the most popular approaches to domain adaptation/generalization. In this paper, we show that invariant representations may be insufficient to guarantee good generalization once shifts in the label function are taken into account. Motivated by this, we first derive a new generalization upper bound on the empirical risk that explicitly accounts for label function shift. We then propose Domain-specific Risk Minimization (DRM), which models the distribution shift of each domain separately and selects the most appropriate model for the target domain. Extensive experiments on four popular domain generalization datasets (CMNIST, PACS, VLCS, and DomainNet) demonstrate the effectiveness of the proposed DRM, which offers the following advantages: 1) it significantly outperforms competitive baselines; 2) it achieves comparable or superior accuracy on all training domains compared with vanilla empirical risk minimization (ERM); 3) it remains simple and efficient during training; and 4) it is complementary to invariant-learning approaches.
Domain generalization (DG) aims to learn models whose performance remains high on unseen domains encountered at test time, by using data from multiple related source domains. Many existing DG algorithms reduce the divergence between source distributions in a representation space, thereby potentially aligning unseen domains close to the sources. This is motivated by analyses that bound the generalization error on an unseen domain in terms of its distributional distance (e.g., the Wasserstein distance) to the sources. However, due to the open-ended nature of the DG objective, comprehensively evaluating DG algorithms with a handful of benchmark datasets is challenging. In particular, we demonstrate that the accuracy of models trained with DG methods varies significantly across unseen domains generated from popular benchmark datasets. This underscores that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild. To overcome this roadblock, we propose a universal certification framework based on distributionally robust optimization (DRO) that can efficiently certify the worst-case performance of any DG method. This enables a data-independent evaluation of DG methods, complementary to empirical evaluations on benchmark datasets. Furthermore, we propose a training algorithm that can be used with any DG method to improve its certified performance. Our empirical evaluation demonstrates the effectiveness of our method in significantly improving the worst-case loss (i.e., reducing the risk of model failure in the wild) without incurring a significant drop in performance on the benchmark datasets.
We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while there is only one single domain available for training. We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint. Detailed theoretical analysis is provided to justify our formulation, while extensive experiments on multiple benchmark datasets indicate its superior performance in tackling single domain generalization.
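As a minimal sketch of the adversarial-augmentation idea, the PyTorch snippet below creates "fictitious" samples by gradient ascent on the task loss under an L2 proximity penalty; this plain penalty is our simplification standing in for the paper's worst-case constraint, which the authors relax with a Wasserstein auto-encoder instead.

```python
# Minimal sketch of adversarial domain augmentation (our simplification:
# an L2 proximity penalty in input space stands in for the paper's
# WAE-relaxed semantic constraint).
import torch
import torch.nn.functional as F

def augment(model, x, y, steps=5, lr=1.0, gamma=1.0):
    """Generate 'fictitious' yet 'challenging' samples from batch (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # Ascend the task loss while staying close to the original inputs.
        loss = F.cross_entropy(model(x_adv), y) - gamma * (x_adv - x).pow(2).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()
```

The augmented batch can then be mixed into meta-learning episodes so the model trains both on source data and on the synthesized hard populations.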
All of the well-known machine learning algorithms constituting supervised and semi-supervised learning work well only under a common assumption: the training and test data follow the same distribution. When the distribution changes, most statistical models must be rebuilt from newly collected data, which for some applications may be costly or impossible to obtain. It therefore becomes necessary to develop approaches that reduce the need and effort of acquiring new labeled samples, by exploiting data available in related areas and using them further in similar fields. This has given rise to a new machine learning framework called transfer learning: a learning setting inspired by the human capability to extrapolate knowledge across tasks in order to learn more efficiently. Despite the large variety of transfer learning scenarios, the main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning called domain adaptation. In this sub-field, the data distribution is assumed to change across the training and test data, while the learning task remains the same. We provide a first up-to-date description of existing results related to the domain adaptation problem, covering learning bounds based on different statistical learning frameworks.
The vast majority of existing algorithms for unsupervised domain adaptation (UDA) focus on adapting directly from a labeled source domain to an unlabeled target domain in a one-off way. Gradual domain adaptation (GDA), on the other hand, assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target, and aims to provide better generalization in the target domain by leveraging the intermediate path. Under certain assumptions, Kumar et al. (2020) proposed a simple algorithm, gradual self-training, along with a generalization bound of order $e^{O(T)}\left(\varepsilon_0 + O\left(\sqrt{\log(T)/n}\right)\right)$ for the target domain error, where $\varepsilon_0$ is the source domain error and $n$ is the data size of each domain. Due to the exponential factor, this upper bound becomes vacuous even when $T$ is only moderately large. In this work, we analyze gradual self-training under more general and relaxed assumptions, and prove a significantly improved generalization bound of $\varepsilon_0 + O\left(T\Delta + T/\sqrt{n}\right) + \widetilde{O}\left(1/\sqrt{nT}\right)$, where $\Delta$ is the average distributional distance between consecutive domains. Compared with the existing bound, which depends on $T$ exponentially as a multiplicative factor, our bound depends on $T$ only linearly and additively. Perhaps more interestingly, our result implies the existence of an optimal choice of $T$ that minimizes the generalization error, and it also naturally suggests an optimal way to construct the path of intermediate domains so as to minimize the accumulated path length $T\Delta$ between the source and target. To corroborate the implications of our theory, we examine gradual self-training on multiple semi-synthetic and real datasets, which confirms our findings. We believe our insights provide a path forward for future GDA algorithm design.
We are concerned with a worst-case scenario in model generalization, in the sense that a model aims to perform well on many unseen domains while there is only one single domain available for training. We propose adversarial domain augmentation based on meta-learning to solve this domain generalization problem. The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations, from which a model can learn to generalize with theoretical guarantees. To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein auto-encoder to relax the widely used worst-case constraint. We further improve our method by integrating uncertainty quantification for efficient domain generalization. Extensive experiments on multiple benchmark datasets demonstrate its superior performance in tackling single domain generalization.
We propose two novel transferability metrics, F-OTCE (Fast Optimal Transport based Conditional Entropy) and JC-OTCE (Joint Correspondence OTCE), to evaluate how much a source model (task) can benefit the learning of a target task and to learn more transferable representations for cross-domain cross-task transfer learning. Unlike existing metrics that require evaluating the empirical transferability on auxiliary tasks, our metrics are auxiliary-free, so that they can be computed much more efficiently. Specifically, F-OTCE estimates transferability by first solving an optimal transport (OT) problem between the source and target distributions, and then using the optimal coupling to compute the negative conditional entropy between source and target labels. It can also serve as a loss function to maximize the transferability of the source model before fine-tuning it on the target task. Meanwhile, JC-OTCE improves the robustness of F-OTCE's transferability estimates by including label distances in the OT problem, though it may incur additional computation cost. Extensive experiments demonstrate that F-OTCE and JC-OTCE outperform state-of-the-art auxiliary-free metrics by 18.85% and 28.88%, respectively, in the correlation coefficient with ground-truth transfer accuracy. By eliminating the training cost of auxiliary tasks, the two metrics reduce the total computation time of the previous method from 43 minutes to 9.32 s and 10.78 s, respectively, for a pair of tasks. When used as a loss function, F-OTCE shows consistent improvements in the transfer accuracy of the source model in few-shot classification experiments, with up to a 4.41% accuracy gain.
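A minimal sketch of how an F-OTCE-style score could be computed with the POT library is given below; it assumes a small labeled probe set from the target task, and the function name and defaults are ours, not the authors' code.

```python
# Sketch of an F-OTCE-style transferability score (illustrative, not the
# authors' implementation). Requires: pip install pot
import numpy as np
import ot  # POT: Python Optimal Transport

def f_otce(Xs, ys, Xt, yt, reg=0.1):
    ns, nt = len(Xs), len(Xt)
    a = np.full(ns, 1.0 / ns)                 # uniform source marginal
    b = np.full(nt, 1.0 / nt)                 # uniform target marginal
    M = ot.dist(Xs, Xt)                       # pairwise squared-Euclidean costs
    pi = ot.sinkhorn(a, b, M / M.max(), reg)  # entropic OT coupling

    # Joint label distribution induced by the coupling:
    # P(ys = u, yt = v) = mass transported between the two label groups.
    su, tu = np.unique(ys), np.unique(yt)
    P = np.zeros((len(su), len(tu)))
    for i, u in enumerate(su):
        for j, v in enumerate(tu):
            P[i, j] = pi[np.ix_(ys == u, yt == v)].sum()
    Ps = P.sum(axis=1, keepdims=True)         # source label marginal

    # Negative conditional entropy -H(Yt | Ys); higher = more transferable.
    mask = P > 0
    ratio = np.where(mask, P / np.where(Ps > 0, Ps, 1.0), 1.0)
    return float(np.sum(P[mask] * np.log(ratio[mask])))
```

A cheaper entropic solver is used here because the "fast" in F-OTCE comes from avoiding any auxiliary-task training, not from an exact OT solve.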
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
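A condensed sketch of one WDGRL-style training step is shown below, assuming PyTorch and a gradient-penalty critic in the WGAN-GP style; the network sizes, learning rates, and helper names are illustrative choices of ours.

```python
# Condensed WDGRL-style step (illustrative, not the authors' code).
# Assumes equal-sized source/target batches of 256-d input features.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
critic = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(feat.parameters(), lr=1e-4)

def wasserstein_gap(hs, ht):
    # Empirical dual estimate of the Wasserstein distance.
    return critic(hs).mean() - critic(ht).mean()

def gradient_penalty(hs, ht):
    # Encourage the critic to be 1-Lipschitz on interpolated features.
    eps = torch.rand(hs.size(0), 1)
    h = (eps * hs + (1 - eps) * ht).requires_grad_(True)
    g, = torch.autograd.grad(critic(h).sum(), h, create_graph=True)
    return ((g.norm(2, dim=1) - 1) ** 2).mean()

def train_step(xs, xt, n_critic=5, gp_weight=10.0):
    # 1) Train the domain critic to maximize the estimated distance.
    for _ in range(n_critic):
        hs, ht = feat(xs).detach(), feat(xt).detach()
        loss_c = -wasserstein_gap(hs, ht) + gp_weight * gradient_penalty(hs, ht)
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # 2) Train the feature extractor to minimize it (the task
    #    classification loss on labeled source data is omitted here).
    loss_f = wasserstein_gap(feat(xs), feat(xt))
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```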
Domain adaptation aims to transfer knowledge of labeled instances obtained from a source domain to a target domain, to fill the gap between the domains. Most domain adaptation methods assume that the source and target domains have the same dimensionality. Methods that are applicable when the number of features differs between the domains have rarely been studied, especially when no label information is given for the test data obtained from the target domain. In this paper, it is assumed that common features exist in both domains and that extra (new, additional) features are observed in the target domain; hence, the dimensionality of the target domain is higher than that of the source domain. To leverage the homogeneity of the common features, the adaptation between these source and target domains is formulated as an optimal transport (OT) problem. In addition, a learning bound in the target domain for the proposed OT-based method is derived. The proposed algorithm is validated using simulated and real-world data.
Machine learning systems generally assume that the training and test distributions are identical. To this end, a critical requirement is to develop models that can generalize to unseen distributions. Domain generalization (DG), i.e., out-of-distribution generalization, has attracted increasing interest in recent years. Domain generalization deals with a challenging setting where one or several different but related domains are given, and the goal is to learn a model that can generalize to an unseen test domain. Great progress has been made in the area of domain generalization over the years. This paper presents the first review of recent advances in this area. First, we provide a formal definition of domain generalization and discuss several related fields. We then thoroughly review the theories related to domain generalization and carefully analyze the theory behind generalization. We categorize recent algorithms into three classes: data manipulation, representation learning, and learning strategy, and present several popular algorithms in detail for each category. Third, we introduce the commonly used datasets and applications, along with our open-source codebase for fair evaluation. Finally, we summarize the existing literature and present some potential research topics for the future.
In this paper, we propose a novel approach for unsupervised domain adaptation that relates the concepts of optimal transport, learning probability measures, and unsupervised learning. The proposed approach, HOT-DA, is based on a hierarchical formulation of optimal transport which, beyond the geometrical information captured by the ground metric, leverages richer structural information in the source and target domains. In the labeled source domain, the additional information is formed intrinsically by grouping samples into structures according to their class labels. Exploring hidden structures in the unlabeled target domain, in turn, is cast as the problem of learning probability measures through Wasserstein barycenters, which we prove to be equivalent to spectral clustering. Experiments on a toy dataset with controllable complexity and on two challenging visual adaptation datasets show the superiority of the proposed approach.
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a different but related unlabeled target domain with identical label space. Currently, the main workhorse for solving UDA is domain alignment, which has proven successful. However, it is often difficult to find an appropriate source domain with identical label space. A more practical scenario is so-called partial domain adaptation (PDA), in which the source label set or space subsumes the target one. Unfortunately, in PDA, due to the existence of irrelevant categories in the source domain, it is quite hard to obtain a perfect alignment, thus resulting in mode collapse and negative transfer. Although several efforts have been made by down-weighting the irrelevant source categories, the strategies used tend to be burdensome and risky since exactly which categories are irrelevant is unknown. These challenges motivate us to find a relatively simpler alternative for solving PDA. To achieve this, we first provide a thorough theoretical analysis, which illustrates that the target risk is bounded by both model smoothness and between-domain discrepancy. Considering the difficulty of perfect alignment in solving PDA, we focus on model smoothness while discarding the riskier domain alignment to enhance the adaptability of the model. Specifically, we instantiate model smoothness as a quite simple intra-domain structure preserving (IDSP) objective. To the best of our knowledge, this is the first attempt to address PDA without domain alignment. Finally, our empirical results on multiple benchmark datasets demonstrate that IDSP is not only superior to the PDA state of the art by a significant margin on some benchmarks (e.g., +10% on Cl->Rw and +8% on Ar->Rw), but also complementary to domain alignment in standard UDA.
Recent work reported the label alignment property in a supervised learning setting: the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix. Inspired by this observation, we derive a regularization method for unsupervised domain adaptation. Instead of regularizing representation learning as done by popular domain adaptation methods, we regularize the classifier so that the target domain predictions can to some extent "align" with the top singular vectors of the unsupervised data matrix from the target domain. In a linear regression setting, we theoretically justify the label alignment property and characterize the optimality of the solution of our regularization by bounding its distance to the optimal solution. We conduct experiments to show that our method can work well on label shift problems, where classic domain adaptation methods are known to fail. We also report mild improvement over domain adaptation baselines on a set of commonly seen MNIST-USPS domain adaptation tasks and on cross-lingual sentiment analysis tasks.
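Our reading of the regularizer, sketched for a linear least-squares model (the function and penalty form below are our illustration, not the authors' code): penalize the component of the target-domain predictions that falls outside the span of the top singular vectors of the target data matrix.

```python
# Label-alignment regularization for a linear model (illustrative sketch).
import numpy as np

def fit_with_label_alignment(Xs, ys, Xt, k=5, lam=1.0):
    # Top-k left singular vectors of the unlabeled target data matrix.
    U, _, _ = np.linalg.svd(Xt, full_matrices=False)
    Uk = U[:, :k]
    # Penalize the part of the target predictions Xt @ w outside span(Uk):
    # minimize ||Xs w - ys||^2 + lam * ||(I - Uk Uk^T) Xt w||^2.
    P_out = np.eye(Xt.shape[0]) - Uk @ Uk.T
    A = Xs.T @ Xs + lam * Xt.T @ P_out @ Xt
    return np.linalg.solve(A, Xs.T @ ys)   # closed-form solution
```

Because the penalty acts on the classifier's outputs rather than on learned features, it slots into the linear-regression analysis the abstract describes.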
Adversarial learning strategies have demonstrated remarkable performance in dealing with single-source domain adaptation (DA) problems, and have recently been applied to multi-source DA (MDA) problems. Although most existing MDA strategies rely on a multiple-domain-discriminator setting, the effect of this setting on latent-space representations has been poorly understood. Here we adopt an information-theoretic approach to identify and resolve the potential adverse effects of multiple domain discriminators on MDA: disintegration of domain-discriminative information, limited computational scalability, and large variance in the gradient of the loss during training. We examine the above issues by situating adversarial DA in the context of information regularization. This also provides a theoretical justification for using a single and unified domain discriminator. Based on this idea, we implement a novel neural architecture called the Multi-source Information-regularized Adaptation Network (MIAN). Large-scale experiments demonstrate that MIAN, despite its structural simplicity, reliably and significantly outperforms other state-of-the-art methods.
Aiming to generalize a model trained on source domains to unseen target domains, domain generalization (DG) has recently attracted much attention. The key issue in DG is how to prevent overfitting to the observed source domains, since the target domains are unavailable during training. We find that overfitting not only causes poor generalization to unseen target domains but also leads to unstable predictions in the test stage. In this paper, we observe that both sampling multiple tasks in the training stage and generating augmented images in the test stage largely benefit generalization performance. Thus, by treating tasks and images as different views, we propose a novel multi-view DG framework. Specifically, in the training stage, to enhance generalization ability, we develop a multi-view regularized meta-learning algorithm that employs multiple tasks to produce a suitable optimization direction during model updating. In the test stage, to alleviate unstable predictions, we utilize multiple augmented images to yield a multi-view prediction, which significantly promotes model reliability by fusing the results of different views of a test image. Extensive experiments on three benchmark datasets validate that our method outperforms several state-of-the-art approaches.
This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models. The main idea is to exploit the Siamese architecture to learn an embedding subspace that is discriminative, and where mapped visual domains are semantically aligned and yet maximally separated. The supervised setting becomes attractive especially when only a few target data samples need to be labeled. In this scenario, alignment and separation of semantic probability distributions is difficult because of the lack of data. We found that reverting to point-wise surrogates of distribution distances and similarities provides an effective solution. In addition, the approach has a high "speed" of adaptation, requiring an extremely low number of labeled target training samples; even one per category can be effective. The approach is extended to domain generalization. For both applications, the experiments show very promising results.
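One plausible form of such a point-wise surrogate is a contrastive semantic alignment loss over cross-domain embedding pairs, sketched below in PyTorch (the margin m and the normalization are our choices, not necessarily the paper's):

```python
# Point-wise surrogate for aligning same-class pairs across domains and
# separating different-class pairs (illustrative sketch).
import torch

def semantic_alignment_loss(h_src, y_src, h_tgt, y_tgt, m=1.0):
    d = torch.cdist(h_src, h_tgt)                        # pairwise distances
    same = (y_src[:, None] == y_tgt[None, :]).float()    # same-class mask
    # Pull same-class cross-domain pairs together ...
    align = (same * d.pow(2)).sum() / same.sum().clamp(min=1)
    # ... and push different-class pairs at least margin m apart.
    sep = ((1 - same) * torch.relu(m - d).pow(2)).sum() / (1 - same).sum().clamp(min=1)
    return align + sep
```

Because the loss is computed over individual sample pairs rather than full distributions, it stays informative even with very few labeled target samples, matching the few-shot setting the abstract describes.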
Although huge volumes of unlabeled data are generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, such a requirement cannot be met in real-world applications. The number of labels is limited, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data, and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we overview state-of-the-art methods for the different categories of UDA, from traditional methods to deep-learning-based methods. Finally, we collect frequently used benchmark datasets and report the results of state-of-the-art UDA methods on visual recognition problems.
Deep Neural Networks (DNNs) suffer from domain shift when the test dataset follows a distribution different from the training dataset. Domain generalization aims to tackle this issue by learning a model that can generalize to unseen domains. In this paper, we propose a new approach that aims to explicitly remove domain-specific features for domain generalization. Following this approach, we propose a novel framework called Learning and Removing Domain-specific features for Generalization (LRDG) that learns a domain-invariant model by tactically removing domain-specific features from the input images. Specifically, we design a classifier to effectively learn the domain-specific features of each source domain. We then develop an encoder-decoder network to map each input image into a new image space where the learned domain-specific features are removed. With the images output by the encoder-decoder network, another classifier is designed to learn the domain-invariant features to conduct image classification. Extensive experiments demonstrate that our framework achieves superior performance compared with state-of-the-art methods.