We propose an adversarial learning approach to domain adaptation (DA) for a time series regression task (DANNTE). The regression aims at building a virtual copy of a sensor installed on a gas turbine, to be used in place of the physical sensor, which can be missing in certain situations. Our DA approach is to search for a domain-invariant representation of the features. The learner has access to a labeled source dataset and an unlabeled target dataset (unsupervised DA) and is trained on both, exploiting a min-max game between a task regressor and a domain classifier neural network. Both models share the same feature representation, learned by a feature extractor. This work is based on the results published by Ganin et al. (arXiv:1505.07818); indeed, we present an extension suited to time series applications. We report a significant improvement in regression performance with respect to a baseline model trained on the source domain only.
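The abstract describes a min-max game between a task regressor and a domain classifier that share a feature extractor. Below is a minimal sketch of one such training step; the module choices, sizes, and the trade-off weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative modules and sizes (assumptions, not the paper's exact architecture).
feat = nn.GRU(input_size=8, hidden_size=32, batch_first=True)  # shared feature extractor for time series
regressor = nn.Linear(32, 1)                                   # task regressor (virtual sensor output)
dom_clf = nn.Linear(32, 2)                                     # domain classifier (source=0, target=1)

opt_task = torch.optim.Adam(list(feat.parameters()) + list(regressor.parameters()), lr=1e-3)
opt_dom = torch.optim.Adam(dom_clf.parameters(), lr=1e-3)
lam = 0.1  # weight of the adversarial term (assumed)

def train_step(x_src, y_src, x_tgt):
    # shared representation for both domains (last hidden state of the GRU)
    h_src = feat(x_src)[1][-1]
    h_tgt = feat(x_tgt)[1][-1]
    dom_y = torch.cat([torch.zeros(len(h_src), dtype=torch.long),
                       torch.ones(len(h_tgt), dtype=torch.long)])

    # 1) the domain classifier learns to tell the domains apart
    dom_loss = F.cross_entropy(dom_clf(torch.cat([h_src, h_tgt]).detach()), dom_y)
    opt_dom.zero_grad(); dom_loss.backward(); opt_dom.step()

    # 2) extractor + regressor fit the task while trying to fool the domain classifier (min-max)
    task_loss = F.mse_loss(regressor(h_src).squeeze(-1), y_src)
    confusion = -F.cross_entropy(dom_clf(torch.cat([h_src, h_tgt])), dom_y)
    opt_task.zero_grad(); (task_loss + lam * confusion).backward(); opt_task.step()
```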
Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving an adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on the Office datasets.
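The gradient reversal layer mentioned above acts as the identity in the forward pass and multiplies gradients by a negative constant in the backward pass, so standard backpropagation trains the whole architecture. A minimal PyTorch sketch (class and parameter names are illustrative):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (and scale) the gradient flowing back into the feature extractor
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: features go straight to the label predictor, and through the
# reversal layer to the domain classifier:
#   domain_logits = domain_classifier(grad_reverse(features, lam))
```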
While huge volumes of unlabeled data are generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, such a requirement cannot be met in real-world applications. The number of labels is limited, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain-shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we overview the state-of-the-art methods for the different categories of UDA, covering both traditional methods and deep-learning-based methods. Finally, we collect frequently used benchmark datasets and the reported results of the state-of-the-art UDA methods on visual recognition problems.
In this paper, we propose a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for unsupervised domain adaptation, in order to improve its ability to extract domain-invariant features. We conduct experiments on the Domain-Adversarial Training of Neural Networks (DANN) model as a representative algorithm. In the training process, we feed the same input to the two modules and then extract their feature distributions and prediction results separately. We propose a discrepancy loss to measure the difference between the prediction results and between the feature distributions of the two modules. Through adversarial training that maximizes the discrepancy of their feature distributions and minimizes the discrepancy of their prediction results, the two modules are encouraged to learn more domain-discriminative and domain-invariant features, respectively. Extensive comparative evaluations are conducted, and the proposed method achieves state-of-the-art performance in most unsupervised domain adaptation tasks.
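As a rough illustration of the discrepancy loss described above, one simple choice is to compare the two modules' softmax predictions and a first-order statistic of their features; the concrete distance measures below are assumptions, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def discrepancy(pred_a, pred_b, feat_a, feat_b):
    """Toy discrepancy between two modules fed the same input.
    pred_*: raw logits (N, C); feat_*: features (N, D).  Distance choices are illustrative."""
    pred_gap = (F.softmax(pred_a, dim=1) - F.softmax(pred_b, dim=1)).abs().mean()
    feat_gap = (feat_a.mean(dim=0) - feat_b.mean(dim=0)).abs().mean()
    return pred_gap, feat_gap

# Adversarial schedule sketched in the abstract: maximize the feature-distribution
# discrepancy while minimizing the prediction discrepancy between the two modules.
```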
Contemporary domain adaptation methods are very effective at aligning feature distributions of source and target domains without any target supervision. However, we show that these techniques perform poorly when even a few labeled examples are available in the target domain. To address this semi-supervised domain adaptation (SSDA) setting, we propose a novel Minimax Entropy (MME) approach that adversarially optimizes an adaptive few-shot model. Our base model consists of a feature encoding network, followed by a classification layer that computes the features' similarity to estimated prototypes (representatives of each class). Adaptation is achieved by alternately maximizing the conditional entropy of unlabeled target data with respect to the classifier and minimizing it with respect to the feature encoder. We empirically demonstrate the superiority of our method over many baselines, including conventional feature alignment and few-shot methods, setting a new state of the art for SSDA. Our code is available at http://cs-people.bu.edu/keisaito/research/MME.html.
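The adaptation step alternates the sign of the conditional entropy of unlabeled target predictions: maximized with respect to the classifier and minimized with respect to the feature encoder. A hedged sketch of the entropy term follows; realizing the alternation with a gradient reversal layer is one convenient implementation choice assumed here.

```python
import torch
import torch.nn.functional as F

def conditional_entropy(logits):
    """Mean entropy H(p) = -sum_c p_c log p_c of softmax predictions on unlabeled target data."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# Minimax entropy sketch: the classifier ascends the entropy (pulling prototypes toward
# target features) while the encoder descends it; with a gradient reversal layer on the
# features, a single term  lam * conditional_entropy(classifier(grad_reverse(features)))
# covers both directions in one backward pass.
```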
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
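A minimal sketch of the domain-critic objective: the critic maximizes the mean score gap between source and target features (an empirical estimate of the Wasserstein-1 distance under a Lipschitz constraint, approximated below with a gradient penalty), while the feature extractor minimizes it. Function names and the penalty weight are illustrative assumptions.

```python
import torch

def critic_wasserstein(critic, h_src, h_tgt, gp_weight=10.0):
    """Empirical Wasserstein estimate plus gradient penalty; assumes equal batch sizes."""
    wd = critic(h_src).mean() - critic(h_tgt).mean()

    # gradient penalty on random interpolates to approximate the Lipschitz constraint
    alpha = torch.rand(h_src.size(0), 1, device=h_src.device)
    inter = (alpha * h_src + (1 - alpha) * h_tgt).requires_grad_(True)
    grad = torch.autograd.grad(critic(inter).sum(), inter, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()

    critic_loss = -wd + gp_weight * gp   # the critic maximizes wd (i.e. minimizes -wd)
    extractor_loss = wd                  # the feature extractor minimizes the estimated distance
    return critic_loss, extractor_loss   # in practice the two are optimized in alternating steps
```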
Existing unsupervised domain adaptation methods based on adversarial learning have achieved good performance in several medical imaging tasks. However, these methods focus only on global distribution adaptation and ignore distribution constraints at the category level, which leads to sub-optimal adaptation performance. This paper proposes an unsupervised domain adaptation framework based on category-level regularization, which regularizes the category distribution from three perspectives. Specifically, for inter-domain category regularization, an adaptive prototype alignment module is proposed to align the feature prototypes of the same category in the source and target domains. In addition, for intra-domain category regularization, we tailor regularization techniques for the source and target domains respectively. In the source domain, a prototype-guided discriminative loss is proposed to learn more discriminative feature representations by enforcing intra-class compactness and inter-class separability, serving as a complement to the conventional supervised loss. In the target domain, an augmented-consistency category regularization loss is proposed to force the model to produce consistent predictions for augmented/unaugmented target images, which encourages semantically similar regions to be given the same label. Extensive experiments on two public fundus datasets demonstrate that the proposed method significantly outperforms other state-of-the-art comparison algorithms.
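A rough sketch of inter-domain prototype alignment: class prototypes are per-class mean feature vectors (pseudo-labels on the target side), and prototypes of the same class are pulled together. The squared-distance penalty and the use of pseudo-labels below are assumptions, not the paper's exact formulation.

```python
import torch

def prototype_alignment_loss(feat_s, y_s, feat_t, y_t_pseudo, num_classes):
    """Align per-class mean features (prototypes) across domains; y_t_pseudo are target pseudo-labels."""
    loss, used = 0.0, 0
    for c in range(num_classes):
        s_mask, t_mask = (y_s == c), (y_t_pseudo == c)
        if s_mask.any() and t_mask.any():
            proto_s = feat_s[s_mask].mean(dim=0)
            proto_t = feat_t[t_mask].mean(dim=0)
            loss = loss + (proto_s - proto_t).pow(2).sum()
            used += 1
    return loss / max(used, 1)
```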
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. As a complement to this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain with the goal of performing well at test-time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially-costly target data labels. This survey will compare these approaches by examining alternative methods, the unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
Objective: With the rapid rise of wearable sleep monitoring devices with non-conventional electrode configurations, there is a need for automated algorithms that can perform sleep staging on configurations with small amounts of labeled data. Transfer learning makes it possible to adapt neural network weights from a source modality (e.g., the standard electrode configuration) to a new target modality (e.g., a non-conventional electrode configuration). Methods: We propose feature matching, a new transfer learning strategy, as an alternative to the commonly used fine-tuning approach. This method consists of training a model with a large amount of data from the source modality and only a few paired samples from the source and target modalities. For those paired samples, the model extracts features of the target modality that are matched to the features of the corresponding samples from the source modality. Results: We compare feature matching to fine-tuning on three different target domains, with two different neural network architectures, and with varying amounts of training data. Particularly on small cohorts (i.e., few labeled recordings in the non-conventional recording setting), feature matching systematically outperforms fine-tuning, with mean relative differences in accuracy of 0.4% to 4.7% across the different scenarios and datasets. Conclusion: Our findings suggest that feature matching outperforms fine-tuning as a transfer learning approach, especially in very low data regimes. Significance: We therefore conclude that feature matching is a promising new approach for wearable sleep staging with novel devices.
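A minimal sketch of the feature-matching idea for the paired samples: features extracted from the target-modality signal are matched to the features of the corresponding source-modality signal. The mean-squared-error distance, the shared encoder, and detaching the source features are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(encoder, x_source_mod, x_target_mod):
    """x_source_mod / x_target_mod: the same recordings measured with the standard and the
    non-conventional electrode configuration (paired samples)."""
    f_src = encoder(x_source_mod).detach()   # reference features from the source modality
    f_tgt = encoder(x_target_mod)            # features from the target modality
    return F.mse_loss(f_tgt, f_src)          # pull the paired features together
```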
Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL [1] is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance.
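The CORAL loss aligns the second-order statistics (covariances) of source and target activations, and Deep CORAL back-propagates it through the network. A minimal sketch of the standard formulation || C_S - C_T ||_F^2 / (4 d^2); `torch.cov` requires a recent PyTorch version.

```python
import torch

def coral_loss(h_src, h_tgt):
    """|| C_S - C_T ||_F^2 / (4 d^2), where C_S and C_T are feature covariance matrices."""
    d = h_src.size(1)
    c_s = torch.cov(h_src.T)   # (d, d) covariance of source activations
    c_t = torch.cov(h_tgt.T)   # (d, d) covariance of target activations
    return ((c_s - c_t) ** 2).sum() / (4 * d * d)
```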
Unsupervised domain adaptation (UDA) is one of the key technologies for addressing the problem that the ground-truth labels required for supervised learning are difficult to obtain. Typically, UDA assumes that all samples from both the source and target domains are available during training. However, this is not a realistic assumption in applications that involve data privacy issues. To overcome this limitation, UDA without source data, i.e., source-free unsupervised domain adaptation (SFUDA), has recently been proposed. Here, we propose an SFUDA method for medical image segmentation. In addition to the entropy minimization commonly used in UDA, we introduce a loss function that avoids a reduction of the feature norm in the target domain, as well as a prior that constrains the shape of the target organ. We conduct experiments using datasets comprising multiple types of source-target domain combinations to show the versatility and robustness of our method. We confirm that our method outperforms the state-of-the-art on all datasets.
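The entropy-minimization term mentioned above can be sketched as the mean pixel-wise entropy of the segmentation softmax on target images; only this standard term is shown here, while the feature-norm loss and the shape prior are specific to the paper and not reproduced.

```python
import torch
import torch.nn.functional as F

def pixelwise_entropy(logits):
    """Mean per-pixel prediction entropy of a segmentation output.  logits: (N, C, H, W)."""
    p = F.softmax(logits, dim=1)
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1)   # (N, H, W) entropy map
    return ent.mean()                              # minimized on unlabeled target images
```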
In recent years, deep learning has become one of the most effective computer vision tools for remote sensing scientists. However, remote sensing datasets lack training labels, which means that scientists need to solve the domain adaptation problem in order to narrow the discrepancy between satellite image datasets. As a result, image segmentation models trained afterwards can generalize better and make use of an existing set of labels instead of requiring new ones. This work proposes an unsupervised domain adaptation model that preserves the semantic consistency and per-pixel quality of the images during the style-transfer phase. The main contribution of this paper is an improved architecture of the SemI2I model, which significantly boosts the performance of the proposed model and makes it competitive with the state-of-the-art CyCADA model. A second contribution is testing the CyCADA model on remote sensing multi-band datasets such as WorldView-2 and SPOT-6. The proposed model preserves the semantic consistency and per-pixel quality of the images during the style-transfer phase. As a result, the semantic segmentation model trained on the adapted images shows substantial performance gains compared to the SemI2I model and reaches results similar to those of the state-of-the-art CyCADA model. Future development of the proposed method could include ecological domain transfer, a priori quality assessment of the data distribution, or exploration of the inner architecture of the domain adaptation model.
Source-free domain adaptation aims to adapt a source model trained on fully-labeled source domain data to a target domain with unlabeled target domain data. Source data is assumed inaccessible due to proprietary or privacy reasons. Existing works use the source model to pseudolabel target data, but the pseudolabels are unreliable due to data distribution shift between source and target domain. In this work, we propose to leverage an ImageNet pre-trained feature extractor in a new co-learning framework to improve target pseudolabel quality for finetuning the source model. Benefits of the ImageNet feature extractor include that it is not source-biased and it provides an alternate view of features and classification decisions different from the source model. Such pre-trained feature extractors are also publicly available, which allows us to readily leverage modern network architectures that have strong representation learning ability. After co-learning, we sharpen predictions of non-pseudolabeled samples by entropy minimization. Evaluation on 3 benchmark datasets shows that our proposed method can outperform existing source-free domain adaptation methods, as well as unsupervised domain adaptation methods which assume joint access to source and target data.
Although unsupervised domain adaptation (UDA) algorithms, i.e., algorithms with labeled data only from the source domain, have been studied extensively in recent years, most algorithms and theoretical results focus on single-source unsupervised domain adaptation (SUDA). However, in practical scenarios, labeled data can typically be collected from multiple diverse sources, and they may differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep-learning-based multi-source unsupervised domain adaptation (MUDA) algorithms focus on extracting a common domain-invariant representation for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often difficult to extract the same domain-invariant representation for all domains in MUDA. In addition, these methods match distributions without considering the domain-specific decision boundaries between classes. To address these problems, we propose a new framework with two alignment stages for MUDA, which not only aligns the distributions of each pair of source and target domains but also aligns the decision boundaries by utilizing the outputs of the domain-specific classifiers. Extensive experiments show that our method can achieve remarkable results on popular benchmark datasets for image classification.
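Aligning decision boundaries "by utilizing the outputs of the domain-specific classifiers" can be sketched as penalizing the disagreement of those classifiers on target samples; the pairwise L1 disagreement below is one common choice and an assumption about the exact form used.

```python
import torch
import torch.nn.functional as F

def classifier_discrepancy(logits_list):
    """Mean pairwise L1 distance between the softmax outputs of several domain-specific
    classifiers evaluated on the same unlabeled target batch."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    loss, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            loss = loss + (probs[i] - probs[j]).abs().mean()
            pairs += 1
    return loss / max(pairs, 1)
```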
Existing domain adaptation (DA) algorithms train a target model and then use the target model to classify all samples in the target dataset. Although this approach attempts to address the problem that the source and target data come from different distributions, it fails to recognize the possibility that, within the target domain, some samples are closer to the distribution of the source domain than to the distribution of the target domain. In this paper, we develop a novel DA algorithm, Enforced Transfer, that deals with this situation. A straightforward yet effective idea to address this dilemma is to use an out-of-distribution detection algorithm to decide, at the testing stage, whether a given sample is closer to the source domain, to the target domain, or to neither. In the first case, the sample is given to a machine learning classifier trained on source samples. In the second case, the sample is given to a machine learning classifier trained on target samples. In the third case, the sample is discarded, since neither the ML model trained on the source nor the ML model trained on the target is suitable to classify it. It is well known that the first few layers of a neural network extract low-level features, so the classification of a sample into one of the three cases can be performed on the sample's activations after an empirically determined layer. Enforced Transfer implements this idea. On three types of DA tasks, we outperform the state-of-the-art algorithms we compare against.
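The routing rule described above can be sketched as: score a sample's intermediate activations against each domain, then dispatch it to the source-trained classifier, the target-trained classifier, or discard it. The distance-to-centroid score is a placeholder assumption (the abstract does not specify the out-of-distribution detector), and all names (`src_center`, `threshold`, etc.) are illustrative.

```python
import torch

def route(activation, src_center, tgt_center, src_clf, tgt_clf, threshold):
    """Decide which classifier (if any) should handle a sample, based on its activations
    after an empirically chosen early layer.  Distance-to-centroid is a stand-in OOD score."""
    d_src = torch.dist(activation, src_center)
    d_tgt = torch.dist(activation, tgt_center)
    if min(d_src, d_tgt) > threshold:
        return None                      # neither model is suited: discard the sample
    return src_clf(activation) if d_src <= d_tgt else tgt_clf(activation)
```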
The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We suggest that explicitly modeling what is unique to each domain can improve a model's ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained not only to perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process.
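One concrete way, used in the private-shared literature, to keep the private and shared subspaces from encoding the same information is a soft orthogonality penalty between the two feature matrices; a minimal sketch is below (whether this exact penalty and the mean-centering step are used here is an assumption based on that line of work).

```python
import torch

def difference_loss(shared, private):
    """Soft subspace orthogonality: || H_shared^T H_private ||_F^2 on zero-mean features.
    shared, private: (N, D) batches of shared and private representations."""
    shared = shared - shared.mean(dim=0, keepdim=True)
    private = private - private.mean(dim=0, keepdim=True)
    return (shared.t() @ private).pow(2).sum()
```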
Unsupervised domain adaptation (UDA), which transfers knowledge from a label-rich source domain to a related but unlabeled target domain, has attracted considerable attention. Reducing the inter-domain discrepancy has always been a key factor in improving UDA performance, especially for tasks where there is a large gap between the source and target domains. To this end, we propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information. Inspired by the human transitive-inference and learning ability, a novel style-aware self-intermediate domain (SSID) is investigated to link two seemingly unrelated concepts through a series of intermediate auxiliary synthesized concepts. Specifically, we propose a novel SSID learning strategy that selects samples from both the source and target domains as anchors and then randomly fuses the object and style features of these anchors to generate labeled, style-rich intermediate auxiliary features for knowledge transfer. In addition, we design an external memory bank to store and update the designated labeled features, in order to obtain stable class features and class-wise style features. Based on the proposed memory bank, intra-domain and inter-domain loss functions are designed to improve the class recognition ability and feature compatibility. Meanwhile, we simulate the rich latent feature space of SSID via infinite sampling and analyze the convergence of the loss functions theoretically. Finally, we conduct comprehensive experiments on commonly used domain adaptation benchmarks to evaluate the proposed SAFF, and the experimental results show that the proposed SAFF can easily be combined with different backbone networks and, used as a plug-and-play module, obtains better performance.
Recently, 3D point cloud learning has been a hot topic in computer vision and autonomous driving. Due to the fact that it is difficult to manually annotate a large-scale 3D point cloud dataset of sufficient quality, unsupervised domain adaptation (UDA) has become popular in 3D point cloud learning, aiming to transfer learned knowledge from a labeled source domain to an unlabeled target domain. However, the generalization and reconstruction errors caused by domain shift under a simple learning model are inevitable, which substantially hinders the model's ability to learn good representations. To address these issues, we propose an end-to-end self-ensembling network (SEN) for 3D point cloud domain adaptation tasks. Generally, our SEN draws on the advantages of mean-teacher and semi-supervised learning, and introduces a soft classification loss and a consistency loss, aiming at consistent generalization and accurate reconstruction. In SEN, the student network is trained in a collaborative manner with supervised learning and self-supervised learning, while the teacher network enforces temporal consistency to learn useful representations and to ensure the quality of the point cloud reconstruction. Extensive experiments on several 3D point cloud UDA benchmarks show that our SEN outperforms the state-of-the-art methods on both classification and segmentation tasks. Moreover, further analysis demonstrates that our SEN also achieves better reconstruction results.
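The mean-teacher ingredient mentioned above keeps a teacher whose weights are an exponential moving average of the student's, with a consistency loss tying their predictions. A minimal sketch follows; the decay value and the squared-error consistency loss are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, decay=0.99):
    """Teacher weights are an exponential moving average of the student weights."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def consistency_loss(student_logits, teacher_logits):
    """Temporal-consistency term: the student's predictions should match the (EMA) teacher's."""
    return F.mse_loss(F.softmax(student_logits, dim=1),
                      F.softmax(teacher_logits, dim=1).detach())
```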
Domain adaptation (DA) approaches address domain shift and enable networks to be applied to different scenarios. Although various image DA approaches have been proposed in recent years, there is limited research towards video DA. This is partly due to the complexity in adapting the different modalities of features in videos, which includes the correlation features extracted as long-term dependencies of pixels across spatiotemporal dimensions. The correlation features are highly associated with action classes and proven their effectiveness in accurate video feature extraction through the supervised action recognition task. Yet correlation features of the same action would differ across domains due to domain shift. Therefore we propose a novel Adversarial Correlation Adaptation Network (ACAN) to align action videos by aligning pixel correlations. ACAN aims to minimize the distribution of correlation information, termed as Pixel Correlation Discrepancy (PCD). Additionally, video DA research is also limited by the lack of cross-domain video datasets with larger domain shifts. We, therefore, introduce a novel HMDB-ARID dataset with a larger domain shift caused by a larger statistical difference between domains. This dataset is built in an effort to leverage current datasets for dark video classification. Empirical results demonstrate the state-of-the-art performance of our proposed ACAN for both existing and the new video DA datasets.
Unsupervised domain adaptation (UDA) has shown remarkable results in bearing fault diagnosis under changing working conditions in recent years. However, most UDA methods do not consider the geometric structure of the data. Moreover, a global domain adaptation technique is usually applied, which ignores the relations between subdomains. This paper addresses these challenges by presenting a novel deep subdomain adaptation graph convolutional neural network (DSAGCN) with two key characteristics: first, a graph convolutional neural network (GCNN) is employed to model the structure of the data; second, adversarial domain adaptation and the local maximum mean discrepancy (LMMD) method are applied simultaneously to align the distributions of the subdomains and to reduce the structural discrepancy between related subdomains and the global domains. The CWRU and Paderborn bearing datasets are used to verify the efficiency and superiority of the DSAGCN method over the comparison models. The experimental results demonstrate the significance of aligning structured subdomains together with domain adaptation methods in order to obtain an accurate data-driven model for unsupervised fault diagnosis.
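LMMD extends the maximum mean discrepancy (MMD) by weighting it per class/subdomain; the plain Gaussian-kernel MMD that it builds on can be sketched as below. The subdomain weighting itself is omitted, and the single fixed kernel bandwidth is an illustrative choice.

```python
import torch

def gaussian_mmd(h_src, h_tgt, sigma=1.0):
    """Biased estimate of the squared MMD between source and target features with a Gaussian
    kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).  LMMD additionally weights terms by class."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(h_src, h_src).mean() + kernel(h_tgt, h_tgt).mean() - 2 * kernel(h_src, h_tgt).mean()
```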