A machine learning model degrades in performance when it is applied to data from a domain that is similar to, but different from, the one it was originally trained on. To mitigate this domain shift problem, domain adaptation (DA) techniques search for an optimal transformation that converts the (current) input data from the source domain into the target domain, learning a domain-invariant representation that reduces the domain discrepancy. This paper proposes a novel supervised DA approach based on two steps. First, we search from a few samples for an optimal class-dependent transformation from the source to the target domain. We consider optimal transport methods such as Earth Mover's Distance, Sinkhorn transport, and Correlation Alignment. Second, we use embedding-similarity techniques to select the corresponding transformation at inference time, relying on correlation metrics and higher-order moment matching techniques. We conduct extensive evaluations on time-series datasets with domain shift, including simulated data and various online handwriting datasets, to demonstrate the performance.
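As an illustration of one of the transformations named above, Correlation Alignment can be sketched in a few lines of NumPy: whiten the centered source features with their own covariance and re-color them with the target covariance. This is our minimal sketch, not the paper's implementation; the class-dependent variant would fit one such transform per class and pick it at inference time via the embedding-similarity step.

```python
import numpy as np

def _sqrtm_psd(C):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def coral_transform(Xs, Xt, eps=1e-5):
    """Minimal CORAL sketch: map source features Xs onto the second-order
    statistics of the target Xt.  Xs, Xt: arrays of shape (n_samples, n_features)."""
    # Regularized covariance matrices of source and target.
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten centered source features with Cs^{-1/2}, re-color with Ct^{1/2}.
    whiten = np.linalg.inv(_sqrtm_psd(Cs))
    recolor = _sqrtm_psd(Ct)
    return (Xs - Xs.mean(axis=0)) @ whiten @ recolor
```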
While large amounts of unlabeled data are generated and made available in many domains, the demand for automated understanding of visual data is higher than ever. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, this requirement cannot be met in real-world applications: labels are scarce, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the annotation burden, domain adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data, and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we give an overview of the state-of-the-art methods for different categories of UDA, covering both traditional methods and deep-learning-based methods. Finally, we collect commonly used benchmark datasets and the reported results of state-of-the-art UDA methods on visual recognition problems.
Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews exist for the emerging deep-learning-based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.
In image classification, obtaining sufficient labels is often expensive and time-consuming. To address this, domain adaptation often provides an attractive option, given large amounts of labeled data from a similar but different domain. Existing methods mainly align the distributions of representations extracted by a single structure, and such representations may contain only partial information, e.g., only saturation, brightness, and hue information. Along this line, we propose multi-representation adaptation, which can substantially improve classification accuracy for cross-domain image classification and specifically aims to align the distributions of multiple representations extracted by a module named the Inception Adaptation Module (IAM). Based on this, we present the Multi-Representation Adaptation Network (MRAN) to accomplish cross-domain image classification tasks via multi-representation alignment, which can capture information from different aspects. In addition, we extend Maximum Mean Discrepancy (MMD) to compute the adaptation loss. Our approach can be easily implemented by extending most feed-forward models with the IAM, and the network can be trained efficiently via back-propagation. Experiments conducted on three benchmark image datasets demonstrate the effectiveness of MRAN. The code has been made available at https://github.com/easezyc/deep-transfer-learning.
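A minimal sketch of the multi-representation alignment idea: compute an MMD-style discrepancy per representation branch (e.g., the outputs of an IAM-like module) and sum them into one adaptation loss. The function names, the single Gaussian bandwidth, and the branch structure are our simplifying assumptions, not the released MRAN code.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Biased Gaussian-kernel MMD^2 between two batches of features."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def multi_representation_alignment(src_branches, tgt_branches, sigma=1.0):
    """Sum an MMD-style discrepancy over several representation branches,
    as a stand-in for a multi-representation adaptation loss."""
    return sum(rbf_mmd(s, t, sigma) for s, t in zip(src_branches, tgt_branches))
```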
Domain adaptation aims at generalizing a high-performance learner on a target domain via utilizing the knowledge distilled from a source domain which has a different but related data distribution. One solution to domain adaptation is to learn domain invariant feature representations while the learned representations should also be discriminative in prediction. To learn such representations, domain adaptation frameworks usually include a domain invariant representation learning approach to measure and reduce the domain discrepancy, as well as a discriminator for classification. Inspired by Wasserstein GAN, in this paper we propose a novel approach to learn domain invariant feature representations, namely Wasserstein Distance Guided Representation Learning (WDGRL). WDGRL utilizes a neural network, denoted by the domain critic, to estimate empirical Wasserstein distance between the source and target samples and optimizes the feature extractor network to minimize the estimated Wasserstein distance in an adversarial manner. The theoretical advantages of Wasserstein distance for domain adaptation lie in its gradient property and promising generalization bound. Empirical studies on common sentiment and image classification adaptation datasets demonstrate that our proposed WDGRL outperforms the state-of-the-art domain invariant representation learning approaches.
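The adversarial Wasserstein estimate at the core of this approach can be sketched as follows: a small critic network scores source and target features, the gap of its mean scores serves as the empirical Wasserstein-1 estimate, and a gradient penalty approximates the Lipschitz constraint. The critic architecture, penalty weight, and training comments below are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

feature_dim = 64  # assumed feature size for this sketch
critic = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def critic_objective(hs, ht):
    """Empirical Wasserstein-1 estimate: gap between mean critic scores of the domains."""
    return critic(hs).mean() - critic(ht).mean()

def gradient_penalty(hs, ht, lam=10.0):
    """WGAN-GP style surrogate for the Lipschitz constraint on interpolated
    features (assumes equal batch sizes for hs and ht)."""
    alpha = torch.rand(hs.size(0), 1)
    h = (alpha * hs + (1 - alpha) * ht).requires_grad_(True)
    grad = torch.autograd.grad(critic(h).sum(), h, create_graph=True)[0]
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

# Training alternates two steps (sketch):
#   1) maximize  critic_objective(hs, ht) - gradient_penalty(hs, ht)  w.r.t. the critic
#   2) minimize  classification_loss + critic_objective(hs, ht)       w.r.t. the feature extractor
```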
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
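The linear-time, unbiased multi-kernel MMD estimate that this kind of mean-embedding matching relies on can be sketched as below; the bandwidth set is illustrative rather than the paper's learned kernel weights.

```python
import torch

def gaussian_kernel(a, b, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Sum of RBF kernels at several bandwidths (multi-kernel)."""
    d2 = ((a - b) ** 2).sum(dim=-1)
    return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)

def linear_time_mk_mmd(xs, xt):
    """Linear-time, unbiased MK-MMD estimate over paired quadruples of
    source (xs) and target (xt) feature batches."""
    n = (min(len(xs), len(xt)) // 2) * 2
    x1, x2 = xs[0:n:2], xs[1:n:2]
    y1, y2 = xt[0:n:2], xt[1:n:2]
    h = (gaussian_kernel(x1, x2) + gaussian_kernel(y1, y2)
         - gaussian_kernel(x1, y2) - gaussian_kernel(x2, y1))
    return h.mean()
```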
Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. An adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state-of-the-art results on standard datasets.
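A sketch of the JMMD criterion: the joint kernel over several layers is the element-wise product of per-layer Gaussian Gram matrices, and the resulting biased MMD estimate is the alignment term. Bandwidth handling and the adversarial maximization are omitted in this simplified illustration.

```python
import torch

def joint_mmd(src_layers, tgt_layers, sigma=1.0):
    """Biased JMMD^2 sketch over lists of per-layer feature batches."""
    def gram(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))

    def joint_gram(A, B):
        # Joint kernel = element-wise product of per-layer Gram matrices.
        g = gram(A[0], B[0])
        for a, b in zip(A[1:], B[1:]):
            g = g * gram(a, b)
        return g

    return (joint_gram(src_layers, src_layers).mean()
            + joint_gram(tgt_layers, tgt_layers).mean()
            - 2 * joint_gram(src_layers, tgt_layers).mean())
```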
Deep learning has produced state-of-the-art results for a variety of tasks. While such approaches for supervised learning have performed well, they assume that training and testing data are drawn from the same distribution, which may not always be the case. As a complement to this challenge, single-source unsupervised domain adaptation can handle situations where a network is trained on labeled data from a source domain and unlabeled data from a related but different target domain with the goal of performing well at test-time on the target domain. Many single-source and typically homogeneous unsupervised deep domain adaptation approaches have thus been developed, combining the powerful, hierarchical representations from deep learning with domain adaptation to reduce reliance on potentially-costly target data labels. This survey will compare these approaches by examining alternative methods, the unique and common elements, results, and theoretical insights. We follow this with a look at application areas and open research directions.
In computer vision it is common to face a domain shift: images with the same classes but acquired under different conditions. In domain adaptation (DA), one wants to classify unlabeled target images using labeled source images. Unfortunately, deep neural networks trained on a source training set perform poorly on target images that do not belong to the training domain. One strategy to improve these performances is to align the source and target image distributions in an embedded space using optimal transport (OT). However, OT can cause negative transfer, i.e., align samples with different labels, which leads to overfitting, especially in the presence of label shift between domains. In this work, we mitigate this negative transfer by interpreting it as a noisy label assignment to target images. We then mitigate its effect with appropriate regularization. We propose to couple mixup regularization \citep{zhang2018mixup} with a loss that is robust to noisy labels in order to improve domain adaptation performance. We show in an extensive ablation study that the combination of these two techniques is critical to achieve improved performance. Finally, we evaluate the method, called \textsc{mixunbot}, on several benchmarks and real-world DA problems.
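The two ingredients, mixup regularization and a noise-robust loss, are simple to sketch. The robust loss shown here is the generalized cross-entropy, used as a stand-in since the exact loss in the paper may differ; `y_soft` would hold the (possibly noisy) OT-based target assignments and is our own naming.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y_soft, alpha=0.2):
    """Mixup regularization: convex combinations of inputs and (soft) labels."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_soft + (1 - lam) * y_soft[perm]
    return x_mix, y_mix

def noise_robust_loss(logits, y_soft, q=0.7):
    """Generalized cross-entropy, one common noise-robust surrogate loss."""
    p = (F.softmax(logits, dim=1) * y_soft).sum(dim=1)
    return ((1 - p.clamp(min=1e-6) ** q) / q).mean()
```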
Recent developments in deep learning models that capture the complex temporal patterns of crop phenology from satellite image time series (SITS) have greatly advanced crop classification. However, when applied to target regions spatially different from the training region, these models perform poorly without any target labels, because of the temporal shift of crop phenology between regions. To address this unsupervised cross-region adaptation setting, existing methods learn domain-invariant features without any target supervision, rather than addressing the temporal shift itself. As a consequence, these techniques provide only limited benefits for SITS. In this paper, we propose TimeMatch, a new unsupervised domain adaptation method for SITS that directly accounts for the temporal shift. TimeMatch consists of two components: 1) temporal shift estimation, which estimates the temporal shift of the unlabeled target region with a source-trained model, and 2) TimeMatch learning, which combines temporal shift estimation with semi-supervised learning to adapt a classifier to an unlabeled target region. We also introduce an open-access dataset for cross-region adaptation with SITS from four different regions in Europe. On this dataset, we demonstrate that TimeMatch outperforms all competing methods by 11% in F1-score across five different adaptation scenarios, setting a new state of the art for cross-region adaptation.
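A heavily simplified stand-in for the temporal shift estimation step: evaluate a source-trained classifier on circularly shifted target series and keep the shift that maximizes the mean prediction confidence. The published procedure differs in its details; this sketch only conveys the idea of estimating the shift with the source model.

```python
import torch

@torch.no_grad()
def estimate_temporal_shift(model, target_batches, candidate_shifts=range(-30, 31)):
    """Return the shift (in time steps) that maximizes mean confidence of a
    source-trained model on shifted target series (circular shift as a simplification)."""
    best_shift, best_conf = 0, -1.0
    for s in candidate_shifts:
        confs = []
        for x in target_batches:            # x: (batch, time, channels)
            x_shifted = torch.roll(x, shifts=s, dims=1)
            probs = torch.softmax(model(x_shifted), dim=1)
            confs.append(probs.max(dim=1).values.mean())
        conf = torch.stack(confs).mean()
        if conf > best_conf:
            best_shift, best_conf = s, conf
    return best_shift
```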
Prognostics and health management (PHM) is an emerging field that has attracted widespread attention from the manufacturing industry because of the benefits and efficiencies it brings. Remaining useful life (RUL) prediction is at the heart of any PHM system. Recent data-driven research demands large amounts of labeled training data before a well-performing model can be trained under a supervised learning paradigm. This is where transfer learning (TL) and domain adaptation (DA) step in, making it possible to generalize a supervised model to other domains with different data distributions and no labeled data. In this paper, we propose an encoder-decoder based model (Transformer) with an induced bottleneck, latent alignment using Maximum Mean Discrepancy (MMD), and manifold learning to address the problem of unsupervised homogeneous domain adaptation for RUL prediction. \textit{LAMA-Net} is validated using the NASA C-MAPSS turbofan engine dataset and compared with other state-of-the-art DA techniques. The results show that the proposed method provides a promising approach to domain adaptation in RUL prediction. Code will be made available once the paper is out of review.
Purpose. Handwriting is one of the most common modalities in everyday life, with challenging applications such as handwriting recognition (HWR), writer identification, and signature verification. In contrast to offline HWR, which uses only spatial information (i.e., images), online HWR (OnHWR) uses richer spatio-temporal information (i.e., trajectory data or inertial data). While many offline HWR datasets exist, only little data is available for the development of OnHWR methods on paper, as it requires a hardware-integrated pen. Methods. This paper presents data and benchmark models for real-time sequence-to-sequence (seq2seq) learning and single-character-based recognition. Our data are recorded with a sensor-enhanced ballpoint pen, yielding sensor data streams from a triaxial accelerometer, a gyroscope, a magnetometer, and a force sensor at 100\,\textit{Hz}. We propose a variety of datasets, including equations and words for writer-dependent and writer-independent tasks. Our datasets allow a comparison between classical OnHWR on tablets and OnHWR with sensor-enhanced pens. We provide evaluation benchmarks for seq2seq and single-character-based HWR using recurrent and temporal convolutional networks and Transformers combined with the connectionist temporal classification (CTC) loss and the cross-entropy (CE) loss. Results. Our convolutional networks combined with BiLSTMs outperform Transformer-based architectures, are on par with InceptionTime for sequence-based classification tasks, and yield better results compared to 28 state-of-the-art techniques. Time-series augmentation methods improve the sequence-based tasks, and we show that CE variants can improve the single-character classification task.
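A compact sketch of the kind of CNN+BiLSTM baseline trained with the CTC loss described above; the channel count, layer sizes, and alphabet size below are illustrative, not the benchmark configuration.

```python
import torch
import torch.nn as nn

class ConvBiLSTM(nn.Module):
    """Small 1D-CNN + BiLSTM sequence model for sensor streams (illustrative sizes)."""
    def __init__(self, in_channels=10, n_classes=59, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes + 1)   # +1 for the CTC blank

    def forward(self, x):                       # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.head(h)                     # (batch, time, n_classes + 1)

# Training sketch: the CTC loss expects log-probabilities of shape (time, batch, classes).
# logits = model(x)
# log_probs = logits.log_softmax(dim=-1).transpose(0, 1)
# loss = nn.CTCLoss(blank=n_classes)(log_probs, targets, input_lengths, target_lengths)
```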
Although unsupervised domain adaptation (UDA) algorithms, i.e., algorithms where only labeled data from a source domain is available, have been widely studied in recent years, most algorithms and theoretical results focus on single-source unsupervised domain adaptation (SUDA). However, in practical scenarios, labeled data can typically be collected from multiple diverse sources, and these may differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep-learning-based multi-source unsupervised domain adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering domain-specific decision boundaries between classes. To solve these problems, we propose a new framework for MUDA with two alignment stages, which not only aligns the distributions of each pair of source and target domains, but also aligns the decision boundaries by leveraging the outputs of the domain-specific classifiers. Extensive experiments show that our method can achieve remarkable results on popular benchmark datasets for image classification.
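The second alignment stage can be sketched as a discrepancy penalty between the domain-specific classifiers' predictions on target samples; the pairwise L1 form below is a common choice and serves only as an illustration, not the exact published loss.

```python
import torch

def classifier_discrepancy(probs_list):
    """Penalize disagreement between domain-specific classifiers on target samples.

    probs_list: list of softmax outputs, one per source-specific classifier,
    each of shape (batch, n_classes)."""
    loss, count = 0.0, 0
    for i in range(len(probs_list)):
        for j in range(i + 1, len(probs_list)):
            loss = loss + (probs_list[i] - probs_list[j]).abs().mean()
            count += 1
    return loss / max(count, 1)
```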
This work provides a unified framework for addressing the problem of visual supervised domain adaptation and generalization with deep models. The main idea is to exploit the Siamese architecture to learn an embedding subspace that is discriminative, and where mapped visual domains are semantically aligned and yet maximally separated. The supervised setting becomes attractive especially when only few target data samples need to be labeled. In this scenario, alignment and separation of semantic probability distributions is difficult because of the lack of data. We found that reverting to point-wise surrogates of distribution distances and similarities provides an effective solution. In addition, the approach has a high "speed" of adaptation, requiring an extremely low number of labeled target training samples; even one per category can be effective. The approach is extended to domain generalization. For both applications the experiments show very promising results.
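A point-wise surrogate of distribution alignment and separation can be written as a contrastive-style loss over cross-domain pairs, as sketched below; the margin value and squared form are our own choices for the illustration.

```python
import torch
import torch.nn.functional as F

def semantic_alignment_loss(hs, ys, ht, yt, margin=1.0):
    """Pull together cross-domain embedding pairs of the same class and push
    apart pairs of different classes up to a margin.

    hs, ht: (n_s, d) and (n_t, d) embeddings; ys, yt: integer class labels."""
    d = torch.cdist(hs, ht)                                  # pairwise distances
    same = (ys.unsqueeze(1) == yt.unsqueeze(0)).float()      # 1 if same class
    align = (same * d ** 2).sum() / same.sum().clamp(min=1)
    sep = ((1 - same) * F.relu(margin - d) ** 2).sum() / (1 - same).sum().clamp(min=1)
    return align + sep
```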
Process monitoring and control are essential in modern industries for ensuring high quality standards and optimizing production performance. These technologies have a long history of application in production and have had numerous positive impacts, but also hold great potential when integrated with Industry 4.0 and advanced machine learning, particularly deep learning, solutions. However, in order to implement these solutions in production and enable widespread adoption, the scalability and transferability of deep learning methods have become a focus of research. While transfer learning has proven successful in many cases, particularly with computer vision and homogeneous data inputs, it can be challenging to apply to heterogeneous data. Motivated by the need to transfer and standardize established processes to different, non-identical environments and by the challenge of adapting to heterogeneous data representations, this work introduces the Domain Adaptation Neural Network with Cyclic Supervision (DBACS) approach. DBACS addresses the issue of model generalization through domain adaptation, specifically for heterogeneous data, and enables the transfer and scalability of deep learning-based statistical control methods in a general manner. Additionally, the cyclic interactions between the different parts of the model enable DBACS to not only adapt to the domains, but also match them. To the best of our knowledge, DBACS is the first deep learning approach to combine adaptation and matching for heterogeneous data settings. For comparison, this work also includes subspace alignment and a multi-view learning approach that deals with heterogeneous representations by mapping data into correlated latent feature spaces. Finally, DBACS, with its ability to adapt and match, is applied to a virtual metrology use case for an etching process run on different machine types in semiconductor manufacturing.
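The subspace alignment baseline mentioned above admits a short sketch: project each domain onto its top-d PCA basis and align the source basis to the target one via Ps^T Pt. The dimensionality below is illustrative.

```python
import numpy as np

def subspace_alignment(Xs, Xt, d=20):
    """Subspace alignment sketch: return source features projected onto the
    source PCA basis aligned to the target basis, and target features projected
    onto the target PCA basis."""
    def pca_basis(X, d):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T                          # (n_features, d)

    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    source_aligned = (Xs - Xs.mean(axis=0)) @ Ps @ (Ps.T @ Pt)
    target_proj = (Xt - Xt.mean(axis=0)) @ Pt
    return source_aligned, target_proj
```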
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications. However, these approaches may have limited performance on time-series data for the following reasons. First, they mainly rely on large-scale datasets (i.e., ImageNet) for source pretraining, which is not applicable to time-series data. Second, they ignore the temporal dimension in the feature space of the source and target domains during the domain alignment step. Finally, most prior UDA methods can only align global features without considering the fine-grained class distribution of the target domain. To address these limitations, we propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework. In particular, we first design a self-supervised learning module that uses forecasting as an auxiliary task to improve the transferability of the source features. Second, we propose a novel autoregressive domain adaptation technique that incorporates the temporal dependence of source and target features during domain alignment. Finally, we develop an ensemble teacher model to align class-wise distributions in the target domain via a confident pseudo-labeling approach. Extensive experiments have been conducted on three real-world time-series applications with 30 cross-domain scenarios. The results demonstrate that our proposed SLARDA method significantly outperforms state-of-the-art approaches for time-series domain adaptation.
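The confident pseudo-labeling step can be sketched as simple confidence thresholding of the (ensembled) teacher predictions; the threshold below is illustrative.

```python
import torch

@torch.no_grad()
def confident_pseudo_labels(teacher_probs, threshold=0.9):
    """Keep only target samples whose teacher prediction is confident.

    teacher_probs: (batch, n_classes) softmax outputs of the (ensembled) teacher.
    Returns the pseudo-labels of the confident samples and the boolean mask."""
    conf, labels = teacher_probs.max(dim=1)
    mask = conf >= threshold
    return labels[mask], mask
```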
Recently, deep neural networks have gained increasing popularity in time-series forecasting. A main reason for their success is their ability to effectively capture the complex temporal dynamics across multiple related time series. The advantages of these deep forecasters only start to emerge in the presence of a sufficient amount of data. This poses a challenge for typical forecasting problems in practice, where there is a limited number of time series or observations per time series, or both. To cope with this data scarcity issue, we propose a novel domain adaptation framework, the Domain Adaptation Forecaster (DAF). DAF leverages the statistical strength of a related domain with abundant data samples (source) to improve the performance on a domain of interest with limited data (target). In particular, we use an attention-based shared module with a domain discriminator across domains, together with private modules per domain. We simultaneously induce domain-invariant latent features (queries and keys) and retrain domain-specific features (values) to enable joint training of forecasters on the source and target domains. A main insight is that our design of aligning the keys allows the target domain to leverage source time series even with different characteristics. Extensive experiments on various domains demonstrate that our proposed method outperforms state-of-the-art baselines on synthetic and real-world datasets, and ablation studies verify the effectiveness of our design choices.
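A sketch of an attention block with shared, domain-invariant query/key projections and private, domain-specific value projections, following the description above; the single-head form and dimensions are our simplifications, not the published architecture.

```python
import torch
import torch.nn as nn

class SharedAttention(nn.Module):
    """Attention with shared Q/K projections across domains and per-domain V projections."""
    def __init__(self, d_model=32):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)            # shared across domains
        self.k_proj = nn.Linear(d_model, d_model)             # shared across domains
        self.v_proj = nn.ModuleDict({                          # private per domain
            "source": nn.Linear(d_model, d_model),
            "target": nn.Linear(d_model, d_model),
        })
        self.scale = d_model ** 0.5

    def forward(self, x, domain):                             # x: (batch, time, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj[domain](x)
        attn = torch.softmax(q @ k.transpose(1, 2) / self.scale, dim=-1)
        return attn @ v
```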
Deep learning has become the method of choice for addressing real-world problems in different fields, partly because of its ability to learn from data and achieve impressive performance on a wide range of applications. However, its success usually relies on two assumptions: (i) a large amount of labeled data is required for accurate model fitting, and (ii) the training and test data are independent and identically distributed. Its performance on unseen target domains is therefore not guaranteed, especially when out-of-distribution data is encountered at the adaptation stage. The performance drop on data in a target domain is a critical issue when deploying deep neural networks that have been successfully trained on data from a source domain. Unsupervised domain adaptation (UDA) has been proposed to counter this by leveraging labeled source-domain data and unlabeled target-domain data to carry out various tasks in the target domain. UDA has achieved promising results in natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, and so on. In this review, as a rapidly evolving topic, we provide a systematic comparison of its methods and applications. In addition, the connections of UDA with closely related tasks, such as domain generalization and out-of-distribution detection, are also discussed. Furthermore, deficiencies of current methods and possible promising directions are highlighted.
A primary approach to address unsupervised domain adaptation is to map the data points of the source and target domains into an embedding space, modeled as the output space of a shared deep encoder. The encoder is trained to make the embedding space domain-agnostic, so that a source-trained classifier generalizes to the target domain. A secondary mechanism to further improve UDA performance is to make the source-domain distribution more compact, which improves model generalizability. We demonstrate that increasing the inter-class margins in the embedding space can help to develop UDA algorithms with improved performance. We estimate the internally learned multi-modal distribution of the source domain, learned as a result of pretraining, and use it to increase inter-class separation in the source domain to reduce the effect of domain shift. We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets and compares favorably against existing methods.
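One simple surrogate for increasing inter-class margins in the embedding space is to push per-class embedding means apart up to a margin, as sketched below; the actual method estimates and uses the internally learned multi-modal source distribution, which this sketch does not model.

```python
import torch

def inter_class_margin_loss(embeddings, labels, margin=5.0):
    """Encourage larger inter-class margins by penalizing class-mean pairs that
    are closer than `margin` in the embedding space (simplified surrogate)."""
    classes = labels.unique()
    means = torch.stack([embeddings[labels == c].mean(dim=0) for c in classes])
    d = torch.cdist(means, means)                               # (C, C) distances
    off_diag = ~torch.eye(len(classes), dtype=torch.bool, device=d.device)
    return torch.relu(margin - d[off_diag]).mean()
```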
Recent progress in intelligent fault diagnosis (IFD) has greatly depended on deep representation learning and plenty of labeled data. However, machines often operate under various working conditions, or the target task has a distribution different from that of the collected data used for training (the domain shift problem). In addition, newly collected test data in the target domain are usually unlabeled, leading to unsupervised deep transfer learning based (UDTL-based) IFD problems. Although great progress has been achieved, a standard and open source code framework as well as a comparative study of UDTL-based IFD have not yet been established. In this paper, we construct a new taxonomy and perform a comprehensive review of UDTL-based IFD according to different tasks. A comparative analysis of some typical methods and datasets reveals some open and essential issues in UDTL-based IFD that are rarely studied, including the transferability of features, the influence of backbones, negative transfer, physical priors, etc. To emphasize the importance and reproducibility of UDTL-based IFD, the whole test framework will be released to the research community to facilitate future research. In summary, the released framework and comparative study can serve as an extended interface and as baseline results for new studies on UDTL-based IFD. The code framework is available at \url{https://github.com/zhaozhibin/udtl}.