Human taste sensation can be qualitatively characterized with surface electromyography (sEMG). However, pattern recognition models trained on one subject (the source domain) do not generalize well to other subjects (the target domains). To improve the generalizability and transferability of taste-sensation models developed with sEMG data, two methods were innovated in this study: domain regularized component analysis (DRCA) and conformal prediction with shrunken centroids (CPSC). The effectiveness of these two methods was investigated independently, using unlabeled data from the target domain, and the same cross-user adaptation pipeline was applied to six subjects. The results show that DRCA improved classification accuracy on all six subjects compared with baseline models trained only on source-domain data, whereas CPSC did not guarantee an accuracy improvement. Furthermore, the combination of DRCA and CPSC yielded statistically significant improvements (p < 0.05) on all six subjects. The proposed strategy of combining DRCA and CPSC demonstrates its effectiveness in addressing cross-user data distribution drift in sEMG-based taste recognition, and it shows potential for further cross-user adaptation applications.
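The abstract does not spell out the CPSC algorithm, so the following is only a minimal sketch of how a shrunken-centroid nonconformity score can drive conformal prediction sets; the shrinkage scheme, score function, significance level, and all names are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def shrunken_centroids(X, y, shrinkage=0.5):
    """Class centroids pulled toward the global centroid (illustrative shrinkage)."""
    global_centroid = X.mean(axis=0)
    centroids = {}
    for c in np.unique(y):
        class_mean = X[y == c].mean(axis=0)
        centroids[c] = global_centroid + (1.0 - shrinkage) * (class_mean - global_centroid)
    return centroids

def nonconformity(x, centroid):
    """Distance to a class centroid, used as the nonconformity score."""
    return np.linalg.norm(x - centroid)

def conformal_prediction_set(x_test, X_cal, y_cal, centroids, alpha=0.05):
    """Return every label whose conformal p-value exceeds the significance level."""
    prediction_set = []
    for c, mu in centroids.items():
        cal_scores = np.array([nonconformity(x, mu) for x in X_cal[y_cal == c]])
        p_value = (np.sum(cal_scores >= nonconformity(x_test, mu)) + 1) / (len(cal_scores) + 1)
        if p_value > alpha:
            prediction_set.append(c)
    return prediction_set

# Toy usage with random features standing in for sEMG data
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(60, 8)), rng.integers(0, 3, 60)
X_cal, y_cal = rng.normal(size=(30, 8)), rng.integers(0, 3, 30)
cents = shrunken_centroids(X_train, y_train)
print(conformal_prediction_set(rng.normal(size=8), X_cal, y_cal, cents))
```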
A systematic review of machine-learning strategies for improving the generalizability (cross-subject and cross-session) of electroencephalography (EEG)-based emotion classification was carried out. In this context, the non-stationarity of EEG signals is a critical issue and can lead to the Dataset Shift problem. Several architectures and methods have been proposed to address this issue, mainly based on transfer learning methods. 418 papers were retrieved from the Scopus, IEEE Xplore and PubMed databases through a search query focusing on modern machine learning techniques for generalization in EEG-based emotion assessment. Among these papers, 75 were found eligible based on their relevance to the problem. Studies lacking a specific cross-subject and cross-session validation strategy and making use of other biosignals as support were excluded. On the basis of the selected papers' analysis, a taxonomy of the studies employing Machine Learning (ML) methods was proposed, together with a brief discussion of the different ML approaches involved. The studies with the best results in terms of average classification accuracy were identified, supporting the observation that transfer learning methods seem to perform better than other approaches. A discussion is proposed on the impact of (i) the emotion theoretical models and (ii) the psychological screening of the experimental sample on classifier performance.
Electroencephalography (EEG) decoding aims to identify the perceptual, semantic, and cognitive content of neural processing based on non-invasively measured brain activity. Traditional EEG decoding methods have achieved moderate success when applied to data acquired in static, well-controlled laboratory settings. However, an open-world environment is a more realistic setting, in which situations affecting EEG recordings can emerge unexpectedly, significantly weakening the robustness of existing methods. In recent years, deep learning (DL) has emerged as a potential solution owing to its superior capacity for feature extraction. It overcomes the limitations of hand-crafted features or features extracted with shallow architectures, but it typically requires large amounts of costly, expertly labeled data, which are not always available. Combining DL with domain-specific knowledge may allow the development of robust methods for decoding brain activity even with small-sample data. Although various DL methods have been proposed to address some of the challenges in EEG decoding, a systematic tutorial overview is currently lacking, especially for open-world applications. This paper therefore provides a comprehensive survey of DL methods for open-world EEG decoding and identifies promising research directions to inspire future studies of EEG decoding in real-world applications.
Many applications utilize sensors in mobile devices and apply machine learning to provide novel services. However, various factors, such as different users, devices, environments, and hyperparameters, affect the performance of such applications, making domain shift (i.e., the distribution shift of a target user's data from the training source dataset) an important problem. Although recent domain adaptation techniques attempt to address this problem, the complex interplay among the various factors often limits their effectiveness. We argue that accurately estimating performance in untrained domains could significantly reduce performance uncertainty. We present DAPPER (Domain AdaPtation Performance EstimatoR), which estimates adaptation performance in a target domain using only unlabeled target data. Our intuition is that the outputs of a model on target data provide clues to the model's actual performance in the target domain. DAPPER does not require expensive labeling costs nor any additional training after deployment. Our evaluation with four real-world sensing datasets, compared against four baselines, shows that DAPPER outperforms the baselines by 17% on average in estimation accuracy. Moreover, our on-device experiments show that DAPPER incurs up to 216x less computation overhead than the baselines.
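DAPPER's concrete estimator is not detailed here; as a hedged illustration of the stated intuition (model outputs on unlabeled target data hint at target performance), the sketch below uses the average maximum softmax probability as a crude performance proxy. The proxy choice and all names are assumptions, not the paper's method.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def estimate_target_performance(logits_on_unlabeled_target):
    """Average maximum softmax probability as a label-free performance proxy."""
    probs = softmax(logits_on_unlabeled_target)
    return probs.max(axis=1).mean()

# Toy usage: logits produced by a deployed model on unlabeled target-domain samples
rng = np.random.default_rng(1)
fake_logits = rng.normal(size=(200, 5))
print(f"estimated target performance ~ {estimate_target_performance(fake_logits):.2f}")
```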
Wearable sensors for measuring head kinematics can be noisy due to imperfect interfaces with the body. Mouthguards are used to measure head kinematics during impacts in traumatic brain injury (TBI) studies, but deviations from reference kinematics can still occur due to potential looseness. In this study, deep learning is used to compensate for the imperfect interface and improve measurement accuracy. A set of one-dimensional convolutional neural network (1D-CNN) models was developed to denoise mouthguard kinematics measurements along three spatial axes of linear acceleration and angular velocity. The denoised kinematics had significantly reduced errors compared to reference kinematics, and reduced errors in brain injury criteria and tissue strain and strain rate calculated via finite element modeling. The 1D-CNN models were also tested on an on-field dataset of college football impacts and a post-mortem human subject dataset, with similar denoising effects observed. The models can be used to improve detection of head impacts and TBI risk evaluation, and potentially extended to other sensors measuring kinematics.
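The abstract names a set of 1D-CNN denoising models but not their architecture; the following PyTorch sketch shows one plausible per-axis denoiser under assumed layer sizes, kernel widths, and window lengths, trained to regress reference kinematics.

```python
import torch
import torch.nn as nn

class KinematicsDenoiser(nn.Module):
    """Illustrative 1D-CNN mapping a noisy kinematics channel to a denoised one."""
    def __init__(self, channels=1, hidden=32, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, channels, kernel_size, padding=pad),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

# Toy usage: one model per kinematic axis, trained against reference sensor data
model = KinematicsDenoiser()
noisy = torch.randn(8, 1, 200)                        # assumed 200-sample impact windows
loss = nn.functional.mse_loss(model(noisy), torch.randn(8, 1, 200))  # stand-in reference
loss.backward()
```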
Recently, face biometrics have attracted great interest as a convenient alternative to traditional authentication systems. Consequently, detecting malicious attempts has gained major significance, leading to face anti-spoofing (FAS), i.e., face presentation attack detection. In contrast to hand-crafted features, deep feature learning and related techniques have promised dramatic increases in the accuracy of FAS systems, addressing key challenges for realizing such systems in real-world applications. Hence, this new research area, concerned with developing broader as well as more accurate models, is increasingly attracting the attention of both the research community and industry. In this paper, we present a comprehensive survey of the literature related to deep-feature-based FAS methods since 2017. To shed light on this topic, a semantic taxonomy based on the various features and learning methodologies is proposed. Moreover, we cover the main public datasets for FAS in chronological order, along with their evolutionary progress and evaluation criteria (both intra-dataset and inter-dataset). Finally, we discuss open research challenges and future directions.
Human behavior is increasingly captured on mobile devices, raising interest in automated human activity recognition. However, existing datasets typically consist of scripted movements. Our long-term goal is to perform mobile activity recognition in natural settings. We collect a dataset to support activity classes relevant to downstream tasks such as health monitoring and intervention. Because there is large variability in human behavior, we collected data from many participants across two different age groups. Because human behavior changes over time, we also collected data from participants over the course of a month to capture temporal drift. We hypothesize that mobile activity recognition can benefit from unsupervised domain adaptation algorithms. To address this need and test this hypothesis, we analyze the performance of domain adaptation across people and across time. We then enhance unsupervised domain adaptation with contrastive learning, and with weak supervision when a proportion of labels is available. The dataset is available at https://github.com/wsu-casas/smartwatch-data
Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is utilized in a variety of applications. Recently, deep learning-based methods have achieved significant improvement in the HAR field with the development of human-computer interaction applications. However, they are limited to operating in a local neighborhood when based on standard convolutional neural networks, and correlations between sensors at different body positions are ignored. In addition, they still face significant challenges from performance degradation due to large gaps between the distributions of training and test data, and behavioral differences between subjects. In this work, we propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED), which accounts for individual sensor orientations and spatial and temporal features. The proposed method is capable of learning cross-domain embedding feature representations from multiple subjects' datasets using adversarial learning and the maximum mean discrepancy (MMD) regularization to align the data distributions over multiple domains. In the proposed method, we adopt teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition. Experimental results show that TASKED not only outperforms state-of-the-art methods on four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
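TASKED pairs adversarial learning with an MMD regularizer; the sketch below shows a standard Gaussian-kernel MMD term of the kind such a regularizer typically adds to the training loss. The single fixed bandwidth and the feature sizes are placeholder assumptions, and the paper's exact kernel setup may differ.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise RBF kernel values between rows of x and rows of y."""
    squared_dists = torch.cdist(x, y) ** 2
    return torch.exp(-squared_dists / (2.0 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between two feature sets."""
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy usage: embeddings of two subjects' sensor windows from a shared encoder
source = torch.randn(64, 128)
target = torch.randn(64, 128)
print(mmd_loss(source, target))
```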
EEG-based tinnitus classification is a valuable tool for tinnitus diagnosis, research, and treatment. Most current works are limited to a single dataset where the data patterns are similar. However, EEG signals are highly non-stationary, resulting in poor model generalization to new users, sessions, or datasets. Thus, designing a model that can generalize to new datasets is beneficial and indispensable. To mitigate the distribution discrepancy across datasets, we propose Disentangled and Side-aware Unsupervised Domain Adaptation (DSUDA) for cross-dataset tinnitus diagnosis. A disentangled auto-encoder is developed to decouple class-irrelevant information from the EEG signals to improve classification ability. The side-aware unsupervised domain adaptation module adapts the class-irrelevant information as domain variance to a new dataset and excludes the variance to obtain class-distilled features for classifying the new dataset. It also aligns signals of the left and right ears to overcome inherent EEG pattern differences. We compare DSUDA with state-of-the-art methods, and our model achieves significant improvements over competitors on comprehensive evaluation criteria. The results demonstrate that our model can successfully generalize to a new dataset and effectively diagnose tinnitus.
Domain adaptation (DA) is an important technique for modern machine-learning-based medical data analysis, aiming to reduce distribution differences between different medical datasets. A suitable domain adaptation method can significantly enhance statistical power by pooling data acquired from multiple sites/centers. To this end, we have developed the Domain Adaptation Toolbox for Medical data analysis (DomainATM), an open-source software package designed to facilitate fast prototyping and easy customization of domain adaptation methods for medical data analysis. DomainATM is implemented in MATLAB with a user-friendly graphical interface and consists of a collection of popular data adaptation algorithms that have been widely applied in medical image analysis and computer vision. With DomainATM, researchers can quickly perform feature-level and image-level adaptation, visualization, and performance evaluation of different adaptation methods for medical data analysis. More importantly, DomainATM enables users to develop and test their own adaptation methods through scripting, greatly enhancing its utility and extensibility. Three example experiments illustrate the features and usage of DomainATM and demonstrate its effectiveness, simplicity, and flexibility. The software, source code, and manual are available online.
Recent progress in intelligent fault diagnosis (IFD) has relied heavily on deep representation learning and large amounts of labeled data. However, machines often operate under various working conditions, or the target task has a distribution different from that of the data collected for training (the domain shift problem). In addition, newly collected test data in the target domain are usually unlabeled, leading to unsupervised deep transfer learning based (UDTL-based) IFD problems. Although great progress has been made, a standard and open-source code framework, as well as a comparative study of UDTL-based IFD, has not yet been established. In this paper, we construct a new taxonomy and perform a comprehensive review of UDTL-based IFD according to different tasks. A comparative analysis of some typical methods and datasets reveals several open and essential issues in UDTL-based IFD that have rarely been studied, including the transferability of features, the influence of backbones, negative transfer, physical priors, etc. To emphasize the importance and reproducibility of UDTL-based IFD, the whole test framework will be released to the research community to facilitate future research. In summary, the released framework and the comparative study can serve as an extended interface and provide basic results for new studies on UDTL-based IFD. The code framework is available at https://github.com/zhaozhibin/udtl
In recent years, WiFi-based smart human sensing technology enabled by channel state information (CSI) has flourished. However, CSI-based sensing systems suffer from performance degradation when deployed in different environments. Existing works address this problem through domain adaptation, using a large amount of unlabeled high-quality data from the new environment, which is often unavailable in practice. In this paper, we propose a novel augmented environment-invariant robust WiFi gesture recognition system, named AirFi, that deals with the environment dependency problem from a new perspective. AirFi is a novel domain generalization framework that learns the critical part of CSI regardless of the environment and generalizes the model to unseen scenarios, without requiring the collection of any data to adapt to the new environment. AirFi extracts the common features shared across several training environments and minimizes the distribution differences among them. The features are further augmented to be more robust to environmental changes. Moreover, the system can be further improved by few-shot learning techniques. Compared with state-of-the-art methods, AirFi is able to work in different environments without acquiring any CSI data from the new environment. Experimental results demonstrate that our system remains robust in new environments and outperforms the compared systems.
Device-free human gesture recognition has been acclaimed due to the license-free, privacy-preserving, and wide-coverage nature of RF signals. However, neural network models trained for recognition on data collected from a specific domain suffer significant performance degradation when applied to a new domain. To address this challenge, we propose an unsupervised domain adaptation framework for device-free gesture recognition that makes effective use of unlabeled target-domain data. Specifically, we use pseudo labeling and consistency regularization, with elaborate designs on the target-domain data, to generate pseudo labels and align the instance features of the target domain. We then design two data augmentation methods, based on randomly erasing the input data, to enhance the robustness of the model. In addition, we apply a confidence-control constraint to tackle the overconfidence problem. We conduct extensive experiments on a public WiFi dataset and a public millimeter-wave radar dataset. The experimental results demonstrate the superior effectiveness of the proposed framework.
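The exact loss terms are not given in the abstract; below is a hedged sketch of how pseudo labeling with a confidence threshold, a random-erasing augmentation, and a consistency term are commonly combined for unlabeled target data. The threshold, shapes, and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def random_erase(x, erase_frac=0.1):
    """Zero out a random contiguous span of each input sequence (simple augmentation)."""
    x = x.clone()
    length = x.shape[-1]
    span = max(1, int(erase_frac * length))
    start = torch.randint(0, length - span + 1, (1,)).item()
    x[..., start:start + span] = 0.0
    return x

def pseudo_label_consistency_loss(model, target_batch, threshold=0.95):
    """Cross-entropy on confident pseudo labels, evaluated on an erased (augmented) view."""
    with torch.no_grad():
        probs = F.softmax(model(target_batch), dim=1)
        confidence, pseudo = probs.max(dim=1)
        mask = confidence >= threshold        # confidence control: keep only confident samples
    logits_aug = model(random_erase(target_batch))
    if mask.sum() == 0:
        return logits_aug.sum() * 0.0         # no confident samples in this batch
    return F.cross_entropy(logits_aug[mask], pseudo[mask])

# Toy usage with a linear model standing in for the gesture recognizer
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 6))
target_batch = torch.randn(32, 1, 64)         # unlabeled target-domain RF frames
loss = pseudo_label_consistency_loss(model, target_batch)
loss.backward()
```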
Process monitoring and control are essential in modern industries for ensuring high quality standards and optimizing production performance. These technologies have a long history of application in production and have had numerous positive impacts, but also hold great potential when integrated with Industry 4.0 and advanced machine learning, particularly deep learning, solutions. However, in order to implement these solutions in production and enable widespread adoption, the scalability and transferability of deep learning methods have become a focus of research. While transfer learning has proven successful in many cases, particularly with computer vision and homogeneous data inputs, it can be challenging to apply to heterogeneous data. Motivated by the need to transfer and standardize established processes to different, non-identical environments and by the challenge of adapting to heterogeneous data representations, this work introduces the Domain Adaptation Neural Network with Cyclic Supervision (DBACS) approach. DBACS addresses the issue of model generalization through domain adaptation, specifically for heterogeneous data, and enables the transfer and scalability of deep learning-based statistical control methods in a general manner. Additionally, the cyclic interactions between the different parts of the model enable DBACS not only to adapt to the domains but also to match them. To the best of our knowledge, DBACS is the first deep learning approach to combine adaptation and matching for heterogeneous data settings. For comparison, this work also includes subspace alignment and a multi-view learning approach that deals with heterogeneous representations by mapping data into correlated latent feature spaces. Finally, DBACS, with its ability to adapt and match, is applied to a virtual metrology use case for an etching process run on different machine types in semiconductor manufacturing.
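Among the comparison methods, subspace alignment follows a well-known recipe that can be sketched compactly; the numpy version below (with an assumed subspace dimension k) illustrates the general idea rather than the specific configuration used in this work.

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions (as columns) of the centered data matrix X."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k].T                               # shape (d, k)

def subspace_alignment(X_src, X_tgt, k=10):
    """Project both domains onto their PCA subspaces and align the source basis to the target."""
    Ps, Pt = pca_basis(X_src, k), pca_basis(X_tgt, k)
    M = Ps.T @ Pt                                 # alignment matrix between the two subspaces
    src_aligned = (X_src - X_src.mean(axis=0)) @ Ps @ M
    tgt_projected = (X_tgt - X_tgt.mean(axis=0)) @ Pt
    return src_aligned, tgt_projected

# Toy usage: process features from two non-identical machine types
rng = np.random.default_rng(3)
Xs, Xt = rng.normal(size=(100, 40)), rng.normal(size=(80, 40))
Xs_aligned, Xt_projected = subspace_alignment(Xs, Xt, k=10)
```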
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
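The global photometric alignment module is described only at a high level; as a rough, assumed illustration of image-level statistic matching between domains (simple channel mean/std transfer, not the paper's exact operator), one could write:

```python
import numpy as np

def photometric_align(target_img, source_mean, source_std, eps=1e-6):
    """Shift and scale each channel of a target image to match source channel statistics."""
    aligned = np.empty_like(target_img, dtype=np.float32)
    for c in range(target_img.shape[-1]):
        channel = target_img[..., c].astype(np.float32)
        normalized = (channel - channel.mean()) / (channel.std() + eps)
        aligned[..., c] = normalized * source_std[c] + source_mean[c]
    return aligned

# Toy usage with random arrays standing in for GTA5 (source) and Cityscapes (target) images
rng = np.random.default_rng(2)
source = rng.uniform(0, 255, size=(64, 64, 3))
target = rng.uniform(0, 255, size=(64, 64, 3))
aligned = photometric_align(target, source.mean(axis=(0, 1)), source.std(axis=(0, 1)))
```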
Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys of shallow domain adaptation, but few timely reviews of the emerging deep learning based methods exist. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and briefly analyze and compare the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.
Working memory (WM), denoting the information held in mind, is a fundamental research topic in the field of human cognition. Electroencephalography (EEG), which can monitor the electrical activity of the brain, has been widely used to measure the level of WM. However, one of the critical challenges is that individual differences may cause ineffective results, especially when an established model is applied to an unseen subject. In this work, we propose a cross-subject deep adaptation model with spatial attention (CS-DASA) to generalize workload classification across subjects. First, we transform the EEG time series into multi-frame EEG images that incorporate spatial, spectral, and temporal information. The subject-shared module in CS-DASA receives multi-frame EEG image data from both the source and target subjects and learns common feature representations. Then, in the subject-specific module, maximum mean discrepancy is implemented to measure the domain distribution divergence in a reproducing kernel Hilbert space, which can add an effective penalty loss for domain adaptation. In addition, a subject-to-subject spatial attention mechanism is employed to focus on the discriminative spatial features of the target image data. Experiments on a public WM EEG dataset containing 13 subjects show that the proposed model achieves better performance than existing state-of-the-art methods.
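The subject-to-subject spatial attention mechanism is not specified in detail; the following PyTorch sketch shows one generic way to re-weight the spatial locations of a feature map, with the 1x1-convolution scoring head and all sizes being assumptions for illustration.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Illustrative spatial attention: weight feature-map locations by learned relevance."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)   # one relevance score per location

    def forward(self, feats):                  # feats: (batch, channels, H, W)
        b, c, h, w = feats.shape
        attn = torch.softmax(self.score(feats).view(b, 1, h * w), dim=-1).view(b, 1, h, w)
        return feats * attn                    # broadcast the weights over channels

# Toy usage on feature maps extracted from multi-frame EEG images
feats = torch.randn(4, 16, 8, 8)
weighted = SpatialAttention(16)(feats)
```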
Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large number of target domain data can be reduced for constructing target learners. Due to the wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in transfer learning. Due to the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize the existing transfer learning studies, as well as to summarize and interpret the mechanisms and the strategies of transfer learning in a comprehensive way, which may help readers have a better understanding of the current research status and ideas. Unlike previous surveys, this survey paper reviews more than forty representative transfer learning approaches, especially homogeneous transfer learning approaches, from the perspectives of data and model. The applications of transfer learning are also briefly introduced. In order to show the performance of different transfer learning models, over twenty representative transfer learning models are used for experiments. The models are evaluated on three different datasets, i.e., Amazon Reviews, Reuters-21578, and Office-31, and the experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
Although a huge amount of unlabeled data is generated and made available in many domains, the demand for automated understanding of visual data is higher than ever before. Most existing machine learning models typically rely on massive amounts of labeled training data to achieve high performance. Unfortunately, such a requirement cannot be met in real-world applications. The number of labels is limited, and manually annotating data is expensive and time-consuming. It is often necessary to transfer knowledge from an existing labeled domain to a new domain. However, model performance degrades because of the differences between domains (domain shift or dataset bias). To overcome the burden of annotation, domain adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another similar but different domain. Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain. The main objective of UDA is to reduce the domain discrepancy between the labeled source data and the unlabeled target data and to learn domain-invariant representations across the two domains during training. In this paper, we first define the UDA problem. Second, we overview the state-of-the-art methods for different categories of UDA, covering both traditional methods and deep learning based methods. Finally, we collect commonly used benchmark datasets and report the results of state-of-the-art UDA methods on visual recognition problems.
Concept drift describes unforeseeable changes in the underlying distribution of streaming data over time. Concept drift research involves the development of methodologies and techniques for drift detection, understanding and adaptation. Data analysis has revealed that machine learning in a concept drift environment will result in poor learning results if the drift is not addressed. To help researchers identify which research topics are significant and how to apply related techniques in data analysis tasks, it is necessary that a high quality, instructive review of current research developments and trends in the concept drift field is conducted. In addition, due to the rapid development of concept drift in recent years, the methodologies of learning under concept drift have become noticeably systematic, unveiling a framework which has not been mentioned in literature. This paper reviews over 130 high quality publications in concept drift related research areas, analyzes up-to-date developments in methodologies and techniques, and establishes a framework of learning under concept drift including three main components: concept drift detection, concept drift understanding, and concept drift adaptation. This paper lists and discusses 10 popular synthetic datasets and 14 publicly available benchmark datasets used for evaluating the performance of learning algorithms aiming at handling concept drift. Also, concept drift related research directions are covered and discussed. By providing state-of-the-art knowledge, this survey will directly support researchers in their understanding of research developments in the field of learning under concept drift.
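As a hedged, deliberately simple illustration of the detection component of such a framework (not any specific method reviewed in the survey), a window-based error-rate comparison might look like this:

```python
import numpy as np

def detect_drift(error_stream, window=50, threshold=0.15):
    """Flag drift when the recent error rate exceeds the reference rate by `threshold`."""
    drift_points = []
    reference = np.mean(error_stream[:window])
    for t in range(window, len(error_stream) - window):
        recent = np.mean(error_stream[t:t + window])
        if recent - reference > threshold:
            drift_points.append(t)
            reference = recent                 # re-baseline after each detected drift
    return drift_points

# Toy stream of 0/1 prediction errors whose rate jumps halfway through
rng = np.random.default_rng(4)
errors = np.concatenate([rng.random(500) < 0.1, rng.random(500) < 0.4]).astype(int)
print(detect_drift(errors))
```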