Human Activity Recognition (HAR) using on-body devices identifies specific human actions in unconstrained environments. HAR is challenging due to the inter- and intra-variance of human movements; moreover, annotated datasets from on-body devices are scarce. This problem is mainly due to the difficulty of data creation, i.e., recording, expensive annotation, and a lack of standard definitions of human activities. Previous works have demonstrated that transfer learning is a good strategy for addressing scenarios with scarce data. However, the scarcity of annotated on-body device datasets remains. This paper proposes using datasets intended for human pose estimation as a source for transfer learning; specifically, it deploys sequences of annotated pixel coordinates of human joints from video datasets for HAR and human pose estimation. We pre-train a deep architecture on four benchmark video-based source datasets. Finally, an evaluation is carried out on three on-body device datasets, improving HAR performance.
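The transfer step described above could look roughly like the following PyTorch sketch: a temporal encoder is pre-trained on 2D-keypoint channels and its learned filters are reused for IMU channels. The layer sizes, channel counts, and the choice to swap only the input stem are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: pre-train a temporal encoder on 2D-keypoint sequences,
# then fine-tune it on IMU channels. All sizes are placeholder assumptions.
import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    """1D-CNN over time; only the input channel count differs between pose and IMU data."""
    def __init__(self, in_channels: int, num_classes: int, width: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, width, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(width, width, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

# 1) Pre-train on pose sequences: e.g. 15 joints x 2 pixel coordinates = 30 channels.
pose_model = TemporalEncoder(in_channels=30, num_classes=10)
# ... train pose_model on the video-derived keypoint source datasets ...

# 2) Transfer: keep the temporal filters, replace only the input layer
#    so the encoder accepts 3-axis accelerometer channels instead.
imu_model = TemporalEncoder(in_channels=3, num_classes=10)
state = {k: v for k, v in pose_model.state_dict().items()
         if not k.startswith("features.0")}     # skip the pose-specific stem
imu_model.load_state_dict(state, strict=False)
# ... fine-tune imu_model on the (scarce) on-body device dataset ...
```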
Human activity recognition (HAR) using wearable devices with embedded inertial measurement unit (IMU) sensors, such as smartwatches, has a variety of applications relevant to our daily lives, such as workout tracking and health monitoring. In this paper, we propose a novel attention-based approach to human activity recognition using multiple IMU sensors worn at different body positions. First, a sensor-wise feature extraction module is designed to extract the most discriminative features from individual sensors with convolutional neural networks (CNNs). Second, an attention-based fusion mechanism is developed to learn the importance of sensors at different body positions and to generate a detailed feature representation. Finally, an inter-sensor feature extraction module is applied to learn the inter-sensor correlations, which are connected to a classifier to output the predicted activities. The proposed approach is evaluated using five public datasets and outperforms state-of-the-art methods on a wide variety of activity categories.
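A minimal PyTorch sketch of the described pipeline (per-sensor CNN encoders, attention over body positions, and a shared classifier) might look as follows; all dimensions, the single-layer attention scorer, and the weighted-sum fusion are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch: sensor-wise CNN encoders + attention fusion over sensors.
import torch
import torch.nn as nn

class SensorEncoder(nn.Module):
    def __init__(self, channels=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
    def forward(self, x):                        # (batch, channels, time)
        return self.net(x).squeeze(-1)           # (batch, dim)

class AttentionFusionHAR(nn.Module):
    def __init__(self, num_sensors=5, dim=64, num_classes=12):
        super().__init__()
        self.encoders = nn.ModuleList(SensorEncoder(dim=dim) for _ in range(num_sensors))
        self.attn = nn.Linear(dim, 1)            # one importance score per sensor
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, sensor_streams):           # list of (batch, 3, time) tensors
        feats = torch.stack([enc(x) for enc, x in zip(self.encoders, sensor_streams)], dim=1)
        weights = torch.softmax(self.attn(feats).squeeze(-1), dim=1)  # (batch, sensors)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)            # weighted sum
        return self.classifier(fused)

model = AttentionFusionHAR()
streams = [torch.randn(8, 3, 128) for _ in range(5)]   # 5 IMUs, 128 timesteps
logits = model(streams)                                 # (8, 12)
```

The softmax over the sensor axis is what lets such a model up- or down-weight body positions on a per-sample basis.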
Human falls are one of the most critical health problems, especially for elderly and disabled people. Worldwide, the elderly population is increasing steadily. Human fall detection has therefore become an effective assisted-living technique for these people, and deep learning and computer vision have been used extensively for it. In this review article, we discuss the state-of-the-art non-intrusive (vision-based) fall detection techniques based on deep learning (DL). We also present a survey of fall detection benchmark datasets. For a clear understanding, we briefly discuss the different metrics used to evaluate the performance of fall detection systems. The article also gives future directions for vision-based human fall detection techniques.
Wearable sensor-based human activity recognition (HAR) has emerged as a principal research area and is utilized in a variety of applications. Recently, deep learning-based methods have achieved significant improvement in the HAR field with the development of human-computer interaction applications. However, standard convolutional neural networks operate only within a local neighborhood, and the correlations between sensors at different body positions are ignored. In addition, existing methods still face significant performance degradation due to large gaps between the distributions of training and test data and to behavioral differences between subjects. In this work, we propose a novel Transformer-based Adversarial learning framework for human activity recognition using wearable sensors via Self-KnowledgE Distillation (TASKED), which accounts for individual sensor orientations and for spatial and temporal features. The proposed method is capable of learning cross-domain embedding feature representations from multiple subjects' datasets, using adversarial learning and maximum mean discrepancy (MMD) regularization to align the data distributions over multiple domains. In addition, we adopt teacher-free self-knowledge distillation to improve the stability of the training procedure and the performance of human activity recognition. Experimental results show that TASKED not only outperforms state-of-the-art methods on four real-world public HAR datasets (alone or combined) but also improves subject generalization effectively.
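Of the pieces named above, the MMD regularizer is the easiest to show concretely. Below is a minimal sketch assuming an RBF kernel; the bandwidth and feature sizes are placeholders, and the rest of the TASKED training loop (adversarial loss, self-distillation) is omitted.

```python
# Minimal MMD regularizer sketch, assuming an RBF kernel (biased estimate).
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Squared maximum mean discrepancy between feature batches x and y."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

source = torch.randn(32, 128)    # embeddings from one subject/domain
target = torch.randn(32, 128)    # embeddings from another
loss_mmd = rbf_mmd(source, target)   # added to the task loss during training
```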
Designing activity detection systems that can be successfully deployed in daily-living environments requires datasets that pose the challenges typical of real-world scenarios. In this paper, we introduce a new untrimmed daily-living dataset that features several real-world challenges: Toyota Smarthome Untrimmed (TSU). TSU contains a wide variety of activities performed in a spontaneous manner. The dataset contains dense annotations including elementary and composite activities, as well as activities involving interactions with objects. We provide an analysis of the real-world challenges featured by the dataset, highlighting the open issues for detection algorithms. We show that current state-of-the-art methods fail to achieve satisfactory performance on the TSU dataset. We therefore propose a new baseline method for tackling the novel challenges provided by the dataset. This method leverages one modality (i.e., optical flow) to generate attention weights that guide another modality (i.e., RGB) to better detect activity boundaries. This is particularly beneficial for detecting activities characterized by high temporal variance. We show that our proposed method outperforms state-of-the-art methods on TSU and on another popular challenging dataset, Charades.
We propose a CNN-based approach for 3D human body pose estimation from single RGB images that addresses the issue of limited generalizability of models trained solely on the starkly limited publicly available 3D pose data. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks through transfer of learned features, while also generalizing to in-the-wild scenes. We further introduce a new training set for human body pose estimation from monocular images of real humans that has the ground truth captured with a multi-camera marker-less motion capture system. It complements existing corpora with greater diversity in pose, human appearance, clothing, occlusion, and viewpoints, and enables an increased scope of augmentation. We also contribute a new benchmark that covers outdoor and indoor scenes, and demonstrate that our 3D pose dataset shows better in-the-wild performance than existing annotated data, which is further improved in conjunction with transfer learning from 2D pose data. All in all, we argue that the use of transfer learning of representations in tandem with algorithmic and data contributions is crucial for general 3D body pose estimation.
In this paper, we propose a new method to enhance the 3D body pose estimation of a person computed from videos captured by a single wearable camera. The key idea is to leverage high-level features linking first- and third-person views in a joint embedding space. To learn such an embedding space, we introduce First2Third-Pose, a new paired synchronized dataset of nearly 2,000 videos depicting human activities captured from both first- and third-person perspectives. We explicitly consider spatial- and motion-domain features, combined using a semi-Siamese architecture trained in a self-supervised fashion. Experimental results demonstrate that the joint multi-view embedding space learned with our dataset can be used to extract discriminative features from arbitrary single-view egocentric videos, with no need for domain adaptation or knowledge of camera parameters. We achieve significant improvements over three supervised state-of-the-art approaches on two unconstrained datasets. Our dataset and code will be made available for research purposes.
We propose a novel multi-modal sensor fusion approach for ambient assisted living (AAL) that takes advantage of learning using privileged information (LUPI). We address two major shortcomings of standard multi-modal approaches: limited area coverage and reduced reliability. Our new framework fuses the concept of modality hallucination with triplet learning to train a model with different modalities that can then handle missing sensors at inference time. We evaluate the proposed model on inertial data from a wearable accelerometer device, using RGB videos and skeletons as privileged modalities, and show an average accuracy improvement of 6.6% on the UTD-MHAD dataset and of 5.5% on the Berkeley MHAD dataset, reaching a new state of the art for inertial-only classification accuracy on these datasets. We validate our framework through several ablation studies.
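The triplet component of such a framework can be sketched as follows; the embedding size, the margin, and the way negatives are chosen are illustrative assumptions. PyTorch also ships an equivalent nn.TripletMarginLoss.

```python
# Hedged illustration of a triplet objective for modality hallucination:
# the IMU embedding is pulled toward the matching privileged-modality
# embedding and pushed away from a mismatched one.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin: float = 0.5):
    pos = F.pairwise_distance(anchor, positive)
    neg = F.pairwise_distance(anchor, negative)
    return F.relu(pos - neg + margin).mean()

imu_embed   = torch.randn(16, 256, requires_grad=True)  # from the IMU branch
video_match = torch.randn(16, 256)                      # same action, privileged branch
video_other = torch.randn(16, 256)                      # different action
loss = triplet_loss(imu_embed, video_match, video_other)
loss.backward()
```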
Human action recognition is an important application area in computer vision. Its main aim is to accurately describe human actions and their interactions from previously unseen data sequences acquired by sensors. The ability to recognize, understand, and predict complex human actions enables many important applications, such as intelligent surveillance systems, human-computer interfaces, healthcare, security, and military applications. In recent years, the computer vision community has paid particular attention to deep learning. This paper gives an overview of the current state of the art in action recognition from video analysis using deep learning techniques. We present the most important deep learning models for recognizing human actions and analyze them to chart the current progress of deep learning algorithms applied to human action recognition, highlighting their advantages and disadvantages. Based on a quantitative analysis of the recognition accuracies reported in the literature, our study identifies the state-of-the-art deep architectures for action recognition, and then presents current trends and open problems for future work in the field.
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, the need to understand different viewpoints, and the environmental scenarios in which the action takes place. To address such complexities, in this paper we propose a novel Sparse Weighted Temporal Attention (SWTA) module that uses sparsely sampled video frames to obtain global weighted temporal attention. The proposed SWTA comprises two parts: first, a temporal segment network that sparsely samples a given set of frames; second, weighted temporal attention, which fuses attention maps derived from optical flow with raw RGB images. This is followed by a basenet network, which comprises a convolutional neural network (CNN) module along with fully connected layers that perform activity recognition. The SWTA network can be used as a plug-in module with existing deep CNN architectures, optimizing them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model achieves accuracies of 72.76%, 92.56%, and 78.86% on the respective datasets, thereby surpassing the previous state-of-the-art performance by margins of 25.26%, 18.56%, and 2.94%, respectively.
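The first SWTA ingredient, sparse temporal sampling in the style of temporal segment networks, can be sketched in a few lines; the segment count is an assumption, and the flow-based attention fusion is omitted here.

```python
# Sketch of sparse temporal sampling: one frame index is drawn per
# equal-length segment. Assumes num_frames >= num_segments.
import numpy as np

def sparse_sample(num_frames: int, num_segments: int, rng=None) -> np.ndarray:
    """Return one random frame index from each of num_segments equal chunks."""
    rng = rng or np.random.default_rng()
    edges = np.linspace(0, num_frames, num_segments + 1, dtype=int)
    return np.array([rng.integers(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])])

print(sparse_sample(300, 8))   # e.g. [ 12  71  93 151 170 224 262 287 ]
```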
Human activity recognition (HAR) on mobile devices has been shown to be achievable with lightweight neural models learned from data generated by the user's inertial measurement unit (IMU). Most approaches for instance-based HAR have used convolutional neural networks (CNNs), long short-term memory networks (LSTMs), or a combination of the two, achieving state-of-the-art results with real-time performance. Recently, transformer architectures, first in the language processing domain and then in the vision domain, have pushed the state of the art further over classical architectures. However, such transformers are heavyweight in computational resources, which makes them unsuitable for the embedded applications of HAR found in the pervasive computing domain. In this study, we propose the Human Activity Recognition Transformer (HART), a lightweight, sensor-wise transformer architecture that has been specifically adapted to the domain of IMUs embedded on mobile devices. Our experiments on HAR tasks with several publicly available datasets show that HART uses fewer floating-point operations per second (FLOPS) and parameters while outperforming the current state-of-the-art results. Furthermore, we evaluate various architectures in heterogeneous environments and show that our models can better generalize to different sensing devices or on-body positions.
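As a rough, hypothetical sketch of what a lightweight sensor-wise transformer over IMU frames could look like (not HART's actual architecture), consider the following; all sizes are illustrative.

```python
# Hypothetical lightweight transformer encoder over IMU timesteps.
import torch
import torch.nn as nn

class LiteIMUTransformer(nn.Module):
    def __init__(self, channels=6, dim=64, heads=4, layers=2, num_classes=8):
        super().__init__()
        self.embed = nn.Linear(channels, dim)        # per-timestep projection
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                            # (batch, time, channels)
        z = self.encoder(self.embed(x))              # (batch, time, dim)
        return self.head(z.mean(dim=1))              # mean-pool over time

model = LiteIMUTransformer()
logits = model(torch.randn(4, 128, 6))               # accel + gyro, 128 steps
```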
To properly assist humans in their needs, human activity recognition (HAR) systems need the ability to fuse information from multiple modalities. Our hypothesis is that multimodal sensors, visual and non-visual, tend to provide complementary information, addressing the limitations of other modalities. In this work, we propose a multi-modal framework that learns to effectively combine features from RGB video and IMU sensors, and show its robustness on the MMAct and UTD-MHAD datasets. Our model is trained in two stages: in the first stage, each input encoder learns to effectively extract features, and in the second stage, the model learns to combine these individual features. We show significant improvements of 22% and 11% compared to video-only and IMU-only setups on the UTD-MHAD dataset, and of 20% and 12% on the MMAct dataset. Through extensive experimentation, we show the robustness of our model in the zero-shot setting and with limited annotated data. We further compare with state-of-the-art methods that use more input modalities, and show that our method outperforms them significantly on the more difficult MMAct dataset and performs comparably on the UTD-MHAD dataset.
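One plausible reading of the two-stage recipe, assuming the encoders are frozen during the second stage, is sketched below; the encoders are placeholders and the feature sizes are invented for illustration.

```python
# Hedged sketch of a two-stage fusion recipe (assumed, not the paper's code):
# encoders are trained first, then frozen while a fusion head is trained.
import torch
import torch.nn as nn

video_enc = nn.Sequential(nn.Linear(512, 256), nn.ReLU())   # placeholder encoders
imu_enc   = nn.Sequential(nn.Linear(64, 256), nn.ReLU())
fusion    = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 27))

# Stage 1: train each encoder on its own modality (loop omitted) ...

# Stage 2: freeze the encoders, train only the fusion head.
for p in list(video_enc.parameters()) + list(imu_enc.parameters()):
    p.requires_grad = False
optimizer = torch.optim.Adam(fusion.parameters(), lr=1e-4)

v = video_enc(torch.randn(8, 512))            # pre-extracted video features
m = imu_enc(torch.randn(8, 64))               # pre-extracted IMU features
logits = fusion(torch.cat([v, m], dim=1))     # (8, 27) action classes
```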
As more and more elderly people live alone, providing care from a distance becomes an urgent need, especially for safety. Real-time monitoring and action recognition are essential for raising alerts in a timely manner when abnormal behaviors or unusual activities occur. Although wearable sensors are widely recognized as a promising solution, they depend highly on the user's ability and willingness to wear them, which makes them inefficient. In contrast, video streams collected by non-contact optical cameras provide richer information and relieve the burden on the elderly. In this paper, leveraging the Independently Recurrent Neural Network (IndRNN), we propose a novel Real-time Elderly Monitoring scheme for enhanced Safety (REMS) based on a lightweight human action recognition (HAR) technique. Using captured skeleton images, the REMS scheme is able to recognize abnormal behaviors or actions while preserving the user's privacy. To achieve high accuracy, the HAR module is trained and fine-tuned using multiple databases. An extensive experimental study verifies that the REMS system performs action recognition accurately and in a timely manner. REMS meets the design goals of a privacy-preserving elderly safety monitoring system and has the potential to be adopted in various smart monitoring systems.
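The IndRNN recurrence the scheme builds on is simple to state: each hidden unit keeps its own scalar recurrent weight, so h_t = ReLU(W x_t + u ⊙ h_{t-1}). A minimal cell, with sizes chosen only for illustration, could look like this:

```python
# Minimal IndRNN cell sketch: element-wise (per-unit) recurrent weights.
import torch
import torch.nn as nn

class IndRNNCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.input_proj = nn.Linear(input_size, hidden_size)
        self.u = nn.Parameter(torch.ones(hidden_size))   # element-wise recurrence

    def forward(self, x_seq):                 # (batch, time, input_size)
        batch, time, _ = x_seq.shape
        h = x_seq.new_zeros(batch, self.u.numel())
        for t in range(time):
            h = torch.relu(self.input_proj(x_seq[:, t]) + self.u * h)
        return h                              # final hidden state

cell = IndRNNCell(input_size=50, hidden_size=128)   # e.g. 25 joints x 2 coords
h = cell(torch.randn(2, 30, 50))                    # 30-frame skeleton clips
```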
Dog owners are typically able to recognize behavioral cues that reveal the subjective states of their dogs, such as pain. But automated recognition of a pain state is very challenging. This paper proposes a novel video-based, two-stream deep neural network approach to address this problem. We extract and preprocess body keypoints, and compute features over the video from both the keypoints and the RGB representation. We propose an approach for handling self-occlusions and missing keypoints. We also present a unique video-based dog behavior dataset, collected by veterinary professionals and annotated for pain, and report good classification results with the proposed approach. This study is one of the first works on machine-learning-based estimation of dog pain state.
We present a new deep learning approach for real-time 3D human action recognition from skeletal data, and apply it to develop a vision-based intelligent surveillance system. Given a skeleton sequence, we propose to encode the skeleton poses and their motions into a single RGB image. An Adaptive Histogram Equalization (AHE) algorithm is then applied to the color images to enhance their local patterns and generate more discriminative features. For the learning and classification tasks, we design a deep neural network based on the densely connected convolutional architecture (DenseNet) to extract features from the enhanced color images and classify them into action classes. Experimental results on two challenging datasets show that the proposed method reaches state-of-the-art accuracy while requiring low computational time for training and inference. This paper also introduces CEMEST, a new RGB-D dataset depicting passenger behaviors in public transport. It consists of 203 untrimmed real-world surveillance videos recording realistic normal and anomalous events. We achieve promising results under the real-world conditions of this dataset with the support of data augmentation and transfer learning techniques. This enables the construction of deep-learning-based real-world applications for enhancing monitoring and security in public transport.
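One simple way to realize the described skeleton-to-image encoding, mapping time and joints to pixel axes and (x, y, z) to (R, G, B), is sketched below; the paper's exact encoding and the AHE enhancement step (e.g., OpenCV's CLAHE applied per channel) may differ.

```python
# Hedged sketch: encode a skeleton sequence as a color image,
# time on one axis, joints on the other, coordinates as channels.
import numpy as np

def skeleton_to_image(seq: np.ndarray) -> np.ndarray:
    """seq: (frames, joints, 3) coordinates -> uint8 image (frames, joints, 3)."""
    lo, hi = seq.min(axis=(0, 1)), seq.max(axis=(0, 1))   # per-coordinate range
    img = (seq - lo) / np.maximum(hi - lo, 1e-8)          # normalize to [0, 1]
    return (img * 255).astype(np.uint8)

clip = np.random.randn(60, 20, 3)        # 60 frames, 20 joints, xyz
image = skeleton_to_image(clip)          # feed to a DenseNet-style classifier
```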
The dichotomy between the challenging nature of obtaining annotations for activities, and the more straightforward nature of data collection from wearables, has resulted in significant interest in the development of techniques that utilize large quantities of unlabeled data for learning representations. Contrastive Predictive Coding (CPC) is one such method, learning effective representations by leveraging properties of time-series data to set up a contrastive future timestep prediction task. In this work, we propose enhancements to CPC, by systematically investigating the encoder architecture, the aggregator network, and the future timestep prediction, resulting in a fully convolutional architecture, thereby improving parallelizability. Across sensor positions and activities, our method shows substantial improvements on four of six target datasets, demonstrating its ability to empower a wide range of application scenarios. Further, in the presence of very limited labeled data, our technique significantly outperforms both supervised and self-supervised baselines, positively impacting situations where collecting only a few seconds of labeled data may be possible. This is promising, as CPC does not require specialized data transformations or reconstructions for learning effective representations.
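At the heart of CPC sits an InfoNCE objective; a stripped-down version is shown below, treating the other batch elements as negatives and omitting the per-step projection matrices a full CPC model would learn.

```python
# Simplified InfoNCE sketch: the context vector at time t scores the true
# future latent against in-batch negatives (positives on the diagonal).
import torch
import torch.nn.functional as F

def info_nce(context: torch.Tensor, future: torch.Tensor) -> torch.Tensor:
    """context, future: (batch, dim); positives lie on the diagonal."""
    logits = context @ future.t()                 # (batch, batch) similarities
    targets = torch.arange(context.size(0))
    return F.cross_entropy(logits, targets)

c = torch.randn(32, 128)   # aggregator output at time t
z = torch.randn(32, 128)   # encoder output at time t + k
loss = info_nce(c, z)
```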
In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at https://github.com/facebookresearch/VideoPose3D.
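The core building block, a stack of 1D convolutions with growing dilation over flattened 2D-keypoint channels, can be sketched as follows; the layer widths and dilation factors are illustrative, not those of the released model.

```python
# Sketch of dilated temporal convolutions over 2D-keypoint sequences:
# stacked 1D convs with growing dilation widen the receptive field.
import torch
import torch.nn as nn

joints = 17                                          # e.g. COCO-style 2D keypoints
layers = []
in_ch = joints * 2
for dilation in (1, 3, 9):                           # receptive field grows per layer
    layers += [nn.Conv1d(in_ch, 256, kernel_size=3,
                         dilation=dilation, padding=dilation), nn.ReLU()]
    in_ch = 256
layers.append(nn.Conv1d(256, joints * 3, kernel_size=1))  # regress 3D per frame
model = nn.Sequential(*layers)

x = torch.randn(1, joints * 2, 243)                  # 243-frame 2D trajectory
y = model(x)                                         # (1, joints * 3, 243)
```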
Respiratory rate (RR) is an important biomarker, as changes in RR can reflect severe medical events such as heart disease, lung disease, and sleep disorders. Unfortunately, standard manual RR counting is prone to human error and cannot be performed continuously. This study proposes RRWaveNet, a method for continuously estimating RR. The method is a compact end-to-end deep learning model that does not require feature engineering and can use low-cost raw photoplethysmography (PPG) as its input signal. RRWaveNet was tested subject-independently and compared to baselines on three datasets (BIDMC, CapnoBase, and WESAD) using three window sizes (16, 32, and 64 seconds). RRWaveNet outperforms state-of-the-art methods, with mean absolute errors at the optimal window size of 1.66 ± 1.01, 1.59 ± 1.08, and 0.96 breaths per minute on the respective datasets. In remote monitoring settings such as that of the WESAD dataset, we apply transfer learning from the other two ICU datasets, reducing the MAE to 1.52 ± 0.50 breaths per minute, showing that this model enables accurate and practical RR estimation on affordable wearable devices. Our study shows the feasibility of remote RR monitoring in telemedicine and at home.
The analysis of human interaction is an important research topic in human motion analysis. It has been studied using first-person vision (FPV) or third-person vision (TPV). However, joint learning of both views has so far attracted little attention. One of the reasons is the lack of suitable datasets covering both FPV and TPV. In addition, existing benchmark datasets of either FPV or TPV have several limitations, including the limited number of samples, participants, interaction categories, and modalities. In this work, we contribute a large-scale human interaction dataset, namely FT-HID, with pair-aligned samples of both first-person and third-person vision. The dataset was collected from 109 distinct subjects and has more than 90K samples across three modalities. The dataset has been validated using several existing action recognition methods. In addition, we introduce a novel multi-view interaction mechanism for skeleton sequences, and a joint learning multi-stream framework for first-person and third-person vision. Both methods yield promising results on the FT-HID dataset. It is expected that the introduction of this vision-aligned, large-scale dataset will promote the development of both FPV and TPV, and of their joint learning techniques for human action analysis. The dataset and code are available at https://github.com/endlichere/ft-hid.
The increasing availability of video recordings made by multiple cameras has offered new means for mitigating occlusion and depth ambiguities in pose and motion reconstruction methods. Yet, multi-view algorithms strongly depend on camera parameters, in particular on the relative poses between the cameras. Such a dependency becomes a hurdle once one shifts to dynamic capture in uncontrolled settings. We introduce FLEX (Free multi-view reconstruction), an end-to-end parameter-free multi-view model. FLEX is parameter-free in the sense that it requires no camera parameters, neither intrinsic nor extrinsic. Our key idea is that the 3D angles between skeletal parts, as well as the bone lengths, are invariant to the camera position. Hence, learning 3D rotations and bone lengths rather than locations allows the prediction of common values for all camera views. Our network takes multiple video streams, learns fused deep features through a novel multi-view fusion layer, and reconstructs a single consistent skeleton with temporally coherent joint rotations. We demonstrate quantitative and qualitative results on the Human3.6M and KTH Multi-view Football II datasets, as well as on synthetic multi-person video streams captured by dynamic cameras. We compare our model to state-of-the-art methods that are not parameter-free and show that, in the absence of camera parameters, we outperform them by a large margin while obtaining comparable results when camera parameters are available. Code, trained models, video examples, and more material are available on our project page.
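The invariance the model relies on is easy to verify in a few lines: bone lengths computed from 3D joint positions are unchanged by any rigid camera motion. The joint pairs below are hypothetical.

```python
# Small sketch of the invariance above: bone lengths from 3D joint positions
# do not depend on camera placement (any rigid transform preserves them).
import numpy as np

bones = [(0, 1), (1, 2), (2, 3)]          # hypothetical parent-child joint pairs

def bone_lengths(joints3d: np.ndarray) -> np.ndarray:
    """joints3d: (num_joints, 3) -> one length per bone."""
    return np.array([np.linalg.norm(joints3d[a] - joints3d[b]) for a, b in bones])

pose = np.random.randn(4, 3)
print(bone_lengths(pose))                 # identical under any rotation/translation
```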