Bilateral postural symmetry plays a key role as a potential risk marker for autism spectrum disorder (ASD) and as a symptom of congenital muscular torticollis (CMT) in infants, but current methods of assessing symmetry require laborious clinical expert assessments. In this paper we develop a computer vision based infant symmetry assessment system, leveraging 3D human pose estimation for infants. Evaluation and calibration of our system is complicated by our findings from a survey of human ratings of angle and symmetry, which show that such ratings exhibit low inter-rater reliability. To rectify this, we develop a Bayesian estimator of the ground truth derived from a probabilistic graphical model of fallible human raters. We show that the 3D infant pose estimation model achieves 68% area under the receiver operating characteristic curve in predicting the Bayesian aggregate labels, compared with only 61% for a 2D infant pose estimation model and 61% and 60% for adult pose estimation models, highlighting the importance of 3D poses and infant domain knowledge for assessing infant body symmetry. Our survey analysis also shows that human ratings are susceptible to higher levels of bias and inconsistency, and our final 3D-pose-based symmetry assessment system is therefore calibrated against, but not directly supervised by, the Bayesian aggregated human ratings, yielding higher consistency and lower levels of inter-limb assessment bias.
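As a rough illustration of the Bayesian aggregation of fallible human ratings mentioned above (not the paper's actual graphical model), the sketch below computes the posterior probability of a binary symmetry label from independent raters with assumed sensitivities, specificities, and prior:

```python
# Minimal sketch (not the paper's exact graphical model): aggregate binary
# symmetry ratings from fallible raters, assuming known per-rater
# sensitivity/specificity and a prior on the true label.
import numpy as np

def bayes_aggregate(ratings, sens, spec, prior=0.5):
    """Posterior P(true label = 1) given independent binary ratings.

    ratings : array of 0/1 votes, one per rater
    sens    : P(rater says 1 | true 1), one per rater (assumed values)
    spec    : P(rater says 0 | true 0), one per rater (assumed values)
    """
    ratings, sens, spec = map(np.asarray, (ratings, sens, spec))
    # Likelihood of the observed votes under each hypothesis.
    like_pos = np.prod(np.where(ratings == 1, sens, 1.0 - sens))
    like_neg = np.prod(np.where(ratings == 1, 1.0 - spec, spec))
    return like_pos * prior / (like_pos * prior + like_neg * (1.0 - prior))

# Three raters of varying reliability voting on "asymmetric" (1) vs "symmetric" (0).
print(bayes_aggregate([1, 1, 0], sens=[0.9, 0.7, 0.6], spec=[0.8, 0.7, 0.9]))
```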
We apply computer vision pose estimation techniques developed expressly for the data-scarce infant domain to the study of torticollis, a common condition in infants for which early identification and treatment is critical. Specifically, we use a combination of facial landmark and body joint estimation techniques designed for infants to estimate a range of geometric measures pertaining to face and upper body symmetry, drawn from an array of sources in the physical therapy and ophthalmology research literature in torticollis. We gauge performance with a range of metrics and show that the estimates of most of these geometric measures are successful, yielding strong to very strong Spearman's $\rho$ correlation with ground truth values. Furthermore, we show that these estimates, derived from pose estimation neural networks designed for the infant domain, cleanly outperform estimates derived from more widely known networks designed for the adult domain.
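A minimal sketch of the evaluation described above, computing Spearman's $\rho$ between a geometric measure derived from estimated landmarks and its ground-truth counterpart; the data here are synthetic placeholders:

```python
# Sketch of the evaluation step: Spearman's rho between a geometric symmetry
# measure computed from estimated landmarks and the same measure from
# ground-truth annotations. The data below are made up for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
gt_measure = rng.uniform(0, 30, size=50)               # e.g. head-tilt angle in degrees
est_measure = gt_measure + rng.normal(0, 3, size=50)   # estimate with noise

rho, pval = spearmanr(est_measure, gt_measure)
print(f"Spearman rho = {rho:.2f} (p = {pval:.1e})")
```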
Assessment of spontaneous movements can predict long-term developmental impairments in high-risk infants. Developing algorithms for automated prediction of later impairments requires highly precise localization of segments and joints by infant pose estimation. Four types of convolutional neural networks were trained and assessed on a novel infant pose dataset covering the large variation present in 1,224 videos from an international clinical community. The localization performance of the networks was evaluated as the deviation between estimated keypoint positions and human expert annotations. Computational efficiency was also assessed to determine the feasibility of the neural networks in clinical practice. The best performing neural network had a localization error similar to the inter-rater spread of human expert annotations, while still operating efficiently. Overall, our findings suggest that pose estimation of infant spontaneous movements has great potential to support research on the early detection of developmental disorders arising from perinatal brain injury, by quantifying infant movements from video recordings with human-level performance.
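The localization metric described above can be illustrated as below: mean Euclidean deviation of predicted keypoints from expert annotations, compared with the spread between two annotators. Array shapes and noise levels are invented for illustration:

```python
# Sketch of a per-keypoint localization metric: mean Euclidean deviation between
# estimated keypoints and expert annotations, compared against inter-rater spread.
import numpy as np

def mean_deviation(pred, ref):
    """pred, ref: (num_frames, num_keypoints, 2) pixel coordinates."""
    return np.linalg.norm(pred - ref, axis=-1).mean(axis=0)  # per-keypoint mean (pixels)

frames, joints = 100, 19
rng = np.random.default_rng(1)
expert_a = rng.uniform(0, 640, size=(frames, joints, 2))
expert_b = expert_a + rng.normal(0, 2.0, size=expert_a.shape)   # inter-rater spread
network = expert_a + rng.normal(0, 2.5, size=expert_a.shape)    # model predictions

print("network vs expert:", mean_deviation(network, expert_a).mean())
print("expert vs expert: ", mean_deviation(expert_b, expert_a).mean())
```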
3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the-wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes.
In-bed pose estimation has shown value in fields such as hospital patient monitoring, sleep studies, and smart homes. In this paper, we explore different strategies for detecting body pose from highly ambiguous pressure data, with the aid of existing pose estimators. We examine the performance of pre-trained pose estimators by using them directly or by retraining them on two pressure datasets. We also explore other strategies utilizing a learnable pre-processing domain adaptation step, which transforms the ambiguous pressure maps into a representation closer to the expected input space of common-purpose pose estimation modules. For this purpose, we use a fully convolutional network with multiple scales to provide pose-specific characteristics of the pressure maps to the pre-trained pose estimation module. Our full analysis of the different approaches shows that, on pressure data, combining a learnable pre-processing module with retraining an image-based pose estimator is able to overcome issues such as highly ambiguous pressure points and achieve very high pose estimation accuracy.
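The following PyTorch sketch illustrates the idea of a learnable pre-processing (domain adaptation) step; the two-branch architecture and channel counts are illustrative assumptions, not the paper's actual network:

```python
# Illustrative sketch: a small multi-scale fully convolutional module that maps a
# 1-channel pressure map to a 3-channel, image-like tensor before it is passed to
# a frozen or fine-tuned image-based pose estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PressureAdapter(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)   # fine scale
        self.branch2 = nn.Conv2d(1, 8, kernel_size=7, padding=3)   # coarse scale
        self.fuse = nn.Conv2d(16, 3, kernel_size=1)                # to RGB-like input

    def forward(self, x):
        feats = torch.cat([F.relu(self.branch1(x)), F.relu(self.branch2(x))], dim=1)
        return torch.sigmoid(self.fuse(feats))

adapter = PressureAdapter()
pressure = torch.rand(1, 1, 64, 32)          # dummy pressure map
image_like = adapter(pressure)               # (1, 3, 64, 32), ready for an image-based estimator
print(image_like.shape)
# During training, gradients from the downstream pose-estimation loss would flow
# back into the adapter's parameters.
```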
We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state of the art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.
Yoga is a globally acclaimed and widely recommended practice for healthy living. Maintaining correct posture while performing a yoga pose is of utmost importance. In this work, we employ transfer learning from human pose estimation models to extract 136 keypoints spread across the whole body, which are used to train a random forest classifier for yoga pose (asana) classification. The results are evaluated on an in-house collected yoga video database of 51 subjects recorded from 4 different camera angles. We propose a three-step scheme for evaluating the generalizability of a yoga classifier by testing it on (1) unseen frames, (2) unseen subjects, and (3) unseen camera angles. We argue that for most applications, validation accuracy on unseen subjects and unseen camera angles is the most important. We empirically analyze, over three public datasets, the advantages of transfer learning and the possibility of target leakage. We further demonstrate that classification accuracy critically depends on the cross-validation method employed and can often be misleading. To promote further research, we have made the keypoint dataset and code publicly available.
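A minimal sketch of this kind of setup, assuming flattened keypoint features and subject-wise cross-validation to avoid the target leakage discussed above; the data, class count, and split scheme are placeholders, not the in-house yoga database:

```python
# Sketch: a random forest on flattened keypoint vectors, validated with
# subject-wise (GroupKFold) splits so frames of a subject never appear in both
# train and test folds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

n_frames, n_keypoints = 2000, 136
rng = np.random.default_rng(0)
X = rng.normal(size=(n_frames, n_keypoints * 2))      # (x, y) per keypoint
y = rng.integers(0, 10, size=n_frames)                # 10 asana classes (dummy labels)
subjects = rng.integers(0, 51, size=n_frames)         # subject ID per frame

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=subjects)
print("unseen-subject accuracy:", scores.mean())
```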
We propose a keypoint-based object-level SLAM framework that can provide globally consistent 6DoF pose estimates for symmetric and asymmetric objects alike. To the best of our knowledge, our system is among the first to utilize camera pose information from SLAM as prior knowledge for tracking keypoints on symmetric objects, ensuring that new measurements are consistent with the current 3D scene. Moreover, our semantic keypoint network is trained to predict a Gaussian covariance for each keypoint that captures the true error of the prediction, which is therefore useful not only as a weight for the residuals in the system's optimization problems, but also as a means to detect harmful statistical outliers without choosing a manual threshold. Experiments show that our method provides competitive performance with the state of the art in 6DoF object pose estimation, at real-time speed. Our code, pre-trained models, and keypoint labels are available at https://github.com/rpng/suo_slam.
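The following sketch illustrates how a predicted keypoint covariance can both weight a residual and gate statistical outliers via a chi-square test, as described above; the covariance and residual values are invented:

```python
# Sketch: a predicted 2x2 keypoint covariance weights a reprojection residual
# (Mahalanobis distance) and flags outliers without a hand-tuned pixel threshold.
import numpy as np
from scipy.stats import chi2

def weighted_residual(observed, projected, cov):
    r = observed - projected                       # 2D reprojection residual
    info = np.linalg.inv(cov)                      # information (inverse covariance)
    mahal_sq = float(r @ info @ r)                 # squared Mahalanobis distance
    is_outlier = mahal_sq > chi2.ppf(0.95, df=2)   # 95% gate for a 2-DoF residual
    return mahal_sq, is_outlier

cov = np.array([[4.0, 0.5], [0.5, 2.0]])           # predicted keypoint covariance (px^2)
print(weighted_residual(np.array([101.0, 52.0]), np.array([100.0, 50.0]), cov))
```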
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor are the different skeleton formats provided by different datasets, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, where we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
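A toy sketch of the affine-combining idea: latent points formed as weighted combinations of the input landmarks with weights summing to one. For simplicity the weights here are softmax-normalized (a convex special case), which is an assumption beyond the paper's affine formulation:

```python
# Minimal sketch: latent keypoints as weighted combinations of input joints with
# weights summing to 1, giving equivariance to translation of the skeleton.
# This is a toy encoder/decoder pair, not the paper's full ACAE training setup.
import torch
import torch.nn as nn

class AffineCombiner(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(n_out, n_in) * 0.01)

    def forward(self, joints):                       # joints: (batch, n_in, 3)
        w = torch.softmax(self.raw, dim=1)           # rows sum to 1 (convex special case)
        return torch.einsum('oi,bid->bod', w, joints)

encoder = AffineCombiner(n_in=73, n_out=32)          # e.g. 73 landmarks -> 32 latent points
decoder = AffineCombiner(n_in=32, n_out=73)
joints = torch.randn(4, 73, 3)
recon = decoder(encoder(joints))                     # reconstruction/consistency target
print(recon.shape)
```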
Generative models for audio-conditioned dance motion synthesis map music features to dance movements. Models are trained to associate motion patterns with audio patterns, usually without explicit knowledge of the human body. This approach relies on a few assumptions: strong music-dance correlation, controlled motion data, and relatively simple poses and movements. These characteristics are found in all existing datasets for dance motion synthesis, and indeed recent methods can achieve good results there. We introduce a new dataset aiming to challenge these common assumptions, compiling a set of dynamic dance sequences displaying complex human poses. We focus on breakdancing, which features acrobatic moves and tangled postures. We source our data from the Red Bull BC One competition videos. Estimating human keypoints from these videos is difficult due to the complexity of the dance as well as the multiple moving cameras in the recording setup. We adopt a hybrid labelling pipeline leveraging deep estimation models as well as manual annotations to obtain high-quality keypoint sequences at a reduced cost. Our efforts produced the BRACE dataset, which contains 3 hours and 30 minutes of densely annotated poses. We test state-of-the-art methods on BRACE, showing their limitations when evaluated on complex sequences. Our dataset can readily foster advances in dance motion synthesis: with intricate poses and swift movements, models are forced to go beyond learning a mapping between modalities and to reason more effectively about body structure and movement.
There exist challenging problems in 3D human pose estimation, such as poor performance caused by occlusion and self-occlusion. Recently, IMU-vision sensor fusion has been regarded as valuable for solving these problems. However, previous research on fusing IMU and vision data, which are heterogeneous, fails to adequately utilize either IMU raw data or reliable high-level vision features. To facilitate more efficient sensor fusion, in this work we propose a framework under a parametric human kinematic model, termed FusePose. Specifically, we aggregate different information from IMU and vision data and introduce three distinctive sensor fusion approaches: NaiveFuse, KineFuse, and AdaDeepFuse. NaiveFuse serves as a basic approach that only fuses simplified IMU data with the estimated 3D pose in Euclidean space. In kinematic space, KineFuse is able to integrate calibrated and aligned IMU raw data with converted 3D pose parameters. AdaDeepFuse further develops this kinematic fusion process into an adaptive, end-to-end trainable manner. Comprehensive experiments with ablation studies demonstrate the rationality and superiority of the proposed framework: the performance of 3D human pose estimation improves over the baseline results. On the Total Capture dataset, KineFuse surpasses the previous state of the art that uses IMU only for testing by 8.6%, and AdaDeepFuse surpasses the state of the art that uses IMU for both training and testing by 8.5%. Moreover, we validate the generalization capability of our framework through experiments on the Human3.6M dataset.
Conventional 3D human pose estimation relies on first detecting 2D body keypoints and then solving the 2D-to-3D correspondence problem. Despite promising results, this learning paradigm is highly dependent on the quality of the 2D keypoint detector, which is inevitably fragile to occlusions and out-of-image absences. In this paper, we propose a novel Pose Orientation Net (PONet) that is able to robustly estimate the 3D pose by learning orientations only, hence bypassing the error-prone keypoint detector in the absence of image evidence. For images with partially invisible limbs, PONet estimates the 3D orientation of these limbs by exploiting local image evidence to recover the 3D pose. PONet can even infer full 3D poses from images with completely invisible limbs, by exploiting the orientation correlations between visible limbs to complement the estimated pose, further improving the robustness of 3D pose estimation. We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW. Our method achieves results on par with state-of-the-art techniques in ideal settings, yet significantly eliminates the dependency on keypoint detectors and the corresponding computational burden. In highly challenging scenarios such as truncation and erasing, our method performs very robustly and yields results far superior to the state of the art, demonstrating its potential for real-world applications.
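As a generic illustration of why limb orientations (plus bone lengths) determine a 3D pose, the sketch below accumulates joint positions along a toy kinematic chain; this is not PONet's actual parameterization:

```python
# Sketch: given a root position, unit bone directions, and bone lengths, joint
# positions follow from accumulating along the kinematic chain. The chain here
# is a toy arm for illustration only.
import numpy as np

def pose_from_orientations(root, directions, lengths):
    """directions: (n_bones, 3) unit vectors, lengths: (n_bones,)."""
    joints = [np.asarray(root, dtype=float)]
    for d, l in zip(directions, lengths):
        joints.append(joints[-1] + l * np.asarray(d, dtype=float))
    return np.stack(joints)

directions = np.array([[0, -1, 0], [1, 0, 0], [1, 0, 0]])   # shoulder -> elbow -> wrist
lengths = np.array([0.25, 0.30, 0.25])                      # metres
print(pose_from_orientations([0.0, 1.5, 0.0], directions, lengths))
```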
Infant motion analysis is a topic of critical importance in early childhood development studies. However, while applications of human pose estimation are becoming ever broader, models trained on large-scale adult pose datasets are barely successful at estimating infant poses, due to the significant differences in body ratio and the versatility of infant poses. Moreover, privacy and security considerations hinder the availability of adequate infant pose data required for training a robust model from scratch. To address this problem, this paper presents (1) the construction and public release of a hybrid synthetic and real infant pose (SyRIP) dataset, containing a small but diverse set of real infant images together with generated synthetic infant poses, and (2) a multi-stage invariant representation learning strategy that transfers knowledge from the adjacent domains of adult poses and synthetic infant images into our fine-tuned domain-adapted infant pose (FiDIP) estimation model. In our ablation study, with the same network structure, models trained on the SyRIP dataset show noticeable improvement over those trained on the only other public infant pose dataset. Integrated with pose estimation backbone networks of varying complexity, FiDIP consistently performs better than the fine-tuned versions of those models. Our best infant pose estimation performer, built on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
Most real-time human pose estimation approaches are based on detecting joint positions. Using the detected joint positions, the yaw and pitch of the limbs can be computed. However, the roll along the limb, which is critical for applications such as sports analysis and computer animation, cannot be computed because this axis of rotation remains unobserved. In this paper we introduce orientation keypoints, a novel approach for estimating the full position and rotation of skeletal joints using only single-frame RGB images. Inspired by how motion-capture systems use a set of point markers to estimate full bone rotations, our method uses virtual markers to generate sufficient information to accurately infer rotations with simple post-processing. The rotation predictions improve upon the best reported mean error for joint angles by 48% and achieve 93% accuracy across 15 bone rotations. The method also improves the current state-of-the-art results for joint positions by 14% as measured by MPJPE on the principal dataset, and generalizes well to in-the-wild datasets.
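A small sketch of the marker-to-rotation idea referenced above: three non-collinear virtual markers on a bone define an orthonormal frame and hence the full rotation, including roll; the marker layout is hypothetical:

```python
# Sketch: build a rotation matrix from three 3D marker positions rigidly
# attached to one bone, recovering roll about the bone axis.
import numpy as np

def frame_from_markers(p0, p1, p2):
    x = p1 - p0
    x /= np.linalg.norm(x)                      # bone axis
    y = p2 - p0 - np.dot(p2 - p0, x) * x        # remove the component along x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                          # completes a right-handed frame
    return np.stack([x, y, z], axis=1)          # columns are the frame axes

R = frame_from_markers(np.array([0.0, 0, 0]), np.array([0.3, 0, 0]), np.array([0.0, 0.1, 0.02]))
print(np.round(R @ R.T, 6))                     # ~identity: R is a valid rotation
```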
The most prominent tasks in emotion analysis are to assign emotions to texts and to understand how emotions manifest in language. An important observation for natural language processing is that emotions can be communicated implicitly by referring to events alone, even without explicitly mentioning an emotion name. In psychology, the class of emotion theories known as appraisal theories aims to explain the link between events and emotions. Appraisals can be formalized as variables that measure the cognitive evaluations by people of events they consider relevant. These include evaluations of whether an event is novel, whether the person considers themselves responsible, whether it is in line with their own goals, and many others. Such appraisals explain which emotions develop from an event; for example, a novel situation can induce surprise, and one with uncertain consequences could evoke fear. We analyze the suitability of appraisal theories for emotion analysis in text, with the goal of understanding whether appraisal concepts can be reliably reconstructed by annotators, whether they can be predicted by text classifiers, and whether appraisal concepts help to identify emotion categories. To achieve this, we compile a corpus by asking people to textually describe events that triggered particular emotions and to disclose their appraisals. We then ask readers to reconstruct the emotions and appraisals from the text. This setup allows us to measure whether emotions and appraisals can be recovered purely from text, and provides a human baseline against which to judge a model's performance. Our comparison of text classification methods with human annotators shows that both can reliably detect emotions and appraisals with similar performance. We further show that appraisal concepts improve the classification of emotions in text.
Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark, "MPII Human Pose", that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets, including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gain insights into the successes and failures of these methods.
Dog owners are typically able to recognize behavioral cues that reveal subjective states of their dogs, such as pain. But automatic recognition of the pain state is very challenging. This paper proposes a novel video-based, two-stream deep neural network approach to this problem. We extract and preprocess body keypoints, and compute features from both keypoints and the RGB representation over the video. We propose an approach for handling self-occlusions and missing keypoints. We also present a unique video-based dog behavior dataset, collected by veterinary professionals and annotated for pain, and report good classification results with the proposed approach. This study is one of the first works on machine learning based estimation of dog pain state.
Realtime multi-person 2D pose estimation is a key component in enabling machines to have an understanding of people in images and videos. In this work, we present a realtime approach to detect the 2D pose of multiple people in an image. The proposed method uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. This bottom-up system achieves high accuracy and realtime performance, regardless of the number of people in the image. In previous work, PAFs and body part location estimation were refined simultaneously across training stages. We demonstrate that a PAF-only refinement rather than both PAF and body part location refinement results in a substantial increase in both runtime performance and accuracy. We also present the first combined body and foot keypoint detector, based on an internal annotated foot dataset that we have publicly released. We show that the combined detector not only reduces the inference time compared to running them sequentially, but also maintains the accuracy of each component individually. This work has culminated in the release of OpenPose, the first open-source realtime system for multi-person 2D pose detection, including body, foot, hand, and facial keypoints.
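The PAF association step can be sketched as a line integral of the field between two keypoint candidates, scored by alignment with the candidate limb's direction; the field below is synthetic:

```python
# Sketch: score a candidate limb by sampling the part affinity field along the
# segment between two keypoint candidates and averaging the alignment with the
# segment's direction.
import numpy as np

def paf_score(paf, p1, p2, n_samples=10):
    """paf: (H, W, 2) unit-vector field; p1, p2: (x, y) candidate keypoints."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    direction = p2 - p1
    norm = np.linalg.norm(direction)
    if norm < 1e-6:
        return 0.0
    direction /= norm
    ts = np.linspace(0.0, 1.0, n_samples)
    samples = p1[None, :] + ts[:, None] * (p2 - p1)[None, :]
    xs, ys = samples[:, 0].astype(int), samples[:, 1].astype(int)
    vectors = paf[ys, xs]                          # sampled field vectors
    return float(np.mean(vectors @ direction))     # average alignment in [-1, 1]

H, W = 64, 64
paf = np.zeros((H, W, 2)); paf[..., 0] = 1.0       # field pointing along +x everywhere
print(paf_score(paf, (10, 30), (50, 30)))          # well-aligned limb -> score ~1.0
```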
Its numerous applications make multi-human 3D pose estimation a remarkably impactful area of research. Nevertheless, assuming a multiple-view system composed of several regular RGB cameras, 3D multi-pose estimation presents several challenges. First of all, each person must be uniquely identified in the different views to separate the 2D information provided by the cameras. Secondly, the 3D pose estimation process from the multi-view 2D information of each person must be robust against noise and potential occlusions in the scenario. In this work, we address these two challenges with the help of deep learning. Specifically, we present a model based on Graph Neural Networks capable of predicting the cross-view correspondence of the people in the scenario along with a Multilayer Perceptron that takes the 2D points to yield the 3D poses of each person. These two models are trained in a self-supervised manner, thus avoiding the need for large datasets with 3D annotations.
3D pose estimation is important for analyzing and improving ergonomics in physical human-robot interaction and reducing the risk of musculoskeletal disorders. Vision-based pose estimation approaches are prone to sensor and model errors as well as occlusion, while pose estimation solely from the interacting robot's trajectory suffers from ambiguous solutions. To benefit from the advantages of both approaches and mitigate their drawbacks, we introduce a low-cost, non-intrusive, and occlusion-robust multi-sensory 3D postural estimation algorithm for physical human-robot interaction. We use 2D poses from OpenPose over a single camera, together with the trajectory of the interacting robot while the human performs a task. We model the problem as a partially observable dynamical system and infer the 3D pose via a particle filter. We present our work on teleoperation, but it can be generalized to other applications of physical human-robot interaction. We show that our multi-sensory system resolves human kinematic redundancy better than pose estimation from OpenPose alone or from the robot's trajectory alone, increasing the accuracy of the estimated poses with respect to gold-standard motion-capture poses. Moreover, our approach also outperforms other single-sensory methods when the estimated poses are evaluated with the RULA assessment tool.
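A generic particle-filter sketch of the inference step described above; the state dimension, motion model, and likelihood are placeholders rather than the paper's joint-angle formulation:

```python
# Sketch of one particle-filter step: predict particles with a simple motion
# model, weight them by a stand-in observation likelihood, and resample.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, likelihood_fn, motion_std=0.02):
    # 1. Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 2. Update: reweight by the observation likelihood of each particle.
    weights = weights * np.array([likelihood_fn(p, observation) for p in particles])
    weights /= weights.sum()
    # 3. Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def likelihood(state, obs, sigma=0.1):             # stand-in for 2D keypoint + robot terms
    return np.exp(-np.sum((state - obs) ** 2) / (2 * sigma ** 2))

particles = rng.normal(0.0, 0.3, size=(500, 4))    # toy 4-D posture state
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights, np.zeros(4), likelihood)
print(particles.mean(axis=0))                      # posterior mean estimate
```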