Markerless motion capture has become an active area of computer vision research in recent years. Its wide range of applications spans a variety of fields, including computer animation, human motion analysis, biomedical research, virtual reality, and sports science. Human pose estimation has recently attracted growing attention in the computer vision community, but it remains a challenging task due to depth ambiguity and the lack of synthetic datasets. Various approaches have recently been proposed to address this problem, many of them based on deep learning; they mainly focus on improving performance on existing benchmarks and have made significant progress, especially on 2D images. Building on powerful deep learning techniques and recently collected real-world datasets, we explore a model that can predict an animated skeleton based solely on 2D images. The frames are generated from different real-world datasets with body shapes ranging from simple to complex. The implementation uses DeepLabCut on our own dataset to perform the necessary steps and then trains the model with the input frames. The output is an animated skeleton of human motion. The synthesized dataset and other results serve as the 'ground truth' for the deep model.
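As an illustration of the DeepLabCut-based workflow described above, the following is a minimal sketch of a typical DeepLabCut project; the project name, video paths, and labeling steps are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal sketch of a typical DeepLabCut workflow; project name and
# video paths are illustrative assumptions.
import deeplabcut

# Create a project around the raw videos; the path to config.yaml is returned.
config = deeplabcut.create_new_project(
    "animated-skeleton", "lab", ["videos/subject01.mp4"], copy_videos=True
)

# Extract and manually label a subset of frames, then build the
# training dataset and train the pose network.
deeplabcut.extract_frames(config, mode="automatic")
deeplabcut.label_frames(config)            # opens the labeling GUI
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)

# Run inference on new footage; the per-frame keypoint predictions can
# then drive the animated skeleton.
deeplabcut.analyze_videos(config, ["videos/subject02.mp4"])
```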
Recently developed methods for video analysis, especially models for pose estimation and behavior classification, are transforming behavioral quantification to be more precise, scalable, and reproducible in fields such as neuroscience and ethology. These tools overcome long-standing limitations of manual scoring of video frames and traditional "center of mass" tracking algorithms to enable video analysis at scale. The expansion of open-source tools for video acquisition and analysis has led to new experimental approaches to understand behavior. Here, we review currently available open-source tools for video analysis and discuss how to set up these methods for labs new to video recording. We also discuss best practices for developing and using video analysis methods, including community-wide standards and critical needs for the open sharing of datasets and code, more widespread comparisons of video analysis methods, and better documentation for these methods especially for new users. We encourage broader adoption and continued development of these tools, which have tremendous potential for accelerating scientific progress in understanding the brain and behavior.
We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state of the art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.
and ACCAD [5] datasets. The input is sparse markers and the output is SMPL body models.
In this paper, we propose a new approach to enhance the 3D body pose estimation of a person computed from videos captured with a single wearable camera. The key idea is to leverage high-level features that link first- and third-person views in a joint embedding space. To learn such an embedding space, we introduce First2Third-Pose, a new paired synchronized dataset of nearly 2,000 videos depicting human activities captured from both first- and third-person perspectives. We explicitly consider spatial- and motion-domain features, combined with a semi-Siamese architecture trained in a self-supervised fashion. Experimental results demonstrate that the joint multi-view embedding space learned with our dataset can be used to extract discriminative features from arbitrary single-view egocentric videos, with no need for domain adaptation or knowledge of camera parameters. We achieve significant improvements over three supervised state-of-the-art approaches on two unconstrained datasets. Our dataset and code will be made available for research purposes.
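The semi-Siamese design described above might be sketched as two view-specific stems feeding a shared projection head, trained with a contrastive objective on paired first/third-person clips; everything below (layer sizes, feature inputs, the InfoNCE objective) is an assumption based on the abstract, not the authors' implementation.

```python
# Rough sketch of a semi-Siamese joint embedding for paired
# first-person (ego) and third-person (exo) clip features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSiamese(nn.Module):
    def __init__(self, feat_dim=512, embed_dim=128):
        super().__init__()
        # View-specific stems (unshared) and a shared projection head.
        self.stem_ego = nn.Linear(feat_dim, 256)
        self.stem_exo = nn.Linear(feat_dim, 256)
        self.shared = nn.Sequential(nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, ego_feat, exo_feat):
        z_ego = F.normalize(self.shared(self.stem_ego(ego_feat)), dim=-1)
        z_exo = F.normalize(self.shared(self.stem_exo(exo_feat)), dim=-1)
        return z_ego, z_exo

def info_nce(z_a, z_b, tau=0.07):
    # Synchronized ego/exo pairs are positives; all other pairs in the
    # batch act as negatives.
    logits = z_a @ z_b.t() / tau
    labels = torch.arange(z_a.size(0))
    return F.cross_entropy(logits, labels)

model = SemiSiamese()
ego = torch.randn(16, 512)   # hypothetical precomputed ego clip features
exo = torch.randn(16, 512)   # hypothetical paired third-person features
loss = info_nce(*model(ego, exo))
```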
This paper proposes a novel application system for the generation of three-dimensional (3D) character animation driven by markerless human body motion capturing. The entire pipeline of the system consists of five stages: 1) the capturing of motion data using multiple cameras, 2) detection of the two-dimensional (2D) human body joints, 3) estimation of the 3D joints, 4) calculation of bone transformation matrices, and 5) generation of character animation. The main objective of this study is to generate a 3D skeleton and animation for 3D characters using multi-view images captured by ordinary cameras. The computational complexity of the 3D skeleton reconstruction based on 3D vision has been reduced as needed to achieve frame-by-frame motion capturing. The experimental results reveal that our system can effectively and efficiently capture human actions and use them to animate 3D cartoon characters in real-time.
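Stage 3 of this pipeline, lifting the detected 2D joints to 3D, is commonly implemented as direct linear transform (DLT) triangulation across the calibrated views; the paper's exact solver is not specified, so the following is a minimal sketch assuming known 3x4 projection matrices.

```python
# Minimal DLT triangulation sketch for stage 3 (2D joints -> 3D joint),
# assuming calibrated cameras.
import numpy as np

def triangulate_joint(proj_mats, points_2d):
    """Triangulate one joint from N calibrated views.

    proj_mats: list of N 3x4 camera projection matrices.
    points_2d: Nx2 array with the joint's pixel coordinates in each view.
    Returns the 3D joint position as a (3,) array.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The least-squares solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]
```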
Multi-agent behavior modeling aims to understand the interactions that occur between agents. We present a multi-agent dataset from behavioral neuroscience, the Caltech Mouse Social Interactions (CalMS21) dataset. Our dataset consists of trajectory data of social interactions, recorded from videos of freely behaving mice in a standard resident-intruder assay. To help accelerate behavioral studies, the CalMS21 dataset provides benchmarks for evaluating the performance of automated behavior classification methods in three settings: (1) training on large behavioral datasets all annotated by a single annotator, (2) style transfer to learn inter-annotator differences in behavior definitions, and (3) learning new behaviors of interest given limited training data. The dataset consists of 6 million frames of unlabeled tracked poses of interacting mice, as well as over 1 million frames with tracked poses and corresponding frame-level behavior annotations. The challenge of our dataset is to classify behaviors accurately using both labeled and unlabeled tracking data, and to generalize to new settings.
With its continuously thriving popularity around the world, fitness activity analytics has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing hunger for data resources with high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instructions containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human poses captured by an advanced MoCap system to handle complex activities and large movements, 2) detailed and professional language instructions describing how to perform a specific activity, 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D contributes great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code will be publicly available at https://andytang15.github.io/FLAG3D.
Yoga is a globally acclaimed and widely recommended practice for healthy living, and maintaining correct posture while performing yoga poses is of utmost importance. In this work, we employ transfer learning from human pose estimation models to extract 136 keypoints spread across the whole body and use them to train a random forest classifier for yoga pose (asana) estimation. The results are evaluated on an in-house collected dataset of yoga videos of 51 subjects recorded from 4 different camera angles. We propose a three-step scheme for evaluating the generalizability of a yoga classifier by testing it on (1) unseen frames, (2) unseen subjects, and (3) unseen camera angles. We argue that, for most applications, validation accuracy on unseen subjects and unseen cameras is the most important. We empirically analyze three public datasets, the advantage of transfer learning, and the possibility of target leakage. We further demonstrate that classification accuracy critically depends on the cross-validation method employed and can often be misleading. To promote further research, we have made the keypoint dataset and code publicly available.
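Because the abstract stresses that accuracy depends critically on the cross-validation method, the sketch below shows a subject-wise (grouped) split with a random forest, which measures generalization to unseen subjects rather than to unseen frames of already-seen subjects; the feature and label arrays are synthetic stand-ins for the extracted keypoints.

```python
# Sketch of subject-wise cross-validation for a keypoint-based yoga
# classifier; X, y, and groups are synthetic stand-ins for the real
# per-frame keypoint features, pose labels, and subject ids.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 136 * 2))      # (x, y) per keypoint, flattened
y = rng.integers(0, 10, size=1000)        # 10 hypothetical asana classes
groups = rng.integers(0, 51, size=1000)   # 51 subjects, as in the paper

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Grouping folds by subject keeps all frames of a person in one fold,
# avoiding the target leakage that a naive frame-wise split introduces.
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(scores.mean())
```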
Infant motion analysis is a topic of critical importance in early childhood development studies. However, while the applications of human pose estimation are becoming ever broader, models trained on large-scale adult pose datasets barely succeed in estimating infant poses, owing to the significant differences in body ratios and the versatility of infant poses. Moreover, privacy and security considerations hinder the availability of the infant pose data required to train a robust model from scratch. To address this problem, this paper presents (1) the construction and public release of a hybrid synthetic and real infant pose (SyRIP) dataset, containing a small but diverse set of real infant images as well as generated synthetic infant poses, and (2) a multi-stage invariant representation learning strategy that transfers knowledge from the adjacent domains of adult poses and synthetic infant images into our fine-tuned domain-adapted infant pose (FiDIP) estimation model. In our ablation study, with the same network structure, the model trained on the SyRIP dataset shows a noticeable improvement over the one trained on the only other public infant pose dataset. Integrated with pose estimation backbone networks of varying complexity, FiDIP consistently outperforms the fine-tuned versions of those models. Our best infant pose estimator, built on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
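The transfer strategy described above can be caricatured as initializing from a network trained on an adjacent domain and fine-tuning only the later layers on infant data; the backbone, freezing scheme, and hyperparameters below are generic PyTorch assumptions, not the FiDIP implementation.

```python
# Generic fine-tuning sketch in the spirit of domain adaptation from
# adult to infant pose; all names and settings are illustrative.
import torch
import torchvision

# An ImageNet-pretrained backbone stands in for an adult-pose network;
# the head is replaced for infant 2D keypoint regression.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
num_joints = 17
model.fc = torch.nn.Linear(model.fc.in_features, num_joints * 2)

# Freeze early layers so generic low-level features transfer unchanged
# while later, pose-specific layers adapt to the infant domain.
for name, p in model.named_parameters():
    if name.startswith(("conv1", "bn1", "layer1", "layer2")):
        p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = torch.nn.MSELoss()

# One illustrative training step on a dummy batch (images, 2D joints).
images = torch.randn(8, 3, 224, 224)
targets = torch.randn(8, num_joints * 2)
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```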
Real-time 3D human pose estimation is crucial for human-computer interaction, and estimating 3D human pose from monocular video alone is inexpensive and practical. However, recent bone-splicing-based 3D human pose estimation methods suffer from accumulated error. In this paper, the concept of virtual bones is proposed to address this challenge. Virtual bones are imaginary bones between non-adjacent joints; they do not exist in reality, but they introduce new loop constraints for the estimation of 3D human joints. The network proposed in this paper predicts real bones and virtual bones simultaneously, and the total length of each loop constructed from the predicted real and virtual bones is constrained and learned. In addition, the motion constraints of joints in consecutive frames are considered: the consistency between the 2D projected position displacements predicted by the network and the real 2D displacements captured by the camera is proposed as a new projection consistency loss for learning 3D human pose. Experiments on the Human3.6M dataset demonstrate the good performance of the proposed method, and ablation studies confirm the effectiveness of the proposed inter-frame projection consistency constraint and intra-frame loop constraint.
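The two constraints this abstract describes, an intra-frame loop-length constraint over real and virtual bones and an inter-frame projection consistency loss, might look roughly as follows; the tensor shapes, loop definition, and simple pinhole camera are assumptions read off the abstract, not the authors' code.

```python
# Illustrative reading of the loop-length and projection-consistency
# constraints; shapes and camera model are assumptions.
import torch

def bone_length(joints, a, b):
    # joints: (T, J, 3) 3D joints over T frames; a, b: joint indices.
    return (joints[:, a] - joints[:, b]).norm(dim=-1)

def loop_length_loss(joints, loop):
    # loop: ordered joint indices forming a closed loop of real and
    # virtual bones; penalize variation of its total length over time.
    total = sum(bone_length(joints, loop[i], loop[(i + 1) % len(loop)])
                for i in range(len(loop)))
    return total.var()

def projection_consistency_loss(joints, focal, center, disp_2d_obs):
    # Pinhole projection of the predicted 3D joints, then match the
    # frame-to-frame 2D displacements against the observed ones.
    proj = joints[..., :2] / joints[..., 2:3] * focal + center  # (T, J, 2)
    disp_2d_pred = proj[1:] - proj[:-1]
    return torch.nn.functional.mse_loss(disp_2d_pred, disp_2d_obs)

joints = torch.randn(10, 17, 3) + torch.tensor([0.0, 0.0, 5.0])  # keep z > 0
loop = [11, 14, 8]   # hypothetical loop: two real bones plus one virtual
loss = loop_length_loss(joints, loop) + projection_consistency_loss(
    joints, 1000.0, torch.tensor([960.0, 540.0]), torch.randn(9, 17, 2))
```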
Multi-camera tracking systems are gaining popularity in applications that demand high-quality tracking results, such as frictionless checkout, because monocular multi-object tracking (MOT) systems often fail in cluttered and crowded environments due to occlusion. By recovering partial 3D information, multiple highly overlapping cameras can significantly alleviate the problem. However, the cost of creating a high-quality multi-camera tracking dataset with diverse camera settings and backgrounds has limited dataset scale in this domain. In this paper, we provide a large-scale, densely labeled multi-camera tracking dataset covering five different environments, built with the help of an auto-annotation system. The system uses overlapping, calibrated depth and RGB cameras to build a high-performance 3D tracker that automatically generates 3D tracking results. The 3D tracking results are projected to each RGB camera view using the camera parameters to create 2D tracking results. We then manually check and correct the 3D tracking results to ensure label quality, which is much cheaper than fully manual annotation. We conducted extensive experiments using two real-time multi-camera trackers and a person re-identification (ReID) model with different settings. Our dataset provides a more reliable benchmark for multi-camera, multi-object tracking systems in cluttered and crowded environments. Moreover, our results demonstrate that adapting the trackers and ReID model on this dataset significantly improves their performance. Our dataset will be publicly released upon acceptance of this work.
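The auto-annotation step that projects the 3D tracking results into each RGB view is a standard calibrated-camera projection; a minimal OpenCV sketch, where all calibration values are placeholders rather than the dataset's actual parameters:

```python
# Projecting 3D track points into one RGB view with that camera's
# calibration; all values here are placeholders.
import numpy as np
import cv2

points_3d = np.random.rand(10, 3)            # e.g., 10 joints of one person
rvec = np.zeros(3)                           # camera rotation (Rodrigues)
tvec = np.array([0.0, 0.0, 5.0])             # camera translation
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])              # intrinsic matrix
dist = np.zeros(5)                           # distortion coefficients

points_2d, _ = cv2.projectPoints(points_3d, rvec, tvec, K, dist)
points_2d = points_2d.reshape(-1, 2)         # pixel coordinates in this view
```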
Human pose estimation has been widely applied in various industries. While recent decades have witnessed the introduction of many advanced two-dimensional (2D) human pose estimation solutions, three-dimensional (3D) human pose estimation is still an active research field in computer vision. Generally speaking, 3D human pose estimation methods can be divided into two categories: single-stage and two-stage. In this paper, we focus on the 2D-to-3D lifting process in two-stage methods and propose a more advanced baseline model for 3D human pose estimation, building on existing solutions. Our improvements include the optimization of the machine learning models and of multiple parameters, as well as the introduction of a weighted loss to the training model. Finally, we used the Human3.6M benchmark to test the final performance, and it produced satisfactory results.
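The abstract mentions a weighted loss without specifying its form; one common variant for 2D-to-3D lifting is a per-joint weighted MPJPE, sketched here with an illustrative up-weighting of the extremities (an assumption, not necessarily the paper's scheme).

```python
# Per-joint weighted MPJPE sketch; the weights are illustrative.
import torch

def weighted_mpjpe(pred, target, weights):
    """pred, target: (B, J, 3) 3D joints; weights: (J,) per-joint weights."""
    per_joint_error = (pred - target).norm(dim=-1)   # (B, J)
    return (per_joint_error * weights).mean()

num_joints = 17
weights = torch.ones(num_joints)
weights[[3, 6, 13, 16]] = 2.0   # hypothetically up-weight wrists/ankles

loss = weighted_mpjpe(torch.randn(4, num_joints, 3),
                      torch.randn(4, num_joints, 3), weights)
```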
Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.
Animal pose estimation and tracking (APT) is the fundamental task of detecting and tracking animal keypoints across a sequence of video frames. Previous animal-related datasets focus either on animal tracking or on single-frame animal pose estimation, but never on both. The lack of APT datasets hinders the development and evaluation of video-based animal pose estimation and tracking methods, limiting real-world applications such as understanding animal behavior for wildlife conservation. To fill this gap, we take the first step and propose APT-36K, the first large-scale benchmark for animal pose estimation and tracking. Specifically, APT-36K consists of 2,400 video clips collected and filtered from 30 animal species, with 15 frames per video, resulting in 36,000 frames in total. After manual annotation and careful double-checking, high-quality keypoint and tracking annotations are provided for all animal instances. Based on APT-36K, we benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on single frames under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization tests on unseen animals, and (3) animal pose estimation with animal tracking. Based on the experimental results, we derive several empirical insights and show that APT-36K provides a valuable animal pose estimation and tracking benchmark, offering new challenges and opportunities for future research. The code and dataset will be made publicly available at https://github.com/pandorgan/apt-36k.
3D pose estimation is a challenging problem in computer vision. Most existing neural-network-based approaches address color or depth images with convolutional neural networks (CNNs). In this paper, we study the task of 3D human pose estimation from depth images. Different from existing CNN-based human pose estimation methods, we propose a deep human pose network for 3D pose estimation that takes point cloud data as input to model the surface of complex human structures. We first cast the 3D human pose estimation problem from 2D depth images to 3D point clouds and directly predict the 3D joint positions. Our experiments on two public datasets show that our approach achieves higher accuracy than previous state-of-the-art methods. The reported results on both the ITOP and EVAL datasets demonstrate the effectiveness of our method on the targeted tasks.
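The first step, casting 2D depth images to 3D point clouds, follows directly from the pinhole camera model; a minimal sketch, assuming known intrinsics (fx, fy, cx, cy):

```python
# Back-projecting a depth image to a 3D point cloud via the pinhole
# model; the intrinsics below are placeholders.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) depth map in meters; returns an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # back-project pixel columns
    y = (v - cy) * z / fy          # back-project pixel rows
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop invalid (zero-depth) pixels

cloud = depth_to_point_cloud(np.random.rand(240, 320), 285.0, 285.0, 160, 120)
```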
Thanks to the availability of affordable wearable cameras and large annotated datasets, applications of egocentric vision (a.k.a. first-person vision, FPV) have flourished over the past few years. The position of the wearable camera, typically mounted on the head, allows an accurate recording of exactly what the camera wearer has in front of them, in particular their hands and manipulated objects. This intrinsic advantage enables the study of hands from multiple perspectives: localizing hands and their parts within images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on hands in egocentric vision, categorizing existing approaches into: localization (where are the hands or their parts?); interpretation (what are the hands doing?); and applications (e.g., systems that solve a specific problem using egocentric hand cues). A list of the most prominent datasets with hand-based annotations is also provided.
3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the-wild". However, depending on the level of detail, it can be hard or even impossible to acquire labeled data for training 2D estimators at large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high-quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91-landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable at large scale. The data, code, and models are available for research purposes. (This work was performed while J. Romero and F. Bogo were with the MPI-IS; P. V. Gehler with the BCCN and MPI-IS.)
In this paper, we introduce a neural rendering pipeline for transferring the facial expressions, head pose, and body motion of a person in a source video to another person in a target video. We apply our method to the challenging case of sign language videos: given a source video of a signer, we can faithfully transfer the performed manual signs (e.g., handshape, palm orientation, movement, location) and non-manual signs (e.g., eye gaze, facial expression, head movement) to a target video in a photo-realistic manner. To effectively capture these cues, which are essential for sign language communication, we build upon an effective combination of the most robust and reliable deep learning methods recently introduced. Using a 3D-aware representation, the estimated motions of the body parts are combined and retargeted to the target signer, and are then given as conditional input to our video rendering network, which generates temporally consistent and photo-realistic video. We conduct detailed qualitative and quantitative evaluations and comparisons that demonstrate the effectiveness of our approach and its superiority over existing methods. Our method yields promising results of unprecedented realism and can be used for sign language anonymization. Furthermore, it is readily applicable to the reenactment of other types of full-body activities (dancing, acting, exercising, etc.), as well as to the synthesis module of sign language production systems.
This paper addresses jointly the highly correlated tasks of estimating 3D human body pose and predicting future 3D motion from an RGB image sequence. Based on a Lie algebra pose representation, a novel self-projection mechanism is proposed that naturally preserves human motion kinematics. This is further facilitated by a sequence-to-sequence, multi-task architecture based on an encoder-decoder topology, which enables us to exploit the common ground shared by the two tasks. Finally, a global refinement module is proposed to boost the performance of the framework. The effectiveness of our method, called PoMomemet, is demonstrated by ablation tests and empirical evaluations on the Human3.6M and HumanEva-I benchmarks, where it obtains performance competitive with the state of the art.
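The Lie-algebra pose representation underlying this method encodes each joint rotation as an axis-angle vector in so(3), mapped to a rotation matrix by the exponential map (Rodrigues' formula); a minimal sketch of that map:

```python
# Exponential map from so(3) (axis-angle) to SO(3) (rotation matrices).
import numpy as np

def exp_map(omega, eps=1e-8):
    """omega: (3,) axis-angle vector; returns a 3x3 rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < eps:
        return np.eye(3)                 # near-zero rotation
    k = omega / theta                    # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

R = exp_map(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree rotation about z
```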