The task of reconstructing 3D human motion has wide-ranging applications. Gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, thus making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not present in the MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field, which is optimized to represent the underlying 3D motion across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo with 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
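As a rough illustration of fitting one shared motion representation to many videos of the same action, the sketch below implements a toy neural motion field in PyTorch: a phase-conditioned MLP shared across videos, per-video latent codes, and a 2D reprojection loss. The module names, dimensions, and the simplified camera model are assumptions for illustration, not the NeMo implementation.

```python
# Illustrative sketch (not the authors' code): a shared neural motion field
# queried by action phase, with per-video codes, fit to 2D keypoints.
import torch
import torch.nn as nn

class NeuralMotionField(nn.Module):
    def __init__(self, n_videos, n_joints=17, hidden=256):
        super().__init__()
        # Shared field: action phase in [0, 1] -> 3D joint positions.
        self.field = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints * 3),
        )
        # Per-video latent codes account for instance-specific variation.
        self.video_code = nn.Embedding(n_videos, hidden)
        self.head = nn.Linear(hidden + n_joints * 3, n_joints * 3)

    def forward(self, phase, video_idx):
        base = self.field(phase.unsqueeze(-1))            # (B, J*3)
        code = self.video_code(video_idx)                 # (B, hidden)
        out = self.head(torch.cat([base, code], dim=-1))  # (B, J*3)
        return out.view(-1, base.shape[-1] // 3, 3)       # (B, J, 3)

def reprojection_loss(joints_3d, keypoints_2d, cam_f=1000.0):
    # Weak-perspective-style projection; the camera handling is a placeholder.
    proj = cam_f * joints_3d[..., :2] / (joints_3d[..., 2:3] + 1e3)
    return ((proj - keypoints_2d) ** 2).mean()
```

The field would be optimized jointly over all videos of an action so that the shared 3D motion explains every instance's detected 2D keypoints.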
We present a new method for generating controllable, dynamically responsive, and photorealistic human animations. Given an image of a person, our system allows the user to generate Physically plausible Upper Body Animation (PUBA) using interaction in the image space, such as dragging their hand to various locations. We formulate a reinforcement learning problem to train a dynamic model that predicts the person's next 2D state (i.e., keypoints on the image) conditioned on a 3D action (i.e., joint torque), and a policy that outputs optimal actions to control the person to achieve desired goals. The dynamic model leverages the expressiveness of 3D simulation and the visual realism of 2D videos. PUBA generates 2D keypoint sequences that achieve task goals while being responsive to forceful perturbation. The sequences of keypoints are then translated by a pose-to-image generator to produce the final photorealistic video.
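A minimal sketch of the stated formulation, assuming 17 image keypoints and a flattened joint-torque action: a learned 2D dynamics model conditioned on 3D actions, plus a goal-conditioned policy. Shapes and architectures are illustrative placeholders, not the PUBA models.

```python
# Illustrative sketch (assumed shapes, not the paper's implementation):
# a 2D-keypoint dynamics model driven by 3D joint torques, and a
# goal-conditioned policy that outputs torques to reach image-space targets.
import torch
import torch.nn as nn

N_KPTS, TORQUE_DIM = 17, 23 * 3  # assumed dimensions

class Dynamics2D(nn.Module):
    """Predicts the next 2D keypoint state from the current state and a 3D action."""
    def __init__(self, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_KPTS * 2 + TORQUE_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_KPTS * 2),
        )

    def forward(self, kpts_2d, torque):
        x = torch.cat([kpts_2d.flatten(1), torque], dim=-1)
        return kpts_2d + self.net(x).view(-1, N_KPTS, 2)   # residual update

class Policy(nn.Module):
    """Maps (current keypoints, goal keypoints) to a 3D torque action."""
    def __init__(self, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_KPTS * 4, hidden), nn.ReLU(),
            nn.Linear(hidden, TORQUE_DIM), nn.Tanh(),
        )

    def forward(self, kpts_2d, goal_2d):
        return self.net(torch.cat([kpts_2d.flatten(1), goal_2d.flatten(1)], dim=-1))
```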
Estimating 3D human motion from an egocentric video sequence is critical to human behavior understanding and applications in VR/AR. However, naively learning a mapping between egocentric videos and human motions is challenging, because the user's body is often unobserved by the front-facing camera placed on the head of the user. In addition, collecting large-scale, high-quality datasets with paired egocentric videos and 3D human motions requires accurate motion capture devices, which often limit the variety of scenes in the videos to lab-like environments. To eliminate the need for paired egocentric video and human motions, we propose a new method, Ego-Body Pose Estimation via Ego-Head Pose Estimation (EgoEgo), that decomposes the problem into two stages, connected by the head motion as an intermediate representation. EgoEgo first integrates SLAM and a learning approach to estimate accurate head motion. Then, taking the estimated head pose as input, it leverages conditional diffusion to generate multiple plausible full-body motions. This disentanglement of head and body pose eliminates the need for training datasets with paired egocentric videos and 3D human motion, enabling us to leverage large-scale egocentric video datasets and motion capture datasets separately. Moreover, for systematic benchmarking, we develop a synthetic dataset, AMASS-Replica-Ego-Syn (ARES), with paired egocentric videos and human motion. On both ARES and real data, our EgoEgo model performs significantly better than the state-of-the-art.
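To make the two-stage decomposition concrete, the sketch below wires an assumed per-frame conditional denoiser to a compact DDPM-style sampling loop conditioned on head pose. The dimensions, noise schedule, and per-frame simplification are illustrative assumptions, not the EgoEgo architecture.

```python
# Stage-2 interface sketch (the SLAM/learning head-pose stage and the real
# diffusion architecture are simplified away): head pose -> conditional
# diffusion sampling of full-body motion.
import torch
import torch.nn as nn

class BodyDenoiser(nn.Module):
    """Predicts the noise added to a body pose, conditioned on head motion."""
    def __init__(self, body_dim=22 * 6, head_dim=9, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(body_dim + head_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, body_dim),
        )

    def forward(self, noisy_body, head_pose, t):
        return self.net(torch.cat([noisy_body, head_pose, t], dim=-1))

@torch.no_grad()
def sample_body_motion(denoiser, head_pose, steps=50, body_dim=22 * 6):
    """Very small DDPM-style reverse loop; the schedule is illustrative."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(head_pose.shape[0], body_dim)
    for i in reversed(range(steps)):
        t = torch.full((head_pose.shape[0], 1), i / steps)
        eps = denoiser(x, head_pose, t)
        x = (x - betas[i] / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
        if i > 0:
            x = x + torch.sqrt(betas[i]) * torch.randn_like(x)
    return x
```

Because sampling is stochastic, repeated calls naturally produce multiple plausible body motions for the same estimated head trajectory.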
Falls are a leading cause of fatal and non-fatal injuries, especially for older adults. Imbalance can result from internal causes (e.g., illness) or external causes (e.g., active or passive perturbations). Active perturbations are the result of an external force applied to a person, while passive perturbations result from human motion interacting with a static obstacle. This work proposes a metric that allows the torso and its correlation with active and passive perturbations to be monitored. We show that large changes in torso sway can be strongly correlated with active perturbations. We also show that, by conditioning on the past trajectory, torso motion, and the surrounding scene, we can reasonably predict the future path and the expected change in torso sway. This has direct applications in fall prevention. The results show that torso sway is strongly correlated with perturbations, and that our model is able to exploit the visual cues present in the panoramic view and modulate its predictions accordingly.
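The abstract does not define the sway metric itself; as a purely hypothetical proxy, one could measure the lean angle of the pelvis-to-neck segment and its rolling variance, as sketched below (both the keypoint choice and the window length are assumptions).

```python
# Hypothetical torso-sway proxy (the abstract does not specify the exact metric):
# lean angle of the pelvis->neck segment from vertical, plus its rolling variance.
import numpy as np

def torso_sway(neck_xy: np.ndarray, pelvis_xy: np.ndarray, window: int = 30):
    """neck_xy, pelvis_xy: (T, 2) image-plane keypoints; returns angle and rolling variance."""
    seg = neck_xy - pelvis_xy                      # torso segment per frame
    angle = np.arctan2(seg[:, 0], -seg[:, 1])      # 0 when upright (image y points down)
    var = np.array([angle[max(0, t - window):t + 1].var() for t in range(len(angle))])
    return angle, var
```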
To fully leverage the versatility of a multi-fingered dexterous robotic hand for object grasping, the complex physical constraints introduced by hand-object interaction and object geometry must be satisfied during grasp planning. We present an integrative approach that combines a generative model with bilevel optimization to compute diverse grasps for novel, unseen objects. First, grasp predictions are obtained from a conditional variational autoencoder trained on only six YCB objects. Then, the predictions are projected onto the manifold of kinematically and dynamically feasible grasps by jointly solving collision-aware inverse kinematics, force closure, and friction constraints as a non-convex bilevel optimization. We demonstrate the effectiveness of our method on hardware by successfully grasping a variety of unseen household objects, including ones that are challenging for other types of robotic grippers. A video summary of our results is available at https://youtu.be/9dtrimbn99i.
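A toy version of the projection step, under heavy assumptions: a grasp sampled from the generative model is nudged toward feasibility by a small constrained optimization, here with only a non-penetration constraint against a sphere SDF standing in for the full collision-aware IK, force-closure, and friction constraints.

```python
# Illustrative projection step (not the paper's solver): project a sampled grasp
# so that fingertips satisfy a toy non-penetration constraint against a sphere SDF.
import numpy as np
from scipy.optimize import minimize

def sphere_sdf(points, center=np.zeros(3), radius=0.05):
    """Signed distance of fingertip points to a sphere obstacle (toy object model)."""
    return np.linalg.norm(points - center, axis=-1) - radius

def project_grasp(q_pred, fingertip_fk, min_clearance=0.0):
    """q_pred: sampled hand configuration; fingertip_fk: q -> (F, 3) fingertip positions."""
    def objective(q):                      # stay close to the generative sample
        return np.sum((q - q_pred) ** 2)

    def nonpenetration(q):                 # fingertips must lie outside the object surface
        return sphere_sdf(fingertip_fk(q)) - min_clearance

    res = minimize(objective, q_pred, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": nonpenetration}])
    return res.x

# Toy usage: a "hand" whose fingertips are fixed offsets of a 3D palm position.
offsets = np.array([[0.06, 0.0, 0.0], [0.0, 0.06, 0.0], [0.0, 0.0, 0.06]])
fk = lambda q: q.reshape(1, 3) + offsets
q_feasible = project_grasp(np.array([0.01, 0.0, 0.0]), fk)
```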
Predicting human motion is critical for assistive robots and AR/VR applications, where interaction with humans needs to be safe and comfortable. Meanwhile, accurate prediction depends on understanding both the scene context and human intent. Although many works study scene-aware human motion prediction, the latter remains largely underexplored due to the lack of egocentric views that reveal human intent and the limited diversity of motions and scenes. To reduce this gap, we propose a large-scale human motion dataset that provides high-quality body pose sequences, scene scans, and egocentric views with eye gaze, which serves as a surrogate for inferring human intent. By capturing motion with inertial sensors, our data collection is not tied to specific scenes, which further enriches the motion dynamics observed from our subjects. We conduct an extensive study of the benefits of leveraging eye gaze for egocentric human motion prediction with various state-of-the-art architectures. Moreover, to realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches. Our network achieves the best performance in human motion prediction on the proposed dataset, thanks to the intent information carried by eye gaze and the denoised gaze features modulated by motion. Code and data can be found at https://github.com/y-zheng18/gimo.
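One plausible way to realize bidirectional communication between the two branches is symmetric cross-attention, sketched below; the layer sizes and fusion scheme are assumptions rather than the released GIMO model.

```python
# Sketch of bidirectional gaze<->motion communication via cross-attention
# (dimensions and the fusion scheme are assumptions, not the released model).
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.motion_from_gaze = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gaze_from_motion = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, motion_feat, gaze_feat):
        # The motion branch queries gaze features (intent cues), and the gaze
        # branch queries motion features (to modulate/denoise gaze), in both directions.
        m2g, _ = self.motion_from_gaze(motion_feat, gaze_feat, gaze_feat)
        g2m, _ = self.gaze_from_motion(gaze_feat, motion_feat, motion_feat)
        return motion_feat + m2g, gaze_feat + g2m

# Usage with dummy sequences: (batch, time, feature)
fusion = BidirectionalFusion()
motion, gaze = torch.randn(2, 30, 256), torch.randn(2, 30, 256)
motion_out, gaze_out = fusion(motion, gaze)
```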
Real-time human motion reconstruction from a sparse set (e.g., six) of wearable IMUs provides a non-intrusive and economical approach to motion capture. Without the ability to acquire positional information directly from IMUs, recent works have adopted data-driven approaches that leverage large human motion datasets to tackle this under-determined problem. Nevertheless, challenges remain, such as temporal consistency, drift in global and joint motions, and diverse coverage of motion types on various terrains. We propose a novel method that simultaneously estimates full-body motion and generates plausible visited terrain from six IMU sensors in real time. Our method incorporates: 1. a conditional Transformer decoder model that provides consistent predictions by explicitly reasoning over the prediction history, 2. a simple yet general learning objective, called "stationary body points" (SBP), which can be stably predicted by the Transformer model and used by analytical routines to correct joint and global drift, and 3. an algorithm that generates regularized terrain height maps from noisy SBP predictions, which can in turn correct noisy global motion estimation. We extensively evaluate our framework on synthetic and real IMU data, as well as with real-time live demos, and show superior performance over strong baseline methods.
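The drift-correction idea behind SBPs can be illustrated with a minimal routine: whenever a body point is predicted to be stationary, the global root estimate is shifted so that the point stays at its anchored world position. The exact analytical routine in the paper may differ.

```python
# Sketch of SBP-based drift correction (details are assumptions): anchor a body
# point while it is predicted stationary, and pull the root back by its drift.
import numpy as np

def correct_drift(root_pos, body_point_world, is_stationary, anchor):
    """
    root_pos:         (3,) current global root translation estimate
    body_point_world: (3,) current world position of a tracked body point (e.g., a foot)
    is_stationary:    bool, the model's SBP prediction for this frame
    anchor:           (3,) or None, world position where the point was last anchored
    """
    if is_stationary:
        if anchor is None:
            anchor = body_point_world.copy()      # start a new stationary segment
        correction = anchor - body_point_world    # how far the point has drifted
        root_pos = root_pos + correction          # remove that drift from the root
    else:
        anchor = None                             # the point is moving again
    return root_pos, anchor
```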
Recent research in embodied AI has used simulation environments to develop and train robot learning approaches. However, the use of simulation has so far been limited to tasks that robot simulators can simulate: tasks involving motion and physical contact. We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations. First, iGibson 2.0 supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, to cover a wider range of tasks. Second, iGibson 2.0 implements a set of predicate logic functions that map simulator states to logical states such as Cooked or Soaked. Additionally, given a logical state, iGibson 2.0 can sample valid physical states that satisfy it. This functionality can generate potentially infinite instances of tasks with minimal effort from the users. The sampling mechanism allows our scenes to be more densely populated with small objects in semantically meaningful locations. Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations. As a result, we can collect human demonstrations of these new types of tasks and use them for imitation learning. We evaluate the new capabilities of iGibson 2.0 to enable robot learning of novel tasks, in the hope of demonstrating the potential of this new simulator to support new research in embodied AI. iGibson 2.0 and its new dataset are publicly available at http://svl.stanford.edu/igibson/.
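The predicate-logic mechanism can be pictured with a toy `Cooked` predicate: a check that maps continuous simulator state to a logical value, and a sampler that produces a physical state satisfying a requested logical value. Names and thresholds are illustrative, not the iGibson 2.0 API.

```python
# Toy predicate sketch (illustrative only, not the iGibson 2.0 API).
import random

class Cooked:
    """Maps a continuous object state (temperature history) to a logical predicate."""
    def __init__(self, cook_temperature=70.0):
        self.cook_temperature = cook_temperature

    def get_value(self, obj_state):
        return obj_state["max_temperature"] >= self.cook_temperature

    def sample(self, obj_state, value=True):
        # Produce a physical state consistent with the requested logical value.
        if value:
            obj_state["max_temperature"] = random.uniform(self.cook_temperature, 120.0)
        else:
            obj_state["max_temperature"] = random.uniform(20.0, self.cook_temperature - 1.0)
        return obj_state

# Usage
apple = {"max_temperature": 25.0}
cooked = Cooked()
print(cooked.get_value(apple))                 # False
print(cooked.get_value(cooked.sample(apple)))  # True
```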
Accurately modeling the real-world contact behaviors of near-rigid materials remains a major challenge for existing rigid-body physics simulators. This paper introduces a data-augmented contact model that combines analytical solutions with observed data to predict the 3D contact impulse that can cause rigid bodies to bounce, slide, or spin in various directions. Our method enhances the expressiveness of the standard Coulomb contact model by learning contact behaviors from observed data, while preserving the fundamental contact constraints whenever possible. For example, a classifier is trained to approximate the transition between static and dynamic friction, while the non-penetration constraint during collision is enforced analytically. Our method computes the aggregate effect of contacts over the entire rigid body, instead of predicting the contact force at each contact point individually, maintaining the same simulation speed as the number of contact points increases for detailed geometries. Supplementary video: https://shorturl.at/eilwx Keywords: physics simulation algorithms, dynamics learning, contact learning
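A toy 2D version of the hybrid idea is sketched below: the non-penetration condition (a non-negative normal impulse) is enforced analytically, while a classifier standing in for the trained model decides the stick/slip transition before a Coulomb-style tangential impulse is applied. The feature choices are assumptions.

```python
# Sketch of a data-augmented Coulomb step on a toy 2D contact (not the paper's model).
import numpy as np

def contact_impulse(v_normal, v_tangent, mass, mu, stick_classifier):
    """Aggregate contact impulse for one body: (normal, tangential) components."""
    # Non-penetration enforced analytically: stop approaching motion, never pull inward.
    jn = max(0.0, -mass * v_normal)

    # Learned part: does this contact stick or slip? (features are illustrative)
    features = np.array([v_normal, v_tangent, mu])
    if stick_classifier(features):
        jt = -mass * v_tangent                 # sticking: cancel tangential velocity
        jt = np.clip(jt, -mu * jn, mu * jn)    # but stay inside the friction cone
    else:
        jt = -np.sign(v_tangent) * mu * jn     # slipping: dynamic Coulomb friction
    return jn, jt

# Toy classifier standing in for the trained model.
stick = lambda f: abs(f[1]) < 0.05
print(contact_impulse(v_normal=-0.3, v_tangent=0.02, mass=1.0, mu=0.5, stick_classifier=stick))
```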
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
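As a generic illustration of reading out both a global attribute and dense microstructure labels from the same image, the sketch below attaches a slice-level region classifier and a pixel-level segmentation head to a shared encoder; the architecture is an assumption, not one of the benchmark's reference models.

```python
# Generic multi-task readout sketch (assumed architecture, not the MTNeuro baselines).
import torch
import torch.nn as nn

class MultiTaskReadout(nn.Module):
    def __init__(self, n_regions=4, n_microstructure_classes=4, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared representation
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.region_head = nn.Sequential(                  # global (slice-level) readout
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, n_regions),
        )
        self.seg_head = nn.Conv2d(feat, n_microstructure_classes, 1)  # dense readout

    def forward(self, x):                                  # x: (B, 1, H, W) X-ray slice
        h = self.encoder(x)
        return self.region_head(h), self.seg_head(h)

model = MultiTaskReadout()
region_logits, seg_logits = model(torch.randn(2, 1, 128, 128))
```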