This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de.
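The compositional VQ-VAE mentioned above is, at its core, a discrete bottleneck over continuous motion features. The sketch below shows only the standard nearest-neighbour codebook lookup with a straight-through gradient, in PyTorch; the dimensions, codebook sizes, and the exact way TalkSHOW composes body and hand codebooks are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MotionQuantizer(nn.Module):
    """Minimal VQ bottleneck: map continuous motion features to discrete codes."""

    def __init__(self, num_codes: int = 256, code_dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z_e: torch.Tensor) -> torch.Tensor:
        # z_e: (batch, frames, code_dim) continuous encoder output
        # squared distance of every frame feature to every codebook entry
        dists = (z_e.unsqueeze(-2) - self.codebook.weight).pow(2).sum(-1)  # (B, T, num_codes)
        idx = dists.argmin(dim=-1)                  # discrete motion tokens
        z_q = self.codebook(idx)                    # quantized latents, (B, T, code_dim)
        # straight-through estimator so gradients reach the encoder
        return z_e + (z_q - z_e).detach()


# Two quantizers, one for the body and one for the hands, hint at the compositional idea.
body_vq, hand_vq = MotionQuantizer(), MotionQuantizer()
tokens = body_vq(torch.randn(2, 30, 64))
```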
Animating portraits using speech has received growing attention in recent years, with various creative and practical use cases. An ideal generated video should have good lip sync with the audio, natural facial expressions and head motions, and high frame quality. In this work, we present SPACE, which uses speech and a single image to generate high-resolution, expressive videos with realistic head poses, without requiring a driving video. It uses a multi-stage approach, combining the controllability of facial landmarks with the high-quality synthesis power of a pretrained face generator. SPACE also allows for the control of emotions and their intensities. Our method outperforms prior methods in objective metrics for image quality and facial motions and is strongly preferred by users in pair-wise comparisons. The project website is available at https://deepimagination.cc/SPACE/
Speech-driven 3D facial animation has been widely explored, with applications in gaming, character animation, virtual reality, and telepresence systems. State-of-the-art methods deform the face topology of the target actor to sync the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor, thus resulting in unrealistic and inaccurate lip movements. To address this, we present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor. Specifically, we train a style-agnostic transformer on a large facial expression dataset which we use as a prior for audio-driven facial expressions. Based on this prior, we optimize for identity-specific speaking style based on a short reference video. To train the prior, we introduce a novel loss function based on detected bilabial consonants to ensure plausible lip closures and consequently improve the realism of the generated expressions. Through detailed experiments and a user study, we show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
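The bilabial-consonant loss described above can be pictured as a per-frame penalty on the lip gap wherever a phoneme aligner flags /b/, /p/ or /m/. The sketch below is a hedged approximation: the vertex indices, the aligner that would produce `bilabial_mask`, and the plain mean-gap formulation are assumptions for illustration, not Imitator's actual loss.

```python
import torch

UPPER_LIP_IDS = torch.tensor([10, 11, 12])   # placeholder vertex indices
LOWER_LIP_IDS = torch.tensor([20, 21, 22])   # placeholder vertex indices


def lip_closure_loss(pred_verts: torch.Tensor, bilabial_mask: torch.Tensor) -> torch.Tensor:
    """pred_verts: (T, V, 3) predicted face-mesh vertices; bilabial_mask: (T,) bool."""
    upper = pred_verts[:, UPPER_LIP_IDS, :]              # (T, K, 3)
    lower = pred_verts[:, LOWER_LIP_IDS, :]
    gap = (upper - lower).norm(dim=-1).mean(dim=-1)      # mean upper/lower lip distance per frame
    w = bilabial_mask.float()
    return (gap * w).sum() / w.sum().clamp(min=1.0)      # average gap on bilabial frames only
```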
While previous speech-driven talking-face generation methods have made significant progress in improving the visual quality and lip-sync quality of synthesized videos, they pay less attention to lip motion jitter, which greatly undermines the realism of talking-face videos. What causes motion jitter, and how can the problem be mitigated? In this paper, we conduct a systematic analysis of the motion jitter problem based on a state-of-the-art pipeline that uses 3D face representations to bridge the input audio and the output video, and we improve motion stability through a series of effective designs. We find that several issues can cause jitter in synthesized talking-face videos: 1) jitter in the input 3D face representations; 2) training-inference mismatch; 3) a lack of dependency modeling between video frames. Accordingly, we propose three effective solutions: 1) a Gaussian-based adaptive smoothing module that smooths the 3D face representations to eliminate jitter in the input; 2) augmented erosions added to the input data of the neural renderer during training to simulate the degradation at inference and reduce the mismatch; 3) an audio-fused transformer generator that models the dependencies between video frames. In addition, since there is no off-the-shelf metric for measuring motion jitter in talking-face videos, we design an objective metric (Motion Stability Index, MSI) that quantifies motion jitter by computing the reciprocal of the variance of acceleration. Extensive experimental results demonstrate the superiority of our method for motion-stable talking-face video generation, with better quality than previous systems.
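Taking the description of the Motion Stability Index at face value (the reciprocal of the variance of acceleration), a minimal NumPy reading might look as follows; the exact per-parameter weighting and normalization used in the paper are not specified here, so this is only an illustrative sketch.

```python
import numpy as np


def motion_stability_index(params: np.ndarray, eps: float = 1e-8) -> float:
    """params: (T, D) per-frame motion parameters (e.g. landmarks or pose coefficients)."""
    accel = np.diff(params, n=2, axis=0)                  # second temporal difference ~ acceleration
    return float(1.0 / (accel.var(axis=0).mean() + eps))  # larger MSI = steadier motion


rng = np.random.default_rng(0)
steady = np.cumsum(np.full((100, 3), 0.01), axis=0)       # constant velocity, near-zero acceleration
jittery = steady + rng.normal(0.0, 0.05, steady.shape)    # add frame-wise noise
assert motion_stability_index(steady) > motion_stability_index(jittery)
```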
Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data. Prior works typically focus on learning phoneme-level features from short audio windows with limited context, occasionally resulting in inaccurate lip movements. To address this limitation, we propose a transformer-based autoregressive model, FaceFormer, which encodes long-term audio context and autoregressively predicts a sequence of animated 3D face meshes. To cope with the data scarcity issue, we integrate self-supervised pre-trained speech representations. In addition, we devise two biased attention mechanisms well suited to this specific task, including a biased cross-modal multi-head (MH) attention and a biased causal MH self-attention with a periodic positional encoding strategy. The former effectively aligns the audio and motion modalities, while the latter offers the ability to generalize to longer audio sequences. Extensive experiments and a perceptual user study show that our approach outperforms the existing state of the art. The code will be made available.
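The two attention ingredients named above, a periodic positional encoding and a biased causal mask, can be sketched roughly as below. The distance-based penalty and the period value are assumptions for illustration; FaceFormer's exact bias terms may differ.

```python
import torch


def periodic_positional_encoding(T: int, dim: int, period: int = 25) -> torch.Tensor:
    """Sinusoidal encoding whose positions repeat every `period` frames (dim assumed even)."""
    pos = (torch.arange(T) % period).float()
    i = torch.arange(0, dim, 2).float()
    angles = pos[:, None] / (10000.0 ** (i[None, :] / dim))
    pe = torch.zeros(T, dim)
    pe[:, 0::2], pe[:, 1::2] = torch.sin(angles), torch.cos(angles)
    return pe


def biased_causal_mask(T: int, period: int = 25) -> torch.Tensor:
    """Additive attention bias: the further back a past period lies, the larger the penalty."""
    i = torch.arange(T)[:, None]
    j = torch.arange(T)[None, :]
    bias = -torch.div(i - j, period, rounding_mode="floor").float()
    return bias.masked_fill(j > i, float("-inf"))         # causal: future frames are masked out
```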
Co-speech gesture is crucial for human-machine interaction and digital entertainment. While previous works mostly map speech audio to human skeletons (e.g., 2D keypoints), directly generating speakers' gestures in the image domain remains unsolved. In this work, we formally define and study this challenging problem of audio-driven co-speech gesture video generation, i.e., using a unified framework to generate speaker image sequence driven by speech audio. Our key insight is that the co-speech gestures can be decomposed into common motion patterns and subtle rhythmic dynamics. To this end, we propose a novel framework, Audio-driveN Gesture vIdeo gEneration (ANGIE), to effectively capture the reusable co-speech gesture patterns as well as fine-grained rhythmic movements. To achieve high-fidelity image sequence generation, we leverage an unsupervised motion representation instead of a structural human body prior (e.g., 2D skeletons). Specifically, 1) we propose a vector quantized motion extractor (VQ-Motion Extractor) to summarize common co-speech gesture patterns from implicit motion representation to codebooks. 2) Moreover, a co-speech gesture GPT with motion refinement (Co-Speech GPT) is devised to complement the subtle prosodic motion details. Extensive experiments demonstrate that our framework renders realistic and vivid co-speech gesture video. Demo video and more resources can be found in: https://alvinliu0.github.io/projects/ANGIE
Figure 1: Given challenging in-the-wild videos, a recent state-of-the-art video-pose-estimation approach [31] (top) fails to produce accurate 3D body poses. To address this, we exploit a large-scale motion-capture dataset to train a motion discriminator using an adversarial approach. Our model (VIBE) (bottom) is able to produce realistic and accurate pose and shape, outperforming previous work on standard benchmarks.
Human speech is often accompanied by body gestures, including arm and hand gestures. We present a method that reenacts gestures to match a target speech audio. The key idea of our method is to split and re-assemble clips from a reference video through a novel video motion graph that encodes valid transitions between clips. To seamlessly connect different clips in the reenactment, we propose a pose-aware video blending network that synthesizes the video frames around the stitch point between two clips. In addition, we develop an audio-based gesture searching algorithm to find the optimal order of the reenacted frames. Our system generates reenactments that are consistent with both the audio rhythm and the speech content. We evaluate the synthesized video quality quantitatively and through user studies, and demonstrate that our method produces videos of higher quality and consistency with the target audio compared to previous work and baselines.
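A video motion graph of the kind described above connects frames of different clips whenever a jump between them would be visually plausible. The toy sketch below builds candidate transition edges from a simple pose-distance threshold; the pose features, distance metric, and threshold are placeholders rather than the paper's actual graph construction.

```python
import numpy as np


def motion_graph_edges(poses_a: np.ndarray, poses_b: np.ndarray, thresh: float = 0.1):
    """poses_a: (Ta, D), poses_b: (Tb, D) per-frame pose features of two reference clips."""
    dists = np.linalg.norm(poses_a[:, None, :] - poses_b[None, :, :], axis=-1)  # (Ta, Tb)
    ia, ib = np.nonzero(dists < thresh)
    # each pair (frame in clip a -> frame in clip b) is a candidate transition edge
    return list(zip(ia.tolist(), ib.tolist()))


edges = motion_graph_edges(np.random.rand(50, 24), np.random.rand(60, 24), thresh=0.5)
```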
We aim to tackle the interesting yet challenging problem of generating videos of diverse and natural human motions from prescribed action categories. The key issue lies in the ability to synthesize multiple distinct motion sequences that are realistic in their visual appearance. This is achieved in this paper by a two-step process that maintains an internal 3D pose and shape representation: Action2Motion and Motion2Video. Action2Motion stochastically generates plausible 3D pose sequences of a prescribed action category, which are processed and rendered by Motion2Video to form 2D videos. Specifically, a Lie-algebra-based representation is adopted to represent natural human motions following the physical laws of human kinematics, and a temporal variational autoencoder (VAE) is developed to encourage diversity of the output motions. Moreover, given an additional input image of a clothed person, an entire pipeline is proposed to extract his/her detailed 3D shape and to render plausible motions from different views in videos. This is achieved by improving existing methods for extracting 3D human shape and texture from a single 2D image, and by rigging, animating, and rendering the result to form 2D videos of human motions. It also necessitates the curation and reorganization of 3D human motion datasets for training purposes. Thorough empirical experiments, including ablation studies and qualitative and quantitative evaluations, demonstrate the applicability of our approach and its competitiveness in addressing related tasks, where components of our method compare favorably to the state of the art.
Although recent advances in deep neural networks have made it possible to render high-quality images, generating photo-realistic and personalized talking heads remains challenging. Given an audio input, the key to this task is to synchronize lip motion while generating personalized attributes such as head movements and eye blinks. In this work, we observe that the input audio is highly correlated with lip motion but less correlated with other personalized attributes (e.g., head movements). Inspired by this, we propose a novel framework based on neural radiance fields to pursue high-fidelity and personalized talking-head generation. Specifically, the neural radiance field takes lip-motion features and personalized attributes as two disentangled conditions, where the lip motions are predicted directly from the audio input to achieve lip-synchronized generation. Meanwhile, the personalized attributes are sampled from a probabilistic model, for which we design a transformer-based variational autoencoder sampled from Gaussian processes to learn plausible and natural head poses and eye blinks. Experiments on several benchmarks show that our method achieves significantly better results than state-of-the-art methods.
Speech-driven 3D facial animation with accurate lip synchronization has been widely studied. However, synthesizing realistic motion for the entire face during speech has rarely been explored. In this work, we introduce a joint audio-text model to capture contextual information for expressive speech-driven 3D facial animation. Existing datasets are collected to cover as many different phonemes as possible rather than sentences, which limits the ability of audio-based models to learn more diverse contexts. To address this, we propose to leverage contextual text embeddings extracted from powerful pre-trained language models that have learned rich contextual representations from large-scale text data. Our hypothesis is that text features can disambiguate variations in upper-face expressions, which are not strongly correlated with the audio. In contrast to prior approaches that learn phoneme-level features from text, we investigate high-level contextual text features for speech-driven 3D facial animation. We show that the combined acoustic and textual modalities can synthesize realistic facial expressions while maintaining lip synchronization. We conduct quantitative and qualitative evaluations as well as a perceptual user study. The results demonstrate the superior performance of our model against existing state-of-the-art approaches.
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters and outputs diverse sequences of full-body motions. This is in contrast to existing work, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational autoencoders (CVAE) to learn the motion of the body parts in an autoregressive manner. We also optimize for the positions of the objects with six degrees of freedom (6DoF) such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis. Through a user study, we further show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios compared to the current state-of-the-art methods, and are perceived to be as good as the ground truth on several occasions.
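The autoregressive use of a conditional VAE decoder described above can be summarized as: sample a latent, decode the next pose conditioned on the intent and the last pose, repeat. The sketch below assumes a hypothetical `decoder` module and latent size, and omits the decoupling into separate body-part CVAEs and the 6DoF object optimization.

```python
import torch


def rollout(decoder, intent_emb: torch.Tensor, init_pose: torch.Tensor,
            steps: int = 60, latent_dim: int = 32) -> torch.Tensor:
    """decoder(z, intent_emb, prev_pose) -> next_pose is a hypothetical CVAE decoder."""
    poses = [init_pose]
    for _ in range(steps):
        z = torch.randn(init_pose.size(0), latent_dim)    # sample from the CVAE prior
        poses.append(decoder(z, intent_emb, poses[-1]))   # condition on intent and last pose
    return torch.stack(poses, dim=1)                      # (B, steps + 1, pose_dim)
```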
The recent state of the art on monocular 3D face reconstruction from image data has made impressive advances thanks to the advent of deep learning. However, it has mostly focused on input from a single RGB image, overlooking the following important factors: a) Nowadays, the vast majority of facial image data of interest do not originate from single images but rather from videos, which contain rich dynamic information. b) Furthermore, these videos typically capture individuals in some form of verbal communication (public talks, teleconferences, audiovisual human-computer interaction, interviews, monologues/dialogues in movies, etc.). When existing 3D face reconstruction methods are applied to such videos, the artifacts in the reconstructed shape and motion of the mouth region are often severe, since they do not match the speech audio well. To overcome these limitations, we present the first method for visual speech-aware perceptual reconstruction of 3D mouth expressions. We do this by proposing a "lipread" loss, which guides the fitting process so that the elicited perception of the 3D reconstructed talking head is similar to that of the original video footage. We demonstrate that, interestingly, the lipread loss is better suited for 3D reconstruction of mouth movements compared to traditional landmark losses and even direct 3D supervision. Furthermore, the devised method does not rely on any text transcriptions or corresponding audio, rendering it ideally suited for training on unlabeled datasets. We verify the efficiency of our method through exhaustive objective evaluations on three large-scale datasets, as well as subjective evaluations with two web-based user studies.
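One way to read the "lipread" loss described above is as a perceptual distance in the feature space of a frozen lip-reading network, compared between rendered and original mouth crops. The sketch below assumes a hypothetical feature extractor `lipread_net` and a cosine distance; the paper's exact network and distance are not reproduced here.

```python
import torch
import torch.nn.functional as F


def lipread_loss(rendered_mouth: torch.Tensor,
                 real_mouth: torch.Tensor,
                 lipread_net: torch.nn.Module) -> torch.Tensor:
    """Both inputs: (T, C, H, W) mouth crops; lipread_net is a frozen feature extractor."""
    with torch.no_grad():
        target = lipread_net(real_mouth)          # features of the original footage
    pred = lipread_net(rendered_mouth)            # gradients flow back into the 3D fit / renderer
    return 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
```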
Production-level workflows for producing convincing 3D dynamic human faces have long relied on a variety of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components, but the corresponding latent representations do not provide artists with the explicit controls that conventional tools do. In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometry with high-quality physically-based assets. For data collection, we construct a hybrid multi-view photometric capture stage coupled with ultra-fast cameras to obtain raw 3D facial assets. We then model the facial expression, geometry, and physically-based textures with separate VAEs, and impose a global MLP-based expression mapping across the latent spaces of the individual networks to preserve the characteristics of each attribute. We also model the delta information as wrinkle maps for the physically-based textures, achieving high-quality 4K dynamic textures. We demonstrate our approach on high-fidelity performer-specific facial capture and cross-identity facial motion retargeting. In addition, our multi-VAE-based neural asset, together with fast adaptation schemes, can also be deployed to handle in-the-wild videos. Furthermore, we motivate the utility of our explicit facial disentangling strategy by providing a variety of promising physically-based editing results with high realism. Comprehensive experiments show that our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods.
Fig. 1. Unlike current face reenactment approaches that only modify the expression of a target actor in a video, our novel deep video portrait approach enables full control over the target by transferring the rigid head pose, facial expression and eye motion with a high level of photorealism. We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network, thus taking full control of the target.
We propose StyleTalker, a novel audio-driven talking-head generation model that can synthesize a video of a talking person from a single reference image, with accurately audio-synced lip shapes, realistic head poses, and eye blinks. Specifically, by leveraging a pretrained image generator and an image encoder, we estimate latent codes of the talking-head video that faithfully reflect the given audio. This is made possible by several newly devised components: 1) a contrastive lip-sync discriminator for accurate lip synchronization; 2) a conditional sequential variational autoencoder that learns a latent motion space disentangled from lip movements, so that we can independently manipulate head motions and lip movements while preserving identity; and 3) an autoregressive prior augmented with normalizing flows to learn a complex audio-to-motion multimodal latent space. Equipped with these components, StyleTalker can generate talking-head videos not only in a motion-controllable way when another motion source video is given, but also in a fully audio-driven manner by inferring realistic motions from the input audio. Through extensive experiments and user studies, we show that our model is able to synthesize talking-head videos with impressive perceptual quality that are accurately lip-synced with the input audio, largely outperforming state-of-the-art baselines.
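A contrastive lip-sync objective of the kind mentioned above is commonly implemented as a symmetric InfoNCE loss over a batch of aligned audio and lip windows. The sketch below is that generic formulation; the encoders, window lengths, and temperature are assumptions, not StyleTalker's specific discriminator.

```python
import torch
import torch.nn.functional as F


def lip_sync_contrastive_loss(audio_emb: torch.Tensor,
                              lip_emb: torch.Tensor,
                              temperature: float = 0.07) -> torch.Tensor:
    """audio_emb, lip_emb: (B, D) embeddings of temporally aligned audio/lip windows."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(lip_emb, dim=-1)
    logits = a @ v.t() / temperature                       # (B, B) pairwise similarities
    labels = torch.arange(a.size(0), device=a.device)      # diagonal pairs are the positives
    # symmetric cross-entropy: audio-to-lip and lip-to-audio
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```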
Human grasping synthesis has numerous applications, including AR/VR, video games, and robotics. While some methods have been proposed to generate realistic hand-object interactions for object grasping and manipulation, they typically only consider the hand interacting with the object. In this work, our goal is to synthesize whole-body grasping motions. Given a 3D object, we aim to generate diverse and natural whole-body human motions that approach and grasp the object. This task is challenging as it requires modeling both whole-body dynamics and dexterous finger movements. To this end, we propose SAGA (stochastic whole-body grasping), which consists of two key components: (a) static whole-body grasping pose generation, for which we propose a multi-task generative model that jointly learns static whole-body grasping poses and human-object contacts; and (b) grasping motion infilling, where, given an initial pose and the generated whole-body grasping pose as the starting and ending poses of the motion, we design a novel contact-aware generative motion-infilling module to produce diverse grasping motions. We demonstrate that our method is the first framework to generate realistic and expressive whole-body motions that approach and grasp randomly placed unseen objects. Code and videos are available at: https://jiahaoplus.github.io/saga/saga.html.
To facilitate the analysis of human actions, interactions and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we use thousands of 3D scans to train a new, unified, 3D model of the human body, SMPL-X, that extends SMPL with fully articulated hands and an expressive face. Learning to regress the parameters of SMPL-X directly from images is challenging without paired images and 3D ground truth. Consequently, we follow the approach of SMPLify, which estimates 2D features and then optimizes model parameters to fit the features. We improve on SMPLify in several significant ways: (1) we detect 2D features corresponding to the face, hands, and feet and fit the full SMPL-X model to these; (2) we train a new neural network pose prior using a large MoCap dataset; (3) we define a new interpenetration penalty that is both fast and accurate; (4) we automatically detect gender and the appropriate body models (male, female, or neutral); (5) our PyTorch implementation achieves a speedup of more than 8× over Chumpy. We use the new method, SMPLify-X, to fit SMPL-X to both controlled images and images in the wild. We evaluate 3D accuracy on a new curated dataset comprising 100 images with pseudo ground-truth. This is a step towards automatic expressive human capture from monocular RGB data. The models, code, and data are available for research purposes at https://smpl-x.is.tue.mpg.de.
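The optimization-based fitting that SMPLify-X performs can be pictured as minimizing a 2D reprojection error plus priors over the body model parameters. The schematic PyTorch loop below assumes stand-in `body_model` and `project` callables and replaces the learned pose prior, interpenetration penalty, and robust losses with simple quadratic terms, so it illustrates the structure of the fit rather than the released implementation.

```python
import torch


def fit_body_model(body_model, project, keypoints_2d, conf, steps: int = 200):
    """keypoints_2d: (J, 2) detections; conf: (J,) detector confidences."""
    pose = torch.zeros(1, 63, requires_grad=True)       # body pose (axis-angle), placeholder size
    betas = torch.zeros(1, 10, requires_grad=True)      # shape coefficients
    optim = torch.optim.LBFGS([pose, betas], lr=1.0, max_iter=steps)

    def closure():
        optim.zero_grad()
        joints_3d = body_model(pose=pose, betas=betas)           # (J, 3) model joints
        reproj = ((project(joints_3d) - keypoints_2d) ** 2).sum(-1)
        data_term = (conf * reproj).sum()                        # robust losses omitted
        prior_term = (pose ** 2).sum() + (betas ** 2).sum()      # stand-in for learned priors
        loss = data_term + 1e-2 * prior_term
        loss.backward()
        return loss

    optim.step(closure)
    return pose.detach(), betas.detach()
```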
Human performance capture is a highly important computer vision problem with many applications in movie production and virtual/augmented reality. Many previous performance capture approaches either require expensive multi-view setups or do not recover dense space-time coherent geometry with frame-to-frame correspondences. We propose a novel deep learning approach for dense human performance capture. Our method is trained in a weakly supervised manner based on multi-view supervision, completely removing the need for training data with 3D ground-truth annotations. The network architecture is based on two separate networks that disentangle the task into a pose estimation step and a non-rigid surface deformation step. Extensive qualitative and quantitative evaluations show that our approach outperforms the state of the art in terms of quality and robustness. This work is an extended version of DeepCap, in which we provide more detailed explanations, comparisons, and results as well as applications.
Inferring human-scene contact (HSC) is the first step toward understanding how humans interact with their surroundings. While detecting 2D human-object interaction (HOI) and reconstructing 3D human pose and shape (HPS) have made significant progress, reasoning about 3D human-scene contact from a single image remains challenging. Existing HSC detection methods consider only a few predefined types of contact, often reduce the body and scene to a small number of primitives, and even overlook image evidence. To predict human-scene contact from a single image, we address the above limitations from both the data and algorithmic perspectives. We capture a new dataset called RICH, for "Real scenes, Interaction, Contact and Humans". RICH contains multi-view outdoor/indoor video sequences at 4K resolution, ground-truth 3D human bodies captured with markerless motion capture, 3D body scans, and high-resolution 3D scene scans. A key feature of RICH is that it also contains accurate vertex-level contact labels on the body. Using RICH, we train a network that predicts dense body-scene contact from a single RGB image. Our key insight is that regions in contact are always occluded, so the network needs the ability to explore the whole image for evidence. We use a transformer to learn such non-local relationships and propose the new Body-Scene contact TRansfOrmer (BSTRO). Very few methods explore 3D contact; those that do focus only on the feet, treat foot contact as a post-processing step, or infer contact from body pose without looking at the scene. To our knowledge, BSTRO is the first method to directly estimate 3D body-scene contact from a single image. We demonstrate that BSTRO significantly outperforms the prior art. The code and dataset are available at https://rich.is.tue.mpg.de.
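Dense vertex-level contact estimation as described above reduces, at its simplest, to predicting one contact probability per body-mesh vertex from an image feature and training with binary cross-entropy against vertex-level labels. The minimal head below is a deliberate simplification; BSTRO's transformer architecture is not reproduced, and the feature dimension is a placeholder.

```python
import torch
import torch.nn as nn

NUM_VERTICES = 6890          # SMPL vertex count; an SMPL-X body would use 10475


class ContactHead(nn.Module):
    def __init__(self, feat_dim: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, NUM_VERTICES),
        )

    def forward(self, img_feat: torch.Tensor) -> torch.Tensor:
        return self.mlp(img_feat)                         # (B, NUM_VERTICES) contact logits


head = ContactHead()
logits = head(torch.randn(4, 2048))                       # features from any image backbone
labels = torch.randint(0, 2, (4, NUM_VERTICES)).float()   # vertex-level contact supervision
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
contact = logits.sigmoid() > 0.5                          # per-vertex contact mask at test time
```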