Responsive listening during face-to-face conversations is a critical element of social interaction and is well established in psychological research. By responding to a speaker's utterances, tone of voice, or behavior in real time through non-verbal signals, listeners show how engaged they are in the dialogue. In this work, we build the Responsive Listener Dataset (RLD), a conversational video corpus collected from public resources that includes 67 speakers and 76 listeners with three different attitudes. We define the responsive listening head generation task as the synthesis of a non-verbal head with motions and expressions driven by the speaker's audio and visual signals. Unlike speech-driven gesture or talking head generation, this task involves more modalities, and we hope it will benefit several research fields, including human-to-human interaction, video-to-video translation, cross-modal understanding, and generation. Furthermore, we release an attitude-conditioned listening head generation baseline. Project page: \url{https://project.mhzhou.com/rld}.
Audio-driven one-shot talking face generation methods are usually trained on video resources of various persons. However, the videos they create often suffer from unnatural mouth shapes and asynchronous lips, because these methods struggle to learn a consistent speech style from different speakers. We observe that it is much easier to learn a consistent speech style from a specific speaker, which leads to authentic mouth movements. Hence, we propose a novel one-shot talking face generation framework that explores the consistent correlation between audio and visual motion for a specific speaker and then transfers the audio-driven motion field to a reference image. Specifically, we develop an Audio-Visual Correlation Transformer (AVCT) that infers talking motions, represented by keypoint-based dense motion fields, from an input audio. In particular, considering that audio may come from different identities at deployment, we incorporate phonemes to represent the audio signal. In this manner, our AVCT inherently generalizes to audio spoken by other identities. Moreover, since facial keypoints are used to represent speakers, AVCT is agnostic to the appearance of the training speakers, allowing us to readily manipulate face images of different identities. Considering that different face shapes lead to different motions, a motion field transfer module is exploited to reduce the gap between the audio-driven dense motion fields of the training identity and the one-shot reference. Once we obtain the dense motion field of the reference image, we employ an image renderer to generate its talking face video from an audio clip. Thanks to the learned consistent speaking style, our method produces authentic mouth shapes and vivid motions. Extensive experiments demonstrate that our synthesized videos outperform the state of the art in terms of visual quality and lip synchronization.
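The motion-transfer step above can be illustrated with a minimal sketch in the spirit of relative keypoint motion transfer (as popularized by first-order motion models); this is not the paper's actual module, and the function names and the crude face-size normalization are our assumptions:

```python
import numpy as np

def transfer_motion(ref_kp, drv_kp_seq, drv_kp_init):
    """Apply a driving sequence's relative keypoint motion to a reference face.

    ref_kp:      (K, 2) keypoints of the one-shot reference image.
    drv_kp_seq:  (T, K, 2) per-frame keypoints inferred for the training identity.
    drv_kp_init: (K, 2) keypoints of the driving identity's neutral frame.

    Each frame's displacement relative to the neutral pose is scaled by the
    ratio of face sizes (so a larger face moves proportionally) and added to
    the reference pose, yielding (T, K, 2) keypoints for the new identity.
    """
    scale = ref_kp.std() / drv_kp_init.std()          # crude face-size normalization
    displacement = (drv_kp_seq - drv_kp_init[None]) * scale
    return ref_kp[None] + displacement
```

In the actual pipeline a dense motion field, not sparse keypoints, is transferred, and an image renderer then warps the reference image; the sketch only shows the relative-motion idea.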
Although significant progress has been made in audio-driven talking face generation, existing methods either neglect facial emotion or cannot be applied to arbitrary subjects. In this paper, we propose the Emotion-Aware Motion Model (EAMM) to generate one-shot emotional talking faces by involving an emotion source video. Specifically, we first propose an Audio2Facial-Dynamics module, which renders talking faces from audio-driven, unsupervised zero- and first-order keypoint motion. Then, by exploring the properties of this motion model, we further propose an Implicit Emotion Displacement Learner that represents emotion-related facial dynamics as linearly additive displacements on the previously acquired motion representations. Comprehensive experiments demonstrate that, by incorporating the results of the two modules, our method can generate satisfactory talking face results on arbitrary subjects with realistic emotion patterns.
This paper reports our solution to the Multimedia ViCo 2022 Conversational Head Generation Challenge, which aims to generate vivid face-to-face conversation videos from audio and reference images. Our solution focuses on training a generalized audio-to-head driver with regularization and assembling a high-visual-quality renderer. We carefully tuned the audio-to-behavior model and post-processed the generated videos with our foreground-background fusion module. In the official ranking, our solution won first place in the listening head generation track of the challenge, which also includes a talking head generation track. Our code will be released.
Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the speaker's emotion presented in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks as it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges the visual information to the corresponding speech prosody at three levels: lip, face, and scene. Specifically, we align lip movement to the speech duration, and convey facial expression to speech energy and pitch via an attention mechanism based on valence and arousal representations inspired by recent psychology findings. Moreover, we design an emotion booster to capture the atmosphere from global video scenes. All these embeddings are used together to generate a mel-spectrogram, which is then converted to a speech waveform via an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
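The step that conveys facial expression to speech energy and pitch via attention can be illustrated with a minimal single-head sketch; the read-out vectors `w_pitch`/`w_energy`, the shapes, and the keys-equal-values simplification are our assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prosody_from_face(speech_q, face_kv, w_pitch, w_energy):
    """Scaled dot-product attention from speech frames to facial
    valence/arousal features, followed by two linear heads predicting
    per-frame pitch and energy offsets.

    speech_q: (T, d) speech-frame queries
    face_kv:  (F, d) facial valence/arousal features (keys = values here)
    w_pitch, w_energy: (d,) linear read-out weights
    """
    d = speech_q.shape[1]
    attn = softmax(speech_q @ face_kv.T / np.sqrt(d), axis=-1)  # (T, F)
    context = attn @ face_kv                                    # (T, d)
    return context @ w_pitch, context @ w_energy                # (T,), (T,)
```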
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
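One common way a style code can "adjust the weights of the feed-forward layers" is a per-channel scale-and-shift modulation of the hidden projection; the sketch below assumes this simple hypernetwork-style mechanism, which may differ from the paper's exact style-aware adaptive transformer:

```python
import numpy as np

def style_adaptive_ffn(x, w1, w2, style_scale, style_shift):
    """Feed-forward layer whose first projection is modulated by a style code:
    the style is mapped (elsewhere) to a per-channel scale and shift applied
    to the hidden weights, so one set of FFN parameters can render many
    speaking styles.

    x:  (T, d_in) token features
    w1: (d_in, d_h), w2: (d_h, d_in) base feed-forward weights
    style_scale, style_shift: (d_h,) modulation predicted from the style code
    """
    w1_mod = w1 * style_scale[None, :] + style_shift[None, :]  # style-conditioned weights
    h = np.maximum(x @ w1_mod, 0.0)                            # ReLU
    return h @ w2
```

With scale one and shift zero the layer reduces to a plain FFN, so the base behavior is preserved for a "neutral" style.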
Speech-driven 3D facial animation with accurate lip synchronization has been widely studied. However, synthesizing realistic motion for the entire face during speech has rarely been explored. In this work, we present a joint audio-text model to capture the contextual information needed for expressive speech-driven 3D facial animation. Existing datasets are collected to cover as many different phonemes as possible rather than full sentences, which limits the ability of audio-based models to learn more diverse contexts. To address this, we propose to leverage contextual text embeddings extracted from a powerful pre-trained language model that has learned rich contextual representations from large-scale text data. Our hypothesis is that the text features can disambiguate variations in upper-face expression, which are not strongly correlated with the audio. In contrast to prior approaches that learn phoneme-level features from text, we investigate high-level contextual text features for speech-driven 3D facial animation. We show that the combined acoustic and textual modalities can synthesize realistic facial expressions while maintaining audio-lip synchronization. We conduct quantitative and qualitative evaluations as well as a perceptual user study. The results demonstrate the superior performance of our model over existing state-of-the-art approaches.
Speech-driven 3D facial animation has been widely explored, with applications in gaming, character animation, virtual reality, and telepresence systems. State-of-the-art methods deform the face topology of the target actor to sync the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor, thus, resulting in unrealistic and inaccurate lip movements. To address this, we present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor. Specifically, we train a style-agnostic transformer on a large facial expression dataset which we use as a prior for audio-driven facial expressions. Based on this prior, we optimize for identity-specific speaking style based on a short reference video. To train the prior, we introduce a novel loss function based on detected bilabial consonants to ensure plausible lip closures and consequently improve the realism of the generated expressions. Through detailed experiments and a user study, we show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
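A loss on detected bilabial consonants might look like the following sketch: it penalizes any lip opening on frames flagged as /b/, /p/, or /m/, where the lips should fully close. The exact formulation in Imitator is not given in the abstract, so the mean-squared-gap form is our assumption:

```python
import numpy as np

def lip_closure_loss(lip_gap, bilabial_mask):
    """Penalize lip opening on frames where a detected bilabial consonant
    (/b/, /p/, /m/) requires full lip closure.

    lip_gap:       (T,) predicted distance between upper and lower lip
    bilabial_mask: (T,) 1.0 on bilabial frames, else 0.0
    """
    masked = lip_gap * bilabial_mask
    n = max(bilabial_mask.sum(), 1.0)
    return float((masked ** 2).sum() / n)  # mean squared gap on closure frames
```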
Animating portraits using speech has received growing attention in recent years, with various creative and practical use cases. An ideal generated video should have good lip sync with the audio, natural facial expressions and head motions, and high frame quality. In this work, we present SPACE, which uses speech and a single image to generate high-resolution, expressive videos with realistic head pose, without requiring a driving video. It uses a multi-stage approach, combining the controllability of facial landmarks with the high-quality synthesis power of a pretrained face generator. SPACE also allows for the control of emotions and their intensities. Our method outperforms prior methods in objective metrics for image quality and facial motions and is strongly preferred by users in pair-wise comparisons. The project website is available at https://deepimagination.cc/SPACE/
Achieving realistic, vivid, and human-like synthesized conversational gestures conditioned on multi-modal data remains an unsolved problem due to the lack of available datasets, models, and standard evaluation metrics. To address this, we build the Body-Expression-Audio-Text dataset, BEAT, which has i) 76 hours of high-quality multi-modal data captured from 30 speakers talking with eight different emotions and in four different languages, and ii) 32 million frame-level emotion and semantic relevance annotations. Our statistical analysis on BEAT demonstrates the correlation of conversational gestures with facial expressions, emotions, and semantics, in addition to the known correlations with audio, text, and speaker identity. Based on this observation, we propose a baseline model, the Cascaded Motion Network (CaMN), in which the above six modalities are modeled in a cascaded architecture for gesture synthesis. To evaluate semantic relevancy, we introduce a metric, Semantic Relevance Gesture Recall (SRGR). Qualitative and quantitative experiments demonstrate the validity of the metric, the quality of the ground-truth data, and the state-of-the-art performance of the baseline. To the best of our knowledge, BEAT is the largest motion capture dataset for investigating human gestures, and it may contribute to a number of research fields, including controllable gesture synthesis, cross-modal analysis, and emotional gesture recognition. The data, code, and models are available at https://pantomatrix.github.io/beat/.
This work addresses the problem of generating 3D holistic body motions from human speech. Given a speech recording, we synthesize sequences of 3D body poses, hand gestures, and facial expressions that are realistic and diverse. To achieve this, we first build a high-quality dataset of 3D holistic body meshes with synchronous speech. We then define a novel speech-to-motion generation framework in which the face, body, and hands are modeled separately. The separated modeling stems from the fact that face articulation strongly correlates with human speech, while body poses and hand gestures are less correlated. Specifically, we employ an autoencoder for face motions, and a compositional vector-quantized variational autoencoder (VQ-VAE) for the body and hand motions. The compositional VQ-VAE is key to generating diverse results. Additionally, we propose a cross-conditional autoregressive model that generates body poses and hand gestures, leading to coherent and realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our novel dataset and code will be released for research purposes at https://talkshow.is.tue.mpg.de.
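The core quantization step of a VQ-VAE, which underpins the compositional body-and-hand model above, can be sketched as a nearest-neighbour codebook lookup (training tricks such as the straight-through estimator and commitment loss are omitted):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Nearest-neighbour codebook lookup at the heart of a VQ-VAE:
    each continuous latent is snapped to its closest code vector.

    z:        (T, d) encoder outputs (e.g., per-frame body/hand latents)
    codebook: (K, d) learned code vectors
    Returns (indices (T,), quantized (T, d)).
    """
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K) squared distances
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]
```

In the compositional setup described above, body and hands would each get their own codebook, and diverse motions come from sampling different code-index sequences.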
When we speak, the prosody and content of the speech can be inferred from the movement of our lips. In this work, we explore the task of lip-to-speech synthesis, i.e., learning to generate speech given only the lip movements of a speaker, and we focus on learning accurate lip-to-speech mappings for multiple speakers in unconstrained, large-vocabulary settings. We capture the speaker's voice identity through their facial characteristics, i.e., age, gender, and ethnicity, and condition on the lip movements to generate speaker-identity-aware speech. To this end, we propose a novel method, "lip2speech", with key design choices to achieve accurate lip-to-speech synthesis in unconstrained scenarios. We also conduct various experiments and extensive evaluations using quantitative and qualitative metrics as well as human evaluation.
While recent advances in deep neural networks have made it possible to render high-quality images, generating photo-realistic and personalized talking heads remains challenging. Given an audio input, the key to this task is synchronizing lip motion while generating personalized attributes such as head movements and eye blinks. In this work, we observe that the input audio is highly correlated with lip motion but less correlated with other personalized attributes (e.g., head movements). Inspired by this, we propose a novel framework based on neural radiance fields to pursue high-fidelity and personalized talking head generation. Specifically, the neural radiance field takes lip motion features and personalized attributes as two disentangled conditions, where the lip motion is directly predicted from the audio input to achieve lip-synchronized generation. Meanwhile, the personalized attributes are sampled from a probabilistic model, for which we design a transformer-based variational autoencoder sampled from Gaussian processes to learn plausible and natural head poses and eye blinks. Experiments on several benchmarks demonstrate that our method achieves better results than state-of-the-art methods.
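Why a Gaussian process is a natural prior for head pose can be illustrated directly: sampling a trajectory from a GP with a smooth RBF kernel yields slowly varying, jitter-free motion. The kernel choice, length scale, and dimensionality below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sample_gp_trajectory(T, length_scale=10.0, dim=3, seed=0):
    """Draw a smooth trajectory (e.g., head-pose angles over T frames) from a
    zero-mean Gaussian process with an RBF kernel; nearby frames are highly
    correlated, giving natural, slowly varying motion rather than jitter.
    """
    t = np.arange(T, dtype=float)
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / length_scale) ** 2)
    K += 1e-6 * np.eye(T)                    # jitter for numerical stability
    L = np.linalg.cholesky(K)                # K = L L^T
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((T, dim)) # (T, dim) correlated samples
```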
In this paper, we introduce a simple and novel framework for one-shot audio-driven talking head generation. Unlike prior works that require additional driving sources for controlled synthesis in a deterministic manner, we instead probabilistically sample all the holistic lip-irrelevant facial motions (i.e., pose, expression, blink, gaze, etc.) to semantically match the input audio while still maintaining both the photo-realism of audio-lip synchronization and the overall naturalness. This is achieved by our newly proposed audio-to-visual diffusion prior trained on top of the mapping between audio and disentangled non-lip facial representations. Thanks to the probabilistic nature of the diffusion prior, one big advantage of our framework is that it can synthesize diverse facial motion sequences given the same audio clip, which is quite user-friendly for many real applications. Through comprehensive evaluations on public benchmarks, we conclude that (1) our diffusion prior significantly outperforms an auto-regressive prior on almost all the concerned metrics; (2) our overall system is competitive with prior works in terms of audio-lip synchronization but can effectively sample rich and natural-looking lip-irrelevant facial motions that remain semantically harmonized with the audio input.
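The "diverse samples from the same conditioning" property of a diffusion prior comes from its stochastic ancestral sampling loop, sketched below as standard DDPM sampling. The real model's denoiser is audio-conditioned; here `denoise_fn` is any callable predicting the added noise, and the schedule is illustrative:

```python
import numpy as np

def ddpm_sample(denoise_fn, shape, betas, seed=None):
    """Ancestral DDPM sampling: start from Gaussian noise and iteratively
    denoise. Re-running with different seeds yields diverse samples for the
    same conditioning -- the property the framework exploits for diverse
    lip-irrelevant facial motion.

    denoise_fn(x, t) must return a prediction of the noise added at step t.
    """
    rng = np.random.default_rng(seed)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)
    for t in range(len(betas) - 1, -1, -1):
        eps = denoise_fn(x, t)
        # posterior mean: remove the predicted noise, rescale
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)  # stochastic step
    return x
```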
Natural conversations between humans often involve a large number of nuanced non-verbal expressions, displayed at key moments throughout the conversation. Understanding and being able to model these complex interactions is essential for creating realistic human-agent communication, whether in the virtual or the physical world. As social robots and intelligent avatars grow in popularity and utility, being able to realistically model and generate these dynamic expressions throughout a conversation is critical. We develop a probabilistic model to capture the interaction dynamics between pairs of participants in a face-to-face setting, allowing synchronous expressions between interlocutors to be encoded. When predicting one agent's future dynamics, this interaction encoding is then used to influence the generation, conditioned on the other agent's current dynamics. FLAME features are extracted from videos containing natural conversations between subjects to train our interaction model. We assess the efficacy of our proposed model through quantitative and qualitative metrics, and show that it successfully captures the dynamics of a pair of interacting subjects. We also test the model on a previously unseen parent-infant dataset comprising two different modes of communication between the dyads, and show that our model successfully distinguishes between the modes based on their interaction dynamics.
Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data. Prior works typically focus on learning phoneme-level features from short audio windows with limited context, occasionally resulting in inaccurate lip movements. To address this limitation, we propose a transformer-based autoregressive model, FaceFormer, which encodes long-term audio context and autoregressively predicts a sequence of animated 3D face meshes. To cope with the data scarcity issue, we integrate self-supervised pre-trained speech representations. Moreover, we devise two biased attention mechanisms well suited to this specific task: biased cross-modal multi-head (MH) attention, and biased causal MH self-attention with a periodic positional encoding strategy. The former effectively aligns the audio and motion modalities, whereas the latter provides the ability to generalize to longer audio sequences. Extensive experiments and a perceptual user study show that our approach outperforms the existing state of the art. The code will be made available.
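The biased causal self-attention can be sketched as a causal mask plus an ALiBi-style penalty that grows with the number of elapsed periods, favoring recent frames while still allowing long-range access. This single-head simplification (keys = queries = inputs, and our `period` constant) is not FaceFormer's exact bias:

```python
import numpy as np

def biased_causal_attention(x, period=2):
    """Causal self-attention with a period-based temporal bias: attending to
    frame j from frame i is penalized by how many full periods separate them.

    x: (T, d) frame features, used here as queries, keys, and values.
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    i, j = np.indices((T, T))
    bias = -np.floor((i - j) / period)                  # older frames penalized more
    scores = scores + np.where(j <= i, bias, -np.inf)   # causal mask + bias
    scores -= scores.max(axis=-1, keepdims=True)        # stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x
```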
Active speaker detection and speech enhancement have become increasingly attractive topics in audio-visual scene understanding. Independently designed architectures have been widely adopted for each task according to its respective characteristics. This can make the learned feature representations task-specific and inevitably limits the generalization ability of features built on multi-modal modeling. Recent studies suggest that establishing cross-modal relationships between the auditory and visual streams is a promising solution to the challenges of audio-visual multi-task learning. Therefore, motivated by bridging the multi-modal associations in audio-visual tasks, in this study we propose a unified framework that achieves target speaker detection and speech enhancement through joint learning of an audio-visual model.
Dubbing is a post-production process of re-recording actors' dialogue, widely used in filmmaking and video production. It is usually performed manually by professional voice actors, who read the lines with proper prosody and in synchronization with the pre-recorded video. In this work, we propose Neural Dubber, the first neural network model to solve the novel automatic video dubbing (AVD) task: synthesizing human speech synchronized with a given video from text. Neural Dubber is a multi-modal text-to-speech (TTS) model that utilizes the lip movements in the video to control the prosody of the generated speech. Furthermore, an image-based speaker embedding (ISE) module is developed for the multi-speaker setting, which enables Neural Dubber to generate speech with a reasonable timbre according to the speaker's face. Experiments on the chemistry-lecture single-speaker dataset and the LRS2 multi-speaker dataset show that Neural Dubber can generate speech on par with state-of-the-art TTS models in terms of speech quality. Most importantly, both qualitative and quantitative evaluations show that Neural Dubber can control the prosody of the synthesized speech through the video, and generate high-fidelity speech temporally synchronized with the video.
Co-speech gesture is crucial for human-machine interaction and digital entertainment. While previous works mostly map speech audio to human skeletons (e.g., 2D keypoints), directly generating speakers' gestures in the image domain remains unsolved. In this work, we formally define and study this challenging problem of audio-driven co-speech gesture video generation, i.e., using a unified framework to generate speaker image sequence driven by speech audio. Our key insight is that the co-speech gestures can be decomposed into common motion patterns and subtle rhythmic dynamics. To this end, we propose a novel framework, Audio-driveN Gesture vIdeo gEneration (ANGIE), to effectively capture the reusable co-speech gesture patterns as well as fine-grained rhythmic movements. To achieve high-fidelity image sequence generation, we leverage an unsupervised motion representation instead of a structural human body prior (e.g., 2D skeletons). Specifically, 1) we propose a vector quantized motion extractor (VQ-Motion Extractor) to summarize common co-speech gesture patterns from implicit motion representation to codebooks. 2) Moreover, a co-speech gesture GPT with motion refinement (Co-Speech GPT) is devised to complement the subtle prosodic motion details. Extensive experiments demonstrate that our framework renders realistic and vivid co-speech gesture video. Demo video and more resources can be found in: https://alvinliu0.github.io/projects/ANGIE
We propose StyleTalker, a novel audio-driven talking head generation model that can synthesize a video of a talking person from a single reference image, with accurately audio-synced lip shapes, realistic head poses, and eye blinks. Specifically, by leveraging a pre-trained image generator and an image encoder, we estimate the latent codes of the talking head video that faithfully reflect the given audio. This is made possible by several newly devised components: 1) a contrastive lip-sync discriminator for accurate lip synchronization; 2) a conditional sequential variational autoencoder that learns a latent motion space disentangled from lip movements, so that we can independently manipulate the head motions and lip movements while preserving identity; and 3) an auto-regressive prior augmented with normalizing flows to learn a complex audio-to-motion multi-modal latent space. Equipped with these components, StyleTalker can generate talking head videos not only in a motion-controllable way, when another motion source video is given, but also in a completely audio-driven manner, inferring realistic motions from the input audio. Through extensive experiments and user studies, we show that our model synthesizes talking head videos of impressive perceptual quality that are accurately lip-synced with the input audio, largely outperforming state-of-the-art baselines.
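A contrastive lip-sync objective is commonly built as a symmetric InfoNCE loss over time-aligned audio and lip-motion embeddings: matched pairs are pulled together, mismatched pairs pushed apart. The formulation below is our sketch, not necessarily the discriminator used in StyleTalker:

```python
import numpy as np

def info_nce(audio_emb, lip_emb, temperature=0.07):
    """Symmetric InfoNCE between time-aligned audio and lip embeddings.

    audio_emb, lip_emb: (N, d); row i of each comes from the same time window.
    Matched (diagonal) pairs are treated as positives, all other rows in the
    batch as negatives, in both audio->lip and lip->audio directions.
    """
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    l = lip_emb / np.linalg.norm(lip_emb, axis=1, keepdims=True)
    sim = a @ l.T / temperature                     # (N, N) cosine similarities

    def nll_diag(logits):
        logits = logits - logits.max(axis=1, keepdims=True)  # stable log-softmax
        log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_p))             # NLL of the matched pairs

    return 0.5 * (nll_diag(sim) + nll_diag(sim.T))
```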