Generating realistic motions for digital humans is a core but challenging part of computer animation and games, as human motions are both diverse in content and rich in style. While the latest deep learning approaches have made significant advances in this domain, they mostly treat motion synthesis and style manipulation as two separate problems. This is mainly due to the challenge of effectively learning both motion content, which accounts for inter-class behaviour, and style, which accounts for intra-class behaviour, in a common representation. To tackle this challenge, we propose a denoising diffusion probabilistic model solution for styled motion synthesis. Because diffusion models have a high capacity brought by the injection of stochasticity, we can represent both inter-class motion content and intra-class style behaviour in the same latent space. This results in an integrated, end-to-end trained pipeline that facilitates the generation of optimal motions and the exploration of the content-style coupled latent space. To achieve high-quality results, we design a multi-task architecture for the diffusion model that strategically generates aspects of human motion for local guidance. We also design adversarial and physical regularizations for global guidance. We demonstrate superior performance with quantitative and qualitative results and validate the effectiveness of our multi-task architecture.
Human motion modeling is important for many modern graphics applications, which typically demand professional skills. To remove the skill barriers for laymen, recent motion generation methods can directly generate human motions conditioned on natural language. However, achieving diverse and fine-grained motion generation with various text inputs remains challenging. To address this problem, we propose MotionDiffuse, the first diffusion-model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods. 1) Probabilistic mapping. Instead of a deterministic language-to-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. 2) Realistic synthesis. MotionDiffuse excels at modeling complicated data distributions and generating vivid motion sequences. 3) Multi-level manipulation. MotionDiffuse responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varying text prompts. Our experiments show that MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. Qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. Homepage: https://mingyuan-zhang.github.io/projects/motiondiffuse.html
Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, i.e., a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion editing applications -- like inbetweening, seed conditioning, and text-based editing -- thus, providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
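As an illustration of the scheduled weighting idea, here is a minimal sketch of a time-dependent auxiliary kinematic loss. The sqrt(alpha_bar_t) weight and the caller-supplied forward-kinematics helper `fk` are our assumptions for illustration, not the authors' exact formulation.

```python
import torch

def kinematic_weighted_loss(pred_x0, gt_x0, t, alpha_bars, fk):
    """Hedged sketch: weight kinematic losses by how denoised the sample is.

    fk: forward kinematics mapping pose parameters (B, J, D) to 3D joint
    positions; a hypothetical helper the caller supplies.
    """
    # sqrt(alpha_bar_t) is close to 1 at low noise levels, so kinematic
    # plausibility is enforced most strongly when the motion is nearly clean.
    w = alpha_bars[t].sqrt().view(-1, 1, 1)
    diff_loss = torch.nn.functional.mse_loss(pred_x0, gt_x0)
    kin_loss = (w * (fk(pred_x0) - fk(gt_x0)) ** 2).mean()
    return diff_loss + kin_loss
```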
Denoising diffusion models represent a recent emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model based on two stages: a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, the model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burden, i.e. the low speed due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational autoencoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
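The two-stage process described above can be made concrete with a minimal sketch, assuming the standard DDPM parameterization with a linear noise schedule; `model` stands in for any learned noise predictor.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # forward noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product, alpha_bar_t

def forward_diffuse(x0, t):
    """Forward stage: perturb clean data x0 with Gaussian noise at step t."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t]
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return xt, noise

@torch.no_grad()
def reverse_step(model, xt, t):
    """Reverse stage: one learned denoising step from x_t toward x_{t-1}."""
    eps = model(xt, t)                                   # predicted noise
    coef = betas[t] / (1 - alpha_bars[t]).sqrt()
    mean = (xt - coef * eps) / alphas[t].sqrt()
    if t == 0:
        return mean                                      # final clean sample
    return mean + betas[t].sqrt() * torch.randn_like(xt)
```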
We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and their distribution differs considerably from that of conditional modalities, such as textual descriptors in natural languages, it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences. Besides, the raw motion data from motion capture systems may be redundant in sequence and contain noise; directly modeling the joint distribution over the raw motion sequences and conditional modalities would incur heavy computational overhead and might introduce artifacts from the captured noise. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) and arrive at a representative and low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish connections between the raw motion sequences and the conditional inputs, we perform the diffusion process in the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) can produce vivid motion sequences conforming to the given conditional inputs while substantially reducing computational overhead in both the training and inference stages. Extensive experiments on various human motion generation tasks demonstrate that MLD achieves significant improvements over the state-of-the-art methods, while being two orders of magnitude faster than previous diffusion models that operate on raw motion sequences.
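A rough sketch of the latent-space training step described above, under assumed interfaces: `vae` and `denoiser` are hypothetical modules standing in for MLD's components, not the authors' code.

```python
import torch

def latent_diffusion_train_step(vae, denoiser, motion, text_emb, alpha_bars):
    """Diffuse in the low-dimensional motion latent, not the raw sequence."""
    z0 = vae.encode(motion)                           # (B, latent_dim) latent code
    t = torch.randint(0, len(alpha_bars), (z0.shape[0],))
    noise = torch.randn_like(z0)
    a_bar = alpha_bars[t].view(-1, 1)
    zt = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise  # noised latent
    pred = denoiser(zt, t, text_emb)                  # condition on text embedding
    return torch.nn.functional.mse_loss(pred, noise)  # standard epsilon loss

# At inference: sample z ~ N(0, I), run the reverse process in latent space with
# text conditioning, then decode once: motion = vae.decode(z).
```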
As information exists in various modalities in the real world, effective interaction and fusion of multimodal information play a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its superb power in modeling the interaction of multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Unlike traditional visual guidance, which provides explicit clues, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, this field also faces several challenges, such as the inherent modality gap between features, the synthesis of high-resolution images, and faithful evaluation metrics. In this survey, we comprehensively contextualize the recent progress of multimodal image synthesis and editing and formulate taxonomies according to data modalities and model architectures. We start with an introduction to the different types of guidance modalities in image synthesis and editing. We then describe multimodal image synthesis and editing approaches with detailed frameworks, including generative adversarial networks (GANs), GAN inversion, Transformers, and other methods such as NeRF and diffusion models. This is followed by a comprehensive description of the benchmark datasets and evaluation metrics widely adopted in multimodal image synthesis and editing, as well as a detailed comparison of different synthesis methods with analysis of their respective advantages and limitations. Finally, we provide insights into the current research challenges and future research directions. A project associated with this survey is available at https://github.com/fnzhan/MISE
Achieving multiple genres and long-term choreography sequences from given music is a challenging task, due to the lack of a multi-genre dataset. To tackle this problem, we propose a Multi Art Genre Intelligent Choreography Dataset (MagicDance). The data of MagicDance is captured from professional dancers assisted by motion capture technicians. It contains a total of 8 hours of 3D motion-captured human dances with paired music, spanning 16 different dance genres. To the best of our knowledge, MagicDance is the 3D dance dataset with the most genres. In addition, we find that the two existing types of methods (generation-based and synthesis-based) can each satisfy only one of diversity and duration, but they complement each other to some extent. Based on this observation, we also propose a generation-synthesis choreography network (MagicNet), which cascades a Diffusion-based 3D Diverse Dance fragments Generation Network (3DGNet) and a Genre&Coherent aware Retrieval Module (GCRM). The former can generate various dance fragments from a single music clip. The latter is utilized to select the best dance fragments generated by 3DGNet and stitch them into a complete dance according to the genre and coherence matching scores. Quantitative and qualitative experiments demonstrate the quality of MagicDance and the state-of-the-art performance of MagicNet.
Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However, oftentimes their results can be unrealistic with observable color shifts and textures. We believe that this issue results from the divergence between the probabilistic distribution learned by the model and the distribution of natural images. The delicate conditions gradually enlarge the divergence during each sampling timestep. To address this issue, we introduce a new method that brings the predicted samples to the training data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks. The improvements obtained by our method suggest that the priors can be incorporated as a general plugin for improving conditional diffusion models.
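One plausible reading of this scheme is sketched below: at each sampling step, the noise estimate from a pretrained unconditional model is blended with the conditional one, pulling the sample back toward the natural-image manifold. The blend weight `lam` and the caller-supplied `step_fn` update are assumptions for illustration, not the paper's exact formulation.

```python
import torch

@torch.no_grad()
def regularized_step(cond_model, uncond_model, step_fn, xt, t, cond, lam=0.5):
    """One sampling step regularized by an unconditional diffusion prior.

    step_fn: any standard DDPM update mapping (x_t, t, eps) -> x_{t-1}.
    """
    eps_c = cond_model(xt, t, cond)          # conditional noise estimate
    eps_u = uncond_model(xt, t)              # unconditional prior's estimate
    # Blending reduces the divergence the conditional model introduces at
    # each step, nudging predictions back toward the training data manifold.
    eps = (1 - lam) * eps_c + lam * eps_u
    return step_fn(xt, t, eps)
```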
Stochastic human motion prediction aims to forecast multiple plausible future motions given a single pose sequence from the past. Most previous works focus on designing elaborate losses to improve the accuracy, while the diversity is typically characterized by randomly sampling a set of latent variables from the latent prior, which is then decoded into possible motions. This joint training of sampling and decoding, however, suffers from posterior collapse as the learned latent variables tend to be ignored by a strong decoder, leading to limited diversity. Alternatively, inspired by the diffusion process in nonequilibrium thermodynamics, we propose MotionDiff, a diffusion probabilistic model that treats the kinematics of human joints as heated particles, which diffuse from their original states to a noise distribution. This process offers a natural way to obtain the "whitened" latents without any trainable parameters, and human motion prediction can be regarded as the reverse diffusion process that converts the noise distribution into realistic future motions conditioned on the observed sequence. Specifically, MotionDiff consists of two parts: a spatial-temporal transformer-based diffusion network to generate diverse yet plausible motions, and a graph convolutional network to further refine the outputs. Experimental results on two datasets demonstrate that our model yields competitive performance in terms of both accuracy and diversity.
Denoising diffusion models hold great promise for generating diverse and realistic human motions. However, existing motion diffusion models largely disregard the laws of physics in the diffusion process and often generate physically-implausible motions with pronounced artifacts such as floating, foot sliding, and ground penetration. This seriously impacts the quality of generated motions and limits their real-world application. To address this issue, we present a novel physics-guided motion diffusion model (PhysDiff), which incorporates physical constraints into the diffusion process. Specifically, we propose a physics-based motion projection module that uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically-plausible motion. The projected motion is further used in the next diffusion step to guide the denoising diffusion process. Intuitively, the use of physics in our model iteratively pulls the motion toward a physically-plausible space. Experiments on large-scale human motion datasets show that our approach achieves state-of-the-art motion quality and improves physical plausibility drastically (>78% for all datasets).
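Schematically, the projection loop might look as follows; `simulate_imitation` is a hypothetical stand-in for motion imitation in a physics simulator, and the alternation schedule (`project_every`) is our assumption for illustration.

```python
import torch

@torch.no_grad()
def physics_guided_sample(model, sim, xT, T, project_every=10):
    """Reverse diffusion with periodic physics-based motion projection."""
    x = xT
    for t in reversed(range(T)):
        x0_hat = model.denoise(x, t)          # denoised motion estimate at step t
        if t % project_every == 0:
            # Project onto the physically-plausible space: a simulated character
            # imitates x0_hat, and the resulting motion replaces the estimate.
            x0_hat = sim.simulate_imitation(x0_hat)
        if t == 0:
            return x0_hat
        # Re-noise the projected motion so it guides the next denoising step.
        x = model.diffuse_to(x0_hat, t - 1)
```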
Digital art synthesis is receiving increasing attention in the multimedia community, as it engages the public with art effectively. Current digital art synthesis methods usually use single-modality inputs as guidance, thereby limiting the expressiveness of the model and the diversity of the generated results. To solve this problem, we propose the Multimodal Guided Artwork Diffusion (MGAD) model, a diffusion-based digital artwork generation approach that utilizes multimodal prompts as guidance to control a classifier-free diffusion model. Additionally, the Contrastive Language-Image Pretraining (CLIP) model is used to unify the text and image modalities. Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the combined effectiveness of diffusion models and multimodal guidance. Code is available at https://github.com/haha-lisa/mgad-multimodal-guided-artwork-diffusion
Deep learning has shown great potential for generative tasks. Generative models are a class of models that can randomly generate observations according to certain implicit parameters. Recently, diffusion models have become a rising class of generative models due to their generative power, and great achievements have already been made. Beyond computer vision, speech generation, bioinformatics, and natural language processing, more applications remain to be explored in this field. However, diffusion models have the inherent drawback of a slow generation process, which has motivated many works on improvement. This survey summarizes the field of diffusion models. We first state the main problem with two landmark works, DDPM and DSM. Then, we present a variety of advanced techniques to speed up diffusion models: training schedules, training-free sampling, mixed modeling, and score & diffusion unification. For existing models, we also provide a benchmark of FID scores and NLL at specific NFEs. Moreover, applications of diffusion models are introduced, including computer vision, sequence modeling, audio, and AI for science. Finally, the field is summarized along with its limitations and further directions.
Font generation is a difficult and time-consuming task, especially for languages that use ideograms with complicated structures and a large number of characters, such as Chinese. To solve this problem, few-shot font generation and even one-shot font generation have attracted a lot of attention. However, most existing font generation methods may still suffer from (i) the large cross-font gap challenge; (ii) the subtle cross-font variation problem; and (iii) incorrect generation of complicated characters. In this paper, we propose a novel one-shot font generation method based on a diffusion model, named Diff-Font, which can be stably trained on large datasets. The proposed model aims to generate the entire font library given only one sample as the reference. Specifically, a large stroke-wise dataset is constructed, and a stroke-wise diffusion model is proposed to preserve the structure and the completeness of each generated character. To the best of our knowledge, Diff-Font is the first work to develop diffusion models for the font generation task. The well-trained Diff-Font is not only robust to the font gap and font variation, but also achieves promising performance on the generation of difficult characters. Compared to previous font generation methods, our model reaches state-of-the-art performance both qualitatively and quantitatively.
Leveraging recent advances in deep learning, text-to-image generation models currently have the merit of attracting the general public's attention. Two of these models, DALL-E 2 and Imagen, have demonstrated that highly photorealistic images can be generated from a simple textual description of an image. Based on a novel approach to image generation called diffusion models, text-to-image models enable the production of many different types of high-resolution images, where human imagination is the only limit. However, these models require enormous computational resources to train, as well as handling huge datasets collected from the internet. In addition, neither the codebases nor the models have been released, which prevents the AI community from experimenting with these cutting-edge models and makes the reproduction of their results complicated, if not impossible. In this paper, we aim first to review the different approaches and techniques used by these models, and then to propose our own implementation of a text-to-image model. Highly based on DALL-E 2, we introduce several slight modifications to tackle the high computational cost induced. We thus have the opportunity to conduct experiments to understand the capabilities of these models, especially in a low-resource regime. In particular, we provide deeper analyses than those performed by the authors of DALL-E 2, including ablation studies. Besides, diffusion models use so-called guidance methods to help the generation process. We introduce a new guidance method that can be used in conjunction with other guidance methods to improve image quality. Finally, the images generated by our model are of reasonably good quality, without having to sustain the significant training costs of state-of-the-art text-to-image models.
Controllable image synthesis models allow the creation of diverse images based on text instructions or guidance from an example image. Recently, denoising diffusion probabilistic models have been shown to generate more realistic images than existing methods, and have been successfully demonstrated in unconditional and class-conditional settings. We explore fine-grained, continuous control of this model class, and introduce a novel unified framework for semantic diffusion guidance, which allows either language or image guidance, or both. Guidance is injected into a pretrained unconditional diffusion model using the gradient of image-text or image matching scores. We explore CLIP-based textual guidance as well as both content- and style-based image guidance in a unified form. Our text-guided synthesis approach can be applied to datasets without associated text annotations. We conduct experiments on the FFHQ and LSUN datasets, and show results on fine-grained text-guided image synthesis, synthesis of images related to a style or content example image, and examples with both textual and image guidance.
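A sketch of how such gradient-based guidance can be computed, assuming CLIP-style encoders passed in by the caller; the guidance scale and this exact matching score are simplifications of the paper's formulation.

```python
import torch

def semantic_guidance_grad(image_enc, text_enc, x_pred, text_tokens, scale=100.0):
    """Gradient of an image-text matching score w.r.t. the current sample.

    image_enc / text_enc: hypothetical CLIP-style encoders; the returned
    gradient is added to the denoising mean at each sampling step.
    """
    x = x_pred.detach().requires_grad_(True)
    img_emb = torch.nn.functional.normalize(image_enc(x), dim=-1)
    txt_emb = torch.nn.functional.normalize(text_enc(text_tokens), dim=-1)
    score = (img_emb * txt_emb).sum()        # cosine-similarity matching score
    grad, = torch.autograd.grad(score, x)
    return scale * grad
```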
Diffusion models are a class of deep generative models that have shown impressive results on various tasks with a solid theoretical foundation. Although diffusion models achieve more impressive sample quality and diversity than other state-of-the-art models, they still suffer from costly sampling procedures and sub-optimal likelihood estimation. Recent studies have shown great enthusiasm for improving the performance of diffusion models. In this article, we provide the first comprehensive review of existing variants of diffusion models. Specifically, we provide the first taxonomy of diffusion models, categorizing them into three types: sampling-acceleration enhancement, likelihood-maximization enhancement, and data-generalization enhancement. We also introduce in detail the other five generative models (i.e., variational autoencoders, generative adversarial networks, normalizing flows, autoregressive models, and energy-based models), and clarify the connections between diffusion models and these generative models. Then, we conduct a thorough investigation into the applications of diffusion models, including computer vision, natural language processing, waveform signal processing, multimodal modeling, molecular graph generation, time series modeling, and adversarial purification. Furthermore, we propose new perspectives pertaining to the development of this class of generative models.
We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models. We achieve this on unconditional image synthesis by finding a better architecture through a series of ablations. For conditional image synthesis, we further improve sample quality with classifier guidance: a simple, compute-efficient method for trading off diversity for fidelity using gradients from a classifier. We achieve an FID of 2.97 on ImageNet 128×128, 4.59 on ImageNet 256×256, and 7.72 on ImageNet 512×512, and we match BigGAN-deep even with as few as 25 forward passes per sample, all while maintaining better coverage of the distribution. Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256×256 and 3.85 on ImageNet 512×512. We release our code at https://github.com/openai/guided-diffusion.
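A minimal sketch of classifier guidance as described: the denoising mean is shifted by the scaled gradient of a noise-aware classifier's log-probability. Variable names and the hypothetical `mean_var` helper are ours; the released code linked above contains the exact implementation.

```python
import torch

def classifier_grad(classifier, xt, t, y, scale):
    """grad_x log p(y | x_t, t), scaled to trade diversity for fidelity."""
    x = xt.detach().requires_grad_(True)
    logits = classifier(x, t)                              # noise-aware classifier
    log_prob = torch.log_softmax(logits, dim=-1)
    selected = log_prob[torch.arange(len(y)), y].sum()     # log p of target class
    grad, = torch.autograd.grad(selected, x)
    return scale * grad

# One guided sampling step (mean_var is a hypothetical helper returning the
# model's denoising mean and variance at step t):
#   mean, var = mean_var(model, xt, t)
#   guided_mean = mean + var * classifier_grad(classifier, xt, t, y, scale)
```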
Synthesizing multi-character interactions is a challenging task due to the complex and varied interactions between characters. In particular, precise spatio-temporal alignment is required when generating close interactions such as dancing and fighting. Existing work on generating multi-character interactions focuses on producing a single type of reactive motion for a given sequence, which results in a lack of variety in the output motions. In this paper, we propose a novel way to create realistic human reactive motions not present in a given dataset by mixing and matching different types of close interactions. We propose a conditional hierarchical generative adversarial network with multi-hot class embeddings to generate the mixed-and-matched reactive motions of a follower from a given motion sequence of a leader. Experiments are conducted on both noisy (depth-based) and high-quality (MoCap-based) interaction datasets. The quantitative and qualitative results show that our approach outperforms the state-of-the-art methods on the given datasets. We also provide an augmented dataset with realistic reactive motions to stimulate future research in this area. The code is available at https://github.com/aman-goel1/imm
Text-based motion generation models are drawing a surge of interest for their potential to automate the motion-making process in the game, animation, and robotics industries. In this paper, we propose a diffusion-based motion synthesis and editing model named FLAME. Inspired by the recent successes of diffusion models, we integrate diffusion-based generative modeling into the motion domain. FLAME can generate high-fidelity motions well-aligned with the given text. Moreover, it can edit parts of a motion, both frame-wise and joint-wise, without any fine-tuning. FLAME involves a new transformer-based architecture we devise to better handle motion data, which proves crucial for managing variable-length motions and attending well to free-form text. In experiments, we show that FLAME achieves state-of-the-art generation performance on three text-motion datasets: HumanML3D, BABEL, and KIT. We also demonstrate that FLAME's editing capability extends to other tasks such as motion prediction and motion in-betweening, which were previously covered by dedicated models.
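Frame-wise or joint-wise editing without fine-tuning can be sketched with the generic diffusion-inpainting recipe below, in which unedited parts are clamped to a re-noised copy of the reference motion at every step. This is a hedged illustration of the general technique; FLAME's exact procedure may differ, and all interfaces here are hypothetical.

```python
import torch

@torch.no_grad()
def edit_motion(model, ref_motion, mask, T, text):
    """Mask-based motion editing with a pretrained text-conditioned model.

    mask == 1 marks frames/joints to regenerate; 0 keeps the reference.
    """
    x = torch.randn_like(ref_motion)
    for t in reversed(range(T)):
        x = model.reverse_step(x, t, text)      # one text-conditioned denoising step
        if t > 0:
            # Re-noise the reference to the current noise level, then keep the
            # unedited region so only the masked part is synthesized.
            known = model.forward_diffuse(ref_motion, t - 1)
            x = mask * x + (1 - mask) * known
    return x
```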
The integration of the Vector Quantized Variational AutoEncoder (VQ-VAE) with an autoregressive model as the generative part has yielded high-quality results on image generation. However, autoregressive models strictly follow a progressive scanning order during the sampling phase, which leaves existing VQ-series models hardly able to escape the trap of lacking global information. Denoising diffusion probabilistic models (DDPMs) in the continuous domain have shown the ability to capture global context while generating high-quality images. In the discrete state space, some works have demonstrated the potential for text generation and low-resolution image generation. We argue that, with the help of the content-rich discrete visual codebook of a VQ-VAE, a discrete diffusion model can also generate high-fidelity images with global context, which compensates for the deficiency of classical autoregressive models along the pixel space. Meanwhile, the integration of a discrete VAE with a diffusion model addresses both the drawback of conventional autoregressive models being oversized and the excessive sampling time diffusion models require when generating images. We find that the quality of the generated images depends heavily on the discrete visual codebook. Extensive experiments demonstrate that the proposed Vector Quantized Discrete Diffusion Model (VQ-DDM) achieves performance comparable to top-tier methods with low complexity. It also shows outstanding advantages over other vector-quantized autoregressive models on the image inpainting task, without additional training.