Videos are created to express emotion, exchange information, and share experiences. Video synthesis has intrigued researchers for a long time. Despite rapid progress driven by advances in visual synthesis, most existing studies focus on improving the quality of frames and the transitions between them, while little progress has been made in generating longer videos. In this paper, we present a method based on 3D-VQGAN and transformers to generate videos with thousands of frames. Our evaluation shows that our model, trained on 16-frame video clips from standard benchmarks such as the UCF-101, Sky TimeLapse, and Taichi-HD datasets, can generate diverse, coherent, and high-quality long videos. We also showcase conditional extensions of our approach for generating meaningful long videos by incorporating temporal information with text and audio. Videos and code can be found at https://songweige.github.io/projects/tats/index.html.
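The long-range generation idea above lends itself to a simple sketch: a causal transformer over flattened 3D-VQGAN codes can keep extending the sequence by attending to a sliding window of recent tokens, so sampling can run far past the 16-frame training length. The sketch below is an illustration under assumed shapes (vocabulary size, window length, tokens per frame, and the tiny transformer are all hypothetical), not the TATS implementation.

```python
# Minimal sketch (not the TATS code): autoregressively extending a flattened
# sequence of 3D-VQGAN codes with a sliding context window, so that generation
# can run for many more frames than the clips seen during training.
import torch
import torch.nn as nn

VOCAB, WINDOW, TOKENS_PER_FRAME = 1024, 256, 64  # hypothetical sizes

class TinyCausalTransformer(nn.Module):
    def __init__(self, vocab=VOCAB, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.pos = nn.Embedding(WINDOW, dim)
        block = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, tokens):                      # tokens: (B, T) int64
        T = tokens.size(1)
        x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
        mask = torch.triu(torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
        return self.head(self.encoder(x, mask=mask))  # (B, T, vocab)

@torch.no_grad()
def generate_long(model, prompt, n_new_tokens):
    """Sample new codes one by one, keeping only the last WINDOW tokens as context."""
    seq = prompt.clone()
    for _ in range(n_new_tokens):
        ctx = seq[:, -WINDOW:]
        logits = model(ctx)[:, -1]                  # distribution over the next code
        nxt = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        seq = torch.cat([seq, nxt], dim=1)
    return seq                                      # decode with a 3D-VQGAN decoder afterwards

model = TinyCausalTransformer()
prompt = torch.randint(0, VOCAB, (1, TOKENS_PER_FRAME))     # codes of one seed frame
codes = generate_long(model, prompt, n_new_tokens=TOKENS_PER_FRAME * 4)
print(codes.shape)                                           # (1, 5 * TOKENS_PER_FRAME)
```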
Videos show continuous events, yet most, if not all, video synthesis frameworks treat them discretely in time. In this work, we think of videos as continuous signals and extend the paradigm of neural representations to build a continuous-time video generator. To this end, we first design continuous motion representations through the lens of positional embeddings. Then, we explore the problem of training on very sparse videos and demonstrate that a good generator can be learned with as few as 2 frames per clip. After that, we rethink the traditional image-plus-video discriminator pair and propose a hypernetwork-based discriminator instead. This lowers the training cost and provides a richer learning signal to the generator, making it possible to train directly on $1024^2$ videos for the first time. We build our model on top of StyleGAN2 and it costs only about 5% more to train at the same resolution while achieving almost the same image quality. Moreover, our latent space features similar properties, enabling spatial manipulations that our method can propagate in time. We can generate arbitrarily long videos at arbitrarily high frame rates, while prior work struggles to generate even 64 frames at a fixed rate. Our model achieves state-of-the-art results on four modern $256^2$ video synthesis benchmarks and one $1024^2$ resolution one. Videos and source code are available at the project website: https://universome.github.io/stylegan-v.
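A minimal sketch of the continuous-time motion representation mentioned above, assuming a sinusoidal positional embedding of real-valued timestamps; the frequencies, dimensions, and module name are illustrative rather than the StyleGAN-V code. It shows why the same module serves both densely and sparsely sampled clips.

```python
# Illustrative sketch: motion represented through positional embeddings of
# *continuous* time, so frames can be requested at arbitrary timestamps and
# frame rates. All sizes are made up.
import torch
import torch.nn as nn

class ContinuousMotionCode(nn.Module):
    def __init__(self, dim=64, n_freqs=8):
        super().__init__()
        # fixed log-spaced frequencies; a learned variant is equally plausible
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs))
        self.proj = nn.Linear(2 * n_freqs, dim)

    def forward(self, t):                      # t: (B, T) timestamps in seconds
        phase = t.unsqueeze(-1) * self.freqs   # (B, T, n_freqs)
        feats = torch.cat([phase.sin(), phase.cos()], dim=-1)
        return self.proj(feats)                # (B, T, dim) motion embedding per frame

codes = ContinuousMotionCode()
t_uniform = torch.linspace(0.0, 2.0, steps=16).unsqueeze(0)   # 16 frames over 2 s
t_sparse = torch.tensor([[0.00, 1.37]])                       # only 2 frames in a clip
print(codes(t_uniform).shape, codes(t_sparse).shape)          # (1, 16, 64) (1, 2, 64)
```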
We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while preserving consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is that content never changes due to over-reliance on inductive biases that provide temporal consistency, such as a single latent code that dictates content for the entire video. At the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. To this end, we leverage a two-phase training strategy, where we train separately with longer videos at low resolution and shorter videos at high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with an explicit focus on long-term temporal dynamics.
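The two-phase strategy above can be made concrete with a small sketch: two stand-in networks are trained separately, one on long low-resolution clips and one on short high-resolution clips. Clip lengths, resolutions, and the placeholder loss are assumptions, not the authors' configuration.

```python
# Sketch of a two-phase training schedule: long-range dynamics are learned from
# long, low-resolution clips; fine detail from short, high-resolution clips.
import torch
import torch.nn as nn

# Stand-in modules; the real networks are a GAN generator and a super-resolution net.
low_res_generator = nn.Conv3d(3, 3, kernel_size=3, padding=1)
super_resolution = nn.Conv3d(3, 3, kernel_size=3, padding=1)

phases = [
    dict(name="phase 1: long clips, low resolution", model=low_res_generator, frames=128, res=64),
    dict(name="phase 2: short clips, high resolution", model=super_resolution, frames=8, res=256),
]

for phase in phases:
    opt = torch.optim.Adam(phase["model"].parameters(), lr=2e-4)
    clip = torch.randn(1, 3, phase["frames"], phase["res"], phase["res"])  # placeholder batch
    loss = phase["model"](clip).mean()      # placeholder loss; the real setup is adversarial
    loss.backward()
    opt.step()
    print(phase["name"], "->", tuple(clip.shape))
```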
We introduce the MAsked Generative VIdeo Transformer, MAGVIT, to tackle various video synthesis tasks with a single model. We introduce a 3D tokenizer to quantize a video into spatial-temporal visual tokens and propose an embedding method for masked video token modeling to facilitate multi-task learning. We conduct extensive experiments to demonstrate the quality, efficiency, and flexibility of MAGVIT. Our experiments show that (i) MAGVIT performs favorably against state-of-the-art approaches and establishes the best-published FVD on three video generation benchmarks, including the challenging Kinetics-600. (ii) MAGVIT outperforms existing methods in inference time by two orders of magnitude against diffusion models and by 60x against autoregressive models. (iii) A single MAGVIT model supports ten diverse generation tasks and generalizes across videos from different visual domains. The source code and trained models will be released to the public at https://magvit.cs.cmu.edu.
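The masked video token modeling objective referenced above can be illustrated with a short sketch: a fraction of the tokenizer's 3D codes is replaced by a MASK id and a bidirectional transformer is trained to recover them. This is a generic masked-token setup under assumed sizes, not MAGVIT's specific embedding scheme.

```python
# Minimal masked video-token modeling step: mask a random subset of the 3D tokens,
# predict them with a bidirectional transformer, score only the masked positions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, SEQ = 1024, 1024, 4 * 8 * 8        # t x h x w tokens, flattened

class BidirectionalTokenModel(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, dim)   # +1 for the MASK token
        self.pos = nn.Parameter(torch.zeros(1, SEQ, dim))
        layer = nn.TransformerEncoderLayer(dim, 4, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, 2)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens):
        x = self.embed(tokens) + self.pos
        return self.head(self.encoder(x))           # no causal mask: bidirectional

def masked_modeling_step(model, tokens, mask_ratio=0.5):
    """One training step: corrupt, predict, take cross-entropy on masked slots only."""
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_ratio
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    return F.cross_entropy(logits[mask], tokens[mask])

model = BidirectionalTokenModel()
video_tokens = torch.randint(0, VOCAB, (2, SEQ))    # output of a 3D tokenizer
print(masked_modeling_step(model, video_tokens).item())
```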
Video generation requires synthesizing consistent and persistent frames with dynamic content over time. This work investigates modeling the temporal relations for composing video with arbitrary length, from a few frames to even infinite, using generative adversarial networks (GANs). First, towards composing adjacent frames, we show that the alias-free operation for single image generation, together with adequately pre-learned knowledge, brings a smooth frame transition without compromising the per-frame quality. Second, by incorporating the temporal shift module (TSM), originally designed for video understanding, into the discriminator, we manage to advance the generator in synthesizing more consistent dynamics. Third, we develop a novel B-Spline based motion representation to ensure temporal smoothness to achieve infinite-length video generation. It can go beyond the frame number used in training. A low-rank temporal modulation is also proposed to alleviate repeating contents for long video generation. We evaluate our approach on various datasets and show substantial improvements over video generation baselines. Code and models will be publicly available at https://genforce.github.io/StyleSV.
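The temporal shift module (TSM) mentioned above is simple enough to sketch directly: a slice of channels is shifted one step forward in time, another slice backward, and the rest is left untouched, giving per-frame features access to their neighbors at negligible cost. Tensor shapes here are illustrative, not the StyleSV discriminator's.

```python
# Temporal shift module as it might be dropped into a video discriminator:
# mix information across adjacent frames by shifting a fraction of the channels.
import torch

def temporal_shift(x, shift_div=4):
    """x: (B, T, C, H, W). Shift 1/shift_div of channels forward and backward in time."""
    b, t, c, h, w = x.shape
    fold = c // shift_div
    out = torch.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]                    # shift forward in time
    out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]    # shift backward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]               # the rest stays put
    return out

feat = torch.randn(2, 8, 64, 16, 16)                        # per-frame discriminator features
print(temporal_shift(feat).shape)                           # (2, 8, 64, 16, 16), now temporally mixed
```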
Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically guided synthesis of megapixel images with transformers and obtain the state of the art among autoregressive models on class-conditional ImageNet. Code and pretrained models can be found at https://git.io/JnyvK.
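A toy sketch of the two-stage recipe: a convolutional encoder plus a learned codebook turns an image into a short grid of discrete indices (stage one), which an autoregressive transformer then models (stage two). The code below is a minimal stand-in under assumed sizes; it omits the perceptual and adversarial losses that make the learned vocabulary context-rich.

```python
# Minimal two-stage sketch: nearest-codebook quantization, then token modeling.
import torch
import torch.nn as nn

class ToyVectorQuantizer(nn.Module):
    def __init__(self, n_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, dim)

    def forward(self, z):                            # z: (B, dim, H, W) encoder output
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)  # (B*H*W, dim)
        dists = torch.cdist(flat, self.codebook.weight)
        idx = dists.argmin(dim=1)                    # nearest codebook entry
        zq = self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)
        return zq, idx.view(b, h * w)                # quantized features + token indices

encoder = nn.Conv2d(3, 64, kernel_size=8, stride=8)  # crude stand-in for the CNN encoder
vq = ToyVectorQuantizer()
img = torch.randn(1, 3, 64, 64)
zq, tokens = vq(encoder(img))
print(zq.shape, tokens.shape)                         # (1, 64, 8, 8), (1, 64)
# Stage 2 (not shown): train a causal transformer on `tokens` to model
# p(s_i | s_<i), optionally prefixed with class or segmentation tokens.
```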
In this work, we present the Patch-based Object-centric Video Transformer (POVT), a novel region-based video generation architecture that leverages object-centric information to efficiently model temporal dynamics in videos. We build upon prior work on video prediction with autoregressive transformers over the discrete latent space of compressed videos, modified to additionally incorporate object-centric information via bounding boxes. Thanks to the better compressibility of object-centric representations, we can improve training efficiency by allowing the model to access only object information over the longer-horizon temporal context. When evaluated on several difficult object-centric datasets, our method achieves better or comparable performance to other video generation models while being more computationally efficient and scalable. In addition, we show that our method is capable of object-centric controllability through bounding-box manipulation, which may aid downstream tasks such as video editing or visual planning. Samples are available at https://sites.google.com/view/povt-public
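One way to picture the object-centric conditioning described above: per frame, a few bounding boxes are embedded and placed alongside the frame's discrete patch tokens so a transformer can carry compact object information over a longer horizon. The box format, dimensions, and module below are assumptions, not the POVT code.

```python
# Sketch: build a per-frame token sequence from bounding-box embeddings plus
# discrete patch codes, ready to be fed to an autoregressive transformer.
import torch
import torch.nn as nn

class ObjectCentricTokens(nn.Module):
    def __init__(self, dim=128, vocab=512):
        super().__init__()
        self.patch_embed = nn.Embedding(vocab, dim)   # embeds discrete patch codes
        self.box_embed = nn.Linear(4, dim)            # embeds normalized (x1, y1, x2, y2) boxes

    def forward(self, patch_codes, boxes):
        # patch_codes: (B, T, P) ints, boxes: (B, T, N, 4) floats in [0, 1]
        patch_tok = self.patch_embed(patch_codes)     # (B, T, P, dim)
        box_tok = self.box_embed(boxes)               # (B, T, N, dim)
        return torch.cat([box_tok, patch_tok], dim=2) # (B, T, N + P, dim)

tok = ObjectCentricTokens()
codes = torch.randint(0, 512, (1, 4, 16))             # 4 frames, 16 patch codes each
boxes = torch.rand(1, 4, 3, 4)                         # 3 objects per frame
print(tok(codes, boxes).shape)                         # (1, 4, 19, 128)
```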
Although two-stage vector quantized (VQ) generative models allow synthesizing high-fidelity and high-resolution images, their quantization operator encodes similar patches within an image into the same index, resulting in repeated artifacts for similar adjacent regions when using existing decoder architectures. To address this issue, we propose incorporating spatially conditional normalization to modulate the quantized vectors so as to insert spatially variant information into the embedded index maps, encouraging the decoder to generate more photorealistic images. Moreover, we use multichannel quantization to increase the recombination capability of the discrete codes without increasing the cost of the model and the codebook. Additionally, to generate the discrete tokens in the second stage, we adopt a Masked Generative Image Transformer (MaskGIT) to learn the underlying prior distribution in the compressed latent space, which is much faster than conventional autoregressive models. Experiments on two benchmark datasets demonstrate that our proposed modulated VQGAN is able to greatly improve the reconstructed image quality and provide high-fidelity image generation.
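The spatially conditional normalization idea above can be sketched as a normalization layer whose per-pixel scale and shift are predicted from a spatial condition, so identical code indices can still decode to spatially varying outputs. Layer sizes and the condition tensor are illustrative assumptions, not the paper's decoder.

```python
# Sketch: normalize the quantized features, then modulate them with per-pixel
# scale and shift predicted from a spatially varying condition.
import torch
import torch.nn as nn

class SpatiallyConditionalNorm(nn.Module):
    def __init__(self, channels, cond_channels):
        super().__init__()
        self.norm = nn.GroupNorm(8, channels, affine=False)
        self.to_gamma = nn.Conv2d(cond_channels, channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(cond_channels, channels, kernel_size=3, padding=1)

    def forward(self, zq, cond):                      # zq: (B, C, H, W), cond: (B, Cc, H, W)
        return self.norm(zq) * (1 + self.to_gamma(cond)) + self.to_beta(cond)

layer = SpatiallyConditionalNorm(channels=64, cond_channels=16)
zq = torch.randn(2, 64, 32, 32)                        # quantized decoder features
cond = torch.randn(2, 16, 32, 32)                      # spatially varying condition
print(layer(zq, cond).shape)                           # (2, 64, 32, 32)
```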
As information exists in various modalities in the real world, effective interaction and fusion among multimodal information plays a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its superb power in modeling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Different from traditional visual guidance, which provides explicit clues, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, this field also faces several challenges characterized by inherent modality gaps, the synthesis of high-resolution images, faithful evaluation metrics, and so on. In this survey, we comprehensively contextualize the recent progress in multimodal image synthesis and editing and formulate taxonomies according to data modalities and model architectures. We start with an introduction to the different types of guidance modalities in image synthesis and editing. We then describe multimodal image synthesis and editing approaches in detail, covering frameworks including generative adversarial networks (GANs), GAN inversion, transformers, and other methods such as NeRF and diffusion models. This is followed by a comprehensive description of the benchmark datasets and evaluation metrics widely adopted in multimodal image synthesis and editing, as well as detailed comparisons of different synthesis methods with analysis of their respective advantages and limitations. Finally, we provide in-depth insights into current research challenges and possible future research directions. A project associated with this survey is available at https://github.com/fnzhan/mise
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.
Video prediction is a challenging computer vision task that has a wide range of applications. In this work, we present a new family of Transformer-based models for video prediction. Firstly, an efficient local spatial-temporal separation attention mechanism is proposed to reduce the complexity of standard Transformers. Then, a full autoregressive model, a partial autoregressive model and a non-autoregressive model are developed based on the new efficient Transformer. The partial autoregressive model has a similar performance with the full autoregressive model but a faster inference speed. The non-autoregressive model not only achieves a faster inference speed but also mitigates the quality degradation problem of the autoregressive counterparts, but it requires additional parameters and loss function for learning. Given the same attention mechanism, we conducted a comprehensive study to compare the proposed three video prediction variants. Experiments show that the proposed video prediction models are competitive with more complex state-of-the-art convolutional-LSTM based models. The source code is available at https://github.com/XiYe20/VPTR.
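The efficiency argument above rests on separating attention over space and time; the sketch below shows that factorization in its plainest form (attention over patches within each frame, then over time at each spatial position). It is a simplification under assumed shapes, not the paper's exact local attention mechanism.

```python
# Sketch of spatial-temporal separated attention: attend over space per frame,
# then over time per spatial position, which is far cheaper than joint attention.
import torch
import torch.nn as nn

class SeparatedSpaceTimeAttention(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                              # x: (B, T, P, D) patch features
        b, t, p, d = x.shape
        s = x.reshape(b * t, p, d)                     # attend over patches within each frame
        s, _ = self.spatial(s, s, s)
        s = s.reshape(b, t, p, d)
        u = s.permute(0, 2, 1, 3).reshape(b * p, t, d) # attend over time at each position
        u, _ = self.temporal(u, u, u)
        return u.reshape(b, p, t, d).permute(0, 2, 1, 3)

block = SeparatedSpaceTimeAttention()
x = torch.randn(2, 8, 64, 128)                         # 8 frames x 64 patches
print(block(x).shape)                                  # (2, 8, 64, 128)
```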
There is growing interest in the intersection of machine learning and creativity. This survey presents an overview of the history and current state of research on computational creativity theory, key machine learning techniques (including generative deep learning), and corresponding automatic evaluation methods. After critically discussing the major contributions in this area, we outline the current research challenges and emerging opportunities in the field.
Recent advances in generative adversarial networks (GANs) have demonstrated the capabilities of generating stunning photo-realistic portrait images. While some prior works have applied such image GANs to unconditional 2D portrait video generation and static 3D portrait synthesis, there are few works successfully extending GANs for generating 3D-aware portrait videos. In this work, we propose PV3D, the first generative framework that can synthesize multi-view consistent portrait videos. Specifically, our method extends the recent static 3D-aware image GAN to the video domain by generalizing the 3D implicit neural representation to model the spatio-temporal space. To introduce motion dynamics to the generation process, we develop a motion generator by stacking multiple motion layers to generate motion features via modulated convolution. To alleviate motion ambiguities caused by camera/human motions, we propose a simple yet effective camera condition strategy for PV3D, enabling both temporal and multi-view consistent video generation. Moreover, PV3D introduces two discriminators for regularizing the spatial and temporal domains to ensure the plausibility of the generated portrait videos. These elaborated designs enable PV3D to generate 3D-aware motion-plausible portrait videos with high-quality appearance and geometry, significantly outperforming prior works. As a result, PV3D is able to support many downstream applications such as animating static portraits and view-consistent video motion editing. Code and models will be released at https://showlab.github.io/pv3d.
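A heavily simplified sketch of a stack of motion layers driven by modulated convolution, as mentioned above: a motion code rescales the input of a temporal convolution. The 3D-aware rendering, camera conditioning, and demodulation details are not reproduced; all sizes are assumptions rather than the PV3D architecture.

```python
# Sketch: a motion code modulates the features of a 1D temporal convolution;
# several such layers are stacked to produce motion features.
import torch
import torch.nn as nn

class ModulatedMotionLayer(nn.Module):
    def __init__(self, channels=64, z_dim=128):
        super().__init__()
        self.affine = nn.Linear(z_dim, channels)           # motion code -> per-channel scale
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat, z_motion):                      # feat: (B, C, T)
        scale = self.affine(z_motion).unsqueeze(-1)         # (B, C, 1)
        return torch.relu(self.conv(feat * (1 + scale)))

layers = nn.ModuleList([ModulatedMotionLayer() for _ in range(3)])  # stacked motion layers
feat = torch.randn(2, 64, 16)                               # motion features over 16 time steps
z = torch.randn(2, 128)                                      # motion code
for layer in layers:
    feat = layer(feat, z)
print(feat.shape)                                            # (2, 64, 16)
```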
Recent breakthroughs in adversarial generative modeling have led to models capable of producing high-quality video samples, even on large and complex datasets of real-world video. In this work, we focus on the task of video prediction, where, given a sequence of frames extracted from a video, the goal is to generate a plausible future sequence. We first improve the state of the art by performing a systematic empirical study of discriminator decompositions and proposing an architecture that yields faster convergence and higher performance. We then analyze the recurrent units in the generator and propose a novel recurrent unit that transforms its past hidden state according to predicted motion-like features, and refines it to handle dis-occlusions, scene changes, and other complex behavior. We show that this recurrent unit consistently outperforms previous designs. Our final model leads to a leap in state-of-the-art performance, obtaining a test-set Frechet Video Distance of 25.7, down from 69.2, on the large-scale Kinetics-600 dataset.
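The recurrent unit described above, which transforms its past hidden state according to predicted motion, can be approximated by a small cell that predicts a flow field, warps the previous state with it, and then refines the result. This is an illustrative stand-in (the flow head, warping via grid_sample, and all sizes are assumptions), not the paper's unit.

```python
# Sketch of a motion-conditioned recurrent cell: predict a flow from (input, state),
# warp the previous state with it, then refine.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionRecurrentCell(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.to_flow = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        self.update = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x, h):                           # x, h: (B, C, H, W)
        b, c, hh, ww = h.shape
        flow = self.to_flow(torch.cat([x, h], dim=1))  # predicted motion, (B, 2, H, W)
        # build a normalized sampling grid displaced by the predicted flow
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, hh), torch.linspace(-1, 1, ww), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        grid = base + flow.permute(0, 2, 3, 1)
        h_warped = F.grid_sample(h, grid, align_corners=True)
        return torch.tanh(self.update(torch.cat([x, h_warped], dim=1)))

cell = MotionRecurrentCell()
h = torch.zeros(1, 32, 16, 16)
for _ in range(4):                                     # unroll over a few frames
    h = cell(torch.randn(1, 32, 16, 16), h)
print(h.shape)                                          # (1, 32, 16, 16)
```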
In this paper, we present NUWA-Infinity, a generative model for infinite visual synthesis, which is defined as the task of generating arbitrarily-sized high-resolution images or long-duration videos. An autoregressive over autoregressive generation mechanism is proposed to handle this variable-size generation task, where a global patch-level autoregressive model considers the dependencies between patches, and a local token-level autoregressive model considers the dependencies between visual tokens within each patch. A Nearby Context Pool (NCP) is introduced to cache already generated patches that are relevant as context for the patch currently being generated, which can significantly save computational cost without sacrificing patch-level dependency modeling. An Arbitrary Direction Controller (ADC) is used to decide suitable generation orders for different visual synthesis tasks and to learn order-aware positional embeddings. Compared with DALL-E, Imagen, and Parti, NUWA-Infinity can generate high-resolution images of arbitrary size and additionally support long-duration video generation. Compared with NUWA, which also covers images and videos, NUWA-Infinity has superior visual synthesis capabilities in terms of resolution and variable-size generation. The GitHub link is https://github.com/microsoft/nuwa. The homepage link is https://nuwa-infinity.microsoft.com.
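A sketch of what a nearby-context-pool style cache could look like for the patch-level generation described above: generated patches are cached by grid coordinates and only the neighbors of the current patch are retrieved as context. The class and its interface are assumptions, not the NUWA-Infinity code.

```python
# Sketch: cache generated patches by their grid position and retrieve only
# the nearby ones as context for the next patch.
import torch

class NearbyContextPool:
    def __init__(self, radius=1):
        self.radius = radius
        self.cache = {}                                # (row, col) -> patch tensor

    def add(self, row, col, patch):
        self.cache[(row, col)] = patch

    def context_for(self, row, col):
        """Gather cached patches within `radius` of (row, col)."""
        ctx = [p for (r, c), p in self.cache.items()
               if abs(r - row) <= self.radius and abs(c - col) <= self.radius]
        return torch.stack(ctx) if ctx else None

pool = NearbyContextPool(radius=1)
for r in range(2):                                      # pretend we generated a 2x3 grid
    for c in range(3):
        pool.add(r, c, torch.randn(16, 64))             # 16 tokens x 64 dims per patch
ctx = pool.context_for(2, 1)                            # context for the next patch
print(ctx.shape)                                        # (3, 16, 64): its cached neighbours
```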
An ongoing trend in generative modeling research has been to push sample resolutions higher while decreasing the computational requirements for training and sampling. We aim to push this trend further via a combination of techniques, each component representing the current pinnacle of efficiency in its respective domain. These include vector-quantized GANs (VQ-GAN), a model capable of high levels of lossy, yet perceptually insignificant, compression; hourglass transformers, a highly scalable self-attention model; and step-unrolled denoising autoencoders (SUNDAE), a non-autoregressive (NAR) text generation model. Unexpectedly, when applied to multidimensional data, our method highlights weaknesses in the original formulation of the hourglass transformer. In light of this, we propose modifications to its resampling mechanism, applicable to any task that applies hierarchical transformers to multidimensional data. We also demonstrate the scalability of SUNDAE to long sequence lengths, four times longer than prior work. Our proposed framework scales to high resolutions ($1024 \times 1024$) and trains quickly (2-4 days). Crucially, the trained model produces diverse and realistic megapixel samples in approximately 2 seconds on a consumer-grade GPU (GTX 1080Ti). In general, the framework is flexible: it supports an arbitrary number of sampling steps, sample auto-insertion, self-correction capabilities, conditional generation, and a NAR formulation that allows arbitrary inpainting masks. We obtain an FID score of 10.56 on FFHQ256, approaching the original VQ-GAN in less than half the sampling steps (only 100 steps), and 21.85 on FFHQ1024.
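The step-unrolled denoising training loop referenced above reduces to a short sketch over discrete tokens: corrupt, predict, feed the model's own samples back in for an extra unrolled step, and score every step against the clean tokens. The toy denoiser, corruption rate, and sizes are placeholders, not the paper's setup.

```python
# Minimal step-unrolled denoising over discrete tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ = 256, 64

model = nn.Sequential(                                  # toy non-autoregressive denoiser
    nn.Embedding(VOCAB, 128), nn.Linear(128, 128), nn.GELU(), nn.Linear(128, VOCAB))

def corrupt(tokens, rate=0.5):
    noise = torch.randint(0, VOCAB, tokens.shape)
    keep = torch.rand(tokens.shape) > rate
    return torch.where(keep, tokens, noise)

def unrolled_denoising_loss(tokens, unroll_steps=2):
    x = corrupt(tokens)
    loss = 0.0
    for _ in range(unroll_steps):
        logits = model(x)                               # (B, SEQ, VOCAB)
        loss = loss + F.cross_entropy(logits.transpose(1, 2), tokens)
        # feed the model's own samples back in for the next unrolled step
        x = torch.multinomial(
            torch.softmax(logits, dim=-1).view(-1, VOCAB), 1).view(tokens.shape)
    return loss / unroll_steps

clean = torch.randint(0, VOCAB, (4, SEQ))
print(unrolled_denoising_loss(clean).item())
```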
Generative adversarial models (GANs) continue to produce advances in terms of the visual quality of still images, as well as in the learning of temporal correlations. However, few works manage to combine these two interesting capabilities for the synthesis of video content: most methods require extensive training datasets to learn temporal correlations, while being rather limited in the resolution and visual quality of their output. We present a novel approach to the video synthesis problem that helps to greatly improve visual quality and drastically reduce the amount of training data and resources necessary for generating videos. Our formulation separates the spatial domain, in which individual frames are synthesized, from the temporal domain, in which motion is generated. For the spatial domain we use a pre-trained StyleGAN network, the latent space of which allows control over the appearance of the objects it was trained on. The expressive power of this model allows us to embed our training videos in the StyleGAN latent space. Our temporal architecture is then trained not on sequences of RGB frames, but on sequences of StyleGAN latent codes. The advantageous properties of the StyleGAN space simplify the discovery of temporal correlations. We demonstrate that it suffices to train our temporal architecture on only 10 minutes of footage of one subject for about 6 hours. After training, our model can generate new portrait videos not only for the training subject, but also for any random subject that can be embedded in the StyleGAN space.
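The spatial/temporal split described above can be sketched as a frozen image generator plus a small recurrent model trained purely on sequences of latent codes. The dummy generator, the GRU-based trajectory model, and the 512-dimensional latent are assumptions in the spirit of the description, not the authors' architecture.

```python
# Sketch: a temporal model over latent codes, decoded frame-by-frame by a frozen
# image generator (represented here by a dummy linear layer).
import torch
import torch.nn as nn

W_DIM = 512                                             # typical StyleGAN latent size

class LatentSequenceModel(nn.Module):
    """Predicts a trajectory of latent codes from an initial embedded frame code."""
    def __init__(self, w_dim=W_DIM, hidden=256):
        super().__init__()
        self.gru = nn.GRU(w_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, w_dim)

    def forward(self, w0, n_frames):
        w = w0.unsqueeze(1)                              # (B, 1, w_dim)
        h = None
        outputs = []
        for _ in range(n_frames):
            y, h = self.gru(w, h)
            w = w0.unsqueeze(1) + self.out(y)            # offset from the seed code
            outputs.append(w)
        return torch.cat(outputs, dim=1)                 # (B, n_frames, w_dim)

frozen_image_generator = nn.Linear(W_DIM, 3 * 64 * 64)   # stand-in for a pre-trained StyleGAN
temporal = LatentSequenceModel()
w_seed = torch.randn(1, W_DIM)                            # e.g. an embedded real frame
w_traj = temporal(w_seed, n_frames=8)
frames = frozen_image_generator(w_traj).view(1, 8, 3, 64, 64)
print(frames.shape)
```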
Predicting future outcomes or reasoning about missing information in a sequence are key abilities for an agent to make intelligent decisions. This requires strong, temporally coherent generative capabilities. Diffusion models have recently shown great success on several generative tasks but have not been extensively explored in the video domain. We present Random-Mask Video Diffusion (RaMViD), which extends image diffusion models to videos using 3D convolutions and introduces a new conditioning technique during training. By varying the mask we condition on, the model is able to perform video prediction, infilling, and upsampling. Since we do not condition on the mask via concatenation, as is done in most conditionally trained diffusion models, we are able to reduce the memory footprint. We evaluated the model on two benchmark datasets for video prediction and one for video generation, on which we achieved competitive results. On Kinetics-600 we achieved state-of-the-art results for video prediction.
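The mask-based conditioning described above can be sketched in a few lines: frames selected by the mask are kept clean, the remaining frames are noised, and the denoising loss is computed only on the noised frames, with no mask concatenated to the input. The stand-in denoiser (timestep conditioning omitted) and the noise schedule are assumptions, not the RaMViD training code.

```python
# Simplified training step for mask-conditioned video diffusion.
import torch
import torch.nn as nn

denoiser = nn.Conv3d(3, 3, kernel_size=3, padding=1)    # stand-in for the 3D U-Net

def diffusion_step(video, alphas_cumprod, mask):
    """video: (B, C, T, H, W); mask: (B, T) with 1 = conditioning frame (kept clean)."""
    b = video.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (b,))
    a = alphas_cumprod[t].view(b, 1, 1, 1, 1)
    noise = torch.randn_like(video)
    noisy = a.sqrt() * video + (1 - a).sqrt() * noise
    m = mask.view(b, 1, -1, 1, 1)
    x_in = m * video + (1 - m) * noisy                   # conditioning frames stay clean
    pred = denoiser(x_in)                                # predicts the added noise
    w = (1 - m).expand_as(pred)                          # score only the noised frames
    return ((pred - noise) ** 2 * w).sum() / w.sum().clamp(min=1)

T = 8
video = torch.randn(2, 3, T, 64, 64)
alphas_cumprod = torch.linspace(0.99, 0.01, steps=1000)
mask = torch.zeros(2, T)
mask[:, :2] = 1                                          # condition on the first 2 frames: video prediction
print(diffusion_step(video, alphas_cumprod, mask).item())
```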
We present a framework for video modeling based on denoising diffusion probabilistic models that produces long-duration video completions in a variety of realistic environments. We introduce a generative model that can, at test time, sample any arbitrary subset of video frames conditioned on any other subset, and present an architecture adapted for this purpose. Doing so allows us to efficiently compare and optimize a variety of schedules for sampling the frames of a long video, and to use selective sparse and long-range conditioning on previously sampled frames. We demonstrate improved video modeling over prior work on a number of datasets and sample temporally coherent videos over 25 minutes in length. We also release a new video modeling dataset and semantically meaningful metrics based on videos generated in the CARLA autonomous driving simulator.
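The flexible any-subset-given-any-subset sampling above is, at its core, a scheduling question; the sketch below shows one hypothetical coarse-to-fine schedule that first samples sparse keyframes across the whole video and then fills the gaps conditioned on nearby known frames. The concrete schedule is an assumption, not one of the paper's optimized schedules.

```python
# Sketch: a sampling schedule as a list of (frames_to_sample, frames_to_condition_on) pairs.
from typing import List, Tuple

def coarse_to_fine_schedule(n_frames: int, stride: int = 8,
                            window: int = 4) -> List[Tuple[List[int], List[int]]]:
    """Return (frames_to_sample, frames_to_condition_on) pairs for a long video."""
    known = [0]                                          # assume the first frame is given
    stages = []
    # stage 1: a sparse keyframe pass across the whole video
    keyframes = [i for i in range(stride, n_frames, stride)]
    stages.append((keyframes, list(known)))
    known += keyframes
    # stage 2: fill the gaps, conditioning on known frames that are nearby
    for i in range(0, n_frames, stride):
        gap = [j for j in range(i + 1, min(i + stride, n_frames)) if j not in known]
        if gap:
            nearest = [k for k in known if abs(k - i) <= stride * window]
            stages.append((gap, nearest))
            known += gap
    return stages

for latent, observed in coarse_to_fine_schedule(32)[:3]:
    print("sample", latent, "given", observed)
```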
Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets, with qualitative and quantitative comparison to the state-of-the-art approaches, verify the effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with the same content but different motion, as well as videos with different content and the same motion.
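The content/motion decomposition above is easy to sketch: one content vector is drawn per video and held fixed, while a recurrent network realizes the motion part as a stochastic process, emitting one motion vector per frame. The module below is a toy stand-in with assumed dimensions, not the MoCoGAN implementation.

```python
# Sketch: per-video content code plus a per-frame motion code produced by a GRU
# driven by fresh noise at every step.
import torch
import torch.nn as nn

class MotionContentLatents(nn.Module):
    def __init__(self, content_dim=64, motion_dim=16, hidden=64):
        super().__init__()
        self.content_dim, self.motion_dim, self.hidden = content_dim, motion_dim, hidden
        self.rnn = nn.GRUCell(motion_dim, hidden)
        self.to_motion = nn.Linear(hidden, motion_dim)

    def forward(self, batch, n_frames):
        z_content = torch.randn(batch, self.content_dim)   # fixed for the whole video
        h = torch.zeros(batch, self.hidden)
        frames = []
        for _ in range(n_frames):
            eps = torch.randn(batch, self.motion_dim)       # fresh noise each frame
            h = self.rnn(eps, h)
            frames.append(torch.cat([z_content, self.to_motion(h)], dim=1))
        return torch.stack(frames, dim=1)                   # (B, T, content_dim + motion_dim)

latents = MotionContentLatents()(batch=2, n_frames=16)
print(latents.shape)                                         # (2, 16, 80); feed each row to an image generator
```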