A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurately predicting videos conditioned on the robot's future actions amounts to learning a "visual imagination" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions, both quantitatively and qualitatively, than prior methods.
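As a rough illustration of the pixel-motion idea described above (not the paper's exact formulation), the following PyTorch sketch composites a predicted next frame from shifted copies of the previous frame, weighted by a per-pixel distribution over a small set of candidate motions. The network that would produce `shift_logits` from past frames and the robot action is assumed and not shown.

```python
import torch
import torch.nn.functional as F

def composite_next_frame(prev_frame, shift_logits, shifts):
    """Predict the next frame by redistributing pixels from the previous frame.

    prev_frame:   (B, C, H, W) last observed frame
    shift_logits: (B, K, H, W) per-pixel scores over K candidate motions
                  (hypothetically produced by a network conditioned on
                  past frames and the robot action)
    shifts:       list of K (dy, dx) integer displacements
    """
    masks = F.softmax(shift_logits, dim=1)                 # per-pixel distribution over motions
    candidates = [torch.roll(prev_frame, s, dims=(2, 3)) for s in shifts]
    candidates = torch.stack(candidates, dim=1)            # (B, K, C, H, W)
    return (masks.unsqueeze(2) * candidates).sum(dim=1)    # (B, C, H, W)

# Example: 9 candidate shifts covering a 3x3 neighbourhood.
shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
prev = torch.rand(1, 3, 64, 64)
logits = torch.randn(1, len(shifts), 64, 64)
next_frame = composite_next_frame(prev, logits, shifts)
```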
Recent breakthroughs in adversarial generative modeling have led to models capable of producing high-quality video samples, even on large and complex datasets of real-world video. In this work, we focus on the task of video prediction, where given a sequence of frames extracted from a video, the goal is to generate a plausible future sequence. We first improve the state of the art through a systematic empirical study of discriminator decompositions and propose an architecture that yields faster convergence and higher performance. We then analyze the recurrent units in the generator and propose a novel recurrent unit which transforms its past hidden state according to predicted motion-like features, and refines it to handle disocclusions, scene changes and other complex behavior. We show that this recurrent unit consistently outperforms previous designs. Our final model leads to a leap in state-of-the-art performance, obtaining a test-set Fréchet Video Distance of 25.7 on the large-scale Kinetics-600 dataset, down from 69.2.
Autonomous systems not only need to understand their current environment, but should also be able to predict future actions conditioned on past states, for instance based on captured camera frames. However, existing models mainly focus on forecasting future video frames over short time horizons, and are hence of limited use for long-term action planning. We propose Multi-Scale Hierarchical Prediction (MSPred), a novel video prediction model able to simultaneously forecast possible future outcomes at different levels of granularity and at different spatio-temporal scales. By combining spatial and temporal downsampling, MSPred efficiently predicts abstract representations such as human poses or locations over long time horizons, while still maintaining competitive performance for video frame prediction. In our experiments, we demonstrate that MSPred accurately predicts future video frames as well as high-level representations (e.g. keypoints or semantics) on bin-picking and action recognition datasets, while consistently outperforming popular approaches for future frame prediction. Furthermore, we ablate different modules and design choices in MSPred, experimentally validating that combining features of different spatial and temporal granularity leads to superior performance. Code and models to reproduce our experiments can be found at https://github.com/AIS-Bonn/MSPred.
Object-centric representations are a promising path toward more systematic generalization, providing flexible abstractions that can be built upon. Recent work on simple 2D and 3D datasets has shown that models with object-centric inductive biases can learn to segment and represent meaningful objects from the statistical structure of the data alone, without any supervision. However, despite the use of increasingly complex inductive biases (e.g., for the size of a scene or its 3D geometry), such fully unsupervised methods still fail to scale to diverse realistic data. In this paper, we take a weakly supervised approach and focus on how 1) using the temporal dynamics of video data in the form of optical flow and 2) conditioning the model on simple object location cues can be used to enable segmenting and tracking objects in significantly more realistic synthetic data. We introduce a sequential extension which we train to predict optical flow for realistic-looking synthetic scenes, and show that conditioning the initial state of this model on a small set of hints, such as the center of mass of objects in the first frame, is sufficient to significantly improve instance segmentation. These benefits generalize beyond the training distribution to novel objects, novel backgrounds, and longer video sequences. We also find that such initial-state conditioning can be used during inference as a flexible interface to query the model for specific objects or parts of objects, which could pave the way for a range of weakly supervised approaches and allow more effective interaction with trained models.
Uncertainty plays a key role in future prediction. The future is uncertain, meaning that there are many possible futures; prediction methods should therefore robustly cover the full range of possibilities. In autonomous driving, covering multiple modes in the prediction space is crucial for making safety-critical decisions. Although computer vision systems have improved greatly in recent years, future prediction remains difficult today, owing among other things to the uncertainty of the future, the requirement of full scene understanding, and the noisy output space. In this thesis, we propose solutions to these challenges by explicitly modeling motion in a stochastic manner and by learning temporal dynamics in a latent space.
We introduce layered controllable video generation, where we, without any supervision, decompose the initial frame of a video into foreground and background layers, with which the user can control the video generation process by simply manipulating the foreground mask. The key challenges are the unsupervised foreground-background separation, which is ambiguous, and the ability to anticipate user manipulations given access only to raw video sequences. We address these challenges by proposing a two-stage learning procedure. In the first stage, with rich losses and a dynamic foreground size, we learn how to separate the frame into foreground and background layers and, conditioned on these layers, how to generate the next frame with a VQ-VAE generator. In the second stage, we fine-tune this network to anticipate edits to the mask by fitting (parameterized) control to the mask from the future frame. We demonstrate the effectiveness of this learning scheme and of the more granular control mechanism, and report state-of-the-art performance on two benchmark datasets. We provide a video abstract as well as video results at https://gabriel-….github.io/layered_controllable_video_generation
We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.
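A minimal sketch of the frame-synthesis step implied above: pixels of the new frame are obtained by sampling the two existing frames along a predicted flow field and blending them with a predicted temporal weight. The network that outputs `flow` and `blend` is assumed; only the differentiable sampling is shown (PyTorch).

```python
import torch
import torch.nn.functional as F

def warp_by_voxel_flow(frame0, frame1, flow, blend):
    """Synthesize an intermediate frame by sampling pixels from two existing frames.

    frame0, frame1: (B, C, H, W) existing frames
    flow:           (B, 2, H, W) predicted (dx, dy) offsets in pixels
    blend:          (B, 1, H, W) predicted temporal weight in [0, 1]
    """
    B, _, H, W = frame0.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).repeat(B, 1, 1, 1)
    # Convert pixel offsets to normalized offsets.
    offs = torch.stack((flow[:, 0] * 2 / (W - 1), flow[:, 1] * 2 / (H - 1)), dim=-1)
    # Sample the two frames with opposite offsets, then blend.
    sample0 = F.grid_sample(frame0, base - offs, align_corners=True)
    sample1 = F.grid_sample(frame1, base + offs, align_corners=True)
    return blend * sample0 + (1 - blend) * sample1
```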
The visual world can be parsimoniously characterized in terms of distinct entities with sparse interactions. Discovering this compositional structure in dynamic visual scenes has proven challenging for end-to-end computer vision approaches unless explicit instance-level supervision is provided. Slot-based models leveraging motion cues have recently shown great promise in learning to represent, segment and track objects without direct supervision, but they still fail to scale to complex real-world multi-object videos. In an effort to bridge this gap, we take inspiration from human development and hypothesize that information about scene geometry, in the form of depth signals, can facilitate object-centric learning. We introduce SAVi++, an object-centric video model trained to predict depth signals from a slot-based video representation. By further leveraging best practices for model scaling, we are able to train SAVi++ to segment complex dynamic scenes recorded with moving cameras, containing both static and moving objects of diverse appearance on naturalistic backgrounds, without segmentation supervision. Finally, we demonstrate that by using sparse depth signals obtained from LiDAR, SAVi++ is able to learn emergent object segmentation and tracking from videos in the real-world Waymo Open dataset.
In a traditional convolutional layer, the learned filters stay fixed after training. In contrast, we introduce a new framework, the Dynamic Filter Network, where filters are generated dynamically conditioned on an input. We show that this architecture is a powerful one, with increased flexibility thanks to its adaptive nature, yet without an excessive increase in the number of model parameters. A wide variety of filtering operations can be learned this way, including local spatial transformations, but also others like selective (de)blurring or adaptive feature extraction. Moreover, multiple such layers can be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model. By visualizing the learned filters, we illustrate that the network has picked up flow information by only looking at unlabelled training data. This suggests that the network can be used to pretrain networks for various supervised tasks in an unsupervised way, like optical flow and depth estimation. * X. Jia and B. De Brabandere contributed equally to this work and are listed in alphabetical order.
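The core operation can be sketched in a few lines: one network (assumed, not shown) emits a separate small kernel for every location of the input, and those kernels are applied with an unfold-and-weight step. The kernel size and the softmax normalization below are illustrative choices.

```python
import torch
import torch.nn.functional as F

def apply_dynamic_filters(x, filters):
    """Apply per-sample, per-pixel filters generated by another network.

    x:       (B, 1, H, W) input feature map (single channel for simplicity)
    filters: (B, k*k, H, W) dynamically generated filter weights, one kxk
             kernel per output location (the filter-generating network is assumed)
    """
    B, _, H, W = x.shape
    k = int(filters.shape[1] ** 0.5)
    filters = F.softmax(filters, dim=1)                     # normalize each local kernel
    # Extract kxk neighbourhoods around every pixel: (B, k*k, H*W).
    patches = F.unfold(x, kernel_size=k, padding=k // 2)
    # Weighted sum of each neighbourhood by its own dynamic kernel.
    out = (patches * filters.view(B, k * k, H * W)).sum(dim=1)
    return out.view(B, 1, H, W)

# Usage with a 9x9 dynamic local filter on random stand-in data.
x = torch.rand(2, 1, 64, 64)
dyn_filters = torch.randn(2, 81, 64, 64)   # would come from the filter-generating network
y = apply_dynamic_filters(x, dyn_filters)
```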
The objective of this paper is a model that is able to discover, track and segment multiple moving objects in a video. We make four contributions. First, we introduce an object-centric segmentation model with a depth-ordered layered representation. This is implemented using a variant of the transformer architecture that ingests optical flow, where each query vector specifies an object and its layer for the entire video. The model can effectively discover multiple moving objects and handle mutual occlusions. Second, we introduce a scalable pipeline for generating synthetic training data with multiple objects, substantially reducing the requirement for labour-intensive annotation and supporting sim2real generalization. Third, we show that the model is able to learn object permanence and temporal shape consistency, and is able to predict amodal segmentation masks. Fourth, we evaluate the model on standard video segmentation benchmarks (DAVIS, MoCA, SegTrack, FBMS-59) and achieve state-of-the-art unsupervised segmentation performance, even outperforming several supervised approaches. With test-time adaptation, we observe further performance gains.
The ability to predict future visual observations conditioned on past observations and motor commands can enable embodied agents to plan solutions to a variety of tasks in complex environments. This work shows that we can create good video prediction models by pre-training transformers via masked visual modeling. Our approach, named MaskViT, is based on two simple design decisions. First, for memory and training efficiency, we use two types of window attention: spatial and spatiotemporal. Second, during training we mask a variable percentage of tokens instead of using a fixed mask ratio. For inference, MaskViT generates all tokens via iterative refinement, in which the masking ratio is incrementally decreased following a mask scheduling function. On several datasets we demonstrate that MaskViT outperforms prior work on video prediction, is parameter efficient, and can generate high-resolution videos (256x256). Further, we demonstrate the benefits of inference speedups (up to 512x) by using MaskViT for planning on a real robot. Our work suggests that we can endow embodied agents with powerful predictive models by leveraging the general framework of masked visual modeling with minimal domain knowledge.
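The two masking-related design decisions lend themselves to a short sketch: a variable mask ratio at training time and a decreasing mask schedule for iterative refinement at inference time. The specific ratio range, cosine schedule, and confidence heuristic below are assumptions for illustration; the transformer itself is replaced by a random stand-in.

```python
import math
import torch

def sample_train_mask(num_tokens, batch_size):
    """Training: mask a variable (rather than fixed) fraction of video tokens."""
    ratios = torch.rand(batch_size) * 0.5 + 0.5          # ratio range [0.5, 1.0] is an assumed choice
    keep = torch.rand(batch_size, num_tokens)
    return keep < ratios.unsqueeze(1)                    # True = token is masked

def cosine_schedule(step, total_steps):
    """Inference: fraction of tokens still masked after a given refinement step."""
    return math.cos(0.5 * math.pi * (step + 1) / total_steps)

train_mask = sample_train_mask(num_tokens=256, batch_size=4)

# Iterative refinement: start fully masked, progressively commit the most
# confident predictions and re-predict the rest.
num_tokens, steps = 256, 8
masked = torch.ones(num_tokens, dtype=torch.bool)
for t in range(steps):
    n_next = int(cosine_schedule(t, steps) * num_tokens)  # tokens left masked after this step
    conf = torch.rand(num_tokens)                         # stand-in for the transformer's per-token confidence
    conf[~masked] = float("inf")                          # already-committed tokens stay committed
    keep_masked = conf.argsort()[:n_next]                 # least-confident positions remain masked
    masked = torch.zeros(num_tokens, dtype=torch.bool)
    masked[keep_masked] = True
```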
The challenge of rendering high-frame-rate video on low-compute devices can be addressed by periodically predicting future frames, enhancing the user experience in virtual reality applications. This is studied through the problem of temporal view synthesis (TVS), where the goal is to predict the next frames of a video given the previous frames and the head poses of the previous and next frames. In this work, we consider TVS of dynamic scenes in which both the user and objects are moving. We design a framework that disentangles the motion into user and object motion, so that the available user motion can be used effectively when predicting the next frame. We predict the motion of objects by isolating and estimating the 3D object motion in past frames and then extrapolating it. We employ multi-plane images (MPI) as a 3D representation of the scene and model object motion as the 3D displacement between corresponding points in the MPI representation. To handle the sparsity of MPIs while estimating motion, we incorporate partial convolutions and masked correlation layers to estimate corresponding points. The predicted object motion is then integrated with the given user or camera motion to generate the next frame. Using a disocclusion infilling module, we synthesize the regions uncovered due to the camera and object motion. We further develop a new synthetic dataset for TVS of dynamic scenes, consisting of 800 videos at full HD resolution. Through experiments on our dataset and the MPI Sintel dataset, we show that our model outperforms all competing methods in the literature.
Humans can easily segment moving objects without knowing what they are. That such objectness could emerge from continuous visual observations motivates us to model grouping and movement jointly from unlabeled videos. Our premise is that a video contains different views of the same scene related by moving components, and that the right region segmentation and region flow would allow mutual view synthesis, which can be checked against the data itself without any external supervision. Our model starts with two separate pathways: an appearance pathway that outputs a feature-based region segmentation of a single image, and a motion pathway that outputs motion features for a pair of images. It then binds them together in a joint representation called segment flow, which pools flow offsets over each region and provides a gross characterization of the moving regions of the entire scene. By training the model to minimize view synthesis errors based on segment flow, our appearance and motion pathways learn region segmentation and flow estimation automatically, without constructing them separately from low-level edges or optical flow. Our model demonstrates a surprising emergence of objectness in the appearance pathway, surpassing prior work on zero-shot object segmentation from a single image, on moving object segmentation from a video with unsupervised test-time adaptation, and on semantic image segmentation with supervised fine-tuning. Our work is the first truly zero-shot object segmentation from videos: it not only develops generic objectness for segmentation and tracking, but also outperforms prevalent image-based contrastive learning methods without augmentation engineering.
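The segment-flow binding described above can be sketched directly: per-pixel flow from the motion pathway is pooled into one offset per region given the appearance pathway's soft masks, then broadcast back to the pixels. Tensor shapes are illustrative assumptions.

```python
import torch

def segment_flow(masks, flow):
    """Pool per-pixel flow into one offset per region ("segment flow").

    masks: (B, K, H, W) soft region assignments from the appearance pathway
           (summing to 1 over K at each pixel)
    flow:  (B, 2, H, W) per-pixel offsets from the motion pathway
    Returns a (B, 2, H, W) flow field that is constant within each region.
    """
    weights = masks / (masks.sum(dim=(2, 3), keepdim=True) + 1e-6)   # normalize per region
    region_flow = torch.einsum("bkhw,bchw->bkc", weights, flow)      # average flow per region
    return torch.einsum("bkhw,bkc->bchw", masks, region_flow)        # broadcast back to pixels
```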
We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
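The abstract describes scoring candidate task-space motions with the success-prediction network and servoing toward high-scoring ones. One hypothetical way to realize such a servo step is a cross-entropy-method-style sampling loop, sketched below; the `score_net` interface and the 4-D motion parameterization are assumptions, not the paper's implementation.

```python
import torch

def choose_motion(score_net, image, num_samples=64, elite=6, iters=3):
    """Pick a gripper motion by sampling candidates and scoring them with the
    grasp-success network (a CEM-style loop; an illustrative sketch only).

    score_net(images, motions) -> (N,) predicted probability of a successful grasp
    image: current monocular camera image, (C, H, W)
    """
    mean, std = torch.zeros(4), torch.ones(4)         # assumed motion params, e.g. (dx, dy, dz, dyaw)
    for _ in range(iters):
        candidates = mean + std * torch.randn(num_samples, 4)
        scores = score_net(image.expand(num_samples, *image.shape), candidates)
        top = scores.topk(elite).indices              # refit the distribution to the best candidates
        mean, std = candidates[top].mean(0), candidates[top].std(0)
    return mean                                       # motion sent to the robot for this servo step

# Stand-in scorer for illustration: prefers motions close to a fixed target.
dummy_net = lambda imgs, motions: -(motions - torch.tensor([0.1, 0.0, -0.2, 0.0])).norm(dim=1)
best = choose_motion(dummy_net, torch.rand(3, 64, 64))
```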
We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drifting for long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units [31] .
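A minimal PyTorch sketch of the Encoder-Recurrent-Decoder pattern: a nonlinear feed-forward encoder and decoder wrapped around a recurrent core, trained here for one-step-ahead pose prediction. All dimensions and the single-layer LSTM are placeholder choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ERD(nn.Module):
    """Minimal Encoder-Recurrent-Decoder: nonlinear encoder/decoder around an LSTM."""
    def __init__(self, in_dim=54, hidden=512, latent=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent), nn.ReLU(),
                                     nn.Linear(latent, latent), nn.ReLU())
        self.rnn = nn.LSTM(latent, hidden, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(hidden, latent), nn.ReLU(),
                                     nn.Linear(latent, in_dim))

    def forward(self, x):
        # x: (B, T, in_dim) sequence of poses (e.g. mocap joint angles);
        # returns a same-length sequence of predicted next-step poses.
        z = self.encoder(x)
        h, _ = self.rnn(z)
        return self.decoder(h)

# One-step-ahead training sketch on random stand-in data.
model = ERD()
seq = torch.randn(8, 100, 54)
pred = model(seq[:, :-1])
loss = nn.functional.mse_loss(pred, seq[:, 1:])
```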
Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories. Our source code is publicly available.
Dynamic objects have a significant impact on a robot's perception of its environment, degrading the performance of essential tasks such as localization and mapping. In this work, we address this problem by synthesizing plausible color, texture and geometry in regions occluded by dynamic objects. We propose the novel geometry-aware DynaFill architecture, which follows a coarse-to-fine topology and incorporates a recurrent feedback mechanism to adaptively fuse information from previous timesteps. We optimize the architecture with adversarial training to synthesize fine, realistic textures, enabling it to hallucinate color and depth structure in occluded regions online, in a spatially and temporally coherent manner, without relying on information from future frames. Casting our inpainting problem as an image-to-image translation task, our model also corrects regions correlated with the presence of dynamic objects in the scene, such as shadows or reflections. We introduce a large-scale hyperrealistic dataset with RGB-D images, semantic segmentation labels, camera poses, and ground-truth RGB-D information for the occluded regions. Extensive quantitative and qualitative evaluations show that our approach achieves state-of-the-art performance, even in challenging weather conditions. Furthermore, we present results for retrieval-based visual localization with the synthesized images, demonstrating the utility of our approach.
We introduce the problem of predicting, from a single video frame, a low-dimensional subspace of optical flow that includes the actual instantaneous optical flow. We show how several natural scene assumptions allow an appropriate flow subspace to be identified via a set of basis flow fields parameterized by disparity and a representation of object instances. The flow subspace, together with a novel loss function, can be used for the tasks of predicting monocular depth or predicting depth plus an object instance embedding. This provides a new approach to learning these tasks in an unsupervised fashion using monocular input video, without requiring camera intrinsics or poses.
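The subspace idea can be made concrete with a small sketch: an instantaneous flow field is expressed as a linear combination of basis flow fields. How the basis is derived (from predicted disparity and instance representations) and how the coefficients are estimated are assumed away here; only the combination is shown.

```python
import torch

def flow_from_subspace(basis_flows, coefficients):
    """Reconstruct a flow field as a linear combination of basis flow fields.

    basis_flows:  (K, 2, H, W) basis flow fields spanning the subspace
    coefficients: (B, K) per-image coefficients selecting a point in the subspace
    """
    return torch.einsum("bk,kchw->bchw", coefficients, basis_flows)

# Example: an 8-dimensional flow subspace for a 128x128 image.
basis = torch.randn(8, 2, 128, 128)
coeff = torch.randn(4, 8)
flow = flow_from_subspace(basis, coeff)    # (4, 2, 128, 128)
```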
Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on the alignment of nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high frame rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.