We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024×436) images. Our models are available on https://github.com/NVlabs/PWC-Net.
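To make the two core operations concrete, here is a minimal PyTorch-style sketch of (a) backward-warping second-image features by the current flow estimate and (b) building a local cost volume between first-image features and the warped features. The tensor shapes, the (x, y) flow channel order, and the search radius `max_disp` are assumptions for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def warp_features(feat2, flow):
    """Backward-warp feat2 (B,C,H,W) by flow (B,2,H,W) using bilinear sampling."""
    B, _, H, W = feat2.shape
    # Build a pixel-coordinate grid and displace it by the flow (channel 0 = x, 1 = y; assumed order).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat2.device)       # (2,H,W)
    coords = grid.unsqueeze(0) + flow                                   # (B,2,H,W)
    # Normalize to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)               # (B,H,W,2)
    return F.grid_sample(feat2, grid_norm, align_corners=True)

def cost_volume(feat1, feat2_warped, max_disp=4):
    """Correlate feat1 with feat2_warped over a (2*max_disp+1)^2 search window."""
    B, C, H, W = feat1.shape
    pad = F.pad(feat2_warped, [max_disp] * 4)
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = pad[:, :, dy:dy + H, dx:dx + W]
            costs.append((feat1 * shifted).mean(dim=1, keepdim=True))   # (B,1,H,W)
    return torch.cat(costs, dim=1)  # (B, (2*max_disp+1)**2, H, W)
```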
We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions; these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (< 1 pixel), a convolutional approach applied to pairs of warped images is appropriate. Third, unlike FlowNet, the learned convolution filters appear similar to classical spatio-temporal filters, giving insight into the method and how to improve it. Our results are more accurate than FlowNet on most standard benchmarks, suggesting a new direction of combining classical flow methods with deep learning.
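The coarse-to-fine scheme described above can be summarized as: at each pyramid level, upsample and rescale the running flow, warp the second image toward the first (e.g. with a bilinear backward-warping routine like the one sketched earlier), and let a small per-level network predict only a residual update. The skeleton below assumes coarsest-first image pyramids and a list of per-level networks; it is an illustrative sketch, not the released SPyNet code.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_flow(pyramid1, pyramid2, level_nets, warp_fn):
    """Estimate flow coarse-to-fine; pyramids are lists ordered coarsest-first.

    level_nets[k] is a per-level CNN taking (img1, warped_img2, upsampled_flow)
    and returning a residual flow update, mirroring the abstract's description.
    """
    B, _, H, W = pyramid1[0].shape
    flow = torch.zeros(B, 2, H, W, device=pyramid1[0].device)
    for img1, img2, net in zip(pyramid1, pyramid2, level_nets):
        # Upsample the running estimate to this level; flow values scale with image width.
        if flow.shape[-2:] != img1.shape[-2:]:
            scale = img1.shape[-1] / flow.shape[-1]
            flow = F.interpolate(flow, size=img1.shape[-2:], mode="bilinear",
                                 align_corners=False) * scale
        warped2 = warp_fn(img2, flow)                                   # bring img2 toward img1
        flow = flow + net(torch.cat([img1, warped2, flow], dim=1))      # small residual update
    return flow
```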
Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth datasets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
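The correlation layer mentioned above compares the feature vector at each location of the first feature map with feature vectors in a neighborhood of the second. A hedged, vectorized sketch of that idea follows; the neighborhood radius and the normalization are illustrative choices rather than FlowNet's exact hyper-parameters.

```python
import torch
import torch.nn.functional as F

def correlation_layer(f1, f2, max_disp=4):
    """Dot-product correlation of each f1 feature vector with a neighborhood of f2."""
    B, C, H, W = f1.shape
    k = 2 * max_disp + 1
    # For every position, gather the k x k neighborhood of f2 feature vectors.
    f2_patches = F.unfold(f2, kernel_size=k, padding=max_disp)    # (B, C*k*k, H*W)
    f2_patches = f2_patches.view(B, C, k * k, H * W)
    f1_flat = f1.view(B, C, 1, H * W)
    corr = (f1_flat * f2_patches).sum(dim=1) / C                  # (B, k*k, H*W)
    return corr.view(B, k * k, H, W)
```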
The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.
Optical flow estimation is an important yet challenging problem in video analysis. Features from different semantic levels/layers of a convolutional neural network provide information at different granularities. To exploit such flexible and comprehensive information, we propose a semi-supervised Feature Pyramidal Correlation and Residual Reconstruction Network (FPCR-Net) for optical flow estimation from frame pairs. It consists of two main modules: pyramid correlation mapping and residual reconstruction. The pyramid correlation mapping module exploits the multi-scale correlations of global/local patches by aggregating features at different scales to form a multi-level cost volume. The residual reconstruction module aims to reconstruct the sub-band high-frequency residuals of finer optical flow at each stage. Based on the pyramid correlation mapping, we further propose a correlation-warping-normalization (CWN) module to efficiently exploit the correlation dependency. Experimental results show that the proposed scheme achieves state-of-the-art performance in terms of average end-point error (AEE) against competing baseline methods, with improvements of 0.80, 1.15 and 0.10 over FlowNet2, LiteFlowNet and PWC-Net on the final pass of the Sintel dataset.
How important are training details and datasets to recent optical flow models like RAFT? And do they generalize? To explore these questions, rather than developing a new model, we revisit three prominent models, PWC-Net, IRR-PWC and RAFT, with a common set of modern training techniques and datasets, and observe significant performance gains, demonstrating the importance and generality of these training details. Our newly trained PWC-Net and IRR-PWC models show surprisingly large improvements, up to 30% over the originally published results on the Sintel and KITTI 2015 benchmarks. They outperform the more recent Flow1D on KITTI 2015 while being 3x faster during inference. Our newly trained RAFT achieves 4.31% on KITTI 2015, more accurate than all published optical flow methods at the time of writing. Our results demonstrate the benefit of separating the contributions of models, training techniques and datasets when analyzing the performance gains of optical flow methods. Our source code will be publicly available.
We present a unified formulation and model for three motion and 3D perception tasks: optical flow, rectified stereo matching and unrectified stereo depth estimation from posed images. Unlike previous specialized architectures for each specific task, we formulate all three tasks as a unified dense correspondence matching problem, which can be solved with a single model by directly comparing feature similarities. Such a formulation calls for discriminative feature representations, which we achieve using a Transformer, in particular the cross-attention mechanism. We demonstrate that cross-attention enables integration of knowledge from another image via cross-view interactions, which greatly improves the quality of the extracted features. Our unified model naturally enables cross-task transfer since the model architecture and parameters are shared across tasks. We outperform RAFT with our unified model on the challenging Sintel dataset, and our final model that uses a few additional task-specific refinement steps outperforms or compares favorably to recent state-of-the-art methods on 10 popular flow, stereo and depth datasets, while being simpler and more efficient in terms of model design and inference speed.
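A minimal single-head cross-attention sketch illustrates how tokens from one view can aggregate information from the other view, which is the mechanism the abstract credits for better features. The head count, dimensions, and residual form are illustrative assumptions, not the paper's exact Transformer.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Single-head cross-attention: queries from view 1, keys/values from view 2."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, feat1, feat2):
        # feat1, feat2: (B, N, C) tokens, e.g. flattened H*W feature vectors of the two views.
        q, k, v = self.q(feat1), self.k(feat2), self.v(feat2)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)   # (B, N, N)
        return feat1 + attn @ v   # residual: view-1 tokens enriched with view-2 information
```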
We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance. On KITTI, RAFT achieves an F1-all error of 5.10%, a 16% error reduction from the best published result (6.10%). On Sintel (final pass), RAFT obtains an end-point-error of 2.855 pixels, a 30% error reduction from the best published result (4.098 pixels). In addition, RAFT has strong cross-dataset generalization as well as high efficiency in inference time, training speed, and parameter count. Code is available at https://github.com/princeton-vl/RAFT.
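A sketch of the all-pairs correlation idea: a single matrix product between per-pixel features of the two frames yields a 4D volume relating every pixel of frame 1 to every pixel of frame 2, and pooling over the frame-2 dimensions produces the multi-scale pyramid. Shapes, the normalization, and the pooling choice are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def all_pairs_correlation_pyramid(fmap1, fmap2, num_levels=4):
    """All-pairs 4D correlation volume, pooled into a multi-scale pyramid."""
    B, C, H, W = fmap1.shape
    corr = torch.einsum("bcn,bcm->bnm",
                        fmap1.view(B, C, H * W),
                        fmap2.view(B, C, H * W)) / C ** 0.5        # (B, H*W, H*W)
    corr = corr.reshape(B * H * W, 1, H, W)                         # frame-2 dims act as an image
    pyramid = [corr]
    for _ in range(num_levels - 1):
        corr = F.avg_pool2d(corr, kernel_size=2, stride=2)          # coarser frame-2 resolution
        pyramid.append(corr)
    return pyramid
```

The recurrent update operator then repeatedly "looks up" small windows of this pyramid around the locations indicated by the current flow estimate, which is what the abstract's lookups refer to.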
Optical flow estimation is a classical yet challenging task in computer vision. One of the essential factors in accurately predicting optical flow is to alleviate occlusions between frames. However, it is still a thorny problem for current top-performing optical flow estimation methods due to insufficient local evidence to model occluded areas. In this paper, we propose the Super Kernel Flow Network (SKFlow), a CNN architecture to ameliorate the impacts of occlusions on optical flow estimation. SKFlow benefits from the super kernels which bring enlarged receptive fields to complement the absent matching information and recover the occluded motions. We present efficient super kernel designs by utilizing conical connections and hybrid depth-wise convolutions. Extensive experiments demonstrate the effectiveness of SKFlow on multiple benchmarks, especially in the occluded areas. Without pre-trained backbones on ImageNet and with a modest increase in computation, SKFlow achieves compelling performance and ranks $\textbf{1st}$ among currently published methods on the Sintel benchmark. On the challenging Sintel clean and final passes (test), SKFlow surpasses the best-published result in the unmatched areas ($7.96$ and $12.50$) by $9.09\%$ and $7.92\%$. The code is available at \href{https://github.com/littlespray/SKFlow}{https://github.com/littlespray/SKFlow}.
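As a rough illustration of the enlarged-receptive-field idea, the block below applies a large depth-wise convolution followed by a point-wise convolution. The kernel size, activation, and residual form are assumptions for illustration; the paper's actual super kernel design, with conical connections and hybrid depth-wise convolutions, differs in its details.

```python
import torch.nn as nn

class LargeKernelBlock(nn.Module):
    """Depth-wise large-kernel conv + point-wise conv, a rough stand-in for a 'super kernel'."""
    def __init__(self, channels, kernel_size=15):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)  # depth-wise, large receptive field
        self.pw = nn.Conv2d(channels, channels, 1)                      # point-wise channel mixing
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.pw(self.act(self.dw(x)))  # residual connection keeps optimization stable
```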
Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-network-based methods rely on image brightness constancy and local smoothness constraints to train the networks, and their performance degrades in regions with repetitive textures or occlusions. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method that incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate the "chicken-and-egg" type of problem encountered in dynamic scenes where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmark datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep learning methods.
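To make the epipolar constraint concrete: a pixel x1 in frame 1 and its flow-displaced position x2 = x1 + flow in frame 2 should satisfy x2ᵀ F x1 ≈ 0 for the fundamental matrix F. A hedged sketch of such a penalty as an unsupervised training term follows; the robust losses and the low-rank/union-of-subspaces formulations from the paper are not reproduced here.

```python
import torch

def epipolar_loss(flow, fundamental):
    """Mean |x2^T F x1| over all pixels, where x2 = x1 + flow (homogeneous pixel coords).

    flow:        (B, 2, H, W) optical flow from frame 1 to frame 2.
    fundamental: (B, 3, 3) fundamental matrix relating the two frames.
    """
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=flow.device),
                            torch.arange(W, device=flow.device), indexing="ij")
    ones = torch.ones(H, W, device=flow.device)
    x1 = torch.stack((xs.float(), ys.float(), ones), dim=0)              # (3, H, W)
    x1 = x1.view(1, 3, -1).expand(B, -1, -1)                              # (B, 3, H*W)
    # Displace the first two (pixel) coordinates by the flow to get the putative match x2.
    x2 = torch.cat((x1[:, :2] + flow.reshape(B, 2, -1), x1[:, 2:]), dim=1)
    residual = (x2 * torch.matmul(fundamental, x1)).sum(dim=1)            # x2^T F x1 per pixel
    return residual.abs().mean()
```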
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
Recent works have shown that optical flow can be learned by deep networks from unlabelled image pairs based on the brightness constancy assumption and a smoothness prior. Current approaches additionally impose an augmentation regularization term for continual self-supervision, which has proved effective on difficult matching regions. However, this also amplifies the inevitable mismatches of the unsupervised setting, blocking the learning process from reaching an optimal solution. To break the dilemma, we propose a novel mutual distillation framework that transfers reliable knowledge back and forth between the teacher and student networks for alternate improvement. Concretely, taking the estimates of an off-the-shelf unsupervised approach as pseudo labels, our insight lies in defining a confidence selection mechanism to extract relatively good matches, and then adding diverse data augmentation to distill adequate and reliable knowledge from teacher to student. Thanks to the decoupled nature of our method, we can choose a stronger student architecture for sufficient learning. Finally, the better student prediction is adopted to transfer knowledge back to the efficient teacher without additional costs in real deployment. Rather than formulating this as a supervised task, we find that introducing an extra unsupervised term for multi-target learning achieves the best final results. Extensive experiments show that our approach, termed MDFlow, achieves state-of-the-art real-time accuracy and generalization ability on challenging benchmarks. Code is available at https://github.com/ltkong218/MDFlow.
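One common proxy for such a confidence selection is forward-backward consistency: teacher predictions are kept as pseudo labels only where the forward and backward flows roughly cancel. The sketch below uses that proxy with assumed thresholds purely for illustration; the paper defines its own selection mechanism.

```python
import torch

def consistency_mask(flow_fw, flow_bw, warp_fn, alpha=0.05, beta=0.5):
    """Mask of pixels where forward and backward flow are mutually consistent.

    flow_fw, flow_bw: (B, 2, H, W) flows. warp_fn backward-warps a tensor by a flow.
    Returns a (B, 1, H, W) float mask usable to gate a pseudo-label loss.
    """
    flow_bw_warped = warp_fn(flow_bw, flow_fw)            # backward flow sampled at forward targets
    diff = flow_fw + flow_bw_warped                       # ~0 where the two flows agree
    mag = (flow_fw ** 2 + flow_bw_warped ** 2).sum(dim=1, keepdim=True)
    thresh = alpha * mag + beta                           # magnitude-dependent tolerance (assumed)
    return ((diff ** 2).sum(dim=1, keepdim=True) < thresh).float()
```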
Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
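The two arithmetic steps the abstract outlines can be sketched as follows: the intermediate-time bidirectional flows are approximated as linear combinations of the flows between the inputs, and the two warped frames are fused with time- and visibility-dependent weights. The coefficients follow the commonly stated formulation of this approach; treat the snippet as an illustrative summary, not the released implementation.

```python
import torch

def approx_intermediate_flows(flow01, flow10, t):
    """Linearly combine the two input flows to approximate flows from intermediate time t."""
    flow_t0 = -(1 - t) * t * flow01 + t * t * flow10              # F_{t->0}
    flow_t1 = (1 - t) * (1 - t) * flow01 - t * (1 - t) * flow10   # F_{t->1}
    return flow_t0, flow_t1

def fuse_frames(img0_warped, img1_warped, vis0, vis1, t, eps=1e-8):
    """Visibility-weighted blend of the two warped inputs into the frame at time t."""
    w0 = (1 - t) * vis0                                            # occluded pixels get ~0 weight
    w1 = t * vis1
    return (w0 * img0_warped + w1 * img1_warped) / (w0 + w1 + eps)
```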
Learning-based optical flow estimation has been dominated by the pipeline of a cost volume with convolutions for flow regression, which is inherently limited to local correlations and therefore struggles with the long-standing challenge of large displacements. To alleviate this, the state-of-the-art method, RAFT, gradually improves the quality of its predictions through a large number of iterative refinements, achieving remarkable performance but slowing down inference. To enable both high-accuracy and efficient optical flow estimation, we completely revamp the dominant flow regression pipeline by reformulating optical flow as a global matching problem, which identifies correspondences by directly comparing feature similarities. Specifically, we propose a GMFlow framework that consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation. We further introduce a refinement step that reuses GMFlow at higher resolution for residual flow prediction. Our new framework outperforms the 32-iteration RAFT on the challenging Sintel benchmark while using only one refinement and running faster, opening new possibilities for efficient and accurate optical flow estimation. Code will be available at https://github.com/haofeixu/gmflow.
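The "correlation and softmax" matching step can be summarized as: compute similarities between all positions of the two feature maps, normalize each row with a softmax, take the expected 2D coordinate in the second frame, and subtract the pixel's own coordinate to obtain flow. The shapes and temperature below are illustrative assumptions.

```python
import torch

def global_matching_flow(feat1, feat2, temperature=1.0):
    """Global matching: softmax over all-pairs similarity, then expected target coordinates."""
    B, C, H, W = feat1.shape
    f1 = feat1.view(B, C, -1).transpose(1, 2)                      # (B, N, C), N = H*W
    f2 = feat2.view(B, C, -1)                                      # (B, C, N)
    sim = torch.bmm(f1, f2) / (C ** 0.5 * temperature)             # (B, N, N) similarities
    prob = torch.softmax(sim, dim=-1)                              # per-pixel matching distribution
    ys, xs = torch.meshgrid(torch.arange(H, device=feat1.device),
                            torch.arange(W, device=feat1.device), indexing="ij")
    coords = torch.stack((xs, ys), dim=0).float().view(1, 2, -1)   # (1, 2, N) pixel coordinates
    expected = torch.matmul(prob, coords.transpose(1, 2))          # (B, N, 2) expected matches
    return expected.transpose(1, 2).reshape(B, 2, H, W) - coords.view(1, 2, H, W)
```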
Recently, neural networks for scene flow estimation have shown impressive results on automotive data such as the KITTI benchmark. However, despite using sophisticated rigidity assumptions and parametrizations, such networks are typically limited to two frame pairs, which does not allow them to exploit temporal information. In our paper we address this shortcoming by proposing a novel multi-frame approach that additionally considers the preceding stereo pair. To this end, we take two steps: first, building on the recent RAFT-3D approach, we develop an advanced two-frame baseline by incorporating an improved stereo method. Second, and even more importantly, exploiting the specific modeling concepts of RAFT-3D, we propose a U-Net-like architecture that performs a fusion of forward and backward flow estimates and hence allows the integration of temporal information on demand. Experiments on the KITTI benchmark not only show that the advantages of the improved baseline and the temporal fusion approach complement each other, but also demonstrate that the computed scene flow is highly accurate. More precisely, our approach ranks second overall and, for the more challenging foreground objects, outperforms the original RAFT-3D method by more than 16% in total. Code is available at https://github.com/cv-stuttgart/m-fuse.
Recent work has shown that depth estimation from a stereo pair of images can be formulated as a supervised learning task to be resolved with convolutional neural networks (CNNs). However, current architectures rely on patch-based Siamese networks, lacking the means to exploit context information for finding correspondence in ill-posed regions. To tackle this problem, we propose PSMNet, a pyramid stereo matching network consisting of two main modules: spatial pyramid pooling and 3D CNN. The spatial pyramid pooling module takes advantage of the capacity of global context information by aggregating context in different scales and locations to form a cost volume. The 3D CNN learns to regularize the cost volume using stacked multiple hourglass networks in conjunction with intermediate supervision. The proposed approach was evaluated on several benchmark datasets. Our method ranked first in the KITTI 2012 and 2015 leaderboards before March 18, 2018. The code of PSMNet is available at: https://github.com/JiaRenChang/PSMNet.
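To illustrate the kind of cost volume a 3D CNN can regularize, the sketch below uses a common concatenation-based construction: for each candidate disparity, left features are concatenated with right features shifted by that disparity, giving a 5D tensor (batch, features, disparity, height, width). The maximum disparity and the zero-filling of unmatched columns are assumptions for illustration.

```python
import torch

def concat_cost_volume(feat_left, feat_right, max_disp=48):
    """Concatenation-based stereo cost volume of shape (B, 2C, max_disp, H, W)."""
    B, C, H, W = feat_left.shape
    volume = feat_left.new_zeros(B, 2 * C, max_disp, H, W)
    for d in range(max_disp):
        if d == 0:
            volume[:, :C, d] = feat_left
            volume[:, C:, d] = feat_right
        else:
            # Right features shifted by disparity d; columns without a match stay zero.
            volume[:, :C, d, :, d:] = feat_left[:, :, :, d:]
            volume[:, C:, d, :, d:] = feat_right[:, :, :, :-d]
    return volume
```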
In this paper, we propose USegScene, a framework for the unsupervised learning of depth, optical flow and ego-motion from stereo camera images using convolutional neural networks. Our framework leverages semantic information to improve the regularization of depth and optical flow maps, multi-modal fusion, and occlusion filling, considering dynamic rigid object motions as independent SE(3) transformations. Furthermore, complementary to pure photometric matching, we propose matching of semantic features, pixel-wise classes and object instance boundaries between consecutive images. In contrast to previous methods, we propose a network architecture that jointly predicts all outputs using a shared encoder and allows passing information across task domains; for example, the prediction of optical flow can benefit from the prediction of depth. Moreover, we explicitly learn depth and optical flow occlusion maps inside the network, which are exploited to improve the predictions in those regions. We present results on the popular KITTI dataset and show that our approach outperforms other methods by a large margin.
In this paper, we study the problem of jointly estimating optical flow and scene flow from synchronized 2D and 3D data. Previous methods either employ a complex pipeline that splits the joint task into independent stages, or fuse 2D and 3D information in an "early-fusion" or "late-fusion" manner. Such one-size-fits-all approaches suffer from the dilemma of either failing to fully exploit the characteristics of each modality or failing to maximize the complementarity between modalities. To address this problem, we propose a novel end-to-end framework called CamLiFlow. It consists of 2D and 3D branches with multiple bidirectional connections between them at specific layers. Different from previous work, we apply a point-based 3D branch to better extract geometric features and design a symmetric learnable operator to fuse dense image features and sparse point features. We also propose a transformation to address the non-linearity of the 3D-2D projection. Experiments show that CamLiFlow achieves better performance with fewer parameters. Our method ranks first on the KITTI Scene Flow benchmark, outperforming the previous art with 1/7 of the parameters. Code will be available.
Existing optical flow estimators usually adopt network architectures commonly used for image classification as the encoder for extracting per-pixel features. However, due to the natural differences between the tasks, architectures designed for image classification may be sub-optimal for flow estimation. To deal with this issue, we propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task. We first design a suitable search space including various convolutional operators and construct a weight-sharing super-network for efficiently evaluating the candidate architectures. Then, to better train the super-network, we propose Feature Alignment Distillation, which utilizes a well-trained flow estimator to guide the training of the super-network. Finally, a resource-constrained evolutionary algorithm is exploited to find the optimal architecture (i.e., sub-network). Experimental results show that the discovered architecture, with the weights inherited from the super-network, achieves 4.67% F1-all error on KITTI, an 8.4% reduction from the RAFT baseline, surpassing the state-of-the-art handcrafted models GMA and AGFlow while reducing model complexity and latency. The source code and trained models will be released at https://github.com/vdigpku/flownas.
We introduce the Optical Flow Transformer, dubbed FlowFormer, a transformer-based neural network architecture for learning optical flow. FlowFormer tokenizes the 4D cost volume built from an image pair, encodes the cost tokens into a cost memory with alternate-group transformer (AGT) layers in a novel latent space, and decodes the cost memory via a recurrent transformer decoder with dynamic positional cost queries. On the Sintel benchmark, FlowFormer achieves 1.144 and 2.183 average end-point error (AEPE) on the clean and final passes, a 17.6% and 11.6% error reduction from the best published results (1.388 and 2.47). Moreover, FlowFormer also achieves strong generalization performance. Without being trained on Sintel, FlowFormer achieves 0.95 AEPE on the clean pass of the Sintel training set, outperforming the best published result (1.29) by 26.9%.