Obtaining ground-truth labels from videos is challenging, since manual annotation of pixel-wise flow labels is prohibitively expensive and laborious. Moreover, existing approaches try to adapt models trained on synthetic datasets to authentic videos, which inevitably suffers from the domain gap and hinders performance in real-world applications. To solve these problems, we propose RealFlow, an Expectation-Maximization based framework that can create large-scale optical flow datasets directly from any unlabeled realistic videos. Specifically, we first estimate the optical flow between a pair of video frames, and then synthesize a new image from this pair based on the predicted flow. The new image pairs and their corresponding flows can thus be regarded as a new training set. In addition, we design a Realistic Image Pair Rendering (RIPR) module that adopts softmax splatting and bi-directional hole-filling techniques to alleviate the artifacts of image synthesis. In the E-step, RIPR renders new images to create a large quantity of training data. In the M-step, we utilize the generated training data to train an optical flow network, which can be used to estimate optical flow in the next E-step. During the iterative learning steps, the capability of the flow network is gradually improved, and so are the accuracy of the flow and the quality of the synthesized dataset. Experimental results show that RealFlow outperforms previous dataset generation methods. Moreover, based on the generated dataset, our method achieves state-of-the-art performance on two standard benchmarks compared with both supervised and unsupervised optical flow methods. Our code and dataset are available at https://github.com/megvii-research/realflow
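The E-step/M-step loop described above can be summarized in a few lines. The sketch below is illustrative only, not the authors' implementation; all callables (estimate_flow, render_pair, train) are user-supplied placeholders.

```python
# Minimal sketch of a RealFlow-style EM loop, under the assumptions stated above.

def realflow_em(frame_pairs, flow_net, estimate_flow, render_pair, train, rounds=3):
    """frame_pairs: list of (frame1, frame2) tuples from unlabeled videos."""
    for _ in range(rounds):
        # E-step: use the current network to label and re-render image pairs.
        training_set = []
        for f1, f2 in frame_pairs:
            flow = estimate_flow(flow_net, f1, f2)   # predicted flow for the pair
            f2_new = render_pair(f1, f2, flow)       # RIPR-style synthesis of a new second frame
            training_set.append((f1, f2_new, flow))  # the flow is exact GT for (f1, f2_new)
        # M-step: retrain the flow network on the generated dataset.
        flow_net = train(flow_net, training_set)
    return flow_net
```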
Depth and ego-motion estimation are essential for the localization and navigation of autonomous robots and autonomous driving. Recent studies make it possible to learn per-pixel depth and ego-motion from unlabeled monocular videos. A novel unsupervised training framework is proposed, using explicit 3D geometry for hierarchical 3D refinement and augmentation. In this framework, depth and pose estimation are hierarchically and mutually coupled to refine the estimated pose layer by layer. Intermediate-view images are proposed and synthesized by warping the pixels of an image with the estimated depth and the coarse pose. Then, the residual transformation between the new-view image and the image of the adjacent frame can be estimated to refine the coarse pose. The iterative refinement is implemented in a differentiable manner, making the whole framework uniformly optimized. Meanwhile, a new image augmentation method is proposed for pose estimation by synthesizing new-view images, which creatively augments poses in 3D space and then obtains new augmented 2D images. Experiments on KITTI show that our depth estimation achieves state-of-the-art performance and even surpasses recent methods that leverage auxiliary tasks. Our visual odometry outperforms all recent unsupervised monocular learning-based methods and achieves competitive performance with the geometry-based method ORB-SLAM2 with back-end optimization.
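The view synthesis that underlies this kind of training follows the standard reprojection relation from unsupervised depth and ego-motion learning; the formula below is that generic relation, not necessarily this paper's exact notation.

```latex
% Generic view-synthesis reprojection (standard formulation, assumed here):
% a pixel p_t in the target view is mapped into the source view using the
% predicted depth D_t, the predicted relative pose T_{t->s}, and intrinsics K;
% sampling the source image at p_s yields the synthesized view compared to I_t.
\[
  p_s \sim K \, T_{t \to s} \, D_t(p_t) \, K^{-1} \, p_t
\]
```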
Recently, AutoFlow has shown promising results on learning a training set for optical flow, but requires ground truth labels in the target domain to compute its search metric. Observing a strong correlation between the ground truth search metric and self-supervised losses, we introduce self-supervised AutoFlow to handle real-world videos without ground truth labels. Using self-supervised loss as the search metric, our self-supervised AutoFlow performs on par with AutoFlow on Sintel and KITTI where ground truth is available, and performs better on the real-world DAVIS dataset. We further explore using self-supervised AutoFlow in the (semi-)supervised setting and obtain competitive results against the state of the art.
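The self-supervised loss used as the search metric is not spelled out in the abstract; a common choice in unsupervised optical flow is a photometric term plus a smoothness term. The sketch below shows a metric of that assumed form, purely as an illustration rather than the paper's exact formulation.

```python
import numpy as np

def self_supervised_metric(img1, img2_warped, flow, lam=0.1):
    """Illustrative search metric of an assumed form (not the paper's):
    photometric difference between img1 and the flow-warped second image,
    plus a first-order smoothness penalty on the flow field (HxWx2)."""
    photometric = np.abs(img1 - img2_warped).mean()
    du = np.abs(np.diff(flow, axis=0)).mean()  # flow gradients along y
    dv = np.abs(np.diff(flow, axis=1)).mean()  # flow gradients along x
    return photometric + lam * (du + dv)
```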
Current computer vision tasks based on deep learning require huge amounts of data with annotations for model training or testing, especially in some dense estimation tasks such as optical flow, segmentation, and depth estimation. In practice, manual labeling for dense estimation tasks is very difficult or even impossible, and the scenes in the datasets are often restricted to a narrow range, which greatly limits the development of the community. To overcome this deficiency, we propose a synthetic dataset generation method to obtain scalable datasets without heavy manual labor. With this method, we construct a dataset called MineNavi, which contains first-person video footage from aircraft matched with accurate ground truth for depth estimation in aircraft navigation applications. We also provide quantitative experiments to show that pre-training on the MineNavi dataset can improve the performance of depth estimation models and accelerate the convergence of models on real-scene data. Since synthetic datasets play a role similar to real-world datasets in the training of deep models, we additionally provide experiments with monocular depth estimation methods to demonstrate the impact of various factors in our dataset, such as lighting conditions and motion patterns.
Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-network based methods rely on image brightness constancy and local smoothness constraints to train the networks. Their performance degrades in regions with repetitive textures or occlusions. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method that incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem encountered in dynamic scenes, where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmark datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep-learning methods.
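For reference, the epipolar constraint mentioned above is the classical two-view relation; applied to a flow field it takes roughly the form below (the standard formulation, not necessarily this paper's exact loss).

```latex
% Epipolar constraint for a pixel p = (x, y, 1)^T with flow (u, v) and
% fundamental matrix F (standard two-view relation, quoted for reference):
\[
  p'^{\top} F \, p = 0 , \qquad p' = (x + u,\; y + v,\; 1)^{\top}
\]
% The residual |p'^{\top} F p| can then act as a per-pixel geometric penalty
% during unsupervised training.
```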
Photometric differences are widely used as supervision signals to train neural networks for estimating depth and camera pose from unlabeled monocular videos. However, this approach is detrimental to model optimization because occlusions and moving objects in a scene violate the underlying static-scene assumption. In addition, pixels in textureless regions or less discriminative pixels hinder model training. To solve these problems, we first deal with moving objects and occlusions by utilizing the differences of the flow fields and depth structures generated by affine transformation and view synthesis, respectively. Second, we mitigate the effect of textureless regions on model optimization by measuring differences between features with more semantic and contextual information, without adding networks. In addition, although the bidirectionality component is used in each sub-objective function, a pair of images is reasoned about only once, which helps reduce overhead. Extensive experiments and visual analysis demonstrate the effectiveness of the proposed method, which outperforms existing state-of-the-art self-supervised methods under the same conditions and without introducing additional auxiliary information.
The challenge of graphically rendering high frame-rate videos on low-compute devices can be addressed by periodically predicting future frames to enhance the user experience in virtual reality applications. This is studied through the problem of temporal view synthesis (TVS), where the goal is to predict the next frames of a video given the previous frames and the head poses of the previous and next frames. In this work, we consider the TVS of dynamic scenes in which both the user and objects are moving. We design a framework that decouples the motion into user motion and object motion, so as to effectively use the available user motion while predicting the next frame. We predict the motion of objects by isolating and estimating the 3D object motion in the past frames and then extrapolating it. We employ multi-plane images (MPI) as a 3D representation of the scene and model object motion as the 3D displacement between corresponding points in the MPI representation. To handle the sparsity in the MPIs while estimating the motion, we incorporate partial convolutions and masked correlation layers to estimate corresponding points. The predicted object motion is then integrated with the given user or camera motion to generate the next frame. Using a disocclusion infilling module, we synthesize the regions uncovered due to the camera and object motion. We develop a new synthetic dataset for TVS of dynamic scenes consisting of 800 videos at full-HD resolution. We show through experiments on our dataset and the MPI Sintel dataset that our model outperforms all competing methods in the literature.
Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.
Single-view depth estimation using CNNs trained from unlabeled videos has shown significant promise. However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices. In this work, we establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth. Our fundamental analysis suggests that the rotation behaves as noise during training, in contrast to the translation (baseline), which provides supervision signals. To address the challenge, we propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning. The significantly improved performance validates our motivation. Toward end-to-end learning without requiring pre-processing, we further propose an Auto-Rectify Network with novel loss functions that can automatically learn to rectify images during training. Consequently, our results outperform the previous unsupervised SOTA methods by a large margin on the challenging NYUv2 dataset. We also demonstrate the generalization of our trained model to ScanNet and Make3D, and the universality of our proposed learning method on the 7-Scenes and KITTI datasets.
Image dehazing is one of the important and popular topics in computer vision and machine learning. A real-time dehazing method with reliable performance is highly desired for many applications such as autonomous driving and security surveillance. While recent learning-based methods require datasets containing pairs of hazy images and clean ground truth, it is impossible to capture such pairs in real scenes. Many existing works sidestep this difficulty by generating hazy images, rendering the haze from depth on common RGB-D datasets using the haze imaging model. However, there is still a gap between these synthetic datasets and real hazy images, since large datasets with high-quality depth are mostly indoor and outdoor depth maps are imprecise. In this paper, we complement the existing datasets with a new, large, and diverse dehazing dataset containing real outdoor scenes from High-Definition (HD) 3D movies. We select a large number of high-quality frames of real outdoor scenes and render haze on them using depth from stereo. Our dataset is clearly more realistic and more diversified, with better visual quality, than existing ones. More importantly, we demonstrate that using this dataset greatly improves dehazing performance on real scenes. In addition to the dataset, we also evaluate a series of state-of-the-art methods on the proposed benchmark dataset.
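The haze imaging model referenced above is the standard atmospheric scattering model; the snippet below renders haze on a clean image from a depth map under that model. The parameter values are illustrative assumptions, not the paper's rendering settings.

```python
import numpy as np

def render_haze(clean_rgb, depth, beta=1.0, airlight=(0.8, 0.8, 0.8)):
    """Render synthetic haze with the atmospheric scattering model:
        I(x) = J(x) * t(x) + A * (1 - t(x)),  where  t(x) = exp(-beta * d(x)).
    clean_rgb: HxWx3 float array in [0, 1]; depth: HxW array (e.g. in meters).
    beta and airlight are illustrative defaults, not the paper's settings."""
    t = np.exp(-beta * depth)[..., None]        # transmission map, HxWx1
    A = np.asarray(airlight, dtype=np.float64)  # global atmospheric light
    return clean_rgb * t + A * (1.0 - t)
```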
Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth datasets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
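The correlation layer mentioned above compares feature vectors of the two images over a local range of displacements; the following is a minimal NumPy sketch of that operation (an illustration, not FlowNet's optimized implementation).

```python
import numpy as np

def correlation(feat1, feat2, max_disp=3):
    """Correlate each feature vector of feat1 (HxWxC) with the feature vectors
    of feat2 at all integer displacements within +/- max_disp, producing an
    HxWxD cost volume with D = (2*max_disp + 1)**2. Minimal sketch only."""
    H, W, _ = feat1.shape
    volume = []
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            shifted = np.zeros_like(feat2)
            # shifted[y, x] = feat2[y + dy, x + dx] wherever that index is valid
            ys = slice(max(dy, 0), H + min(dy, 0))
            xs = slice(max(dx, 0), W + min(dx, 0))
            yd = slice(max(-dy, 0), H + min(-dy, 0))
            xd = slice(max(-dx, 0), W + min(-dx, 0))
            shifted[yd, xd] = feat2[ys, xs]
            volume.append((feat1 * shifted).sum(axis=2))  # per-pixel dot product
    return np.stack(volume, axis=2)
```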
Image stitching aims to stitch images taken from different viewpoints into an image with a wider field of view. Existing methods warp the target image to the reference image using an estimated warping function, and a homography is one of the most commonly used warping functions. However, when images have large parallax due to a non-planar scene and translational motion of the camera, a homography cannot fully describe the mapping between the two images. Existing methods based on global or local homography estimation are not free from this problem and suffer from undesired artifacts due to parallax. In this paper, instead of relying on homography-based warping, we propose a novel deep image stitching framework that exploits pixel-wise warping to handle the large-parallax problem. The proposed framework consists of two modules: a pixel-wise warping module (PWM) and a stitched image generating module (SIGMO). PWM employs an optical flow estimation model to obtain a pixel-wise warp field of the whole image and relocates the pixels of the target image with the obtained field. SIGMO blends the warped target image and the reference image while removing unwanted artifacts, such as misalignments, seams, and holes, that harm the plausibility of the stitching result. To train and evaluate the proposed framework, we construct a large-scale dataset that includes image pairs with corresponding pixel-wise ground-truth warps and sample stitched result images. We show that the results of the proposed framework are qualitatively superior to those of conventional methods, especially when the images have large parallax. Code and the proposed dataset will be released publicly soon.
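As a rough illustration of what pixel-wise warping means in contrast to a single homography, the snippet below backward-warps a target image with a dense flow field. It is a generic remap, not the paper's PWM; the function name and flow convention are assumptions for illustration.

```python
import numpy as np
import cv2

def warp_with_flow(target, flow):
    """Backward-warp `target` (HxWx3) with a dense flow field `flow` (HxWx2),
    where flow[y, x] = (dx, dy) points from a reference pixel to its
    correspondence in the target image. Generic remap, not the paper's PWM."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(target, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```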
The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.
Video prediction is an extrapolation task that predicts future frames given past frames, while video frame interpolation is an interpolation task that estimates intermediate frames between two frames. We have witnessed great progress in video frame interpolation, but general video prediction in the wild remains an open problem. Inspired by the photorealistic results of video frame interpolation, we present a new optimization framework for video prediction via video frame interpolation, in which we solve an extrapolation problem based on an interpolation model. Our video prediction framework is optimization-based and requires no training dataset, so there is no domain-gap issue between training data and test data. Moreover, our method does not require any additional information such as semantic or instance maps, which makes our framework applicable to any video. Extensive experiments on the Cityscapes, KITTI, DAVIS, Middlebury, and Vimeo90K datasets show that our video prediction results are robust in general scenarios and that our method outperforms other video prediction methods that require a large amount of training data or extra semantic information.
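Although the abstract does not detail the optimization, the idea of solving an extrapolation problem with an interpolation model can be sketched as an inference-time search: find a future frame such that interpolating it with an earlier frame reproduces the latest observed frame. The code below is an assumed, simplified version of that idea (interp_model is a user-supplied, frozen, differentiable interpolation network), not the authors' exact procedure.

```python
import torch

def predict_next_frame(interp_model, frame_prev, frame_cur, steps=100, lr=0.01):
    """Optimize a candidate future frame so that interpolating it with the
    previous frame (at the midpoint) reconstructs the current frame.
    Simplified, assumed setup; not the paper's exact optimization."""
    pred = frame_cur.clone().requires_grad_(True)   # initialize from the current frame
    optimizer = torch.optim.Adam([pred], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        midpoint = interp_model(frame_prev, pred)   # should reproduce frame_cur
        loss = torch.nn.functional.l1_loss(midpoint, frame_cur)
        loss.backward()
        optimizer.step()
    return pred.detach()
```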
Novel view synthesis (NVS) and video prediction (VP) are typically considered disjoint tasks in computer vision. However, they can both be seen as ways to observe the spatial-temporal world: NVS aims to synthesize a scene from a new point of view, while VP aims to see a scene from a new point in time. These two tasks provide complementary signals for obtaining a scene representation, as viewpoint changes from spatial observations inform depth, and temporal observations inform the motion of cameras and individual objects. Inspired by these observations, we propose to study the problem of Video Extrapolation in Space and Time (VEST). We propose a model that leverages the self-supervision and complementary cues from both tasks, while existing methods can only solve one of them. Experiments show that our method achieves better performance than several state-of-the-art NVS and VP methods on indoor and outdoor real-world datasets.
Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
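For reference, the "linear combination" of the bi-directional flows can be written out explicitly. The equations below follow the commonly used formulation for this approximation; they are quoted as background and should be checked against the paper itself.

```latex
% Approximate intermediate bi-directional flows at time t in (0, 1),
% in the commonly used form (quoted as background, verify against the paper):
\[
  \hat{F}_{t \to 0} = -(1 - t)\,t\,F_{0 \to 1} + t^{2}\,F_{1 \to 0}, \qquad
  \hat{F}_{t \to 1} = (1 - t)^{2}\,F_{0 \to 1} - t\,(1 - t)\,F_{1 \to 0}
\]
% The intermediate frame is then a visibility-weighted fusion of the two
% warped input frames, which suppresses contributions from occluded pixels.
```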
We propose a DNN-based framework, called the enhanced correlation matching based video frame interpolation network, to support high resolutions such as 4K, which involve large-scale motion and occlusions. Considering the extensibility of the network model with respect to resolution, the proposed scheme employs a recurrent pyramid architecture that shares parameters across the pyramid levels for optical flow estimation. In the proposed flow estimation, the optical flow is recursively refined by tracing the location with maximum correlation. The forward-warping based correlation matching improves the accuracy of the flow update by excluding incorrectly warped features around occluded regions. Based on the final bi-directional flows, an intermediate frame at an arbitrary temporal position is synthesized using a warping and blending network and is further improved by a refinement network. Experimental results show that the proposed scheme outperforms previous works on 4K video data and low-resolution benchmark datasets in terms of both objective and subjective quality, with the smallest number of model parameters.
Self-supervised monocular depth estimation has shown impressive results in static scenes. It relies on the multi-view consistency assumption for training networks, however, that is violated in dynamic object regions and occlusions. Consequently, existing methods show poor accuracy in dynamic scenes, and the estimated depth map is blurred at object boundaries because they are usually occluded in other training views. In this paper, we propose SC-DepthV3 for addressing the challenges. Specifically, we introduce an external pretrained monocular depth estimation model for generating single-image depth prior, namely pseudo-depth, based on which we propose novel losses to boost self-supervised training. As a result, our model can predict sharp and accurate depth maps, even when training from monocular videos of highly-dynamic scenes. We demonstrate the significantly superior performance of our method over previous methods on six challenging datasets, and we provide detailed ablation studies for the proposed terms. Source code and data will be released at https://github.com/JiawangBian/sc_depth_pl
Optical flow estimation is a basic task in self-driving and robotic systems, which enables a temporal interpretation of traffic scenes. Autonomous vehicles clearly benefit from the ultra-wide field of view (FoV) offered by 360° panoramic sensors. However, due to the unique imaging process of panoramic cameras, models designed for pinhole images do not generalize satisfactorily to 360° panoramic images. In this paper, we put forward a novel network framework, PanoFlow, to learn optical flow for panoramic images. To overcome the distortions introduced by the equirectangular projection in the panoramic transformation, we design a Flow Distortion Augmentation (FDA) method, which contains radial flow distortion (FDA-R) or equirectangular flow distortion (FDA-E). We further study the definition and properties of cyclic optical flow for panoramic videos, and propose a Cyclic Flow Estimation (CFE) method that leverages the cyclicity of spherical images to infer 360° optical flow and convert large displacements into relatively small displacements. PanoFlow is applicable to any existing flow estimation method and benefits from the progress of narrow-FoV flow estimation. In addition, we create and release a synthetic panoramic dataset, Flow360, based on CARLA to facilitate training and quantitative analysis. PanoFlow achieves state-of-the-art performance on the public OmniFlowNet benchmark and the established Flow360 benchmark. Our proposed approach reduces the End-Point-Error (EPE) on Flow360 by 27.3%. On OmniFlowNet, PanoFlow achieves an EPE of 3.17 pixels, a 55.5% error reduction from the best published result. We also qualitatively validate our method using our own collection equipment and a public real-world panoramic dataset, indicating strong potential and robustness for real-world navigation applications. Code and dataset are publicly available at https://github.com/masterhow/panoflow
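The core of the cyclic-flow idea is that an equirectangular image wraps around horizontally, so every horizontal displacement has an equivalent representative of smaller magnitude. The snippet below illustrates only that wrapping concept with assumed example numbers; it is not the paper's CFE algorithm.

```python
import numpy as np

def wrap_horizontal_flow(u, width):
    """Map a horizontal displacement u (in pixels) on an equirectangular image
    of the given width to its smallest-magnitude cyclic equivalent, i.e. into
    [-width/2, width/2). Conceptual illustration, not the paper's CFE."""
    return (u + width / 2.0) % width - width / 2.0

# Example: on a 2048-pixel-wide panorama, a displacement of 1900 px to the
# right is equivalent to 148 px to the left.
print(wrap_horizontal_flow(np.array([1900.0]), 2048))  # -> [-148.]
```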