Implicit neural representation (INR) has emerged as a powerful paradigm for representing signals such as images, videos, and 3D shapes. Although it has been shown to be capable of representing fine details, its efficiency as a data representation has not been extensively studied. In an INR, data are stored as the parameters of a neural network, and general-purpose optimization algorithms usually do not exploit the spatial and temporal redundancy in signals. In this paper, we propose a novel INR approach for representing and compressing videos by explicitly removing such data redundancy. Instead of storing raw RGB colors, we propose Neural Residual Flow Fields (NRFF), which use motion information across video frames together with residuals. Because motion information is typically smoother and less complex than the raw signal, maintaining it requires far fewer parameters. Furthermore, reusing redundant pixel values further improves the parameter efficiency of the network. Experimental results show that the proposed method outperforms the baseline methods by a significant margin. The code is available at https://github.com/daniel03c1/eff_video_representation.
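A minimal sketch (not the authors' code) of the core idea: the INR predicts a flow field and a residual rather than RGB values, and a frame is reconstructed by warping the previous frame with that flow and adding the residual. The names `inr`, `warp`, and the interface below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp(prev_frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp prev_frame (N,3,H,W) with a dense flow (N,2,H,W) in pixels."""
    n, _, h, w = prev_frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(prev_frame)   # (2,H,W) pixel coordinates
    coords = base.unsqueeze(0) + flow                            # sampling locations
    # normalize to [-1, 1] for grid_sample, (N,H,W,2) in (x, y) order
    grid = torch.stack((2.0 * coords[:, 0] / (w - 1) - 1.0,
                        2.0 * coords[:, 1] / (h - 1) - 1.0), dim=-1)
    return F.grid_sample(prev_frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def reconstruct_frame(inr, coords, t, prev_frame):
    """Hypothetical INR interface: returns a flow field (N,2,H,W) and a residual (N,3,H,W)."""
    flow, residual = inr(coords, t)
    return warp(prev_frame, flow) + residual   # reuse previous pixels, store only motion + residual
```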
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals. For video representations, however, mapping pixel-wise coordinates to RGB colors has shown relatively low compression performance and slow convergence and inference speed. Frame-wise video representation, which maps a temporal coordinate to its entire frame, has recently emerged as an alternative method to represent videos, improving compression rates and encoding speed. While promising, it has still failed to reach the performance of state-of-the-art video compression algorithms. In this work, we propose FFNeRV, a novel method for incorporating flow information into frame-wise representations to exploit the temporal redundancy across the frames in videos, inspired by standard video codecs. Furthermore, we introduce a fully convolutional architecture, enabled by one-dimensional temporal grids, improving the continuity of spatial features. Experimental results show that FFNeRV yields the best performance for video compression and frame interpolation among the methods using frame-wise representations or neural fields. To reduce the model size even further, we devise a more compact convolutional architecture using group and pointwise convolutions. With model compression techniques, including quantization-aware training and entropy coding, FFNeRV outperforms widely-used standard video codecs (H.264 and HEVC) and performs on par with state-of-the-art video compression algorithms.
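A sketch, under assumed shapes, of flow-guided frame aggregation in the spirit of the abstract: the network emits flows toward a few neighboring frames plus mixing weights, and the output frame is a weighted blend of the warped neighbors and an independently generated frame. It reuses the `warp` helper from the sketch above; everything else is illustrative.

```python
import torch

def aggregate(independent, neighbors, flows, weights):
    """independent: (N,3,H,W); neighbors: list of K frames (N,3,H,W);
    flows: list of K flows (N,2,H,W); weights: (N,K+1,H,W), softmaxed over dim=1."""
    warped = [warp(f, fl) for f, fl in zip(neighbors, flows)]
    stack = torch.stack([independent] + warped, dim=1)   # (N, K+1, 3, H, W)
    return (weights.unsqueeze(2) * stack).sum(dim=1)     # flow-guided blend of candidates
```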
Neural fields have emerged as a new paradigm for data representation and have shown remarkable success in representing various signals. Since they preserve signals in their network parameters, transmitting data by sending and receiving the entire set of model parameters prevents this emerging technology from being used in many practical scenarios. We propose streaming neural fields, a single model that consists of executable sub-networks of various widths. The proposed architecture and training techniques enable a single network to be streamed over time and to reconstruct different qualities and parts of the signal. For example, a smaller sub-network produces smooth, low-frequency signals, while a larger sub-network can represent fine details. Experimental results show the effectiveness of our method across various domains, such as 2D images, videos, and 3D signed distance functions. Finally, we demonstrate that our proposed method improves training stability by exploiting parameter sharing.
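A minimal sketch (assumptions, not the paper's code) of width-sliced sub-networks: the same weight matrices are evaluated using only the first `width` fraction of hidden units, so a prefix of the parameters forms a smaller, coarser model that can be transmitted and executed first.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlicedMLP(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3, layers=4):
        super().__init__()
        dims = [in_dim] + [hidden] * layers + [out_dim]
        self.linears = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, x, width=1.0):
        for i, lin in enumerate(self.linears):
            last = i == len(self.linears) - 1
            k_out = lin.out_features if last else int(lin.out_features * width)
            k_in = x.shape[-1]                         # matches the previous layer's sliced width
            x = F.linear(x, lin.weight[:k_out, :k_in], lin.bias[:k_out])
            if not last:
                x = torch.relu(x)
        return x

# width=0.25 yields a coarse sub-network; width=1.0 uses all parameters for full detail.
```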
We propose a method for compressing full-resolution video sequences with implicit neural representations. Each frame is represented as a neural network that maps coordinate positions to pixel values. We use a separate implicit network to modulate the coordinate inputs, which enables efficient motion compensation between frames. Together with a small residual network, this allows us to efficiently compress P-frames relative to the preceding frame. We further lower the bitrate by storing the network weights with learned integer quantization. Our method, which we call implicit pixel flow (IPF), offers several simplifications over established neural video codecs: it does not require the receiver to have access to a pretrained neural network, does not use expensive interpolation-based warping operations, and does not require a separate training dataset. We demonstrate the feasibility of neural implicit compression on image and video data.
We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.
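A minimal sketch of the voxel-flow idea under assumed conventions: the network predicts, per output pixel, a spatial displacement plus a temporal blend weight, and the new frame is formed purely by bilinearly sampling the two existing frames and mixing them, rather than hallucinating pixel values.

```python
import torch
import torch.nn.functional as F

def synthesize(frame0, frame1, flow, blend):
    """frame0/frame1: (N,3,H,W); flow: (N,2,H,W) pixel displacement;
    blend: (N,1,H,W) in [0,1], weighting frame0 against frame1."""
    n, _, h, w = frame0.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), 0).float().to(frame0).unsqueeze(0)

    def sample(frame, disp):
        coords = base + disp
        grid = torch.stack((2 * coords[:, 0] / (w - 1) - 1,
                            2 * coords[:, 1] / (h - 1) - 1), dim=-1)
        return F.grid_sample(frame, grid, mode="bilinear",
                             padding_mode="border", align_corners=True)

    # sample frame0 along -flow and frame1 along +flow, then blend in time
    return blend * sample(frame0, -flow) + (1 - blend) * sample(frame1, flow)
```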
Videos typically record streaming, continuous visual data as discrete consecutive frames. Since storage is costly for high-fidelity videos, most of them are stored at a relatively low resolution and frame rate. Recent works on space-time video super-resolution (STVSR) have been developed to incorporate temporal interpolation and spatial super-resolution into a unified framework. However, most of them only support a fixed up-sampling scale, which limits their flexibility and applications. In this work, instead of following discrete representations, we propose Video Implicit Neural Representation (VideoINR) and show its application to STVSR. The learned implicit neural representation can be decoded to videos of arbitrary spatial resolution and frame rate. We show that VideoINR achieves competitive performance with state-of-the-art STVSR methods on common up-sampling scales and significantly outperforms prior works on continuous and out-of-training-distribution scales. Our project page is at http://zeyuan-chen.com/videoinr/.
Given two consecutive frames, video interpolation aims at generating intermediate frame(s) to form both spatially and temporally coherent video sequences. While most existing methods focus on single-frame interpolation, we propose an end-to-end convolutional neural network for variable-length multi-frame video interpolation, where the motion interpretation and occlusion reasoning are jointly modeled. We start by computing bi-directional optical flow between the input images using a U-Net architecture. These flows are then linearly combined at each time step to approximate the intermediate bi-directional optical flows. These approximate flows, however, only work well in locally smooth regions and produce artifacts around motion boundaries. To address this shortcoming, we employ another U-Net to refine the approximated flow and also predict soft visibility maps. Finally, the two input images are warped and linearly fused to form each intermediate frame. By applying the visibility maps to the warped images before fusion, we exclude the contribution of occluded pixels to the interpolated intermediate frame to avoid artifacts. Since none of our learned network parameters are time-dependent, our approach is able to produce as many intermediate frames as needed. To train our network, we use 1,132 240-fps video clips, containing 300K individual video frames. Experimental results on several datasets, predicting different numbers of interpolated frames, demonstrate that our approach performs consistently better than existing methods.
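A sketch, in our own notation, of the two steps the abstract describes: (i) linearly combining the bi-directional flows at time t to approximate the intermediate flows (written here in the commonly cited Super SloMo form), and (ii) fusing the two warped inputs with predicted soft visibility maps. `warp` is a backward-warping helper like the one sketched earlier.

```python
def interpolate(i0, i1, f01, f10, v0, v1, t):
    """i0,i1: input frames; f01,f10: bi-directional flows; v0,v1: soft visibility maps; 0<t<1."""
    # approximate intermediate flows by a linear combination of the two input flows
    f_t0 = -(1 - t) * t * f01 + t * t * f10               # flow from time t back to frame 0
    f_t1 = (1 - t) * (1 - t) * f01 - t * (1 - t) * f10    # flow from time t to frame 1
    eps = 1e-8
    num = (1 - t) * v0 * warp(i0, f_t0) + t * v1 * warp(i1, f_t1)
    den = (1 - t) * v0 + t * v1 + eps                     # visibility-weighted normalization
    return num / den
```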
Many video enhancement algorithms rely on optical flow to register frames in a video sequence. Precise flow estimation is however intractable; and optical flow itself is often a sub-optimal representation for particular video processing tasks. In this paper, we propose task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner. We design a neural network with a trainable motion estimation component and a video processing component, and train them jointly to learn the task-oriented flow. For evaluation, we build Vimeo-90K, a large-scale, high-quality video dataset for low-level video processing. TOFlow outperforms traditional optical flow on standard benchmarks as well as our Vimeo-90K dataset in three video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.
Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on the alignment of nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high frame rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
Video frame synthesis, which consists of interpolation and extrapolation, is an essential video processing technique that can be applied to various scenarios. However, most existing methods cannot handle small objects or large motion well, especially in high-resolution videos such as 4K videos. To eliminate such limitations, we introduce a neighbor correspondence matching (NCM) algorithm for flow-based frame synthesis. Since the current frame is not available in video frame synthesis, NCM is performed in a current-frame-agnostic fashion to establish multi-scale correspondences in the spatio-temporal neighborhood of each pixel. Building on the strong motion representation capability of NCM, we further propose to estimate intermediate flows for frame synthesis in a heterogeneous coarse-to-fine scheme. Specifically, the coarse-scale module is designed to leverage neighbor correspondences to capture large motion, while the fine-scale module is more computationally efficient and speeds up the estimation process. Both modules are trained progressively to eliminate the resolution gap between the training dataset and real-world videos. Experimental results show that NCM achieves state-of-the-art performance on several benchmarks. In addition, NCM can be applied to various practical scenarios, such as video compression, to achieve better performance.
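A minimal sketch (ours, with assumed shapes) of the kind of local correspondence volume such neighborhood matching builds on: every pixel in one frame's feature map is correlated with a (2r+1) x (2r+1) spatial neighborhood in the other frame's feature map.

```python
import torch
import torch.nn.functional as F

def neighbor_correlation(feat0, feat1, radius=3):
    """feat0, feat1: (N,C,H,W) feature maps. Returns (N,(2r+1)^2,H,W) correlations."""
    n, c, h, w = feat0.shape
    padded = F.pad(feat1, (radius,) * 4)               # zero-pad spatially
    costs = []
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = padded[:, :, dy:dy + h, dx:dx + w]
            costs.append((feat0 * shifted).sum(dim=1) / c ** 0.5)   # normalized dot product
    return torch.stack(costs, dim=1)
```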
Conventional video compression approaches use the predictive coding architecture and encode the corresponding motion information and residual information. In this paper, taking advantage of both classical architecture in the conventional video compression method and the powerful nonlinear representation ability of neural networks, we propose the first end-to-end video compression deep model that jointly optimizes all the components for video compression. Specifically, learning based optical flow estimation is utilized to obtain the motion information and reconstruct the current frames. Then we employ two auto-encoder style neural networks to compress the corresponding motion and residual information. All the modules are jointly learned through a single loss function, in which they collaborate with each other by considering the trade-off between reducing the number of compression bits and improving quality of the decoded video. Experimental results show that the proposed approach can outperform the widely used video coding standard H.264 in terms of PSNR and be even on par with the latest standard H.265 in terms of MS-SSIM. Code is released at https://github.com/GuoLusjtu/DVC. [Figure: visual comparison of an original frame against H.264 (0.0540 Bpp/0.945 MS-SSIM), H.265 (0.082 Bpp/0.960), and the proposed method (0.0529 Bpp/0.961).]
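A sketch (symbols ours) of the single rate-distortion objective the abstract refers to: the distortion of the reconstructed frame plus an estimate of the bits spent on the compressed motion and residual latents, traded off by a Lagrange multiplier.

```python
def rate_distortion_loss(x, x_hat, bits_motion, bits_residual, lam, num_pixels):
    """x: original frame; x_hat: decoded frame; bits_*: estimated bit costs of the latents."""
    distortion = ((x - x_hat) ** 2).mean()             # e.g. MSE for PSNR-oriented training
    rate = (bits_motion + bits_residual) / num_pixels  # bits per pixel
    return lam * distortion + rate                     # joint R-D trade-off over all modules
```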
Implicit Neural Representations (INR) have recently been shown to be a powerful tool for high-quality video compression. However, existing works are limited as they do not explicitly exploit the temporal redundancy in videos, leading to a long encoding time. Additionally, these methods have fixed architectures which do not scale to longer videos or higher resolutions. To address these issues, we propose NIRVANA, which treats videos as groups of frames and fits separate networks to each group performing patch-wise prediction. This design shares computation within each group, in the spatial and temporal dimensions, resulting in reduced encoding time of the video. The video representation is modeled autoregressively, with networks fit on a current group initialized using weights from the previous group's model. To further enhance efficiency, we perform quantization of the network parameters during training, requiring no post-hoc pruning or quantization. When compared with previous works on the benchmark UVG dataset, NIRVANA improves encoding quality from 37.36 to 37.70 (in terms of PSNR) and the encoding speed by 12X, while maintaining the same compression rate. In contrast to prior video INR works which struggle with larger resolution and longer videos, we show that our algorithm is highly flexible and scales naturally due to its patch-wise and autoregressive designs. Moreover, our method achieves variable bitrate compression by adapting to videos with varying inter-frame motion. NIRVANA achieves 6X decoding speed and scales well with more GPUs, making it practical for various deployment scenarios.
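A sketch (ours, with a hypothetical model interface) of the autoregressive fitting loop the abstract describes: frames are split into groups, a separate network is fit to each group, and each new network is initialized from the previous group's weights so that only a small update needs to be learned.

```python
import copy
import torch
import torch.nn.functional as F

def fit_video(groups, make_model, steps=1000, lr=1e-3):
    """groups: list of tensors, each (T_g, 3, H, W); make_model: factory for a fresh network."""
    prev_model, fitted = None, []
    for frames in groups:
        times = torch.arange(frames.shape[0], dtype=torch.float32)
        # warm-start from the previous group's weights (autoregressive modeling)
        model = copy.deepcopy(prev_model) if prev_model is not None else make_model()
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            pred = model(times)            # hypothetical: decode the group's frames patch-wise
            loss = F.mse_loss(pred, frames)
            opt.zero_grad()
            loss.backward()
            opt.step()
        fitted.append(model)
        prev_model = model
    return fitted
```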
Recent works have shown that implicit neural representations (INRs) can meaningfully represent signal derivatives. In this work, we leverage this property to perform video frame interpolation (VFI) by explicitly constraining the derivatives of the INR to satisfy the optical flow constraint equation. Using only the target video and its optical flow, we achieve state-of-the-art VFI over limited motion ranges, without learning the interpolation operator from additional training data. We further show that constraining the INR derivatives not only allows better interpolation of intermediate frames but also improves the ability of narrow networks to fit the observed frames, which suggests potential applications to video compression and INR optimization.
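A sketch (ours) of penalizing the optical flow constraint equation, I_x * u + I_y * v + I_t = 0, on a video INR I(x, y, t), using autograd to obtain the INR's derivatives. `inr` is assumed to map (x, y, t) coordinates to a single intensity value; `flow_u`, `flow_v` are precomputed optical flow components at those coordinates.

```python
import torch

def flow_constraint_loss(inr, coords, flow_u, flow_v):
    """coords: (N, 3) tensor of (x, y, t) sample locations."""
    coords = coords.requires_grad_(True)
    intensity = inr(coords).sum()    # scalar, so grad gives per-sample derivatives for an MLP
    grads = torch.autograd.grad(intensity, coords, create_graph=True)[0]  # (N, 3): I_x, I_y, I_t
    i_x, i_y, i_t = grads[:, 0], grads[:, 1], grads[:, 2]
    return ((i_x * flow_u + i_y * flow_v + i_t) ** 2).mean()
```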
Volume data is found in many important scientific and engineering applications. Rendering this data at high quality and interactive rates for demanding applications such as virtual reality is still not feasible, even on professional-grade hardware. We introduce FoVolNet, a method to significantly improve the performance of volume data visualization. We develop a cost-effective rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full frame using deep neural networks. Foveated rendering is a technique that prioritizes rendering computation around the user's focal point; it exploits properties of the human visual system, thereby saving computational resources when rendering data in the periphery of the user's field of vision. Our reconstruction network combines direct and kernel prediction methods to produce fast, stable, and perceptually convincing output. With a slim design and the use of quantization, our method outperforms state-of-the-art neural reconstruction techniques in both end-to-end frame time and visual quality. We conduct extensive evaluations of the system's rendering performance, inference speed, and perceptual properties, and provide comparisons with competing neural image reconstruction techniques. Our test results show that FoVolNet consistently saves significant time over conventional rendering while preserving perceptual quality.
We study how to represent videos with implicit neural representations (INRs). Classical INR methods usually utilize an MLP to map input coordinates to output pixels, while some recent works have tried to directly reconstruct the whole image with a CNN. However, we argue that both the above pixel-wise and image-wise strategies are not well suited to video data. Instead, we propose a patch-wise solution, PS-NeRV, which represents a video as a function of patches and the corresponding patch coordinates. It naturally inherits the advantages of image-wise methods and achieves excellent reconstruction performance with fast decoding speed. The whole method is built from conventional modules, such as positional embedding, MLPs, and CNNs, and also introduces AdaIN to enhance the intermediate features. These simple yet essential changes help the network easily fit high-frequency details. Extensive experiments demonstrate its effectiveness in several video-related tasks, such as video compression and video inpainting.
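A minimal sketch (ours, interface assumed) of patch-wise decoding: the network maps a time index plus normalized patch coordinates to the pixels of that patch, and a full frame is assembled by tiling the decoded patches.

```python
import torch

def decode_frame(model, t, grid_h, grid_w):
    """model(t, (py, px)) -> (3, ph, pw) patch; returns the assembled (3, H, W) frame."""
    rows = []
    for i in range(grid_h):
        row = [model(t, (i / grid_h, j / grid_w)) for j in range(grid_w)]
        rows.append(torch.cat(row, dim=-1))   # stitch patches along the width
    return torch.cat(rows, dim=-2)            # stitch rows along the height
```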
Learned video compression methods have shown great promise in catching up with traditional video codecs in rate-distortion (R-D) performance. However, existing learned video compression schemes are limited by the binding of the prediction mode and a fixed network framework. They cannot support various inter prediction modes and are therefore not applicable to various scenarios. In this paper, to break this limitation, we propose a versatile learned video compression (VLVC) framework that uses one model to support all possible prediction modes. Specifically, to realize versatile compression, we first build a motion compensation module that applies multiple 3D motion vector fields (i.e., voxel flows) for weighted trilinear warping in a spatio-temporal space. The voxel flows convey the information of the temporal reference position, which helps to decouple the inter prediction modes from the framework design. Second, in the case of multiple-reference-frame prediction, we apply a flow prediction module to predict accurate motion trajectories with a unified polynomial function. We show that the flow prediction module can greatly reduce the transmission cost of voxel flows. Experimental results show that the proposed VLVC not only supports versatile compression in various settings but also achieves comparable R-D performance with the latest VVC standard in terms of MS-SSIM.
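A sketch (ours, conventions assumed) of weighted trilinear warping with voxel flows: each of K predicted 3D motion vector fields points into a stack of reference frames treated as a spatio-temporal volume, the volume is sampled trilinearly, and the K warped results are blended with predicted weights.

```python
import torch
import torch.nn.functional as F

def voxel_flow_warp(refs, voxel_flows, weights):
    """refs: (N, C, T, H, W) stacked reference frames;
    voxel_flows: list of K tensors (N, 3, H, W) holding (dx, dy, t_ref), where t_ref is
    the temporal reference position in frames; weights: (N, K, H, W), softmaxed over K."""
    n, c, t, h, w = refs.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    warped = []
    for flow in voxel_flows:
        x = xs.to(refs) + flow[:, 0]
        y = ys.to(refs) + flow[:, 1]
        z = flow[:, 2]                                 # temporal reference position
        grid = torch.stack((2 * x / (w - 1) - 1,
                            2 * y / (h - 1) - 1,
                            2 * z / max(t - 1, 1) - 1), dim=-1)      # (N, H, W, 3)
        out = F.grid_sample(refs, grid.unsqueeze(1), mode="bilinear",
                            padding_mode="border", align_corners=True)  # trilinear sampling
        warped.append(out.squeeze(2))                  # (N, C, H, W)
    stacked = torch.stack(warped, dim=1)               # (N, K, C, H, W)
    return (weights.unsqueeze(2) * stacked).sum(dim=1)
```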
The challenge of rendering high-frame-rate graphics videos on low-compute devices can be addressed by periodically predicting future frames to enhance the user experience in virtual reality applications. This is studied through the problem of temporal view synthesis (TVS), where the goal is to predict the next frame of a video given the previous frames and the head poses of the previous and next frames. In this work, we consider TVS of dynamic scenes in which both the user and the objects are moving. We design a framework that disentangles motion into user motion and object motion, in order to effectively use the available user motion while predicting the next frame. We predict the motion of objects by isolating and estimating the 3D object motion in the past frames and then extrapolating it. We use multi-plane images (MPI) as the 3D representation of the scene and model the object motion as the 3D displacement between corresponding points in the MPI representation. To handle the sparsity in the MPI while estimating motion, we incorporate partial convolutions and masked correlation layers to find the corresponding points. The predicted object motion is then integrated with the given user or camera motion to generate the next frame. Using an infilling module, we synthesize the regions uncovered due to the camera and object motion. We develop a new synthetic dataset for TVS of dynamic scenes consisting of 800 videos at full-HD resolution. We show through experiments on our dataset and the MPI Sintel dataset that our model outperforms all competing methods in the literature.
Block based motion estimation is integral to inter prediction processes performed in hybrid video codecs. Prevalent block matching based methods that are used to compute block motion vectors (MVs) rely on computationally intensive search procedures. They also suffer from the aperture problem, which can worsen as the block size is reduced. Moreover, the block matching criteria used in typical codecs do not account for the resulting levels of perceptual quality of the motion compensated pictures that are created upon decoding. Towards achieving the elusive goal of perceptually optimized motion estimation, we propose a search-free block motion estimation framework using a multi-stage convolutional neural network, which is able to conduct motion estimation on multiple block sizes simultaneously, using a triplet of frames as input. This composite block translation network (CBT-Net) is trained in a self-supervised manner on a large database that we created from publicly available uncompressed video content. We deploy the multi-scale structural similarity (MS-SSIM) loss function to optimize the perceptual quality of the motion compensated predicted frames. Our experimental results highlight the computational efficiency of our proposed model relative to conventional block matching based motion estimation algorithms, for comparable prediction errors. Further, when used to perform inter prediction in AV1, the MV predictions of the perceptually optimized model result in average Bjontegaard-delta rate (BD-rate) improvements of -1.70% and -1.52% with respect to the MS-SSIM and Video Multi-Method Assessment Fusion (VMAF) quality metrics, respectively as compared to the block matching based motion estimation system employed in the SVT-AV1 encoder.
DNN-based frame interpolation, which generates intermediate frames from two consecutive frames, typically relies on model architectures with a large number of features, preventing their deployment on systems with limited resources such as mobile devices. We propose a compression-driven network design for frame interpolation, which leverages model pruning through sparsity-inducing optimization to greatly reduce the model size while achieving higher performance. Specifically, we first compress the recently proposed AdaCoF model and show that a 10x compressed AdaCoF performs similarly to its original counterpart, based on a comprehensive study of different strategies that use layerwise sparsity information as a guide under various hyperparameter settings. We then enhance this compressed model by introducing a multi-resolution warping module, which improves visual consistency with multi-level details. As a result, we achieve a significant performance gain with a quarter of the size of the original AdaCoF. Moreover, our model performs favorably against other state-of-the-art approaches on a broad range of datasets. We note that the proposed compression-driven framework is generic and can be easily transferred to other DNN-based frame interpolation algorithms. The source code is available at https://github.com/tding1/cdfi.
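A sketch (ours; the paper's specific optimization scheme may differ) of the sparsity-inducing ingredient in such a compression-driven design: add an L1 penalty on the weights during training, then read off the layerwise sparsity and use it as a guide for how much each layer can be shrunk.

```python
import torch
import torch.nn as nn

def l1_penalty(model: nn.Module) -> torch.Tensor:
    # penalize only weight matrices/kernels, not biases
    return sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)

def layerwise_sparsity(model: nn.Module, thresh: float = 1e-3):
    """Fraction of near-zero weights per layer after sparse training."""
    return {name: (p.abs() < thresh).float().mean().item()
            for name, p in model.named_parameters() if p.dim() > 1}

# training objective: loss = interpolation_loss + lam * l1_penalty(model)
```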
We propose a novel DNN-based framework, called the enhanced correlation matching based video frame interpolation network, to support high resolutions such as 4K, which involve large-scale motion and occlusion. Considering the extensibility of the network model with respect to resolution, the proposed scheme employs a recurrent pyramid architecture that shares the parameters for optical flow estimation across pyramid layers. In the proposed flow estimation, the optical flows are recursively refined by tracing the location with the maximum correlation. The forward-warping-based correlation matching improves the accuracy of the flow update by excluding incorrectly warped features around the occlusion area. Based on the final bi-directional flows, the intermediate frame at an arbitrary temporal position is synthesized using a warping and blending network and is further improved by a refinement network. Experimental results demonstrate that the proposed scheme outperforms previous works on 4K video data as well as on low-resolution benchmark datasets, in terms of both objective and subjective quality, with the smallest number of model parameters.