Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA resolution depth maps at up to 27 FPS while achieving high fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; a detailed description of each solution is provided in this paper.
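As a minimal sketch of how such a converted solution could be run on a Raspberry Pi class device, the snippet below loads a TensorFlow Lite depth model and produces a VGA depth map. The model file name and the 480x640 input resolution are placeholder assumptions, not a specific challenge submission.

```python
# Minimal sketch: running a converted TFLite depth estimation model on a
# Raspberry Pi 4. The model path and the 480x640 (VGA) input size are
# assumptions for illustration only.
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="depth_estimator.tflite", num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy RGB frame normalized to [0, 1]; a real pipeline would feed camera frames.
frame = np.random.rand(1, 480, 640, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
depth_map = interpreter.get_tensor(out["index"])  # e.g. (1, 480, 640, 1)
print(depth_map.shape)
```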
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline replacing the standard mobile ISPs that can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with the large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and are able to process Full HD photos in 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
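The sketch below shows one common way such a learned ISP network could be exported to TensorFlow Lite with FP16 weights, the format typically recommended for mobile GPU delegates. The small Keras model is a placeholder assumption, not a challenge design.

```python
# Sketch: exporting a learned ISP network to TensorFlow Lite with FP16 weights
# for mobile GPU execution. The `isp_model` below is a toy placeholder.
import tensorflow as tf

isp_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 4)),       # packed RAW Bayer input
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(12, 3, padding="same"),       # 12 = 3 RGB channels x 2x2 upscale
    tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 2)),
])

converter = tf.lite.TFLiteConverter.from_keras_model(isp_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]     # half-precision weights for GPU
open("learned_isp_fp16.tflite", "wb").write(converter.convert())
```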
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for the automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and ask the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and a power consumption of 0.2 Watt per 30 FPS. A detailed description of all models developed in the challenge is provided in this paper.
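A minimal sketch of how a submission's throughput could be estimated on-device is shown below: the converted TFLite network is run in a loop and the average latency is converted to FPS. The model file name and a float32 input are assumptions; the challenge's dedicated evaluation tools would additionally measure power.

```python
# Sketch: rough on-device FPS measurement of a converted TFLite model
# (model file name and float32 input are placeholder assumptions).
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="video_sr_4x.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

frame = np.random.rand(*inp["shape"]).astype(np.float32)
for _ in range(10):                                  # warm-up runs
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
fps = runs / (time.perf_counter() - start)
print(f"throughput: {fps:.1f} FPS")
```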
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models for high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
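The snippet below sketches the standard post-training full-integer (INT8) quantization path in TensorFlow Lite that such NPU-targeted solutions typically rely on. The toy 3X network and the random calibration data are placeholder assumptions; a real submission would calibrate on DIV2K crops.

```python
# Sketch: post-training full-integer (INT8) quantization of a small 3X
# super-resolution network for an edge NPU. Toy model and calibration data.
import numpy as np
import tensorflow as tf

sr_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(360, 640, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3 * 3 * 3, 3, padding="same"),
    tf.keras.layers.Lambda(lambda x: tf.nn.depth_to_space(x, 3)),  # 3x upscaling
])

def representative_data():
    # A real submission would iterate over DIV2K training crops here.
    for _ in range(100):
        yield [np.random.rand(1, 360, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(sr_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
open("sr_int8.tflite", "wb").write(converter.convert())
```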
Monocular depth estimation is an important task in the computer vision community. Although tremendously successful methods have achieved excellent results, most of them are computationally expensive and not suitable for real-time inference. In this paper, we aim at a more practical application of monocular depth estimation, where the solution should consider not only accuracy but also inference time on mobile devices. To this end, we first develop an end-to-end learning-based model with a small weight size (1.4MB) and a short inference time (27 FPS on a Raspberry Pi 4). Then, we propose a simple yet effective data augmentation strategy, called R2 crop, to boost model performance. Moreover, we observe that a simple lightweight model trained with only a single loss term suffers from a performance bottleneck. To alleviate this issue, we adopt multiple loss terms that provide sufficient constraints during the training stage. Furthermore, with a simple dynamic re-weighting strategy, we can avoid time-consuming hyper-parameter selection for the loss terms. Finally, we adopt structure-aware distillation to further improve model performance. Notably, our solution ranked 2nd in the MAI&AIM 2022 Monocular Depth Estimation Challenge with an si-RMSE of 0.311, an RMSE of 3.79, and an inference time of 37 ms, tested on a Raspberry Pi 4. Notably, we provide the fastest solution in the challenge. Code and models will be released at https://github.com/zhyever/litedepth.
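To illustrate the idea of combining several loss terms without hand-tuned weights, the PyTorch sketch below scales each auxiliary loss so its magnitude matches an anchor term. This magnitude-matching rule and the loss names are assumptions for illustration, not necessarily the authors' exact re-weighting strategy.

```python
# Illustrative sketch of dynamic loss re-weighting across multiple terms.
# The specific weighting rule is an assumption, not the authors' exact scheme.
from typing import Dict
import torch

def combine_losses(loss_terms: Dict[str, torch.Tensor], anchor: str = "si_log") -> torch.Tensor:
    """Scale every auxiliary loss so its magnitude matches the anchor loss."""
    anchor_val = loss_terms[anchor].detach()
    total = loss_terms[anchor]
    for name, value in loss_terms.items():
        if name == anchor:
            continue
        weight = anchor_val / (value.detach() + 1e-8)  # no hyper-parameter tuning needed
        total = total + weight * value
    return total

# Example usage inside a training step (loss values are placeholders):
losses = {
    "si_log": torch.tensor(0.45, requires_grad=True),
    "gradient": torch.tensor(1.30, requires_grad=True),
    "ssim": torch.tensor(0.08, requires_grad=True),
}
combine_losses(losses).backward()
```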
The increased importance of mobile photography created a need for fast and performant RAW image processing pipelines capable of producing good visual results in spite of the mobile camera sensor limitations. While deep learning-based approaches can efficiently solve this problem, their computational requirements usually remain too large for high-resolution on-device image processing. To address this limitation, we propose a novel PyNET-V2 Mobile CNN architecture designed specifically for edge devices, able to process RAW 12MP photos directly on mobile phones in under 1.5 seconds while producing high perceptual photo quality. To train and evaluate the performance of the proposed solution, we use the real-world Fujifilm UltraISP dataset consisting of thousands of RAW-RGB image pairs captured with a professional medium-format 102MP Fujifilm camera and a popular Sony mobile camera sensor. The results demonstrate that the PyNET-V2 Mobile model can substantially surpass the quality of traditional ISP pipelines, while outperforming the previously introduced neural network-based solutions designed for fast image processing. Furthermore, we show that the proposed architecture is also compatible with the latest mobile AI accelerators such as NPUs or APUs, which can be used to further reduce the latency of the model to as little as 0.5 seconds. The dataset, code and pre-trained models used in this paper are available on the project website: https://github.com/gmalivenko/PyNET-v2
With the ever-growing demand for computational photography and imaging on mobile platforms, advanced image sensors and novel imaging algorithms are being developed and integrated into camera systems. However, the lack of high-quality data for research and the rare opportunities for in-depth exchange between industry and academia constrain the development of mobile intelligent photography and imaging (MIPI). To bridge this gap, we introduce the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms. In this paper, we introduce RGB+ToF Depth Completion, one of the five tracks, which addresses the fusion of an RGB sensor and a ToF sensor (with spot illumination). The participants were provided with a new dataset named TetrasRGBD, containing 18k pairs of high-quality synthetic RGB+Depth training data and 2.3k pairs of testing data from mixed sources. All the data were collected in indoor scenes. We require the running time of all methods to be real-time on a desktop GPU. The final results were evaluated using objective metrics and a Mean Opinion Score (MOS) for subjective assessment. A detailed description of all models in this challenge is provided in this paper. More details of this challenge and the link to the dataset can be found at https://github.com/mipi-challenge/mipi2022.
Depth is essential information for autonomous vehicles to perceive obstacles. Due to the relatively low price and small size of monocular cameras, depth estimation from a single RGB image has attracted great interest in the research community. In recent years, the application of deep neural networks (DNNs) has significantly improved the accuracy of monocular depth estimation (MDE). State-of-the-art methods are usually designed on top of complex and extremely deep network architectures, which require more computational resources and cannot run in real time without high-end GPUs. Although some researchers have tried to accelerate the running speed, the accuracy of depth estimation is degraded because the compressed model cannot sufficiently represent images. In addition, the inherent characteristics of the feature extractors used by existing methods result in severe loss of spatial information in the produced feature maps, which also impairs the accuracy of depth estimation on small images. In this study, we are motivated to design a novel and efficient convolutional neural network (CNN) that assembles two shallow encoder-decoder-style subnetworks in succession to address these problems. In particular, we emphasize the trade-off between MDE accuracy and speed. Extensive experiments have been conducted on the NYU Depth V2, KITTI, Make3D, and Unreal datasets. Compared with state-of-the-art methods with extremely deep and complex architectures, the proposed network not only achieves comparable performance but also runs at a much faster speed on a single, less powerful GPU.
Monocular depth estimation can play an important role in addressing the issue of deriving scene geometry from 2D images. It has been used in a variety of domains, including robotics, self-driving cars, scene comprehension, 3D reconstruction, and others. The goal of our method is to create a lightweight machine-learning model that predicts the depth value of each pixel given only a single RGB image as input, using the U-Net structure from image segmentation networks. We use the NYU Depth V2 dataset to test the structure and compare the results with other methods. The proposed method achieves relatively high accuracy and low root-mean-square error.
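The sketch below shows a minimal U-Net-style encoder-decoder that regresses a one-channel depth map from an RGB image. Channel widths and network depth are assumptions chosen for brevity, not the exact architecture evaluated in the paper.

```python
# Minimal PyTorch sketch of a U-Net-style depth regression network
# (illustrative sizes only).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNetDepth(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return torch.relu(self.head(d1))                       # non-negative depth

depth = TinyUNetDepth()(torch.rand(1, 3, 480, 640))  # -> (1, 1, 480, 640)
```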
As a fundamental component of many autonomous driving and robotic activities such as ego-motion estimation, obstacle avoidance, and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past decades, a large number of methods have been developed. However, to the best of our knowledge, there is no comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce popular performance evaluation metrics, and summarize publicly available datasets. We also summarize available open-source implementations of some representative methods and compare their performance. Furthermore, we review the applications of MDE in some important robotic tasks. Finally, we conclude this paper by presenting some promising directions for future research. This survey is expected to assist readers in navigating this research field.
This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks. Track 1 aims at the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed videos. In Track 1, we use the popular DIV2K dataset as the training, validation, and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the LDV 2.0 dataset (335 videos) and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state of the art of super-resolution on compressed images and videos. The proposed LDV 3.0 dataset is available at https://github.com/renyang-home/ldv_dataset. The homepage of this challenge is at https://github.com/renyang-home/aim22_compresssr.
In this paper, we present a fast monocular depth estimation method for enabling 3D perception capabilities of low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline named UDepth, which incorporates domain knowledge of the image formation characteristics of natural underwater scenes. First, we adapt a new input space by exploiting underwater light attenuation, and then devise a least-squares formulation for coarse pixel-wise depth prediction. Subsequently, we extend this into a domain projection loss that guides the end-to-end learning of UDepth on over 9K RGB-D training samples. UDepth is designed with a computationally light MobileNetV2 backbone and a Transformer-based optimizer to ensure fast inference rates on embedded systems. Through domain-aware design choices and comprehensive experimental analyses, we demonstrate that state-of-the-art depth estimation performance can be achieved while ensuring a small computational footprint. Specifically, with 70%-80% fewer network parameters than existing benchmarks, UDepth achieves comparable and often better depth estimation performance. While the full model offers over 66 FPS (13 FPS) inference rates on a single GPU (CPU core), our domain projection for coarse depth prediction runs at 51.5 FPS on single-board NVIDIA Jetson TX2s. The inference pipeline is available at https://github.com/uf-robopi/udepth.
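As an illustration of the least-squares idea, the sketch below fits linear coefficients that map per-pixel color features of underwater images to depth and reuses them for coarse prediction. The feature choice (the red channel and max(G, B), motivated by the stronger attenuation of red light underwater) and the linear model are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative least-squares depth prior on attenuation-motivated color features.
import numpy as np

def fit_depth_prior(images: np.ndarray, depths: np.ndarray) -> np.ndarray:
    """images: (N, H, W, 3) in [0, 1]; depths: (N, H, W). Returns 3 coefficients."""
    r = images[..., 0].reshape(-1)
    gb_max = images[..., 1:].max(axis=-1).reshape(-1)
    A = np.stack([np.ones_like(r), r, gb_max], axis=1)          # [1, R, max(G, B)]
    coeffs, *_ = np.linalg.lstsq(A, depths.reshape(-1), rcond=None)
    return coeffs

def coarse_depth(image: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    r = image[..., 0]
    gb_max = image[..., 1:].max(axis=-1)
    return coeffs[0] + coeffs[1] * r + coeffs[2] * gb_max

# Example with random placeholder data:
imgs, gts = np.random.rand(4, 120, 160, 3), np.random.rand(4, 120, 160)
print(coarse_depth(imgs[0], fit_depth_prior(imgs, gts)).shape)   # (120, 160)
```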
In this paper, we address the problem of monocular bokeh synthesis, where we attempt to render a shallow depth-of-field image from a single all-in-focus image. Unlike DSLR cameras, mobile cameras cannot capture this effect directly due to the physical constraints of the mobile aperture. We therefore propose a network-based approach that is capable of rendering realistic monocular bokeh from a single image input. To this end, we introduce three new edge-aware bokeh losses based on a predicted monocular depth map, which sharpen the foreground edges while blurring the background. The model is then finetuned with an adversarial loss to produce a photorealistic bokeh effect. Experimental results show that our method is able to generate a pleasing and natural bokeh effect with sharp edges while handling complicated scenes.
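The sketch below illustrates the general flavor of an edge-aware reconstruction loss: pixels near strong depth edges are weighted more, encouraging sharp foreground boundaries in the rendered bokeh. This weighting scheme is an assumption for illustration and not the exact definition of the three losses proposed in the paper.

```python
# Illustrative edge-aware bokeh reconstruction loss (PyTorch).
import torch
import torch.nn.functional as F

def depth_edge_weight(depth: torch.Tensor) -> torch.Tensor:
    """depth: (B, 1, H, W). Returns per-pixel weights in [1, 2]."""
    dx = torch.abs(depth[:, :, :, 1:] - depth[:, :, :, :-1])
    dy = torch.abs(depth[:, :, 1:, :] - depth[:, :, :-1, :])
    grad = F.pad(dx, (0, 1, 0, 0)) + F.pad(dy, (0, 0, 0, 1))
    return 1.0 + grad / (grad.amax(dim=(2, 3), keepdim=True) + 1e-8)

def edge_aware_bokeh_loss(pred: torch.Tensor, target: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    w = depth_edge_weight(depth)                 # emphasize depth discontinuities
    return (w * torch.abs(pred - target)).mean()

# Placeholder tensors standing in for rendered bokeh, ground truth and depth:
pred, gt, d = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128), torch.rand(2, 1, 128, 128)
print(edge_aware_bokeh_loss(pred, gt, d))
```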
Semantic segmentation is a computer vision task that assigns each pixel of an image to a class. The task of semantic segmentation should be performed with both accuracy and efficiency. Most existing deep FCNs require heavy computation and are very power hungry, making them unsuitable for real-time applications on portable devices. This project analyzes current semantic segmentation models to explore the feasibility of applying these models for emergency response during catastrophic events. We compare the performance of real-time semantic segmentation models with non-real-time counterparts, constrained to aerial images under oppositional settings. Furthermore, we train several models on the Flood-Net dataset, containing UAV images captured after Hurricane Harvey, and benchmark their execution on special classes such as flooded buildings vs. non-flooded buildings or flooded roads vs. non-flooded roads. In this project, we developed a real-time UNet-based model and deployed that network on a Jetson AGX Xavier module.
Display technologies have evolved over the years. It is critical to develop practical HDR capture, processing, and display solutions to bring 3D technologies to the next level. Depth estimation of multi-exposure stereo image sequences is an essential task in the development of cost-effective 3D HDR video content. In this paper, we develop a novel deep architecture for multi-exposure stereo depth estimation. The proposed architecture has two novel components. First, the stereo matching technique used in conventional stereo depth estimation is revamped. For the stereo depth estimation part of our architecture, a mono-to-stereo transfer learning approach is deployed. The proposed formulation circumvents the cost volume construction requirement, which is replaced by a single encoder-decoder CNN with different weights for feature fusion. EfficientNet-based blocks are used to learn the disparity. Second, we combine the disparity maps obtained from the stereo images at different exposure levels using a robust disparity feature fusion approach. The disparity maps obtained at different exposures are merged using weight maps computed for different quality measures. The final predicted disparity map is more robust and retains the best features that preserve depth discontinuities. The proposed CNN offers the flexibility to train with standard dynamic range stereo data or with multi-exposure low dynamic range stereo sequences. In terms of performance, the proposed model surpasses state-of-the-art monocular and stereo depth estimation methods, both quantitatively and qualitatively, on the challenging Scene Flow and differently exposed Middlebury stereo datasets. The architecture performs exceedingly well on complex natural scenes, demonstrating its usefulness for diverse 3D HDR applications.
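The snippet below sketches the fusion step in a simplified form: disparity maps estimated at different exposures are merged with per-pixel weight maps. Here the weights come from a simple local-contrast measure; the quality measures used in the paper may differ, so this is an illustrative stand-in.

```python
# Illustrative weighted fusion of disparity maps from multiple exposures.
from typing import List
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(image_gray: np.ndarray, size: int = 7) -> np.ndarray:
    mean = uniform_filter(image_gray, size)
    return np.sqrt(np.maximum(uniform_filter(image_gray ** 2, size) - mean ** 2, 0.0)) + 1e-6

def fuse_disparities(disparities: List[np.ndarray], exposures_gray: List[np.ndarray]) -> np.ndarray:
    weights = np.stack([local_contrast(g) for g in exposures_gray])   # (K, H, W)
    weights /= weights.sum(axis=0, keepdims=True)                     # normalize per pixel
    return (weights * np.stack(disparities)).sum(axis=0)

# Placeholder data: three exposures of the same 120x160 view.
disps = [np.random.rand(120, 160) for _ in range(3)]
grays = [np.random.rand(120, 160) for _ in range(3)]
print(fuse_disparities(disps, grays).shape)  # (120, 160)
```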
Depth completion aims to predict dense pixel-wise depth from an extremely sparse map captured by a depth sensor such as LiDAR. It plays an essential role in various applications such as autonomous driving, 3D reconstruction, augmented reality, and robot navigation. Recent deep learning-based solutions have demonstrated state-of-the-art success in this task. In this paper, we provide the first comprehensive literature review, which helps readers better grasp the research trends and clearly understand the current advances. We investigate the related studies from the design aspects of network architectures, loss functions, benchmark datasets, and learning strategies, with a novel taxonomy that categorizes existing methods. Besides, we present a quantitative comparison of model performance on three widely used benchmarks, including indoor and outdoor datasets. Finally, we discuss the challenges of prior works and provide readers with some insights into future research directions.
Single-image super-resolution can support robotic tasks in environments where a reliable visual stream is required to monitor the mission, handle teleoperation, or study relevant visual details. In this work, we propose an efficient generative adversarial network model for real-time super-resolution. We adopt a tailored architecture of the original SRGAN and model quantization to boost execution on CPU and Edge TPU devices, reaching up to 200 FPS inference. We further optimize our model by distilling its knowledge into a smaller version of the network and obtain remarkable improvements compared to the standard training approach. Our experiments show that our fast and lightweight model preserves considerably satisfying image quality compared to heavier state-of-the-art models. Finally, we conduct experiments on image transmission with bandwidth degradation to highlight the advantages of the proposed system for mobile robotic applications.
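The sketch below illustrates the distillation idea: the student network is trained against both the ground truth and the frozen teacher's output. The 0.5 mixing weight, the L1 criteria, and the placeholder 2x networks are assumptions, not the exact recipe used in the paper.

```python
# Illustrative knowledge-distillation training step for a super-resolution student.
import torch
import torch.nn as nn

def distillation_step(student: nn.Module, teacher: nn.Module,
                      lr_batch: torch.Tensor, hr_batch: torch.Tensor,
                      optimizer: torch.optim.Optimizer, alpha: float = 0.5) -> float:
    with torch.no_grad():
        teacher_sr = teacher(lr_batch)              # frozen teacher prediction
    student_sr = student(lr_batch)
    loss = ((1 - alpha) * nn.functional.l1_loss(student_sr, hr_batch)
            + alpha * nn.functional.l1_loss(student_sr, teacher_sr))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder 2x-upscaling networks standing in for the SRGAN teacher/student:
teacher = nn.Sequential(nn.Conv2d(3, 12, 3, padding=1), nn.PixelShuffle(2))
student = nn.Sequential(nn.Conv2d(3, 12, 3, padding=1), nn.PixelShuffle(2))
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
lr_imgs, hr_imgs = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 128, 128)
print(distillation_step(student, teacher, lr_imgs, hr_imgs, opt))
```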
Self-supervised monocular depth estimation is an attractive solution that does not require hard-to-source depth labels for training. Convolutional neural networks (CNNs) have recently achieved great success in this task. However, their limited receptive field constrains existing network architectures to reason only locally, dampening the effectiveness of the self-supervised paradigm. In light of the recent successes achieved by Vision Transformers (ViTs), we propose MonoViT, a brand-new framework combining the global reasoning enabled by ViT models with the flexibility of self-supervised monocular depth estimation. By combining plain convolutions with Transformer blocks, our model can reason locally and globally, yielding depth prediction at a higher level of detail and accuracy, allowing MonoViT to achieve state-of-the-art performance on the established KITTI dataset. Moreover, MonoViT demonstrates superior generalization capabilities on other datasets such as Make3D and DrivingStereo.
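The sketch below pairs a local convolution with a Transformer encoder layer over flattened spatial tokens, illustrating how local and global reasoning can be combined in one block. It is a schematic illustration under assumed channel and head sizes, not the MonoViT architecture itself.

```python
# Schematic hybrid convolution + Transformer block (not the MonoViT design).
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
        )
        self.attn = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, dim_feedforward=2 * channels, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.conv(x)                       # local reasoning
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        tokens = self.attn(tokens)                 # global reasoning via self-attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)

feat = torch.rand(1, 64, 24, 80)                   # e.g. a downsampled feature map
print(ConvTransformerBlock()(feat).shape)          # (1, 64, 24, 80)
```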
Depth estimation is a challenging task of 3D reconstruction for improving the accuracy of environment awareness. This work brings a new solution with a set of improvements that increase the quantitative and qualitative understanding of depth maps compared to existing methods. Recently, convolutional neural networks (CNNs) have demonstrated an extraordinary ability to estimate depth maps from monocular images. However, traditional CNNs do not support topological structures; they can only work on regular image regions with determined sizes and weights. On the other hand, graph convolutional networks (GCNs) can handle the convolution of non-Euclidean data and can be applied to irregular image regions within a topological structure. Therefore, in this work, in order to preserve object geometric appearances and distributions, we aim at exploiting GCNs for a self-supervised depth estimation model. Our model consists of two parallel auto-encoder networks: the first is an auto-encoder that relies on a ResNet-50 to extract features from the input image and a multi-scale GCN to estimate the depth map. In turn, the second network is used to estimate the ego-motion vector (i.e., 3D pose) between two consecutive frames based on a ResNet-18. Both the estimated 3D pose and depth map are used to construct the target image. A combination of loss functions related to photometric, projection, and smoothness terms is used to cope with bad depth predictions and preserve the discontinuities of objects. In particular, our method provides comparable and promising results, with a high prediction accuracy of 89% on the public benchmark and Make3D datasets, along with a 40% reduction in the number of trainable parameters compared to state-of-the-art solutions. The source code is publicly available at https://github.com/arminmasoumian/gcndepth.git
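The snippet below sketches the two most common self-supervised terms mentioned above: a photometric reconstruction loss between the reference frame and the image reconstructed from the predicted depth and pose, and an edge-aware smoothness loss on the disparity. These are generic formulations, not necessarily the exact losses used in this work.

```python
# Generic photometric and edge-aware smoothness losses for self-supervised depth.
import torch

def photometric_l1(reconstructed: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return torch.abs(reconstructed - target).mean()

def edge_aware_smoothness(disp: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
    """Penalize disparity gradients, except where the image itself has edges."""
    d_dx = torch.abs(disp[:, :, :, 1:] - disp[:, :, :, :-1])
    d_dy = torch.abs(disp[:, :, 1:, :] - disp[:, :, :-1, :])
    i_dx = torch.abs(image[:, :, :, 1:] - image[:, :, :, :-1]).mean(1, keepdim=True)
    i_dy = torch.abs(image[:, :, 1:, :] - image[:, :, :-1, :]).mean(1, keepdim=True)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

img = torch.rand(2, 3, 128, 416)
disp = torch.rand(2, 1, 128, 416)
warped = torch.rand(2, 3, 128, 416)   # placeholder for the view-synthesized frame
loss = photometric_l1(warped, img) + 1e-3 * edge_aware_smoothness(disp, img)
print(loss)
```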
Following recent advances in novel view synthesis, we propose its application to improve monocular depth estimation. In particular, we propose a novel training method split into three main steps. First, the prediction results of a monocular depth network are warped to an additional view point. Second, we apply an additional image synthesis network, which corrects and improves the quality of the warped RGB image. The output of this network is required to look as similar as possible to the ground-truth view by minimizing the pixel-wise RGB reconstruction error. Third, we reapply the same monocular depth estimation onto the synthesized second view point and ensure that the depth predictions are consistent with the associated ground-truth depth. Experimental results prove that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets with a lightweight and simple vanilla U-Net architecture.
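For intuition on the first step, the sketch below warps an RGB frame to a nearby viewpoint using a predicted depth map, under the simplifying assumption of a rectified, horizontally shifted camera so that the warp reduces to a per-pixel disparity fx * baseline / depth. The general case in the paper would use full intrinsics and pose; the focal length and baseline here are placeholder values.

```python
# Simplified depth-based warp to a horizontally shifted viewpoint (illustration only).
import torch
import torch.nn.functional as F

def warp_to_second_view(image: torch.Tensor, depth: torch.Tensor,
                        fx: float = 720.0, baseline: float = 0.1) -> torch.Tensor:
    b, _, h, w = image.shape
    disparity = fx * baseline / depth.clamp(min=1e-3)           # pixels, (B, 1, H, W)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.to(image) + disparity[:, 0]                          # shift sampling locations
    ys = ys.to(image).expand(b, h, w)
    grid = torch.stack([2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1], dim=-1)
    return F.grid_sample(image, grid, align_corners=True, padding_mode="border")

img = torch.rand(1, 3, 192, 640)
depth = 5.0 + 45.0 * torch.rand(1, 1, 192, 640)                  # placeholder metric depth
print(warp_to_second_view(img, depth).shape)                     # (1, 3, 192, 640)
```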