Recognizing the surrounding environment at low latency is critical in autonomous driving. In a real-time setting, the surrounding environment has already changed by the time processing finishes, and current detection models cannot account for changes that occur during processing. Streaming perception was proposed to jointly assess the latency and accuracy of real-time video perception. However, additional problems arise in real-world applications due to limited hardware resources, high temperatures, and other factors. In this study, we develop a model that reflects processing delays in real time and produces the most reasonable results under those delays. By incorporating the proposed feature queue and feature select module, the system gains the ability to forecast specific time steps without additional computational cost. Our method is tested on the Argoverse-HD dataset. It achieves higher performance than the current state-of-the-art methods (as of 2022.10) in various environments under delay. The code is available at https://github.com/danjos95/DADE
Perception models for autonomous driving require fast inference at low latency. While existing works ignore the inevitable changes in the environment after processing, streaming perception jointly evaluates latency and accuracy as a single metric for online video perception, guiding previous works to search for a trade-off between accuracy and speed. In this paper, we explore the performance of real-time models on this metric and endow the models with the ability to predict the future, significantly improving streaming perception results. Specifically, we build a simple framework with two effective modules. One is a Dual-Flow Perception module (DFP), which consists of a dynamic flow and a static flow in parallel, capturing the moving trend and basic detection features, respectively. The other is a Trend-Aware Loss (TAL), which adapts the weight of each object according to its moving speed. Realistically, we consider driving scenes at multiple velocities and further propose a velocity-aware streaming AP (VsAP) to jointly evaluate accuracy. In this realistic setting, we design an efficient mixed-velocity training strategy to guide the detector to perceive objects at any velocity. Our simple method achieves state-of-the-art performance on the Argoverse-HD dataset compared to strong baselines, improving sAP and VsAP by 4.7% and 8.2% respectively, validating its effectiveness.
In recent years, vision-centric perception has flourished in various autonomous driving tasks, including 3D detection, semantic map construction, motion forecasting, and depth estimation. Nevertheless, the latency of vision-centric approaches is too high for practical deployment (e.g., most camera-based 3D detectors have a runtime greater than 300ms). To bridge the gap between ideal research and real-world applications, it is necessary to quantify the trade-off between performance and efficiency. Traditionally, autonomous-driving perception benchmarks perform the offline evaluation, neglecting the inference time delay. To mitigate the problem, we propose the Autonomous-driving StreAming Perception (ASAP) benchmark, which is the first benchmark to evaluate the online performance of vision-centric perception in autonomous driving. On the basis of the 2Hz annotated nuScenes dataset, we first propose an annotation-extending pipeline to generate high-frame-rate labels for the 12Hz raw images. Referring to the practical deployment, the Streaming Perception Under constRained-computation (SPUR) evaluation protocol is further constructed, where the 12Hz inputs are utilized for streaming evaluation under the constraints of different computational resources. In the ASAP benchmark, comprehensive experiment results reveal that the model rank alters under different constraints, suggesting that the model latency and computation budget should be considered as design choices to optimize the practical deployment. To facilitate further research, we establish baselines for camera-based streaming 3D detection, which consistently enhance the streaming performance across various hardware. ASAP project page: https://github.com/JeffWang987/ASAP.
Efficient vision works maximize accuracy under a latency budget. These works evaluate accuracy offline, one image at a time. However, real-time vision applications such as autonomous driving operate in streaming settings, where the ground truth changes between the start and end of inference. This causes a significant drop in accuracy. A recent work therefore proposed to maximize accuracy in streaming settings on average. In this paper, we propose to maximize streaming accuracy for every environment context. We posit that scene difficulty influences the initial (offline) accuracy difference, while obstacle displacement in the scene affects the subsequent accuracy degradation. Our method, Octopus, uses these scenario properties to select configurations that maximize streaming accuracy at test time. Our method improves tracking performance (S-MOTA) by 7.4% over conventional static approaches. Furthermore, the performance improvement from our method comes in addition to, rather than instead of, advances in offline accuracy.
Efficient and adaptive computer vision systems have been proposed to optimize computer vision tasks, such as image classification and object detection, for embedded or mobile devices. These solutions, quite recent in origin, focus on optimizing the model (a deep neural network, DNN) or the system by designing an adaptive system with approximation knobs. Despite several recent efforts, we show that existing solutions suffer from two major drawbacks. First, the system does not consider the energy consumption of the models when deciding which model to run. Second, the evaluation does not consider the practical scenario of contention on the device due to other co-resident workloads. In this work, we propose Virtuoso, an efficient and adaptive video object detection system that is jointly optimized for accuracy, energy efficiency, and latency. Underlying Virtuoso is a multi-branch execution kernel capable of running at different operating points along the accuracy-energy-latency axes, together with a lightweight runtime scheduler that selects the best execution branch to satisfy user requirements. For a fair comparison with Virtuoso, we benchmark 15 state-of-the-art or widely used protocols, including Faster R-CNN (FRCNN), YOLO v3, SSD, EfficientDet, SELSA, MEGA, REPP, FastAdapt, and our in-house adaptive variants FRCNN+, YOLO+, SSD+, and EfficientDet+ (our variants with enhanced efficiency for mobile devices). With this comprehensive benchmark, Virtuoso shows superiority over all the above protocols, leading the accuracy frontier at every efficiency level on an NVIDIA Jetson mobile GPU. Specifically, Virtuoso achieves an accuracy of 63.9%, more than 10% higher than some popular object detection models, with FRCNN at 51.1% and YOLO at 49.5%.
Perceiving the environment is one of the most fundamental keys to enabling Cooperative Driving Automation (CDA), which is regarded as a revolutionary solution to the safety, mobility, and sustainability issues of contemporary transportation systems. Although an unprecedented evolution is currently happening in the area of computer-vision-based object perception, state-of-the-art perception methods still struggle with complex real-world traffic environments due to inevitable physical occlusion and the limited receptive field of single-vehicle systems. Based on multiple spatially separated perception nodes, Cooperative Perception (CP) was born to unlock the perception bottleneck for driving automation. In this paper, we comprehensively review and analyze the research progress on CP and, to the best of our knowledge, propose a unified CP framework for the first time. Architectures and taxonomies of CP systems based on different types of sensors are reviewed to give a high-level description of the workflows and different structures of CP systems. Node structures, sensor modalities, and fusion schemes are reviewed and analyzed with detailed explanations drawn from a comprehensive body of literature. A hierarchical CP framework is proposed, followed by a review of existing datasets and simulators to sketch the overall landscape of CP. The discussion highlights current opportunities, open challenges, and anticipated future trends.
There is tremendous scope for improving the energy efficiency of embedded vision systems by incorporating programmable region-of-interest (ROI) readout into the image sensor design. In this work, we study how to leverage ROI programmability for tracking applications by anticipating where the ROI will be located in future frames and switching pixels off outside of that region. We refer to this process of ROI prediction and the corresponding sensor configuration as adaptive subsampling. Our adaptive subsampling algorithms comprise an object detector and an ROI predictor (a Kalman filter) which operate in conjunction to optimize the energy efficiency of the vision pipeline, with object tracking as the end task. To further facilitate real-life implementation of our adaptive algorithms, we select a candidate algorithm and map it onto an FPGA. Leveraging Xilinx Vitis AI tools, we design and accelerate a YOLO-object-detector-based adaptive subsampling algorithm. To further refine the algorithm post-deployment, we evaluate several competing baselines on the OTB100 and LaSOT datasets. We find that coupling the ECO tracker with the Kalman filter achieves competitive AUC scores of 0.4568 and 0.3471 on the OTB100 and LaSOT datasets, respectively. Furthermore, the power efficiency of this algorithm is on par with, and in several cases outperforms, the other baselines. The ECO-based algorithm incurs a power consumption of approximately 4 W on the two datasets, while the YOLO-based approach requires approximately 6 W (according to our power consumption model). In terms of the accuracy-latency trade-off, the ECO-based algorithm provides near real-time performance (19.23 FPS) while managing to attain competitive tracking precision.
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducts an extensive literature review of the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, bias inherent in datasets, the high volume of data to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, and shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. The possible ways of dealing with the challenges are also explored while prioritizing practical deployment.
In this work, we present a novel scheduling framework enabling anytime perception for deep neural network (DNN) based 3D object detection pipelines. We focus on the computationally expensive region proposal network (RPN) and per-category multi-head detector components, which are common in 3D object detection pipelines, and make them deadline-aware. We propose a scheduling algorithm that intelligently selects a subset of components to achieve an effective time-accuracy trade-off on the fly. We minimize the accuracy loss of skipping some of the neural network sub-components by projecting previously detected objects onto the current scene through estimation. We apply our method to a state-of-the-art 3D object detection network, PointPillars, and evaluate its performance on a Jetson Xavier AGX using the nuScenes dataset. Compared to baselines, our approach significantly improves the network's accuracy under various deadline constraints.
Deep neural networks (DNNs) are widely used in autonomous driving for perception, decision making, and control due to their high accuracy. In safety-critical systems such as autonomous driving, executing tasks like sensing and perception in real time is vital to the vehicle's safety, which requires the application's execution time to be predictable. However, non-negligible time variation is observed in DNN inference. Current DNN inference studies either ignore the time variation issue or rely on the scheduler to handle it. None of the current work explains the root causes of DNN inference time variation. Understanding the time variation of DNN inference is therefore a fundamental challenge for real-time scheduling in autonomous driving. In this work, we analyze the time variation of DNN inference from six perspectives: data, I/O, model, runtime, hardware, and the end-to-end perception system. Six insights are derived for understanding the time variation of DNN inference.
Video object detection has been an important yet challenging topic in computer vision. Traditional methods mainly focus on designing image-level or box-level feature propagation strategies to exploit temporal information. This paper argues that with a more effective and efficient feature propagation framework, video object detectors can improve in terms of both accuracy and speed. To this end, this paper studies object-level feature propagation and proposes an object query propagation (QueryProp) framework for high-performance video object detection. The proposed QueryProp contains two propagation strategies: 1) query propagation is performed from sparse key frames to dense non-key frames to reduce redundant computation on non-key frames; 2) query propagation is performed from previous key frames to the current key frame to improve feature representation through temporal context modeling. To further facilitate query propagation, an adaptive propagation gate is designed to enable flexible key frame selection. We conduct extensive experiments on the ImageNet VID dataset. QueryProp achieves accuracy comparable to state-of-the-art methods and strikes a decent accuracy/speed trade-off. Code is available at https://github.com/hf1995/queryprop.
Utilizing the latest advances in Artificial Intelligence (AI), the computer vision community is now witnessing an unprecedented evolution in all kinds of perception tasks, particularly in object detection. Based on multiple spatially separated perception nodes, Cooperative Perception (CP) has emerged to significantly advance the perception of automated driving. However, current cooperative object detection methods mainly focus on ego-vehicle efficiency without considering the practical issues of system-wide costs. In this paper, we introduce VINet, a unified deep learning-based CP network for scalable, lightweight, and heterogeneous cooperative 3D object detection. VINet is the first CP method designed from the standpoint of large-scale system-level implementation and can be divided into three main phases: 1) Global Pre-Processing and Lightweight Feature Extraction, which prepare the data in a global style and extract features for cooperation in a lightweight manner; 2) Two-Stream Fusion, which fuses the features from scalable and heterogeneous perception nodes; and 3) Central Feature Backbone and 3D Detection Head, which further process the fused features and generate cooperative detection results. A cooperative perception platform is designed and developed for CP dataset acquisition, and several baselines are compared in the experiments. The experimental analysis shows that VINet achieves remarkable improvements for pedestrians and cars with 2x lower system-wide computational cost and 12x lower system-wide communication cost.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Deep learning has been widely used in the perception (e.g., 3D object detection) of intelligent vehicle driving. Thanks to beneficial Vehicle-to-Vehicle (V2V) communication, deep-learning-based features from other agents can be shared with the ego vehicle to improve its perception. This is known as Cooperative Perception in V2V research, whose algorithms have advanced dramatically in recent years. However, all existing cooperative perception algorithms assume ideal V2V communication, without considering that shared features may be corrupted by the Lossy Communication (LC) that is common in complex real-world driving scenarios. In this paper, we first study the side effects (e.g., detection performance drop) of lossy communication on V2V Cooperative Perception, and then propose a novel intermediate LC-aware feature fusion method to relieve these side effects via an LC-aware Repair Network (LCRN) and to enhance the interaction between the ego vehicle and other vehicles via a specially designed V2V Attention Module (V2VAM), which includes intra-vehicle attention for the ego vehicle and uncertainty-aware inter-vehicle attention. Extensive experiments on the public cooperative perception dataset OPV2V (based on the digital-twin CARLA simulator) demonstrate that the proposed method is highly effective for cooperative point-cloud-based 3D object detection under lossy V2V communication.
LiDAR-based sensing drives current autonomous vehicles. Despite rapid progress, current LiDAR sensors still lag two decades behind traditional color cameras in terms of resolution and cost. For autonomous driving, this means that large objects close to the sensors are easily visible, but far-away or small objects comprise only one or two measurements. This is a problem, especially when these objects turn out to be driving hazards. On the other hand, these same objects are clearly visible in onboard RGB sensors. In this work, we present an approach to seamlessly fuse RGB sensors into LiDAR-based 3D recognition. Our approach takes a set of 2D detections and generates dense 3D virtual points to augment an otherwise sparse 3D point cloud. These virtual points integrate naturally into any standard LiDAR-based 3D detector alongside regular LiDAR measurements. The resulting multi-modal detector is simple and effective. Experimental results on the large-scale nuScenes dataset show that our framework improves a strong CenterPoint baseline by a significant 6.6 mAP and outperforms competing fusion approaches. Code and more visualizations are available at https://tianweiy.github.io/mvp/
With the rapid development of intelligent vehicles and advanced driver-assistance systems (ADAS), a new trend is that mixed levels of human driver engagement will be involved in the transportation system. Under these circumstances, necessary visual guidance for drivers is vitally important to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel vision-cloud data fusion methodology, integrating camera images and digital-twin information from the cloud to help intelligent vehicles make better decisions. Target vehicle bounding boxes are drawn and matched with the help of an object detector (running on the ego vehicle) and position information (received from the cloud). The best matching result, with an accuracy of 79.2% under a 0.7 intersection-over-union threshold, is obtained when depth images serve as an additional feature source. A case study on lane change prediction is conducted to demonstrate the effectiveness of the proposed data fusion methodology. In the case study, a multi-layer perceptron algorithm with modified lane change prediction approaches is proposed. Human-in-the-loop simulation results obtained from the Unity game engine show that the proposed model can significantly improve highway driving performance in terms of safety, comfort, and environmental sustainability.
Multi-modal fusion is a basic task of autonomous driving system perception, which has attracted the interest of many scholars in recent years. Current multi-modal fusion methods mainly focus on camera data and LiDAR data, but pay little attention to the kinematic information provided by the bottom sensors of the vehicle, such as acceleration, vehicle speed, and angle of rotation. This information is not affected by complex external scenes, making it more robust and reliable. In this paper, we introduce the existing application fields of vehicle bottom information and the research progress of related methods, as well as multi-modal fusion methods based on bottom information. We also describe the relevant vehicle-bottom-information datasets in detail to facilitate future research. In addition, new ideas for multi-modal fusion technology in autonomous driving tasks are proposed to promote the further utilization of vehicle bottom information.
Event-based vision has been growing rapidly in recent years, driven by the unique characteristics it presents, such as its high temporal resolution (~1us), high dynamic range (>120dB), and output latency of only a few microseconds. This work further explores a hybrid, multi-modal approach for object detection and tracking that leverages state-of-the-art frame-based detectors complemented by hand-crafted event-based methods to improve the overall tracking performance with minimal computational overhead. The methods presented include event-based bounding box (BB) refinement, which improves the precision of the resulting BBs, as well as a continuous event-based object detection method to recover missed detections and generate inter-frame detections that enable a high-temporal-resolution tracking output. The advantages of these methods are quantitatively verified by an ablation study using the higher order tracking accuracy (HOTA) metric. Results show significant performance gains, reflected in an improvement in HOTA from 56.6%, using only frames, to 64.1% and 64.9% for the event- and edge-based mask configurations combined with the two proposed methods, at the baseline framerate of 24Hz. Likewise, incorporating these methods with the same configurations improved HOTA from 52.5% to 63.1%, and from 51.3% to 60.2%, at the high-temporal-resolution tracking rate of 384Hz. Finally, a validation experiment is conducted to analyze real-world single-object tracking performance using high-speed LiDAR. Empirical evidence shows that our approaches provide significant advantages over frame-based object detectors at the baseline framerate of 24Hz and at higher tracking rates of up to 500Hz.
In autonomous driving systems, perception — the identification of features and objects from the environment — is crucial. In autonomous racing, high speeds and small margins demand rapid and accurate detection systems. During a race, the weather can change abruptly, causing significant degradation in perception and resulting in ineffective maneuvers. To improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions — the collection of which is a tedious, laborious, and costly process. However, recent developments in CycleGAN architectures allow the synthesis of highly realistic scenes in multiple weather conditions. To this end, we introduce an approach that uses synthesized adverse-condition datasets (generated with CycleGAN) in autonomous racing to improve the performance of five state-of-the-art detectors by an average of 42.7 and 4.4 mAP percentage points in the presence of night-time conditions and droplets, respectively. Furthermore, we present a comparative analysis of five object detectors, identifying the optimal pairing of detector and training data for use in autonomous racing under challenging conditions.
Bounded by the inherent ambiguity of depth perception, contemporary camera-based 3D object detection methods fall into a performance bottleneck. Intuitively, leveraging temporal multi-view stereo (MVS) technology is a natural way to resolve this ambiguity. However, traditional MVS attempts are flawed in two aspects when applied to 3D object detection scenes: 1) the affinity measurement among all views incurs expensive computational cost; 2) it is difficult to handle outdoor scenarios where objects are often moving. To this end, we introduce an effective temporal stereo method that dynamically selects the scale of matching candidates, significantly reducing the computation overhead. Going one step further, we design an iterative algorithm to update the more valuable candidates, making it adaptive to moving objects. We instantiate our proposed method into a multi-view 3D detector, namely BEVStereo. BEVStereo achieves new state-of-the-art performance (i.e., 52.5% mAP and 61.0% NDS) on the camera-only track of the nuScenes dataset. Meanwhile, extensive experiments show that our method handles complex outdoor scenarios better than contemporary MVS approaches. The code has been released at https://github.com/Megvii-BaseDetection/BEVStereo