In recent years, event cameras (DVS - Dynamic Vision Sensors) have been used in vision systems as an alternative or complement to traditional cameras. They are characterised by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions - parameters particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyse different representations of the event data: event frames, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as the detector. For the particular representations, we obtain a detection accuracy of 86.9-88.9% mAP@0.5. The use of a fusion of the considered representations allows us to obtain a detector with a higher accuracy of 89.9% mAP@0.5. In comparison, the detector for the frames reconstructed with FireNet is characterised by 52.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as standalone sensors or in close cooperation with typical frame-based cameras.
translated by Google Translate
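Of the event representations the abstract lists, the exponentially decaying time surface is the easiest to sketch in a few lines: each pixel holds the timestamp of its most recent event, and the surface value decays exponentially with that event's age. The sketch below is a minimal illustration, not the authors' implementation; the `(t, x, y, polarity)` event format and the decay constant `tau` are assumptions.

```python
import numpy as np

def time_surface(events, shape, t_ref, tau=50e3):
    """Exponentially decaying time surface of an event stream.

    events: iterable of (t, x, y, polarity) tuples, timestamps in microseconds.
    shape:  (height, width) of the sensor.
    t_ref:  reference time at which the surface is evaluated.
    tau:    decay constant in microseconds (assumed value).
    """
    last_ts = np.full(shape, -np.inf)  # most recent event time per pixel
    for t, x, y, p in events:
        if t <= t_ref:
            last_ts[y, x] = t
    # each pixel decays exponentially with the age of its latest event;
    # pixels that never fired decay to exactly 0 (exp of -inf)
    return np.exp((last_ts - t_ref) / tau)

surface = time_surface([(0, 1, 1, 1), (90, 2, 2, 1)], (4, 4), t_ref=100)
```

Such a surface can be fed to a frame-based detector like YOLOv4 as an ordinary single-channel image, which is what makes these representations attractive as a bridge between event streams and conventional CNNs.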
This paper presents a method for detection and recognition of traffic signs based on information extracted from an event camera. The solution used a FireNet deep convolutional neural network to reconstruct events into greyscale frames. Two YOLOv4 network models were trained, one based on greyscale images and the other on colour images. The best result was achieved for the model trained on the basis of greyscale images, achieving an efficiency of 87.03%.
This paper proposes the use of an event camera as a component of a vision system that enables counting of fast-moving objects - in this case, falling corn grains. This type of camera transmits information about the change in brightness of individual pixels and is characterised by low latency, no motion blur, correct operation in different lighting conditions, as well as very low power consumption. The proposed counting algorithm processes events in real time. The operation of the solution was demonstrated on a stand consisting of a chute with a vibrating feeder, which allowed the number of falling grains to be adjusted. The objective of the control system with a PID controller was to maintain a constant average number of falling objects. The proposed solution was subjected to a series of tests to verify the correct operation of the developed method. On this basis, the validity of using an event camera to count small, fast-moving objects, and the associated wide range of potential industrial applications, can be confirmed.
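The control loop described above is a textbook discrete PID controller: the error between the target grain count and the measured count drives a correction of the feeder amplitude. A minimal sketch follows; the gains, time step, and setpoint are illustrative assumptions, not values from the paper.

```python
class PID:
    """Discrete PID controller (illustrative; gains are assumptions)."""

    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured):
        # standard PID law: u = Kp*e + Ki*integral(e) + Kd*de/dt
        error = self.setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# drive the feeder towards a target of 20 grains per measurement interval
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=20.0, dt=0.1)
correction = pid.update(measured=15.0)  # positive: feed rate is too low
```

In the paper's setup the measured value would be the per-interval count produced by the event-based counting algorithm, and the controller output would adjust the vibrating feeder.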
Visual object tracking under challenging conditions of motion and light can be hindered by the capabilities of conventional cameras, prone to producing images with motion blur. Event cameras are novel sensors suited to robustly perform vision tasks under these conditions. However, due to the nature of their output, applying them to object detection and tracking is non-trivial. In this work, we propose a framework to take advantage of both event cameras and off-the-shelf deep learning for object tracking. We show that reconstructing event data into intensity frames improves the tracking performance in conditions under which conventional cameras fail to provide acceptable results.
This work presents a co-capture system for multi-animal visual data acquisition using conventional cameras and an event camera. Event cameras offer multiple advantages over frame-based cameras, such as a high temporal resolution and temporal redundancy suppression, which enable us to efficiently capture the fast and erratic movements of fish. In addition, we propose an event-based multi-animal tracking algorithm that demonstrates the feasibility of this approach and provides a foundation for further exploration of the advantages of combining event cameras and conventional cameras for multi-animal tracking.
Event-based vision has been rapidly growing in recent years, justified by the unique characteristics it presents, such as its high temporal resolution (~1us), high dynamic range (>120dB), and output latency of only a few microseconds. This work further explores a hybrid, multi-modal approach for object detection and tracking that leverages state-of-the-art frame-based detectors complemented by hand-crafted event-based methods to improve the overall tracking performance with minimal computational overhead. The methods presented include event-based bounding box (BB) refinement that improves the precision of the resulting BBs, as well as a continuous event-based object detection method, to recover missed detections and generate inter-frame detections that enable a high-temporal-resolution tracking output. The advantages of these methods are quantitatively verified by an ablation study using the higher order tracking accuracy (HOTA) metric. Results show significant performance gains, reflected in an improvement in HOTA from 56.6%, using only frames, to 64.1% and 64.9%, for the event and edge-based mask configurations combined with the two methods proposed, at the baseline framerate of 24Hz. Likewise, incorporating these methods with the same configurations has improved HOTA from 52.5% to 63.1%, and from 51.3% to 60.2% at the high-temporal-resolution tracking rate of 384Hz. Finally, a validation experiment is conducted to analyze the real-world single-object tracking performance using high-speed LiDAR. Empirical evidence shows that our approaches provide significant advantages compared to using frame-based object detectors at the baseline framerate of 24Hz and higher tracking rates of up to 500Hz.
Neuromorphic vision is a rapidly growing field with numerous applications in the perception systems of autonomous vehicles. Unfortunately, due to the sensor's principle of operation, the event stream contains a significant amount of noise. In this paper, we propose a novel algorithm based on an IIR filter matrix for filtering this type of noise, together with a hardware architecture that allows its acceleration using an SoC FPGA. Our method achieves very good filtering efficiency for uncorrelated noise - more than 99% of noisy events are removed. It has been tested on several event datasets with added random noise. We designed the hardware architecture to reduce the utilisation of the FPGA's internal BRAM resources. This enables very low latency and a throughput of up to 385.8 million events per second. The proposed hardware architecture was verified in simulation and on a Xilinx Zynq UltraScale+ MPSoC chip on the Mercury+ XU9 module.
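The paper's IIR-filter-matrix design targets an FPGA, but the underlying idea of event noise filtering - real motion produces spatio-temporally correlated events, while sensor noise fires in isolation - can be illustrated with a simplified software analogue: keep an event only if one of its neighbouring pixels fired recently. The time window and `(t, x, y, polarity)` event format below are assumptions, and this is a generic background-activity filter, not the authors' exact algorithm.

```python
import numpy as np

def filter_events(events, shape, window=2000):
    """Background-activity filter: an event is kept only if some pixel in
    its 3x3 neighbourhood produced an event within `window` microseconds.
    Simplified software analogue of event denoising; not the paper's
    IIR/FPGA design."""
    h, w = shape
    last_ts = np.full((h, w), -np.inf)  # last event time per pixel
    kept = []
    for t, x, y, p in events:
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        # age of the most recent neighbouring event (inf if none)
        if (t - last_ts[y0:y1, x0:x1]).min() <= window:
            kept.append((t, x, y, p))
        last_ts[y, x] = t
    return kept

# the isolated event at (20, 20) is rejected; the correlated one survives
kept_events = filter_events(
    [(100, 5, 5, 1), (150, 6, 5, 1), (100, 20, 20, 1)], (32, 32))
```

An FPGA implementation replaces the full timestamp map with per-region IIR state precisely to avoid the BRAM cost that a dense `last_ts` array would imply, which matches the resource-reduction motivation stated in the abstract.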
Event cameras produce high-dynamic-range event streams with a high temporal resolution, discarding redundant visual information and thus bringing new possibilities for object detection tasks. However, existing methods that apply event cameras to object detection tasks using deep learning still have many problems. First, existing methods cannot take objects with different velocities into account, due to the globally synchronised time window and temporal resolution. Second, most existing methods rely on neural networks with large numbers of parameters, which implies a heavy computational burden and low inference speed, contrary to the high temporal resolution of the event stream. In our work, we design a high-speed, lightweight detector called the Agile Event Detector (AED), with a simple but effective data augmentation method. In addition, we propose an event stream representation tensor called Temporal Active Focus (TAF), which makes full use of the asynchronous generation of event stream data and is robust to the motion of moving objects. It can also be constructed without being time-consuming. We further propose a module called the Bifurcated Folding Module (BFM) to extract the rich temporal information in the TAF tensor at the input layer of the AED detector. We conduct experiments on two typical real-scene event camera object detection datasets: the complete Prophesee GEN1 Automotive Detection Dataset and the Prophesee 1 Megapixel Automotive Detection Dataset with partial annotation. Experiments show that our method is competitive in terms of accuracy, speed, and the number of parameters. In addition, by classifying objects into multiple motion levels based on an optical flow density metric, we illustrate the robustness of our method for objects with different velocities relative to the camera.
Neuromorphic vision, or event vision, is an advanced vision technology which, in contrast to a visible-light camera that outputs pixel frames, generates neuromorphic events every time there is a brightness change that exceeds a specific threshold in the field of view (FOV). This study focuses on leveraging neuromorphic event data for roadside object detection. This is a proof of concept towards building artificial intelligence (AI) based pipelines which can be used for forward perception systems for advanced vehicular applications. The focus is on building efficient state-of-the-art object detection networks with better inference results for fast-moving forward perception using an event camera. In this article, the event-simulated A2D2 dataset is manually annotated and trained on two different YOLOv5 networks (small and large variants). To further assess its robustness, single model testing and ensemble model testing are carried out.
This work presents a deep learning approach for vehicle detection in satellite video. Vehicle detection in a single EO satellite image may be impossible due to the tiny size of vehicles (4-10 pixels) and their similarity to the background. Instead, we consider satellite video, which overcomes the lack of spatial information through the temporal consistency of vehicle motion. A new spatiotemporal model of a compact $3\times3$ convolutional neural network is proposed, which neglects pooling layers and uses leaky ReLUs. We then use a reformulation of the output heatmaps, including non-maximum suppression (NMS), for the final segmentation. Empirical results on two new annotated satellite videos reconfirm the applicability of this approach for vehicle detection. More importantly, they show that pre-training on WAMI data and then fine-tuning on a few annotated video frames of a new video is sufficient. In our experiments, only five annotated images yield an $F_1$ score of 0.81 on a new video showing more complex traffic patterns than the Las Vegas video. Our best result on Las Vegas is an $F_1$ score of 0.87, which makes the proposed approach a leading method for this benchmark.
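The NMS step on output heatmaps mentioned in the abstract amounts to keeping only pixels that are local maxima above a confidence threshold. A minimal illustration follows; the window size and threshold are assumptions, and the paper's exact post-processing may differ.

```python
import numpy as np

def heatmap_nms(heat, k=3, thresh=0.5):
    """Keep only peaks: a pixel survives if it equals the maximum of its
    k x k neighbourhood and exceeds `thresh`. Illustrative sketch of NMS
    on a detection heatmap."""
    h, w = heat.shape
    pad = k // 2
    padded = np.pad(heat, pad, mode="constant", constant_values=-np.inf)
    peaks = []
    for y in range(h):
        for x in range(w):
            window = padded[y:y + k, x:x + k]
            if heat[y, x] >= thresh and heat[y, x] == window.max():
                peaks.append((y, x, float(heat[y, x])))
    return peaks

heat = np.zeros((5, 5))
heat[2, 2] = 0.9
heat[2, 3] = 0.6  # suppressed: not the local maximum in its window
peaks = heatmap_nms(heat)
```

For tiny, densely packed vehicles this per-pixel peak picking is preferable to box-based NMS, since each detection occupies only a handful of pixels and overlapping boxes would be ambiguous.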
Traffic light detection is crucial for the safe navigation of autonomous vehicles in urban areas. Publicly available traffic light datasets are insufficient for developing algorithms to detect distant traffic lights that provide important navigation information. We introduce a novel benchmark traffic light dataset captured using a pair of narrow-angle and wide-angle cameras, covering urban and semi-urban roads. We provide 1032 images for training and 813 synchronised image pairs for testing. In addition, we provide synchronised video pairs for qualitative analysis. The dataset includes images of 1920$\times$1080 resolution covering 10 different classes. Furthermore, we propose a post-processing algorithm for combining the outputs of the two cameras. Results show that our technique can strike a balance between speed and accuracy, compared to traditional approaches that use a single camera frame.
Judging by popular and generic computer vision challenges, such as ImageNet or PASCAL VOC, neural networks have proven to be exceptionally accurate at recognition tasks. However, state-of-the-art accuracy often comes at a high computational price, requiring hardware acceleration to achieve real-time performance, while use cases such as smart cities require real-time analysis of images from fixed cameras. Due to the amount of network bandwidth these streams would generate, we cannot rely on offloading the computation to a centralised cloud. Therefore, a distributed edge cloud is expected to process the images locally. However, the edge is, by nature, resource-constrained, which puts a limit on the computational complexity that can be executed. Yet, a meeting point between the edge and accurate real-time video analytics is needed. Specialised lightweight models on a per-camera basis may help, but as the number of cameras grows, this quickly becomes unfeasible unless the process is automated. In this paper, we present and evaluate COVA (Contextually Optimized Video Analytics), a framework to assist in the automatic specialisation of models for video analytics in edge cameras. COVA automatically improves the accuracy of lightweight models through specialisation. Moreover, we discuss and review each step involved in the process to understand the different trade-offs each one entails. Additionally, we show how the sole assumption of static cameras allows us to make a series of considerations that greatly simplify the scope of the problem. Finally, experiments show that state-of-the-art models, i.e., those able to generalise to unseen environments, can be effectively used as teachers to tailor smaller networks to a specific context, improving accuracy at a constant computational cost. Results show that our COVA can improve the accuracy of pre-trained models by an average of 21%.
Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer vision community and serves as a key enabler for numerous applications. This technology offers significant advantages, including reduced power consumption, reduced processing needs, and communication speed-ups. However, neuromorphic cameras suffer from significant amounts of measurement noise. This noise deteriorates the performance of neuromorphic event-based perception and navigation algorithms. In this paper, we propose a novel noise filtration algorithm to eliminate events that do not represent real log-intensity variations in the observed scene. We employ a graph neural network (GNN)-driven transformer algorithm, called GNN-Transformer, to classify every active event pixel in the raw stream into real log-intensity variation or noise. Within the GNN, a message-passing framework, called EventConv, is carried out to reflect the spatiotemporal correlation among the events while preserving their asynchronous nature. We also introduce the Known-object Ground-Truth Labelling (KoGTL) approach for generating approximate ground-truth labels of event streams under various illumination conditions. KoGTL is used to generate labelled datasets from experiments recorded in challenging lighting conditions. These datasets are used to train and extensively test our proposed algorithm. When tested on unseen datasets, the proposed algorithm outperforms existing methods by 12% in terms of filtration accuracy. Additional tests are also conducted on publicly available datasets to demonstrate the generalisation capabilities of the proposed algorithm in the presence of illumination variations and different motion dynamics. Compared to existing solutions, qualitative results verify the superior capability of the proposed algorithm to eliminate noise while preserving meaningful scene events.
Wide area motion imagery (WAMI) yields high-resolution images with a large number of extremely small objects. Target objects have large spatial displacements across consecutive frames. This nature of WAMI images makes object tracking and detection challenging. In this paper, we present our deep neural network-based combined object detection and tracking model, namely the Heat Map Network (HM-Net). HM-Net is significantly faster than state-of-the-art frame differencing and background subtraction-based methods, without compromising detection and tracking performance. HM-Net follows a joint detection and tracking paradigm. Simple heat map-based predictions support an unlimited number of simultaneous detections. The proposed method uses two consecutive frames and the object detection heat map from the previous frame as input, which helps HM-Net monitor spatiotemporal changes between frames and track previously predicted objects. Although reusing the prior object detection heat map as a feedback-based memory element is beneficial, it can lead to an unexpected surge of false-positive detections. To increase the robustness of the method against false positives and to eliminate low-confidence detections, HM-Net employs novel feedback filters and advanced data augmentations. HM-Net outperforms state-of-the-art WAMI moving object detection and tracking methods on the WPAFB dataset, with 96.2% F1 and 94.4% mAP detection scores, while achieving a 61.8% mAP tracking score on the same dataset. This performance corresponds to improvements of 2.1% in the F1 and 6.1% in the mAP detection scores, and of 9.5% in the mAP tracking score, over the state of the art.
We propose an event-based snow removal algorithm called EBSnoR. We developed a technique to measure the dwell time of snowflakes on a pixel using event-based camera data, which is used to carry out a Neyman-Pearson hypothesis test to classify event streams into snowflake and background events. The effectiveness of the proposed EBSnoR was verified on a new dataset called UDayton22EBSnow, consisting of data from a forward-facing event camera in a car driving through snow, with manually annotated bounding boxes around surrounding vehicles. Qualitatively, EBSnoR correctly identifies events corresponding to snowflakes; and quantitatively, EBSnoR-preprocessed event data improves the performance of event-based car detection algorithms.
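The dwell-time test at the core of this idea can be illustrated with a simple Neyman-Pearson threshold: if snowflake dwell times are short and background dwell times long, the likelihood ratio is monotone in the dwell time, so the optimal test reduces to comparing the dwell time against a threshold set for a chosen false-alarm rate. The exponential dwell-time model and its parameters below are illustrative assumptions, not the paper's fitted values.

```python
import math

def np_dwell_threshold(lam_bg, alpha):
    """Neyman-Pearson threshold on dwell time, assuming exponential
    dwell-time models. Background dwell ~ Exp(lam_bg); the threshold is
    set so the false-alarm probability on background equals alpha:
    P(T_bg < tau) = 1 - exp(-lam_bg * tau) = alpha."""
    return -math.log(1.0 - alpha) / lam_bg

def classify(dwell_time, threshold):
    # short dwell -> transient occluder (snowflake); long dwell -> scene
    return "snowflake" if dwell_time < threshold else "background"

# background structures persist ~100 ms on average (rate 10 s^-1, assumed)
tau = np_dwell_threshold(lam_bg=10.0, alpha=0.05)
label = classify(0.002, tau)  # a 2 ms dwell falls far below the threshold
```

Because the likelihood ratio of two exponentials is monotone decreasing in the dwell time, this one-sided threshold is the most powerful test at the chosen false-alarm level, which is the appeal of the Neyman-Pearson formulation here.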
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges are yet to be adequately considered in research. This paper conducted an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real world conditions, bias inherent in datasets, the high volume of data needed to be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments have irregular lighting and occlusions, and surveillance cameras can be mounted at a variety of angles, gather dirt, shake in the wind, while the traffic conditions are highly heterogeneous, with violation of rules and complex interactions in crowded scenarios. Some representative applications that suffer from these problems are traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. The possible ways of dealing with the challenges are also explored while prioritizing practical deployment.
Computer vision has played an important role in intelligent transportation systems (ITS) and traffic surveillance. Along with rapidly growing numbers of automated vehicles and crowded cities, automated and advanced traffic management systems (ATMS) using video surveillance infrastructure have been enabled by the implementation of deep neural networks. In this research, we provide a practical platform for real-time traffic monitoring, including 3D vehicle/pedestrian detection, speed detection, trajectory estimation, congestion detection, as well as monitoring the interactions of vehicles and pedestrians, all using a single CCTV traffic camera. We adapt a customised YOLOv5 deep neural network model for vehicle/pedestrian detection and an enhanced SORT tracking algorithm. A hybrid satellite-ground based inverse perspective mapping (SG-IPM) method for camera auto-calibration is also developed, which leads to accurate 3D object detection and visualisation. We also develop a hierarchical traffic modelling solution based on short- and long-term temporal video data streams to understand traffic flows, bottlenecks, and risky spots for vulnerable road users. Several experiments on real-world scenarios, with comparisons against the state of the art, are conducted using various traffic monitoring datasets, including MIO-TCD, UA-DETRAC, and GRAM-RTM, collected from highways, intersections, and urban areas under different lighting and weather conditions.
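Inverse perspective mapping, the building block behind the SG-IPM calibration mentioned above, is in essence a planar homography from image pixels to ground-plane coordinates. The sketch below shows only this generic mapping step; how SG-IPM estimates the homography from satellite imagery is not reproduced here, and the matrix used is a placeholder.

```python
import numpy as np

def ipm_point(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates via a 3x3
    homography H, using homogeneous coordinates. Generic IPM step; the
    homography itself would come from a calibration such as SG-IPM."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # dehomogenise

# with the identity homography the mapping is a no-op (sanity check)
ground_xy = ipm_point(np.eye(3), 10.0, 20.0)
```

Once pixels can be projected onto the ground plane, speeds and trajectories follow directly from per-frame detections, which is what makes auto-calibration central to the platform described in the abstract.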
Detecting dangerous traffic agents in videos captured by vehicle-mounted dashboard cameras (dashcams) is essential for facilitating safe navigation in complex environments. Accident-related videos are just a small portion of driving video big data, and the transient pre-accident processes are highly dynamic and complex. Moreover, the appearances of risky and non-risky traffic agents can be similar. These factors make risky object localisation in driving videos particularly challenging. To this end, this paper proposes an attention-guided multistream feature fusion network (AM-Net) to localise dangerous traffic agents in dashcam videos. Two gated recurrent unit (GRU) networks use object bounding boxes and optical flow features extracted from consecutive video frames to capture spatiotemporal cues for distinguishing dangerous traffic agents. An attention module coupled with the GRUs learns to attend to the traffic agents relevant to an accident. Fusing the two feature streams, AM-Net predicts the riskiness scores of traffic agents in the video. In support of this study, the paper also introduces a benchmark dataset named the Risky Object Localisation (ROL) dataset. The dataset contains spatial, temporal, and categorical annotations with accident, object, and scene-level attributes. The proposed AM-Net achieves a promising performance of 85.73% AUC on the ROL dataset. Meanwhile, AM-Net outperforms the current state of the art for video anomaly detection on the DoTA dataset. A thorough ablation study further reveals AM-Net's merits by evaluating the contributions of its different components.
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
The last decade witnessed increasingly rapid progress in self-driving vehicle technology, mainly backed up by advances in the area of deep learning and artificial intelligence. The objective of this paper is to survey the current state-of-the-art on deep learning technologies used in autonomous driving. We start by presenting AI-based self-driving architectures, convolutional and recurrent neural networks, as well as the deep reinforcement learning paradigm. These methodologies form a base for the surveyed driving scene perception, path planning, behavior arbitration and motion control algorithms. We investigate both the modular perception-planning-action pipeline, where each module is built using deep learning methods, as well as End2End systems, which directly map sensory information to steering commands. Additionally, we tackle current challenges encountered in designing AI architectures for autonomous driving, such as their safety, training data sources and computational hardware. The comparison presented in this survey helps to gain insight into the strengths and limitations of deep learning and AI approaches for autonomous driving and assist with design choices.