Road-safety inspection is an indispensable instrument for reducing road-accident fatalities attributed to road infrastructure. Recent work formalizes road-safety assessment in terms of carefully selected risk factors, also known as road-safety attributes. In current practice, these attributes are manually annotated in geo-referenced monocular video for each road segment. We propose to reduce dependency on tedious human labor by automating recognition with a two-stage neural architecture. The first stage predicts more than forty road-safety attributes by observing a local spatio-temporal context. Our design leverages an efficient convolutional pipeline, which benefits from pre-training on semantic segmentation of street scenes. The second stage enhances predictions through sequential integration across a larger temporal window. Our design leverages per-attribute instances of a lightweight bidirectional LSTM architecture. Both stages alleviate extreme class imbalance by incorporating a multi-task variant of recall-based dynamic loss weighting. We perform experiments on the iRAP-BH dataset, which involves fully labeled geo-referenced video along 2,300 km of public roads in Bosnia and Herzegovina. We also validate our approach by comparing it with related work on two road-scene classification datasets from the literature: Honda Scenes and FM3m. Experimental evaluation confirms the value of our contributions on all three datasets.
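To make the loss-weighting idea concrete, here is a minimal PyTorch sketch of recall-based dynamic loss weighting for a multi-label setup; the update rule (weights proportional to one minus running recall) and all hyper-parameters are illustrative assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def update_class_weights(tp, fn, eps=1e-6):
    """Per-class weights grow as recall drops, so rare attributes the
    model keeps missing dominate the next epoch's loss (illustrative rule)."""
    recall = tp / (tp + fn + eps)
    return 1.0 - recall  # low recall -> high weight

def weighted_multilabel_loss(logits, targets, class_weights):
    """Binary cross-entropy per attribute, scaled by the dynamic weights."""
    per_class = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")   # (batch, num_attrs)
    return (per_class * class_weights).mean()

# toy usage: 3 samples, 4 attributes; tp/fn accumulated over the last epoch
logits = torch.randn(3, 4)
targets = torch.randint(0, 2, (3, 4)).float()
tp = torch.tensor([10., 2., 50., 1.])
fn = torch.tensor([5., 20., 3., 30.])
loss = weighted_multilabel_loss(logits, targets, update_class_weights(tp, fn))
```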
Scene classification has been established as a challenging research problem. Compared with images of individual objects, scene images can be semantically more complex and abstract. The two differ mainly in the granularity level of recognition. Nevertheless, image recognition remains a key pillar of good scene-recognition performance, since knowledge obtained from object images can be used to recognize scenes accurately. Existing scene-recognition methods consider only the category label of the scene. However, we find that contextual information containing detailed local descriptions also helps scene-recognition models become more discriminative. In this paper, we aim to improve scene recognition using the attribute and category label information encoded in objects. Based on the complementarity of attributes and category labels, we propose a multi-task attribute and scene recognition (MASR) network that learns a category embedding while simultaneously predicting scene attributes. Attribute acquisition and object annotation are tedious and time-consuming tasks. We address this problem by proposing a partially supervised annotation strategy in which human intervention is greatly reduced. This strategy provides a more cost-effective solution for real-world scenarios and requires less annotation effort. Moreover, we re-weight the attribute predictions considering the levels of importance indicated by object detection scores. Using the proposed method, we efficiently annotate attribute labels for four large-scale datasets and systematically investigate how scene and attribute recognition benefit each other. Experimental results show that, compared with state-of-the-art methods, the proposed approach achieves competitive performance.
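As a rough illustration of the described design, the sketch below pairs a shared backbone with a category head and an attribute head and re-weights the attribute logits by object-detection confidences; the layer sizes and the multiplicative re-weighting are assumptions for illustration, not the published MASR architecture:

```python
import torch
import torch.nn as nn

class MultiTaskSceneNet(nn.Module):
    """Shared features feed a scene-category head and an attribute head;
    attribute logits are scaled by per-attribute detector confidences
    (a hedged stand-in for the detection-score re-weighting)."""
    def __init__(self, feat_dim=512, num_categories=100, num_attributes=50):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.category_head = nn.Linear(feat_dim, num_categories)
        self.attribute_head = nn.Linear(feat_dim, num_attributes)

    def forward(self, images, det_scores):
        feats = self.backbone(images)
        cat_logits = self.category_head(feats)
        # emphasize attributes tied to confidently detected objects
        attr_logits = self.attribute_head(feats) * det_scores
        return cat_logits, attr_logits

net = MultiTaskSceneNet()
cat, attr = net(torch.randn(2, 3, 64, 64), torch.rand(2, 50))
```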
We introduce the task of spotting temporally precise, fine-grained events in video, i.e., detecting the exact moment at which an event occurs. Precise spotting requires models to reason globally about the full temporal extent of an action and to locally identify subtle frame-level appearance and motion differences that mark events within it. Surprisingly, we find that top-performing solutions to prior video-understanding tasks, such as action detection and segmentation, do not satisfy both requirements simultaneously. In response, we propose E2E-Spot, a compact end-to-end model that performs well on the precise spotting task and can be trained quickly on a single GPU. We demonstrate that E2E-Spot significantly outperforms recent baselines adapted from the video action detection, segmentation, and spotting literature to the precise spotting task. Finally, we contribute new annotations and splits for several fine-grained sports action datasets to make them suitable for future work on precise spotting.
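A minimal sketch of per-frame event spotting is shown below; it stands in for the general recipe (per-frame features, a recurrent temporal head, per-frame logits with a background class) and is not the actual E2E-Spot implementation, which couples a compact 2D CNN with temporal shift modules:

```python
import torch
import torch.nn as nn

class FrameSpotter(nn.Module):
    """Per-frame features -> recurrent head -> per-frame event logits.
    A simplified, assumed stand-in for a compact spotting model."""
    def __init__(self, feat_dim=256, num_events=4):
        super().__init__()
        self.gru = nn.GRU(feat_dim, feat_dim // 2, batch_first=True,
                          bidirectional=True)
        self.classifier = nn.Linear(feat_dim, num_events + 1)  # +1 background

    def forward(self, frame_feats):           # (batch, time, feat_dim)
        temporal, _ = self.gru(frame_feats)   # (batch, time, feat_dim)
        return self.classifier(temporal)      # logits for every frame

logits = FrameSpotter()(torch.randn(2, 100, 256))  # (2, 100, 5)
```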
At present, urban mobility research and government initiatives focus mostly on motor-related issues, e.g., congestion and pollution. However, we cannot overlook the most vulnerable elements of the urban landscape: pedestrians, who are exposed to higher risks than other road users. Indeed, safe, accessible, and sustainable urban transport systems are a core target of the UN 2030 Agenda. There is therefore an opportunity to apply advanced computational tools to the problem of traffic safety, particularly for pedestrians, who have often been overlooked in the past. This paper combines public data sources, large-scale street imagery, and computer vision techniques to approach pedestrian and vehicle safety with an automated, relatively simple, and universally applicable data-processing scheme. The steps involved in this pipeline include the adaptation and training of a residual convolutional neural network to determine a hazard index for each given urban scene, as well as an interpretability analysis based on image segmentation and class activation mapping of those same images. Combined, the outcome of this computational approach is a fine-grained map of hazard levels across a city, together with heuristics to identify interventions that might simultaneously improve pedestrian and vehicle safety. The proposed framework should be understood as a complement to the work of urban planners and public authorities.
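Since the pipeline's interpretability step relies on class activation mapping, the following sketch shows the standard CAM computation (Zhou et al., 2016) for a single class; the tensor shapes are illustrative:

```python
import torch

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM for one class: a weighted sum of the last conv feature maps,
    using that class's weights in the final linear layer.
    feature_maps: (C, H, W); fc_weights: (num_classes, C)."""
    weights = fc_weights[class_idx]                      # (C,)
    cam = torch.einsum("c,chw->hw", weights, feature_maps)
    cam = torch.relu(cam)                                # keep positive evidence
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]

cam = class_activation_map(torch.randn(512, 7, 7), torch.randn(10, 512), 3)
```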
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
translated by 谷歌翻译
[Cityscapes dataset overview (TU Dresden, www.cityscapes-dataset.net): train/val split with fine annotations (3,475 images), train split with coarse annotations (20,000 images), test split with fine annotations (1,525 images).]
Deep supervised models have an unprecedented capacity to absorb large quantities of training data. Hence, training on multiple datasets becomes a method of choice towards strong generalization in usual scenes and graceful performance degradation in edge cases. Unfortunately, different datasets often have incompatible labels. For instance, the Cityscapes road class subsumes all driving surfaces, while Vistas defines separate classes for road markings, manholes, etc. Furthermore, many datasets have overlapping labels. For instance, pickups are labeled as trucks in VIPER, cars in Vistas, and vans in ADE20k. We address this challenge by considering labels as unions of universal visual concepts. This allows seamless and principled learning on multi-domain dataset collections without requiring any relabeling effort. Our method achieves competitive within-dataset and cross-dataset generalization, as well as the ability to learn visual concepts that are not separately labeled in any of the training datasets. Experiments reveal competitive or state-of-the-art performance on two multi-domain dataset collections and on the WildDash 2 benchmark.
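One way to realize "labels as unions of universal visual concepts" is a partial-label loss in which a dataset label's probability is the sum of the softmax probabilities of its member concepts; the sketch below illustrates this at the image level (the paper operates per pixel), with concept sets chosen purely for illustration:

```python
import torch
import torch.nn.functional as F

def union_label_nll(logits, concept_sets, target_labels):
    """NLL when each dataset label is a union of universal concepts:
    the label probability is the sum of softmax probabilities of its
    members. logits: (batch, num_universal_classes); concept_sets:
    list of index tensors, one per dataset label."""
    probs = F.softmax(logits, dim=1)
    losses = []
    for p, label in zip(probs, target_labels):
        label_prob = p[concept_sets[int(label)]].sum()
        losses.append(-torch.log(label_prob + 1e-8))
    return torch.stack(losses).mean()

# e.g. a coarse 'road' label covering universal {road_surface, marking, manhole}
concepts = [torch.tensor([0, 1, 2]), torch.tensor([3])]
loss = union_label_nll(torch.randn(4, 5), concepts, torch.tensor([0, 1, 0, 1]))
```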
Machine learning is transforming the video-editing industry. Recent advances in computer vision have leveled up video-editing tasks such as intelligent reframing, rotoscoping, color grading, and applying digital makeup. However, most solutions have focused on video manipulation and VFX. This work introduces the Anatomy of Video Editing, a dataset and benchmark, to foster research in AI-assisted video editing. Our benchmark suite focuses on video-editing tasks beyond visual effects, such as automatic footage organization and assisted video assembly. To enable research on these fronts, we annotate more than 1.5 million tags from 196,176 shots sampled from movie scenes. We establish competitive baseline methods and detailed analyses for each of the tasks. We hope our work sparks innovative research towards the underexplored areas of AI-assisted video editing.
While the design of sustainable and resilient urban built environments is increasingly promoted around the world, significant data gaps challenge research on pressing sustainability issues. Sidewalks are known to have strong economic and environmental impacts; however, most cities lack a spatial catalog of their surfaces due to the cost-prohibitive and time-consuming nature of data collection. Recent advances in computer vision, together with the availability of street-level imagery, provide new opportunities for cities to extract large-scale built-environment data with lower implementation cost and higher accuracy. In this paper, we propose CitySurfaces, an active-learning-based framework that leverages computer vision techniques to classify sidewalk materials using widely available street-level imagery. We trained the framework on images from New York City and Boston, and the evaluation results show a 90.5% mIoU score. Furthermore, we evaluated the framework using images from six different cities, demonstrating that it can be applied to regions with distinct urban fabrics, even beyond the domain of the training data. CitySurfaces can provide researchers and city agencies with a low-cost, accurate, and scalable method to collect sidewalk-material data, which plays a critical role in addressing major sustainability issues, including climate change and surface-water management.
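The abstract does not specify the acquisition function, but a common choice for one round of an active-learning loop in segmentation is mean per-pixel entropy; a hedged sketch of such a selection step:

```python
import torch

def select_for_annotation(seg_logits, budget):
    """Uncertainty-based acquisition (an assumed, generic strategy):
    rank unlabeled images by mean per-pixel entropy and pick the most
    ambiguous ones for the next annotation round.
    seg_logits: (num_images, classes, H, W). Returns indices to label."""
    probs = torch.softmax(seg_logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)  # (N, H, W)
    scores = entropy.mean(dim=(1, 2))                        # per image
    return torch.topk(scores, k=budget).indices

picked = select_for_annotation(torch.randn(10, 8, 64, 64), budget=3)
```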
Computer vision applications in intelligent transportation systems (ITS) and autonomous driving (AD) have gravitated towards deep neural network architectures in recent years. While performance seems to be improving on benchmark datasets, many real-world challenges have yet to be adequately considered in research. This paper presents an extensive literature review on the applications of computer vision in ITS and AD, and discusses challenges related to data, models, and complex urban environments. The data challenges are associated with the collection and labeling of training data and its relevance to real-world conditions, the bias inherent in datasets, the high volume of data that must be processed, and privacy concerns. Deep learning (DL) models are commonly too complex for real-time processing on embedded hardware, lack explainability and generalizability, and are hard to test in real-world settings. Complex urban traffic environments exhibit irregular lighting and occlusions, and surveillance cameras may be mounted at a variety of angles, gather dirt, or shake in the wind, while traffic conditions are highly heterogeneous, with rule violations and complex interactions in crowded scenarios. Representative applications that suffer from these problems include traffic flow estimation, congestion detection, autonomous driving perception, vehicle interaction, and edge computing for practical deployment. Possible ways of dealing with these challenges are also explored, with an emphasis on practical deployment.
The concept of geo-localization refers to the process of determining the location of some "entity" on Earth, typically using Global Positioning System (GPS) coordinates. The entity of interest may be an image, a sequence of images, a video, a satellite image, or even objects visible within an image. As large-scale datasets of GPS-tagged media have rapidly become available thanks to smartphones and the internet, and as deep learning has risen to enhance the performance capabilities of machine-learning models, the fields of visual and object geo-localization have emerged due to their significant impact on a wide range of applications such as augmented reality, robotics, self-driving vehicles, road maintenance, and 3D reconstruction. This paper provides a comprehensive survey of geo-localization involving images, covering both determining the location from which an image was captured (image geo-localization) and geo-localizing objects within an image (object geo-localization). We provide an in-depth study, including summaries of popular algorithms, descriptions of proposed datasets, and analyses of performance results, to illustrate the current state of each field.
The task of assigning semantic classes and track identities to every pixel in a video is known as video panoptic segmentation. Our work is the first to target this task in a real-world setting requiring dense interpretation in both the spatial and temporal domains. As ground truth for this task is difficult to obtain, however, existing datasets are either synthetically constructed or only sparsely annotated within short video clips. To overcome this, we introduce a new benchmark comprising two datasets, KITTI-STEP and MOTChallenge-STEP. The datasets contain long video sequences, providing challenging examples and a testbed for studying long-term, pixel-precise segmentation and tracking under real-world conditions. We further propose a novel evaluation metric, Segmentation and Tracking Quality (STQ), which fairly balances the semantic and tracking aspects of this task and is better suited for evaluating sequences of arbitrary length. Finally, we provide several baselines to assess the status of existing methods on this new and challenging dataset. We have made our datasets, metric, benchmark server, and baselines publicly available, and hope this will inspire future research.
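For reference, STQ combines an association quality (AQ) term and a segmentation quality (SQ) term via a geometric mean, so neither aspect can dominate; a one-line sketch with both terms assumed precomputed:

```python
import math

def stq(association_quality: float, segmentation_quality: float) -> float:
    """Segmentation and Tracking Quality: the geometric mean balances
    the tracking (AQ) and semantic (SQ) aspects of the task."""
    return math.sqrt(association_quality * segmentation_quality)

print(stq(0.64, 0.81))  # ~0.72
```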
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and an in-depth analysis of the results, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
The robustness of semantic segmentation on edge cases of traffic scenes is a vital factor for the safety of intelligent transportation. However, most critical scenes of traffic accidents are extremely dynamic and previously unseen, which seriously harms the performance of semantic segmentation methods. In addition, the latency of conventional cameras during high-speed driving further reduces contextual information in the temporal dimension. Therefore, we propose to extract dynamic context from event-based data, with its higher temporal resolution, to enhance static RGB images, even for traffic-accident scenes involving motion blur, collisions, deformations, overturned vehicles, and so on. Moreover, to evaluate segmentation performance in traffic accidents, we provide a pixel-wise annotated accident dataset, namely DADA-seg, which contains a variety of critical scenarios from traffic accidents. Our experiments indicate that event-based data can provide complementary information to stabilize semantic segmentation under adverse conditions by preserving the fine-grained motion of fast-moving foreground (crashing objects) in accidents. Our approach achieves a +8.2% performance gain on the proposed accident dataset, exceeding more than 20 state-of-the-art semantic segmentation methods. The proposal has been demonstrated to be consistently effective for models learned on multiple source databases, including Cityscapes, KITTI-360, BDD, and ApolloScape.
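A two-stream fusion of RGB frames with an event voxel grid is one plausible reading of the described enhancement; the sketch below concatenates features from two tiny encoders before a segmentation head, with all layer sizes being placeholders rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class EventRGBFusion(nn.Module):
    """Two-stream sketch (assumed, not the paper's model): one encoder
    for RGB, one for an event voxel grid; features are concatenated
    before the head so the high-temporal-resolution event stream can
    stabilize motion cues."""
    def __init__(self, event_bins=5, feat=32, num_classes=19):
        super().__init__()
        self.rgb_enc = nn.Conv2d(3, feat, 3, padding=1)
        self.event_enc = nn.Conv2d(event_bins, feat, 3, padding=1)
        self.head = nn.Conv2d(2 * feat, num_classes, 1)

    def forward(self, rgb, events):
        fused = torch.cat([self.rgb_enc(rgb), self.event_enc(events)], dim=1)
        return self.head(torch.relu(fused))

out = EventRGBFusion()(torch.randn(1, 3, 64, 64), torch.randn(1, 5, 64, 64))
```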
In this work, we tackle two vital tasks in automated driving systems, i.e., driver intent prediction and risk object identification from egocentric images. Mainly, we investigate the question: what would be good road scene-level representations for these two tasks? We contend that a scene-level representation must capture higher-level semantic and geometric representations of traffic scenes around ego-vehicle while performing actions to their destinations. To this end, we introduce the representation of semantic regions, which are areas where ego-vehicles visit while taking an afforded action (e.g., left-turn at 4-way intersections). We propose to learn scene-level representations via a novel semantic region prediction task and an automatic semantic region labeling algorithm. Extensive evaluations are conducted on the HDD and nuScenes datasets, and the learned representations lead to state-of-the-art performance for driver intention prediction and risk object identification.
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
The Mapillary Vistas Dataset is a novel, large-scale street-level image dataset containing 25,000 high-resolution images annotated into 66 object categories with additional, instance-specific labels for 37 classes. Annotation is performed in a dense and fine-grained style by using polygons for delineating individual objects. Our dataset is 5× larger than the total amount of fine annotations for Cityscapes and contains images from all around the world, captured at various conditions regarding weather, season and daytime. Images come from different imaging devices (mobile phones, tablets, action cameras, professional capturing rigs) and differently experienced photographers. In such a way, our dataset has been designed and compiled to cover diversity, richness of detail and geographic extent. As default benchmark tasks, we define semantic image segmentation and instance-specific image segmentation, aiming to significantly further the development of state-of-the-art methods for visual road-scene understanding.
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial body of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Automatic surgical scene segmentation is fundamental for facilitating cognitive intelligence in the modern operating theater. Previous works rely on conventional aggregation modules (e.g., dilated convolution, convolutional LSTM), which only leverage local context. In this paper, we propose a novel framework, STswinCL, that explores complementary intra-video and inter-video relations to boost segmentation performance by progressively capturing global context. We first develop a hierarchical transformer to capture intra-video relations, incorporating rich spatial and temporal cues from neighboring pixels and previous frames. A joint spatio-temporal window-shifting scheme is proposed to efficiently aggregate these two cues into each pixel embedding. Then, we explore inter-video relations via pixel-to-pixel contrastive learning, which well structures the global embedding space. A multi-source contrastive training objective is developed to group the pixel embeddings across videos under ground-truth guidance, which is crucial for learning global properties of the whole data. We extensively validate our approach on two public surgical video benchmarks, including the EndoVis18 challenge and the CaDIS dataset. Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches. Code is available at https://github.com/yuemingjin/stswincl.
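The inter-video pixel-to-pixel contrastive objective can be illustrated with a standard InfoNCE loss over pixel embeddings; the sketch below handles a single anchor with one positive and K negatives, which deliberately simplifies the paper's multi-source objective:

```python
import torch
import torch.nn.functional as F

def pixel_info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE over pixel embeddings: pull an anchor pixel towards a
    same-class pixel (possibly from another video) and push it away
    from different-class pixels. anchor/positive: (D,), negatives: (K, D)."""
    anchor = F.normalize(anchor, dim=0)
    pos_sim = anchor @ F.normalize(positive, dim=0) / temperature
    neg_sim = F.normalize(negatives, dim=1) @ anchor / temperature
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    # the positive sits at index 0, so the target class is 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

loss = pixel_info_nce(torch.randn(128), torch.randn(128), torch.randn(16, 128))
```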
Dense semantic forecasting anticipates future events in video by inferring pixel-level semantics of unobserved future images. We present a novel approach that is applicable to various single-frame architectures and tasks. Our approach consists of two modules. The feature-to-motion (F2M) module forecasts a dense deformation field that warps past features into their future positions. The feature-to-feature (F2F) module regresses future features directly and is therefore able to account for emergent scenery. The compound F2MF model decouples the effects of motion from the effects of novelty in a task-agnostic manner. We aim to apply F2MF forecasting to the most subsampled and most abstract representation of a desired single-frame model. Our design leverages deformable convolutions and spatial correlation coefficients across neighboring time instants. We perform experiments on three dense prediction tasks: semantic segmentation, instance-level segmentation, and panoptic segmentation. The results reveal state-of-the-art forecasting accuracy across all three dense prediction tasks.
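A hedged sketch of how the F2M and F2F forecasts might be combined: warp the most recent feature map along a predicted deformation field with grid sampling, regress future features directly, and blend the two with a per-pixel weight; the normalized-flow convention and the blending scheme here are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn.functional as F

def f2mf_forecast(past_feats, flow, f2f_pred, blend_weight):
    """Blend two forecasting paths: a warp of the most recent features
    along a predicted deformation field (F2M) and a direct regression
    of future features (F2F), mixed by an assumed per-pixel weight.
    past_feats: (B, C, H, W); flow: (B, 2, H, W) in normalized offsets;
    f2f_pred: (B, C, H, W); blend_weight: (B, 1, H, W) in [0, 1]."""
    b, _, h, w = past_feats.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2)
    grid = base + flow.permute(0, 2, 3, 1)        # displaced sampling grid
    f2m_pred = F.grid_sample(past_feats, grid, align_corners=True)
    return blend_weight * f2m_pred + (1 - blend_weight) * f2f_pred

out = f2mf_forecast(torch.randn(1, 64, 16, 16), 0.1 * torch.randn(1, 2, 16, 16),
                    torch.randn(1, 64, 16, 16), torch.rand(1, 1, 16, 16))
```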