Video activity recognition by deep neural networks is impressive for many classes. However, it falls short of human performance, especially for activities that are challenging to discriminate. Humans distinguish these complex activities by recognizing critical spatio-temporal relations among explicitly identified objects and parts, for example, an object entering the aperture of a container. Deep neural networks can struggle to learn these critical relations effectively. We therefore propose a more human-like approach to recognition, which interprets the video as a sequence of temporal phases and extracts specific relations among objects and hands in those phases. A random forest classifier is learned from these extracted relations. We apply the method to the challenging Something-Something dataset and achieve stronger performance than a neural network baseline on challenging activities.
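The hand-object relations described above can be illustrated with a small sketch. The predicates, thresholds, and function names below are hypothetical, not the paper's actual feature set; they only show the kind of spatial-relation features a random forest could consume.

```python
# Hypothetical sketch: simple spatial relations between a hand box and an
# object box, each given as (x1, y1, x2, y2). Names and predicates are
# illustrative, not the paper's implementation.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def relation_features(hand, obj):
    """Binary relations between a hand box and an object box."""
    hcx = (hand[0] + hand[2]) / 2
    ocx = (obj[0] + obj[2]) / 2
    return {
        "touching": iou(hand, obj) > 0.0,
        "hand_above": hand[3] <= obj[1],  # hand's bottom edge above object's top
        "hand_left": hcx < ocx,           # hand centre left of object centre
    }

feats = relation_features(hand=(10, 10, 30, 30), obj=(25, 25, 60, 60))
print(feats)
```

Per-phase vectors of such predicates would then be concatenated into the feature vector handed to the classifier.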
translated by Google Translate
Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets (Something-Something, Jester, and Charades) which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos.
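TRN's core idea, forming ordered tuples of sparsely sampled frames at several relation scales and aggregating a learned relation function over each scale, can be sketched in a few lines. Here `g` is a toy stand-in for the relation MLP the network actually learns, and all names are ours:

```python
# Illustrative sketch of TRN-style multi-scale temporal relations: from N
# sampled frames, form ordered n-frame tuples for each scale n and sum a
# relation function g over them. g is a toy stand-in for the learned MLP.
from itertools import combinations

def g(tup):
    # toy relation: change between the last and first frame "feature"
    return tup[-1] - tup[0]

def temporal_relations(frames, scales=(2, 3)):
    """For each scale n, aggregate g over ordered n-tuples of frames."""
    out = {}
    for n in scales:
        # combinations of ascending indices preserve temporal order
        tuples = combinations(range(len(frames)), n)
        out[n] = sum(g([frames[i] for i in idx]) for idx in tuples)
    return out

features = [0.0, 1.0, 3.0, 6.0]  # pretend per-frame features
print(temporal_relations(features))
```

In the real module each scale's aggregate is a vector passed to a classifier, and the frames are CNN features rather than scalars.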
Wearable cameras allow acquiring images and videos from the user's perspective. These data can be processed to understand human behavior. Although human behavior analysis has been thoroughly studied in third-person vision, it is still under-explored in egocentric settings, and in particular in industrial scenarios. To encourage research in this field, we present MECCANO, a multimodal dataset of egocentric videos to study human behavior understanding in industrial-like settings. The multimodality is characterized by gaze signals, depth maps, and RGB videos acquired simultaneously with a custom headset. The dataset has been explicitly labeled for fundamental tasks in the context of human behavior understanding from a first-person view, such as recognizing and anticipating human-object interactions. With the MECCANO dataset, we explore five different tasks, including 1) action recognition, 2) active object detection and recognition, 3) egocentric human-object interaction detection, 4) action anticipation, and 5) next-active-object detection. We propose a benchmark aimed at studying human behavior in the considered industrial-like scenario, which shows that the investigated tasks and the considered scenario are challenging for state-of-the-art algorithms. To support research in this field, we publicly release the dataset at https://iplab.dmi.unict.it/meccano/.
This paper introduces a video dataset of spatiotemporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6% mAP, underscoring the need for developing new approaches for video understanding.
Yoga is a globally acclaimed and widely recommended practice for a healthy life. Maintaining correct posture while performing yoga is essential. In this work, we employ transfer learning from human pose estimation models to extract 136 keypoints spanning the entire body, which are used to train a random forest classifier for yoga pose classification. The results are evaluated on an in-house collected dataset of yoga videos of 51 subjects recorded from 4 different camera angles. We propose a three-step scheme for evaluating the generalizability of a yoga classifier by testing it on 1) unseen frames, 2) unseen subjects, and 3) unseen cameras. We argue that, for most applications, validation accuracy on unseen subjects and unseen cameras matters most. We empirically analyze, over three public datasets, the advantages of transfer learning and the possibility of target leakage. We further demonstrate that classification accuracy depends heavily on the cross-validation method employed and can often be misleading. To promote further research, we have made the keypoint dataset and code publicly available.
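The "unseen subjects" protocol the abstract emphasizes amounts to grouping all of a subject's frames into the same fold before splitting, so no subject appears in both train and test. A minimal sketch, with round-robin fold assignment as our simplification of whatever splitting the authors actually used:

```python
# Minimal sketch of subject-wise cross-validation folds: every frame of a
# given subject lands in exactly one fold, preventing subject leakage.
# Round-robin assignment and all names are illustrative.

def subject_folds(samples, n_folds=3):
    """samples: list of (subject_id, frame). Returns folds of sample indices."""
    subjects = sorted({s for s, _ in samples})
    folds = [[] for _ in range(n_folds)]
    for k, subj in enumerate(subjects):
        fold = k % n_folds  # round-robin subject-to-fold assignment
        folds[fold].extend(i for i, (s, _) in enumerate(samples) if s == subj)
    return folds

data = [("s1", 0), ("s1", 1), ("s2", 0), ("s3", 0), ("s3", 1), ("s4", 0)]
folds = subject_folds(data, n_folds=2)
# each subject's frames land in exactly one fold
print(folds)
```

A frame-level random split would instead scatter a subject's nearly identical frames across folds, which is one way the misleading accuracies the abstract warns about arise.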
Thanks to the availability of affordable wearable cameras and large annotated datasets, applications of egocentric vision (also known as first-person vision, FPV) have flourished over the past few years. The position of the wearable camera, typically mounted on the head, allows recording exactly what the camera wearer has in front of them, in particular the hands and manipulated objects. This intrinsic advantage enables studying the hands from multiple perspectives: localizing the hands and their parts within images; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on hands using egocentric vision, categorizing existing approaches into: localization (where are the hands or their parts?); interpretation (what are the hands doing?); and applications (e.g., systems that solve a specific problem using egocentric hand cues). A list of the most prominent datasets with hand-based annotations is also provided.
The lack of large-scale, annotated real-world datasets makes transfer learning a necessity for video activity understanding. Our goal is to develop an effective method of few-shot transfer learning for action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain to a different target domain using only a few examples. The visual cues we use include object-object interactions, hand grasps, and motion within regions that are a function of hand locations. We employ a meta-learning-based framework to extract the distinctive and domain-invariant components of the deployed visual cues. This enables transferring action classification models across public datasets captured with different scene and action configurations. We present comparative results of our transfer-learning approach against state-of-the-art action classification methods for both cross-class and cross-dataset transfer.
Drone-camera based human activity recognition (HAR) has received significant attention from the computer vision research community in the past few years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, understanding different viewpoints, and the environmental scenarios where the action is taking place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Fusion (SWTF) module to utilize sparsely sampled video frames for obtaining global weighted temporal fusion outcome. The proposed SWTF is divided into two components. First, a temporal segment network that sparsely samples a given set of frames. Second, weighted temporal fusion, that incorporates a fusion of feature maps derived from optical flow, with raw RGB images. This is followed by base-network, which comprises a convolutional neural network module along with fully connected layers that provide us with activity recognition. The SWTF network can be used as a plug-in module to the existing deep CNN architectures, for optimizing them to learn temporal information by eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model has received an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets thereby surpassing the previous state-of-the-art performances by a significant margin.
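The first SWTF component, sparse temporal-segment sampling, can be sketched directly: divide a clip's frame indices into equal segments and take one representative per segment. Picking the segment centre is our simplification (temporal segment networks typically sample randomly within each segment at training time), and the names are ours:

```python
# Hedged sketch of sparse temporal-segment sampling: split a clip into
# num_segments equal spans and return one frame index per span (the centre
# here; the actual network samples within each segment).

def sparse_sample(num_frames, num_segments):
    """Return one representative frame index per temporal segment."""
    seg_len = num_frames / num_segments
    return [int(seg_len * k + seg_len / 2) for k in range(num_segments)]

# a 300-frame clip reduced to 8 frames spread over its whole duration
print(sparse_sample(300, 8))
```

The sampled frames, together with their optical-flow-derived maps, would then feed the weighted fusion stage described above.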
Human-object interaction (HOI) recognition in videos is important for analyzing human activity. Most existing work focusing on visual features usually suffers from occlusion in the real world. This problem is further compounded when multiple people and objects are involved in HOIs. Considering that geometric features such as human pose and object position provide meaningful information for understanding HOIs, we argue for combining the benefits of both visual and geometric features in HOI recognition, and propose a novel Two-level Geometric feature-informed Graph Convolutional Network (2G-GCN). The geometric-level graph models the interdependency between the geometric features of humans and objects, while the fusion-level graph further fuses them with the visual features of humans and objects. To demonstrate the novelty and effectiveness of our method in challenging scenarios, we propose a new multi-person HOI dataset (MPHOI-72). Extensive experiments on the MPHOI-72 (multi-person HOI), CAD-120 (single-human HOI), and Bimanual Actions (two-hand HOI) datasets demonstrate our superior performance compared to state-of-the-art methods.
People often watch videos online to learn how to cook new recipes, assemble furniture, or repair a computer. We wish to enable robots with the same capability. This is challenging; there is a large variation in manipulation actions, and some videos even involve multiple people who collaborate by sharing and exchanging objects and tools. Furthermore, the learned representations need to be general enough to be transferable to robotic systems. On the other hand, previous work has shown that the space of human manipulation actions has a linguistic, hierarchical structure that relates actions to the manipulated objects and tools. Building on this linguistic theory of action, we propose a system for understanding and executing demonstrated action sequences from full-length, real-world cooking videos on the web. The system takes as input a new, previously unseen cooking video annotated with object labels and bounding boxes, and outputs a collaborative manipulation action plan for one or more robotic arms. We demonstrate the performance of the system on a standardized dataset of 100 YouTube cooking videos, as well as on six full-length YouTube videos that include collaborative actions between two participants. We compare our system with a baseline system consisting of a state-of-the-art action detection baseline and show that our system achieves higher action detection accuracy. We also present an open-source platform for executing the learned plans in a simulation environment as well as on an actual robotic arm.
The key to human-object interaction (HOI) recognition is to infer the relationships between humans and objects. Recently, image-based HOI detection has made significant progress. However, there is still room to improve the performance of video HOI detection. Existing one-stage methods use well-designed end-to-end networks to detect a video segment and directly predict the interaction, which makes model learning and further optimization of the network more complicated. This paper introduces the Spatial Parsing and Dynamic Temporal Pooling (SPDTP) network, which takes the entire video as a spatio-temporal graph with human and object nodes as input. Unlike existing methods, our proposed network predicts the difference between interactive and non-interactive pairs through explicit spatial parsing, and then performs interaction recognition. Moreover, we propose a learnable and differentiable Dynamic Temporal Module (DTM) to emphasize the keyframes of a video and suppress redundant frames. Experimental results show that SPDTP pays more attention to active human-object pairs and valid keyframes. Overall, we achieve state-of-the-art performance on the CAD-120 and Something-Else datasets.
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them, in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery.
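The mAP figures quoted above are means over per-class average precision. A generic AP computation over scored predictions, not the challenge's actual evaluation code, looks like this:

```python
# Illustrative sketch of per-class average precision (AP): rank predictions
# by score and average the precision at each correct detection. mAP is the
# mean of AP over classes. Generic AP only, not CholecTriplet2021's exact
# evaluation protocol.

def average_precision(scored_labels):
    """scored_labels: list of (score, is_correct). Returns AP in [0, 1]."""
    ranked = sorted(scored_labels, key=lambda x: -x[0])
    total = sum(1 for _, ok in ranked if ok)
    hits, ap = 0, 0.0
    for rank, (_, ok) in enumerate(ranked, start=1):
        if ok:
            hits += 1
            ap += hits / rank  # precision at this correct detection
    return ap / total if total else 0.0

preds = [(0.9, True), (0.8, False), (0.7, True), (0.1, False)]
print(round(average_precision(preds), 4))
```

Averaging such per-class APs over the triplet classes yields a single mAP figure comparable to the 4.2%-38.1% range reported by the competing teams.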
This chapter aims to aid the development of cyber-physical systems (CPS) that can automatically understand events and activities in various video surveillance applications. Such events are mostly captured by drones, CCTV, or novice and unskilled individuals on low-end devices. Owing to many quality factors, these videos are highly unconstrained, making them extremely challenging. We give a broad account of the various approaches we have proposed over the years to address the problem, ranging from structure-from-motion (SFM) based methods to recent solution frameworks involving deep neural networks. We show that long-term motion patterns alone play a pivotal role in the task of recognizing events. Each video is therefore represented succinctly by a fixed number of keyframes selected using a graph-based approach. Temporal features are exploited using only a hybrid convolutional neural network (CNN) + recurrent neural network (RNN) architecture. The results we obtain are encouraging, as they outperform standard temporal CNNs and are comparable to methods that use spatial information along with motion cues. Exploring multi-stream models further, we devise a multi-tier fusion strategy for the spatial and temporal wings of a network. An integrated representation of the individual prediction vectors at the video and frame levels is obtained using a biased hybridization technique. The fusion strategy yields higher accuracy than each of its stages, thereby achieving a strong consensus in classification. Results are recorded on four benchmark datasets widely used in the action recognition domain: CCV, HMDB, UCF-101, and KCV. It can be inferred that focusing on better classification of video sequences will certainly lead to robust actuation of systems designed for event monitoring and object-cum-activity tracking.
We propose a new four-pronged approach to establishing firefighter situational awareness, the first of its kind in the literature. We construct a series of deep learning frameworks, built on top of one another, to enhance the safety, efficiency, and successful completion of rescue missions conducted by firefighters in emergency first-response settings. First, we use a deep convolutional neural network (CNN) system to classify and identify objects of interest from thermal imagery in real time. Next, we extend this CNN framework to object detection, tracking, and segmentation with a Mask R-CNN framework, and to scene description with a multimodal natural language processing (NLP) framework. Third, we build a deep Q-learning agent, immune to stress-induced disorientation and anxiety, capable of making clear navigation decisions based on observed and stored facts in live fire environments. Finally, we use a low-computation unsupervised learning technique called tensor decomposition to perform meaningful feature extraction for anomaly detection in real time. With these ad hoc deep learning structures, we build the backbone of an artificial intelligence system for firefighters' situational awareness. To bring the designed system into use by firefighters, we designed a physical structure in which the processed results serve as inputs for creating an augmented reality capable of advising firefighters of their location and of key features in their surroundings that are critical to the rescue operation at hand, along with a path-planning feature that acts as a virtual guide to help disoriented first responders retreat to safety. When combined, these four approaches present a novel method of information understanding, transfer, and synthesis that could dramatically improve firefighter response and efficacy and reduce loss of life.
Human activity recognition (HAR) using drone-mounted cameras has attracted considerable interest from the computer vision research community in recent years. A robust and efficient HAR system has a pivotal role in fields like video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. What makes it challenging are the complex poses, understanding different viewpoints, and the environmental scenarios where the action is taking place. To address such complexities, in this paper, we propose a novel Sparse Weighted Temporal Attention (SWTA) module to utilize sparsely sampled video frames for obtaining global weighted temporal attention. The proposed SWTA is comprised of two parts. First, temporal segment network that sparsely samples a given set of frames. Second, weighted temporal attention, which incorporates a fusion of attention maps derived from optical flow, with raw RGB images. This is followed by a basenet network, which comprises a convolutional neural network (CNN) module along with fully connected layers that provide us with activity recognition. The SWTA network can be used as a plug-in module to the existing deep CNN architectures, for optimizing them to learn temporal information by eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action. The proposed model has received an accuracy of 72.76%, 92.56%, and 78.86% on the respective datasets thereby surpassing the previous state-of-the-art performances by a margin of 25.26%, 18.56%, and 2.94%, respectively.
We develop a novel framework for single-scene video anomaly localization that allows for human-understandable reasons for the decisions the system makes. We first learn general representations of objects and their motions (using deep networks) and then use these representations to build a high-level, location-dependent model of any particular scene. This model can be used to detect anomalies in new videos of the same scene. Importantly, our approach is explainable - our high-level appearance and motion features can provide human-understandable reasons for why any part of a video is classified as normal or anomalous. We conduct experiments on standard video anomaly detection datasets (Street Scene, CUHK Avenue, ShanghaiTech and UCSD Ped1, Ped2) and show significant improvements over the previous state-of-the-art.
Current methods for spatio-temporal action tube detection often extend a bounding box proposal at a given keyframe into a 3D temporal cuboid and pool features from nearby frames. However, such pooling fails to accumulate meaningful spatio-temporal features if the position or shape of the actor exhibits large 2D motion and variability across frames, due to large camera motion, large actor shape deformation, fast actor motion, and so on. In this work, we study the performance of cuboid-aware feature aggregation in action detection under large action motion. Further, we propose to enhance the actor feature representation under large motion by tracking actors and performing temporal feature aggregation along the respective tracks. We define actor motion via the intersection-over-union (IoU) between action tube/track boxes at various fixed time scales. An action with large motion yields a lower IoU over time, while a slower action maintains a higher IoU. We find that track-aware feature aggregation consistently achieves a large improvement in action detection over the cuboid-aware baseline, especially for actions under large motion. As a result, we also report state-of-the-art results on the large-scale MultiSports dataset.
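The IoU-based motion measure described above can be sketched directly: compare a track's boxes separated by a fixed time gap and average the overlap. The function names and toy tracks are illustrative:

```python
# Sketch of the actor-motion measure: mean IoU between an actor's track
# boxes `delta` frames apart. Fast-moving actors yield low IoU across the
# gap; near-static actors stay high. Names and tracks are illustrative.

def box_iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track_motion(track, delta):
    """Mean IoU between boxes `delta` frames apart; lower = larger motion."""
    ious = [box_iou(a, b) for a, b in zip(track, track[delta:])]
    return sum(ious) / len(ious)

static = [(0, 0, 10, 10)] * 5                          # actor stays put
moving = [(t * 8, 0, t * 8 + 10, 10) for t in range(5)]  # actor slides right
print(track_motion(static, 2), track_motion(moving, 2))
```

Binning actions by this score is what lets the paper report detection accuracy separately for small- and large-motion regimes.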
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem-solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
Marine ecosystems and their fish habitats are becoming increasingly important due to their significant role in providing valuable food sources and conservation benefits. Because of their remoteness and difficulty of access, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data, which cannot be analyzed efficiently by current manual processing methods involving a human observer. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring remains largely unexplored. In this paper, we provide a tutorial covering the key concepts of DL, which helps the reader gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who would like to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving to facilitate their research efforts. At the same time, it is suitable for computer scientists who would like to survey state-of-the-art DL-based methods for fish habitat monitoring.