First-person vision is gaining interest as it offers a unique viewpoint on people's interaction with objects, their attention, and even intention. However, progress in this challenging domain has been relatively slow due to the lack of sufficiently large datasets. In this paper, we introduce EPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32 participants in their native kitchen environments. Our videos depict non-scripted daily activities: we simply asked each participant to start recording every time they entered their kitchen. Recording took place in 4 cities (in North America and Europe) by participants belonging to 10 different nationalities, resulting in highly diverse cooking styles. Our dataset features 55 hours of video consisting of 11.5M frames, which we densely labelled for a total of 39.6K action segments and 454.3K object bounding boxes. Our annotation is unique in that we had the participants narrate their own videos (after recording), thus reflecting true intention, and we crowd-sourced ground-truths based on these. We describe our object, action and anticipation challenges, and evaluate several baselines over two test splits, seen and unseen kitchens.
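To make the annotation format concrete, below is a minimal sketch of loading the action-segment labels with pandas. The file name and column names are assumptions based on typical CSV releases of the dataset, not a guaranteed schema.

```python
import pandas as pd

# Hypothetical file name and columns; the actual EPIC-KITCHENS release may differ.
segments = pd.read_csv("EPIC_train_action_labels.csv")

# Each row is assumed to describe one narrated action segment.
for _, row in segments.head(3).iterrows():
    print(row["video_id"],           # e.g. "P01_01"
          row["start_frame"],        # first frame of the segment
          row["stop_frame"],         # last frame of the segment
          row["verb"], row["noun"])  # crowd-sourced verb/noun labels
```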
Wearable cameras can acquire images and videos from the user's point of view. These data can be processed to understand human behaviour. Although human behaviour analysis has been thoroughly studied in third-person vision, it is still under-investigated in egocentric settings and, in particular, in industrial scenarios. To encourage research in this field, we present MECCANO, a multimodal dataset of egocentric videos for studying human behaviour understanding in industrial-like settings. The multimodality is characterised by gaze signals, depth maps and RGB videos acquired simultaneously with a custom headset. The dataset has been explicitly labelled for fundamental tasks in the context of human behaviour understanding from a first-person view, such as recognising and anticipating human-object interactions. With the MECCANO dataset, we explore five different tasks: 1) action recognition, 2) active object detection and recognition, 3) egocentric human-object interaction detection, 4) action anticipation and 5) next-active-object detection. We propose a benchmark aimed at studying human behaviour in the considered industrial-like scenario, which shows that both the investigated tasks and the considered scenario are challenging for state-of-the-art algorithms. To support research in this field, we publicly release the dataset at https://iplab.dmi.unict.it/meccano/.
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which brings a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked, where we aim to obtain accurate pixel-level annotations of the peel, the onion pieces, the chopping board, the knife, the pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Alongside the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/visor
This paper introduces a video dataset of spatiotemporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6% mAP, underscoring the need for developing new approaches for video understanding.
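The mAP figure above is a frame-level detection metric. As a rough illustration of its two ingredients, box overlap and per-class average precision, here is a small sketch; it is an illustrative re-implementation, not the official AVA evaluation code.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def average_precision(scores, is_true_positive, num_gt):
    """AP for one action class from score-ranked detections."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # Sum precision at each true-positive rank, normalized by ground-truth count.
    return float(np.sum(precision * tp) / max(num_gt, 1))
```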
Designing activity detection systems that can be successfully deployed in daily-living environments requires datasets that pose the challenges typical of real-world scenarios. In this paper, we introduce a new untrimmed daily-living dataset that features several such challenges: Toyota Smarthome Untrimmed (TSU). TSU contains a wide variety of activities performed in a spontaneous manner. The dataset provides dense annotations, including elementary activities, composite activities and activities involving interactions with objects. We analyse the real-world challenges featured by the dataset, highlighting the open issues for detection algorithms, and show that current state-of-the-art methods fail to achieve satisfactory performance on TSU. We therefore propose a new baseline method to tackle the novel challenges posed by the dataset. This method leverages one modality (optical flow) to generate attention weights that guide another modality (RGB) towards better detection of activity boundaries. This is particularly beneficial for detecting activities characterised by high temporal variance. We show that our proposed method outperforms state-of-the-art methods on TSU and on another popular challenging dataset, Charades.
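As a rough illustration of the idea of one modality producing attention weights that gate another, here is a minimal PyTorch-style sketch; the layer sizes and module names are invented for illustration and do not correspond to the authors' architecture.

```python
import torch
import torch.nn as nn

class FlowGuidedAttention(nn.Module):
    """Toy sketch: optical-flow features produce per-timestep attention
    weights that re-weight RGB features before activity classification."""
    def __init__(self, dim=1024, num_classes=51):  # num_classes is a placeholder
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, rgb_feats, flow_feats):
        # rgb_feats, flow_feats: (batch, time, dim)
        weights = self.attn(flow_feats)   # (batch, time, 1) attention from flow
        gated = rgb_feats * weights       # flow guides the RGB stream
        return self.classifier(gated)     # per-timestep class scores
```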
Every day, humans perform many closely related activities that involve subtle discriminative motions, such as putting on a shirt vs. putting on a jacket, or shaking hands vs. giving a high five. Activity recognition by ethical visual AI could provide insights into our patterns of daily life, but existing activity recognition datasets do not capture the massive diversity of these human activities around the world. To address this limitation, we introduce Collector, a free mobile app for recording video while simultaneously annotating objects and activities of consented subjects. This new data collection platform was used to curate the Consented Activities of People (CAP) dataset, the first large-scale, fine-grained activity dataset of people worldwide. The CAP dataset contains 1.45M video clips of 512 fine-grained daily-life activity labels, collected from 780 subjects in 33 countries. We provide activity classification and activity detection benchmarks for this dataset and analyse baseline results to gain insight into how people around the world perform common activities. The dataset, benchmarks, evaluation tools, public leaderboards and mobile app are available at visym.github.io/cap.
The understanding of human-object interactions is fundamental in first-person vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful cues for effectively modelling such interactions. In recent years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenes. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific research should be carried out. This paper aims to provide answers to such questions. We present the first systematic study of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis focuses on different aspects of the FPV setting, introduces new performance measures, and relates to FPV-specific tasks. The study is enabled by the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing this behaviour and point out possible research directions. Despite the difficulties, we show that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
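Single-object tracking benchmarks of this kind are commonly scored with a success curve, the fraction of frames whose predicted box overlaps the ground truth above a sweep of thresholds. The sketch below is a generic illustration of that computation, not the TREK-150 evaluation toolkit.

```python
import numpy as np

def iou(a, b):
    """Overlap of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Fraction of frames with IoU above each overlap threshold."""
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return np.array([(ious > t).mean() for t in thresholds])
```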
The lack of large-scale, annotated real-world datasets makes transfer learning a necessity for video activity understanding. Our goal is to develop an effective method for few-shot transfer learning for action classification. We leverage independently trained local visual cues to learn representations that can be transferred from a source domain to a different target domain using only a handful of examples. The visual cues we employ include object-object interactions, hand grasps, and motion within regions that are a function of hand locations. We adopt a meta-learning based framework to extract the distinctive and domain-invariant components of the deployed visual cues. This enables transferring action classification models across public datasets captured with different scene and action configurations. We present comparative results of our transfer learning approach and report results that outperform state-of-the-art action classification methods on both inter-class and inter-dataset transfer.
Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 seconds, showing activities of 267 people from three continents, and over 15% of the videos have more than one person. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for the computer vision community.
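For a sense of how temporally localized action intervals of this kind can be consumed, here is a small parser sketch. The "c092 11.90 21.20;c147 0.00 12.60" field layout is an assumption about the release format, used purely for illustration.

```python
def parse_action_intervals(actions_field):
    """Parse an assumed Charades-style 'actions' string such as
    'c092 11.90 21.20;c147 0.00 12.60' into (class_id, start_s, end_s) tuples."""
    intervals = []
    if not actions_field:
        return intervals
    for chunk in actions_field.split(";"):
        class_id, start, end = chunk.split()
        intervals.append((class_id, float(start), float(end)))
    return intervals

print(parse_action_intervals("c092 11.90 21.20;c147 0.00 12.60"))
# -> [('c092', 11.9, 21.2), ('c147', 0.0, 12.6)]
```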
Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image.
Recording the dynamics of unscripted human interactions in the wild is challenging due to the delicate trade-offs between several factors: participant privacy, ecological validity, data fidelity, and logistical overheads. To address these, following a "datasets for the community, by the community" ethos, we propose the Conference Living Lab (ConfLab): a new concept for multimodal, multisensor data collection of in-the-wild free-standing social conversations. For the first instantiation of ConfLab described here, we organised a real-life professional networking event at a major international conference. Involving 48 conference attendees, the dataset captures a diverse mix of status, acquaintanceship, and networking motivations. Our capture setup improves upon the data fidelity of prior in-the-wild datasets while retaining privacy sensitivity: 8 videos (1920x1080, 60 fps) from a non-invasive overhead view, plus custom wearable sensors with onboard recording of body motion (full 9-axis IMU), privacy-preserving low-frequency audio (1250 Hz), and Bluetooth-based proximity. In addition, we developed custom solutions for distributed hardware synchronisation at acquisition time, and for time-efficient continuous annotation of body keypoints and actions at high sampling rates. Our benchmarks showcase some of the open research tasks related to in-the-wild privacy-preserving social data analysis: keypoint detection from overhead camera views, skeleton-based no-audio speaker detection, and F-formation detection.
Can we teach a robot to recognize and make predictions for activities that it has never seen before? We tackle this problem by learning models for video from text. This paper presents a hierarchical model that generalizes instructional knowledge from large-scale text corpora and transfers the knowledge to video. Given a portion of an instructional video, our model recognizes and predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the capabilities of our model, we introduce the \emph{Tasty Videos Dataset V2}, a collection of 4022 recipes for zero-shot learning, recognition and anticipation. Extensive experiments with various evaluation metrics demonstrate the potential of our method for generalization, given limited video data for training models.
This paper presents a new large scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large scale video based training data.
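Multi-object tracking benchmarks of this kind are typically summarized with CLEAR-MOT metrics; a minimal sketch of the MOTA formula is below, as an illustration rather than the dataset's official evaluation code.

```python
def mota(num_false_negatives, num_false_positives, num_id_switches, num_gt_boxes):
    """Multiple Object Tracking Accuracy:
    MOTA = 1 - (FN + FP + ID switches) / total ground-truth boxes."""
    errors = num_false_negatives + num_false_positives + num_id_switches
    return 1.0 - errors / max(num_gt_boxes, 1)

# Toy numbers for illustration only.
print(mota(num_false_negatives=120, num_false_positives=80,
           num_id_switches=15, num_gt_boxes=2000))  # -> 0.8925
```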
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation and highlight key breakthroughs in categorical object recognition.
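The ILSVRC classification track is conventionally scored by top-5 error; the snippet below is a small illustrative computation of that metric, not the official evaluation server code.

```python
import numpy as np

def top5_error(logits, labels):
    """Fraction of examples whose true label is absent from the
    five highest-scoring predictions (ILSVRC-style top-5 error)."""
    top5 = np.argsort(-logits, axis=1)[:, :5]
    hits = np.any(top5 == labels[:, None], axis=1)
    return 1.0 - hits.mean()

logits = np.random.randn(8, 1000)          # 8 images, 1000 classes (toy input)
labels = np.random.randint(0, 1000, size=8)
print(top5_error(logits, labels))
```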
In this work, we introduce the BBC-Oxford British Sign Language (BOBSL) dataset, a large-scale video collection of British Sign Language (BSL). BOBSL is an extended and publicly released dataset based on the BSL-1K dataset introduced in previous work. We describe the motivation for the dataset, together with statistics and available annotations. We conduct experiments to provide baselines for the tasks of sign recognition, sign language alignment, and sign language translation. Finally, we describe several strengths and limitations of the data from the perspectives of machine learning and linguistics, note sources of bias present in the dataset, and discuss potential applications of BOBSL in the context of sign language technology. The dataset is available at https://www.robots.ox.ac.uk/~vgg/data/bobsl/.
We introduce the task of spatially localizing narrated interactions in videos. Key to our approach is the ability to learn to spatially localize interactions with self-supervision on a large corpus of videos with accompanying transcribed narrations. To achieve this goal, we propose a multilayer cross-modal attention network that enables effective optimization of a contrastive loss during training. We introduce a divided strategy that alternates between computing inter- and intra-modal attention across the visual and natural language modalities, which allows effective training via directly contrasting the representations of the two modalities. We demonstrate the effectiveness of our approach by self-training on the HowTo100M instructional video dataset and evaluating on a newly collected dataset of localized described interactions in the YouCook2 dataset. We show that our approach outperforms alternative baselines, including shallow co-attention and full cross-modal attention. We also apply our approach to grounding phrases in images with weak supervision on Flickr30K, and show that stacking multiple attention layers is effective and, when combined with a word-to-region loss, achieves state-of-the-art results on recall-at-one and pointing-hand accuracy.
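The contrastive objective referred to above can be illustrated by a standard InfoNCE-style loss that pulls matched video and narration embeddings together; the sketch below is a generic formulation, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (video, narration) pairs.
    video_emb, text_emb: (batch, dim), assumed already projected."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                      # (batch, batch) similarities
    targets = torch.arange(v.size(0), device=v.device)  # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```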
For AI to be safely deployed in real-world scenarios such as hospitals, schools, and workplaces, it must be able to robustly reason about the physical world. Fundamental to this reasoning is physical common sense: understanding the physical properties and affordances of available objects, how they can be manipulated, and how they interact with other objects. Physical commonsense reasoning is fundamentally a multi-sensory task, since physical properties are manifested through multiple modalities, two of which are vision and acoustics. Our paper takes a step towards real-world physical commonsense reasoning by contributing PACS: the first audiovisual benchmark annotated for physical commonsense attributes. PACS contains 13,400 question-answer pairs, involving 1,377 unique physical commonsense questions and 1,526 videos. Our dataset provides new opportunities to advance research on physical reasoning by bringing audio in as a core component of this multimodal problem. Using PACS, we evaluate multiple state-of-the-art models on this new and challenging task. While some models show promising results (70% accuracy), they all fall short of human performance (95% accuracy). We conclude the paper by demonstrating the importance of multimodal reasoning and outlining possible avenues for future research.
In this paper, we consider the problem of classifying fine-grained, multi-step activities (e.g., cooking different recipes, making different home improvements, creating various forms of arts and crafts) from long videos spanning up to several minutes. Accurately categorizing these activities requires not only recognizing the individual steps that compose the task but also capturing their temporal dependencies. This problem is dramatically different from traditional action classification, where models are typically optimized on videos that span only a few seconds and are manually trimmed to contain simple atomic actions. While step annotations could enable training models to recognize the individual steps of procedural activities, existing large-scale datasets in this area do not include such segment labels due to the prohibitive cost of manually annotating temporal boundaries in long videos. To address this issue, we propose to automatically identify steps in instructional videos by leveraging the distant supervision of a textual knowledge base (wikiHow), which includes detailed descriptions of the steps needed to perform a wide variety of complex activities. Our method uses a language model to match automatically transcribed speech from the video to step descriptions in the knowledge base. We demonstrate that video models trained to recognize these automatically labelled steps (without manual supervision) yield representations that achieve superior generalization performance on four downstream tasks: recognition of procedural activities, step classification, step forecasting, and egocentric video classification.
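The speech-to-step matching described above can be approximated with off-the-shelf sentence embeddings and cosine similarity. The snippet below is a simplified stand-in under that assumption; the model name and example text are illustrative and not the authors' actual language model or data.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model; the paper's actual language model may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

asr_sentence = "now whisk the eggs until they are fluffy"
wikihow_steps = ["Crack the eggs into a bowl.",
                 "Whisk the eggs until light and fluffy.",
                 "Pour the mixture into the pan."]

speech_emb = model.encode(asr_sentence, convert_to_tensor=True)
step_embs = model.encode(wikihow_steps, convert_to_tensor=True)
scores = util.cos_sim(speech_emb, step_embs)      # similarity to each step
best_step = wikihow_steps[int(scores.argmax())]
print(best_step)  # pseudo-label assigned to this video segment
```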
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
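A minimal example of reading per-instance annotations of this kind with the widely used pycocotools API is shown below; the annotation file path is a placeholder.

```python
from pycocotools.coco import COCO

# Placeholder path to a COCO-format instance annotation file.
coco = COCO("annotations/instances_val2017.json")

cat_ids = coco.getCatIds(catNms=["person"])        # category id(s) for "person"
img_ids = coco.getImgIds(catIds=cat_ids)           # images containing a person
ann_ids = coco.getAnnIds(imgIds=img_ids[:1], catIds=cat_ids)
anns = coco.loadAnns(ann_ids)                      # boxes + segmentation polygons
print(len(anns), "person instances in the first image")
```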
Multimedia anomaly datasets play a crucial role in automated surveillance. They have a wide range of applications, from outlier object/situation detection to the detection of life-threatening events. This field has been receiving a huge amount of research interest for more than a decade and a half, and consequently more and more datasets dedicated to anomalous action and object detection have been created. Tapping these public anomaly datasets enables researchers to build and compare various anomaly detection frameworks on the same input data. This article presents a comprehensive survey of video, audio, and audio-visual datasets for anomaly detection applications. The survey aims to address the lack of a comprehensive comparison and analysis of public multimedia anomaly detection datasets, and it can help researchers select the best available dataset for benchmarking their frameworks. In addition, we discuss gaps in the existing datasets and offer insights into future directions for developing multimodal anomaly detection datasets.