Effective analysis of unusual domain-specific video collections is an important practical problem where state-of-the-art general-purpose models still face limitations. Hence, it is desirable to design benchmark datasets that challenge novel powerful models with domain-specific constraints. It is important to remember that domain-specific data may be noisier (e.g., endoscopic or underwater videos) and often require more experienced users for effective search. In this paper, we focus on single-shot videos taken with moving cameras in underwater environments, which constitute a non-trivial challenge for research purposes. The first shard of a new Marine Video Kit dataset is presented to serve for video retrieval and other computer vision challenges. In addition to basic meta-data statistics, we present several insights and reference graphs based on low-level features as well as semantic annotations of selected keyframes. The analysis also contains experiments showing the limitations of respected general-purpose models for retrieval.
Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time-consuming to create and therefore difficult to obtain on a large scale. In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic Youtube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models are publicly available [1].
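A common objective for training such joint text-video embeddings is a max-margin ranking loss, which pushes the similarity of a matching clip-caption pair above that of mismatched pairs. The toy scalar version below is only a hedged sketch of that general idea, not the HowTo100M training code; the function name and signature are assumptions.

```python
def max_margin_loss(sim_pos, sims_neg, margin=0.2):
    """Hinge-style ranking loss over scalar similarities.

    sim_pos:  similarity of the matching clip-caption pair.
    sims_neg: similarities of mismatched (negative) pairs.
    Each negative contributes a penalty when it is not at least
    `margin` below the positive similarity.
    """
    return sum(max(0.0, margin + s_neg - sim_pos) for s_neg in sims_neg)
```

For instance, with a positive similarity of 0.9 and negatives of 0.5 and 0.8, only the second negative violates the 0.2 margin, contributing a penalty of 0.1.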
The recent and increasing interest in video-language research has driven the development of large-scale datasets that enable data-intensive machine learning techniques. In comparison, limited effort has been made to assess the fitness of these datasets for the video-language grounding task. Recent works have begun to discover significant limitations in these datasets, suggesting that state-of-the-art techniques commonly overfit to hidden dataset biases. In this work, we present MAD (Movie Audio Descriptions), a novel benchmark that departs from the paradigm of augmenting existing video datasets with text annotations and focuses on crawling and aligning available audio descriptions of mainstream movies. MAD contains over 384,000 natural language sentences grounded in over 1,200 hours of video and exhibits a significant reduction in the currently diagnosed biases of video-language grounding datasets. MAD's collection strategy enables a novel and more challenging version of video-language grounding, where short temporal moments (typically seconds long) must be accurately grounded in diverse long-form videos that can last up to three hours.
The photograph and our understanding of photography is ever changing and has transitioned from a world of unprocessed rolls of C-41 sitting in a fridge 50 years ago to sharing photos on the 1.5" screen of a point-and-shoot camera 10 years back. And today the photograph is again something different. The way we take photos is fundamentally different. We can view, share, and interact with photos on the device they were taken on. We can edit, tag, or "filter" photos directly on the camera at the same time the photo is being taken. Photos can be automatically pushed to various online sharing services, and the distinction between photos and videos has lessened. Beyond this, and more importantly, there are now lots of them. To Facebook alone more than 250 billion photos have been uploaded and on average it receives over 350 million new photos every day [6], while YouTube reports that 300 hours of video are uploaded every minute [22]. A back-of-the-envelope estimation reports 10% of all photos in the world were taken in the last 12 months, and that was calculated already more than three years ago [8]. Today, a large number of the digital media objects that are shared have been uploaded to services like Flickr or Instagram, which along with their metadata and their social ecosystem form a vibrant environment for finding solutions to many research questions at scale. Photos and videos provide a wealth of information about the universe, covering entertainment, travel, personal records, and various other aspects of life in general as it was when they were taken. Considered collectively, they represent knowledge that goes beyond what any single photo conveys.
Large vision and language models pretrained on web-scale data provide representations that are invaluable for numerous vision-and-language (V&L) problems. However, it remains unclear how they can be used to reason about user-specific visual concepts in unstructured language. This problem arises in multiple domains, from personalized image retrieval to personalized interaction with smart devices. We introduce a new learning setup called Personalized Vision & Language (PerVL), with two new benchmark datasets for retrieving and segmenting user-specific "personalized" concepts "in the wild". In PerVL, one should learn personalized concepts (1) independently of the downstream task, (2) in a way that allows a pretrained model to reason about them with free language, and (3) without requiring personalized negative examples. We propose an architecture for solving PerVL that extends the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts. The model can then reason about them simply by using them in a sentence. We demonstrate that our approach learns personalized visual concepts from a few examples and can effectively apply them in image retrieval and semantic segmentation using rich textual queries.
Visual object analysis researchers are increasingly experimenting with video, because it is expected that motion cues should help with detection, recognition, and other analysis tasks. This paper presents the Cambridge-driving Labeled Video Database (CamVid) as the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes. The database addresses the need for experimental data to quantitatively evaluate emerging algorithms. While most videos are filmed with fixed-position CCTV-style cameras, our data was captured from the perspective of a driving automobile. The driving scenario increases the number and heterogeneity of the observed object classes. Over 10 min of high quality 30 Hz footage is being provided, with corresponding semantically labeled images at 1 Hz and in part, 15 Hz. The CamVid Database offers four contributions that are relevant to object analysis researchers. First, the per-pixel semantic segmentation of over 700 images was specified manually, and was then inspected and confirmed by a second person for accuracy. Second, the high-quality and large resolution color video images in the database represent valuable extended duration digitized footage to those interested in driving scenarios or ego-motion. Third, we filmed calibration sequences for the camera color response and intrinsics, and computed a 3D camera pose for each frame in the sequences. Finally, in support of expanding this or other databases, we present custom-made labeling software for assisting users who wish to paint precise class-labels for other images and videos. We evaluate the relevance of the database by measuring the performance of an algorithm from each of three distinct domains: multi-class object recognition, pedestrian detection, and label propagation.
Multimedia anomaly datasets play a crucial role in automated surveillance. They have a wide range of applications, extending from outlier object/situation detection to the detection of life-threatening events. This field has been receiving a great deal of research interest for more than a decade and a half, and consequently, more and more datasets dedicated to anomalous action and object detection have been created. The availability of these public anomaly datasets enables researchers to generate and compare various anomaly detection frameworks on the same input data. This paper presents a comprehensive survey of various video and audio datasets for anomaly detection applications. The survey aims to address the lack of a comprehensive comparison and analysis of public multimedia datasets for anomaly detection. Moreover, it can assist researchers in selecting the best available dataset for benchmarking their frameworks. Additionally, we discuss gaps in the existing datasets and offer insights into future directions for developing multimodal anomaly detection datasets.
While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for "MSR-Video to Text") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Network-based approach, which combines single-frame and motion representations with a soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.
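The soft-attention pooling strategy credited above can be illustrated with a minimal sketch: softmax-normalized scores weight a set of per-frame feature vectors into a single clip-level representation. The names and toy setup are assumptions for illustration, not the MSR-VTT authors' implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(frame_features, scores):
    """Pool per-frame feature vectors into one vector using
    attention weights derived from the given scores."""
    weights = softmax(scores)
    dim = len(frame_features[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_features))
            for d in range(dim)]
```

With uniform scores this reduces to mean pooling; higher scores shift the pooled vector toward the corresponding frames.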
Retrieving target videos based on text descriptions is a task of great practical value that has received growing attention over the past few years. In this paper, we focus on the less-studied setting of multi-query video retrieval, where multiple queries are provided for searching a video archive. We first show that the multi-query retrieval task is more pragmatic, representing real-world use cases and better evaluating the retrieval capabilities of current models, and thereby deserves further investigation alongside the more prevalent single-query retrieval setup. We then propose several new methods for leveraging multiple queries at training time, improving over simply combining the similarity outputs of a model trained conventionally with single queries. Our models consistently outperform several competitive baselines over three different datasets. For instance, Recall@1 can be improved by 4.7 points on MSR-VTT, 4.1 points on MSVD, and 11.7 points on VATEX over a strong baseline built on the state-of-the-art CLIP4Clip model. We believe further modeling efforts will bring new insights in this direction and spawn new systems that perform better in real-world video retrieval applications. Code is available at https://github.com/princetonvisualai/mqvr.
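The simple training-free baseline mentioned above, combining the similarity outputs of multiple queries, can be sketched as follows. The function names and the particular fusion choices (mean and max) are illustrative assumptions, not the paper's API.

```python
def fuse_similarities(sim_per_query, method="mean"):
    """Combine similarity scores from several text queries against
    the same set of candidate videos.

    sim_per_query: list of lists; sim_per_query[q][v] is the
    similarity between query q and candidate video v.
    """
    n_videos = len(sim_per_query[0])
    if method == "mean":
        return [sum(q[v] for q in sim_per_query) / len(sim_per_query)
                for v in range(n_videos)]
    if method == "max":
        return [max(q[v] for q in sim_per_query) for v in range(n_videos)]
    raise ValueError(f"unknown fusion method: {method}")

def rank_videos(fused):
    """Return candidate-video indices sorted by fused similarity, best first."""
    return sorted(range(len(fused)), key=lambda v: -fused[v])
```

Mean fusion rewards videos that match all queries, while max fusion rewards a strong match with any single query; which behaves better depends on how the queries relate to the target video.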
It remains a pipe dream that AI assistants on phones and AR glasses can assist in our daily life by addressing questions such as "how to adjust the date on this watch?" and "how to set its heating duration? (while pointing at an oven)". The queries used in conventional tasks (i.e., video question answering, video retrieval, moment localization) are often factoid and based on pure text. In contrast, we present a new task called Affordance-centric Question-driven Video Segment Retrieval (AQVSR). Each of our questions is an image-box-text query that focuses on items in our daily life and expects relevant answer segments to be retrieved from a corpus of instructional video transcript segments. To support the study of this AQVSR task, we construct a new dataset called AssistSR. We design novel guidelines to create high-quality samples. This dataset contains 1.4K multimodal questions on 1K video segments from instructional videos of diverse daily-used items. To address AQVSR, we develop a simple yet effective model called Dual Multimodal Encoders (DME) that significantly outperforms several baseline methods while still leaving large room for future improvement. Moreover, we provide detailed ablation analyses. Our code and data are available at https://github.com/stanlei52/aqvsr.
The repair and maintenance of underwater structures, as well as marine science, rely heavily on the results of underwater object detection, which is a crucial part of the image processing workflow. Although many computer vision-based approaches have been proposed, no one has yet developed a system that reliably and accurately detects and classifies the objects and animals found in the deep sea. This is largely due to obstacles that scatter and absorb light in the underwater environment. With the introduction of deep learning, scientists have been able to address a wide range of problems, including safeguarding marine ecosystems, saving lives in emergencies, preventing underwater disasters, and detecting, spotting, and identifying underwater targets. However, the benefits and drawbacks of these deep learning systems remain unclear. Therefore, the purpose of this article is to provide an overview of the datasets used in underwater object detection and to present a discussion of the advantages and disadvantages of the algorithms employed for this purpose.
One of the key factors behind the recent success in visual tracking is the availability of dedicated benchmarks. While greatly benefiting tracking research, existing benchmarks no longer pose the same difficulty as before, with the strong performance of recent trackers owing to (i) the introduction of more sophisticated transformer-based methods, and (ii) the lack of diverse scenarios with adverse visibility, such as severe weather conditions, camouflage, and imaging effects. We introduce AVisT, a dedicated benchmark for visual tracking in diverse scenarios with adverse visibility. AVisT comprises 120 challenging sequences with 80k annotated frames, spanning 18 diverse scenarios broadly grouped into five attributes with 42 object categories. The key contribution of AVisT is its diverse and challenging scenarios, covering severe weather conditions such as dense fog, heavy rain, and sandstorms; obscuring effects including fire, sun glare, and splashing water; adverse imaging effects such as low light; and target effects including small targets and distractor objects as well as camouflage. We further benchmark 17 popular and recent trackers on AVisT with a detailed analysis of their tracking performance across attributes, demonstrating a big room for performance improvement. We believe that AVisT can greatly benefit the tracking community by complementing the existing benchmarks and fostering the development of new creative tracking solutions that continue pushing the boundaries of the state-of-the-art. Our dataset along with the complete tracking performance evaluation is available at: https://github.com/visionml/pytracking
Open procedures represent the dominant form of surgery worldwide. Artificial intelligence (AI) has the potential to optimize surgical practice and improve patient outcomes, but efforts have mainly focused on minimally invasive techniques. Our work overcomes existing data limitations for training AI models by curating, from YouTube, the largest dataset of open surgical videos to date: 1997 videos of 23 surgical procedures uploaded from 50 countries. Using this dataset, we developed a multi-task AI model capable of real-time understanding of surgical behaviors, hands, and tools: the building blocks of procedural flow and surgeon skill. We show that our model generalizes across diverse surgery types and environments. Illustrating this generalizability, we directly applied our YouTube-trained model to analyze open surgeries prospectively collected at an academic medical center and identified kinematic descriptors of surgical skill related to hand motion efficiency. Our Annotated Videos of Open Surgery (AVOS) dataset and trained model will be made available for further development of surgical AI.
We present the Caltech Fish Counting Dataset (CFC), a large-scale dataset for detecting, tracking, and counting fish in sonar videos. We identify sonar videos as a rich source of data for advancing low signal-to-noise computer vision applications and tackling domain generalization in multiple-object tracking (MOT) and counting. In contrast to existing MOT and counting datasets, which are largely restricted to videos of people and vehicles in cities, CFC is sourced from a natural-world domain where targets are not easily resolvable and appearance features cannot be easily leveraged for target re-identification. CFC allows researchers to train MOT and counting algorithms and evaluate generalization performance at unseen test locations. We perform extensive baseline experiments and identify key challenges and opportunities for advancing the state of the art in generalization in MOT and counting.
The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance due to their unprecedented advantages in scale, mobility, deployment, and covert observation capabilities. This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective. It aims to provide the reader with an in-depth systematic review and technical analysis of the current state of aerial surveillance tasks using drones, UAVs, and other airborne platforms. The main objects of interest are humans, where single or multiple subjects are to be detected, identified, tracked, re-identified, and have their behavior analyzed. More specifically, for each of these four tasks, we first discuss the unique challenges of performing them in an aerial setting compared to a ground-based setting. We then review and analyze the aerial datasets publicly available for each task, delve deep into the approaches in the aerial literature, and investigate how they currently address the aerial challenges. We conclude with a discussion of the missing gaps and open research questions to inform future research avenues.
The 6th edition of the AI City Challenge specifically focused on problems in two domains with tremendous unlocking potential at the intersection of computer vision and artificial intelligence: Intelligent Traffic Systems (ITS), and brick-and-mortar retail businesses. The four challenge tracks of the 2022 AI City Challenge received participation requests from 254 teams across 27 countries. Track 1 addressed city-scale multi-target multi-camera (MTMC) vehicle tracking. Track 2 addressed natural-language-based vehicle track retrieval. Track 3 was a brand new track for naturalistic driving analysis, where the data were captured by several cameras mounted inside the vehicle with a focus on driver safety, and the task was to classify driver actions. Track 4 was another new track aiming to achieve automated checkout in retail stores using only a single-view camera. We released two leaderboards for submissions based on different methods: a public leaderboard for the contest, where no external data is allowed, and a general leaderboard for all submitted results. The top performances of the participating teams established strong baselines and even outperformed the state-of-the-art in the proposed challenge tracks.
Cognitive science has shown that humans perceive videos in terms of events separated by the state changes of dominant subjects. State changes trigger new events and are among the most useful pieces of information within the large amount of redundancy perceived. However, previous research has focused on the overall understanding of segments without evaluating the fine-grained status changes inside them. In this paper, we introduce a new dataset called Kinetic-GEB+. The dataset consists of over 170K boundaries associated with captions describing status changes in the generic events of 12K videos. Upon this new dataset, we propose three tasks supporting the development of a more fine-grained, robust, and human-like understanding of videos through status changes. We evaluate many representative baselines on our dataset, where we also design a new TPD (Temporal-based Pairwise Difference) modeling method for visual differences and achieve significant performance improvements. Besides, the results show that current methods still face formidable challenges in utilizing different granularities, representing visual differences, and accurately localizing status changes. Further analysis shows that our dataset can drive the development of more powerful methods for understanding status changes and thus improve video-level comprehension. The dataset is available at https://github.com/yuxuan-w/geb-plus
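As a rough illustration of pairwise-difference modeling around an event boundary, one can pool the frame features before and after the boundary and subtract them, so the resulting vector emphasizes what changed. This is only a guess at the spirit of such boundary modeling, not the authors' TPD implementation; all names here are hypothetical.

```python
def mean_pool(vectors):
    """Element-wise mean of a non-empty list of equal-length vectors."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[d] for v in vectors) / n for d in range(dim)]

def pairwise_difference(before_feats, after_feats):
    """Difference between the pooled post-boundary and pre-boundary
    features; large components indicate a visual status change."""
    b, a = mean_pool(before_feats), mean_pool(after_feats)
    return [ai - bi for ai, bi in zip(a, b)]
```

A near-zero difference vector would suggest no status change across the candidate boundary.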
Figure 1: We describe an efficient approach to learn visual representations from misaligned and noisy narrations (bottom) automatically extracted from instructional videos (top). Our video representations are learnt from scratch without relying on any manually annotated visual dataset, yet outperform all self-supervised and many fully-supervised methods on several video recognition benchmarks.
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present day 3D repositories in terms of scale, number of categories, and in the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI.
In spite of many dataset efforts for human action recognition, current computer vision algorithms are still severely limited in terms of the variability and complexity of the actions that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on simple actions and movements occurring on manually trimmed videos. In this paper we introduce ActivityNet, a new large-scale video benchmark for human activity understanding. Our benchmark aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours. We illustrate three scenarios in which ActivityNet can be used to compare algorithms for human activity understanding: untrimmed video classification, trimmed activity classification and activity detection.