It is desirable for detection and classification algorithms to generalize to unfamiliar environments, but suitable benchmarks for quantitatively studying this phenomenon are not yet available. We present a dataset designed to measure recognition generalization to novel environments. The images in our dataset are harvested from twenty camera traps deployed to monitor animal populations. Camera traps are fixed at one location, hence the background changes little across images; capture is triggered automatically, hence there is no human bias. The challenge is learning recognition in a handful of locations, and generalizing animal detection and classification to new locations where no training data is available. In our experiments state-of-the-art algorithms show excellent performance when tested at the same location where they were trained. However, we find that generalization to new locations is poor, especially for classification systems.
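To make the evaluation protocol concrete, here is a minimal sketch of a location-based split, in which entire camera locations are held out so the model never sees a test-time background during training. The metadata layout (a JSON file with a "location" key per image) is an illustrative assumption, not the dataset's actual schema.

```python
import json
import random

def split_by_location(metadata_path, n_train_locations, seed=0):
    """Hold out whole camera locations: train/test images never share a background."""
    with open(metadata_path) as f:
        images = json.load(f)["images"]  # assumed: list of dicts with a "location" key
    locations = sorted({im["location"] for im in images})
    random.Random(seed).shuffle(locations)
    train_locs = set(locations[:n_train_locations])
    train = [im for im in images if im["location"] in train_locs]
    test = [im for im in images if im["location"] not in train_locs]
    return train, test
```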
Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real-world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble methods achieve only 67% top-1 classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples, suggesting that more attention is needed in low-shot learning.
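The low-shot finding above can be made concrete with a small analysis routine: group classes by their number of training images and report mean top-1 accuracy per group. This is a sketch under assumed inputs (prediction and label arrays plus a class-to-train-count mapping), not code from the paper.

```python
import numpy as np

def accuracy_by_train_count(preds, labels, train_counts, bins=(10, 100, 1000)):
    """Mean top-1 accuracy for classes grouped by their number of training images."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    per_class = {c: float((preds[labels == c] == c).mean()) for c in np.unique(labels)}
    edges = (0,) + tuple(bins) + (float("inf"),)
    report = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        accs = [a for c, a in per_class.items() if lo <= train_counts[c] < hi]
        report[f"[{lo}, {hi}) train images"] = float(np.mean(accs)) if accs else None
    return report
```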
We present the Caltech Fish Counting Dataset (CFC), a large-scale dataset for detecting, tracking, and counting fish in sonar videos. We identify sonar videos as a rich source of data for advancing low signal-to-noise computer vision applications and tackling domain generalization in multiple object tracking (MOT) and counting. In contrast to existing MOT and counting datasets, which are largely restricted to videos of people and vehicles in cities, CFC is sourced from a natural-world domain in which targets are not easily resolvable and appearance features cannot be easily leveraged for target re-identification. CFC allows researchers to train MOT and counting algorithms and evaluate their generalization performance at unseen test locations. We perform extensive baseline experiments and identify key challenges and opportunities for advancing the state of the art in generalization in MOT and counting.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy.
Judging by popular and generic computer vision challenges such as ImageNet or PASCAL VOC, neural networks have proven to be exceptionally accurate at recognition tasks. However, state-of-the-art accuracy often comes at a high computational cost, requiring hardware acceleration to achieve real-time performance, while use cases such as smart cities require real-time analysis of images from fixed cameras. Because of the network bandwidth these streams would generate, we cannot rely on offloading computation to a centralized cloud; instead, a distributed edge cloud is expected to process the images locally. However, the edge is by nature resource-constrained, which places a limit on the computational complexity it can execute. Nonetheless, a meeting point between the edge and accurate real-time video analytics is needed. Specialized lightweight models on a per-camera basis may help, but as the number of cameras grows this quickly becomes unfeasible unless the process is automated. In this paper, we present and evaluate COVA (Contextually Optimized Video Analytics), a framework that assists in the automatic specialization of models for video analytics in edge cameras. COVA automatically improves the accuracy of lightweight models through specialization. Moreover, we discuss and review each step involved in the process to understand the trade-offs each one entails. Additionally, we show how the sole assumption of static cameras allows us to make a series of simplifying considerations that greatly reduce the scope of the problem. Finally, experiments show that state-of-the-art models, i.e., those able to generalize to unseen environments, can be effectively used as teachers to tailor smaller networks to a specific context, boosting their accuracy at a constant computational cost. Results show that COVA improves the accuracy of pre-trained models by 21% on average.
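A hedged sketch of the teacher/student specialization idea described above (not the actual COVA implementation): a large model that generalizes well pseudo-labels unlabeled frames from a single static camera, and a lightweight model is fine-tuned on the confident pseudo-labels, so edge inference cost stays constant. The model and loader objects are assumed to be PyTorch classifiers and a loader of unlabeled frame batches.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(teacher, frames, threshold=0.8):
    """Keep only the teacher's confident predictions as training targets."""
    probs = F.softmax(teacher(frames), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    return frames[keep], labels[keep]

def specialize(student, teacher, unlabeled_loader, epochs=5, lr=1e-3):
    """Fine-tune a lightweight student on teacher pseudo-labels from one camera."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for frames in unlabeled_loader:  # batches of unlabeled frames
            x, y = pseudo_label(teacher, frames)
            if x.shape[0] == 0:
                continue
            loss = F.cross_entropy(student(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```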
Understanding human-object interactions is fundamental in first-person vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful cues to effectively model such interactions. Over the past few years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used off-the-shelf or whether more domain-specific investigation is needed. This paper aims to answer these questions. We present the first systematic study of single object tracking in FPV. Our study extensively analyzes the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis focuses on different aspects of the FPV setting, introduces new performance measures, and considers FPV-specific tasks. The study is made possible by the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges for current visual trackers. We highlight the factors causing this behavior and point out possible research directions. Despite the difficulties, we demonstrate that trackers bring benefits to FPV downstream tasks that require short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new, FPV-specific methodologies are investigated.
Camera traps have revolutionized the study of many animal species that were previously nearly impossible to observe due to their habitat or behavior. They are typically cameras fixed to a tree that take a short sequence of images when triggered. Deep learning has the potential to reduce this workload by automatically classifying images by taxon or as empty. However, standard deep neural network classifiers fail because animals often occupy only a small portion of the high-definition images. That is why we propose a workflow named Weak Object Detection, based on Faster R-CNN+FPN, that is suited to this challenge. The model is weakly supervised because it requires only the animal taxon label for each image, without any manual bounding-box annotation. First, it automatically performs weakly supervised bounding-box annotation using motion across multiple frames. Then, it trains a Faster R-CNN+FPN model using this weak supervision. Experimental results are reported on two datasets from biodiversity monitoring campaigns in Papua New Guinea and Missouri, and then on an easily reproducible testbed.
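As an illustration of the motion-based weak annotation step, the sketch below derives one coarse box per frame by differencing each frame against the median image of the burst. This uses OpenCV; the thresholds and the one-box-per-frame simplification are assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def weak_boxes(frames, min_area=500):
    """One coarse (x, y, w, h) box per frame from motion against the burst median."""
    background = np.median(np.stack(frames), axis=0).astype(np.uint8)
    boxes = []
    for frame in frames:
        diff = cv2.absdiff(frame, background)
        gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=3)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) >= min_area]
        # The largest moving region is assumed to be the animal; None if no motion.
        boxes.append(cv2.boundingRect(max(moving, key=cv2.contourArea)) if moving else None)
    return boxes
```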
Weakly supervised object localization (WSOL) aims to learn representations that encode object location using only image-level category labels. However, many objects can be labeled at different levels of granularity: is it an animal, a bird, or a great horned owl? Which image-level labels should we use? In this paper, we study the role of label granularity in WSOL. To facilitate this investigation, we introduce iNatLoc500, a new large-scale fine-grained benchmark dataset for WSOL. Surprisingly, we find that choosing the right training label granularity yields a much larger performance boost than choosing the best WSOL algorithm. We also show that changing the label granularity can significantly improve data efficiency.
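The granularity experiments amount to relabeling the same images at different taxonomic levels. A toy sketch of such relabeling follows; the taxonomy fragment is hypothetical and only illustrates the mechanism.

```python
# Hypothetical taxonomy fragment: species -> (genus, family, order, class)
TAXONOMY = {
    "great horned owl": ("Bubo", "Strigidae", "Strigiformes", "Aves"),
    "snowy owl":        ("Bubo", "Strigidae", "Strigiformes", "Aves"),
}
LEVEL = {"genus": 0, "family": 1, "order": 2, "class": 3}

def relabel(labels, level):
    """Map species-level labels to a coarser taxonomic granularity."""
    return [TAXONOMY[name][LEVEL[level]] for name in labels]

# e.g. relabel(["snowy owl"], "family") -> ["Strigidae"]
```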
Knowing the abundance of a species is the first step towards understanding its long-term sustainability and the impact we may be having on it. Ecologists use camera traps to remotely survey for the presence of specific animal species. Previous studies have shown that deep learning models can be trained to automatically detect and classify animals within camera trap imagery with a high degree of confidence. However, the ability to train these models depends on having enough high-quality training data. What happens when an animal is rare, or datasets simply do not exist? This study proposes an approach that uses images of rare animals in captivity (focusing on the Scottish wildcat) to generate training datasets. We explore the challenges associated with applying models trained on captive data to data collected in the wild. The study is set in the context of the needs of ecologists working in planning/engineering. Following other studies, the project built an ensemble of object detection, image segmentation, and image classification models, which were then tested using different image manipulation and class structuring techniques to encourage model generalization. In the context of the Scottish wildcat, the study concludes that models trained on captive images cannot generalize to wild camera trap images using existing techniques. Nevertheless, the final model, based on a two-class Wildcat vs Not Wildcat formulation, achieved an overall accuracy of 81.6% and a wildcat accuracy of 54.8% on a test set in which only 1% of the images contained a wildcat. This suggests that using captive images is feasible, given further research. This is the first study to attempt to generate a training set from captive data and to explore the development of such models in the context of ecologists working in planning/engineering.
Large amounts of geo-referenced panoramic images are freely available for cities around the world, together with detailed maps containing location and metadata for a wide variety of urban objects. They offer a potential source of information about urban objects, but manual annotation for object detection is costly, laborious, and difficult. Can we exploit such multimedia sources to automatically annotate street-level images as an inexpensive alternative to manual labeling? With the PanorAMS framework, we introduce a method to automatically generate bounding-box annotations for panoramic images based on urban context information. Following this method, we obtain a large-scale, albeit noisy, annotated urban dataset from open data sources in a fast and automatic manner. The dataset covers the city of Amsterdam and includes over 14 million noisy bounding-box annotations of 22 object categories in 771,299 panoramic images. For many objects, further fine-grained information is available from geospatial metadata, such as building value, function, and average surface area. Such information would be difficult, if not impossible, to obtain from the images alone. For a detailed evaluation, we introduce an efficient crowdsourcing protocol for bounding-box annotation in panoramic images, which we deploy to obtain 147,075 ground-truth object annotations for a subset of 7,348 images, the PanorAMS-clean dataset. For our PanorAMS-noisy dataset, we provide an extensive analysis of the noise and of how different types of noise affect image classification and object detection performance. We make both datasets, PanorAMS-noisy and PanorAMS-clean, as well as benchmarks and tools, publicly available.
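The core geometric idea behind such context-based annotation can be sketched as follows (an illustration, not the PanorAMS code): the compass bearing from the camera's geo-pose to an object's map coordinates fixes the object's horizontal position in an equirectangular panorama. All parameter names are assumptions.

```python
import math

def object_to_pixel_x(cam_lat, cam_lon, cam_heading_deg, obj_lat, obj_lon, pano_width):
    """Horizontal pixel position of a geo-located object in an equirectangular panorama."""
    # Initial great-circle bearing from camera to object.
    d_lon = math.radians(obj_lon - cam_lon)
    lat1, lat2 = math.radians(cam_lat), math.radians(obj_lat)
    y = math.sin(d_lon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(d_lon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    # Angle relative to the heading at the panorama's horizontal origin.
    rel = (bearing - cam_heading_deg) % 360.0
    return int(rel / 360.0 * pano_width) % pano_width
```

Box height and width would similarly follow from the object's known physical size and its map distance to the camera, which is why the resulting annotations are plentiful but noisy.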
The ability to capture detailed interactions among individuals in a social group is foundational to our study of animal behavior and neuroscience. Recent advances in deep learning and computer vision are driving rapid progress in methods that can record the actions and interactions of multiple individuals simultaneously. Many social species, such as birds, however, live deeply embedded in a three-dimensional world. This world introduces additional perceptual challenges such as occlusions, orientation-dependent appearance, large variation in apparent size, and poor sensor coverage for 3D reconstruction, which are not encountered by applications studying animals that move and interact only on 2D planes. Here we introduce a system for studying the behavioral dynamics of a group of songbirds as they move throughout a 3D aviary. We study the complexities that arise when tracking a group of closely interacting animals in three dimensions and introduce a novel dataset for evaluating multi-view trackers. Finally, we analyze captured ethogram data and demonstrate that social context affects the distribution of sequential interactions between birds in the aviary.
Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced 'el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ∼2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org.
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset have become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confusing. The paper concludes with lessons learnt in the three-year history of the challenge, and proposes directions for future improvement and extension.
Over the years, datasets have been developed for a variety of object detection tasks. Object detection in the maritime domain is essential for the safety and navigation of ships, yet publicly available large-scale datasets in this domain are still lacking. To overcome this challenge, we present KOLOMVERSE, an open large-scale image dataset for object detection in the maritime domain from KRISO (Korea Research Institute of Ships and Ocean Engineering). We collected 5,845 hours of video data captured in 21 territorial waters of South Korea. Through an elaborate data quality assessment process, we gathered around 2,151,470 4K-resolution images from the video data. The dataset covers diverse environmental conditions: weather, time of day, illumination, occlusion, viewpoint, background, wind speed, and visibility. KOLOMVERSE consists of five classes (ship, buoy, fishing-net buoy, lighthouse, and wind farm) for maritime object detection. The images are 3840$\times$2160 pixels and, to the best of our knowledge, it is by far the largest publicly available dataset for object detection in the maritime domain. We performed object detection experiments and evaluated our dataset on several pre-trained state-of-the-art architectures to show its effectiveness and usefulness. The dataset is available at: \url{https://github.com/maritimedataset/kolomverse}.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
This paper presents a new large scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems; as such, the highest performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large scale video based training data.
The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance thanks to their unprecedented advantages in scale, mobility, deployment, and covert observation capabilities. This paper provides a comprehensive overview of human-centric aerial surveillance tasks from a computer vision and pattern recognition perspective. It aims to provide readers with an in-depth systematic review and technical analysis of the current state of aerial surveillance tasks using drones, UAVs, and other airborne platforms. The main objects of interest are humans, where single or multiple subjects are to be detected, identified, tracked, re-identified, and have their behavior analyzed. More specifically, for each of these four tasks, we first discuss the unique challenges of performing them in an aerial setting compared to a ground-based setting. We then review and analyze the publicly available aerial datasets for each task, delve into the approaches in the aerial literature, and investigate how they currently address the challenges of the aerial viewpoint. We conclude with a discussion of the remaining gaps and open research questions that informs future research avenues.
Camera traps are a strategy for monitoring wildlife that collects large volumes of pictures. The number of images collected per species typically follows a long-tailed distribution: a few classes have a large number of instances, while many species appear in only a small fraction of the images. Although in most cases these rare species are precisely the classes of interest to ecologists, they are usually neglected when using deep learning models, since such models require large numbers of training images. In this work, we systematically evaluate recently proposed techniques (namely square-root re-sampling, balanced focal loss, and balanced group softmax) for the long-tailed visual recognition of animal species in camera trap images. To draw more general conclusions, we evaluate four families of computer vision models (ResNet, MobileNetV3, EfficientNetV2, and Swin Transformer) and four camera trap datasets with different characteristics. We first prepare a robust baseline using the latest training tricks and then apply the methods for improving long-tailed recognition. Our experiments show that the Swin Transformer can achieve high performance on rare classes without applying any additional method for handling imbalance, reaching an overall accuracy of 88.76% on the WCS dataset and 94.97% on Snapshot Serengeti, considering a location-based train/test split. In general, square-root sampling was the method that most improved performance for minority classes, by around 10%, but at the cost of reducing majority-class accuracy by at least 4%. These results motivated us to propose a simple and effective approach using an ensemble that combines square-root sampling and the baseline. The proposed approach achieves the best trade-off between tail-class performance and the cost in head-class accuracy.
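For reference, square-root re-sampling is straightforward to implement: each image of class c is drawn with weight proportional to n_c^{-1/2}, so class c as a whole is sampled with probability proportional to sqrt(n_c), flattening the long tail. A minimal PyTorch sketch follows; the dataset's label access is illustrative.

```python
from collections import Counter

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def sqrt_sampler(labels):
    """Sampler drawing class c with probability proportional to sqrt(n_c)."""
    counts = Counter(labels)  # n_c per class
    per_sample_w = {c: n ** -0.5 for c, n in counts.items()}  # n_c * n_c^-0.5 = sqrt(n_c)
    weights = torch.tensor([per_sample_w[y] for y in labels], dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# Usage (hypothetical dataset exposing integer labels):
# sampler = sqrt_sampler(dataset.labels)
# loader = DataLoader(dataset, batch_size=64, sampler=sampler)
```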
Marine ecosystems and their fish habitats are becoming increasingly important because of their integral role in providing a valuable food source and in conservation. Due to their remoteness and difficulty of access, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate massive volumes of digital data that cannot be analyzed efficiently by current manual processing methods, which rely on human observers. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to countless domains, its use in underwater fish habitat monitoring remains largely unexplored. In this paper, we provide a tutorial covering the key concepts of DL, which helps the reader gain a high-level understanding of how DL works. The tutorial also explains, step by step, how to develop DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper serves both marine scientists who wish to gain a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it can advance their research efforts, and computer scientists who wish to survey state-of-the-art DL-based methods for fish habitat monitoring.