Manually analyzing spermatozoa is a tremendous task for biologists: the large number of fast-moving spermatozoa causes inconsistencies in the quality of assessments. Computer-assisted sperm analysis (CASA) has therefore become a popular solution. Still, more data is needed to train supervised machine learning approaches in order to improve accuracy and reliability. To this end, we provide a dataset called VISEM-Tracking with 20 video recordings of spermatozoa, 30 seconds each, with manually annotated bounding-box coordinates and a set of sperm characteristics analyzed by domain experts. VISEM-Tracking is an extension of the previously published VISEM dataset. In addition to the annotated data, we provide unlabeled video clips for easy access and analysis of the data. As part of this paper, we present baseline sperm detection performance using the YOLOv5 deep learning model trained on the VISEM-Tracking dataset. As a result, the dataset can be used to train complex deep learning models to analyze spermatozoa. The dataset is publicly available at https://zenodo.org/record/7293726.
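Detection baselines such as the YOLOv5 results above are conventionally scored by the intersection-over-union (IoU) between predicted and ground-truth bounding boxes. A minimal sketch of the standard computation follows; the `(x, y, w, h)` box format is an illustrative assumption, not the VISEM-Tracking annotation format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Corners of the intersection rectangle.
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as a true positive when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 10, 10)))
```

Metrics such as mAP are then built by sweeping a confidence threshold over detections matched at a fixed IoU cutoff.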
We present a dataset that contains object annotations with unique object identities (IDs) for the High Efficiency Video Coding (HEVC) v1 Common Test Conditions (CTC) sequences. Ground-truth annotations for 13 sequences were prepared and released as a dataset called SFU-HW-Tracks-v1. For each video frame, the ground-truth annotations include the object class ID, the object ID, and the bounding-box position and dimensions. The dataset can be used to evaluate object tracking performance on uncompressed video sequences and to study the relationship between video compression and object tracking.
Multimedia anomaly datasets play a crucial role in automated surveillance. They have a wide range of applications, from anomalous object/situation detection to the detection of life-threatening events. The field has been receiving a great deal of research interest for more than a decade and a half, and consequently more and more datasets dedicated to anomalous action and object detection have been created. Tapping into these public anomaly datasets enables researchers to generate and compare various anomaly detection frameworks on the same input data. This paper presents a comprehensive survey of various video, audio, and audio-visual datasets for anomaly detection applications. The survey aims to address the lack of a comprehensive comparison and analysis of public multimedia datasets for anomaly detection. It also helps researchers select the best available dataset for benchmarking their frameworks. In addition, we discuss gaps in the existing datasets, considerations for developing multimodal anomaly detection datasets, and insights into future directions.
The goal of this study was to develop a new reliable open-surgery suturing simulation system for training medical students in settings with limited resources or at home. Specifically, we developed an algorithm for tool and hand localization, together with motion metrics for assessing surgical skill, computed from simple webcam video data. Twenty-five participants performed multiple suturing tasks using our simulator. The YOLO network was modified into a multi-task network for tool localization and tool-hand interaction detection. This was achieved by splitting the YOLO detection heads so that they support both tasks with minimal addition to the computational run-time. In addition, motion metrics were computed from the system's outputs. These metrics include traditional ones such as time and path length, as well as new metrics that assess the technique participants use to hold the tools. The dual-task network's performance was similar to that of two separate networks, while its computational load was only slightly larger than that of a single network. Moreover, the motion metrics showed significant differences between experts and novices. While video capture is an inherent part of minimally invasive surgery, it is not an integral component of open surgery. Therefore, new algorithms are needed that focus on the unique challenges open-surgery video presents. In this study, a dual-task network was developed to solve a localization task and a hand-tool interaction task. The dual network can easily be expanded into a multi-task network, which may be useful for images with multiple layers and for assessing the interactions between these different layers.
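Traditional motion metrics like the ones above can be computed directly from per-frame tool positions. A minimal sketch under the assumption of a 2-D pixel-coordinate trajectory (the function names are illustrative, not the paper's implementation):

```python
import math

def path_length(trajectory):
    """Total distance traveled along a sequence of (x, y) tool positions."""
    return sum(math.dist(p, q) for p, q in zip(trajectory, trajectory[1:]))

def completion_time(n_frames, fps):
    """Task duration in seconds, given the number of frames and the frame rate."""
    return n_frames / fps

# A jittery trajectory accumulates more path length than a direct one,
# which is one way such metrics separate novices from experts.
direct = [(0, 0), (10, 0)]
jittery = [(0, 0), (5, 3), (10, 0)]
print(path_length(direct), path_length(jittery))
```

`math.dist` requires Python 3.8+; for earlier versions, `math.hypot` on coordinate differences is equivalent.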
Open procedures represent the dominant form of surgery worldwide. Artificial intelligence (AI) has the potential to optimize surgical practice and improve patient outcomes, but efforts have largely focused on minimally invasive techniques. Our work overcomes existing data limitations for training AI models by curating, from YouTube, the largest dataset of open surgical videos to date: 1997 videos of 23 surgical procedures uploaded from 50 countries. Using this dataset, we developed a multi-task AI model capable of real-time understanding of surgical behaviors, hands, and tools -- the building blocks of procedural flow and surgeon skill. We show that our model generalizes across diverse surgery types and environments. Illustrating this generalizability, we directly applied our YouTube-trained model to analyze open surgeries prospectively collected at an academic medical center and identified kinematic descriptors of surgical skill related to hand efficiency. Our Annotated Videos of Open Surgery (AVOS) dataset and trained model will be made available for further development of surgical AI.
Detecting dangerous traffic agents in videos captured by vehicle-mounted dashboard cameras (dashcams) is essential for safe navigation in complex environments. Accident-related videos are only a small fraction of driving big video data, and the transient pre-accident processes are highly dynamic and complex. Moreover, the appearance of risky and non-risky traffic agents can be similar. All of this makes risky object localization in driving videos particularly challenging. To this end, this paper proposes an attention-guided multistream feature fusion network (AM-Net) to localize dangerous traffic agents in dashcam videos. Two Gated Recurrent Unit (GRU) networks use object bounding boxes and optical-flow features extracted from consecutive video frames to capture spatio-temporal cues for distinguishing dangerous traffic agents. An attention module coupled with the GRUs learns to attend to the traffic agents relevant to an accident. Fusing the two feature streams, AM-Net predicts the riskiness scores of traffic agents in the video. In support of this study, the paper also introduces a benchmark dataset named Risk Object Localization (ROL). The dataset contains spatial, temporal, and categorical annotations with accident-, object-, and scene-level attributes. The proposed AM-Net achieves a promising performance of 85.73% AUC on the ROL dataset. Meanwhile, AM-Net outperforms the current state of the art for video anomaly detection on the DoTA dataset. A thorough ablation study further reveals AM-Net's merits by evaluating the contributions of its different components.
Visual object analysis researchers are increasingly experimenting with video, because it is expected that motion cues should help with detection, recognition, and other analysis tasks. This paper presents the Cambridge-driving Labeled Video Database (CamVid) as the first collection of videos with object class semantic labels, complete with metadata. The database provides ground truth labels that associate each pixel with one of 32 semantic classes. The database addresses the need for experimental data to quantitatively evaluate emerging algorithms. While most videos are filmed with fixed-position CCTV-style cameras, our data was captured from the perspective of a driving automobile. The driving scenario increases the number and heterogeneity of the observed object classes. Over 10 min of high quality 30 Hz footage is being provided, with corresponding semantically labeled images at 1 Hz and in part, 15 Hz. The CamVid Database offers four contributions that are relevant to object analysis researchers. First, the per-pixel semantic segmentation of over 700 images was specified manually, and was then inspected and confirmed by a second person for accuracy. Second, the high-quality and large resolution color video images in the database represent valuable extended duration digitized footage to those interested in driving scenarios or ego-motion. Third, we filmed calibration sequences for the camera color response and intrinsics, and computed a 3D camera pose for each frame in the sequences. Finally, in support of expanding this or other databases, we present custom-made labeling software for assisting users who wish to paint precise class-labels for other images and videos. We evaluate the relevance of the database by measuring the performance of an algorithm from each of three distinct domains: multi-class object recognition, pedestrian detection, and label propagation.
Polyps in the colon are well-known precursors of cancer, identified during colonoscopy performed either for diagnostic work-up of symptoms, for colorectal cancer screening, or for systematic surveillance of certain diseases. While most polyps are benign, the number, size, and surface structure of polyps are closely linked to the risk of colon cancer. High miss rates and incomplete removal of colonic polyps occur due to their variable nature, the difficulty of delineating abnormalities, high recurrence rates, and the anatomical topography of the colon. In the past, several methods have been built to automate polyp detection and segmentation. However, a key issue with most methods is that they have not been rigorously tested on large, multicenter, purpose-built datasets. Thus, these methods may not generalize to datasets from different populations, as they overfit to a specific population and endoscopic surveillance protocol. In this sense, we have curated a dataset from six different centers incorporating more than 300 patients. The dataset includes both single-frame and sequence data with 3446 annotated polyp labels, with precise delineation of polyp boundaries verified by six senior gastroenterologists. To our knowledge, this is the most comprehensive detection and pixel-level segmentation dataset curated by a team of computational scientists and expert gastroenterologists. The dataset originated as part of the EndoCV2021 challenge, which aimed at addressing generalizability in polyp detection and segmentation. In this paper, we provide comprehensive insight into the data construction and annotation strategies, annotation quality assurance, and technical validation of our extended EndoCV2021 dataset, which we call PolypGen.
RGB-D object tracking has recently attracted considerable attention, benefiting from the symbiosis between the visual and depth channels. However, given the limited amount of annotated RGB-D tracking data, most state-of-the-art RGB-D trackers are simple extensions of high-performance RGB-only trackers that do not fully exploit the underlying potential of the depth channel during the offline training stage. To address the dataset deficiency, this paper releases a new RGB-D dataset named RGBD1K. RGBD1K contains 1,050 sequences with about 2.5M frames in total. To demonstrate the benefits of training on a larger RGB-D dataset in general, and on RGBD1K in particular, we develop a transformer-based RGB-D tracker, named SPT, as a baseline for future visual object tracking research using the new dataset. The results of extensive experiments with the SPT tracker demonstrate the potential of the RGBD1K dataset to improve the performance of RGB-D tracking, inspiring future development of effective tracker designs. The dataset and code will be available on the project homepage: https://will.be.available.at.this.website.
This paper presents a new large-scale multi-person tracking dataset -- \texttt{PersonPath22}, which is over an order of magnitude larger than currently available high-quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large-scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions, such as variations in person density, actions being performed, weather, and time of day. The \texttt{PersonPath22} dataset was specifically sourced to provide a wide variety of these conditions, and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest-performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large-scale video-based training data.
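Multi-object tracking benchmarks of this kind are commonly scored with MOTA, which folds misses, false positives, and identity switches into one number. A minimal sketch of the standard formula; the example counts are illustrative:

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """Multiple Object Tracking Accuracy:
    MOTA = 1 - (FN + FP + IDSW) / total ground-truth objects.
    Can be negative when errors outnumber ground-truth objects."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

# Example: 1000 ground-truth boxes, 50 misses, 30 false alarms, 5 ID switches.
print(mota(50, 30, 5, 1000))  # 0.915
```

Evaluating MOTA separately on metadata slices (density, weather, time of day) is one way the per-dimension analysis described above can be realized.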
Despite the numerous developments in object tracking, further development of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.
Satellite video cameras can provide continuous observation of a large-scale area, which is important for many remote sensing applications. However, achieving moving object detection and tracking in satellite videos remains challenging due to the insufficient appearance information of objects and the lack of high-quality datasets. In this paper, we first build a large-scale satellite video dataset with rich annotations for the tasks of moving object detection and tracking. This dataset is collected by the Jilin-1 satellite constellation and is composed of 47 high-quality videos with 1,646,038 instances of interest for object detection and 3,711 trajectories for object tracking. We then introduce a motion modeling baseline to improve the detection rate and reduce false alarms, based on accumulative multi-frame differencing and robust matrix completion. Finally, we establish the first public benchmark for moving object detection and tracking in satellite videos and extensively evaluate the performance of several representative approaches on our dataset. Comprehensive experimental analyses and insightful conclusions are also provided. The dataset is available at https://github.com/qingyonghu/viso.
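The accumulative multi-frame differencing idea mentioned above can be illustrated in a few lines: absolute differences between consecutive frames are summed, so a consistently moving pixel accumulates a large response while static background stays near zero. A minimal sketch on toy single-channel frames; the threshold value is an assumption, not the paper's setting:

```python
def accumulative_difference(frames):
    """Sum of absolute differences between consecutive frames.
    Each frame is a 2-D list of intensities; moving pixels accumulate energy."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                acc[i][j] += abs(cur[i][j] - prev[i][j])
    return acc

def moving_mask(acc, threshold=50.0):
    """Threshold the accumulated response to get a candidate motion mask."""
    return [[v > threshold for v in row] for row in acc]

# Toy sequence: a bright object moves across the middle row.
frames = [
    [[0, 0, 0], [100, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 100, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 100], [0, 0, 0]],
]
print(moving_mask(accumulative_difference(frames)))
```

In practice such a mask is then cleaned up (here, per the abstract, with robust matrix completion) before being turned into detections.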
Marine ecosystems and their fish habitats are becoming increasingly important due to their integral role in providing a valuable food source and in conservation. Because of their remote and difficult-to-access nature, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data, which cannot be efficiently analyzed by current manual processing methods involving a human observer. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring remains underexplored. In this paper, we provide a tutorial that covers the key concepts of DL, helping the reader gain a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who wish to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving to facilitate their research efforts. At the same time, it is suitable for computer scientists who wish to survey state-of-the-art DL-based methods for fish habitat monitoring.
Over the years, datasets have been developed for various object detection tasks. Object detection in the maritime domain is essential for the safety and navigation of ships. However, there is still a lack of publicly available large-scale datasets in this domain. To overcome this challenge, we present KOLOMVERSE, an open large-scale image dataset for object detection in the maritime domain by KRISO (Korea Research Institute of Ships and Ocean Engineering). We collected 5,845 hours of video data captured from 21 territorial waters of South Korea. Through an elaborate data quality assessment process, we gathered around 2,151,470 4K-resolution images from the video data. The dataset covers various environmental conditions: weather, time, illumination, occlusion, viewpoint, background, wind speed, and visibility. KOLOMVERSE consists of five classes (ship, buoy, fishing-net buoy, lighthouse, and wind farm) for maritime object detection. The dataset has images of 3840 x 2160 pixels and, to our knowledge, it is by far the largest publicly available dataset for object detection in the maritime domain. We performed object detection experiments and evaluated our dataset on several pretrained state-of-the-art architectures to show its effectiveness and usefulness. The dataset is available at: https://github.com/maritimedataset/kolomverse.
RGBT tracking is receiving a surge of interest in the computer vision community, but this research field lacks a large-scale, high-diversity benchmark dataset, which is essential both for training deep RGBT trackers and for the comprehensive evaluation of RGBT tracking methods. To this end, we present a Large-scale High-diversity benchmark for RGBT tracking (LasHeR) in this work. LasHeR consists of 1,224 visible and thermal infrared video pairs with more than 730K frame pairs in total. Each frame pair is spatially aligned and manually annotated with a bounding box, making the dataset well and densely annotated. LasHeR is highly diverse, covering a broad range of object categories, camera viewpoints, scene complexities, and environmental factors across seasons, weather, and day and night. We conduct a comprehensive performance evaluation of 12 RGBT tracking algorithms on the LasHeR dataset and present a detailed analysis to clarify the remaining research room in RGBT tracking. In addition, we release an unaligned version of LasHeR to attract research interest in alignment-free RGBT tracking, a more practical task in real-world applications. The dataset and evaluation protocol are available at: https://github.com/bugpleaseout/lasher.
Panoptic image segmentation is the computer vision task of finding groups of pixels in an image and assigning semantic classes and object instance identifiers to them. Research in image segmentation has become increasingly popular due to its critical applications in robotics and autonomous driving. The research community thereby relies on publicly available benchmark datasets to advance the state of the art in computer vision. Due to the high cost of densely labeling images, however, there is a shortage of publicly available ground-truth labels suitable for panoptic segmentation. The high labeling cost also makes it challenging to extend existing datasets to the video domain and to multi-camera setups. Therefore, we present the Waymo Open Dataset: Panoramic Video Panoptic Segmentation Dataset, a large-scale dataset that offers high-quality panoptic segmentation labels for autonomous driving. We generate our dataset from the publicly available Waymo Open Dataset, leveraging its diverse set of camera images. Our labels are consistent over time for video processing and consistent across multiple cameras mounted on the vehicles for full panoramic scene understanding. Specifically, we offer labels for 28 semantic categories and 2,860 temporal sequences captured by five cameras mounted on autonomous vehicles driving in three different geographical locations, leading to a total of 100k labeled camera images. To the best of our knowledge, this makes our dataset an order of magnitude larger than existing datasets offering video panoptic segmentation labels. We further propose a new benchmark for panoramic video panoptic segmentation and establish a number of strong baselines based on the DeepLab family of models. We will make the benchmark and the code publicly available. Find the dataset at https://waymo.com/open.
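Panoptic segmentation benchmarks like this one are typically scored with panoptic quality (PQ), which combines segmentation quality and recognition quality in a single number: the sum of IoUs over matched segment pairs, divided by |TP| + 0.5|FP| + 0.5|FN|. A minimal sketch over precomputed match IoUs; the inputs are illustrative, not the Waymo evaluation code:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """PQ = sum(IoU of TP matches) / (|TP| + 0.5*|FP| + 0.5*|FN|).
    A predicted and a ground-truth segment count as a match when IoU > 0.5,
    which makes the matching unique."""
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom > 0 else 0.0

# Three matched segments, one spurious prediction, one missed segment.
print(panoptic_quality([0.9, 0.8, 0.7], num_fp=1, num_fn=1))  # 0.6
```

Video and multi-camera variants of the metric additionally require instance identifiers to stay consistent across frames and cameras when matching segments.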
Generic Object Tracking (GOT) is the problem of tracking target objects, specified by bounding boxes in the first frame of a video. While the task has received much attention in the last decades, researchers have almost exclusively focused on the single-object setting. Multi-object GOT benefits from a wider applicability, rendering it more attractive in real-world applications. We attribute the lack of research interest in this problem to the absence of suitable benchmarks. In this work, we introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence. Our benchmark allows researchers to tackle key remaining challenges in GOT, aiming to increase robustness and reduce computation through joint tracking of multiple objects simultaneously. Furthermore, we propose a Transformer-based GOT tracker, TaMOs, capable of joint processing of multiple objects through shared computation. TaMOs achieves a 4x faster run-time in the case of 10 concurrent objects compared to tracking each object independently and outperforms existing single-object trackers on our new benchmark. Finally, TaMOs achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success rate AUC of 84.4%. Our benchmark, code, and trained models will be made publicly available.
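The success-rate AUC quoted above is the area under the success plot: for each IoU threshold swept over [0, 1], the fraction of frames whose predicted box overlaps the ground truth above that threshold, averaged over all thresholds. A minimal sketch; the per-frame IoUs and the threshold count are illustrative:

```python
def success_auc(frame_ious, num_thresholds=101):
    """Area under the success plot: the average, over evenly spaced IoU
    thresholds in [0, 1], of the fraction of frames with IoU > threshold."""
    rates = []
    for k in range(num_thresholds):
        t = k / (num_thresholds - 1)
        rates.append(sum(iou > t for iou in frame_ious) / len(frame_ious))
    return sum(rates) / num_thresholds

# Perfect overlap on every frame yields an AUC close to 1.0;
# total tracking failure yields 0.0.
print(success_auc([0.9, 0.8, 0.0, 0.7]))
```

For a multi-object benchmark such as LaGOT, the same curve can be computed per target and then averaged across targets and sequences.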
In the robotics and computer vision communities, extensive studies have been conducted on surveillance tasks, including human detection, tracking, and motion recognition with a camera. Deep learning algorithms are also widely utilized in these tasks, as in other computer vision tasks. Existing public datasets are insufficient to develop learning-based methods that handle various surveillance scenarios in outdoor and extreme situations, such as harsh weather and low-illuminance conditions. Therefore, we introduce a new large-scale outdoor surveillance dataset named eXtremely large-scale Multi-modAl Sensor dataset (X-MAS), containing more than 500,000 image pairs and first-person-view data annotated by well-trained annotators. Moreover, a single pair contains multi-modal data (e.g. an IR image, an RGB image, a thermal image, a depth image, and a LiDAR scan). To the best of our knowledge, this is the first large-scale first-person-view outdoor multi-modal dataset focusing on surveillance tasks. We present an overview of the proposed dataset with statistics and present methods of exploiting our dataset with deep learning-based algorithms. The latest information on the dataset and our study is available at https://github.com/lge-robot-navi, and the dataset will be available for download through a server.
Real-time machine learning detection algorithms are commonly found in autonomous vehicle technology and depend on quality datasets. These algorithms must work correctly in everyday conditions as well as under strong sun glare. Reports indicate that glare is one of the two most prominent causes of traffic crashes. However, existing datasets, such as LISA and the German Traffic Sign Recognition Benchmark, do not reflect the presence of sun glare at all. This paper presents the GLARE traffic sign dataset: a collection of images of US-based traffic signs under heavy visual interference from sunlight. GLARE contains 2,157 images of traffic signs with sun glare, pulled from 33 US road videos. It provides an essential enrichment of the widely used LISA traffic sign dataset. Our experimental study shows that although several state-of-the-art baseline methods perform well when trained and tested on traffic sign datasets without sun glare, they suffer greatly when tested on GLARE (e.g., mean mAP ranging from 9% to 21%, significantly lower than their performance on the LISA dataset). We also note that current architectures achieve better detection accuracy when trained on traffic sign images captured in sun glare (e.g., an average mean-mAP gain of 42% for mainstream algorithms).
Timely and effective feedback within surgical training plays a critical role in developing the skills required to perform safe and efficient surgery. Feedback from expert surgeons, while especially valuable in this regard, is challenging to acquire due to their typically busy schedules, and may be subject to biases. Formal assessment procedures like OSATS and GEARS attempt to provide objective measures of skill, but remain time-consuming. With advances in machine learning there is an opportunity for fast and objective automated feedback on technical skills. The SimSurgSkill 2021 challenge (hosted as a sub-challenge of EndoVis at MICCAI 2021) aimed to promote and foster work in this endeavor. Using virtual reality (VR) surgical tasks, competitors were tasked with localizing instruments and predicting surgical skill. Here we summarize the winning approaches and how they performed. Using this publicly available dataset and results as a springboard, future work may enable more efficient training of surgeons with advances in surgical data science. The dataset can be accessed from https://console.cloud.google.com/storage/browser/isi-simsurgskill-2021.