Pictionary, the popular sketch-based guessing game, provides an opportunity to analyze shared-goal cooperative gameplay in restricted communication settings. However, players occasionally draw atypical sketch content. While such content is sometimes relevant in the game context, it can also represent a rule violation and impair the game experience. To address such situations in a timely and scalable manner, we introduce DrawMon, a novel distributed framework for automatic detection of atypical sketch content in concurrently occurring Pictionary game sessions. We build specialized online interfaces to collect game session data and annotate atypical sketch content, resulting in AtyPict, the first-ever atypical sketch content dataset. We use AtyPict to train CanvasNet, a deep neural atypical content detection network, and utilize CanvasNet as a core component of DrawMon. Our analysis of post-deployment game session data indicates DrawMon's effectiveness for scalable monitoring and atypical sketch content detection. Beyond Pictionary, our contributions also serve as a design guide for customized atypical content response systems involving shared and interactive whiteboards. Code and datasets are available at https://drawm0n.github.io.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 2nd International Workshop on Reading Music Systems, held in Delft on the 2nd of November 2019.
Sketching is a natural and effective visual communication medium commonly used in creative processes. Recent developments in deep-learning models have drastically improved machines' ability to understand and generate visual content. An exciting area of development explores deep-learning approaches for modeling human sketches, opening opportunities for creative applications. This chapter describes three fundamental steps in developing deep-learning-driven creativity support tools that consume and generate sketches: 1) a data-collection effort that generated a new paired dataset of sketches and mobile user interfaces; 2) a sketch-based user-interface retrieval system adapted from state-of-the-art computer vision techniques; and 3) a conversational sketching system that supports a novel interaction of a natural-language-based sketch/critique authoring process. In this chapter, we survey related prior work in both the deep learning and human-computer interaction communities, document the data-collection process and the systems' architectures in detail, present qualitative and quantitative results, and paint the landscape of several future research directions in this exciting area.
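Among the three steps above, the retrieval system in (2) is the easiest to illustrate concretely. The following is a minimal sketch of embedding-based retrieval, assuming some pretrained encoder maps sketches and UI screenshots into a shared vector space; the corpus here is random stand-in data, and the names and shapes are hypothetical rather than the chapter's actual implementation.

```python
import numpy as np

# Hypothetical setup: `ui_embeddings` holds unit-normalized vectors for a
# corpus of UI screenshots, produced offline by some pretrained encoder.
rng = np.random.default_rng(0)
ui_embeddings = rng.normal(size=(1000, 128))
ui_embeddings /= np.linalg.norm(ui_embeddings, axis=1, keepdims=True)

def retrieve(sketch_embedding: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k UI screenshots closest to the query sketch."""
    q = sketch_embedding / np.linalg.norm(sketch_embedding)
    scores = ui_embeddings @ q          # cosine similarity on unit vectors
    return np.argsort(-scores)[:k]      # top-k most similar UIs

query = rng.normal(size=128)            # stand-in for an encoded query sketch
print(retrieve(query))
```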
A comprehensive understanding of key players and actions in multi-player sports broadcast videos is a challenging problem. Unlike in news or finance videos, the text available in sports videos is limited. While there is robust research on both action recognition in multi-player sports and player detection, understanding contextual text in video frames remains one of the most impactful avenues of sports video understanding. In this work we study extremely accurate semantic text detection and recognition in sports clocks, and the challenges therein. We observe unique properties of sports clocks that make it difficult to utilize general-purpose pre-trained detectors and recognizers, so that the text can be understood accurately enough to be aligned with external knowledge. We propose a novel distant-supervision technique to automatically build sports clock datasets. Coupled with suitable data augmentations and combined with any state-of-the-art text detection and recognition model architecture, this lets us extract extremely accurate semantic text. Finally, we share our computational architecture pipeline for scaling this system in an industrial setting, and present a robust dataset to validate our results.
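As a loose illustration of the distant-supervision idea (the abstract does not spell out the paper's actual pipeline), one way to bootstrap a clock-text dataset is to render plausible clock strings onto scoreboard-like backgrounds, so that every crop comes with an exact transcription label for free:

```python
from PIL import Image, ImageDraw
import random

# Minimal sketch of the dataset-bootstrapping idea (not the paper's exact
# pipeline): synthesize clock crops whose labels are known by construction.
def synth_clock_sample() -> tuple[Image.Image, str]:
    minutes, seconds = random.randint(0, 14), random.randint(0, 59)
    label = f"{minutes}:{seconds:02d}"
    img = Image.new("RGB", (96, 32), color=(10, 10, 10))   # dark scoreboard box
    ImageDraw.Draw(img).text((8, 8), label, fill=(255, 200, 0))
    return img, label

dataset = [synth_clock_sample() for _ in range(1000)]
print(dataset[0][1])  # e.g. "7:42"
```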
Following the success of machine vision systems in on-line automated quality control and inspection processes, this work presents an object recognition solution for two different specific applications: detecting quality-control items in surgical toolboxes prepared for sterilization in a hospital, and detecting defects in vessel hulls to prevent potential structural failures. The solution has two stages. First, a feature pyramid architecture based on the Single Shot MultiBox Detector (SSD) is used to improve detection performance, and a statistical analysis of the ground truth is employed to select the parameters of the set of default boxes. Second, a lightweight neural network is used to produce oriented detection results via a regression method. The first stage of the proposed method is able to detect the small targets considered in both scenarios. The second stage, despite its simplicity, is effective at detecting elongated targets while maintaining high running efficiency.
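The ground-truth-driven choice of default-box parameters in the first stage can be sketched as follows. The abstract only says a statistical analysis of the ground truth is used; one common realization of that idea (borrowed from YOLOv2-style anchor selection, not necessarily the authors' method) is to cluster annotated (width, height) pairs and use the cluster centers as default-box shapes:

```python
import numpy as np

def default_boxes_from_gt(wh: np.ndarray, k: int = 6, iters: int = 50) -> np.ndarray:
    """Simple k-means over ground-truth (w, h) pairs; returns k prior shapes."""
    centers = wh[np.random.default_rng(0).choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # assign each box to its nearest center, then recompute centers
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = wh[assign == j].mean(axis=0)
    return centers

# Stand-in annotations: 500 small, roughly 24x16 ground-truth boxes.
gt_wh = np.abs(np.random.default_rng(1).normal(loc=[24, 16], scale=6, size=(500, 2)))
print(default_boxes_from_gt(gt_wh))
```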
Multiple sketch datasets have been proposed to understand how people draw 3D objects. However, such datasets are often small in scale and cover a small set of objects or categories. In addition, these datasets contain freehand sketches mostly from expert users, making it difficult to compare the drawings of expert and novice users, while such comparisons are critical in informing more effective sketch-based interfaces for either user group. These observations motivate us to analyze how differently people with and without adequate drawing skills sketch 3D objects. We invited 70 novice users and 38 expert users to sketch 136 3D objects, which were presented as 362 images rendered from multiple views. This results in a new dataset of 3,620 freehand multi-view sketches, which are registered with their corresponding 3D objects under certain views. Our dataset is an order of magnitude larger than existing datasets. We analyze the collected data at three levels, i.e., sketch level, stroke level, and pixel level, under both spatial and temporal characteristics, and within and across groups of creators. We find that the drawings by professionals and novices show significant differences at the stroke level, both intrinsically and extrinsically. We demonstrate the usefulness of our dataset in two applications: (i) freehand-style sketch synthesis, and (ii) posing it as a potential benchmark for sketch-based 3D reconstruction. Our dataset and code are available at https://chufengxiao.github.io/differsketching/.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to well-informed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
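A minimal sketch of the Spectral Relevance Analysis idea: flatten per-sample relevance heatmaps (e.g. from LRP) and cluster them spectrally so that recurring prediction strategies surface as clusters. The heatmaps below are random stand-ins; a real analysis would compute explanations on the trained model's own inputs.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
heatmaps = rng.random((200, 32, 32))          # 200 relevance maps, 32x32 each
flat = heatmaps.reshape(len(heatmaps), -1)    # one feature vector per sample

# Spectral clustering groups samples whose relevance patterns are similar,
# surfacing distinct (possibly spurious) prediction strategies.
labels = SpectralClustering(
    n_clusters=4, affinity="nearest_neighbors", random_state=0
).fit_predict(flat)

for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} samples")
```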
We propose a new four-pronged approach to establish firefighter situational awareness, for the first time in the literature. We construct a series of deep learning frameworks built on top of one another to enhance the safety, efficiency, and successful completion of rescue missions conducted by firefighters in emergency first-response settings. First, we use a deep convolutional neural network (CNN) system to classify and identify objects of interest from thermal imagery in real time. Next, we extend this CNN framework with object detection, tracking, and segmentation using a Mask R-CNN framework, and with scene description using a multimodal natural language processing (NLP) framework. Third, we build a deep Q-learning agent, immune to stress-induced disorientation and anxiety, capable of making clear navigation decisions based on facts observed and stored in live fire environments. Finally, we use a low-computation unsupervised learning technique called tensor decomposition to perform meaningful feature extraction for anomaly detection in real time. With these ad hoc deep learning structures, we build the backbone of an artificial intelligence system for firefighters' situational awareness. To bring the designed system into use by firefighters, we design a physical structure where the processed results serve as inputs for creating an augmented reality capable of advising firefighters of their location and the key features around them that are critical to the rescue operation at hand, as well as a path-planning feature that acts as a virtual guide to help disoriented first responders return to safety. When combined, these four approaches present a novel approach to information understanding, transfer, and synthesis that could dramatically improve firefighter response and efficacy and reduce loss of life.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.
The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.
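The evaluation procedure centers on per-class average precision. A minimal version of the VOC-style computation (all-point interpolation, as adopted in later VOC years; the early protocol sampled precision at 11 recall points) looks like this:

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """AP over detections ranked by score; num_gt = ground-truth object count."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / num_gt
    precision = tp_cum / (tp_cum + fp_cum)
    # make precision monotonically decreasing, then integrate over recall
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    r = np.concatenate(([0.0], recall))
    p = np.concatenate(([precision[0]], precision))
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

# Toy example: 4 detections ranked by score, 3 ground-truth objects.
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1], num_gt=3))
```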
Object detection has achieved remarkable progress in the past few years with the rise of deep convolutional neural networks. However, this prosperity cannot mask the unsatisfactory situation of Small Object Detection (SOD), one of the notoriously challenging tasks in computer vision, owing to the poor visual appearance and noisy representation caused by the intrinsic structure of small targets. In addition, large-scale datasets for benchmarking small object detection methods remain a bottleneck. In this paper, we first conduct a thorough review of small object detection. Then, to catalyze the development of SOD, we construct two large-scale Small Object Detection dAtasets (SODA), SODA-D and SODA-A, which focus on driving and aerial scenarios, respectively. SODA-D comprises 24,704 high-quality traffic images and 277,596 instances of 9 categories. For SODA-A, we harvest 2,510 high-resolution aerial images and annotate 800,203 instances over 9 categories. To our knowledge, the proposed datasets are the first-ever attempt at large-scale benchmarks with a vast collection of exhaustively annotated instances tailored for multi-class SOD. Finally, we evaluate the performance of mainstream methods on SODA. We expect the released benchmarks will facilitate the development of SOD and spawn more breakthroughs in this field. Datasets and codes will be available soon at https://shaunyuan22.github.io/soda.
Marine ecosystems and their fish habitats are becoming increasingly important due to their significant role in providing valuable food sources and conservation outcomes. Owing to their remote and difficult-to-access nature, marine environments and fish habitats are often monitored using underwater cameras. These cameras generate a massive volume of digital data, which cannot be analyzed efficiently by current manual processing methods involving a human observer. Deep learning (DL) is a cutting-edge AI technology that has demonstrated unprecedented performance in analyzing visual data. Despite its application to a myriad of domains, its use in underwater fish habitat monitoring remains under-explored. In this paper, we provide a tutorial covering the key concepts of DL, which helps the reader build a high-level understanding of how DL works. The tutorial also explains a step-by-step procedure for developing DL algorithms for challenging applications such as underwater fish monitoring. In addition, we provide a comprehensive survey of key deep learning techniques for fish habitat monitoring, including classification, counting, localization, and segmentation. Furthermore, we survey publicly available underwater fish datasets and compare various DL techniques in the underwater fish monitoring domain. We also discuss some challenges and opportunities in the emerging field of deep learning for fish habitat processing. This paper is written to serve as a tutorial for marine scientists who would like to grasp a high-level understanding of DL, develop it for their applications by following our step-by-step tutorial, and see how it is evolving to facilitate their research efforts. At the same time, it is suitable for computer scientists who would like to survey state-of-the-art DL-based methodologies for fish habitat monitoring.
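The step-by-step recipe such a tutorial typically walks through can be compressed into a few lines: take an ImageNet-pretrained backbone, swap its head for the task's classes, and fine-tune on labeled underwater imagery. The sketch below uses a hypothetical species count and a random stand-in batch rather than a real fish dataset:

```python
import torch
import torch.nn as nn
from torchvision import models

num_species = 10                                # hypothetical label count
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_species)  # new task head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random stand-in batch; a real loop
# would iterate over a DataLoader of annotated fish images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_species, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```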
X-ray imaging technology has been used for decades in clinical tasks to reveal the internal condition of different organs, and in recent years, it has become more common in other areas such as industry, security, and geography. The recent development of computer vision and machine learning techniques has also made it easier to automatically process X-ray images and several machine learning-based object (anomaly) detection, classification, and segmentation methods have been recently employed in X-ray image analysis. Due to the high potential of deep learning in related image processing applications, it has been used in most of the studies. This survey reviews the recent research on using computer vision and machine learning for X-ray analysis in industrial production and security applications and covers the applications, techniques, evaluation metrics, datasets, and performance comparison of those techniques on publicly available datasets. We also highlight some drawbacks in the published research and give recommendations for future research in computer vision-based X-ray analysis.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
Analyzing the layout of a document to identify headers, sections, tables, figures, etc. is critical to understanding its content. Deep-learning-based approaches for detecting the layout structure of document images have been promising. However, these methods require a large number of annotated examples during training, which are both expensive and time-consuming to obtain. We describe here a synthetic document generator that automatically produces realistic documents with labels for the spatial positions, extents, and categories of the layout elements. The proposed generative process treats every physical component of a document as a random variable and models their intrinsic dependencies using a Bayesian network graph. Our hierarchical formulation using stochastic templates allows parameter sharing between documents while preserving broad themes, and the distributional characteristics produce visually distinct samples, thereby capturing complex and diverse layouts. We empirically illustrate that a deep layout detection model trained purely on synthetic documents can match the performance of a model trained on real documents.
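A toy rendition of the generator's core idea: each layout component is a random variable whose distribution is conditioned on its parents in a Bayesian network. The variables and probabilities below are invented purely for illustration (a simple chain: document type, then column count, then table presence):

```python
import random

def sample_document():
    """Sample one document layout by walking a tiny Bayesian-network chain."""
    doc_type = random.choices(["paper", "report"], weights=[0.6, 0.4])[0]
    # number of columns depends on the document type
    columns = random.choices([1, 2], weights=[0.2, 0.8] if doc_type == "paper"
                             else [0.7, 0.3])[0]
    # tables are likelier in single-column reports
    p_table = 0.6 if (doc_type == "report" and columns == 1) else 0.3
    has_table = random.random() < p_table
    return {"type": doc_type, "columns": columns, "table": has_table}

for _ in range(3):
    print(sample_document())
```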
Understanding human-object interactions is fundamental in first-person vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. Over the past years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigation should be carried out. This paper aims to provide answers to such questions. We present the first systematic study of single object tracking in FPV. Our study extensively analyzes the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite their difficulties, we prove that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.
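One of the standard measures a study like this relies on is the one-pass-evaluation success score: the fraction of frames whose predicted box overlaps the ground truth above a threshold, averaged over a sweep of thresholds. A minimal version, with boxes as (x, y, w, h) and data invented for illustration:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter)

def success_score(pred_boxes, gt_boxes):
    ious = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    thresholds = np.linspace(0, 1, 21)
    success = np.array([(ious >= t).mean() for t in thresholds])
    return float(success.mean())   # approximates the area under the curve

preds = [(10, 10, 50, 50), (12, 11, 50, 50), (80, 80, 40, 40)]
gts   = [(10, 10, 50, 50), (10, 10, 50, 50), (10, 10, 50, 50)]
print(f"success score: {success_score(preds, gts):.3f}")
```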
Most city establishments in developing cities are digitally unlabeled because of the lack of automatic annotation systems. Hence, location and trajectory services (e.g., Google Maps, Uber) remain underserved in such cities. Accurate signboard detection in natural scene images is the foremost task for error-free information retrieval from such city streets. Yet developing an accurate signboard localization system remains an unresolved challenge because of the diverse appearance of signboards, which includes textual imagery and perplexing backgrounds. We present a novel object detection approach that can detect signboards automatically and is suitable for such cities. We use Faster R-CNN based localization, incorporating two specialized pre-processing methods and a run-time-efficient hyperparameter value selection algorithm. We took an incremental approach to reach the final proposed method through detailed evaluation and comparison with baselines, using our constructed SVSO (Street View Signboard Objects) signboard dataset, which contains natural scene images of signboards from six developing countries. We demonstrate state-of-the-art performance of our proposed method on both the SVSO dataset and the Open Images dataset. Our proposed method detects signboards accurately (even when images contain multiple signboards of diverse shapes and colors against noisy backgrounds), achieving a 0.90 mAP (mean average precision) score on the SVSO independent test set. Our implementation is available at: https://github.com/sadrultoaha/signboard-detection
Furigana are pronunciation notes used in Japanese writing. Being able to detect them can help improve optical character recognition (OCR) performance, or produce more accurate digital copies of Japanese written media by displaying furigana correctly. This project focuses on detecting furigana in Japanese books and manga. While the detection of Japanese text has been studied, no methods have yet been proposed for detecting furigana. We construct a new dataset containing Japanese written media and annotations of furigana. We propose an evaluation metric for such data which is similar to the evaluation protocols used in object detection, except that it allows groups of objects to be marked by one annotation. We propose a method for furigana detection based on mathematical morphology and connected component analysis. We evaluate the detections on the dataset and compare different methods for text extraction. We also evaluate different types of images, such as books and manga, separately, and discuss the challenges of each. The proposed method reaches an F1 score of 76% on the dataset. The method performs well on regular books but less so on manga and books of irregular format. Finally, we show that the proposed method can improve OCR performance by 5% on the Manga109 dataset. The source code is available via https://github.com/nikolajkb/furiganadetection
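A simplified sketch of the named technique: binarize the page, apply morphological closing to merge characters into connected blobs, then filter connected components by size, keeping the small, narrow ones that are plausible furigana. The thresholds and the synthetic page below are illustrative guesses, not the paper's tuned values:

```python
import cv2
import numpy as np

page = np.full((400, 300), 255, np.uint8)          # stand-in for a scanned page
cv2.putText(page, "text", (40, 200), cv2.FONT_HERSHEY_SIMPLEX, 1.0, 0, 2)

# Otsu binarization (ink becomes foreground), then vertical closing to merge
# characters of a vertical text run into one blob.
binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 9))
merged = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

n, _, stats, _ = cv2.connectedComponentsWithStats(merged)
for i in range(1, n):                               # label 0 is the background
    x, y, w, h, area = stats[i]
    if w < 20 and area < 400:                       # small/narrow => candidate
        print(f"furigana candidate at ({x}, {y}), size {w}x{h}")
```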
Unmanned aerial vehicle (UAV)-based visual object tracking has enabled a wide range of applications and attracted increasing attention in the field of intelligent transportation systems because of its versatility and effectiveness. As an emerging force in the revolutionary trend of deep learning, Siamese networks shine in UAV-based object tracking with their promising balance of accuracy, robustness, and speed. Thanks to the development of embedded processors and the gradual optimization of deep neural networks, Siamese trackers have received extensive research attention and achieved preliminary combinations with UAVs. However, due to the UAV's limited onboard computational resources and complex real-world circumstances, aerial tracking with Siamese networks still faces severe obstacles in many aspects. To further explore the deployment of Siamese networks in UAV-based tracking, this work conducts a comprehensive review of leading-edge Siamese trackers, along with an exhaustive UAV-specific analysis based on evaluations using a typical UAV onboard processor. Onboard tests are then conducted to validate the feasibility and efficacy of representative Siamese trackers in real-world UAV deployments. Furthermore, to better promote the development of the tracking community, this work analyzes the limitations of existing Siamese trackers and conducts additional experiments, represented by low-illumination evaluations. In the end, the prospects of Siamese tracking for UAV-based intelligent transportation systems are discussed in depth. The unified framework of the leading Siamese trackers, i.e., the codebase, and the results of their experimental evaluations are available at https://github.com/vision4robotics/SiameseTracking4UAV.
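The mechanism that keeps Siamese trackers light enough for onboard UAV processors, reduced to its core: embed a target template and a larger search region with the same backbone, then slide the template embedding over the search embedding via cross-correlation; the response-map peak gives the predicted location. The one-layer backbone below is a random stand-in for a trained network:

```python
import torch
import torch.nn.functional as F

# Stand-in for a trained shared backbone (real trackers use a deep CNN).
backbone = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)

template = torch.randn(1, 3, 32, 32)     # exemplar crop around the target
search = torch.randn(1, 3, 96, 96)       # current-frame search region

z = backbone(template)                    # (1, 16, 32, 32)
x = backbone(search)                      # (1, 16, 96, 96)
response = F.conv2d(x, z)                 # template embedding as the kernel
peak = response.flatten().argmax()
h, w = response.shape[-2:]
print(f"response map {h}x{w}, peak at ({peak // w}, {peak % w})")
```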