Computer-based motion capture technology has developed rapidly in recent years. Owing to its high efficiency and excellent performance, it has replaced many traditional methods and is widely used in many fields. Our project concerns human motion capture and analysis in street-view video. The main goal of the project is to capture human motion in video and use the motion information to drive 3D (human) animation in real time. We apply a neural network for motion capture and implement it in Unity under a street-view scenario. By analyzing the motion data, we can better estimate street conditions, which is useful for other high-tech applications such as autonomous vehicles.
Approximately 1 in 8 women living in the United States will develop invasive breast cancer. The mitotic cell count is one of the most common tests used to assess the aggressiveness, or grade, of breast cancer. In this prognostic procedure, a pathologist must examine histopathological images under a high-resolution microscope to count the cells, which unfortunately can be a tedious task with poor reproducibility, especially for non-experts. Recently, deep learning networks have been adapted to medical applications that are able to automatically localize these regions of interest. However, these region-based networks lack the ability to exploit the segmentation features produced by full-image CNNs, which are often used as the sole detection method. The proposed approach therefore uses Faster R-CNN for object detection while fusing segmentation features produced by a UNet with RGB image features, achieving an F-score of 0.508 on the MITOS-ATYPIA 2014 mitotic counting dataset, outperforming state-of-the-art methods.
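To make the fusion idea above concrete, here is a minimal sketch, assuming torchvision's Faster R-CNN and a hypothetical pretrained UNet whose per-pixel probability map is concatenated with the RGB channels before detection; the module names and the 4-channel adaptation are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class SegmentationGuidedDetector(nn.Module):
    """Concatenate a UNet segmentation probability map with the RGB image
    and feed the 4-channel result to a Faster R-CNN detector (illustrative)."""

    def __init__(self, unet: nn.Module, num_classes: int = 2):
        super().__init__()
        self.unet = unet  # assumed to output a 1-channel segmentation logit map
        self.detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=num_classes)
        # Widen the backbone's first conv from 3 to 4 input channels.
        old = self.detector.backbone.body.conv1
        new = nn.Conv2d(4, old.out_channels, kernel_size=old.kernel_size,
                        stride=old.stride, padding=old.padding, bias=False)
        with torch.no_grad():
            new.weight[:, :3] = old.weight                          # reuse RGB filters
            new.weight[:, 3:] = old.weight.mean(1, keepdim=True)    # init extra channel
        self.detector.backbone.body.conv1 = new
        # The detector's transform normalizes per channel; extend stats for channel 4.
        self.detector.transform.image_mean = [0.485, 0.456, 0.406, 0.5]
        self.detector.transform.image_std = [0.229, 0.224, 0.225, 0.25]

    def forward(self, images, targets=None):
        fused = []
        for img in images:
            seg = torch.sigmoid(self.unet(img.unsqueeze(0)))[0]  # (1, H, W)
            fused.append(torch.cat([img, seg], dim=0))           # (4, H, W)
        return self.detector(fused, targets)
```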
Visual perception plays an important role in autonomous driving. One of the primary tasks is object detection and identification. Since the vision sensor is rich in color and texture information, it can quickly and accurately identify various road information. The commonly used technique is based on extracting and calculating various features of the image. Recently developed deep learning-based methods offer better reliability and processing speed and have a greater advantage in recognizing complex elements. For depth estimation, vision sensors are also used for ranging due to their small size and low cost. A monocular camera uses image data from a single viewpoint as input to estimate object depth. In contrast, stereo vision is based on parallax and matching feature points of different views, and the application of deep learning further improves the accuracy. In addition, Simultaneous Localization and Mapping (SLAM) can establish a model of the road environment, thus helping the vehicle perceive the surrounding environment and complete its tasks. In this paper, we introduce and compare various methods of object detection and identification, then explain the development of depth estimation and compare various methods based on monocular, stereo, and RGB-D sensors, next review and compare various methods of SLAM, and finally summarize the current problems and present the future development trends of vision technologies.
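As a concrete illustration of the stereo ranging principle mentioned above, the sketch below recovers metric depth from a disparity map via the standard pinhole relation depth = f · B / d; the focal length and baseline values are placeholders, not figures from the survey.

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_px: float = 720.0,   # placeholder camera focal length
                         baseline_m: float = 0.54) -> np.ndarray:
    """Convert a disparity map (pixels) into a metric depth map (meters)
    using the pinhole-stereo relation depth = f * B / disparity."""
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0                      # zero disparity means no match
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Example: a 5-pixel disparity with f = 720 px and B = 0.54 m gives ~77.8 m.
print(depth_from_disparity(np.array([[5.0]])))
```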
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
Automatic detection of weapons is important for improving the security and well-being of individuals, yet it remains a difficult task because of the wide variation in weapon size, shape, and appearance. Viewpoint changes and occlusion make the task even harder. Moreover, current object detection algorithms operate on rectangular regions, whereas an elongated, thin rifle may actually cover only a small fraction of such a region, with the rest containing irrelevant details. To overcome these problems, we propose a CNN architecture for orientation-aware weapon detection that produces oriented bounding boxes with improved weapon detection performance. The proposed model provides orientation not only by dividing the angle into eight classes as a classification problem, but also by treating the angle as a regression problem. For training our weapon detection model, a new dataset containing a total of 6400 weapon images was collected from the web and then manually annotated with oriented bounding boxes. Our dataset provides not only oriented bounding boxes as ground truth but also horizontal bounding boxes. We also provide our dataset in formats suitable for multiple modern object detectors, for further research in this area. The proposed model is evaluated on this dataset, and a comparative analysis against off-the-shelf object detectors yields superior performance of the proposed model, measured with standard evaluation strategies. The dataset and model implementation are publicly available at this link: https://bit.ly/2tyzicf.
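A hedged sketch of the dual angle head described above: one branch classifies the box orientation into eight coarse bins, the other regresses a continuous angle; the layer sizes and names are illustrative and not the paper's architecture.

```python
import torch
import torch.nn as nn

class OrientationHead(nn.Module):
    """Predict box orientation both as one of 8 coarse angle classes and
    as a continuous angle (regression), as the abstract describes (illustrative)."""

    def __init__(self, in_features: int = 1024, num_angle_bins: int = 8):
        super().__init__()
        self.angle_cls = nn.Linear(in_features, num_angle_bins)  # 8-way angle classification
        self.angle_reg = nn.Linear(in_features, 1)               # continuous angle (radians)

    def forward(self, roi_features: torch.Tensor):
        return self.angle_cls(roi_features), self.angle_reg(roi_features)

def orientation_loss(cls_logits, reg_angle, gt_bin, gt_angle):
    """Combine cross-entropy over angle bins with a smooth-L1 angle regression."""
    cls_loss = nn.functional.cross_entropy(cls_logits, gt_bin)
    reg_loss = nn.functional.smooth_l1_loss(reg_angle.squeeze(-1), gt_angle)
    return cls_loss + reg_loss
```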
Unmanned aerial vehicle (UAV)-based visual object tracking has enabled a wide range of applications and has attracted increasing attention in the field of intelligent transportation systems because of its versatility and effectiveness. As an emerging force in the revolutionary trend of deep learning, Siamese networks shine in UAV-based object tracking with their promising balance of accuracy, robustness, and speed. Thanks to the development of embedded processors and the step-by-step optimization of deep neural networks, Siamese trackers have received extensive research attention and achieved a preliminary combination with UAVs. However, due to UAVs' limited onboard computing resources and complex real-world conditions, aerial tracking with Siamese networks still faces severe obstacles in many respects. To further explore the deployment of Siamese networks in UAV-based tracking, this work presents a comprehensive review of leading-edge Siamese trackers, together with an exhaustive UAV-oriented analysis evaluated on a typical UAV onboard processor. Onboard tests are then conducted to validate the feasibility and efficacy of representative Siamese trackers in real-world UAV deployment. Furthermore, to better promote the development of the tracking community, this work analyzes the limitations of existing Siamese trackers and conducts additional experiments, represented by low-illumination evaluations. Finally, the prospects of Siamese tracking for UAV-based intelligent transportation systems are discussed in depth. The unified framework of the leading Siamese trackers, i.e., the code library, and the results of their experimental evaluation are available at https://github.com/vision4robotics/siamesetracking4uav.
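The core operation shared by the Siamese trackers reviewed above is a cross-correlation between template and search-region features; the minimal SiamFC-style sketch below shows that operation with a toy backbone (the backbone and channel counts are placeholders, not any specific tracker from the survey).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniSiamese(nn.Module):
    """Embed a template and a search region with a shared backbone, then
    cross-correlate them to produce a response map (SiamFC-style sketch)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(      # toy shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
        )

    def forward(self, template: torch.Tensor, search: torch.Tensor):
        z = self.backbone(template)         # (1, C, hz, wz)
        x = self.backbone(search)           # (1, C, hx, wx)
        # Use the template embedding as a correlation kernel over the search map.
        response = F.conv2d(x, z)           # (1, 1, hx-hz+1, wx-wz+1)
        return response

tracker = MiniSiamese()
score_map = tracker(torch.randn(1, 3, 127, 127), torch.randn(1, 3, 255, 255))
print(score_map.shape)  # the peak location indicates the target position
```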
We focus on the task of amodal 3D object detection in RGB-D images, which aims to produce a 3D bounding box of an object in metric form at its full extent. We introduce Deep Sliding Shapes, a 3D ConvNet formulation that takes a 3D volumetric scene from an RGB-D image as input and outputs 3D object bounding boxes. In our approach, we propose the first 3D Region Proposal Network (RPN) to learn objectness from geometric shapes and the first joint Object Recognition Network (ORN) to extract geometric features in 3D and color features in 2D. In particular, we handle objects of various sizes by training an amodal RPN at two different scales and an ORN to regress 3D bounding boxes. Experiments show that our algorithm outperforms the state-of-the-art by 13.8 in mAP and is 200× faster than the original Sliding Shapes. Source code and pre-trained models are available.
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of work aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has been thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Benefiting from the rapid development of convolutional neural networks, the performance of car license plate detection and recognition has greatly improved. However, most existing methods solve the detection and recognition problems separately and focus on specific scenarios, which hinders deployment in real-world applications. To overcome these challenges, we propose an efficient and accurate framework that solves the license plate detection and recognition tasks simultaneously. It is a lightweight, unified deep neural network that can be optimized end-to-end in real time. Specifically, for unconstrained scenes, an anchor-free method is adopted to efficiently detect the bounding box and the four corners of a license plate, which are used to extract and rectify the target region features. A novel convolutional neural network branch is then designed to further extract character features without segmentation. Finally, the recognition task is treated as a sequence labeling problem, which is solved by connectionist temporal classification (CTC). Several public datasets, including images collected from different scenarios under various conditions, were selected for evaluation. Experimental results show that the proposed method significantly outperforms previous state-of-the-art methods in both speed and accuracy.
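Since the recognition branch above is trained with connectionist temporal classification (CTC), a short sketch of greedy CTC decoding may help illustrate how per-timestep character probabilities become a plate string; the alphabet, blank index, and toy scores are illustrative, not the paper's settings.

```python
import numpy as np

def ctc_greedy_decode(logits: np.ndarray, alphabet: str, blank: int = 0) -> str:
    """Greedy CTC decoding: take the argmax at each timestep, collapse repeats,
    then drop blanks. `logits` has shape (T, num_classes) with class 0 = blank."""
    best_path = logits.argmax(axis=1)
    decoded = []
    prev = None
    for idx in best_path:
        if idx != prev and idx != blank:
            decoded.append(alphabet[idx - 1])   # class k (k > 0) maps to alphabet[k-1]
        prev = idx
    return "".join(decoded)

# Toy example: timesteps voting for "A", "A", blank, "B" collapse to "AB".
alphabet = "AB0123456789"
logits = np.array([[0.1, 0.8, 0.1],    # A
                   [0.1, 0.8, 0.1],    # A (repeat, collapsed)
                   [0.9, 0.05, 0.05],  # blank
                   [0.1, 0.1, 0.8]])   # B
print(ctc_greedy_decode(logits, alphabet))  # -> "AB"
```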
X-ray imaging technology has been used for decades in clinical tasks to reveal the internal condition of different organs, and in recent years, it has become more common in other areas such as industry, security, and geography. The recent development of computer vision and machine learning techniques has also made it easier to automatically process X-ray images and several machine learning-based object (anomaly) detection, classification, and segmentation methods have been recently employed in X-ray image analysis. Due to the high potential of deep learning in related image processing applications, it has been used in most of the studies. This survey reviews the recent research on using computer vision and machine learning for X-ray analysis in industrial production and security applications and covers the applications, techniques, evaluation metrics, datasets, and performance comparison of those techniques on publicly available datasets. We also highlight some drawbacks in the published research and give recommendations for future research in computer vision-based X-ray analysis.
Thanks to the availability of affordable wearable cameras and large annotated datasets, applications of egocentric vision (also known as first-person vision, FPV) have flourished over the past few years. The position of the wearable camera, typically mounted on the head, allows it to record exactly what the camera wearer has in front of them, in particular their hands and the objects being manipulated. This intrinsic advantage makes it possible to study hands from multiple perspectives: localizing hands and their parts in an image; understanding what actions and activities the hands are involved in; and developing human-computer interfaces that rely on hand gestures. In this survey, we review the literature that focuses on hands using egocentric vision, categorizing existing approaches into: localization (where are the hands or their parts?); interpretation (what are the hands doing?); and applications (e.g., systems that solve a specific problem using egocentric hand cues). In addition, a list of the most prominent datasets with hand-based annotations is provided.
Most urban institutions in developing cities are digitally unlabeled due to the lack of automatic annotation systems. Hence, location and trajectory services (such as Google Maps, Uber, etc.) remain inadequate in such cities. Accurate signboard detection in natural scene images is the most important task for retrieving error-free information from such city streets. However, developing an accurate signboard localization system remains an unsolved challenge because signboard appearance mixes textual content with confusing backgrounds. We propose a novel object detection approach that can automatically detect signboards and is suitable for such cities. We use Faster R-CNN based localization, incorporating two specialized pre-processing methods and a runtime-efficient hyperparameter value selection algorithm. We adopted an incremental approach to reach the final proposed method through detailed evaluation and comparison with baselines, using our constructed SVSO (Street View Signboard Objects) signboard dataset, which contains natural scene images from six developing countries. We demonstrate state-of-the-art performance of our proposed method on the SVSO dataset and the Open Images dataset. Our proposed method can accurately detect signboards (even when images contain multiple signboards of various shapes and colors against noisy backgrounds), achieving a 0.90 mAP (mean average precision) score on the SVSO independent test set. Our implementation is available at: https://github.com/sadrultoaha/signboard-detection
We propose a new four-pronged approach to building firefighter situational awareness, for the first time in the literature. We construct a series of deep learning frameworks built on top of one another to enhance the safety, efficiency, and successful completion of rescue missions conducted by firefighters in emergency first-response settings. First, we use a deep convolutional neural network (CNN) system to classify and identify objects of interest from thermal images in real time. Next, we extend this CNN framework with object detection, tracking, and segmentation using a Mask R-CNN framework, as well as scene description with a multimodal natural language processing (NLP) framework. Third, we build a deep Q-learning agent, immune to stress-induced disorientation and anxiety, that is capable of making clear navigation decisions based on the observed and stored facts in live fire environments. Finally, we use a low-computation unsupervised learning technique called tensor decomposition to perform meaningful feature extraction for anomaly detection in real time. With these ad-hoc deep learning structures, we build the backbone of an artificial intelligence system for firefighter situational awareness. To bring the designed system into use by firefighters, we designed a physical structure in which the processed results are used as inputs to create an augmented reality display capable of advising firefighters of their location and the key features of their surroundings, which are critical to the rescue operation at hand, as well as a path-planning feature that acts as a virtual guide to help disoriented first responders retreat to safety. When combined, these four approaches present a novel method of information understanding, transfer, and synthesis that could significantly improve firefighter response and effectiveness and reduce the loss of life.
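The last component above, low-rank decomposition for anomaly detection, can be illustrated with a small NumPy sketch: frames of a thermal-video tensor are unfolded into a matrix, a truncated SVD gives a low-rank model of "normal" frames, and frames with a large reconstruction error are flagged. This matricized SVD is a simplified stand-in for the full tensor decomposition named in the abstract; the rank and threshold are arbitrary.

```python
import numpy as np

def flag_anomalous_frames(video: np.ndarray, rank: int = 5, z_thresh: float = 3.0):
    """video: (num_frames, H, W). Unfold to (num_frames, H*W), build a
    rank-`rank` SVD model, and flag frames whose reconstruction error is an
    outlier (simplified stand-in for tensor decomposition)."""
    frames = video.reshape(video.shape[0], -1).astype(np.float64)
    centered = frames - frames.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    low_rank = u[:, :rank] * s[:rank] @ vt[:rank]            # rank-k reconstruction
    errors = np.linalg.norm(centered - low_rank, axis=1)     # per-frame residual
    z = (errors - errors.mean()) / (errors.std() + 1e-8)
    return np.where(z > z_thresh)[0], errors

anomalies, _ = flag_anomalous_frames(np.random.rand(100, 32, 32))
print(anomalies)  # indices of frames that do not fit the low-rank model
```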
Instance segmentation of nuclei and glands in histology images is an important step in computational pathology workflows for cancer diagnosis, treatment planning, and survival analysis. With the advent of modern hardware, the recent availability of large-scale, high-quality public datasets, and community-organized grand challenges, there has been a surge of automated methods focusing on domain-specific challenges, which are crucial for technological advancement and clinical translation. In this survey, 126 papers on nuclei and gland instance segmentation published over the last five years (2017-2022) are analyzed in depth, and the limitations of current approaches and open challenges are discussed. In addition, potential future research directions are proposed, and the contributions of state-of-the-art methods are summarized. Furthermore, a generalized summary of publicly available datasets is provided, along with detailed insights into the grand challenges illustrating the best-performing methods for each challenge. We also aim to give readers an overview of the current state of existing research and pointers toward future directions for developing methods that can be used in clinical practice, thereby improving the diagnosis, grading, prognosis, and treatment planning of cancer. To the best of our knowledge, no previous work has reviewed instance segmentation in histology images along these lines.
Object detection, one of the three main tasks of computer vision, has been used in various applications. The main process is to use deep neural networks to extract the features of an image and then use those features to identify the class and location of an object. Therefore, the main direction for improving the accuracy of object detection tasks is to improve the neural network so that it extracts features better. In this paper, I propose a convolutional module with a transformer[1], which aims to improve the recognition accuracy of the model by fusing the detailed features extracted by a CNN[2] with the global features extracted by a transformer, and to significantly reduce the computational effort of the transformer module by shrinking the feature map. The main execution steps are convolutional downsampling to reduce the feature map size, then self-attention calculation and upsampling, and finally concatenation with the initial input. In the experimental part, after appending the block to the end of YOLOv5n[3] and training for 300 epochs on the COCO dataset, the mAP improved by 1.7% compared with the original YOLOv5n, and the mAP curve did not show any saturation, so there is still potential for improvement. After 100 epochs of training on the Pascal VOC dataset, the accuracy of the results reached 81%, which is 4.6 points better than Faster R-CNN[4] using ResNet-101[5] as the backbone, while the number of parameters is less than one-twentieth of it.
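A hedged sketch of the pipeline described above (convolutional downsampling, self-attention on the reduced map, upsampling, and concatenation with the original input); the channel counts, stride, and attention settings are illustrative, not the exact module from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTransformerBlock(nn.Module):
    """Downsample with a strided conv, run multi-head self-attention on the
    smaller map, upsample, and concatenate with the input (illustrative)."""

    def __init__(self, channels: int = 64, num_heads: int = 4, stride: int = 4):
        super().__init__()
        self.down = nn.Conv2d(channels, channels, kernel_size=stride, stride=stride)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=1)  # back to C channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        y = self.down(x)                                   # smaller map -> cheaper attention
        hs, ws = y.shape[2], y.shape[3]
        tokens = y.flatten(2).transpose(1, 2)              # (B, hs*ws, C)
        attended, _ = self.attn(tokens, tokens, tokens)    # global self-attention
        y = attended.transpose(1, 2).reshape(b, c, hs, ws)
        y = F.interpolate(y, size=(h, w), mode="nearest")  # upsample to input size
        return self.fuse(torch.cat([x, y], dim=1))         # concatenate with original input

block = ConvTransformerBlock()
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```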
Semantic segmentation is the pixel-wise labeling of an image. Since the problem is defined at the pixel level, determining only image-level class labels is unacceptable; instead, localizing them at the original image pixel resolution is necessary. Driven by the extraordinary ability of convolutional neural networks (CNNs) to create semantic, high-level, and hierarchical image features, several deep learning-based 2D semantic segmentation approaches have been proposed in the last decade. In this survey, we mainly focus on recent scientific developments in semantic segmentation, specifically deep learning-based methods using 2D images. We begin with an analysis of the public image sets and leaderboards for 2D semantic segmentation, with an overview of the techniques used in performance evaluation. In examining the evolution of the field, we chronologically categorize the approaches into three main periods, namely the pre-and-early deep learning era, the fully convolutional era, and the post-FCN era. We technically analyze the solutions put forward for the fundamental problems of the field, such as fine-grained localization and scale invariance. Before drawing our conclusions, we present a table of methods from all the mentioned eras, with a summary of each method's contribution to the field. We conclude the survey by discussing the current challenges of the field and the extent to which they have been solved.
Most state-of-the-art instance-level human parsing models adopt two-stage anchor-based detectors and therefore cannot avoid heuristic anchor box design and the lack of pixel-level analysis. To address these two issues, we design an instance-level human parsing network that is anchor-free and solvable at the pixel level. It consists of two simple sub-networks: an anchor-free detection head for bounding box prediction and an edge-guided parsing head for human segmentation. The anchor-free detection head inherits pixel-wise advantages and effectively avoids the sensitivity to hyperparameters demonstrated in object detection applications. By introducing part-aware boundary cues, the edge-guided parsing head is capable of distinguishing adjacent human parts from each other within one human instance, and even across overlapping instances. Meanwhile, a refinement head integrating box-level scores and part parsing quality is exploited to improve the quality of the parsing results. Experiments on two multiple human parsing datasets (i.e., CIHP and LV-MHP-v2.0) and one video instance-level human parsing dataset (i.e., VIP) show that our method achieves performance surpassing state-of-the-art one-stage top-down alternatives at both the global level and the instance level.
Deep neural network based object detectors are continuously evolving and are used in a multitude of applications, each having its own set of requirements. While safety-critical applications need high accuracy and reliability, low-latency tasks need resource- and energy-efficient networks. Real-time detectors, which are essential in high-impact real-world applications, are continuously proposed, but they overemphasize improvements in accuracy and speed while other capabilities such as versatility, robustness, resource and energy efficiency are omitted. A reference benchmark for existing networks does not exist, nor does a standard evaluation guideline for designing new networks, which results in ambiguous and inconsistent comparisons. We therefore conduct a comprehensive study on multiple real-time detectors (anchor-based, keypoint-based, and transformer-based) on a wide range of datasets and report results on an extensive set of metrics. We also study the impact of variables such as image size, anchor dimensions, confidence thresholds, and architecture layers on the overall performance. We analyze the robustness of detection networks against distribution shifts, natural corruptions, and adversarial attacks. In addition, we provide a calibration analysis to gauge the reliability of the predictions. Finally, to highlight the real-world impact, we conduct two unique case studies, on autonomous driving and healthcare applications. To further gauge the capability of networks in critical real-time applications, we report the performance after deploying the detection networks on edge devices. Our extensive empirical study can act as a guideline for the industrial community to make an informed choice among existing networks. We also hope to inspire the research community toward a new direction in the design and evaluation of networks that focuses on a bigger and more holistic overview for a far-reaching impact.
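One of the reliability checks mentioned above, calibration analysis, is commonly quantified with the expected calibration error (ECE); the sketch below computes a simple binned ECE over detection confidences, with the bin count and toy numbers chosen arbitrarily.

```python
import numpy as np

def expected_calibration_error(confidences: np.ndarray,
                               correct: np.ndarray,
                               num_bins: int = 10) -> float:
    """Binned ECE: average |accuracy - mean confidence| over equal-width
    confidence bins, weighted by the fraction of predictions in each bin."""
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()             # observed accuracy in the bin
        conf = confidences[in_bin].mean()        # average predicted confidence
        ece += in_bin.mean() * abs(acc - conf)   # weight by bin frequency
    return float(ece)

# Toy example: overconfident predictions yield a non-zero ECE.
conf = np.array([0.9, 0.8, 0.95, 0.7])
hit = np.array([1.0, 0.0, 1.0, 1.0])
print(expected_calibration_error(conf, hit))
```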
Deep learning has been widely used for medical image segmentation, and a large number of papers have been presented recording the successes of deep learning in this field. In this paper, we present a comprehensive thematic survey of medical image segmentation using deep learning techniques. This paper makes two original contributions. First, in contrast with traditional surveys that directly divide the literature on deep learning for medical image segmentation into many groups and introduce the literature in detail for each group, we classify the currently popular literature according to a multi-level structure from coarse to fine. Second, this paper focuses on supervised and weakly supervised learning approaches, without including unsupervised approaches, since they have been covered in many older surveys and are not currently popular. For supervised learning approaches, we analyze the literature in three aspects: the selection of backbone networks, the design of network blocks, and the improvement of loss functions. For weakly supervised learning approaches, we survey the literature according to data augmentation, transfer learning, and interactive segmentation. Compared with existing surveys, this survey classifies the literature at a different scale, which makes it easier for readers to understand the relevant rationale and will guide them in thinking about appropriate improvements to deep learning-based medical image segmentation.
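As an example of the loss-function improvements the survey covers, the Dice loss is one of the most common choices in medical image segmentation; a minimal soft-Dice sketch is given below (the smoothing constant is chosen arbitrarily).

```python
import torch

def soft_dice_loss(logits: torch.Tensor, targets: torch.Tensor,
                   smooth: float = 1.0) -> torch.Tensor:
    """Soft Dice loss for binary segmentation.
    logits: (B, 1, H, W) raw scores; targets: (B, 1, H, W) in {0, 1}."""
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims)
    dice = (2.0 * intersection + smooth) / (union + smooth)   # per-sample Dice score
    return 1.0 - dice.mean()                                  # minimize 1 - Dice

# Toy check: a near-perfect prediction gives a loss close to 0.
target = torch.randint(0, 2, (2, 1, 8, 8)).float()
perfect_logits = (target * 2 - 1) * 10.0   # large positive scores where target = 1
print(soft_dice_loss(perfect_logits, target))
```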