Existing methods of multiple human parsing usually adopt two-stage strategies (typically top-down and bottom-up), which suffer from either a strong dependence on prior detection or highly computational redundancy during post-grouping. In this work, we present an end-to-end multiple human parsing framework using representative parts, termed RepParser. Different from mainstream methods, RepParser solves multiple human parsing in a new single-stage manner without resorting to person detection or post-grouping. To this end, RepParser decouples the parsing pipeline into instance-aware kernel generation and part-aware human parsing, which are responsible for instance separation and instance-specific part segmentation, respectively. In particular, we empower the parsing pipeline with representative parts, since they are characterized by instance-aware keypoints and can be utilized to parse each person instance dynamically. Specifically, representative parts are obtained by jointly localizing instance centers and estimating keypoints of body part regions. After that, we dynamically predict instance-aware convolution kernels through the representative parts, thus encoding person-part context into each kernel, which is responsible for casting the image features into an instance-specific representation. Furthermore, a multi-branch structure is adopted to divide each instance-specific representation into several part-aware representations for separate part segmentation. In this way, RepParser focuses on person instances under the guidance of representative parts and directly outputs parsing results for each person instance, thus eliminating the requirement of prior detection or post-grouping. Extensive experiments on two challenging benchmarks demonstrate that our proposed RepParser is a simple yet effective framework and achieves competitive performance.
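As a rough illustration of the dynamic-kernel idea described above (not the authors' implementation; module names, channel sizes, and the 1x1-kernel form are assumptions), a per-instance embedding pooled from representative parts can be mapped by a small controller into convolution weights that cast shared image features into instance-specific part logits:

```python
import torch
import torch.nn as nn

class DynamicPartHead(nn.Module):
    """Sketch: per-instance kernels generated from representative-part features
    cast a shared feature map into instance-specific part predictions."""
    def __init__(self, feat_ch=64, num_parts=20):
        super().__init__()
        self.num_parts = num_parts
        # controller maps an instance embedding to the weights of a 1x1 conv
        self.controller = nn.Linear(feat_ch, feat_ch * num_parts + num_parts)

    def forward(self, feat, inst_embed):
        # feat:       (C, H, W)  shared image features
        # inst_embed: (N, C)     one embedding per instance, pooled from its
        #                        representative parts (center + part keypoints)
        C, H, W = feat.shape
        params = self.controller(inst_embed)                  # (N, C*P + P)
        w = params[:, :C * self.num_parts].reshape(-1, self.num_parts, C)
        b = params[:, C * self.num_parts:]                    # (N, P)
        # apply each instance's dynamic 1x1 conv to the shared features
        logits = torch.einsum('npc,chw->nphw', w, feat) + b[..., None, None]
        return logits                                         # (N, P, H, W)

# toy usage
head = DynamicPartHead(feat_ch=64, num_parts=20)
feat = torch.randn(64, 56, 56)
inst_embed = torch.randn(3, 64)        # 3 person instances
print(head(feat, inst_embed).shape)    # torch.Size([3, 20, 56, 56])
```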
Most state-of-the-art instance-level human parsing models adopt two-stage anchor-based detectors and therefore cannot avoid the heuristic anchor box design and the lack of analysis at the pixel level. To address these two issues, we design an instance-level human parsing network which is anchor-free and solvable at the pixel level. It consists of two simple sub-networks: an anchor-free detection head for bounding box prediction and an edge-guided parsing head for human segmentation. The anchor-free detection head inherits the pixel-wise merits and effectively avoids the sensitivity to hyper-parameters demonstrated in object detection applications. By introducing the part-aware boundary clue, the edge-guided parsing head is capable of distinguishing adjacent human parts from each other within a single human instance, and even among overlapping instances. Meanwhile, a refinement head integrating the box-level score and the part-level parsing quality is exploited to improve the quality of the parsing results. Experiments on two multiple human parsing datasets (i.e., CIHP and LV-MHP-v2.0) and one video instance-level human parsing dataset (i.e., VIP) show that our method achieves the best global-level and instance-level performance over state-of-the-art one-stage top-down alternatives.
This paper presents an end-to-end instance segmentation framework, termed SOIT, that Segments Objects with Instance-aware Transformers. Inspired by DETR~\cite{carion2020end}, our method views instance segmentation as a direct set prediction problem and effectively removes the need for many hand-crafted components like RoI cropping, one-to-many label assignment, and non-maximum suppression (NMS). In SOIT, multiple queries are learned to directly reason about a set of object embeddings of semantic category, bounding-box location, and pixel-wise mask in parallel under the global image context. The class and bounding box can be easily embedded by a fixed-length vector. The pixel-wise mask, in particular, is embedded by a group of parameters to construct a lightweight instance-aware transformer. Afterwards, a full-resolution mask is produced by the instance-aware transformer without involving any RoI-based operation. Overall, SOIT introduces a simple single-stage instance segmentation framework that is both RoI-free and NMS-free. Experimental results on the MS COCO dataset demonstrate that SOIT significantly outperforms state-of-the-art instance segmentation approaches. Moreover, the joint learning of multiple tasks in a unified query embedding can also substantially improve the detection performance. Code is available at \url{https://github.com/yuxiaodonghri/soit}.
Part-level attribute parsing is a fundamental but challenging task, which requires region-level visual understanding to provide explainable details of body parts. Most existing approaches address this problem by adding a regional convolutional neural network (RCNN) with an attribute prediction head to a two-stage detector, in which the attributes of body parts are identified from local part boxes. However, local part boxes with limited visual clues (i.e., part appearance only) lead to unsatisfying parsing results, since the attributes of body parts are highly dependent on comprehensive relations among them. In this paper, we propose a Knowledge Embedded RCNN (KE-RCNN) to identify attributes by leveraging rich knowledge, including implicit knowledge (e.g., attributes that depend on relations between parts, such as shirt-hip) and explicit knowledge (e.g., the part ``shorts'' cannot have the attribute ``hoodie'' or ``lining''). Specifically, the KE-RCNN consists of two novel components, i.e., an Implicit Knowledge based Encoder (IK-En) and an Explicit Knowledge based Decoder (EK-De). The former is designed to enhance part-level representations by encoding part-relational contexts into part boxes, while the latter is proposed to decode attributes under the guidance of prior knowledge about \textit{part-attribute} relations. In this way, the KE-RCNN is plug-and-play and can be integrated into any two-stage detector, e.g., Attribute-RCNN, Cascade-RCNN, HRNet-based RCNN, and SwinTransformer-based RCNN. Extensive experiments on two challenging benchmarks, i.e., Fashionpedia and Kinetics-TPS, demonstrate the effectiveness and generalizability of the KE-RCNN. In particular, it achieves consistent improvements over all existing methods, reaching about 3% AP on Fashionpedia and about 4% Acc on Kinetics-TPS.
Human dense pose estimation, which aims to establish dense correspondences between 2D pixels of the human body and a 3D human body template, is one of the key techniques for enabling machines to understand people in images. It still poses several challenges, since practical scenes are complicated and only partial annotations are available, leading to incomplete or false estimations. In this work, we present a novel framework to detect the dense pose of multiple people in an image. The proposed method, which we refer to as the Knowledge Transfer Network (KTN), tackles two main problems: 1) how to refine the image representation to alleviate incomplete estimations, and 2) how to reduce false estimations caused by low-quality training labels (i.e., limited annotations and class-imbalanced labels). Unlike existing works that directly propagate the pyramidal features of regions for dense pose estimation, KTN performs a refinement of the pyramidal representation, in which it maintains feature resolution and suppresses background pixels at the same time; this strategy leads to a substantial increase in accuracy. Moreover, KTN enhances the ability of 3D-based body parsing with external knowledge, where it trains a 3D-based body parser from sufficient annotations through a structural body knowledge graph. In this way, it greatly reduces the adverse effects caused by low-quality annotations. The effectiveness of KTN is demonstrated by its superior performance over state-of-the-art methods on the DensePose-COCO dataset. Extensive ablation studies and experimental results on representative tasks (e.g., human body segmentation, human part segmentation, and keypoint detection) and two popular dense pose estimation pipelines (i.e., RCNN-based and fully convolutional frameworks) further indicate the generalizability of the proposed method.
Segmenting highly-overlapping image objects is challenging, because there is typically no distinction between real object contours and occlusion boundaries in images. Unlike previous instance segmentation methods, we model image formation as a composition of two overlapping layers and propose the Bilayer Convolutional Network (BCNet), where the top layer detects occluding objects (occluders) and the bottom layer infers partially occluded instances (occludees). The explicit modeling of the occlusion relationship with a bilayer structure naturally decouples the boundaries of the occluding and occluded instances, and considers the interaction between them during mask regression. We investigate the efficacy of the bilayer structure using two popular convolutional network designs, namely the Fully Convolutional Network (FCN) and the Graph Convolutional Network (GCN). Further, we formulate bilayer decoupling with the vision transformer (ViT) by representing instances in an image as separate learnable occluder and occludee queries. Large and consistent improvements with one-/two-stage and query-based object detectors under various backbones and network layer choices validate the generalization ability of bilayer decoupling, as shown on image instance segmentation benchmarks (COCO, KINS, COCOA) and video instance segmentation benchmarks (YTVIS, OVIS, BDD100K MOTS), especially for heavy occlusion cases. Code and data are available at https://github.com/lkeab/bcnet.
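The bilayer decoupling can be sketched as two mask branches, where the occludee branch is conditioned on the occluder branch's features; the following is a minimal FCN-style sketch under assumed channel sizes and fusion by addition, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class BilayerMaskHead(nn.Module):
    """Sketch of the bilayer idea: an occluder branch and an occludee branch,
    where the occludee branch is conditioned on occluder features."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.occluder = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.occludee = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.occluder_pred = nn.Conv2d(in_ch, 1, 1)   # mask of the occluding object
        self.occludee_pred = nn.Conv2d(in_ch, 1, 1)   # mask of the target instance

    def forward(self, roi_feat):
        top = self.occluder(roi_feat)                  # top layer: occluder
        occluder_mask = self.occluder_pred(top)
        # the bottom layer sees both the RoI features and the occluder features,
        # so occlusion boundaries and real contours can be disentangled
        bottom = self.occludee(roi_feat + top)
        occludee_mask = self.occludee_pred(bottom)
        return occluder_mask, occludee_mask

roi_feat = torch.randn(8, 256, 14, 14)                 # 8 RoIs
occ, tgt = BilayerMaskHead()(roi_feat)
print(occ.shape, tgt.shape)                            # (8, 1, 14, 14) each
```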
Existing instance segmentation methods have achieved impressive performance but still suffer from a common dilemma: redundant representations (e.g., multiple boxes, grids, and anchor points) are inferred for one instance, which leads to multiple duplicated predictions. Thus, mainstream methods usually rely on a hand-designed non-maximum suppression (NMS) post-processing step to select the optimal prediction, which hinders end-to-end training. To address this issue, we propose a box-free and NMS-free end-to-end instance segmentation framework, termed UniInst, that yields only one unique representation for each instance. Specifically, we design an instance-aware one-to-one assignment scheme, namely Only Yield One Representation (OYOR), which dynamically assigns one unique representation to each instance according to the matching quality between predictions and ground truths. Then, a novel prediction re-ranking strategy is elegantly integrated into the framework to address the misalignment between classification score and mask quality, making the learned representations more discriminative. With these techniques, our UniInst, the first FCN-based box-free and NMS-free instance segmentation framework, achieves competitive performance with ResNet-50-FPN and ResNet-101-FPN backbones (e.g., 40.2 mask AP with ResNet-101-FPN) against mainstream methods on COCO test-dev. Moreover, the proposed instance-aware method is robust to occlusion scenes, outperforming common baselines by a remarkable mask AP on the heavily-occluded OCHuman benchmark. Our code will be made available upon publication.
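One plausible form of the OYOR assignment is to score every prediction-to-ground-truth pair by a combination of classification score and mask quality and keep a single best prediction per instance; the weighting below is an assumed placeholder, not the paper's exact matching-quality definition:

```python
import torch

def oyor_assign(cls_score, mask_iou, alpha=0.8):
    """Sketch of a quality-driven one-to-one assignment: for each ground-truth
    instance, keep the single prediction with the best combined quality.
    cls_score: (num_preds, num_gts) classification probability of the GT class
    mask_iou:  (num_preds, num_gts) IoU between predicted and GT masks
    alpha:     balance between classification and mask quality (assumed form)."""
    quality = cls_score.pow(alpha) * mask_iou.pow(1.0 - alpha)
    assigned_pred = quality.argmax(dim=0)          # one prediction index per GT
    return assigned_pred

# toy example: 5 candidate predictions, 2 ground-truth instances
cls_score = torch.rand(5, 2)
mask_iou = torch.rand(5, 2)
print(oyor_assign(cls_score, mask_iou))            # e.g., tensor([3, 0])
```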
Recently, diffusion frameworks have achieved comparable performance with previous state-of-the-art image generation models. Researchers are curious about its variants in discriminative tasks because of its powerful noise-to-image denoising pipeline. This paper proposes DiffusionInst, a novel framework that represents instances as instance-aware filters and formulates instance segmentation as a noise-to-filter denoising process. The model is trained to reverse the noisy groundtruth without any inductive bias from RPN. During inference, it takes a randomly generated filter as input and outputs mask in one-step or multi-step denoising. Extensive experimental results on COCO and LVIS show that DiffusionInst achieves competitive performance compared to existing instance segmentation models. We hope our work could serve as a simple yet effective baseline, which could inspire designing more efficient diffusion frameworks for challenging discriminative tasks. Our code is available in https://github.com/chenhaoxing/DiffusionInst.
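A toy sketch of the noise-to-filter formulation (with an assumed filter dimension, a simplified noise schedule, and direct clean-filter prediction; the actual DiffusionInst training recipe may differ) could look like:

```python
import torch
import torch.nn as nn

class FilterDenoiser(nn.Module):
    """Toy denoiser: predicts a clean instance filter from a noisy one,
    conditioned on a global image embedding (all sizes are assumptions)."""
    def __init__(self, filter_dim=169, img_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(filter_dim + img_dim + 1, 256), nn.ReLU(inplace=True),
            nn.Linear(256, filter_dim))

    def forward(self, noisy_filter, img_embed, t):
        x = torch.cat([noisy_filter, img_embed, t], dim=-1)
        return self.net(x)

def train_step(model, gt_filter, img_embed, T=1000):
    # sample a timestep and corrupt the ground-truth filter (simplified schedule)
    t = torch.randint(1, T, (gt_filter.size(0), 1)).float() / T
    noise = torch.randn_like(gt_filter)
    noisy = (1 - t).sqrt() * gt_filter + t.sqrt() * noise   # simplified q(x_t | x_0)
    pred = model(noisy, img_embed, t)
    return nn.functional.mse_loss(pred, gt_filter)          # predict the clean filter

model = FilterDenoiser()
loss = train_step(model, torch.randn(4, 169), torch.randn(4, 256))
print(loss.item())
```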
We propose a simple yet effective instance segmentation framework, termed CondInst (conditional convolutions for instance segmentation). Top-performing instance segmentation methods such as Mask R-CNN rely on ROI operations (typically ROIPool or ROIAlign) to obtain the final instance masks. In contrast, we propose to solve instance segmentation from a new perspective. Instead of using instance-wise ROIs as inputs to a network of fixed weights, we employ dynamic instance-aware networks, conditioned on instances. CondInst enjoys two advantages: 1) Instance segmentation is solved by a fully convolutional network, eliminating the need for ROI cropping and feature alignment. 2) Due to the much improved capacity of dynamically-generated conditional convolutions, the mask head can be very compact (e.g., 3 conv. layers, each having only 8 channels), leading to significantly faster inference. We demonstrate a simpler instance segmentation method that can achieve improved performance in both accuracy and inference speed. On the COCO dataset, we outperform a few recent methods including well-tuned Mask R-CNN baselines, without longer training schedules needed.
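The abstract's compact dynamic mask head (3 conv layers, 8 channels each) can be sketched as follows; the relative-coordinate input and the 169-parameter packing follow the commonly described CondInst setup, but the exact layout here is an assumption:

```python
import torch
import torch.nn.functional as F

def condinst_mask_head(mask_feat, rel_coords, params):
    """Apply a dynamically-generated 3-layer mask head (8 channels per layer);
    the weight packing below is one plausible layout, not the official one.
    mask_feat:  (C, H, W) shared mask-branch features (C = 8 here)
    rel_coords: (2, H, W) coordinates relative to the instance center
    params:     flat vector of per-instance conv weights and biases."""
    x = torch.cat([mask_feat, rel_coords], dim=0).unsqueeze(0)   # (1, 10, H, W)
    sizes = [(8, 10, 1, 1), (8,), (8, 8, 1, 1), (8,), (1, 8, 1, 1), (1,)]
    chunks, offset = [], 0
    for s in sizes:
        n = int(torch.tensor(s).prod())
        chunks.append(params[offset:offset + n].reshape(s))
        offset += n
    w1, b1, w2, b2, w3, b3 = chunks
    x = F.relu(F.conv2d(x, w1, b1))
    x = F.relu(F.conv2d(x, w2, b2))
    return F.conv2d(x, w3, b3)                                   # (1, 1, H, W) mask logits

# toy usage: 169 = (10*8+8) + (8*8+8) + (8*1+1) dynamic parameters per instance
mask_feat = torch.randn(8, 100, 100)
rel_coords = torch.randn(2, 100, 100)
params = torch.randn(169)
print(condinst_mask_head(mask_feat, rel_coords, params).shape)
```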
Arbitrary-shaped scene text detection is a challenging task due to the variety of text changes in font, size, color, and orientation. Most existing regression-based methods resort to regressing the masks or contour points of text regions to model the text instances. However, regressing the complete mask requires high training complexity, and contour points are not sufficient to capture the details of highly curved texts. To tackle the above limitations, we propose a novel lightweight anchor-free text detection framework called TextDCT, which adopts the discrete cosine transform (DCT) to encode text masks as compact vectors. Further, considering the imbalanced number of training samples among pyramid layers, we employ only a single-level head for top-down prediction. To model multi-scale texts with a single-level head, we introduce a novel positive sampling strategy that treats the shrunk text region as positive samples, and design a feature awareness module (FAM) for spatial awareness and scale awareness by fusing rich contextual information and focusing on more significant features. Moreover, we propose a segmented non-maximum suppression (S-NMS) method that can filter out low-quality mask regressions. Extensive experiments are conducted on four challenging datasets, demonstrating that our TextDCT obtains competitive performance in both accuracy and efficiency. Specifically, TextDCT achieves an F-measure of 85.1 at 17.2 frames per second (FPS) on CTW1500 and an F-measure of 84.9 at 15.1 FPS on Total-Text.
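The core DCT mask encoding can be illustrated with a few lines of NumPy/SciPy: resize the mask to a fixed grid, take a 2D DCT, and keep the low-frequency coefficients as the compact vector (the grid size and vector length below are assumptions, not the paper's settings):

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_mask(mask, size=48, keep=100):
    """Sketch of the DCT mask representation: resize a binary text mask to a
    fixed grid, take its 2D DCT, and keep the first low-frequency coefficients
    in zig-zag order (the vector length `keep` is an assumption)."""
    # nearest-neighbour resize to a fixed grid (avoids extra dependencies)
    ys = np.linspace(0, mask.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, size).astype(int)
    fixed = mask[np.ix_(ys, xs)].astype(np.float32)
    coeffs = dctn(fixed, norm='ortho')
    # zig-zag order: sort indices by increasing frequency (row + col)
    order = sorted(((r, c) for r in range(size) for c in range(size)),
                   key=lambda rc: (rc[0] + rc[1], rc[0]))
    return np.array([coeffs[r, c] for r, c in order[:keep]]), order

def decode_mask(vector, order, size=48):
    """Inverse: scatter the kept coefficients back and apply the inverse DCT."""
    coeffs = np.zeros((size, size), dtype=np.float32)
    for v, (r, c) in zip(vector, order[:len(vector)]):
        coeffs[r, c] = v
    return (idctn(coeffs, norm='ortho') > 0.5).astype(np.uint8)

mask = np.zeros((120, 300), dtype=np.uint8)
mask[40:80, 30:270] = 1                       # a toy text region
vec, order = encode_mask(mask)
recon = decode_mask(vec, order)
print(vec.shape, recon.shape)                 # (100,) (48, 48)
```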
In contrast to fully supervised methods using pixel-wise mask labels, box-supervised instance segmentation takes advantage of simple box annotations, which has recently attracted increasing research attention. This paper presents a novel single-shot instance segmentation approach, namely Box2Mask, which integrates the classical level-set evolution model into deep neural network learning to achieve accurate mask prediction with only bounding box supervision. Specifically, both the input image and its deep features are employed to evolve the level-set curves implicitly, and a local consistency module based on a pixel affinity kernel is used to mine the local context and spatial relations. Two types of single-stage frameworks, i.e., CNN-based and transformer-based frameworks, are developed to empower the level-set evolution for box-supervised instance segmentation, and each framework consists of three essential components: instance-aware decoder, box-level matching assignment and level-set evolution. By minimizing the level-set energy function, the mask map of each instance can be iteratively optimized within its bounding box annotation. The experimental results on five challenging testbeds, covering general scenes, remote sensing, medical and scene text images, demonstrate the outstanding performance of our proposed Box2Mask approach for box-supervised instance segmentation. In particular, with the Swin-Transformer large backbone, our Box2Mask obtains 42.4% mask AP on COCO, which is on par with the recently developed fully mask-supervised methods. The code is available at: https://github.com/LiWentomng/boxlevelset.
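A generic Chan-Vese-style level-set energy of the kind referred to above can be written as a differentiable loss over the predicted mask inside a box; the sketch below is a textbook formulation for illustration, not Box2Mask's exact energy or local-consistency module:

```python
import torch

def level_set_energy(phi, image_roi, eps=1e-6):
    """Generic Chan-Vese-style energy on a predicted soft mask phi within one
    bounding box; minimizing it pulls phi towards a region whose inside and
    outside intensities are each homogeneous.
    phi:       (H, W) soft mask in [0, 1], acting as the Heaviside of the level set
    image_roi: (C, H, W) input image (or deep features) cropped to the box."""
    inside, outside = phi.unsqueeze(0), (1.0 - phi).unsqueeze(0)
    c1 = (image_roi * inside).sum(dim=(1, 2)) / (inside.sum() + eps)    # mean inside
    c2 = (image_roi * outside).sum(dim=(1, 2)) / (outside.sum() + eps)  # mean outside
    region = ((image_roi - c1[:, None, None]) ** 2 * inside +
              (image_roi - c2[:, None, None]) ** 2 * outside).sum()
    # smoothness term on the mask boundary (total variation of phi)
    length = (phi[1:, :] - phi[:-1, :]).abs().sum() + (phi[:, 1:] - phi[:, :-1]).abs().sum()
    return region + 0.1 * length                                        # weight is an assumption

phi = torch.rand(32, 32, requires_grad=True)
image_roi = torch.rand(3, 32, 32)
level_set_energy(phi, image_roi).backward()    # gradients flow back to the mask prediction
print(phi.grad.shape)
```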
We present a conceptually simple, flexible, and universal visual perception head for variant visual tasks, such as classification, object detection, instance segmentation, and pose estimation, as well as different frameworks, such as one-stage or two-stage pipelines. Our approach effectively identifies objects in an image while simultaneously generating high-quality bounding boxes, contour-based segmentation masks, or sets of keypoints. The method, called UniHead, views different visual perception tasks as dispersible points learning via a transformer encoder architecture. Given a fixed spatial coordinate, UniHead adaptively scatters it to different spatial points and reasons about their relations. It directly outputs the final set of predictions in the form of multiple points, allowing us to perform different visual tasks in different frameworks with the same head design. We present extensive evaluations on ImageNet classification and on all three tracks of the COCO suite of challenges, including object detection, instance segmentation, and pose estimation. Without bells and whistles, UniHead can unify these visual tasks with a single head design and achieve performance comparable to expert models developed for each task. We hope our simple and universal UniHead will serve as a solid baseline and help promote research on universal visual perception. Code and models are available at https://github.com/sense-x/unihead.
Dense pose estimation is a dense 3D prediction task for instance-level human analysis, aiming to map human pixels from an RGB image to a 3D surface of the human body. Due to a large amount of surface point regression, the training process appears to be easy to collapse compared to other region-based human instance analyzing tasks. By analyzing the loss formulation of the existing dense pose estimation model, we introduce a novel point regression loss function, named Dense Points loss, to stabilize the training process, and a new balanced loss weighting strategy to handle the multi-task losses. With the above novelties, we propose a brand new architecture, named UV R-CNN. Without auxiliary supervision and external knowledge from other tasks, UV R-CNN can handle many complicated issues in dense pose model training, achieving 65.0% $AP_{gps}$ and 66.1% $AP_{gpsm}$ on the DensePose-COCO validation subset with a ResNet-50-FPN feature extractor, competitive among the state-of-the-art dense human pose estimation methods.
3D object detection is receiving increasing attention from both industry and academia thanks to its wide applications in various fields. In this paper, we propose Point-Voxel Region-based Convolutional Neural Networks (PV-RCNNs) for 3D object detection from point clouds. First, we propose a novel 3D detector, PV-RCNN, which consists of two steps: voxel-to-keypoint scene encoding and keypoint-to-grid RoI feature abstraction. These two steps deeply integrate the 3D voxel CNN with point-based set abstraction for extracting discriminative features. Second, we propose an advanced framework, PV-RCNN++, for more efficient and accurate 3D object detection. It consists of two major improvements: a sectorized proposal-centric strategy for efficiently producing more representative keypoints, and VectorPool aggregation for better aggregating local point features with much less resource consumption. With these two strategies, our PV-RCNN++ is more than 2x faster than PV-RCNN, while also achieving better performance on the large-scale Waymo Open Dataset with a 150m * 150m detection range. Moreover, our proposed PV-RCNNs achieve state-of-the-art 3D detection performance on both the Waymo Open Dataset and the highly-competitive KITTI benchmark. The source code is available at https://github.com/open-mmlab/openpcdet.
Semantic, instance, and panoptic segmentation have been addressed using different and specialized frameworks despite their underlying connections. This paper presents a unified, simple, and effective framework for these essentially similar tasks. The framework, named K-Net, segments both instances and semantic categories consistently by a group of learnable kernels, where each kernel is responsible for generating a mask for either a potential instance or a stuff class. To remedy the difficulty of distinguishing various instances, we propose a kernel update strategy that enables each kernel to be dynamic and conditional on its meaningful group in the input image. K-Net can be trained in an end-to-end manner with bipartite matching, and its training and inference are naturally NMS-free and box-free. Without bells and whistles, K-Net surpasses all previously published state-of-the-art single-model results of panoptic segmentation on MS COCO test-dev and semantic segmentation on ADE20K val with 55.2% PQ and 54.3% mIoU, respectively. Its instance segmentation performance is also on par with Cascade Mask R-CNN on MS COCO with 60%-90% faster inference speed. Code and models will be released at https://github.com/zwwwayne/k-net/.
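One much-simplified kernel-update iteration might assemble a mask-weighted group feature per kernel and fuse it back via a gate; the sketch below only conveys the idea of kernels being conditioned on their groups, not K-Net's adaptive kernel update head:

```python
import torch
import torch.nn as nn

class KernelUpdate(nn.Module):
    """One simplified kernel-update iteration: each kernel assembles the feature
    of its current mask group and is then updated from that group feature
    (a much-reduced version of the adaptive update described in the abstract)."""
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, kernels, feats, masks):
        # kernels: (N, C)  one learnable kernel per instance / stuff class
        # feats:   (C, H, W) image features; masks: (N, H, W) current mask logits
        attn = masks.sigmoid()
        group = torch.einsum('nhw,chw->nc', attn, feats) / (attn.sum((1, 2)).unsqueeze(-1) + 1e-6)
        fused = torch.cat([kernels, group], dim=-1)
        new_kernels = torch.sigmoid(self.gate(fused)) * torch.tanh(self.update(fused)) + kernels
        new_masks = torch.einsum('nc,chw->nhw', new_kernels, feats)   # re-predict masks
        return new_kernels, new_masks

ku = KernelUpdate(dim=256)
kernels = torch.randn(100, 256)                 # 100 kernels (things + stuff)
feats = torch.randn(256, 64, 64)
masks = torch.randn(100, 64, 64)
k2, m2 = ku(kernels, feats, masks)
print(k2.shape, m2.shape)                       # (100, 256) (100, 64, 64)
```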
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.
Mask R-CNN
We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.
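Mask R-CNN is available off the shelf, e.g., in torchvision; below is a minimal inference sketch (the exact model-loading API depends on the installed torchvision version):

```python
import torch
import torchvision

# Load a Mask R-CNN with a ResNet-50-FPN backbone from torchvision
# ("weights=" is the newer API; older versions use pretrained=True instead).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)                # RGB image, values in [0, 1]
with torch.no_grad():
    outputs = model([image])                   # a list with one dict per image

out = outputs[0]
keep = out["scores"] > 0.5                     # simple confidence threshold
boxes = out["boxes"][keep]                     # (K, 4) xyxy boxes
masks = out["masks"][keep]                     # (K, 1, H, W) soft instance masks
labels = out["labels"][keep]                   # (K,) COCO category ids
print(boxes.shape, masks.shape, labels.shape)
```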
We propose a sparse end-to-end multi-person pose regression framework, termed QueryPose, which can directly predict multi-person keypoint sequences from the input image. The existing end-to-end methods rely on dense representations to preserve the spatial detail and structure for precise keypoint localization. However, the dense paradigm introduces complex and redundant post-processes during inference. In our framework, each human instance is encoded by several learnable spatial-aware part-level queries associated with an instance-level query. First, we propose the Spatial Part Embedding Generation Module (SPEGM) that considers the local spatial attention mechanism to generate several spatial-sensitive part embeddings, which contain spatial details and structural information for enhancing the part-level queries. Second, we introduce the Selective Iteration Module (SIM) to adaptively update the sparse part-level queries via the generated spatial-sensitive part embeddings stage-by-stage. Based on the two proposed modules, the part-level queries are able to fully encode the spatial details and structural information for precise keypoint regression. With the bipartite matching, QueryPose avoids the hand-designed post-processes and surpasses the existing dense end-to-end methods with 73.6 AP on MS COCO mini-val set and 72.7 AP on CrowdPose test set. Code is available at https://github.com/buptxyb666/QueryPose.
The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these "hard" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the "simple" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the "hard" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-the-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19% relative improvement compared with 60.5 from the COCO 2016 keypoint challenge. Code and the detection results are publicly available for further research.
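The online hard keypoint mining used by RefineNet can be sketched as an L2 heatmap loss that back-propagates only the hardest keypoints per person (the top-k value below is an assumption):

```python
import torch

def ohkm_loss(pred_heatmaps, gt_heatmaps, topk=8):
    """Online hard keypoint mining: compute a per-keypoint L2 loss on the
    predicted heatmaps and keep only the top-k hardest keypoints per person
    (topk is an assumed value).
    pred_heatmaps, gt_heatmaps: (N, K, H, W) for N persons and K keypoints."""
    per_kpt = ((pred_heatmaps - gt_heatmaps) ** 2).mean(dim=(2, 3))  # (N, K)
    hard, _ = per_kpt.topk(topk, dim=1)                              # hardest keypoints
    return hard.mean()

pred = torch.randn(4, 17, 64, 48, requires_grad=True)
gt = torch.randn(4, 17, 64, 48)
loss = ohkm_loss(pred, gt)
loss.backward()
print(loss.item())
```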
Recent one-stage object detectors follow a per-pixel prediction approach that predicts both the object category scores and boundary positions from every single grid location. However, the most suitable positions for inferring different targets, i.e., the object category and boundaries, are generally different. Predicting all these targets from the same grid location thus may lead to sub-optimal results. In this paper, we analyze the suitable inference positions for object category and boundaries, and propose a prediction-target-decoupled detector named PDNet to establish a more flexible detection paradigm. Our PDNet with the prediction decoupling mechanism encodes different targets separately in different locations. A learnable prediction collection module is devised with two sets of dynamic points, i.e., dynamic boundary points and semantic points, to collect and aggregate the predictions from the favorable regions for localization and classification. We adopt a two-step strategy to learn these dynamic point positions, where the prior positions are estimated for different targets first, and the network further predicts residual offsets to the positions with better perceptions of the object properties. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness and efficiency of our method. With a single ResNeXt-64x4d-101-DCN as the backbone, our detector achieves 50.1 AP with single-scale testing, which outperforms the state-of-the-art methods by an appreciable margin under the same experimental settings. Moreover, our detector is highly efficient as a one-stage framework. Our code is public at https://github.com/yangli18/PDNet.
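Collecting predictions from a set of dynamic points can be approximated with grid sampling on a dense prediction map; the sketch below uses simple averaging and is only a simplification of the learnable prediction collection module described above:

```python
import torch
import torch.nn.functional as F

def collect_predictions(pred_map, points):
    """Sketch of prediction collection: sample a dense prediction map at a set
    of learned dynamic point locations and aggregate them by averaging
    (a simplification of the module described in the abstract).
    pred_map: (1, C, H, W) per-pixel predictions (e.g., class logits)
    points:   (N, P, 2) normalized xy locations in [-1, 1], P points per object."""
    grid = points.unsqueeze(0)                                    # (1, N, P, 2)
    sampled = F.grid_sample(pred_map, grid, align_corners=False)  # (1, C, N, P)
    return sampled.mean(dim=-1).squeeze(0).t()                    # (N, C) per object

pred_map = torch.randn(1, 80, 100, 100)
points = torch.rand(5, 9, 2) * 2 - 1                 # 5 objects, 9 dynamic points each
print(collect_predictions(pred_map, points).shape)   # torch.Size([5, 80])
```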