A key component of understanding hand-object interactions is the ability to identify the active object: the object being manipulated by the human hand. In order to accurately localize the active object, any method must reason using information encoded at each image pixel, such as whether it belongs to the hand, the object, or the background. To leverage each pixel as evidence for determining the bounding box of the active object, we propose a pixel-wise voting function. Our pixel-wise voting function takes an initial bounding box as input and produces an improved bounding box of the active object as output. The voting function is designed so that each pixel inside the input bounding box votes for an improved bounding box, and the box with the majority vote is selected as the output. We call the collection of bounding boxes generated inside the voting function the Relational Box Field, as it characterizes a field of bounding boxes defined in relationship to the current bounding box. While our voting function is able to improve the bounding box of the active object, one round of voting is typically not enough to accurately localize the active object. We therefore apply the voting function repeatedly to sequentially improve the location of the bounding box. However, since repeatedly applying a one-step predictor (i.e., an auto-regressive process using our voting function) is known to cause data distribution shift, we mitigate this issue using reinforcement learning (RL). We adopt standard RL to learn the voting function parameters and show that it provides a meaningful improvement over a standard supervised learning approach. We perform experiments on two large-scale datasets, 100DOH and MECCANO, improving AP50 performance by 8% and 30%, respectively, over the state of the art.
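To make the voting mechanism concrete, below is a minimal sketch of one voting round followed by the iterative refinement loop, assuming a model that outputs a per-pixel box field; the quantization-based majority vote, bin size, and toy data are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def vote_refined_box(box_field, cur_box, bin_size=8):
    """box_field: (H, W, 4) per-pixel candidate boxes (x1, y1, x2, y2).
    cur_box: (x1, y1, x2, y2); only pixels inside it cast votes."""
    H, W = box_field.shape[:2]
    x1, y1, x2, y2 = (int(round(v)) for v in cur_box)
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(max(x2, x1 + 1), W), min(max(y2, y1 + 1), H)
    votes = box_field[y1:y2, x1:x2].reshape(-1, 4)
    # Quantize candidates so near-identical votes pool into the same bin.
    keys = np.round(votes / bin_size).astype(np.int64)
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    winner = inv == counts.argmax()
    return votes[winner].mean(axis=0)   # mean of the majority bin

# Iterate the one-step voter to sequentially refine the box.
rng = np.random.default_rng(0)
gt = np.array([10.0, 12.0, 44.0, 50.0])
field = gt + rng.normal(0.0, 3.0, size=(64, 64, 4))   # votes clustered at gt
box = np.array([4.0, 4.0, 60.0, 60.0])
for _ in range(3):
    box = vote_refined_box(field, box)
print(box)   # close to gt after a few voting rounds
```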
This paper addresses the challenge of 6DoF pose estimation from a single RGB image under severe occlusion or truncation. Many recent works have shown that a two-stage approach, which first detects keypoints and then solves a Perspective-n-Point (PnP) problem for pose estimation, achieves remarkable performance. However, most of these methods only localize a set of sparse keypoints by regressing their image coordinates or heatmaps, which are sensitive to occlusion and truncation. Instead, we introduce a Pixel-wise Voting Network (PVNet) to regress pixel-wise unit vectors pointing to the keypoints and use these vectors to vote for keypoint locations using RANSAC. This creates a flexible representation for localizing occluded or truncated keypoints. Another important feature of this representation is that it provides uncertainties of keypoint locations that can be further leveraged by the PnP solver. Experiments show that the proposed approach outperforms the state of the art on the LINEMOD, Occlusion LINEMOD and YCB-Video datasets by a large margin, while being efficient for real-time pose estimation. We further create a Truncation LINEMOD dataset to validate the robustness of our approach against truncation. The code will be available at https://zju-3dv.github.io/pvnet/.
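As a rough illustration of the vector-field voting idea, the following sketch hypothesizes keypoint locations by intersecting rays from random pixel pairs and scores them by how many pixel vectors agree (RANSAC); the iteration count and the cosine inlier threshold are assumptions, not PVNet's exact values.

```python
import numpy as np

def intersect(p1, d1, p2, d2):
    # Solve p1 + t1*d1 = p2 + t2*d2 for the 2D intersection point.
    A = np.stack([d1, -d2], axis=1)
    if abs(np.linalg.det(A)) < 1e-8:
        return None
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

def ransac_vote(pixels, dirs, n_iters=256, cos_thresh=0.99, rng=None):
    """pixels: (N, 2) foreground pixel coords; dirs: (N, 2) unit vectors."""
    rng = rng or np.random.default_rng(0)
    best_pt, best_inliers = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(pixels), size=2, replace=False)
        pt = intersect(pixels[i], dirs[i], pixels[j], dirs[j])
        if pt is None:
            continue
        to_pt = pt - pixels
        to_pt /= np.linalg.norm(to_pt, axis=1, keepdims=True) + 1e-8
        inliers = (to_pt * dirs).sum(axis=1) > cos_thresh
        if inliers.sum() > best_inliers:
            best_pt, best_inliers = pt, inliers.sum()
    return best_pt, best_inliers

# Toy check: unit vectors from random pixels toward a known keypoint.
rng = np.random.default_rng(1)
kp = np.array([30.0, 20.0])
pix = rng.uniform(0, 64, size=(200, 2))
d = kp - pix
d /= np.linalg.norm(d, axis=1, keepdims=True)
est, n_in = ransac_vote(pix, d, rng=rng)
print(est, n_in)  # est should be close to kp
```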
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies on local evidence only. On the COCO dataset, HoughNet's best model achieves 46.4 AP (and 65.1 AP50), performing on par with the state of the art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in other visual detection tasks, namely video object detection, instance segmentation, 3D object detection, and keypoint detection for human pose estimation, as well as an additional "labels to photo" image generation task, where the integration of our voting module consistently improves performance in all cases. Code is available at https://github.com/nerminsamet/houghnet.
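A toy sketch of the vote-accumulation idea follows: each location with class-conditional evidence spreads its vote over log-polar rings, and the object presence score at a location is the sum of votes it receives; the ring radii and uniform in-ring weighting are illustrative, not HoughNet's learned vote field.

```python
import numpy as np

def log_polar_votes(evidence, radii=(1, 2, 4, 8, 16)):
    """evidence: (H, W) class-conditional scores. Returns the vote map."""
    H, W = evidence.shape
    vote_map = np.zeros_like(evidence)
    ys, xs = np.nonzero(evidence > 0)
    for y, x in zip(ys, xs):
        for r_in, r_out in zip((0,) + radii[:-1], radii):
            # Every cell in the ring receives an equal share of this vote.
            yy, xx = np.mgrid[max(0, y - r_out):min(H, y + r_out + 1),
                              max(0, x - r_out):min(W, x + r_out + 1)]
            dist = np.hypot(yy - y, xx - x)
            ring = (dist > r_in) & (dist <= r_out)
            if ring.any():
                vote_map[yy[ring], xx[ring]] += evidence[y, x] / ring.sum()
    return vote_map

ev = np.zeros((32, 32))
ev[10, 10] = ev[12, 14] = 1.0           # two pieces of evidence
print(log_polar_votes(ev).argmax())      # flat index of the top-voted cell
```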
Single-frame InfraRed Small Target (SIRST) detection has been a challenging task due to a lack of inherent characteristics, imprecise bounding box regression, a scarcity of real-world datasets, and sensitive localization evaluation. In this paper, we propose a comprehensive solution to these challenges. First, we find that the existing anchor-free label assignment method is prone to mislabeling small targets as background, leading to their omission by detectors. To overcome this issue, we propose an all-scale pseudo-box-based label assignment scheme that relaxes the constraints on scale and decouples the spatial assignment from the size of the ground-truth target. Second, motivated by the structured prior of feature pyramids, we introduce the one-stage cascade refinement network (OSCAR), which uses the high-level head as soft proposals for the low-level refinement head. This allows OSCAR to process the same target in a cascade coarse-to-fine manner. Finally, we present a new research benchmark for infrared small target detection, consisting of the SIRST-V2 dataset of real-world, high-resolution single-frame targets, the normalized contrast evaluation metric, and the DeepInfrared toolkit for detection. We conduct extensive ablation studies to evaluate the components of OSCAR and compare its performance to state-of-the-art model-driven and data-driven methods on the SIRST-V2 benchmark. Our results demonstrate that a top-down cascade refinement framework can improve the accuracy of infrared small target detection without sacrificing efficiency. The DeepInfrared toolkit, dataset, and trained models are available at https://github.com/YimianDai/open-deepinfrared to advance further research in this field.
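The label-assignment idea can be illustrated with a short sketch: positives are feature-map cells inside a fixed-size pseudo box around the target centre, independent of the tiny ground-truth extent; the radius and stride handling below are assumptions, not OSCAR's exact scheme.

```python
import numpy as np

def assign_labels(gt_centers, feat_h, feat_w, stride, pseudo_radius=2.0):
    """Returns a (feat_h, feat_w) positive mask for one pyramid level."""
    ys, xs = np.mgrid[0:feat_h, 0:feat_w]
    # Feature-map cell centres mapped back to image coordinates.
    cx = (xs + 0.5) * stride
    cy = (ys + 0.5) * stride
    pos = np.zeros((feat_h, feat_w), dtype=bool)
    for gx, gy in gt_centers:
        # Pseudo box of `pseudo_radius` cells, regardless of the GT size.
        inside = (np.abs(cx - gx) <= pseudo_radius * stride) & \
                 (np.abs(cy - gy) <= pseudo_radius * stride)
        pos |= inside
    return pos

# A 3x3-pixel target would cover no cell at stride 8 under box-based
# assignment; the pseudo box still yields several positives.
mask = assign_labels([(37.0, 21.0)], feat_h=16, feat_w=16, stride=8)
print(mask.sum())
```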
In this paper, we propose a simple attention mechanism that we call box-attention. It enables spatial interaction between grid features, as sampled from boxes of interest, and improves the learning capability of transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on an input feature map. BoxeR computes attention weights on these boxes by considering their grid structure. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is capable of generating discriminative information from a bird's-eye-view plane for end-to-end 3D object detection. Our experiments show that the proposed BoxeR-2D achieves better results on COCO detection and reaches performance comparable to the well-established and highly optimized Mask R-CNN on COCO instance segmentation. BoxeR-3D already provides compelling performance for the vehicle category of Waymo Open without any class-specific optimization. Code will be released.
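A minimal sketch of a box-attention-style module is given below: a query predicts a transformation of its reference window, an m x m grid of features is sampled from the transformed box, and the query attends over that grid; the parameterization and layer sizes are illustrative assumptions rather than BoxeR's exact design.

```python
import torch
import torch.nn.functional as F

class BoxAttention(torch.nn.Module):
    def __init__(self, dim, m=7):
        super().__init__()
        self.m = m
        self.to_delta = torch.nn.Linear(dim, 4)    # (dx, dy, dw, dh)
        self.to_score = torch.nn.Linear(dim, m * m)
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, query, feat, ref_box):
        """query: (B, D); feat: (B, D, H, W); ref_box: (B, 4) normalized
        (cx, cy, w, h) in [0, 1]."""
        dx, dy, dw, dh = self.to_delta(query).unbind(-1)
        cx = ref_box[:, 0] + dx * ref_box[:, 2]
        cy = ref_box[:, 1] + dy * ref_box[:, 3]
        w = ref_box[:, 2] * dw.exp()
        h = ref_box[:, 3] * dh.exp()
        # Build an m x m sampling grid inside the transformed box.
        lin = torch.linspace(-0.5, 0.5, self.m, device=feat.device)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        sx = (cx[:, None, None] + gx * w[:, None, None]) * 2 - 1
        sy = (cy[:, None, None] + gy * h[:, None, None]) * 2 - 1
        grid = torch.stack([sx, sy], dim=-1)                   # (B, m, m, 2)
        vals = F.grid_sample(feat, grid, align_corners=False)  # (B, D, m, m)
        vals = vals.flatten(2).transpose(1, 2)                 # (B, m*m, D)
        attn = self.to_score(query).softmax(dim=-1)            # (B, m*m)
        out = (attn.unsqueeze(-1) * vals).sum(dim=1)           # (B, D)
        return self.proj(out)

attn = BoxAttention(dim=32)
y = attn(torch.randn(2, 32), torch.randn(2, 32, 16, 16),
         torch.tensor([[0.5, 0.5, 0.4, 0.4], [0.3, 0.6, 0.2, 0.3]]))
print(y.shape)  # torch.Size([2, 32])
```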
Recent one-stage object detectors follow a per-pixel prediction approach that predicts both the object category scores and boundary positions from every single grid location. However, the most suitable positions for inferring different targets, i.e., the object category and boundaries, are generally different. Predicting all these targets from the same grid location thus may lead to sub-optimal results. In this paper, we analyze the suitable inference positions for object category and boundaries, and propose a prediction-target-decoupled detector named PDNet to establish a more flexible detection paradigm. Our PDNet with the prediction decoupling mechanism encodes different targets separately in different locations. A learnable prediction collection module is devised with two sets of dynamic points, i.e., dynamic boundary points and semantic points, to collect and aggregate the predictions from the favorable regions for localization and classification. We adopt a two-step strategy to learn these dynamic point positions, where the prior positions are estimated for different targets first, and the network further predicts residual offsets to the positions with better perceptions of the object properties. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness and efficiency of our method. With a single ResNeXt-64x4d-101-DCN as the backbone, our detector achieves 50.1 AP with single-scale testing, which outperforms the state-of-the-art methods by an appreciable margin under the same experimental settings. Moreover, our detector is highly efficient as a one-stage framework. Our code is public at https://github.com/yangli18/PDNet.
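To illustrate the decoupled prediction-collection step, the sketch below gathers per-point predictions at prior positions plus learned residual offsets via bilinear sampling and averages them; the shapes, names, and mean aggregation are assumptions, not PDNet's exact module.

```python
import torch
import torch.nn.functional as F

def collect(pred_map, base_xy, prior_offsets, residual_offsets):
    """pred_map: (1, C, H, W); base_xy: (P, 2) normalized centres in [0, 1];
    offsets: (P, K, 2) normalized displacements. Returns (P, C)."""
    pts = base_xy[:, None, :] + prior_offsets + residual_offsets  # (P, K, 2)
    grid = (pts * 2 - 1).unsqueeze(0)                             # (1, P, K, 2)
    sampled = F.grid_sample(pred_map, grid, align_corners=False)  # (1, C, P, K)
    return sampled.mean(dim=-1).squeeze(0).t()                    # (P, C)

pred = torch.randn(1, 4, 32, 32)          # e.g. boundary-distance predictions
base = torch.tensor([[0.5, 0.5], [0.25, 0.75]])
prior = torch.randn(2, 9, 2) * 0.05       # step 1: estimated prior points
resid = torch.randn(2, 9, 2) * 0.01       # step 2: refined residual offsets
print(collect(pred, base, prior, resid).shape)   # torch.Size([2, 4])
```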
The challenging field of scene text detection requires complex data annotation, which is time-consuming and expensive. Techniques such as weak supervision can reduce the amount of data needed. In this paper, we propose a weak supervision method for scene text detection that makes use of reinforcement learning (RL). The reward received by the RL agent is estimated by a neural network rather than inferred from ground-truth labels. First, we enhance an existing supervised RL approach to text detection with several training optimizations, allowing us to close the performance gap to regression-based algorithms. We then use the proposed system in weakly- and semi-supervised training on real-world data. Our results show that training in a weakly supervised setting is feasible. However, we find that using our model in a semi-supervised setting, e.g., combining labeled synthetic data with unannotated real-world data, produces the best results.
Most state-of-the-art instance-level human parsing models adopt two-stage anchor-based detectors and therefore cannot avoid the heuristic anchor box design and the lack of analysis at the pixel level. To address these two issues, we design an instance-level human parsing network that is anchor-free and solvable at the pixel level. It consists of two simple sub-networks: an anchor-free detection head for bounding box prediction and an edge-guided parsing head for human segmentation. The anchor-free detection head inherits pixel-level merits and effectively avoids the sensitivity to hyper-parameters demonstrated in object detection applications. By introducing a part-aware boundary clue, the edge-guided parsing head is capable of distinguishing adjacent human parts from each other within one human instance, and even across overlapping instances. Meanwhile, a refinement head integrating box-level scores and part-level parsing quality is exploited to improve the quality of the parsing results. Experiments on two multiple human parsing datasets (i.e., CIHP and LV-MHP-v2.0) and one video instance-level human parsing dataset (i.e., VIP) show that our method achieves the best global-level and instance-level performance over state-of-the-art one-stage top-down alternatives.
Modern object detectors rely heavily on rectangular bounding boxes, such as anchors, proposals and the final predictions, to represent objects at various recognition stages. The bounding box is convenient to use but provides only a coarse localization of objects and leads to a correspondingly coarse extraction of object features. In this paper, we present RepPoints (representative points), a new finer representation of objects as a set of sample points useful for both localization and recognition. Given ground truth localization and recognition targets for training, RepPoints learn to automatically arrange themselves in a manner that bounds the spatial extent of an object and indicates semantically significant local areas. They furthermore do not require the use of anchors to sample a space of bounding boxes. We show that an anchor-free object detector based on RepPoints can be as effective as the state-of-the-art anchor-based detection methods, with 46.5 AP and 67.4 AP50 on the COCO test-dev detection benchmark, using a ResNet-101 model. Code is available at https://github.com/microsoft/RepPoints.
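One of the point-set-to-box conversion functions needed to supervise such a representation (the min-max transform over the point set) is simple enough to sketch directly; the toy data are illustrative.

```python
import torch

def points_to_pseudo_box(points):
    """points: (N, K, 2) sample points per object -> (N, 4) x1y1x2y2 boxes."""
    xy_min = points.min(dim=1).values
    xy_max = points.max(dim=1).values
    return torch.cat([xy_min, xy_max], dim=-1)

pts = torch.rand(8, 9, 2) * 100           # 9 representative points per object
boxes = points_to_pseudo_box(pts)          # pseudo boxes for box-level losses
print(boxes.shape)                         # torch.Size([8, 4])
```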
Three-dimensional objects are commonly represented as 3D boxes in a point-cloud. This representation mimics the well-studied image-based 2D bounding-box detection but comes with additional challenges. Objects in a 3D world do not follow any particular orientation, and box-based detectors have difficulties enumerating all orientations or fitting an axis-aligned bounding box to rotated objects. In this paper, we instead propose to represent, detect, and track 3D objects as points. Our framework, CenterPoint, first detects centers of objects using a keypoint detector and regresses to other attributes, including 3D size, 3D orientation, and velocity. In a second stage, it refines these estimates using additional point features on the object. In CenterPoint, 3D object tracking simplifies to greedy closest-point matching. The resulting detection and tracking algorithm is simple, efficient, and effective. CenterPoint achieved state-of-the-art performance on the nuScenes benchmark for both 3D detection and tracking, with 65.5 NDS and 63.8 AMOTA for a single model. On the Waymo Open Dataset, CenterPoint outperforms all previous single model methods by a large margin and ranks first among all Lidar-only submissions. The code and pretrained models are available at https://github.com/tianweiy/CenterPoint.
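The greedy closest-point matching used for tracking is straightforward to sketch: each current detection, displaced backward by its predicted velocity, is matched to the nearest unused previous track within a distance gate; the gate and time step here are illustrative.

```python
import numpy as np

def greedy_match(prev_centers, cur_centers, cur_velocity, dt=0.1, max_dist=2.0):
    """Greedy closest-point matching on BEV centres; returns (prev, cur) pairs."""
    back = cur_centers - cur_velocity * dt       # undo the predicted motion
    dists = np.linalg.norm(back[:, None, :] - prev_centers[None, :, :], axis=-1)
    order = np.unravel_index(np.argsort(dists, axis=None), dists.shape)
    matches, used_prev, used_cur = [], set(), set()
    # Take the globally closest remaining pair at each step.
    for cur_idx, prev_idx in zip(*order):
        if dists[cur_idx, prev_idx] > max_dist:
            break                                 # all remaining pairs are farther
        if cur_idx in used_cur or prev_idx in used_prev:
            continue
        matches.append((int(prev_idx), int(cur_idx)))
        used_prev.add(prev_idx)
        used_cur.add(cur_idx)
    return matches

prev = np.array([[0.0, 0.0], [5.0, 5.0]])
cur = np.array([[0.5, 0.1], [5.2, 5.3]])
vel = np.array([[5.0, 1.0], [2.0, 3.0]])
print(greedy_match(prev, cur, vel))   # [(0, 0), (1, 1)]
```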
The bounding box is the most commonly used annotation form in visual object localization tasks. However, it relies on large amounts of precisely annotated bounding boxes, which is expensive and laborious, and thus impractical in real scenarios; for some applications that only care about locations, it is even redundant. We therefore propose a point-based framework that annotates each person with a coarse point (CoarsePoint), which can be any point within the object extent rather than a precise bounding box. The location of a person is then predicted as 2D coordinates in the image, which greatly simplifies the data annotation pipeline. However, CoarsePoint annotation inevitably causes degraded label reliability (label uncertainty) and network confusion during training. We therefore propose a point self-refinement approach, which iteratively updates the point annotations in a self-paced way. The proposed refinement alleviates label uncertainty and progressively improves localization performance. Experiments show that our method achieves competitive object localization performance while saving up to 80% of the annotation cost. Code is included in the supplementary material.
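One plausible reading of the self-paced refinement is sketched below: each coarse point is iteratively pulled toward the confidence-weighted centroid of the current model's score map around it; the window size and update rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def refine_point(point, score_map, win=6):
    """Move a coarse (y, x) point toward the local score-weighted centroid."""
    y, x = int(round(point[0])), int(round(point[1]))
    H, W = score_map.shape
    y0, y1 = max(0, y - win), min(H, y + win + 1)
    x0, x1 = max(0, x - win), min(W, x + win + 1)
    patch = score_map[y0:y1, x0:x1]
    if patch.sum() <= 0:
        return point
    ys, xs = np.mgrid[y0:y1, x0:x1]
    w = patch / patch.sum()
    return np.array([(w * ys).sum(), (w * xs).sum()])

# Toy score map peaked at (20, 30); a coarse point at (16, 26) drifts there.
yy, xx = np.mgrid[0:64, 0:64]
scores = np.exp(-((yy - 20) ** 2 + (xx - 30) ** 2) / 18.0)
p = np.array([16.0, 26.0])
for _ in range(5):
    p = refine_point(p, scores)
print(p)   # approaches (20, 30)
```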
Figure 1 (caption): Results obtained from our single-image monocular 3D object detection network MonoDIS on a KITTI3D test image, with the corresponding bird's-eye view, showing its ability to estimate size and orientation of objects at different scales.
Current 3D object detection methods are heavily influenced by 2D detectors. In order to leverage architectures in 2D detectors, they often convert 3D point clouds to regular grids (i.e., to voxel grids or to bird's eye view images), or rely on detection in 2D images to propose 3D boxes. Few works have attempted to directly detect objects in point clouds. In this work, we return to first principles to construct a 3D detection pipeline for point cloud data that is as generic as possible. However, due to the sparse nature of the data (samples from 2D manifolds in 3D space), we face a major challenge when directly predicting bounding box parameters from scene points: a 3D object centroid can be far from any surface point and is thus hard to regress accurately in one step. To address the challenge, we propose VoteNet, an end-to-end 3D object detection network based on a synergy of deep point set networks and Hough voting. Our model achieves state-of-the-art 3D detection on two large datasets of real 3D scans, ScanNet and SUN RGB-D, with a simple design, compact model size and high efficiency. Remarkably, VoteNet outperforms previous methods by using purely geometric information without relying on color images.
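The voting step can be illustrated compactly: surface points each cast a vote (point plus predicted offset) near an object centroid, and the votes are then grouped; the threshold-based clustering below is a stand-in for VoteNet's learned vote aggregation.

```python
import numpy as np

def cluster_votes(votes, radius=0.3, min_votes=5):
    """votes: (N, 3) voted centres. Returns a list of cluster centroids."""
    remaining = votes.copy()
    centers = []
    while len(remaining) >= min_votes:
        # Seed each cluster at the vote with the most neighbours.
        d = np.linalg.norm(remaining[:, None] - remaining[None, :], axis=-1)
        counts = (d < radius).sum(axis=1)
        seed = counts.argmax()
        if counts[seed] < min_votes:
            break
        member = d[seed] < radius
        centers.append(remaining[member].mean(axis=0))
        remaining = remaining[~member]
    return centers

# Surface points of one object vote (point + predicted offset) near its centre.
rng = np.random.default_rng(0)
surface = rng.normal([1.0, 2.0, 0.5], 0.4, size=(50, 3))
offsets = np.array([1.0, 2.0, 0.5]) - surface + rng.normal(0, 0.05, (50, 3))
print(cluster_votes(surface + offsets))   # ~ [array([1., 2., 0.5])]
```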
Two-stage and query-based instance segmentation methods have achieved remarkable results; however, their segmentation masks are still very coarse. In this paper, we present Mask Transfiner for high-quality and efficient instance segmentation. Instead of operating on regular dense tensors, our Mask Transfiner decomposes and represents image regions as a quadtree. Our transformer-based approach processes only the detected error-prone tree nodes and self-corrects their errors in parallel. While these sparse pixels constitute only a small proportion of the total, they are critical to the final mask quality. This allows Mask Transfiner to predict highly accurate instance masks at a low computational cost. Extensive experiments demonstrate that Mask Transfiner outperforms current instance segmentation methods on three popular benchmarks, significantly improving both two-stage and query-based frameworks by a large margin of +3.0 mask AP on COCO and BDD100K, and +6.6 boundary AP on Cityscapes. Our code and trained models will be available at http://vis.xyz/pub/transfiner.
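As a rough approximation of how error-prone quadtree nodes might be identified, the sketch below flags incoherent pixels as those whose mask value does not survive a downsample-upsample round trip; this heuristic is an assumption standing in for Mask Transfiner's learned incoherence detection.

```python
import torch
import torch.nn.functional as F

def incoherent_nodes(mask, scale=2):
    """mask: (1, 1, H, W) float in {0, 1}. Returns a boolean map of pixels
    that disagree with their own down/up-sampled reconstruction."""
    down = F.interpolate(mask, scale_factor=1.0 / scale, mode="bilinear",
                         align_corners=False)
    up = F.interpolate(down, size=mask.shape[-2:], mode="bilinear",
                       align_corners=False)
    return (up.round() != mask).squeeze()

m = torch.zeros(1, 1, 32, 32)
m[..., 8:24, 8:24] = 1.0
m[..., 15, 24] = 1.0                              # a thin one-pixel protrusion
nodes = incoherent_nodes(m)
print(nodes.sum().item(), "incoherent pixels")    # fine boundary detail flagged
```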
Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In this paper, we propose IoU-Net, which learns to predict the IoU between each detected bounding box and the matched ground truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.
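The IoU-guided NMS idea is easy to sketch: candidates are ranked by predicted localization IoU instead of classification score, and a suppressed cluster passes its best classification score to the surviving box; the suppression threshold is illustrative.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) form."""
    lt = np.maximum(a[:2], b[:2])
    rb = np.minimum(a[2:], b[2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[0] * wh[1]
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def iou_guided_nms(boxes, cls_scores, pred_ious, thresh=0.5):
    order = np.argsort(pred_ious)[::-1]           # rank by predicted IoU
    keep = []
    while len(order):
        i = order[0]
        keep.append(i)
        rest = order[1:]
        overlaps = np.array([box_iou(boxes[i], boxes[j]) for j in rest])
        dup = rest[overlaps > thresh]
        # Suppressed boxes pass their best classification score to the keeper.
        if len(dup):
            cls_scores[i] = max(cls_scores[i], cls_scores[dup].max())
        order = rest[overlaps <= thresh]
    return keep, cls_scores

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30.]])
scores = np.array([0.9, 0.6, 0.8])
ious = np.array([0.55, 0.85, 0.7])    # predicted localization confidence
print(iou_guided_nms(boxes, scores, ious))  # keeps box 1, the better-localized one
```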
Since Intersection-over-Union (IoU)-based optimization maintains consistency between the final IoU prediction metric and the loss, it has been widely used in both the regression and classification branches of single-stage 2D object detectors. Recently, several 3D object detection methods have adopted IoU-based optimization and directly replaced the 2D IoU with the 3D IoU. However, such a direct computation in 3D is very costly due to the complex implementation and inefficient backward operations. Moreover, 3D-IoU-based optimization is sub-optimal as it is sensitive to rotation and can therefore cause training instability and detection performance deterioration. In this paper, we propose a novel Rotation-Decoupled IoU (RDIoU) method that can mitigate the rotation-sensitivity issue and produce a more efficient optimization objective than the 3D IoU during the training stage. Specifically, our RDIoU simplifies the complex interactions of the regression parameters by decoupling the rotation variable as an independent term, while preserving the geometry of the 3D IoU. By incorporating RDIoU into both the regression and classification branches, the network is encouraged to learn more precise bounding boxes while simultaneously overcoming the misalignment problem between classification and regression. Extensive experiments on the benchmark KITTI and Waymo Open datasets validate that our RDIoU method can bring substantial improvements to single-stage 3D object detection.
We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [11] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster (50 fps on a Titan X (Pascal) GPU) and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [28, 29] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent approaches [26] when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of these methods, but at 10 fps or less, they are much slower than our method.
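The second stage is standard enough to sketch with OpenCV: given the predicted 2D locations of the 3D bounding-box corners, cv2.solvePnP recovers the 6D pose; the object dimensions, intrinsics, and the synthetic "predictions" below are made up for illustration.

```python
import numpy as np
import cv2

# 3D bounding-box corners of a (0.2 x 0.1 x 0.3) m object, centred at origin.
w, h, d = 0.2, 0.1, 0.3
corners_3d = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                       for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                      dtype=np.float64)

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # toy intrinsics

# Pretend these were regressed by the network: project with a known pose.
rvec_gt = np.array([[0.1], [0.2], [0.3]])
tvec_gt = np.array([[0.05], [-0.02], [1.0]])
corners_2d, _ = cv2.projectPoints(corners_3d, rvec_gt, tvec_gt, K, None)

# Recover the 6D pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # matches the ground-truth pose
```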
The detection of human body and its related parts (e.g., face, head or hands) have been intensively studied and greatly improved since the breakthrough of deep CNNs. However, most of these detectors are trained independently, making it a challenging task to associate detected body parts with people. This paper focuses on the problem of joint detection of human body and its corresponding parts. Specifically, we propose a novel extended object representation that integrates the center location offsets of body or its parts, and construct a dense single-stage anchor-based Body-Part Joint Detector (BPJDet). Body-part associations in BPJDet are embedded into the unified representation which contains both the semantic and geometric information. Therefore, BPJDet does not suffer from error-prone association post-matching, and has a better accuracy-speed trade-off. Furthermore, BPJDet can be seamlessly generalized to jointly detect any body part. To verify the effectiveness and superiority of our method, we conduct extensive experiments on the CityPersons, CrowdHuman and BodyHands datasets. The proposed BPJDet detector achieves state-of-the-art association performance on these three benchmarks while maintaining high detection accuracy. Code will be released to facilitate further studies.
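The offset-based association can be sketched in a few lines: each part detection carries a predicted displacement to its parent body centre and is assigned to the body whose centre is nearest to the displaced point; the distance gate and toy numbers are illustrative, not BPJDet's exact matching.

```python
import numpy as np

def associate(part_centers, part_offsets, body_centers, max_dist=20.0):
    """Returns, per part, the index of its body (or -1 if none in range)."""
    pointed = part_centers + part_offsets          # where each part "points"
    d = np.linalg.norm(pointed[:, None, :] - body_centers[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    nearest[d.min(axis=1) > max_dist] = -1
    return nearest

bodies = np.array([[100.0, 200.0], [300.0, 180.0]])
parts = np.array([[95.0, 120.0], [310.0, 110.0]])      # e.g. head centres
offs = np.array([[4.0, 78.0], [-8.0, 68.0]])           # regressed offsets
print(associate(parts, offs, bodies))                  # [0 1]
```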
Object detection generally requires sliding-window classifiers in traditional approaches or anchor-box-based predictions in modern deep learning methods. However, either of these approaches requires a tedious configuration of boxes. In this paper, we provide a new perspective in which detecting objects is motivated as a high-level semantic feature detection task. Like edges, corners, blobs, and other feature detectors, the proposed detector scans for feature points all over the image, for which convolution is naturally suited. However, unlike these traditional low-level features, the proposed detector goes for a higher-level abstraction: we are looking for central points where there are objects, and modern deep models are already capable of such high-level semantic abstraction. In addition, as in blob detection, we also predict the scale of each central point, which is likewise a straightforward convolution. Therefore, in this paper, pedestrian and face detection is simplified to a straightforward center and scale prediction task through convolutions. In this way, the proposed method enjoys a box-free setting. Though structurally simple, it presents competitive accuracy on several challenging benchmarks, including pedestrian detection and face detection. Furthermore, a cross-dataset evaluation is performed, demonstrating the superior generalization ability of the proposed method. Code and models can be accessed at https://github.com/liuwei16/csp and https://github.com/hasanirtiza/pedestron.
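Decoding such center-and-scale outputs into boxes can be sketched as follows: local maxima of the centre heatmap give object centres, the (log-)scale map gives heights, and a fixed aspect ratio completes the box, as is common in pedestrian detection; the thresholds and the 0.41 ratio are illustrative.

```python
import numpy as np

def decode_csp(center_map, scale_map, stride=4, score_thresh=0.5, ratio=0.41):
    """Turn a centre heatmap plus log-scale map into (x1, y1, x2, y2, score)."""
    H, W = center_map.shape
    boxes = []
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            s = center_map[y, x]
            # Keep 3x3 local maxima above the score threshold.
            if s < score_thresh or s < center_map[y-1:y+2, x-1:x+2].max():
                continue
            h = np.exp(scale_map[y, x]) * stride   # height from log-scale map
            w = h * ratio                           # fixed pedestrian aspect
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, s])
    return np.array(boxes)

cmap = np.zeros((32, 32)); cmap[10, 12] = 0.9
smap = np.zeros((32, 32)); smap[10, 12] = np.log(120 / 4)  # a 120-px person
print(decode_csp(cmap, smap))
```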
Segmenting highly overlapping image objects is challenging, because there is typically no distinction between real object contours and occlusion boundaries in images. Unlike previous instance segmentation methods, we model image formation as a composition of two overlapping layers and propose the Bilayer Convolutional Network (BCNet), where the top layer detects occluding objects (occluders) and the bottom layer infers partially occluded instances (occludees). The explicit modeling of the occlusion relationship with the bilayer structure naturally decouples the boundaries of the occluding and occluded instances, and considers the interaction between them during mask regression. We investigate the efficacy of the bilayer structure using two popular convolutional network designs, namely the Fully Convolutional Network (FCN) and the Graph Convolutional Network (GCN). Furthermore, we formulate bilayer decoupling with the vision transformer (ViT) by representing instances in the image as separate learnable occluder and occludee queries. Large and consistent improvements with one-/two-stage and query-based object detectors under various backbones and network layer choices validate the generalization ability of bilayer decoupling, as shown on image instance segmentation benchmarks (COCO, KINS, COCOA) and video instance segmentation benchmarks (YTVIS, OVIS, BDD100K MOTS), especially for heavy-occlusion cases. Code and data are available at https://github.com/lkeab/bcnet.
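A very small sketch of the bilayer idea is shown below: a top head predicts occluder mask logits, and the bottom (occludee) head receives them as extra input so the occlusion boundary is modelled explicitly; the plain convolutional heads are stand-ins for BCNet's GCN/FCN or query-based variants.

```python
import torch
import torch.nn as nn

class BilayerHead(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.occluder = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(c, 1, 1))
        # Bottom layer consumes ROI features plus occluder mask logits.
        self.occludee = nn.Sequential(nn.Conv2d(c + 1, c, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(c, 1, 1))

    def forward(self, roi_feat):
        occluder_logits = self.occluder(roi_feat)
        x = torch.cat([roi_feat, occluder_logits], dim=1)
        occludee_logits = self.occludee(x)
        return occluder_logits, occludee_logits

head = BilayerHead()
top, bottom = head(torch.randn(2, 64, 28, 28))
print(top.shape, bottom.shape)   # two (2, 1, 28, 28) mask logit maps
```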