In this paper, we are interested in Detection Transformer (DETR), an end-to-end object detection approach based on the transformer encoder-decoder architecture without hand-crafted post-processing such as NMS. Inspired by Conditional DETR, an improved DETR with fast training convergence that introduced box queries (originally called spatial queries) for the internal decoder layers, we reformulate the object query into the format of a box query, which is a composition of the embeddings of a reference point and of a box transformation with respect to that reference point. This reformulation indicates the connection between the object query in DETR and the anchor box widely studied in Faster R-CNN. Furthermore, we learn the box queries from the image content, further improving the detection quality of Conditional DETR while keeping its fast training convergence. In addition, we adopt the idea of axial self-attention to save memory cost and accelerate the encoder. The resulting detector, called Conditional DETR V2, achieves better results than Conditional DETR, saves memory, and runs more efficiently. For example, with the DC5-ResNet-50 backbone, our approach achieves 44.8 AP at 16.4 FPS on the COCO val set; compared with Conditional DETR, it runs 1.6× faster, saves 74% of the overall memory cost, and improves the AP score by 1.0.
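To make the box-query formulation concrete, here is a minimal sketch of how an object query could be parameterized as a reference point plus a box transformation relative to it, mirroring the anchor-box analogy drawn above. The module names, the offset parameterization, and the sigmoid/inverse-sigmoid decoding are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def inverse_sigmoid(x, eps: float = 1e-5):
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))

class BoxQuery(nn.Module):
    """Illustrative box query: a learnable reference point plus a box
    transformation predicted from the query content (anchor-box-like)."""
    def __init__(self, num_queries: int = 300, hidden_dim: int = 256):
        super().__init__()
        self.ref_points = nn.Embedding(num_queries, 2)         # (cx, cy) logits
        self.content = nn.Embedding(num_queries, hidden_dim)   # content embedding
        self.to_transform = nn.Linear(hidden_dim, 4)            # (dx, dy, w, h) logits

    def forward(self):
        ref = self.ref_points.weight.sigmoid()                  # (N, 2) in [0, 1]
        delta = self.to_transform(self.content.weight)          # (N, 4)
        cxcy = (inverse_sigmoid(ref) + delta[:, :2]).sigmoid()  # shift the reference point
        wh = delta[:, 2:].sigmoid()                             # box size
        return torch.cat([cxcy, wh], dim=-1)                    # (N, 4): cx, cy, w, h

print(BoxQuery()().shape)  # torch.Size([300, 4])
```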
The DETR object detection approach applies the transformer encoder and decoder architecture to detect objects and achieves promising performance. In this paper, we present a simple approach to address the main problem of DETR, the slow convergence, by using a representation learning technique. In this approach, we detect an object bounding box as a pair of keypoints, the top-left corner and the center, using two decoders. By detecting objects as paired keypoints, the model builds up a joint classification and pair association on the output queries from the two decoders. For the pair association, we propose utilizing a contrastive self-supervised learning algorithm without requiring a specialized architecture. Experimental results on the MS COCO dataset show that Pair DETR can converge at least 10x faster than the original DETR and 1.5x faster than Conditional DETR during training, while having consistently higher Average Precision scores.
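As a rough illustration of the pair-association idea, the sketch below uses a symmetric InfoNCE-style contrastive loss that pulls together the corner-decoder and center-decoder embeddings of the same object and pushes apart all other pairs. The function name, temperature, and exact pairing scheme are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def pair_association_loss(corner_emb, center_emb, temperature: float = 0.07):
    """Contrastive (InfoNCE-style) association: the i-th corner embedding
    should match the i-th center embedding and repel all others."""
    corner = F.normalize(corner_emb, dim=-1)    # (N, D)
    center = F.normalize(center_emb, dim=-1)    # (N, D)
    logits = corner @ center.t() / temperature  # (N, N) similarity matrix
    targets = torch.arange(corner.size(0), device=corner.device)
    # symmetric loss: corner -> center and center -> corner
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy usage with random embeddings
loss = pair_association_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss)
```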
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
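The key mechanism, attending only to a few sampled points around a reference location, can be sketched with the single-head, single-scale toy module below; the offset scale, number of points, and use of grid_sample are simplifying assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDeformableAttention(nn.Module):
    """Single-head, single-scale sketch: each query samples K points around
    its reference location and combines them with learned weights."""
    def __init__(self, dim: int = 256, num_points: int = 4):
        super().__init__()
        self.num_points = num_points
        self.offsets = nn.Linear(dim, num_points * 2)   # (dx, dy) per sampling point
        self.weights = nn.Linear(dim, num_points)       # attention weight per point
        self.out = nn.Linear(dim, dim)

    def forward(self, query, ref_points, feat):
        # query: (B, N, C), ref_points: (B, N, 2) in [0, 1], feat: (B, C, H, W)
        B, N, C = query.shape
        offsets = self.offsets(query).view(B, N, self.num_points, 2)
        weights = self.weights(query).softmax(dim=-1)                    # (B, N, K)
        # absolute sampling locations mapped to [-1, 1] for grid_sample
        locs = (ref_points.unsqueeze(2) + 0.05 * offsets) * 2.0 - 1.0    # (B, N, K, 2)
        sampled = F.grid_sample(feat, locs, align_corners=False)         # (B, C, N, K)
        out = (sampled * weights.unsqueeze(1)).sum(dim=-1)               # (B, C, N)
        return self.out(out.transpose(1, 2))                             # (B, N, C)

# toy usage
attn = TinyDeformableAttention()
q, ref, fmap = torch.randn(2, 10, 256), torch.rand(2, 10, 2), torch.randn(2, 256, 32, 32)
print(attn(q, ref, fmap).shape)  # torch.Size([2, 10, 256])
```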
In this paper, we propose a novel query design for transformer-based object detection. In previous transformer-based detectors, the object queries are a set of learned embeddings. However, each learned embedding has no explicit physical meaning, and we cannot explain where it will focus. Optimization is difficult because the prediction slot of each object query does not follow a specific pattern; in other words, each object query does not focus on a specific region. To solve these problems, our query design bases the object queries on anchor points, which are widely used in CNN-based detectors, so that each object query focuses on the objects near its anchor point. Moreover, our query design can predict multiple objects at one position, addressing the difficulty of "one region, multiple objects". In addition, we design an attention variant that reduces the memory cost while achieving similar or better performance than the standard attention in DETR. Thanks to the query design and the attention variant, the proposed detector, which we call Anchor DETR, achieves better performance and runs faster than DETR with 10× fewer training epochs. For example, it achieves 44.2 AP on the MS COCO dataset when training for 50 epochs with the ResNet50-DC5 feature. Extensive experiments on the MS COCO benchmark demonstrate the effectiveness of the proposed method. Code is available at https://github.com/megvii-research/anchordetr.
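A minimal sketch of the anchor-point query idea follows: a uniform grid of anchor points, each shared by several learned pattern embeddings so that one location can host multiple object predictions. The grid size, pattern count, and repeat layout are illustrative assumptions, not the released code.

```python
import torch
import torch.nn as nn

def make_anchor_queries(grid_size: int = 10, num_patterns: int = 3, dim: int = 256):
    """Illustrative anchor-point queries: a uniform grid of anchors, each
    paired with several learned 'pattern' embeddings so one location can
    predict multiple objects."""
    ys, xs = torch.meshgrid(
        torch.linspace(0.05, 0.95, grid_size),
        torch.linspace(0.05, 0.95, grid_size),
        indexing="ij",
    )
    anchors = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (A, 2) in [0, 1]
    patterns = nn.Embedding(num_patterns, dim).weight         # (P, C)
    # every anchor is repeated for each pattern: A * P object queries in total
    anchor_per_query = anchors.repeat_interleave(num_patterns, dim=0)  # (A*P, 2)
    content_per_query = patterns.repeat(anchors.size(0), 1)            # (A*P, C)
    return anchor_per_query, content_per_query

anchors, content = make_anchor_queries()
print(anchors.shape, content.shape)  # torch.Size([300, 2]) torch.Size([300, 256])
```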
Vision Transformers (ViTs) are changing the landscape of object detection. A natural way to use ViTs for detection is to replace the CNN-based backbone with a transformer-based backbone, which is simple and effective but brings a considerable computational burden at inference. A more subtle usage is the DETR family, which removes the need for many hand-designed components in object detection but introduces a decoder that requires an extra-long time to converge. As a result, transformer-based object detection has not yet prevailed in large-scale applications. To overcome these issues, we propose a novel Decoder-Free Fully Transformer-based (DFFT) object detector, achieving high efficiency in both the training and inference stages for the first time. We simplify object detection into an encoder-only, single-level, anchor-based dense prediction problem by centering on two entry points: 1) eliminate the training-inefficient decoder and leverage two strong encoders to preserve the accuracy of single-level feature map prediction; 2) explore low-level semantic features for the detection task with limited computational resources. In particular, we design a novel lightweight detection-oriented transformer backbone that efficiently captures low-level features with rich semantics, based on a thorough ablation study. Extensive experiments on the MS COCO benchmark show that DFFT_SMALL outperforms DETR by 2.5% AP with a 28% reduction in computational cost and 10× fewer training epochs. Compared with the cutting-edge anchor-based detector RetinaNet, DFFT_SMALL obtains over 5.5% AP gain while cutting the computational cost by 70%.
We present DINO (DETR with Improved deNoising anchOr boxes), a state-of-the-art end-to-end object detector. DINO improves over previous DETR-like models in performance and efficiency by using a contrastive way for denoising training, a mixed query selection method for anchor initialization, and a look-forward-twice scheme for box prediction. DINO achieves 49.4 AP in 12 epochs and 51.3 AP in 24 epochs on COCO with a ResNet-50 backbone and multi-scale features, a significant improvement of +6.0 AP and +2.7 AP, respectively, compared with DN-DETR, the previous best DETR-like model. DINO scales well in both model size and data size. Without bells and whistles, after pre-training on the Objects365 dataset with a SwinL backbone, DINO obtains the best results on both COCO val2017 (63.2 AP) and test-dev (63.3 AP). Compared with other models on the leaderboard, DINO significantly reduces its model size and pre-training data size while achieving better results. Our code will be available at https://github.com/IDEACVR/DINO.
Passive millimeter-wave (PMMW) is a significant potential technique for human security screening. Several popular object detection networks have been used for PMMW images. However, restricted by the low resolution and high noise of PMMW images, PMMW hidden object detection based on deep learning usually suffers from low accuracy and low classification confidence. To tackle the above problems, this paper proposes a Task-Aligned Detection Transformer network, named PMMW-DETR. In the first stage, a Denoising Coarse-to-Fine Transformer (DCFT) backbone is designed to extract long- and short-range features at different scales. In the second stage, we propose the Query Selection module to introduce learned spatial features into the network as prior knowledge, which enhances the semantic perception capability of the network. In the third stage, aiming to improve the classification performance, we perform a Task-Aligned Dual-Head block to decouple the classification and regression tasks. Based on our self-developed PMMW security screening dataset, experimental results, including comparisons with state-of-the-art (SOTA) methods and an ablation study, demonstrate that PMMW-DETR obtains higher accuracy and classification confidence than previous works, and exhibits robustness to PMMW images of low quality.
Recently, the dominant DETR-based approaches apply a central-concept spatial prior to accelerate Transformer detector convergence. These methods gradually refine the reference points to the center of target objects and imbue object queries with the updated central reference information for spatially conditional attention. However, centralizing reference points may severely deteriorate queries' saliency and confuse detectors due to the indiscriminative spatial prior. To bridge the gap between the reference points of salient queries and Transformer detectors, we propose SAlient Point-based DETR (SAP-DETR) by treating object detection as a transformation from salient points to instance objects. In SAP-DETR, we explicitly initialize a query-specific reference point for each object query, gradually aggregate them into an instance object, and then predict the distance from each side of the bounding box to these points. By rapidly attending to the query-specific reference region and other conditional extreme regions from the image features, SAP-DETR can effectively bridge the gap between the salient point and the query-based Transformer detector with significantly faster convergence. Our extensive experiments have demonstrated that SAP-DETR achieves 1.4 times faster convergence with competitive performance. Under the standard training scheme, SAP-DETR stably promotes the SOTA approaches by 1.0 AP. Based on ResNet-DC-101, SAP-DETR achieves 46.9 AP.
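The box parameterization described above (distances from a salient point to the four sides) can be sketched as a small decoding helper; the function name and the normalized-coordinate convention are assumptions for illustration.

```python
import torch

def box_from_point_and_side_distances(points, dists):
    """Decode boxes from a salient reference point and the predicted distances
    to the left, top, right and bottom sides (all in normalized coordinates)."""
    # points: (N, 2) as (x, y); dists: (N, 4) as (left, top, right, bottom), >= 0
    x, y = points.unbind(dim=-1)
    l, t, r, b = dists.unbind(dim=-1)
    x1, y1, x2, y2 = x - l, y - t, x + r, y + b
    return torch.stack([x1, y1, x2, y2], dim=-1).clamp(0.0, 1.0)

# toy usage: a point at the image center with asymmetric side distances
boxes = box_from_point_and_side_distances(
    torch.tensor([[0.5, 0.5]]), torch.tensor([[0.1, 0.2, 0.3, 0.1]]))
print(boxes)  # tensor([[0.4000, 0.3000, 0.8000, 0.6000]])
```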
In this paper, we propose a simple attention mechanism that we call box-attention. It enables spatial interaction between grid features sampled from a box of interest and improves the learning capability of transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on the input feature map. BoxeR computes attention weights on these boxes by taking their grid structure into account. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is able to generate discriminative information from a bird's-eye-view plane for 3D end-to-end object detection. Our experiments show that the proposed BoxeR-2D achieves better results on COCO detection and is comparable to the well-established and highly-optimized Mask R-CNN on COCO instance segmentation. BoxeR-3D already obtains compelling performance on the vehicle category of the Waymo Open dataset without any class-specific optimization. The code will be released.
We present in this paper a novel denoising training method to speed up DETR (DEtection TRansformer) training and offer a deepened understanding of the slow convergence issue of DETR-like methods. We show that the slow convergence results from the instability of bipartite graph matching, which causes inconsistent optimization goals in early training stages. To address this issue, in addition to the Hungarian loss, our method feeds noised ground-truth bounding boxes into the Transformer decoder and trains the model to reconstruct the original boxes, which effectively reduces the bipartite graph matching difficulty and leads to faster convergence. Our method is universal and can be easily plugged into any DETR-like method by adding dozens of lines of code to achieve a remarkable improvement. As a result, our DN-DETR results in a remarkable improvement ($+1.9$ AP) under the same setting and achieves the best result (AP $43.4$ and $48.6$ with $12$ and $50$ epochs of training, respectively) among DETR-like methods with a ResNet-$50$ backbone. Compared with the baseline under the same setting, DN-DETR achieves comparable performance with $50\%$ of the training epochs. Code is available at \url{https://github.com/FengLi-ust/DN-DETR}.
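A hedged sketch of the box-noising step: ground-truth (cx, cy, w, h) boxes are jittered in center and scale before being fed to the decoder, which is then supervised to reconstruct the originals. The noise scale and clamping below are illustrative choices, not the exact recipe in the paper.

```python
import torch

def noise_gt_boxes(gt_boxes, box_noise_scale: float = 0.4):
    """Perturb ground-truth boxes (cx, cy, w, h, normalized) for denoising
    training: jitter the center within the box and rescale width/height."""
    cx, cy, w, h = gt_boxes.unbind(dim=-1)
    # center shift proportional to the box size, scale jitter around 1.0
    cx = cx + (torch.rand_like(cx) * 2 - 1) * 0.5 * w * box_noise_scale
    cy = cy + (torch.rand_like(cy) * 2 - 1) * 0.5 * h * box_noise_scale
    w = w * (1 + (torch.rand_like(w) * 2 - 1) * box_noise_scale)
    h = h * (1 + (torch.rand_like(h) * 2 - 1) * box_noise_scale)
    return torch.stack([cx, cy, w, h], dim=-1).clamp(1e-4, 1.0)

# toy usage: the decoder would be trained to map this back to the input box
noisy = noise_gt_boxes(torch.tensor([[0.5, 0.5, 0.2, 0.3]]))
print(noisy)
```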
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance as good as previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong baseline Deformable DETR by a significant margin (2%-4% mAP) on the ImageNet VID dataset, and TransVOD yields comparable performance on the ImageNet VID benchmark. Then, we present two improved versions of TransVOD, TransVOD++ and TransVOD Lite. The former fuses object-level information into object queries via dynamic convolution, while the latter models the entire video clip as the output to speed up inference. We give a detailed analysis of all three models in the experiment part. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU device. Code and models will be available for further research.
DETR is the first end-to-end object detector using a transformer encoder-decoder architecture, demonstrating competitive performance but low computational efficiency on high-resolution feature maps. The subsequent work, Deformable DETR, enhances the efficiency of DETR by replacing dense attention with deformable attention, which achieves 10× faster convergence and improved performance. Deformable DETR uses multi-scale features to improve performance; however, the number of encoder tokens increases by 20× compared to DETR, and the computational cost of encoder attention remains a bottleneck. In our preliminary experiments, we observe that the detection performance hardly deteriorates even if only a part of the encoder tokens is updated. Inspired by this observation, we propose Sparse DETR, which selectively updates only the tokens expected to be referenced by the decoder, thus helping the model detect objects effectively. In addition, we show that applying an auxiliary detection loss on the selected tokens in the encoder improves the performance while minimizing computational overhead. We validate that Sparse DETR achieves better performance than Deformable DETR even with only 10% of the encoder tokens on the COCO dataset. Albeit only the encoder tokens are sparsified, the total computational cost decreases by 38% and the frames per second (FPS) increase by 42% compared with Deformable DETR. Code is available at https://github.com/kakaobrain/sparse-detr.
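The token-selection idea can be sketched as follows: a small scoring head ranks the encoder tokens, only the top fraction is refined by an encoder layer, and the rest pass through unchanged. The linear scoring head and the plain TransformerEncoderLayer stand in for the paper's decoder-driven saliency criterion, so treat this as an assumption-laden toy.

```python
import torch
import torch.nn as nn

class SparseTokenRefiner(nn.Module):
    """Sketch of sparse encoder updates: score every token, refine only the
    top-rho fraction, and copy the remaining tokens through unchanged."""
    def __init__(self, dim: int = 256, keep_ratio: float = 0.1):
        super().__init__()
        self.keep_ratio = keep_ratio
        self.scorer = nn.Linear(dim, 1)                  # saliency score per token
        self.refine = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)

    def forward(self, tokens):                            # tokens: (B, L, C)
        B, L, C = tokens.shape
        k = max(1, int(L * self.keep_ratio))
        scores = self.scorer(tokens).squeeze(-1)          # (B, L)
        topk = scores.topk(k, dim=1).indices              # (B, k) selected positions
        index = topk.unsqueeze(-1).expand(-1, -1, C)
        selected = torch.gather(tokens, 1, index)
        refined = self.refine(selected)                   # update only the selected tokens
        out = tokens.clone()
        out.scatter_(1, index, refined)                   # write refined tokens back
        return out

print(SparseTokenRefiner()(torch.randn(2, 400, 256)).shape)  # torch.Size([2, 400, 256])
```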
The recently proposed DEtection TRansformer (DETR) establishes a fully end-to-end paradigm for object detection. However, DETR suffers from slow training convergence, which hinders its applicability to various detection tasks. We observe that DETR's slow convergence is largely attributed to the difficulty in matching object queries to relevant regions due to the semantic inconsistency between object queries and encoded image features. With this observation, we design Semantic-Aligned-Matching DETR++ (SAM-DETR++) to accelerate DETR's convergence and improve detection performance. The core of SAM-DETR++ is a plug-and-play module that projects object queries and encoded image features into the same feature embedding space, where each object query can easily be matched to relevant regions with similar semantics. Besides, SAM-DETR++ searches for multiple representative keypoints and exploits their features for semantic-aligned matching with enhanced representation capacity. Furthermore, SAM-DETR++ can effectively fuse multi-scale features in a coarse-to-fine manner on the basis of the designed semantic-aligned matching. Extensive experiments show that the proposed SAM-DETR++ achieves superior convergence speed and competitive detection accuracy. Additionally, as a plug-and-play method, SAM-DETR++ can complement existing DETR convergence solutions with even better performance, achieving 44.8% AP with only 12 training epochs and 49.1% AP with 50 training epochs on COCO val2017 with ResNet-50. Code is available at https://github.com/zhanggongjie/sam-detr.
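A toy sketch of the core projection idea: object queries and flattened image features are mapped into a shared embedding space, and each query aggregates the regions it matches most strongly by dot-product similarity. The two projection heads and the single-scale setting are illustrative simplifications, not the SAM-DETR++ module itself.

```python
import torch
import torch.nn as nn

class SemanticAlignedMatching(nn.Module):
    """Project queries and image features into one embedding space and let
    each query gather the regions with the most similar semantics."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)   # query projection
        self.proj_f = nn.Linear(dim, dim)   # image-feature projection

    def forward(self, queries, features):
        # queries: (N, C); features: (L, C) flattened image tokens
        q = self.proj_q(queries)
        f = self.proj_f(features)
        sim = (q @ f.t() / q.size(-1) ** 0.5).softmax(dim=-1)  # (N, L) matching weights
        return sim @ features                                   # semantically matched content

m = SemanticAlignedMatching()
print(m(torch.randn(100, 256), torch.randn(1200, 256)).shape)  # torch.Size([100, 256])
```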
Without densely tiled anchor boxes or grid points in the image, Sparse R-CNN achieves promising results through a set of object queries and proposal boxes that are updated in a cascaded training manner. However, due to its sparse nature and the one-to-one relation between a query and its attended region, it relies heavily on self-attention, which is usually inaccurate in the early training stage. Moreover, in scenes with dense objects, an object query interacts with many irrelevant ones, reducing its uniqueness and harming performance. This paper proposes to use the IoU between different boxes as a prior for value routing in self-attention. The original attention matrix is multiplied by a matrix of the same size computed from the IoUs of the proposal boxes, which determines the routing scheme so that irrelevant features can be suppressed. Furthermore, to accurately extract features for classification and regression, we add two lightweight projection heads that provide dynamic channel masks based on the object query; they are multiplied with the outputs of the dynamic convolutions, making the results suitable for the two different tasks. We validate the proposed scheme on different datasets, including MS COCO and CrowdHuman, showing that it significantly improves performance and speeds up model convergence.
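The IoU prior on attention routing can be sketched in a few lines: the softmaxed self-attention matrix over object queries is multiplied element-wise by the pairwise IoU of their proposal boxes, suppressing interactions between spatially unrelated queries. The row renormalization and single-head formulation here are illustrative assumptions, not the paper's exact design.

```python
import torch

def pairwise_iou(boxes):
    """IoU between all pairs of boxes given as (x1, y1, x2, y2)."""
    area = (boxes[:, 2] - boxes[:, 0]).clamp(min=0) * (boxes[:, 3] - boxes[:, 1]).clamp(min=0)
    lt = torch.max(boxes[:, None, :2], boxes[None, :, :2])   # (N, N, 2) top-left of overlap
    rb = torch.min(boxes[:, None, 2:], boxes[None, :, 2:])   # (N, N, 2) bottom-right of overlap
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area[:, None] + area[None, :] - inter + 1e-6)

def iou_routed_attention(q, k, v, boxes):
    """Self-attention over object queries modulated by the IoU of their boxes."""
    attn = (q @ k.t() / q.size(-1) ** 0.5).softmax(dim=-1)       # (N, N) attention
    routed = attn * pairwise_iou(boxes)                           # element-wise IoU prior
    routed = routed / (routed.sum(dim=-1, keepdim=True) + 1e-6)   # renormalize rows (assumption)
    return routed @ v

# toy usage with random queries and valid random boxes
N, D = 5, 64
boxes = torch.rand(N, 4).sort(dim=-1).values  # sorted coords give x1 <= x2, y1 <= y2
out = iou_routed_attention(torch.randn(N, D), torch.randn(N, D), torch.randn(N, D), boxes)
print(out.shape)  # torch.Size([5, 64])
```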
Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum suppression (NMS). Surprisingly, we observe that one-to-many assignment with NMS consistently outperforms standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector, which trains Deformable DETR with traditional IoU-based label assignment, achieves 50.2 COCO mAP within 12 epochs (1x schedule) with a ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show that bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture. Code is available at https://github.com/jozhang97/DETA.
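For contrast with one-to-one Hungarian matching, below is a minimal sketch of traditional IoU-based one-to-many assignment: every prediction whose best IoU with some ground-truth box clears a threshold becomes a positive for that box, so one object can own many positives. The threshold value and the absence of per-object capping are simplifying assumptions.

```python
import torch

def pairwise_iou_xyxy(a, b):
    """IoU between each box in a (P, 4) and each box in b (G, 4), xyxy format."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    lt = torch.max(a[:, None, :2], b[None, :, :2])
    rb = torch.min(a[:, None, 2:], b[None, :, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-6)

def one_to_many_assign(pred_boxes, gt_boxes, iou_thresh: float = 0.6):
    """Every prediction with best-IoU >= threshold becomes a positive for that
    ground truth; several predictions may be assigned to the same object."""
    iou = pairwise_iou_xyxy(pred_boxes, gt_boxes)          # (P, G)
    best_iou, best_gt = iou.max(dim=1)                      # best ground truth per prediction
    labels = torch.full((pred_boxes.size(0),), -1, dtype=torch.long)  # -1 = background
    pos = best_iou >= iou_thresh
    labels[pos] = best_gt[pos]
    return labels

labels = one_to_many_assign(torch.tensor([[0., 0., 10., 10.], [20., 20., 30., 30.]]),
                            torch.tensor([[1., 1., 9., 9.]]))
print(labels)  # tensor([ 0, -1])
```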
Multi-scale features have been proven highly effective for object detection, and most ConvNet-based object detectors adopt the Feature Pyramid Network (FPN) as a basic component for exploiting multi-scale features. However, for the recently proposed Transformer-based object detectors, directly incorporating multi-scale features leads to prohibitive computational overhead due to the high complexity of the attention mechanism when processing high-resolution features. This paper presents Iterative Multi-scale Feature Aggregation (IMFA), a generic paradigm that enables the efficient use of multi-scale features in Transformer-based object detectors. The core idea is to exploit sparse multi-scale features from just a few crucial locations, which is achieved with two novel designs. First, IMFA rearranges the Transformer encoder-decoder pipeline so that the encoded features can be iteratively updated based on the detection predictions. Second, under the guidance of the prior detection predictions, IMFA sparsely samples scale-adaptive features for refined detection from just a few keypoint locations. As a result, the sampled multi-scale features are sparse yet still highly beneficial for object detection. Extensive experiments show that the proposed IMFA boosts the performance of Transformer-based object detectors significantly with only slight computational overhead. Project page: https://github.com/zhanggongjie/imfa.
Although DEtection TRansformer (DETR) is becoming increasingly popular, its global attention modeling requires an extremely long training period to optimize and achieve promising detection performance. Unlike existing studies that mainly develop advanced feature or embedding designs to tackle the training issue, we point out that Region-of-Interest (RoI) based detection refinement can easily help mitigate the training difficulty of DETR methods. Based on this, we introduce a novel REcurrent Glimpse-based decOder (REGO) in this paper. In particular, REGO employs a multi-stage recurrent processing structure to help the attention of DETR gradually focus on foreground objects more accurately. In each processing stage, visual features are extracted as glimpse features from RoIs given by enlarged bounding box areas of the detection results from the previous stage. Then, a glimpse-based decoder is introduced to provide refined detection results based on both the glimpse features and the attention modeling outputs of the previous stage. In practice, REGO can be easily embedded into representative DETR variants while maintaining their fully end-to-end training and inference pipelines. In particular, REGO helps Deformable DETR achieve 44.8 AP on the MS COCO dataset with only 36 training epochs, compared with the original DETR and Deformable DETR, which require 500 and 50 epochs, respectively, to achieve comparable performance. Experiments also show that REGO consistently boosts the performance of different DETR detectors by up to 7% relative gain under the same 50-epoch training setting. Code is available at https://github.com/zhechen/deformable-detr-rego.
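A sketch of the glimpse-feature step under stated assumptions: previous-stage boxes are enlarged and pooled into fixed-size RoI features via torchvision's roi_align. The enlargement factor and output size are illustrative choices, not the paper's exact settings.

```python
import torch
from torchvision.ops import roi_align

def glimpse_features(feature_map, boxes, enlarge: float = 1.5, out_size: int = 7):
    """Extract 'glimpse' RoI features from previous-stage detection boxes
    after enlarging them around their centers."""
    # feature_map: (1, C, H, W); boxes: (K, 4) as (x1, y1, x2, y2) in feature-map pixels
    cxcy = (boxes[:, :2] + boxes[:, 2:]) / 2
    wh = (boxes[:, 2:] - boxes[:, :2]) * enlarge
    enlarged = torch.cat([cxcy - wh / 2, cxcy + wh / 2], dim=-1)
    rois = torch.cat([torch.zeros(len(boxes), 1), enlarged], dim=-1)  # prepend batch index 0
    return roi_align(feature_map, rois, output_size=out_size)          # (K, C, 7, 7)

feats = glimpse_features(torch.randn(1, 256, 50, 50), torch.tensor([[10., 10., 30., 25.]]))
print(feats.shape)  # torch.Size([1, 256, 7, 7])
```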
In this paper we present Mask DINO, a unified object detection and segmentation framework. Mask DINO extends DINO (DETR with Improved Denoising Anchor Boxes) by adding a mask prediction branch which supports all image segmentation tasks (instance, panoptic, and semantic). It makes use of the query embeddings from DINO to dot-product a high-resolution pixel embedding map to predict a set of binary masks. Some key components in DINO are extended for segmentation through a shared architecture and training process. Mask DINO is simple, efficient, and scalable, and it can benefit from joint large-scale detection and segmentation datasets. Our experiments show that Mask DINO significantly outperforms all existing specialized segmentation methods, both on a ResNet-50 backbone and a pre-trained model with SwinL backbone. Notably, Mask DINO establishes the best results to date on instance segmentation (54.5 AP on COCO), panoptic segmentation (59.4 PQ on COCO), and semantic segmentation (60.8 mIoU on ADE20K) among models under one billion parameters. Code is available at \url{https://github.com/IDEACVR/MaskDINO}.
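The mask head described in the abstract reduces to a dot product between query embeddings and a per-pixel embedding map; a minimal sketch, assuming illustrative shapes and a fixed 0.5 threshold for binarization:

```python
import torch

def predict_masks(query_emb, pixel_emb):
    """Mask prediction by dot product: each query embedding is multiplied with a
    high-resolution per-pixel embedding map, giving one mask logit map per query."""
    # query_emb: (N, C), pixel_emb: (C, H, W)
    logits = torch.einsum("nc,chw->nhw", query_emb, pixel_emb)  # (N, H, W)
    return logits.sigmoid() > 0.5                                # binary masks

binary = predict_masks(torch.randn(100, 256), torch.randn(256, 200, 200))
print(binary.shape)  # torch.Size([100, 200, 200])
```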
Detection transformers have achieved competitive performance on the sample-rich COCO dataset. However, we show that most of them suffer from significant performance drops on small-size datasets such as Cityscapes. In other words, detection transformers are generally data-hungry. To tackle this problem, we empirically analyze the factors that affect data efficiency through a step-by-step transition from a data-efficient RCNN variant to the representative DETR. The empirical results suggest that sparse feature sampling from local image areas holds the key. Based on this observation, we alleviate the data-hungry issue of existing detection transformers by simply alternating how key and value sequences are constructed in the cross-attention layer, with minimal modifications to the original models. Besides, we introduce a simple yet effective label augmentation method to provide richer supervision and improve data efficiency. Experiments show that our method can be readily applied to different detection transformers and improve their performance on both small-size and sample-rich datasets. Code will be made publicly available at https://github.com/encounter1997/de-detrs.