We address 2D floorplan reconstruction from 3D scans. Existing approaches typically employ heuristically designed multi-stage pipelines. Instead, we formulate floorplan reconstruction as a single-stage structured prediction task: find a variable-size set of polygons, which in turn are variable-length sequences of ordered vertices. To solve it we develop a novel Transformer architecture that generates polygons of multiple rooms in parallel, in a holistic manner without hand-crafted intermediate stages. The model features two-level queries for polygons and corners, and includes polygon matching to make the network end-to-end trainable. Our method achieves a new state-of-the-art for two challenging datasets, Structured3D and SceneCAD, along with significantly faster inference than previous methods. Moreover, it can readily be extended to predict additional information, i.e., semantic room types and architectural elements like doors and windows. Our code and models will be available at: https://github.com/ywyue/RoomFormer.
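The polygon matching mentioned in this abstract is not spelled out here; below is a minimal sketch of how such a set-level assignment could work, assuming predicted and ground-truth polygons are resampled to fixed-length vertex sequences and matched with the Hungarian algorithm. The cost function and resampling scheme are illustrative choices, not RoomFormer's exact formulation.

```python
# Hedged sketch: Hungarian matching between a variable-size set of predicted
# polygons and ground-truth polygons, as used to make set prediction trainable.
# The cost (mean vertex distance after resampling to a fixed length) is an
# illustrative choice, not necessarily RoomFormer's exact matching cost.
import numpy as np
from scipy.optimize import linear_sum_assignment


def resample(poly: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a closed polygon (V, 2) to n points by arc length."""
    pts = np.vstack([poly, poly[:1]])                       # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], n, endpoint=False)
    x = np.interp(s, t, pts[:, 0])
    y = np.interp(s, t, pts[:, 1])
    return np.stack([x, y], axis=1)


def match_polygons(preds, gts):
    """preds, gts: lists of (V_i, 2) arrays. Returns index pairs (pred, gt)."""
    cost = np.zeros((len(preds), len(gts)))
    for i, p in enumerate(preds):
        for j, g in enumerate(gts):
            cost[i, j] = np.mean(np.linalg.norm(resample(p) - resample(g), axis=1))
    row, col = linear_sum_assignment(cost)                  # optimal one-to-one assignment
    return list(zip(row.tolist(), col.tolist()))


# Toy usage: two predicted rooms matched against two ground-truth rooms.
pred = [np.array([[0, 0], [4, 0], [4, 3], [0, 3]], float),
        np.array([[5, 0], [8, 0], [8, 2], [5, 2]], float)]
gt = [np.array([[5, 0], [8, 0], [8, 2], [5, 2]], float),
      np.array([[0, 0], [4, 0], [4, 3], [0, 3]], float)]
print(match_polygons(pred, gt))                             # -> [(0, 1), (1, 0)]
```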
This paper presents a novel attention-based neural network for structured reconstruction, which takes a 2D raster image as input and reconstructs a planar graph depicting the underlying geometric structure. The approach detects corners and classifies edge candidates between corners in an end-to-end manner. Our contribution is a holistic edge classification architecture, which 1) initializes the feature of an edge candidate by a trigonometric positional encoding of its end-points; 2) fuses image features into each edge candidate via deformable attention; 3) employs two weight-sharing Transformer decoders to learn holistic structural patterns over the graph edge candidates; and 4) is trained with a masked learning strategy. The corner detector is a variant of the edge classification architecture, adapted to operate on pixels as corner candidates. We conduct experiments on two structured reconstruction tasks: outdoor building architecture and indoor floorplan planar graph reconstruction. Extensive qualitative and quantitative evaluations demonstrate the superiority of our approach over the state of the art. We will share code and models.
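Point 1) above, initializing an edge candidate from a trigonometric encoding of its two endpoints, can be illustrated with a standard sinusoidal encoding. The number of frequencies and the way the two endpoint codes are concatenated below are assumptions, not necessarily this paper's exact design.

```python
# Hedged sketch: build an initial feature for an edge candidate from a
# sinusoidal (trigonometric) encoding of its two endpoint coordinates.
# The frequency schedule and the concatenation scheme are illustrative.
import torch


def sine_encode(xy: torch.Tensor, num_freqs: int = 16) -> torch.Tensor:
    """xy: (N, 2) pixel coordinates -> (N, 2 * 2 * num_freqs) encoding."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)   # (F,)
    scaled = xy.unsqueeze(-1) * freqs                             # (N, 2, F)
    enc = torch.cat([scaled.sin(), scaled.cos()], dim=-1)         # (N, 2, 2F)
    return enc.flatten(1)                                         # (N, 4F)


def edge_features(corners: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    """corners: (C, 2) corner positions; edges: (E, 2) corner index pairs."""
    p0 = sine_encode(corners[edges[:, 0]])
    p1 = sine_encode(corners[edges[:, 1]])
    return torch.cat([p0, p1], dim=-1)                            # (E, 8F)


corners = torch.tensor([[12.0, 40.0], [112.0, 40.0], [112.0, 90.0]])
edges = torch.tensor([[0, 1], [1, 2]])                            # candidate edges
print(edge_features(corners, edges).shape)                        # torch.Size([2, 128])
```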
In this paper, we propose a simple attention mechanism that we call box-attention. It enables spatial interaction between grid features sampled from boxes of interest, and improves the learning capability of Transformers for several vision tasks. Specifically, we present BoxeR, short for Box Transformer, which attends to a set of boxes by predicting their transformation from a reference window on the input feature map. BoxeR computes attention weights over these boxes by taking their grid structure into account. Notably, BoxeR-2D naturally reasons about box information within its attention module, making it suitable for end-to-end instance detection and segmentation tasks. By learning invariance to rotation in the box-attention module, BoxeR-3D is able to produce discriminative information in the bird's-eye-view plane for end-to-end 3D object detection. Our experiments show that the proposed BoxeR-2D achieves better results on COCO detection and is comparable to the well-established and highly optimized Mask R-CNN on COCO instance segmentation. BoxeR-3D already delivers compelling performance on the vehicle category of the Waymo Open dataset without any class-specific optimization. The code will be released.
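A box-attention step as described above can be sketched as follows: a query predicts how to transform a reference window, a small grid of features is sampled inside the resulting box, and their attention-weighted sum updates the query. The grid size, the transformation parameterization, and the single-head setup are simplifications of my own, not BoxeR's exact module.

```python
# Hedged sketch of a box-attention step (single head, single scale).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoxAttention(nn.Module):
    def __init__(self, dim: int = 256, grid: int = 3):
        super().__init__()
        self.grid = grid
        self.to_delta = nn.Linear(dim, 4)        # predicts (dx, dy, dw, dh)
        self.to_weight = nn.Linear(dim, grid * grid)
        self.proj = nn.Linear(dim, dim)

    def forward(self, query, ref_box, feat):
        # query: (B, dim); ref_box: (B, 4) as (cx, cy, w, h) in [0, 1]; feat: (B, dim, H, W)
        B = query.shape[0]
        dx, dy, dw, dh = self.to_delta(query).unbind(-1)
        cx = ref_box[:, 0] + dx * ref_box[:, 2]
        cy = ref_box[:, 1] + dy * ref_box[:, 3]
        w = ref_box[:, 2] * dw.exp()
        h = ref_box[:, 3] * dh.exp()
        # Build a grid x grid sampling lattice inside the transformed box.
        lin = torch.linspace(-0.5, 0.5, self.grid, device=feat.device)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        sx = cx.view(B, 1, 1) + gx * w.view(B, 1, 1)          # (B, g, g) in [0, 1]
        sy = cy.view(B, 1, 1) + gy * h.view(B, 1, 1)
        coords = torch.stack([sx, sy], dim=-1) * 2.0 - 1.0    # to [-1, 1] for grid_sample
        sampled = F.grid_sample(feat, coords, align_corners=False)   # (B, dim, g, g)
        sampled = sampled.flatten(2).transpose(1, 2)          # (B, g*g, dim)
        attn = F.softmax(self.to_weight(query), dim=-1)       # (B, g*g)
        out = (attn.unsqueeze(-1) * sampled).sum(dim=1)       # (B, dim)
        return self.proj(out)


attn = BoxAttention()
out = attn(torch.randn(2, 256), torch.tensor([[0.5, 0.5, 0.2, 0.3]] * 2),
           torch.randn(2, 256, 32, 32))
print(out.shape)  # torch.Size([2, 256])
```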
3D scene understanding from point clouds plays a vital role in various robotic applications. Unfortunately, current state-of-the-art methods use separate neural networks for different tasks such as object detection and room layout estimation. Such a scheme has two limitations: 1) storing and running several networks for different tasks is expensive for typical robotic platforms; 2) the intrinsic structure among the separate outputs is ignored and potentially violated. To this end, we propose the first Transformer architecture that predicts 3D objects and layouts simultaneously from point cloud inputs. Unlike existing methods that estimate layout keypoints or edges, we directly parameterize the room layout as a set of quads; the proposed architecture is therefore termed P(oint)Q(uad)-Transformer. Along with the novel quad representation, we propose a tailored physical-constraint loss function that discourages object-layout interference. Quantitative and qualitative evaluations on the public benchmark ScanNet show that the proposed PQ-Transformer succeeds in jointly parsing 3D objects and layouts, running at a quasi-real-time (8.91 FPS) rate without efficiency-oriented optimization. Moreover, the new physical-constraint loss improves strong baselines, and the F1 score of room layout estimation is significantly boosted from 37.9% to 57.9%.
We propose a new method, applicable to many scene understanding problems, that adapts the Monte Carlo Tree Search (MCTS) algorithm, originally designed to learn to play games of high state complexity. From a pool of generated proposals, our method jointly selects and optimizes the proposals that minimize an objective term. In our first application, floorplan reconstruction from point clouds, our method selects and refines room proposals modeled as 2D polygons by optimizing an objective function that combines fitness terms predicted by a deep network over the room shapes. We also introduce a novel differentiable method to render the polygonal shapes of these proposals. Our evaluations on the recent and challenging Structured3D and Floor-SP datasets show significant improvements over the state of the art, without imposing hard constraints or assumptions on the floorplan configurations. In our second application, we extend the method to reconstruct general 3D room layouts from color images and obtain accurate room layouts. We also show that our differentiable renderer can easily be extended to render 3D planar polygons and polygon embeddings. Our method shows high performance on the Matterport3D-Layout dataset, again without introducing hard constraints on the room layout configurations.
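The abstract describes jointly selecting and refining room-polygon proposals to minimize an objective, with MCTS driving the search. As a rough illustration only, the sketch below scores candidate room polygons against a 2D occupancy map and picks a subset greedily; the coverage/overlap objective, the greedy loop standing in for MCTS, and the omission of the differentiable polygon renderer are all simplifying assumptions.

```python
# Hedged sketch: select a subset of room-polygon proposals that best explains a
# 2D occupancy map, using a greedy loop as a stand-in for the MCTS search in the
# paper. The coverage/overlap objective is an illustrative assumption.
import numpy as np
from matplotlib.path import Path


def rasterize(poly: np.ndarray, shape) -> np.ndarray:
    """Boolean mask of a polygon given as (V, 2) pixel coordinates (x, y)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    return Path(poly).contains_points(pts).reshape(shape)


def objective(selected_masks, occupancy):
    union = np.zeros_like(occupancy, dtype=bool)
    overlap = 0
    for m in selected_masks:
        overlap += np.logical_and(union, m).sum()   # penalize room-room overlap
        union |= m
    covered = occupancy[union].sum()                 # reward explained occupancy
    return -covered + 2.0 * overlap


def greedy_select(proposals, occupancy):
    masks = [rasterize(p, occupancy.shape) for p in proposals]
    chosen, best = [], objective([], occupancy)
    improved = True
    while improved:
        improved = False
        for i, m in enumerate(masks):
            if i in chosen:
                continue
            score = objective([masks[j] for j in chosen] + [m], occupancy)
            if score < best:
                best, chosen, improved = score, chosen + [i], True
    return chosen


occupancy = np.zeros((64, 64))
occupancy[8:40, 8:30] = 1.0                          # toy "scanned" room density
proposals = [np.array([[8, 8], [30, 8], [30, 40], [8, 40]]),     # good proposal
             np.array([[40, 40], [60, 40], [60, 60], [40, 60]])] # empty region
print(greedy_select(proposals, occupancy))           # -> [0]
```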
Temporal action detection (TAD) aims to determine the semantic label and the boundaries of every action instance in an untrimmed video. Previous methods tackle this task with complicated pipelines. In this paper, we propose an end-to-end temporal action detection Transformer (TadTR) with a simple set-prediction pipeline. Given a set of learnable embeddings called action queries, TadTR adaptively extracts temporal context from the video for each query and directly predicts action instances. To adapt the Transformer to TAD, we propose three improvements that enhance its locality awareness. The core is a temporal deformable attention module that selectively attends to a sparse set of key snippets in the video. A segment refinement mechanism and an actionness regression head are designed to refine the boundaries and confidence of the predicted instances. TadTR requires a much lower computational cost than previous detectors while preserving remarkable performance. As a standalone detector, it achieves state-of-the-art performance on THUMOS14 (56.7% mAP) and HACS Segments (32.09% mAP). Combined with an extra action classifier, it obtains 36.75% mAP on ActivityNet-1.3. Our code is available at https://github.com/xlliu7/tadtr.
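The temporal deformable attention described above can be sketched in 1D: each action query predicts a few fractional offsets around a reference time and gathers features from just those snippets. The number of sampling points and the single-head form below are simplifications, not TadTR's exact module.

```python
# Hedged sketch of 1D temporal deformable attention: an action query attends to
# a sparse set of snippets around a reference time instead of the whole video.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalDeformableAttention(nn.Module):
    def __init__(self, dim: int = 256, n_points: int = 4):
        super().__init__()
        self.to_offset = nn.Linear(dim, n_points)   # fractional offsets (in snippets)
        self.to_weight = nn.Linear(dim, n_points)
        self.proj = nn.Linear(dim, dim)

    def forward(self, query, ref_t, snippets):
        # query: (B, dim); ref_t: (B,) reference time in [0, 1]; snippets: (B, T, dim)
        B, T, _ = snippets.shape
        offsets = self.to_offset(query)                          # (B, P)
        pos = (ref_t.unsqueeze(-1) * (T - 1) + offsets).clamp(0, T - 1)
        lo = pos.floor().long()                                  # linear interpolation
        hi = (lo + 1).clamp(max=T - 1)
        frac = (pos - lo.float()).unsqueeze(-1)                  # (B, P, 1)
        gather = lambda idx: torch.gather(
            snippets, 1, idx.unsqueeze(-1).expand(-1, -1, snippets.size(-1)))
        sampled = (1 - frac) * gather(lo) + frac * gather(hi)    # (B, P, dim)
        weights = F.softmax(self.to_weight(query), dim=-1)       # (B, P)
        return self.proj((weights.unsqueeze(-1) * sampled).sum(dim=1))


attn = TemporalDeformableAttention()
out = attn(torch.randn(2, 256), torch.rand(2), torch.randn(2, 100, 256))
print(out.shape)  # torch.Size([2, 256])
```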
VisTR achieves the highest speed among all existing VIS models, and achieves the best result among methods using a single model on the YouTube-VIS dataset. For the first time, we demonstrate a much simpler and faster video instance segmentation framework built upon Transformers, achieving competitive accuracy. We hope that VisTR can motivate future research for more video understanding tasks.
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently adapted Transformer-like architectures to the computer vision (CV) field and demonstrated their effectiveness on various CV tasks, relying on modeling capability competitive with modern convolutional neural networks. In this paper, we comprehensively review three hundred different visual Transformers for three fundamental CV tasks (classification, detection, and segmentation), and propose a taxonomy that organizes these methods according to their motivations, structures, and usage scenarios. Because of the differences in training settings and targeted tasks, we also evaluate these methods under different configurations for easy and intuitive comparison, rather than only on various benchmarks. Furthermore, we reveal a series of essential but unexploited aspects that may enable Transformers to stand out from numerous architectures, e.g., slack high-level semantic embeddings to bridge the gap between visual and sequential Transformers. Finally, three promising future research directions are suggested for further investigation.
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
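A minimal single-head, single-scale sketch of the sampling idea follows: each query samples a few learned offset locations around its 2D reference point with bilinear interpolation and combines them with predicted weights. The real module uses multiple heads, multi-scale feature maps, and carefully initialized offsets, so treat this only as an illustration.

```python
# Hedged sketch of (single-head, single-scale) deformable attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableAttention(nn.Module):
    def __init__(self, dim: int = 256, n_points: int = 4):
        super().__init__()
        self.n_points = n_points
        self.to_offsets = nn.Linear(dim, n_points * 2)  # (dx, dy) per sampling point
        self.to_weights = nn.Linear(dim, n_points)
        self.proj = nn.Linear(dim, dim)

    def forward(self, queries, ref_points, feat):
        # queries: (B, Q, dim); ref_points: (B, Q, 2) in [0, 1]; feat: (B, dim, H, W)
        B, Q, _ = queries.shape
        offsets = self.to_offsets(queries).view(B, Q, self.n_points, 2)
        # Small predicted offsets around each reference point (0.05 scale is arbitrary).
        locs = (ref_points.unsqueeze(2) + 0.05 * offsets).clamp(0, 1)   # (B, Q, K, 2)
        grid = locs * 2.0 - 1.0                                         # for grid_sample
        sampled = F.grid_sample(feat, grid, align_corners=False)        # (B, dim, Q, K)
        sampled = sampled.permute(0, 2, 3, 1)                           # (B, Q, K, dim)
        weights = F.softmax(self.to_weights(queries), dim=-1)           # (B, Q, K)
        out = (weights.unsqueeze(-1) * sampled).sum(dim=2)              # (B, Q, dim)
        return self.proj(out)


attn = DeformableAttention()
out = attn(torch.randn(2, 300, 256), torch.rand(2, 300, 2), torch.randn(2, 256, 32, 32))
print(out.shape)  # torch.Size([2, 300, 256])
```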
In this work, we present SeqFormer, a frustratingly simple model for video instance segmentation. SeqFormer follows the principle of vision Transformers and models instance relationships among video frames. Nevertheless, we observe that a stand-alone instance query suffices for capturing the time sequence of an instance in a video, but the attention mechanism should be performed on each frame independently. To this end, SeqFormer locates the instance in each frame and aggregates temporal information to learn a powerful representation of the video-level instance, which is used to dynamically predict the mask sequence on each frame. Instance tracking is achieved naturally without tracking branches or post-processing. On the YouTube-VIS dataset, SeqFormer achieves 47.4 AP with a ResNet-50 backbone and 49.0 AP with a ResNet-101 backbone, without bells and whistles. Such results significantly exceed the previous state of the art by 4.6 and 4.4 AP, respectively. In addition, integrated with the recently proposed Swin Transformer, SeqFormer achieves a much higher AP of 59.3. We hope SeqFormer can serve as a strong baseline that fosters future research in video instance segmentation, while advancing this field with a more robust, accurate, and neat model. Code and pretrained models are publicly available at https://github.com/wjf5203/seqformer.
We present a new table structure recognition (TSR) approach, called TSRFormer, to robustly recognize the structures of complex tables with geometric distortions from various table images. Unlike previous methods, we formulate table separation line prediction as a line regression problem instead of an image segmentation problem, and propose a new two-stage DETR-based separator prediction approach, dubbed Separator Regression Transformer (SepRETR), to predict separation lines directly from table images. To make the two-stage DETR framework work efficiently and effectively on the separation line prediction task, we propose two improvements: 1) a prior-enhanced matching strategy to solve the slow-convergence issue of DETR; 2) a new cross-attention module that samples features directly from a high-resolution convolutional feature map so that high localization accuracy is achieved at low computational cost. After separation line prediction, a simple relation-network-based cell merging module is used to recover spanning cells. With these new techniques, our TSRFormer achieves state-of-the-art performance on several benchmark datasets, including SciTSR, PubTabNet and WTW. Furthermore, we have validated its robustness to tables with complex structures, borderless cells, large blank spaces, empty or spanning cells, as well as distorted or even curved shapes, on a more challenging real-world in-house dataset.
3D object detection has achieved remarkable progress by taking point clouds as the only input. However, point clouds often suffer from incomplete geometric structures and a lack of semantic information, which makes it difficult for detectors to accurately classify detected objects. In this work, we focus on how to effectively utilize object-level information from images to boost the performance of point-based 3D detectors. We present DeMF, a simple yet effective method to fuse image information into point features. Given a set of point features and image feature maps, DeMF adaptively aggregates image features by taking the projected 2D location of each 3D point as reference. We evaluate our method on the challenging SUN RGB-D dataset, improving state-of-the-art results by a large margin (+2.1 mAP@0.25 and +2.3 mAP@0.5). Code is available at https://github.com/haoy945/demf.
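The core step described here, taking the projected 2D location of each 3D point as the reference for gathering image features, can be sketched as a pinhole projection followed by bilinear sampling. The camera convention and the plain bilinear pooling below are assumptions; DeMF's actual aggregation is adaptive, which the simple sampling only stands in for.

```python
# Hedged sketch: project camera-frame 3D points into the image with a pinhole
# model and bilinearly sample image features at the projected 2D locations.
import torch
import torch.nn.functional as F


def sample_image_features(points, K_mat, feats, image_size):
    """points: (N, 3) camera-frame 3D points; K_mat: (3, 3) intrinsics;
    feats: (C, Hf, Wf) image feature map; image_size: (H, W) of the original image."""
    uvw = (K_mat @ points.T).T                       # (N, 3) homogeneous pixels
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)    # perspective divide -> (u, v)
    H, W = image_size
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1)
    grid = grid * 2.0 - 1.0                          # normalize to [-1, 1]
    grid = grid.view(1, 1, -1, 2)                    # grid_sample layout
    sampled = F.grid_sample(feats.unsqueeze(0), grid, align_corners=True)
    return sampled.view(feats.shape[0], -1).T        # (N, C) per-point image features


K_mat = torch.tensor([[500.0, 0.0, 320.0],
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])
points = torch.tensor([[0.2, 0.1, 2.0], [-0.4, 0.3, 3.5]])   # points in front of the camera
feats = torch.randn(256, 60, 80)                             # e.g. a stride-8 feature map
print(sample_image_features(points, K_mat, feats, (480, 640)).shape)  # torch.Size([2, 256])
```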
We present a Transformer-based neural network architecture for multi-object 3D reconstruction from RGB videos. It relies on two alternative ways of representing its knowledge: a global 3D grid of features and an array of view-specific 2D grids. We progressively exchange information between the two with a dedicated bidirectional attention mechanism. We exploit knowledge about the image formation process to significantly sparsify the attention weight matrix, making our architecture feasible in terms of both memory and computation. We attach a DETR-style head on top of the 3D feature grid to detect the objects in the scene and to predict their 3D pose and 3D shape. Compared to previous methods, our architecture is single-stage and end-to-end trainable, and it can reason holistically about a scene from multiple video frames without the need for a brittle tracking step. We evaluate our method on the challenging Scan2CAD dataset, where we outperform (1) recent state-of-the-art methods for 3D object pose estimation from RGB videos, and (2) a strong alternative method combining multi-view stereo with RGB-D CAD alignment. We plan to release our source code.
Masked autoencoding has achieved great success for self-supervised learning in the image and language domains. However, mask-based pretraining has yet to show benefits for point cloud understanding, likely because standard backbones like PointNet are unable to properly handle the train-test distribution mismatch introduced by masking during training. In this paper, we bridge this gap by proposing a discriminative mask pretraining Transformer framework, MaskPoint, for point clouds. Our key idea is to represent the point cloud as discrete occupancy values (1 if part of the point cloud, 0 if not), and to perform simple binary classification between masked object points and sampled noise points as the proxy task. In this way, our approach is robust to the point sampling variance in the point cloud and facilitates learning rich representations. We evaluate our pretrained models on several downstream tasks, including 3D shape classification, segmentation, and real-world object detection, and demonstrate state-of-the-art results while achieving a significant pretraining speedup (e.g., 4.1x on ScanNet) over the prior state-of-the-art Transformer baseline. Code is available at https://github.com/haotian-liu/maskpoint.
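The proxy task described above, binary classification between real (masked-out) object points and random noise points, can be sketched as follows. The query construction and the simple point-wise classifier are illustrative assumptions; MaskPoint scores such queries with a Transformer decoder conditioned on visible points, for which the MLP below merely stands in.

```python
# Hedged sketch of the discriminative proxy task: label masked-out object points
# as "occupied" (1) and uniformly sampled noise points as "not occupied" (0),
# then train a classifier on them with a binary cross-entropy loss.
import torch
import torch.nn as nn


def build_queries(points, mask_ratio=0.6, n_noise=256):
    """points: (N, 3) object point cloud -> (query_xyz, labels)."""
    n = points.shape[0]
    perm = torch.randperm(n)
    masked = points[perm[: int(mask_ratio * n)]]             # hidden real points -> label 1
    lo, hi = points.min(0).values, points.max(0).values
    noise = torch.rand(n_noise, 3) * (hi - lo) + lo          # random points -> label 0
    xyz = torch.cat([masked, noise], dim=0)
    labels = torch.cat([torch.ones(masked.shape[0]), torch.zeros(n_noise)])
    return xyz, labels


# A point-wise MLP stands in for the decoder that would normally condition on
# the visible (unmasked) portion of the point cloud.
classifier = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 1))
points = torch.rand(1024, 3)
xyz, labels = build_queries(points)
logits = classifier(xyz).squeeze(-1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
print(xyz.shape, loss.item())
```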
The success of Transformers in natural language processing has recently attracted attention in the computer vision field. Owing to their ability to learn long-range dependencies, Transformers have been used as a replacement for the widely used convolution operators. This replacement has proven successful in numerous tasks, where several state-of-the-art methods rely on Transformers for better learning. In computer vision, the 3D field has also witnessed a growing use of Transformers to augment 3D convolutional neural networks and multi-layer perceptron networks. Although a number of surveys focus on Transformers in vision in general, 3D vision requires special attention due to the differences in data representation and processing compared with 2D vision. In this work, we present a systematic and thorough review of more than 100 Transformer methods for different 3D vision tasks, including classification, segmentation, detection, completion, pose estimation, and others. We discuss Transformer designs in 3D vision that allow them to process data with various 3D representations. For each application, we highlight the key properties and contributions of the Transformer-based methods. To assess their competitiveness, we compare their performance with common non-Transformer methods on 12 3D benchmarks. We conclude the survey by discussing different open directions and challenges for Transformers in 3D vision. In addition to the presented papers, we aim to frequently update the latest relevant papers along with their corresponding implementations at: https://github.com/lahoud/3d-vision-transformers.
Temporal video grounding (TVG) aims to localize a time segment in an untrimmed video according to a natural language query. In this work, we present a new paradigm for TVG, named Explore-and-Match, that seamlessly unifies the two streams of TVG methods: proposal-free and proposal-based; the former explores the search space to find segments directly, while the latter matches predefined proposals with the ground truth. To achieve this, we formulate TVG as a set prediction problem and design an end-to-end trainable Language Video Transformer (LVTR) that enjoys the architectural strengths of rich contextualization and parallel decoding for set prediction. The overall training schedule is balanced by two key losses that play different roles, namely a temporal localization loss and a set guidance loss. These two losses allow each proposal to regress the target segment and to identify the target query. More specifically, LVTR first explores the search space to diversify the initial proposals, and then matches the proposals with the corresponding targets to align them in a fine-grained manner. The Explore-and-Match scheme successfully combines the strengths of the two complementary methods without encoding prior knowledge (e.g., non-maximum suppression) into the TVG pipeline. As a result, LVTR sets new state-of-the-art results on two TVG benchmarks (ActivityCaptions and Charades-STA) with double the inference speed. Code is available at https://github.com/sangminwoo/explore-and-match.
This paper presents an extreme floorplan reconstruction task, a new benchmark for the task, and a neural architecture as a solution. Given a partial floorplan reconstruction inferred or curated from panorama images, the task is to reconstruct a complete floorplan including invisible architectural structures. The proposed neural network 1) encodes an input partial floorplan into a set of latent vectors by convolutional neural networks and a Transformer; and 2) reconstructs an entire floorplan while hallucinating invisible rooms and doors by cascading Transformer decoders. Qualitative and quantitative evaluations demonstrate the effectiveness of our approach on the benchmark of 701 houses, outperforming the state-of-the-art reconstruction techniques. We will share our code, models, and data.
Point cloud learning has lately attracted increasing attention due to its wide applications in many areas, such as computer vision, autonomous driving, and robotics. As a dominating technique in AI, deep learning has been successfully used to solve various 2D vision problems. However, deep learning on point clouds is still in its infancy due to the unique challenges faced by the processing of point clouds with deep neural networks. Recently, deep learning on point clouds has become even thriving, with numerous methods being proposed to address different problems in this area. To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds. It covers three major tasks, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation. It also presents comparative results on several publicly available datasets, together with insightful observations and inspiring future research directions.
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating performance on par with previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Besides, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong deformable DETR baseline by a significant margin (2%-4% mAP) on the ImageNet VID dataset. TransVOD yields comparable performance on the ImageNet VID benchmark. Then, we present two improved versions of TransVOD, TransVOD++ and TransVOD Lite. The former fuses object-level information into the object query via dynamic convolution, while the latter models the entire video clip as the output to speed up inference. We give a detailed analysis of all three models in the experiment part. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU device. Code and models will be available for further research.
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional feature encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works. We hope this effort will ignite further interest in the community to solve current challenges towards the application of transformer models in computer vision.